Jonathan: Hey folks, this week Aaron and I talk with Joao Correa about TuxCare, live patching, and .NET 6. It's a lot of fun, you don't want to miss it, so stay tuned. This is Floss Weekly, episode 823, recorded Tuesday, March 4th. TuxCare, 10 years without rebooting.
It's time for Floss Weekly, that's the show about free, libre, and open source software. I'm your host, Jonathan Bennett. And we've got something a little out of the box today. We are leaning heavily into our Linux roots today, going to talk about something Microsoft did, something Microsoft stopped doing that someone else is picking up. It's going to be fun.
We're talking about TuxCare and specifically .NET 6, which, a lot of you might not be big into the .NET ecosystem. I am certainly not. I've got a co-host, and I'm curious, Aaron, are you a .NET user? Is this something that's in your wheelhouse?
Aaron: Not really. Although, not as a user, I would say, but I do remember, I don't know when, but probably in the 2000s, the aughts as they're also known, you know, there was
.NET and then there was Mono. So that was more of an open source, Linuxy project that people, you know, it was kind of like the Java wars as well, when OpenJDK and some of those things started coming out, trying to give open source alternatives to closed source platforms like that.
But yeah, I don't really, the only thing I know about .NET these days is when I'm installing a game or something, and as part of the install process I get, oh, you have to install a .NET version of blah, blah, blah to make your game work. And I'm like, okay, this is the longest part of the install. Just get it over with and let me play my game or use my application or whatever.
So yeah, these days, I've never been a .NET programmer, so I don't know a ton about it. It just seems like it gets in the way a lot. So it'll be interesting to see maybe how we can fix that.
Jonathan: Yeah, I remember back in the Windows XP era, waiting on the .NET Framework updates when doing Windows updates, and some of the old machines that I was trying to keep working, man, I would sit there for literally an hour just trying to wait for .NET to finish whatever it was doing. Exactly.
I know that the story really changed in the 2010s when Microsoft released, I think they called it originally, .NET Core. And that's when they went and open sourced a whole bunch of the stuff around .NET. And I know that today, because I was checking this out for another project, even in Fedora you can just go do a dnf install of .NET, and it will just happily pull those packages down.
Because Microsoft actually open sourced them. I think that's fairly important to what we're talking about today. But I'm also very interested in the TuxCare side of this conversation. So let's go ahead and bring our guest on. He gave us a quick primer on how to pronounce his name, and I'm sure I'm still gonna get it wrong.
It's essentially Joao Correa. Close? Yeah, it's close enough. Close enough, okay. It's close enough.
Joao: Hi, thank you for having me. Hey, it's good to have you, yeah. Yeah, so, we at TuxCare, we do lots of different things. We essentially support open source when the original maintainers don't want anything more to do with it.
But specifically on .NET, and we'll cover this more in detail, it's actually interesting, and I was looking at some JetBrains data, the guys behind ReSharper and a lot of other tools: about 20 percent of their users actually come from the Linux world, are using their tools on Linux to develop C#.
When you're using C#, it's a very good guess that you're going to rely on some stuff from .NET. You don't have to, but you're essentially going to find your basic libraries there. So the presence of .NET on Linux... let's rewind this a bit. We actually have to go through another phase that Microsoft went through in the early 2000s, late 90s, where Linux and open source was "a cancer" that attached itself to everything and broke their licenses, and they really hated it at Redmond.
They actually turned around, they changed their minds, and they started to embrace open source, and they started contributing to lots of projects. And it tells you something that whenever you're talking about Microsoft in the Linux space, you always have to make this distinction between that period, way back there, and the way that they're doing things right now.
So the thing with .NET, like you were saying about the XP days: it was .NET Framework. It went through lots of revisions, up to version 4.8, something like that, which I think is still supported on the Windows side of things, but it was very Windows centric.
It was closed source. The idea behind it was to provide a standardized look and feel for the components that you use in your UI and your user experience components. So .NET Framework gave you that. With subsequent releases, with .NET Core and now just .NET, what they're doing is giving you components not just focused on the interface.
So you have components for messaging between processes, for talking to other systems, for distributing load, for lots of different tasks, not just related to the UI. And historically, one of the holdovers at the enterprise level that prevented companies from moving fully to open source and fully to Linux was that they would always have this in-house developed tool, this line-of-business application that had been built in the days of .NET Framework or .NET Core, which they couldn't cleanly migrate to Linux, and the cost of doing the full refactoring of that code, of getting things to run on top of Linux, was just too expensive.
So even if the majority of the systems were moved to Linux, they would always have a cluster or two or three just running those line-of-business applications, because they couldn't get rid of them. So .NET comes and helps out, in that .NET is cross-platform. You can run the same software, with some caveats, on Windows and Linux, and macOS if you're very adventurous.
But yeah, it aims to fix that. It's not just technology that works on Windows, and it's not just Windows centric anymore.
Jonathan: Yeah, I've got a programming buddy that is a .NET developer. And he has kind of shown me the light on, you know, this is actually worth using these days. Like you said, particularly if you want to do anything with C#.
It's kind of the standard library for C#. And it works. And like I said during the intro, Fedora is kind of my litmus test, because they are very strongly open source only. And so, you know, if I can just go into a vanilla Fedora install and do a dnf install of the .NET packages, then they must be set up fairly well, they must actually be open source, like everything must be kosher there.
And so I've, you know, it's one of those things where, like you said, Microsoft used to be just absolutely the bad guys in the Linux world. And so much has changed in the last 20 years.
Joao: It absolutely has, it absolutely has, their stance has turned around. I mean, they probably just started looking at their usage data from Azure that says that Linux is the most used system there.
So, I mean, they're not blind, they know their customers, they know what they're running, they know the workloads. If the majority of the customers are running Linux, then they have to be able to provide service to those Linux users. And the way that they do this, essentially, so far, has been open sourcing stuff.
I mean, you have .NET, you can run PowerShell on your Linux systems, for whatever reason you might want to, but you can, yes. You can deploy it, it can run. It actually does a fairly decent job at letting you run exactly the same scripts on Windows and Linux for a variety of tasks, and that helps with automation if you have mixed environments.
But yeah, being on Linux and natively a Linux user, I do prefer my bash scripts every day. But again, the option is there if you want it.
Jonathan: You know, I can't help but think that Microsoft is aware of what happened with, like, IIS when it went head to head with Apache, and how Apache just literally decimated it, like reduced it by 90%.
You know, the actual definition of decimated, probably more than that. Apache just absolutely owned the internet there for a while. And that was one of the big things that really made Linux popular, particularly for servers back in the day. Because Apache let you do some things that IIS just could not touch.
And that really killed that section of Microsoft's business. And I can't help but think that they were aware of that and going, we don't want to lose, you know, our compiler stuff and our library stuff too. And so they were sort of forced into this of, well, I guess we're going to play nice, right?
Joao: Yeah, so they essentially had to go with the flow. It's interesting what you said there, that Apache killed IIS. It did. The usage statistics don't lie. It really did. The majority of the top million websites are running on Linux, and that's just factual, nobody can actually dispute that. But the fact that the Windows ecosystem is so large, even 10 percent of that ecosystem still represents a huge amount of systems. They still pull a lot of weight.
It's still useful to develop applications that run on their platform. Having those same applications now be able to run on Linux and on other platforms, that's just icing on the cake at this point.
Jonathan: Yeah, agreed. So let's, let's go into, for just a bit, the history of TuxCare, because I don't know much about TuxCare.
Somebody mentioned this to us in an interview we did a couple of weeks ago, was talking about some of the big Linux, what was it, it was the companies that were doing support for Linux machines, and just kind of mentioned in passing, like, there's this one, and this one, and then TuxCare.
And that was the first I'd heard of it for a while. So what, generally, does TuxCare do, but also, what's its history? Where did it come from?
Joao: Okay, so TuxCare started... TuxCare is a brand, so to say, of CloudLinux. CloudLinux is a company in the open source space. It has CloudLinux OS, which is very popular with hosts and that side of things.
It has tools for protection of WordPress and all of that. And TuxCare specifically focuses on the enterprise side. We focus specifically on providing support for out-of-service distributions, past the end-of-life date. We offer support if you're running, say, version 8 of a distribution: we continue to offer you security updates for that.
We also provide you support with many different distributions like that, many different things that you're using, and you're using just fine, and you want to continue using because they're working just perfectly. So we give you the security updates to be able to do that securely. We also have live patching, and this is where the conversation usually gets interesting.
We have live patching in a way that's different from all of the other live patching vendors. For starters, we have live patching for many different distributions, not just vendor-specific stuff. Like, say, Red Hat will offer you live patching for RHEL, Canonical will offer you live patching for Ubuntu. Each vendor has some type of live patching for their own distribution, Oracle as well.
We actually support live patching on all different distributions, and we'll provide you with live patches for all of those. And we actually treat live patching seriously. All of those vendors, they will add it just as an extra line item on your support agreement, and they will pretend it's very important to them, but then they will only patch like a handful of CVEs every year.
We cover hundreds of them. In fact, if you have a CVE for the kernel, we will live patch it. We've been doing that for many, many years now. We've deployed thousands and thousands of live patches. We have customers running systems, I believe the top one is hitting 10 years now without a single reboot.
Yeah. And they're not doing that unsupported. If you had told me that you were running a system for 10 years without any support, I'd say that's no longer your system, that has been hacked all over again many times. But the way that we let you do this is that you continue to receive your updates. Your kernel will be running at the equivalent of the kernel that is out right now.
And you didn't have to reboot that system to apply those updates. We'll let you live patch stuff like glibc and OpenSSL without having to restart the services that are using it. We'll go ahead, we'll find all the instances where it's running in memory, we'll apply the patches and all the services to just continue to run.
And we've been doing that consistently for many, many years. Now, the company was started, I believe, in 2009. We've been offering live patching since about 2015. We have lots and lots of customers using our solutions. And a critical component of that work is that we have to work with the open source projects.
We have to backport a lot of stuff. We have to pull stuff from upstream projects and contribute back to them. And we do that. We have a deep understanding of how that works and how we operate in that space. And that's how .NET gets tied back into the picture of what we're discussing. As an open source project, we are fully able to integrate it into the workflow that we already have, and continue to provide support for it after the original vendor, in this case Microsoft. It's an open source project, but everybody knows that it's Microsoft behind it.
We can continue to look at the vulnerabilities that are out. We can continue to develop fixes and distribute those. We have the pipelines just working perfectly by this point.
Aaron: Yeah, that's pretty amazing. I guess I didn't realize it, but I should have, just given my background trying to support things, or working with companies that support things. Because, you know, these days, for modern systems, you'd probably just have a container running, right? And you create a patch for something and then just redeploy that container, and everybody's happy, because of the way that you've architected your systems and your software to handle that sort of thing. But for legacy environments, where, you know, I'm assuming, correct me if I'm wrong, but I mean, my ancient history came out of a regulated environment for pharmaceuticals, where there were requirements to keep things around and keep things up and available with a certain period of uptime.
I mean, I'm assuming that at this point, with those live patches, like when a vulnerability comes out, for example, how do you even keep up with the requests? Because I'm assuming that when a really bad vulnerability comes out for something that's old, but still has to be available, you guys must have to really, like, jump into emergency mode, right?
Joao: I mean, we don't jump into emergency mode, because that's essentially what we do. So it's not different from any other day. Essentially, right now, since the Linux kernel became a CNA, and I'm sure you guys have talked about this, I mean, the number of CVEs just went through the roof. Everything gets a CVE. And now everybody's in trouble for not complying with patching that within X amount of days.
Yeah. That meant a lot more work for us as well. When the CVE comes out, when people are struggling to find the fix, and rebooting, and getting their maintenance windows scheduled and all of that to reboot the systems, to apply the patches: that doesn't need to happen with live patching. The update is made available.
We have it on our repos. It gets picked up by the agent that deploys them. Sorry, not an agent, it's a scheduled task that every now and then checks to see if there are patches available. If there are, and you're configured to do it completely automatically, it gets applied immediately.
It's the fastest way that you can respond to new CVEs right now. There's no other technology that lets you do this faster. As soon as the patch is available, you can apply it. That's minutes, that's not even hours or days or weeks like you see in many places. Oh, that's awesome. The patch is applied.
Aaron: Yeah, I'm thinking, like, you know, there's code monkeys out there actually manually going in, SSH-ing into these servers, and trying to figure out something, deploy a manual patch for something on the fly. But the way that you're set up with your agents and everything, it just kind of happens automatically, if the user chooses to make it happen, assuming...
Joao: You can still have your different tiers, you can still have your lab environment where you receive the patches first. You can accommodate all of that with live patching. But if you so choose, you can receive them completely automatically.
Jonathan: I can't help but think of some of the other vendors out there that do, not live patching, not exactly what you do, but do, like, hardening for the Linux kernel. And there's one that comes to mind that is rather infamous, and that is, I believe grsecurity is the name, that have the hardened Linux patches and the very strained relationship with the upstream kernel.
I'm just curious, as TuxCare has managed to do live patching and all of this, what is the relationship with the upstream kernel, but then also, like, all these distros that you support? Is that sort of a friendly relationship still, or is there some strife that happens?
Joao: We try to be friendly with everybody.
Pretty sure that's the general stance. Here's the thing. For our patches, to have a proper live patch that applies cleanly and doesn't introduce any side effects, you have to compile it with exactly the same configuration as the kernel that you're running. So we do that by distribution, and inside each distribution we do that by version, by kernel version, because they might change the flags that get compiled in on different versions. So we have this build matrix that's very huge, as you can imagine. It has lots of different kernel versions for every single distribution out there, and we need to backport the patches and make them compile the same way for everybody.
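The build matrix Joao describes can be pictured as a simple expansion: every supported distribution, crossed with each of its kernel versions, yields one required build. A toy sketch, where the distribution names and kernel versions are made up for illustration:

```python
# Hypothetical slice of a live-patch build matrix. A real matrix at
# TuxCare's scale would hold hundreds of distro/kernel combinations.
DISTROS = {
    "almalinux-9":  ["5.14.0-362", "5.14.0-427"],
    "ubuntu-22.04": ["5.15.0-91", "5.15.0-105"],
    "debian-12":    ["6.1.0-18"],
}

def build_targets(distros):
    """One build target per (distribution, kernel version) pair, since
    each kernel flavour may be compiled with different config flags."""
    return [f"{distro}/kernel-{kver}"
            for distro, kvers in sorted(distros.items())
            for kver in kvers]

targets = build_targets(DISTROS)
print(len(targets))  # 5 targets even for this tiny matrix
```

Each target then gets the backported fix compiled against that exact kernel configuration, which is why the matrix, and the build farm behind it, grows so quickly.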
So we get patches from upstream developers and on the distribution side, so that it aligns cleanly with what they're releasing officially in their distribution packages. But when that's not manageable, either because they're taking too long to deploy the patches, or they just don't patch a given vulnerability, and that happens,
or they decide that they're not going to patch it until a subsequent version, we go directly to the upstream projects: in the case of the kernel, or glibc or OpenSSL, which are not the kernel, but we also live patch them. It varies a lot. To my knowledge, we don't have any feud with anybody over this.
We don't do any underhanded stuff to get the source code, we don't steal the code from anywhere. These are open source projects. We contribute back when we create the fixes ourselves. I do believe we have a healthy relationship with everybody in the ecosystem.
Jonathan: Yeah, very good. So let's talk about .NET then.
So, .NET 6. I was looking, and I was actually really surprised by this. The release date for .NET 6 was 2021. This thing is only three years old, and Microsoft already pulled the plug on it back in November of 2024. Like, it did not last long. I'm very surprised by this.
What's the story with the .NET lifecycle?
Joao: And that was a long-term release. Yes. Yeah. The long-term releases get three years of support, the regular ones get 18 months. That's how they roll. We're already seeing previews for .NET 10 coming out right now. The thing with .NET is that, while Microsoft does a pretty decent job, and you have to acknowledge this, you can run stuff on Windows 11 that was built for Windows 3.11 or something like that. Yes. And that's like 30 or 35 years ago. Yes. And you still have a way to run that, and I mean, without using emulation. So they have a pretty decent track record of maintaining backwards compatibility whenever they release something. And the thing with .NET is that .NET comprises a lot of namespaces.
Namespace is the name that they give to the individual pieces inside of .NET. You have stuff for files, you have stuff for communications, for signaling, for different subsystems in there. There will be breaking changes on some of those subsystems. So if you happen to have an application that's written against and uses that,
you can't cleanly move to something more recent, because you will have to refactor a lot of the code to accommodate those changes, or you'll have to find alternative libraries to fill in the gap of what you no longer have. So having this short life cycle is tricky in that regard. When you develop code against it, you should already be prepared that in three years, at most, you're going to have to rewrite some of it, but most companies aren't.
Jonathan: That's interesting. It reminds me of, I mean, other open source projects do this sometimes, but it's because it's all open to look at. This kind of life cycle works when you're talking about open source code, right? Because it's constantly being worked on, and it's being worked on in the clear. And so, like, a distro maintainer, it's easy enough to just pull the patch down to be able to update it.
I'm just really surprised that Microsoft decided to go this route when people are writing, I assume, proprietary code against .NET, against .NET 6. It's just, I don't know, it's just such an odd choice. Do we have, like, some insight on why they break things every three years or year and a half?
Joao: Sure. Their licensing agreement, and they need you to get new updates. Other than that, it's just the way that the Windows ecosystem works. You have to remember that Microsoft is a Windows company. They're still focused on that, even if they do contribute to open source. The way things happen on that side of the fence is not exactly the way that you're used to in the open source world.
It's just how everybody does business in the Microsoft space. It's how things are done, and you either go with it, or you just shout at the wind, but it will get you nowhere. So you either have to adapt, or you have to be ready to face the consequences of that. But there's no particular insight on this.
It's just the way that things are set up. It's actually the way that companies expect this to work. In some sense, if you're forced to upgrade for some reason, then it might force you to actually look at the code. And if you have the ability to do that, that's great. The problem is that at most places you don't have that ability, especially in the environment we're seeing right now, where developers are being fired left and right and development
teams are struggling just to keep up. It's really difficult to do that when you have internal applications that your team developed five, ten years ago. They might be running fine. They might be running your accounting system, and that's just fine. They might be running at, say, a university, and they're running the scores or the student enrollment and all of that, and it's working fine.
And now you're forced to go back and work on it. Most teams can't accommodate the extra work.
Jonathan: So what are the alternatives? Like, what does a company do if they have a 10-year-old program that was written for .NET 6? Obviously it's not going to be 10 years old, the math doesn't work there, but
Joao: yeah,
Jonathan: can they just keep running .NET 6?
Is there, is there any downside to doing that?
Joao: Okay, so since .NET 6 launched in November 2021, something like that, there have been an average of about two new CVEs per month. So it's pretty easy to run the numbers. If you continue to run that application without any support, you're going to be seeing CVEs that will eventually affect the code that you're using.
It will eventually affect whatever functionality you're pulling from .NET into your application. No matter how well written it is, how secure it is, how great your code is: if the language fails, if you have issues in the language, then your application will suffer. You need to find an alternative source for the updates, in this case the service we provide, or you need to refactor the code.
If you go down the route of refactoring the code, which is perfectly valid, you, you'll need to look at whatever new functionality was brought in. You'll need to rewrite code around the changes. You'll need to make sure that everything still works and that your code continues to be great. Sometimes that's doable.
Other times that's not doable, because of the amount of extra work that it takes, and it's extra work that's very difficult to actually justify. Extra work means extra costs, and it's all just to be at the exact same position that you were in when you started this endeavor, with the same application running at the exact same level, doing the exact same thing that it was doing.
You're not taking advantage of any new functionality, you're not taking advantage of any new tooling that comes with .NET. You just have your application running as it was running before. And for some companies, that's not a very healthy proposition.
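Running the numbers Joao mentions: with roughly two new CVEs a month from .NET 6's release (November 2021) to its end of support (November 2024), the backlog an unpatched system accrues looks like this. The two-a-month rate is his rough estimate, not an exact count:

```python
from datetime import date

def months_between(start, end):
    """Whole calendar months elapsed between two dates."""
    return (end.year - start.year) * 12 + (end.month - start.month)

release = date(2021, 11, 8)   # .NET 6 general availability
eol = date(2024, 11, 12)      # end of Microsoft support
cves_per_month = 2            # rough average quoted in the conversation

months = months_between(release, eol)
print(months, months * cves_per_month)  # 36 months, roughly 72 CVEs
```

And the clock keeps running after end of life: every month past November 2024 adds to that total with no vendor fix coming.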
Jonathan: And so what about just staying with .NET 6? Like, if it's installed and it's working?
Is there, is there any problem? Does it really matter that Microsoft's not pushing out updates?
Joao: Well, that's the same reason why you're not running CentOS 5 these days, because ever since it was launched, it has accrued like 10, 000 new vulnerabilities. So you can continue to run it. I mean, you can bite the bullet and just do it.
It won't be your application for much longer, but hey, if that suits you, then. Go ahead. But honestly, you shouldn't be facing that proposition. That's not really something that you should be contemplating running without support, especially at the enterprise environment. That's not something that you can do.
For starters, you're going to lose all compliance. There's no compliance regulation out there that says you can be running an application with no support at all, knowing that there are vulnerabilities that are going to be coming out. That just doesn't fly. Your auditors are going to flag that, you're going to get in trouble, and you're going to get in very serious trouble if, say, you're in healthcare or finance or something like that, where you have very strict compliance regulations, and you need a way around that. So again, your options are: rewrite the code, or get support from somewhere else.
Jonathan: So there are actually vulnerabilities, there are CVEs that get found in .NET itself? I keep calling it framework. How severe, and how often do we get those CVEs? Like, I cover security, right, on a weekly basis, and so there's like two broad categories of CVEs. And, you know, you've got the one where it's like, okay, yes, I guess technically that could be a problem.
And then you've got the other category, it's like, Oh my goodness, my hair is on fire.
Joao: You get both. Obviously, you get the ones that affect, say, the installer, but that's something that you run one time only and you're not going to run again. So who really cares: in the extreme case, you just unplug the network cable, you install it, and you plug it back in again, and no one's the wiser. But then there's remote code execution CVEs, and those have happened.
Those have been released out there for .NET. And those are more dangerous. If you happen to use that specific part of the code, if you happen to rely on that specific function that was found to be abusable or exploitable, then you're in trouble. Again, it covers the whole spectrum here.
Jonathan: One of our listeners that's listening to us live, Mashed Potato, says it wouldn't be a Microsoft product without a good healthy bunch of CVEs to go along with it.
Which is hilarious, but not actually fair, because we have our own share of CVEs in other open source projects. So it's not just Microsoft, as fun as it is to say.
Joao: Saying that when the Linux kernel had 3,400 just last year, or 4,000 or something like that, is probably not a good road to go down.
Jonathan: I am convinced that the Linux kernel is publishing CVEs out of malicious compliance. I am convinced that they are sort of playing a game, because nobody wanted to use their long-term releases. And so they're sort of playing a game, saying: you don't want to use the LTSes that we provided?
Okay, that's fine. I guess because of the way the laws are written, you're going to use the most up to date kernel then, huh?
Joao: You know, I've written about this in many different outlets. I don't want to believe that's the case, but they're making it really hard to continue to believe it's not the case.
Jonathan: I am convinced that maybe malicious is a little strong, but yes, I am convinced that there is a bit of strategy to that. It's not an accident.
Joao: And what's up with those year identifiers? I mean, the kernel CNA, about half of the CVEs that they put out have year identifiers prior to the start of the CNA activity. They've been issuing CVEs for 2016, 2017. In the extreme, and I don't know if this is true or not, I haven't checked, if there's any code still remaining in the kernel from the nineties, you're going to run into this great situation where you're going to have a CVE issued for a year prior to having the CVE system in place, which is amazing. That's great. And we're running that risk right now.
Jonathan: It's fun. I'm sure it's a headache for you guys, but at the same time, it's very fun to watch as an outside observer.
Joao: It's also fun for auditors, when they look at your security compliance reports and they say, oh, you're missing patches for this CVE from 2010, what are you doing? Yeah, it was released last week.
Jonathan: Yes. Yes. That's hilarious.
Aaron: I'm kind of curious if I could jump in for a second.
Yeah, go for it. What other, besides .NET, I mean, what other languages, basically, or programming environments have this type of issue? I'm thinking Python, maybe, like, people have a Python 2 script, but for some reason they can't update to 3.
Joao: Python is one of those that has incredibly descriptive instructions on how to take your code from one version to the next.
It's like 50 or 60 pages that you need to go through, with weird examples all the way. Yeah. But yeah, Python, PHP, Java, Spring itself, the Java framework: those are all also services where we provide support, exactly so that you can avoid going through all of the hassle of rewriting the code just to accommodate all of the changes.
So it would be insane to be running an out-of-date PHP version on your website, for example. We all love PHP like that, but yeah, if you have the updates that fix the security problems, then you can continue to do that. It's not typical, but you can continue.
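As a small taste of the Python migration hassle Joao describes, here are a few Python 2 idioms that stop working on Python 3, shown as comments, next to the rewritten Python 3 equivalents:

```python
counts = {"a": 3, "b": 4}

# Python 2 spellings of the lines below (all broken on Python 3):
#   print "total:", total            # print was a statement
#   for k, v in counts.iteritems():  # .iteritems() was removed
#   avg = total / len(counts)        # / did integer division on ints

total = sum(counts.values())
print("total:", total)               # print is a function now
items = list(counts.items())         # .items() replaces .iteritems()
avg = total / len(counts)            # true division; use // for the old floor behaviour
print(avg)                           # 3.5, where Python 2 would print 3
```

Multiply this by every module in a five- or ten-year-old codebase, plus dependencies that may never have been ported, and the cost of "just upgrade" becomes clear.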
Aaron: I mean, what about if dependencies change? I mean, I know that's probably a little bit more difficult problem to solve, but I know I've, I've had old Python scripts that I thought would still work, but for some reason, something happened with one of the dependencies or modules I was using and, and that didn't work.
Is there anything you can do for that user, or do you just have to say, no, we've got to set up this entire environment?
Joao: We support many, many such modules. We support many packages, popular ones, and we're actually open to supporting new ones. If the user is interested, just reach out and talk to us. We're very open about that, for all the languages: Python, PHP, Java.
If you have something that you want us to support and it's open source, just talk to us. We're very likely able to help you.
Jonathan: Do you have an idea of how many discrete things, between languages and kernels and packages, TuxCare offers this sort of support for?
Joao: If you want to go to that level, then it's going to be hundreds or maybe thousands across all the different distributions.
Yeah. It's a very large support matrix. Like I said, the build environment is massive.
Jonathan: Do you know off the top of your head what the oldest thing is that you guys still do support for?
Joao: Not really. We cover CentOS 6, which has very old stuff in there. But off the top of my mind, I can just give you that one example.
Jonathan: That's fine. That's fine.
Aaron: Can I ask another question? My interest has been piqued hearing all this conversation. So, I noticed on the website that you also support QEMU.
Joao: Yeah.
Aaron: Which is curious to me. I mean, I've been a fan of QEMU since the old days. I think Fabrice Bellard, I don't know if he's still involved, was one of the original contributors, or started the project. I started using it a long, long time ago, when I was still at Sun Microsystems, I think. And it seems to be becoming more and more popular these days. So I'm kind of curious: what needs to be fixed with QEMU that a customer would come to you and say, I will pay you money if you can take care of this problem for me?
Joao: It's getting popular, probably because VMware helps with that. So you know how the maintenance operations go: when you have to patch the hypervisor, when you have to patch the host system, you need to migrate every single VM away, you need to patch the system, you need to migrate everything back. That's incredibly easy to do when you have two systems. That's very, very hard when you go into the hundreds or thousands of systems. I may, in the past, have migrated every single VM away from a node in order to apply updates, and then rebooted the node that I had migrated to instead of from. That was a fun day for all of the 20 or 30 VMs that went down.
It's really easy to mess up the system. Sure, you can talk about all the orchestration tools that you want and all your automation scripts and all of that, but again, it takes a long time. It's very bandwidth intensive. It's very error-prone, very easy to make mistakes there. So what we do with QEMU is that we can apply live patches to it.
If there are flaws found in QEMU, if there are security issues, then instead of your VMs being compromised because the hypervisor was breached, we can apply live patches to it. You don't have to move any VMs away. In fact, the VM workloads don't even know that QEMU was patched. They're not even aware of that.
It doesn't stop the process.
Jonathan: I've got to ask, and you guys may not want to say this publicly, and that's fine, I suppose. Aaron touched on it just now: if somebody wants to pay you money, you can do this. And that's how business works; lots of companies are willing to do lots of things if people are paying enough money.
What does that look like? And I'm also curious: is there a free tier that individuals can play around with, if they just want to try out some of this live patching stuff from TuxCare?
Joao: Yeah, there is. Just go to the website; you can sign up for a trial for that. But if you actually want to look at source code and see all the steps involved with that, we have a GitHub repo with how to create live patches for libcare, which is a component of KernelCare, the live patching solution.
And we go through the motions of explaining how you take the code from a regular patch, do all the conversions that need to happen, turn it into a live patch, and apply it. We explain all of that.
Jonathan: Oh, very good.
Joao: It's not about the actual code. Our expertise is in creating the patches, and preparing and testing them, and being able to deploy them securely and confidently, so that they won't break your systems when you apply them.
Jonathan: Is, is the, is the pricing model sort of per system or is it like per library or?
Joao: It depends on the product. The pricing system is typically per system, but again, depends on the product. We have all of that information on the website. It's easy to find or just get in touch with us. We won't bite. We're very friendly.
We love talking to people. But really, sometimes you see that, oh, we don't want to talk to somebody because it's going to be a hassle and they're going to try to push stuff. Get in touch with us; we have ways to help you. We're friendly. We really are.
Jonathan: Very good.
It's a neat thing. And I guess I'm not aware of anybody else really in this niche, so that's a good thing. You're in a very interesting niche, and you're sort of there by yourselves. I asked you how many products you support; do you have an idea of how many servers are using the different TuxCare packages or services?
Joao: Hundreds of thousands probably. Yeah. At this point, upwards of a million.
Jonathan: Yeah. Yeah. That's a bunch, that's a bunch of machines. That is one intern sneeze away from having a problem.
Joao: The median uptime for the systems running KernelCare, which is the live patching solution, is over a year and a half right now.
Jonathan: I mentioned it in the back chat. I got real excited when one of my systems hit a thousand days of uptime. I didn't have live patching on that. I didn't feel too bad about it because it was on a segregated network. It wasn't, you know, exposed to the internet and nobody ever used it for anything other than file storage.
But yeah, I, I got real proud of a thousand days of uptime. You said you've got, you guys know of one that's over 10 years. That's just kind of ridiculous.
Joao: Again, it's the type of thing where, if you're not using live patching, it's absurd to run a system for a year without updates. Especially if it's network connected and internet facing, that's just a death sentence at that point.
Aaron: That's what I was going to ask.
If you don't go with something like this, as a business, how would you even solve this? I'm thinking of the stuff that I used to do, like I said, working in those regulated environments: we had high availability. And so we could take one side down, patch everything, get it up to spec, and then cut over and do the other side.
That's basically how we would have to do it. But that's super expensive, because we were maintaining very expensive Unix servers with high redundancy, live-swapping hardware parts, all of that kind of stuff. We were maintaining all of that twice, just for the off chance that we would have to use it, because we had to comply with the regulations that we were under. And then the other thing I'm thinking, and maybe I'm answering my own question here,
I hope not. I'm going to give you a chance to talk, I promise. The other thing I'm thinking is that you could possibly use a container scenario to do kind of the same thing, where you put the things that you need to control for a particular CVE inside a container. And then, I guess it all depends on what the problem is, but you could potentially put a wall around it and say, okay, I know that there's a vulnerability here because it's accessing something it's not supposed to; we're just going to put a firewall around that and make sure it doesn't happen.
What else would you do? I don't know, is that even valid anymore?
Joao: If you have very deep pockets, then you can go down the high availability route, and you have a higher upfront cost because you have to buy twice the amount that you need, or at least enough to support the number of nodes that you want to take down at any given moment.
Yeah, if you're able to do that, if you're able to foot the bill for doing that, because it gets very expensive, as you were saying. It's very expensive hardware that you need to accommodate, or even cloud capacity, if that's the route you're going. If you're able to do that, then that's one alternative.
But the thing with high availability is that it wasn't created to support that. Using high availability as your shield for your maintenance operations, that's not really the intended usage. High availability was meant to accommodate hardware failures. Those are unexpected and unpredictable.
And when they hit, you need to have some way of keeping your systems up. If you're using it to do stuff that you can anticipate, then again, that's not really the intended scenario. It works, it's a solution, it's something possible, but you're essentially wasting resources, because you have stuff lying around consuming electricity that's not doing anything.
And again, you're over-provisioning when you probably could save the cost there. The other thing about containers is that the Linux kernel that's running on the host is the exact same kernel that's running, and visible, inside of the containers. Containers aren't magical; it's kernel functionality that isolates the processes, it doesn't let one process talk to another, and that's what you call a container.
The kernel still needs updates. We can still live patch the kernel on the host, and that way the updates will be visible, and reflected, inside of the containers as well. So yeah, you can go that route. You can fully isolate the container, depending on the CVEs. If you have a container breakout, that might not be possible to do, if the breakout allows you to go around that. But again, it all depends on the workload that you're trying to protect. If it's stuff that really needs direct hardware access for some reason, if it's controlling a tape controller or a CT scanner or something like that, then it might not work inside a container due to the restrictions.
Depends very much on the workload.
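Joao's point that containers run on the host's kernel, rather than booting their own, is easy to verify. A minimal sketch in Python (the `docker` command in the comment is illustrative, assuming a host with Docker installed):

```python
import platform

def host_kernel_release() -> str:
    """Return the kernel release string, e.g. '6.8.0-41-generic'.

    A container started on this host reports the very same string,
    because containers share the host kernel instead of running one.
    """
    return platform.release()

if __name__ == "__main__":
    print(host_kernel_release())
    # On a Linux host with Docker installed, compare with:
    #   docker run --rm alpine uname -r
    # The two outputs match, which is why live patching the host
    # kernel is immediately reflected inside every container.
```

This is exactly why one live patch on the host covers all the containers running on it.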
Aaron: Yeah. And the ugly truth there, too, is that we know that developers and system maintainers don't always go back. They'll architect something in development which maybe has ports open that it shouldn't, or has root privileges that it shouldn't, and then that goes into production. You want to call it lazy, but I think it's also a complexity problem, where you just can't keep track of this stuff. But it happens, and you get stuff in production that has root or elevated privileges available, or ports open, and then it becomes very difficult to go back and fix that after the fact. And of course, when a CVE comes out that can take advantage of that, that's really when the you-know-what hits the fan. So yeah, it's an interesting problem to think about, and it seems like a really interesting solution
that you offer, both for NET, as we've been talking about today, and for lots of other things too.
Joao: Yeah. And what you were describing there: you don't just do one thing. You don't just prepare one workload and then sit around waiting for any changes to come that need to be applied to it. You have lots of other things to do.
You have users to deal with. You have other problems to handle. You have lots of tickets popping up. That's just a day in the life of a sysadmin; that's just what you do. So focusing on just one thing like that, with that level of attention, that "oh, I need to remember to fix that part, and I'm going to do it tomorrow, and I'll be able to go back to that": it never happens.
Jonathan: So I've done some looking into some of the other live patch options, and for the kernel in particular there are a couple of others. And one thing that I remember seeing on some of those is that not every part of the kernel code can be live patched. And so then, at least in theory, you could have some CVEs, some vulnerabilities, that you just couldn't ever effectively live patch, because they're, for lack of a better term, so deep inside of the kernel. Is that a problem that you guys have hit, or do you have some secret sauce that makes it possible to live patch anything?
Joao: We don't have that problem. Okay, we need to delve a bit more into live patching. There's a consistency problem, and this is true of all of the live patching solutions. You need to maintain consistency when you're applying a live patch. What I mean by this is that when you apply a change to the code, it either has to go all in and be completely applied, or not at all.
So that you don't catch code that's executing in the middle of a function, then you switch the end of the function, and, for example, you remove a release, and now you have a memory leak there. You need to make sure that the kernel is in a consistent state when you apply the live patch. That means that when you're applying a patch to a function, that function must not be in use.
Okay? KernelCare is smart enough to wait for a moment when the functions that it wants to patch are not in use. One of the very few CVEs, and I mean you can count them on the fingers of one hand, that we haven't been able to live patch was when one of the changes a few years ago changed the way that all functions returned, and it was for one of those high-profile CPU vulnerabilities out there.
The return trampolines.
Jonathan: Retpolines, I think they call them.
Joao: Exactly.
Joao: The diff for that was massive: all of the functions in the code had the return trampoline added. So you had to switch all of them at the same time. We could create a live patch, and we did create a live patch.
The problem was that there was never a good time when all of those functions were not in use simultaneously, so you would never be able to apply the patch. The problem was not so much with the live patch creation; it was that it was impossible to maintain consistency.
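The consistency rule Joao describes can be modeled in a few lines. This is a toy simulation of the idea, not KernelCare's actual mechanism: a patch may only be applied at an instant when none of the functions it replaces appear on any thread's call stack, which is why a patch that touches every function at once (the retpoline case) never finds a safe moment.

```python
from typing import Iterable, List, Set

def can_apply(patched_funcs: Set[str], active_stacks: List[Set[str]]) -> bool:
    """A live patch is safe at this instant only if no thread's call
    stack contains any function the patch replaces."""
    live = set().union(*active_stacks) if active_stacks else set()
    return patched_funcs.isdisjoint(live)

def try_apply(patched_funcs: Set[str], snapshots: Iterable[List[Set[str]]]) -> bool:
    """Scan successive snapshots of the system, applying the patch at
    the first consistent moment (the 'wait for a quiet instant' idea)."""
    return any(can_apply(patched_funcs, snap) for snap in snapshots)

# A patch touching only tcp_sendmsg finds a quiet moment:
snaps = [
    [{"tcp_sendmsg", "do_syscall"}, {"schedule"}],
    [{"do_syscall"}, {"schedule"}],  # tcp_sendmsg not live here
]
assert try_apply({"tcp_sendmsg"}, snaps)

# A retpoline-style patch touching *every* function never does,
# because some function is always executing somewhere:
all_funcs = {"tcp_sendmsg", "do_syscall", "schedule"}
assert not try_apply(all_funcs, snaps)
```

The function names and snapshot structure are illustrative; the real kernel livepatch machinery inspects actual task stacks, but the all-or-nothing condition is the same.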
Jonathan: You would have to convince the kernel to park in main, and good luck. I don't even know if the kernel has a main function.
Oh, that's fun. That is an excellent answer for that. Okay, so one of the things that I've seen, from you guys, and in fact in your...
Joao: Sorry, sorry to interrupt. Let me just backtrack there. Those issues that you were seeing with other solutions, let's say that they can only patch certain functions, tie into the way that they apply the live patches, and the way that they treat live patching.
Canonical, for example, will tell you that you have to reboot after 90 days if you apply the live patch. We don't tell you that. You can run the systems for as long as you want, and you can apply the subsequent live patches as often as you want. You will always be running an equivalent version to whatever is out there at the moment.
So you will never be in a state where your kernel is patched up to this level plus one or two more CVEs. You will always be equivalent to whatever is released out there. The thing with Canonical's approach, the way that they do live patching, for example, is that they will patch specific and individual CVEs.
And they will often leave the kernel in a state where they can't cleanly apply another one, because there will be lots of different changes that they haven't accounted for. We're never in that situation with our solution. So that's why some of the live patching solutions out there, some of the vendors out there, don't tell you that you can patch every single CVE that comes out: they will often find themselves in dead ends where they can no longer move forward with live patching.
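The contrast Joao draws can be sketched abstractly. In this hypothetical model (the class names and structure are illustrative, not either vendor's implementation), a cumulative patcher replaces its whole patch set each time, so its state always equals the latest release, while a per-CVE patcher applies deltas that may assume prior changes and can dead-end:

```python
class CumulativePatcher:
    """Each live patch bundles every fix to date and replaces the
    previous patch, so state always equals the latest release."""
    def __init__(self) -> None:
        self.fixed: frozenset = frozenset()

    def apply(self, latest_cves: frozenset) -> None:
        self.fixed = latest_cves  # full replacement, no drift possible

class IncrementalPatcher:
    """Applies one CVE fix at a time; a fix may only apply cleanly on
    top of certain prior changes (the 'dead end' risk)."""
    def __init__(self) -> None:
        self.fixed: frozenset = frozenset()

    def apply(self, cve: str, requires: frozenset) -> bool:
        if not requires <= self.fixed:
            return False  # prior changes unaccounted for; stuck
        self.fixed = self.fixed | {cve}
        return True

cum = CumulativePatcher()
cum.apply(frozenset({"CVE-A", "CVE-B", "CVE-C"}))
assert cum.fixed == {"CVE-A", "CVE-B", "CVE-C"}

inc = IncrementalPatcher()
assert inc.apply("CVE-A", requires=frozenset())
# CVE-C's fix assumes CVE-B's changes are already in place:
assert not inc.apply("CVE-C", requires=frozenset({"CVE-B"}))  # dead end
```

The point of the sketch: when every patch carries the complete, current fix set, there is never a baseline mismatch to get stuck on.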
Jonathan: Interesting. I am now very intrigued, and I'm going to your GitHub repo to learn more about how exactly this works, because it is pretty fascinating stuff. I want to ask about one of the things that you mentioned in the notes you sent to us, this idea of endless support. And I guess that kind of goes along with NET 6: you're going to support it forever?
Question mark.
Joao: So we're going to support it for as long as there are people interested in us keeping it supported. The mantra here being that it will probably outlast the hardware you're running your workloads on. When that hardware dies, we can continue to support it, because there's no technical reason why we can't.
And that's the fundamental truth of it. There will always be a way forward, and there will always be a way to change the code and fix the bugs that come up. There's no reason why we can't do it. So as long as there's interest, rather than saying, oh no, we will only provide you with the service for three years, or five additional years.
What we would be doing in that situation is just kicking the can down the road. You would eventually find yourself in the same situation when our support ended. We don't want to do that. For as long as you need the service, for as long as there's interest, we have the technical ability to continue to support it.
Jonathan: Are there other vendors out there that are doing something similar, keeping a fork of NET 6 up to date and fixing CVEs? And then I'm really curious: if there are other vendors doing this, are you working together at all? Is there a kind of shared, semi-official NET 6 code base that all the CVE fixes are landing in?
Joao: No. To my knowledge, there is no such shared code base. We're not working together doing that, for one particular reason: the distribution vendors that are doing that, Canonical and Red Hat, are doing it only for their own particular distributions. Canonical is supporting it for particular Ubuntu versions, Red Hat is doing that for particular RHEL versions.
We support all of them. We're not distribution specific. We're agnostic, in that we can support all the distributions that have the package. So we're not focused on one over the other, or on just doing the fixes for one. We create the packages for all of those distributions, which also helps in most infrastructure scenarios, where you don't just have one single distribution running all the systems.
All the distributions try to come up with some marketing strategy, or with some advantage over the others. They're all essentially just running the kernel and some tooling, but some will tell you, oh, we're better at doing raw calculations, we're better for AI, we're better for file systems, we're better for whatever. So we try to get past that, and we support all of them.
Jonathan: A lot of that just boils down to whether users are more comfortable using dnf and yum, or apt and apt-get.
Joao: Essentially. It's more of a philosophical choice than a technical one right now, because you can do the same thing with all of them, essentially.
Jonathan: Yep, yep, basically. Okay, so what about NET 7? It's out of support as well. Is that on the radar?
Joao: Not necessarily 7, but we're looking at the next one, because 7 is one of the regular supported ones, not a long-term supported one. We could eventually do it, but right now I don't know if there are plans for it.
I honestly don't.
Jonathan: You would have to have someone come along and say, We really need this service for NET 7. Here, take our money.
Joao: That would be a reason, obviously. I mean, who wouldn't? But again, going back to distributions, because this is something that's more relatable: there are distributions, and there are distribution versions, that get more usage than others.
For example, CentOS 8 had much less usage than CentOS 7. That's also why it, ridiculously, went out of support earlier than CentOS 7. So it's to be expected that more people are running CentOS 7 today, and using extended lifecycle support from us, the endless lifecycle support from us, than are using CentOS 8, because it had less usage overall.
The same is true with NET 6. It's just the more common library out there; it has more usage right now than 7, because 7 has a shorter lifespan. So people might just have jumped over it and gone for the next long-term one after that. Again, it goes back to the life cycle, and the expectation of the life cycle, that you get around NET.
When you're developing applications, you don't just take a week to develop an application. It's usually a months-long process, or even a years-long process. So you're not going to target something that's going to go out of support a month after you release. You're going to go with the one that gives you the most amount of time.
So developers will opt for long-term support.
Jonathan: Yeah. Yeah. Very cool. Alright, Aaron, did you wanna get anything else in before I move to our wrap questions?
Aaron: No, not at all. I've learned a lot through this discussion, actually. It's not something I typically touch in my day-to-day work, because my work focuses mostly around Kubernetes and containers and monitoring, but it's tangentially related, right?
Because we're always running into customers that say, oh, I've still got mainframes that I've got to support, or I've got COBOL applications that I've got to support. And this is just one more of those: I also have legacy applications written in unsupported versions of programming languages, like Python or NET, that I also have to support. It always happens, always, without fail.
And of course there's the issue with all the CVEs around Linux and everything else. So all that to say, I think it's a really interesting discussion, and certainly a service that is pretty much required to have around these days, especially for big companies with very broad deployments.
They're going to run into something where it's easier just to have someone support it ongoing than it is to hire somebody to come in and rewrite it from scratch. It's just the way it goes. Yeah.
Jonathan: You know, that leads to a follow-on question, Aaron, and that is: are the TuxCare offerings x86-64 specific?
Or is there support in there for, you know, ARM64, or some of the old IBM mainframes, like the Z series and all of that? Or is it strictly x86-64 at this point?
Joao: When we're looking at NET, right now it's x86-64. We have ARM, we've tossed the idea around; if there's enough interest, we'll probably look into that as well.
Right now it's for x86-64. And Windows and Linux: we offer this for both Windows and Linux.
Jonathan: I did not realize that. That is good. I'm glad I asked. Alright, so we've got two final questions that we are absolutely required to ask everybody, and that is, what is your favorite text editor and scripting language?
Joao: Joe, which I believe is not a very common response, but it was the one that I learned first, and I fell in love with it. I still remember the keystrokes from WordStar back in the day, and it's very similar in Joe. So yeah, that's one for Joe. The scripting language: again, I do everything in Bash.
Jonathan: Ah, yes. We had the creator of Bash on one time, and I asked him this question. I asked him, does Bash even count as a scripting language? And he sort of got offended that I asked. He's like, well, I think so, of course it does. I was like, okay, just checking. So I have it on good authority that Bash counts and is an acceptable answer.
Joao: Could be years of learning.
Jonathan: Indeed, indeed. Alright, thank you so much for being here. It has been a blast talking with you about TuxCare and NET, picking your brain about all this stuff. We appreciate you being here.
Joao: Thank you very much for the invitation. It was a pleasure.
Jonathan: Yes. All right. Aaron, what do you think?
Aaron: Yeah, I mean, it's something that I think a lot of people don't realize, or maybe don't think about, unless you're working in one of those environments where you have to provide legacy support. Not just because you want to give your customers a good experience, but because you have to: if you don't, you're either going to leave yourself open to some sort of security problem, or you've got to comply with some sort of government- or industry-imposed regulation, and there could be severe penalties if you don't. I forget how many millions of dollars it was.
I think it was something like $500,000 an hour, or something crazy like that, for the systems that I worked on. If they weren't available, that's how much we would get fined.
Jonathan: Oh my goodness.
Aaron: And so we had, like, an eight-hour outage one time, and it was like, yeah, that's $4 million to the company. Nothing that I did.
Jonathan: I was going to say, I'd hate to be that intern that tripped over that power cable.
Aaron: No, no, no. I mean, that's what does happen. You know, there have to be people out there that are willing to take care of this, or there don't have to be, but it's nice that there are people in this field
to help companies with this, because sometimes, like I said, it is the cheapest and easiest alternative to standing up a new NET 10, or whatever they're on now, application.
Jonathan: Yeah. Yeah, you know, I was basically not aware of TuxCare. I've always found live patching to be super interesting, but I was not aware that TuxCare was apparently the name in live patching.
Honestly, I'm thinking now about the one lone server I now have down in Dallas. We were talking before the show: I had to do some server transfers, and I'm down to one. And it's like, I don't like rebooting that thing. So I may be contacting TuxCare: how much would you charge me to keep one server up to date so I don't have to reboot it again?
That's fascinating.
Aaron: Yeah, for sure. For sure. I knew about TuxCare, but I never really looked into it specifically. I just figured, oh, okay, they offer support and live patching, okay, whatever. But, like I said, the QEMU stuff, the virtualization things, and the fact at the end there where they mentioned they support Windows, I didn't realize that.
So it's much more involved than I assumed it was, and really interesting to learn about.
Jonathan: Yeah, absolutely. All right, man, do you have anything you want to plug?
Aaron: Well, of course, you can go to either of my YouTube channels: RetroHackShack or RetroHackShack After Hours. My frequency of posting is a little less these days with my eye problems.
It just makes it more difficult to do things, and that should get better once I get that fixed, hopefully. But I'm still posting a lot of videos you can go out and see. I'm working on one right now; to give people a preview: I found a whole stack of old IBM PCs. And when I say old, I mean the original IBM PC line, the 5160, which is the XT, and the 5170, which was the AT.
So, original hardware from IBM. I found a whole stack of those that were missing their top covers, cards had been mangled, and the state of the hardware was unknown. So I'm working my way through that as kind of a repair-a-thon, or a salvage-a-thon, right now, and the first of those videos should be coming out this weekend on the main channel. So go to RetroHackShack on YouTube, and you can see a 5150 motherboard that I had a little trouble
diagnosing the problems with and fixing. But yeah, spoiler alert, I got there in the end.
Jonathan: Very cool. Yes, I very much enjoy watching the RetroHackShack and After Hours channels. I'm a sucker for retro computing as well. And I like your combination of doing it on original hardware,
but also taking these gadgets that people are still making, a lot of them open source, and that's cool about it too. You know, you plug them into the old hardware to teach the old dog new tricks, as they say. I think it's a fun combination of things.
Aaron: Yeah, yeah, I love modding things.
I mean, that's my background. You know, I started a makerspace. I love 3d printing and all that kind of stuff. So I do try to work in both open source software and hardware and modding and making things into the channel, you know, as, as often as I can, because it's just what I naturally gravitate towards.
Jonathan: Yeah, very cool. All right, well, appreciate you being here, man.
Aaron: Absolutely.
Jonathan: So we don't yet have a guest scheduled for next week. We do have Darko Fabian of Semaphore on the 18th. I've had some real-life stuff come up, and so I've been a bit distracted from the scheduling, so we are on the hunt for guests.
If you or someone you know runs an open source project and you want to be on the show, send us an email. It's floss at hackaday dot com, and that will come to me. We'll get you scheduled. We sure appreciate that. You can also follow us, of course, on Hackaday, the home of the show, where you can find my security column that goes live every Friday morning.
That is my take on the interesting news in the security world from the week. Lots of fun and interesting and crazy stories there. And then we've also got the Untitled Linux Show over at twit.tv, where we have a lot of fun covering the Linux news of the week. Make sure to check those out.
We appreciate everybody that is here, both that get us live and on the download, and we will see you next week with another great guest on Floss Weekly.