DragonFlyBSD lead developer Matthew Dillon has been working on a big VM rework in the name of performance and other kernel improvements recently. Here is a look at how those DragonFlyBSD 5.5-DEVELOPMENT improvements are paying off compared to DragonFlyBSD 5.4 as well as FreeBSD 12 and five Linux distribution releases. With Dillon using an AMD Ryzen Threadripper system, we used that too for this round of BSD vs. Linux performance benchmarks.
The work by Dillon on the VM overhaul and other changes (including more HAMMER2 file-system work) will ultimately culminate in the DragonFlyBSD 5.6 release (well, unless he opts for DragonFlyBSD 6.0 or so). These are benchmarks of the latest DragonFlyBSD 5.5-DEVELOPMENT daily ISO as of this week, compared against DragonFlyBSD 5.4.3 stable, FreeBSD 12.0, Ubuntu 19.04, Red Hat Enterprise Linux 8.0, Debian 9.9, Debian Buster, and CentOS 7 1810 as a wide variety of reference points from both newer and older Linux distributions. (As for the absence of Clear Linux as a speedy reference point, it currently has a regression with AMD + Samsung NVMe SSD support on some hardware, including this box, that prevents the drive from coming up, due to a presumed power-management issue that is still being resolved.)
With Matthew Dillon doing much of his development on an AMD Ryzen Threadripper system after proclaiming the greatness of these AMD HEDT CPUs last year, for this round of testing I also used a Ryzen Threadripper 2990WX with 32 cores / 64 threads. Tests of other AMD/Intel hardware with DragonFlyBSD will come once the next stable release is near and all of the kernel work has settled down. For now it's mostly about satisfying our own curiosity as to how well these DragonFlyBSD optimizations are paying off and how they increase the competition against FreeBSD 12 and the Linux distributions.
Maybe you have been reading recently about the release of OpenBSD 6.5 and wonder, "What are the differences between Linux and OpenBSD?"
I've also been there at some point in the past and these are my conclusions.
They also apply, to some extent, to other BSDs. However, an important disclaimer applies to this article.
This list is aimed at people who are used to Linux and are curious about OpenBSD. It is written to highlight the most important changes from their perspective, not the absolute most important changes from a technical standpoint.
Please bear with me.
We are very happy to announce The NetBSD Foundation Google Summer of Code 2019 projects:
The community bonding period, where students get in touch with mentors and the community, started yesterday. The coding period will run from May 27 until August 19.
Please welcome all our students, and good luck to the students and mentors! A big thank you to Google and to The NetBSD Foundation organization mentors and administrators! We are looking forward to a great Google Summer of Code!
The opening keynote at EuroBSDCon 2016 predicted the next 10 years of the BSDs. Amongst all the funny predictions, gnn@FreeBSD said that by 2026 OpenBSD will have its first implementation of SMP. Almost 3 years after this talk, that sounds like a plausible forecast... Why? Where are we? What can we do? Let's dive into the issue!
Most of OpenBSD's kernel still runs under a single lock, the KERNEL_LOCK(). That includes most of the syscalls, most of the interrupt handlers and most of the fault handlers. Most of them, but not all of them, meaning we have collected & fixed bugs while setting up infrastructure and examples. Today this lock remains primarily responsible for the spin percentage you can observe in top(1) and systat(1).
I believe we opted for a difficult hike when we decided to start removing this lock from the bottom. As a result, many SCSI & Network interrupt handlers, as well as all Audio & USB ones, can be executed without the big lock. On the other hand, very few syscalls are already unlocked or almost ready to be, as we (somewhat incorrectly) say. This explains why basic primitives like tsleep(9), csignal() and selwakeup() are only receiving attention now that the top of the Network Stack is running (mostly) without the big lock.
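To make the locked/unlocked distinction concrete, here is a minimal hypothetical C sketch; it is not code from the OpenBSD tree. KERNEL_LOCK()/KERNEL_UNLOCK() and mtx_enter()/mtx_leave() are the real kernel primitives (see mutex(9)), but the handlers, the softc structure and the field names are invented purely for illustration.

/*
 * Hypothetical sketch, not OpenBSD code: a handler that has not been
 * unlocked hides behind the big lock, while an "unlocked" one relies
 * only on its own fine-grained mutex.
 */
#include <sys/param.h>
#include <sys/systm.h>		/* KERNEL_LOCK(), KERNEL_UNLOCK() */
#include <sys/mutex.h>		/* mtx_enter(), mtx_leave() */

struct example_softc {
	struct mutex	sc_mtx;		/* protects only this driver's state */
	int		sc_events;
};

/* Still under the big lock: serialized against most of the kernel. */
void
example_locked_intr(void *arg)
{
	KERNEL_LOCK();
	/* ... touch shared kernel state serialized by the big lock ... */
	KERNEL_UNLOCK();
}

/* "Unlocked": only takes its own mutex around its own state. */
void
example_mpsafe_intr(void *arg)
{
	struct example_softc *sc = arg;

	mtx_enter(&sc->sc_mtx);
	sc->sc_events++;		/* only driver-local state is touched here */
	mtx_leave(&sc->sc_mtx);
}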
In the past years, most of our efforts have been invested into the Network Stack. As I already mentioned, it should be ready to be parallelized. However, I think we should now concentrate on removing the KERNEL_LOCK(), even if the code paths aren't performance critical.
This release finally addresses some of the problems that prevented several games from simply running.
This happens, for example, when a game ships with an old FNA.dll library that no longer matches the API of our native libraries like SDL2, OpenAL, or MojoShader. Some of those cases can be fixed by simply dropping in a newer FNA.dll. fnaify now asks if FNA 17.12 should be added automatically when a known incompatible FNA version is found; you simply answer yes or no.

Another blocker occurs when the game expects to check the SteamAPI, either from a running Steam process or from a bundled steam_api library. OpenBSD 6.5-current now has steamworks-nosteam in ports, a stub library for Steamworks.NET that prevents games from crashing simply because an API function isn't found. The repo is here. fnaify now finds this library in /usr/local/share/steamstubs and uses it instead of the bundled (full) Steamworks.NET.dll.
This may help with any games that use this layer to interact with the SteamAPI, mostly those that can only be obtained via Steam.
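As an aside, and purely as an assumption on my part rather than something stated in the release notes: once the port is built and packaged under the same name, installing the stub on -current should be as simple as:

# pkg_add steamworks-nosteam

after which fnaify would find it in /usr/local/share/steamstubs as described above.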
The order of the arguments in the create, start, and stop commands of vmctl(8) has been changed to match a commonly expected style. Manual usage or scripting with vmctl must be adjusted to use the new syntax.
For example, the old syntax looked like this:
# vmctl create disk.qcow2 -s 50G
The new syntax specifies the command options before the argument:
# vmctl create -s 50G disk.qcow2
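The same option-before-argument ordering applies to start and stop as well. As an illustrative sketch (the VM name, memory size, and disk path here are made up rather than taken from the changelog), starting a VM now looks like:

# vmctl start -m 1G -d disk.qcow2 myvm

with the options given before the VM name instead of after it.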
Right now I am a bit unhappy at Fedora for a specific packaging situation, so let me tell you a little story of what I, as a system administrator, would really like distributions to not do.
For reasons beyond the scope of this blog entry, I run a Prometheus and Grafana setup on both my home and office Fedora Linux machines (among other things, it gives me a place to test out various things involving them). When I set this up, I used the official upstream versions of both, because I needed to match what we are running (or would soon be).
Recently, Fedora decided to package Grafana themselves (as an RPM), and they called this RPM package 'grafana'. Since the two different packages are different versions of the same thing as far as package management tools are concerned, Fedora basically took over the 'grafana' package name from Grafana. This caused my systems to offer to upgrade me from the Grafana.com 'grafana-6.1.5-1' package to the Fedora 'grafana-6.1.6-1.fc29' one, which I actually did after taking reasonable steps to make sure that the Fedora version of 6.1.6 was compatible with the file layouts and so on from the Grafana version of 6.1.5.
Why is this a problem? It's simple. If you're going to take over a package name from the upstream, you should keep up with the upstream releases. If you take over a package name and don't keep up to date or keep up to date only sporadically, you cause all sorts of heartburn for system administrators who use the package. The least annoying future of this situation is that Fedora has abandoned Grafana at 6.1.6 and I am going to 'upgrade' it with the upstream 6.2.1, which will hopefully be a transparent replacement and not blow up in my face. The most annoying future is that Fedora and Grafana keep ping-ponging versions back and forth, which will make 'dnf upgrade' into a minefield (because it will frequently try to give me a 'grafana' upgrade that I don't want and that would be dangerous to accept). And of course this situation turns Fedora version upgrades into their own minefield, since now I risk an upgrade to Fedora 30 actually reverting the 'grafana' package version on me.
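As a sketch of the sort of workaround this pushes you toward (assuming you want to stay on the upstream Grafana packages and are willing to lean on dnf's ordinary exclude machinery; none of this is specific to my setup), you can tell dnf to leave the package alone on a per-run basis:

# dnf upgrade --exclude=grafana

or, more permanently, add 'excludepkgs=grafana' to /etc/dnf/dnf.conf or to the relevant Fedora repository definition so that the Fedora package stops being offered in the first place.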