Organizations are racing to harness the transformative power of AI, but sensitive data privacy and model security remain critical roadblocks. What if you could unlock the full potential of AI without compromising your most valuable assets?
Canonical is thrilled to announce the availability of Ubuntu Confidential VMs on Google Cloud’s accelerator-optimized A3 machine series, featuring the groundbreaking NVIDIA H100 Tensor Core GPUs. This powerful combination brings a new level of secure and high-performance AI computing to the cloud, enabling you to confidently tackle previously impossible use cases. Ubuntu is the only operating system to support Confidential GPU on Google Cloud.
Why Confidential AI Matters
As AI permeates every industry, the need to protect sensitive data and proprietary models becomes paramount. Whether it’s fine-tuning large language models (LLMs) with private customer data, collaborating with multiple untrusted parties on healthcare research, or deploying cutting-edge AI services while safeguarding intellectual property, traditional cloud environments simply fall short.
Confidential AI, powered by the convergence of hardware-based Trusted Execution Environments (TEEs) and cutting-edge GPU technology, provides the answer. Ubuntu Confidential VMs on Google Cloud A3 extend this protection to the entire AI stack, ensuring data privacy and integrity throughout its lifecycle.
How confidential AI works
Google Cloud’s Confidential AI architecture combines AMD SEV-SNP technology with NVIDIA H100 GPUs to create a robust, confidential computing environment. Data is protected in use, in transit, and at rest through the following mechanisms:
CPU-TEE (AMD SEV-SNP): Ubuntu confidential VMs running on AMD 4th Gen EPYC processors utilize SEV-SNP to encrypt and protect the entire VM memory space. Hardware-managed keys prevent unauthorized access or modification from outside the TEE.
GPU-TEE (NVIDIA H100): NVIDIA H100 Tensor Core GPUs extend the Trusted Execution Environment to GPU-accelerated computations, ensuring data security within the GPU.
Encrypted PCIe: All PCIe traffic between the VM and GPU is encrypted and integrity-protected, mitigating risks associated with hardware-level attacks.
Attestation: Provides cryptographic verification of the CPU and GPU TEEs, ensuring workload integrity and data processing adheres to specified policies.
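For a concrete starting point, here is roughly what provisioning one of these VMs looks like with the gcloud CLI. This is a sketch: the zone, machine type, and image family below are illustrative, and you should check Google Cloud's documentation for supported flag combinations.

$ gcloud compute instances create confidential-ai-vm \
    --zone=us-central1-a \
    --machine-type=a3-highgpu-1g \
    --confidential-compute-type=SEV_SNP \
    --maintenance-policy=TERMINATE \
    --image-family=ubuntu-2404-lts-amd64 \
    --image-project=ubuntu-os-cloud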
Ubuntu: The Secure Foundation
Our collaboration with Google Cloud and NVIDIA delivers a truly groundbreaking solution:
Accelerator-optimized Ubuntu 24.04 LTS and Ubuntu 22.04 LTS images, known for their security and stability, power these confidential VMs on Google Cloud, providing a trusted and reliable foundation for your sensitive AI applications.
We recommend using Ubuntu Pro for its extended security maintenance of 12 years and additional enterprise-grade capabilities. These features ensure a more comprehensive security posture for your sensitive workloads.
Key Benefits:
Enhanced Security: Protect your sensitive data and proprietary models from unauthorized access, manipulation, or reverse engineering.
Expanded Use Cases: Unlock new opportunities for secure AI in regulated industries like healthcare, finance, and government.
Accelerated Innovation: Collaborate confidently with partners and competitors without compromising data privacy.
Simplified Compliance: Meet stringent regulatory requirements and demonstrate verifiable compliance with data protection laws.
Seamless Integration: The CUDA driver and GPU firmware handle encryption transparently, maintaining performance and ease of use. Looking ahead, the NVIDIA Blackwell architecture will deliver nearly identical performance while remaining protected by the strong guarantees of NVIDIA Confidential Computing.
Unlocking New Possibilities Across Industries
Ubuntu Confidential VMs with NVIDIA H100 GPUs on Google Cloud A3 unlock a wide range of use cases:
Healthcare: Securely train AI models on sensitive patient data to improve diagnoses and treatment outcomes.
Finance: Detect fraud and assess risk using AI while ensuring the confidentiality of financial data.
Drug Discovery: Collaborate securely with research partners to accelerate the development of new drugs and therapies.
AI Chatbots: Give chatbot users additional assurances that their queries are not visible to anyone besides themselves.
Getting Started Today
Ready to experience the power of Confidential AI with Ubuntu? Contact us today to explore how this transformative solution can help you unlock new possibilities while safeguarding your most valuable assets.
Thanks to the hard work of our contributors, we are happy to announce the release of Lubuntu's Plucky Beta, which will become Lubuntu 25.04. This is a snapshot of the daily images. Approximately two months ago, we posted an Alpha-level update; while some information is duplicated below, that post contains an accurate, concise technical summary of […]
Fixed an issue in AppArmor that prevented Qt6 WebEngine applications from starting.
Beta testing!
KDE Snaps:
Updated Qt6 to 6.8.2
Updated KF6 to 6.11.0
Rolling out 25.04 RC applications! You can find them in the --candidate channel!
Life:
I have decided to strike out on my own. I can’t take any more rejections! Honestly, I don’t blame them; I wouldn’t want a one-armed engineer either. However, I have persevered and accomplished quite a bit with my one arm! So I have decided to take a leap of faith: with your support for open source work and a resurrected side gig of web development, I will survive. If you can help sponsor my work, anything at all, even a dollar, I would be eternally grateful. I have several methods to do so:
The Ubuntu Studio team is pleased to announce the beta release of Ubuntu Studio 25.04, codenamed “Plucky Puffin”.
While this beta is reasonably free of any showstopper installer bugs, you will find some bugs within. This image is, however, mostly representative of what you will find when Ubuntu Studio 25.04 is released on April 17, 2025.
We encourage everyone to try this image and report bugs to improve our final release.
Special Notes
The Ubuntu Studio 25.04 image (ISO) exceeds 4 GB, so it cannot be saved to some file systems, such as FAT32, and may not be readable when burned to a standard DVD. For this reason, we recommend downloading to a compatible file system and creating a bootable USB stick with the ISO image, or burning it to a dual-layer DVD.
Full updated information, including upgrade instructions, is available in the Release Notes.
New Features This Release
This release is more evolutionary than revolutionary. While we work hard to bring new features, this was not a cycle with anything major to report. Here are a few highlights:
Plasma 6.3 is now the default desktop environment, an upgrade from Plasma 6.1.
PipeWire, now at version 1.2.7, continues to improve with every release.
The default panel icons are now back. The default panel now populates based on which applications are available, so there are never empty icons if you choose the minimal install and later add one or more of our featured applications. This refresh happens at every reboot rather than live; to trigger it manually, re-select the Global Theme, or remove the panel and re-add “Ubuntu Studio Default Panel”.
While not included in this Beta, Darktable will be upgraded to 5.0.0 before final release.
Major Package Upgrades
Ardour version 8.12.0
Qtractor version 1.5.3
Audacity version 3.7.3
digiKam version 8.5.0
Kdenlive version 24.12.3
Krita version 5.2.9
GIMP version 3.0.0
There are many other improvements, too numerous to list here. We encourage you to look around the freely-downloadable ISO image.
Known Issues
The installer was supposed to keep the screen from locking, but locking will still occur after 15 minutes, so please keep the screen active during installation. As a workaround, if you know you will be leaving your machine unattended during installation, press Alt-Space to invoke KRunner (this works even from Install Ubuntu Studio, not just the Try Ubuntu Studio live environment) and type “System Settings”. From there, search for “Screen Locking” and deactivate “Lock automatically after…”.
Another possible workaround is to click on “Switch User” and then re-login as “Live User” without a password if this happens.
The installer background and slideshow still show the Oracular Oriole mascot. This is a work in progress, to be fixed in a daily image sometime between now and the final release.
Additionally, we need financial contributions. Our project lead, Erich Eickmeyer, is working long hours on this project and trying to generate a part-time income. Go here to see how you can contribute financially (options are also in the sidebar).
Frequently Asked Questions
Q: Does Ubuntu Studio contain snaps? A: Yes. Mozilla’s distribution agreement with Canonical changed, and Ubuntu can no longer distribute Firefox as a native .deb package. We have found that, after numerous improvements, Firefox now performs just as well as the native .deb package did.
Thunderbird is also a snap this cycle in order for the maintainers to get security patches delivered faster.
Additionally, Freeshow is an Electron-based application. Electron-based applications cannot be packaged in the Ubuntu repositories because they cannot be built from a traditional Debian source package. While such apps do have a build system to create a .deb binary package, it circumvents the source package build system in Launchpad, which is required when packaging for Ubuntu. However, Electron apps can also be packaged as snaps, which can be uploaded and included. Therefore, for Freeshow to be included in Ubuntu Studio, it had to be packaged as a snap.
Also, to keep theming consistent, all included themes are snapped in addition to the included .deb versions, so that snaps stay consistent with our themes.
We are working with Canonical to make sure the quality of snaps improves with each release, so we ask that you give snaps a chance instead of writing them off completely.
Q: If I install this Beta release, will I have to reinstall when the final release comes out? A: No. If you keep it updated, your installation will automatically become the final release. However, if Audacity returns to the Ubuntu repositories before final release, then you might end up with a double installation of Audacity. Removal instructions for one or the other will be made available in a future post.
Q: Will you make an ISO with {my favorite desktop environment}? A: To do so would require creating an entirely new flavor of Ubuntu, which would require going through the Official Ubuntu Flavor application process. Since we’re completely volunteer-run, we don’t have the time or resources to do this. Instead, we recommend you download the official flavor for the desktop environment of your choice and use Ubuntu Studio Installer to get Ubuntu Studio – which does *not* convert that flavor to Ubuntu Studio but adds its benefits.
Q: What if I don’t want all these packages installed on my machine? A: We now include a minimal install option. Install using the minimal install option, then use Ubuntu Studio Installer to install what you need for your very own content creation studio.
CHIRP is a powerful open-source tool for programming amateur radios, supporting brands like Baofeng, Kenwood, and Yaesu. With the transition from chirp-daily to chirp-next, Ubuntu users need a new approach to install the latest version. This guide provides a step-by-step method to install CHIRP, configure dependencies, and troubleshoot common issues.
Step 1: Install Required Dependencies
Before installing CHIRP, ensure your system has the necessary dependencies. Open a terminal and execute:
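On recent Ubuntu releases, something like the following should do it. This is a sketch: the dependency package names and the wheel filename below are placeholders to adapt, and the .whl itself comes from the CHIRP website.

$ sudo apt update
$ sudo apt install pipx python3-wxgtk4.0 python3-serial
$ pipx install ./chirp-20250301-py3-none-any.whl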
(Ensure you use the correct file name for your version.)
After installation, CHIRP should be available system-wide.
To add a shortcut for CHIRP to your application menu after installing it via pipx, first create a desktop entry file in the ~/.local/share/applications/ directory. Open a terminal and run nano ~/.local/share/applications/chirp.desktop to create a new file. In this file, add the following content:
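A minimal entry along these lines should work (the Comment and Categories values here are my own suggestions, not requirements):

[Desktop Entry]
Type=Application
Name=CHIRP
Comment=Program amateur radios
Exec=/home/YOUR_USERNAME/.local/bin/chirp
Terminal=false
Categories=Utility;HamRadio;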
Make sure to replace /home/YOUR_USERNAME/.local/bin/chirp with the correct path to the CHIRP executable. Once the file is created, save it and make it executable by running chmod +x ~/.local/share/applications/chirp.desktop. After that, refresh the application menu by running update-desktop-database ~/.local/share/applications or restarting your desktop environment. Your CHIRP application should now appear in the application menu, ready to launch with a custom shortcut.
Step 4: Ensure CHIRP Is in Your PATH
If CHIRP is not recognized as a command, update your PATH:
pipx ensurepath
Restart your terminal or log out and log back in. You can now run CHIRP using:
chirp
Step 5: Configure Serial Port Permissions
If CHIRP cannot detect your radio, you may need to grant serial port access.
Identify your radio’s device name with: dmesg | grep ttyUSB. This should return something like /dev/ttyUSB0.
Grant access to your user: sudo usermod -a -G $(stat -c %G /dev/ttyUSB0) $USER
Log out and back in or reboot your system for the changes to take effect.
Updating CHIRP
To update CHIRP in the future:
Download the latest .whl file.
Uninstall the current version: pipx uninstall chirp
Reinstall using the latest .whl following the installation steps above.
Troubleshooting Common Issues
CHIRP Doesn’t Start
Ensure pipx ensurepath has been executed.
Restart your terminal or log out and back in.
Serial Port Access Denied
Check user group permissions with: ls -l /dev/ttyUSB0
Add your user to the required group (e.g., dialout or uucp).
wxPython Problems
If wxPython is missing or outdated, install it manually: pip3 install -U -f https://extras.wxpython.org/wxPython4/extras/linux/gtk3/ubuntu-20.04 wxPython
Final Thoughts
By following this guide, you’ll have the latest CHIRP version running smoothly on Ubuntu. Whether you’re programming Baofeng, Kenwood, or other compatible radios, CHIRP simplifies configuration and channel management.
The latest report from the International Data Corporation (IDC), co-sponsored by Canonical and Google Cloud, indicates that 36% of organizations adopt open source to improve development velocity, and 7 in 10 organizations see open source as extremely important for running mission-critical workloads. However, as open source adoption grows, organizations face increasing difficulty in securing and maintaining their software supply chains. These challenges are compounded by the complexity of modern cloud environments, skill gaps, and stringent compliance requirements.
The report features insights from 500 participants across the Americas, APAC, Europe, and Middle East, primarily in IT Manager and IT Security Manager/Director roles. The respondents were surveyed on open source security. The report explores how businesses can build resilience by reducing maintenance complexity and automating vulnerability management. In this blog, we will explore key takeaways from the report and how they relate to enterprise application security, offering possible solutions.
Affordability, customizability, innovation, and security drive organizations to adopt open source.
Affordability – 44% of respondents claimed that using open source helps reduce their costs. Use of open source can reduce or eliminate licensing and support fees associated with proprietary software.
Customizability – Open source also allows organizations to modify and adapt the code to fit their specific needs, with 35% of participants recommending open source for customizability.
Development Velocity – With a global community contributing improvements, security patches, and optimizations, organizations can adopt cutting-edge technologies faster.
Improving security – Open source is more transparent than proprietary software, which is why 31% of organizations in the report see it as a means to improve security. As Linus’s Law famously puts it, given enough eyeballs, all bugs are shallow: the more people looking at the code, the more likely flaws and security issues will be spotted and fixes collaborated on.
However, open source is naturally fragmented, and as a result, organizations consume it from a variety of projects and their dependencies, which come from many different sources. This fragmentation can compound enterprise security challenges, especially as the software stack complexity increases. Without the right auditing tools and governance framework, these challenges are difficult to tackle.
Enterprise challenges with application security
Vulnerability and patching
Vulnerability and patch management is cited as the number one challenge for software supply chains. 7 in 10 responsible teams spend more than 6 hours per week, on average, on security patching. However, the investment is not paying dividends: only 23% of teams are satisfied or mostly satisfied with their ability to fix vulnerabilities.
Additionally, applications are often packaged with their dependencies in containers, to help ensure consistency across different environments – like development, testing, production, etc. Organizations can use containers for their microservices architecture, cloud-native applications, or DevOps workflows. However, the dynamic and intricate nature of containers can also create security challenges in the software supply chain. Managing container security requires continuous monitoring, timely patching, and strict enforcement of best practices and regulations. Around 70% of organizations mandate vulnerability patching for containers within 24 hours of identification, but only 41% are confident in their ability to execute on this policy. Many organizations struggle with limited automation, fragmented tooling, and the sheer volume of security updates required to keep systems protected. Delays in patching leave critical applications exposed to potential exploits, increasing security risks and compliance challenges.
Compliance burdens
Speaking of compliance, 37% of organizations lack an understanding of how compliance regulations apply to their systems, technologies, and software components. Regulations like FedRAMP, GDPR, and HIPAA create additional complexity for enterprises. Without a clear compliance strategy, businesses risk regulatory fines, security vulnerabilities, and operational inefficiencies.
Skills shortages
40% of organizations cited a skills shortage as the reason why they lack confidence in securing their environments. As security threats get more complicated, many enterprises struggle to find and retain talent with the expertise needed to manage vulnerabilities, apply fixes, and ensure compliance with regulations.
Risk mitigation
To mitigate these risks, 9 out of 10 organizations would prefer sourcing software packages from their OS repositories.
In practice, however, many pull software from unverified and potentially unmaintained third-party sources – introducing additional security and dependency risks. The lack of a centralized approach to software sourcing increases exposure to supply chain attacks, outdated packages, and inconsistencies in security maintenance.
Without proper controls, organizations risk integrating vulnerable components into their environments, making it difficult to ensure the integrity and security of their software stacks.
Tackling enterprise application security challenges the Ubuntu way
At Canonical, we have spent over 20 years maintaining open source. We understand these challenges and have developed a security maintenance offering around Ubuntu to alleviate these burdens for organizations: Ubuntu Pro.
Ubuntu Pro is a comprehensive subscription that covers security for both the Operating System (OS) as well as thousands of packages in Ubuntu’s repositories, many of which are commonly used for software development, such as Python, Java, PHP and others.
Organizations that develop on Ubuntu can do so with confidence, knowing the OS and the packages offered in Ubuntu are maintained for 10 years (or up to 12 with the Legacy Support add-on). Because we backport the fixes to previous versions of Ubuntu, organizations benefit from stability and fewer headaches when managing dependencies and issues resulting from forced upgrades.
Ubuntu Pro also covers security maintenance for both infrastructure and applications in Canonical’s open source portfolio. This includes OpenStack, Ceph Storage, the Kubernetes offering, and data solutions like Kafka and Postgres, meaning both the underlying infrastructure and applications are maintained.
In addition to security maintenance, Ubuntu Pro offers automated patching, hardening and compliance profiles. By ensuring that critical vulnerabilities are addressed quickly and that software dependencies come from vetted sources, Ubuntu Pro helps businesses reduce their operational risk and enhance application security. Our goal is to make security maintenance easy, just like we made Linux easy to use.
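On an individual machine, enabling this takes a couple of commands with the pro client. A sketch, assuming you have a token from your Ubuntu Pro account; available services vary by subscription:

# Attach the machine to your Ubuntu Pro subscription.
$ sudo pro attach <YOUR_TOKEN>
# Enable expanded security maintenance for universe packages.
$ sudo pro enable esm-apps
# Check which services are now enabled.
$ pro status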
Miguel spent the week greasing up a Magalhães with his oily hands on a sticky keyboard; Diogo doesn't remember anything he did and woke up in a bathtub of ice with one kidney missing. We talked about volatile organic compounds, the Plucky Puffin Beta launch, pretty wallpapers; community events in Porto, Aveiro, Sintra and Lisboa; corny Rust jokes, Ubports news, Matrix clients for Ubuntu Touch, the blocking of alternative browsers outside the big monopolies, and tacky moustache fashions.
You can support the podcast using our Humble Bundle affiliate links: when you use them to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get all of this for 15 dollars, or different portions of it depending on whether you pay 1 or 8.
We think it's worth well over 15 dollars, so if you can, pay a little more, since you have the option to pay as much as you like.
If you're interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you'll be supporting us too.
Linux distributions come in two main release models: fixed release and rolling release. Each has its own advantages and drawbacks, making the choice between them dependent on user needs, preferences, and use cases. In this article, we’ll dive into the history of Linux releases, explain both models in detail, provide examples of each, and help you determine which one suits you best.
A Brief History of Linux Releases
The Linux operating system was first developed by Linus Torvalds in 1991, and soon after, various distributions (distros) began emerging to make Linux more accessible to users. Early distributions followed a fixed release cycle, similar to traditional commercial software, providing stable versions with long-term support.
As Linux usage grew, developers and power users sought an alternative release model that allowed them to receive continuous updates without waiting for major version upgrades. This led to the birth of the rolling release model, which delivers updates as soon as they are available, without the need for reinstalling or upgrading to a new version.
What is a Fixed Release Distribution?
A fixed release distribution follows a structured development cycle, with periodic major releases that bundle all updates, improvements, and new features into one package. These releases are well-tested before being distributed to users.
Examples of Fixed Release Distros:
Ubuntu – Releases a new version every six months, with Long-Term Support (LTS) versions every two years.
Debian – Has three main branches: Stable (fixed release), Testing, and Unstable.
Fedora – Releases a new version approximately every six months.
openSUSE Leap – A stable release that is synchronized with SUSE Linux Enterprise.
Linux Mint – Based on Ubuntu LTS releases, focusing on stability and user-friendliness.
Pros of Fixed Release Distros:
Stable and reliable: Thoroughly tested before release.
Long-term support (LTS versions): Security updates for many years.
Predictable update cycles: Users know when a new version will be available.
Ideal for production environments and enterprises.
Cons of Fixed Release Distros:
Software can become outdated between releases.
Requires major upgrades to move to a new version.
May lack the latest features and improvements available in newer software.
What is a Rolling Release Distribution?
A rolling release distribution continuously updates packages as soon as they are available, rather than waiting for a scheduled release. This means that the operating system is always up to date without needing periodic major upgrades.
Examples of Rolling Release Distros:
Arch Linux – A minimalist and highly customizable distribution.
openSUSE Tumbleweed – A rolling release counterpart to openSUSE Leap.
Gentoo Linux – Source-based rolling release with maximum flexibility.
EndeavourOS – A user-friendly Arch-based distro.
Manjaro – Based on Arch but with added stability and ease of use.
Pros of Rolling Release Distros:
Always up to date: No need to wait for major releases.
Access to the latest software and kernel versions.
No system reinstallation required to upgrade.
Ideal for developers and enthusiasts who want cutting-edge software.
Cons of Rolling Release Distros:
Can be less stable due to frequent updates.
Updates may occasionally break the system if not managed carefully.
Requires more maintenance and troubleshooting knowledge.
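To make the trade-off concrete, here is what staying current looks like on each model, assuming Ubuntu as the fixed-release example and Arch Linux as the rolling one:

# Fixed release (Ubuntu): regular patches, plus an explicit version jump.
$ sudo apt update && sudo apt full-upgrade
$ sudo do-release-upgrade   # run only when a new release is available

# Rolling release (Arch Linux): one continuous stream of updates.
$ sudo pacman -Syu          # there is no separate "new version" to move to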
Fixed vs. Rolling Release: Which One Should You Choose?
Choosing between a fixed release and a rolling release distribution depends on your needs:
Criteria           Fixed Release                                      Rolling Release
Stability          More stable                                        Less stable (but up to date)
Software updates   Periodic major updates                             Continuous updates
Ease of use        Easier, especially for beginners                   Requires more maintenance
Security           Long-term security patches                         Security updates arrive faster
Ideal for          Enterprises, production environments, beginners    Developers, power users, enthusiasts
If you prefer a stable and predictable system with fewer maintenance requirements, a fixed release distribution like Ubuntu LTS, Debian Stable, or Linux Mint is a great choice.
If you want cutting-edge software, continuous updates, and don’t mind occasional troubleshooting, a rolling release distribution like Arch Linux, Manjaro, or openSUSE Tumbleweed will suit you better.
Both fixed and rolling release distributions have their place in the Linux ecosystem. Understanding their differences allows you to make an informed choice based on your workflow, experience level, and expectations. Whether you prioritize stability or cutting-edge software, there’s a Linux distribution that fits your needs.
The Open Source Initiative has two classes of board seats: Affiliate seats, and Individual Member seats.
In the upcoming election, each affiliate can nominate a candidate, and each affiliate can cast a vote for the Affiliate candidates, but there's only 1 Affiliate seat available. I initially expressed interest in being nominated as an Affiliate candidate via Debian. But since Bradley Kuhn is also running for an Affiliate seat with a similar platform to me, especially with regards to the OSAID, I decided to run as part of an aligned "ticket" as an Individual Member to avoid contention for the 1 Affiliate seat.
Bradley and I discussed running on a similar ticket around 8/9pm Pacific, and I submitted my candidacy around 9pm PT on 17 February.
I was dismayed when I received the following mail from Nick Vidal:
Dear Luke,
Thank you for your interest in the OSI Board of Directors election. Unfortunately, we are unable to accept your application as it was submitted after the official deadline of Monday Feb 17 at 11:59 pm UTC. To ensure a fair process, we must adhere to the deadline for all candidates.
We appreciate your enthusiasm and encourage you to stay engaged with OSI’s mission. We hope you’ll consider applying in the future or contributing in other meaningful ways.
The OSI's contact address is in California, so it seems arbitrary and capricious to retroactively define all of these processes as being governed by UTC.
Accordingly, I was not able to participate in the "potential board director" info sessions, but people who attended heard the importance of accommodating differing time zones being discussed, with OSI representatives mentioning that they try to accommodate everyone's time zones. This seems in sharp contrast with the above policy.
I urge the OSI to reconsider this policy and allow me to stand for an Individual seat in the current cycle.
Update, N.B.: for people writing about this, I use they/them pronouns
After miraculously surviving, the hosts are back full of news. Miguel discovered the PATA-with-a-little-ribbon format while dissecting a Magalhães; the solidarity campaign «let's help Miguel find a Caixa Mágica» continues, with lovely songs; Diogo returned from the SPECTACULAR first Festival de Tecnologia Popular, organised by Odet in Setúbal, and brings us a report on the event; and he will probably take up veterinary medicine, with the opening of a Linux Clinic. And, as ever, we laid into Firefox and Mozilla's blunders hard and heavy; we're going to need sponsorship from a heartburn remedy and gamma-aminobutyric acid inhibitors.
You can support the podcast using our Humble Bundle affiliate links: when you use them to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get all of this for 15 dollars, or different portions of it depending on whether you pay 1 or 8.
We think it's worth well over 15 dollars, so if you can, pay a little more, since you have the option to pay as much as you like.
If you're interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you'll be supporting us too.
Thank you everyone for keeping the lights on for a bit longer. KDE snaps have been restored. I also released 24.12.3! In addition, I have moved “most” snaps to core24. The remaining snaps need newer qt6/kf6, which is a WIP. “The Bad luck girl” has been hit once again with another loss, so with that, I will be reducing my hours on snaps while I consider my options for my future. I am still around, just a bit less.
Thanks again everyone, if you can get me through one more (lingering broken arm) surgery I would be forever grateful! https://gofund.me/d5d59582
The Incus team is pleased to announce the release of Incus 6.10!
This release brings an easier way to run Incus with a valid HTTPS certificate, a new way to send provisioning data through to VMs, a very welcome API enhancement, and much more!
The highlights for this release are:
ACME DNS-01 validation (Let’s Encrypt)
API wide filtering support
Support for SMBIOS11 provisioning in VMs
IOMMU support in VMs
VRF support for routed NICs
Creating profiles in a project through preseed
LZ4 support for backups and images
NOTE: A bugfix release has been made available fixing a few regressions from the original 6.10 release. This is available as 6.10.1.
The full announcement and changelog can be found here. And for those who prefer videos, here’s the release overview video:
And as always, my company is offering commercial support on Incus, ranging from by-the-hour support contracts to one-off services on things like initial migration from LXD, review of your deployment to squeeze the most out of Incus or even feature sponsorship. You’ll find all details of that here: https://zabbly.com/incus
Donations towards my work on this and other open source projects are also always appreciated; you can find me on GitHub Sponsors, Patreon and Ko-fi.
Most of my Debian contributions this month were
sponsored by Freexian.
You can also support my work directly via
Liberapay.
OpenSSH
OpenSSH upstream released
9.9p2 with fixes for
CVE-2025-26465 and CVE-2025-26466. I got a heads-up on this in advance from
the Debian security team, and prepared updates for all of testing/unstable,
bookworm (Debian 12), bullseye (Debian 11), buster (Debian 10, LTS), and
stretch (Debian 9, ELTS). jessie (Debian 8) is also still in ELTS for a few
more months, but wasn’t affected by either vulnerability.
Although I’m not particularly active in the Perl team, I fixed a
libnet-ssleay-perl build failure because
it was blocking openssl from migrating to testing, which in turn was
blocking the above openssh fixes.
A lot of my Python team work is driven by its maintainer
dashboard.
Now that we’ve finished the transition to Python 3.13 as the default
version, and inspired by a recent debian-devel thread started by
Santiago, I
thought it might be worth spending a bit of time on the “uscan error”
section. uscan is typically
scraping upstream web sites to figure out whether new versions are
available, and so it’s easy for its configuration to become outdated or
broken. Most of this work is pretty boring, but it can often reveal
situations where we didn’t even realize that a Debian package was out of
date. I fixed these packages:
cssutils (this in particular was very out of date; upstream has had a new and active maintainer since 2021)
In bookworm-backports, I updated python-django to 3:4.2.18-1 (issuing
BSA-121)
and added new backports of python-django-dynamic-fixture and
python-django-pgtrigger, all of which are dependencies of
debusine.
Anarcat recently wrote about Qalculate, and I think I’m a convert, even though
I’ve only barely scratched the surface.
The thing I almost immediately started using it for is time calculations.
When I started tracking my time, I
quickly found that Timewarrior was good at
keeping all the data I needed, but I often found myself extracting bits of
it and reprocessing it in variously clumsy ways. For example, I often don’t
finish a task in one sitting; maybe I take breaks, or I switch back and
forth between a couple of different tasks. The raw output of timew
summary is a bit clumsy for this, as it shows each chunk of time spent as
a separate row:
$ timew summary 2025-02-18 Debian

Wk  Date       Day Tags                        Start    End      Time    Total
W8  2025-02-18 Tue CVE-2025-26465, Debian,      9:41:44 10:24:17  0:42:33
                   next, openssh
                   Debian, FTBFS with GCC-15,  10:24:17 10:27:12  0:02:55
                   icoutils
                   Debian, FTBFS with GCC-15,  11:50:05 11:57:25  0:07:20
                   kali
                   Debian, Upgrade to 0.67,    11:58:21 12:12:41  0:14:20
                   python_holidays
                   Debian, FTBFS with GCC-15,  12:14:15 12:33:19  0:19:04
                   vigor
                   Debian, FTBFS with GCC-15,  12:39:02 12:39:38  0:00:36
                   python_setproctitle
                   Debian, Upgrade to 1.3.4,   12:39:39 12:46:05  0:06:26
                   python_setproctitle
                   Debian, FTBFS with GCC-15,  12:48:28 12:49:42  0:01:14
                   python_setproctitle
                   Debian, Upgrade to 3.4.1,   12:52:07 13:02:27  0:10:20 1:44:48
                   python_charset_normalizer

                                                                          1:44:48
So I wrote this Python program to help me:
#! /usr/bin/python3

"""Summarize timewarrior data, grouped and sorted by time spent."""

import json
import subprocess
from argparse import ArgumentParser, RawDescriptionHelpFormatter
from collections import defaultdict
from datetime import datetime, timedelta, timezone
from operator import itemgetter

from rich import box, print
from rich.table import Table

parser = ArgumentParser(
    description=__doc__, formatter_class=RawDescriptionHelpFormatter
)
parser.add_argument("-t", "--only-total", default=False, action="store_true")
parser.add_argument(
    "range",
    nargs="?",
    default=":today",
    help="Time range (usually a hint, e.g. :lastweek)",
)
parser.add_argument("tag", nargs="*", help="Tags to filter by")
args = parser.parse_args()

# Accumulate total time per unique combination of tags.
entries: defaultdict[str, timedelta] = defaultdict(timedelta)
now = datetime.now(timezone.utc)
for entry in json.loads(
    subprocess.run(
        ["timew", "export", args.range, *args.tag],
        check=True,
        capture_output=True,
        text=True,
    ).stdout
):
    start = datetime.fromisoformat(entry["start"])
    if "end" in entry:
        end = datetime.fromisoformat(entry["end"])
    else:
        end = now  # still-running interval
    entries[", ".join(entry["tags"])] += end - start

if not args.only_total:
    table = Table(box=box.SIMPLE, highlight=True)
    table.add_column("Tags")
    table.add_column("Time", justify="right")
    for tags, time in sorted(entries.items(), key=itemgetter(1), reverse=True):
        table.add_row(tags, str(time))
    print(table)

total = sum(entries.values(), start=timedelta())
hours, rest = divmod(total, timedelta(hours=1))
minutes, rest = divmod(rest, timedelta(minutes=1))
seconds = rest.seconds
print(f"Total time: {hours:02}:{minutes:02}:{seconds:02}")
$ summarize-time 2025-02-18 Debian

Tags                                                     Time
───────────────────────────────────────────────────────────────
CVE-2025-26465, Debian, next, openssh                 0:42:33
Debian, FTBFS with GCC-15, vigor                      0:19:04
Debian, Upgrade to 0.67, python_holidays              0:14:20
Debian, Upgrade to 3.4.1, python_charset_normalizer   0:10:20
Debian, FTBFS with GCC-15, kali                       0:07:20
Debian, Upgrade to 1.3.4, python_setproctitle         0:06:26
Debian, FTBFS with GCC-15, icoutils                   0:02:55
Debian, FTBFS with GCC-15, python_setproctitle        0:01:50

Total time: 01:44:48
Much nicer. But that only helps with some of my reporting. At the end of a
month, I have to work out how much time to bill Freexian for and fill out a
timesheet, and for various reasons those queries don’t correspond to single
timew tags: they sometimes correspond to the sum of all time spent on
multiple tags, or to the time spent on one tag minus the time spent on
another tag, or similar. As a result I quite often have to do basic
arithmetic on time intervals; but that’s surprisingly annoying! I didn’t
previously have good tools for that, and was reduced to doing things like
str(timedelta(hours=..., minutes=..., seconds=...) + ...) in Python,
which gets old fast.
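For interval arithmetic like that, qalc understands sexagesimal time notation directly. A sketch, using times from the summary above (the exact output format depends on your qalc settings):

$ qalc '0:42:33 + 0:19:04 + 0:14:20'
# ≈ 1.265833 (hours), i.e. 1:15:57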
I also often want to work out how much of my time I’ve spent on Debian work
this month so far, since Freexian pays me for up to 20% of my work time on
Debian; if I’m
under that then I might want to prioritize more Debian projects, and if I’m
over then I should be prioritizing more Freexian projects as otherwise I’m
not going to get paid for that time.
$ summarize-time -t :month Freexian
Total time: 69:19:42

$ summarize-time -t :month Debian
Total time: 24:05:30

$ qalc '24:05:30 / (24:05:30 + 69:19:42) to %'
(86730 / 3600) / ((86730 / 3600) + (249582 / 3600)) ≈ 25.78855349%
The Ubuntu Studio team is pleased to announce the release of Ubuntu Studio 24.04.2 LTS. This is a minor release which wraps-up the security and bug fixes into one .iso image, available for download now.
Giving is down. We understand that some people may no longer be able to give financially to this project, and that’s OK. However, if you have never given to Ubuntu Studio for the hard work and dedication we put into this project, please consider a monetary contribution.
Additionally, we would love to see more monthly contributions to this project. You can do so via PayPal, Liberapay, or Patreon. We would love to see more contributions!
So don’t wait, and don’t wait for someone else to do it! Thank you in advance!
Donate using PayPal Donations are Monthly or One-Time
Donate using Liberapay Donations are Weekly, Monthly, or Annually
I can’t remember exactly the joke I was making at the time in my
work’s slack instance (I’m sure it wasn’t particularly
funny, though; and not even worth re-reading the thread to work out), but it
wound up with me writing a UEFI binary for the punchline. Not to spoil the
ending but it worked - no pesky kernel, no messing around with “userland”. I
guess the only part of this you really need to know for the setup here is that
it was a Severance joke,
which is some fantastic TV. If you haven’t seen it, this post will seem perhaps
weirder than it actually is. I promise I haven’t joined any new cults. For
those who have seen it, the payoff to my joke is that I wanted my machine to
boot directly to an image of
Kier Eagan.
As for how to do it – I figured I’d give the uefi
crate a shot, and see how it is to use,
since this is a low stakes way of trying it out. In general, this isn’t the
sort of thing I’d usually post about – except this wound up being easier and
way cleaner than I thought it would be. That alone is worth sharing, in the
hopes someone comes across this in the future and feels like they, too, can
write something fun targeting the UEFI.
First thing’s first – gotta create a rust project (I’ll leave that part to you
depending on your life choices), and to add the uefi crate to your
Cargo.toml. You can either use cargo add or add a line like this by hand:
uefi = { version = "0.33", features = ["panic_handler", "alloc", "global_allocator"] }
We also need to teach cargo about how to go about building for the UEFI target,
so we need to create a rust-toolchain.toml with one (or both) of the UEFI
targets we’re interested in:
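Mine looks something like this (a minimal sketch; add the aarch64-unknown-uefi target too if you want both):

[toolchain]
targets = ["x86_64-unknown-uefi"]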
Unfortunately, I wasn’t able to use the
image crate,
since it won’t build against the uefi target. This looks like it’s
because rustc had no way to compile the required floating point operations
within the image crate without hardware floating point instructions
specifically. Rust tends to punt a lot of that to libm usually, so this isn't entirely shocking given we're no_std for a non-hardfloat target.
So-called “softening” requires a software floating point implementation that
the compiler can use to “polyfill” (feels weird to use the term polyfill here,
but I guess it’s spiritually right?) the lack of hardware floating point
operations, which rust hasn’t implemented for this target yet. As a result, I
changed tactics, and figured I’d use ImageMagick to pre-compute the pixels
from a jpg, rather than doing it at runtime. A bit of a bummer, since I need
to do more out of band pre-processing and hardcoding, and updating the image
kinda sucks as a result – but it’s entirely manageable.
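The preprocessing itself is a couple of ImageMagick commands, roughly like so (the target resolution here is a guess at your display's; adjust as needed):

$ convert kier.jpg -resize 1280x720 kier.full.jpg
$ convert kier.full.jpg -depth 8 kier.rgba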
This will take our input file (kier.jpg), resize it to get as close to the
desired resolution as possible while maintaining aspect ratio, then convert it
from a jpg to a flat array of 4 byte RGBA pixels. Critically, it’s also
important to remember that the size of the kier.full.jpg file may not actually
be the requested size – it will not change the aspect ratio, so be sure to
make a careful note of the resulting size of the kier.full.jpg file.
Last step with the image is to compile it into our Rust binary, since we
don’t want to struggle with trying to read this off disk, which is thankfully
real easy to do.
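Something along these lines does the trick (the dimensions below are placeholders; yours come out of the ImageMagick step):

// Raw RGBA pixels produced by the ImageMagick conversion.
const KIER: &[u8] = include_bytes!("../kier.rgba");
// Placeholder values: these must match the actual size of kier.full.jpg.
const KIER_WIDTH: usize = 1280;
const KIER_HEIGHT: usize = 641;
const KIER_PIXEL_SIZE: usize = 4;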
Remember to use the width and height from the final kier.full.jpg file as the
values for KIER_WIDTH and KIER_HEIGHT. KIER_PIXEL_SIZE is 4, since we
have 4 byte wide values for each pixel as a result of our conversion step into
RGBA. We’ll only use RGB, and if we ever drop the alpha channel, we can drop
that down to 3. I don’t entirely know why I kept alpha around, but I figured it
was fine. My kier.full.jpg image winds up shorter than the requested height
(which is also qemu’s default resolution for me) – which means we’ll get a
semi-annoying black band under the image when we go to run it – but it’ll
work.
Anyway, now that we have our image as bytes, we can get down to work, and
write the rest of the code to handle moving bytes around from in-memory
as a flat block of pixels, and request that they be displayed using the
UEFI GOP. We’ll just need to hack up a container
for the image pixels and teach it how to blit to the display.
/// RGB Image to move around. This isn't the same as an
/// `image::RgbImage`, but we can associate the size of
/// the image along with the flat buffer of pixels.
struct RgbImage {
    /// Size of the image as a tuple, as the
    /// (width, height)
    size: (usize, usize),
    /// raw pixels we'll send to the display.
    inner: Vec<BltPixel>,
}

impl RgbImage {
    /// Create a new `RgbImage`.
    fn new(width: usize, height: usize) -> Self {
        RgbImage {
            size: (width, height),
            inner: vec![BltPixel::new(0, 0, 0); width * height],
        }
    }

    /// Take our pixels and request that the UEFI GOP
    /// display them for us.
    fn write(&self, gop: &mut GraphicsOutput) -> Result {
        gop.blt(BltOp::BufferToVideo {
            buffer: &self.inner,
            src: BltRegion::Full,
            dest: (0, 0),
            dims: self.size,
        })
    }
}

impl Index<(usize, usize)> for RgbImage {
    type Output = BltPixel;

    fn index(&self, idx: (usize, usize)) -> &BltPixel {
        let (x, y) = idx;
        &self.inner[y * self.size.0 + x]
    }
}

impl IndexMut<(usize, usize)> for RgbImage {
    fn index_mut(&mut self, idx: (usize, usize)) -> &mut BltPixel {
        let (x, y) = idx;
        &mut self.inner[y * self.size.0 + x]
    }
}
We also need to do some basic setup to get a handle to the UEFI
GOP via the UEFI crate (using
uefi::boot::get_handle_for_protocol
and
uefi::boot::open_protocol_exclusive
for the GraphicsOutput
protocol), so that we have the object we need to pass to RgbImage in order
for it to write the pixels to the display. The only trick here is that the
display on the booted system can really be any resolution – so we need to do
some capping to ensure that we don’t write more pixels than the display can
handle. Writing fewer than the display’s maximum seems fine, though.
fn praise() -> Result {
    let gop_handle = boot::get_handle_for_protocol::<GraphicsOutput>()?;
    let mut gop = boot::open_protocol_exclusive::<GraphicsOutput>(gop_handle)?;

    // Get the (width, height) that is the minimum of
    // our image and the display we're using.
    let (width, height) = gop.current_mode_info().resolution();
    let (width, height) = (width.min(KIER_WIDTH), height.min(KIER_HEIGHT));

    let mut buffer = RgbImage::new(width, height);
    for y in 0..height {
        for x in 0..width {
            let idx_r = ((y * KIER_WIDTH) + x) * KIER_PIXEL_SIZE;
            let pixel = &mut buffer[(x, y)];
            pixel.red = KIER[idx_r];
            pixel.green = KIER[idx_r + 1];
            pixel.blue = KIER[idx_r + 2];
        }
    }
    buffer.write(&mut gop)?;
    Ok(())
}
Not so bad! A bit tedious – we could solve some of this by turning
KIER into an RgbImage at compile-time using some clever Cow and
const tricks and implement blitting a sub-image of the image – but this
will do for now. This is a joke, after all, let’s not go nuts. All that’s
left with our code is for us to write our main function and try and boot
the thing!
#[entry]
fn main() -> Status {
    uefi::helpers::init().unwrap();
    praise().unwrap();
    boot::stall(100_000_000);
    Status::SUCCESS
}
If you’re following along at home and so interested, the final source is over at
gist.github.com.
We can go ahead and build it using cargo (as is our tradition) by targeting
the UEFI platform.
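The build itself is the usual incantation, just with the UEFI target:

$ cargo build --release --target x86_64-unknown-uefi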
While I can definitely get my machine to boot these blobs to test, I figured
I’d save myself some time by using QEMU to test without a full boot.
If you’ve not done this sort of thing before, we’ll need two packages,
qemu and ovmf. It’s a bit different than most invocations of qemu you
may see out there – so I figured it’d be worth writing this down, too.
$ doas apt install qemu-system-x86 ovmf
qemu has a nice feature where it’ll create us an EFI partition as a drive and
attach it to the VM off a local directory – so let’s construct an EFI
partition file structure, and drop our binary into the conventional location.
If you haven’t done this before, and are only interested in running this in a
VM, don’t worry too much about it, a lot of it is convention and this layout
should work for you.
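Concretely, assuming the crate is named boot2kier, that layout is just:

$ mkdir -p esp/efi/boot
$ cp target/x86_64-unknown-uefi/release/boot2kier.efi esp/efi/boot/bootx64.efi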
With all this in place, we can kick off qemu, booting it in UEFI mode using
the ovmf firmware, attaching our EFI partition directory as a drive to
our VM to boot off of.
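The invocation winds up looking something like this (the OVMF firmware path varies by distro; this is the Debian-ish location):

$ qemu-system-x86_64 -enable-kvm -m 2G \
    -drive if=pflash,format=raw,readonly=on,file=/usr/share/OVMF/OVMF_CODE.fd \
    -drive format=raw,file=fat:rw:esp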
If all goes well, soon you’ll be met with the all knowing gaze of
Chosen One, Kier Eagan. The thing that really impressed me about all
this is this program worked first try – it all went so boringly
normal. Truly, kudos to the uefi crate maintainers, it’s incredibly
well done.
Booting a live system
Sure, we could stop here, but anyone can open up an app window and see a
picture of Kier Eagan, so I knew I needed to finish the job and boot a real
machine up with this. In order to do that, we need to format a USB stick.
BE SURE /dev/sda IS CORRECT IF YOU’RE COPY AND PASTING. All my drives
are NVMe, so BE CAREFUL – if you use SATA, it may very well be your
hard drive! Please do not destroy your computer over this.
$ doas fdisk /dev/sda
Welcome to fdisk (util-linux 2.40.4).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help): n
Partition type
p primary (0 primary, 0 extended, 4 free)
e extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-4014079, default 2048):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-4014079, default 4014079):
Created a new partition 1 of type 'Linux' and of size 1.9 GiB.
Command (m for help): t
Selected partition 1
Hex code or alias (type L to list all): ef
Changed type of partition 'Linux' to 'EFI (FAT-12/16/32)'.
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
Once that looks good (depending on your flavor of udev you may or
may not need to unplug and replug your USB stick), we can go ahead
and format our new EFI partition (BE CAREFUL THAT /dev/sda IS YOUR
USB STICK) and write our EFI directory to it.
$ doas mkfs.fat /dev/sda1
$ doas mount /dev/sda1 /mnt
$ cp -r esp/efi /mnt
$ find /mnt
/mnt
/mnt/efi
/mnt/efi/boot
/mnt/efi/boot/bootx64.efi
Of course, naturally, devotion to Kier shouldn’t mean backdooring your system.
Disabling Secure Boot runs counter to the Core Principles, such as Probity, and
not doing this would surely run counter to Verve, Wit and Vision. This bit does
require that you’ve taken the step to enroll a
MOK and know how
to use it, right about now is when we can use sbsign to sign our UEFI binary
we want to boot from to continue enforcing Secure Boot. The details for how
this command should be run specifically is likely something you’ll need to work
out depending on how you’ve decided to manage your MOK.
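For me it wound up looking roughly like this (the key and cert paths depend entirely on your MOK setup):

$ sbsign --key MOK.key --cert MOK.crt \
    --output KIER.efi \
    target/x86_64-unknown-uefi/release/boot2kier.efi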
I figured I’d leave a signed copy of boot2kier at
/boot/efi/EFI/BOOT/KIER.efi on my Dell XPS 13, with Secure Boot enabled
and enforcing, just took a matter of going into my BIOS to add the right
boot option, which was no sweat. I’m sure there is a way to do it using
efibootmgr, but I wasn’t smart enough to do that quickly. I let ‘er rip,
and it booted up and worked great!
It was a bit hard to get a video of my laptop, though – but lucky for me, I
have a Minisforum Z83-F sitting around (which, until a few weeks ago was running
the annual http server to control my christmas tree
) – so I grabbed it out of the christmas bin, wired it up to a video capture
card I have sitting around, and figured I’d grab a video of me booting a
physical device off the boot2kier USB stick.
Attentive readers will notice the image of Kier is smaller than the qemu booted
system – which just means our real machine has a larger GOP display
resolution than qemu, which makes sense! We could write some fancy resize code
(sounds annoying), center the image (can’t be assed but should be the easy way
out here) or resize the original image (pretty hardware specific workaround).
Additionally, you can make out the image being written to the display before us
(the Minisforum logo) behind Kier, which is really cool stuff. If we were real
fancy we could write blank pixels to the display before blitting Kier, but,
again, I don’t think I care to do that much work.
But now I must away
If I wanted to keep this joke going, I’d likely try and find a copy of the
original
video when Helly 100%s her file
and boot into that – or maybe play a terrible midi PC speaker rendition of
Kier, Chosen One, Kier after
rendering the image. I, unfortunately, don’t have any friends involved with
production (yet?), so I reckon all that’s out for now. I’ll likely stop playing
with this – the joke was done and I’m only writing this post because of how
great everything was along the way.
All in all, this reminds me so much of building a homebrew kernel to boot a
system into – but like, good, though, and it’s a nice reminder of both how
fun this stuff can be, and how far we’ve come. UEFI protocols are light-years
better than how we did it in the dark ages, and the tooling for this is SO
much more mature. Booting a custom UEFI binary is miles ahead of trying to
boot your own kernel, and I can’t believe how good the uefi crate is
specifically.
Praise Kier! Kudos, to everyone involved in making this so delightful ❤️.
Wireshark is an essential tool for network analysis, and staying up to date with the latest releases ensures access to new features, security updates, and bug fixes. While Ubuntu’s official repositories provide stable versions, they are often not the most recent.
Wearing both Wireshark Core Developer and Debian/Ubuntu package maintainer hats, I’m happy to help the Wireshark team provide updated packages for all supported Ubuntu versions through dedicated PPAs. This post outlines how you can install the latest stable and nightly Wireshark builds on Ubuntu.
Latest Stable Releases
For users who want the most up-to-date stable Wireshark version, we maintain a PPA with backports of the latest releases:
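Assuming the usual Wireshark stable PPA:

$ sudo add-apt-repository ppa:wireshark-dev/stable
$ sudo apt update
$ sudo apt install wireshark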
For those who want to test new features before they are officially released, nightly builds are also available. These builds track the latest development code and you can watch them cooking on their Launchpad recipe page.
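Adding them follows the same pattern (the nightly PPA name here is my assumption; check the recipe page for the exact one):

$ sudo add-apt-repository ppa:wireshark-dev/nightly
$ sudo apt update
$ sudo apt install wireshark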
Note: Nightly builds may contain experimental features and are not guaranteed to be as stable as the official releases. Also, they target only Ubuntu 24.04 and later, including the current development release.
If you need to revert to the stable version later, remove the nightly PPA and reinstall Wireshark:
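# ppa-purge downgrades the nightly packages back to the stable versions
$ sudo apt install ppa-purge
$ sudo ppa-purge ppa:wireshark-dev/nightly
$ sudo apt install wireshark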
Throughout my career, I’ve had the privilege of working with organizations that create widely-used open source tools. The popularity of these tools is evident through their impressive download statistics, strong community presence, and engagement both online and at events.
At InfluxData, I was part of the Telegraf team, where we witnessed substantial adoption through downloads and active usage, reflected in our vibrant bug tracker.
What makes Syft and Grype particularly exciting, beyond their permissive licensing, consistent release cycle, dedicated developer team, and distinctive mascots, is how they serve as building blocks for other tools and services.
Syft isn’t just a standalone SBOM generator - it’s a library that developers can integrate into their own tools. Some organizations even build their own SBOM generators and vulnerability tools directly from our open source foundation!
(I find it delightfully meta to discover syft inside other tools using syft itself)
This collaborative building upon existing tools mirrors how Linux distributions often build upon other Linux distributions. Like Ubuntu and Telegraf, we see countless individuals and organizations creating innovative solutions that extend beyond the core capabilities of Syft and Grype. It’s the essence of open source - a multiplier effect that comes from creating accessible, powerful tools.
While we may not always know exactly how and where these tools are being used (and sometimes, rightfully so, it’s not our business), there are many cases where developers and companies want to share their innovative implementations.
I’m particularly interested in these stories because they deserve to be shared. I’ve been exploring public repositories like the GitHub network dependents for syft, grype, sbom-action, and scan-action to discover where our tools are making an impact.
The adoption has been remarkable!
I reached out to several open source projects to learn about their implementations, and Nicolas Vuilamy from MegaLinter was the first to respond - which brings us full circle.
Tired of waiting for apt to finish installing packages? Wish there were a way to make your installations blazingly fast without caring about minor things like, oh, data integrity? Well, today is your lucky day!
I’m thrilled to introduce apt-eatmydata, now available for Debian and all supported Ubuntu releases!
What Is apt-eatmydata?
If you’ve ever used libeatmydata, you know it’s a nifty little hack that disables fsync() and friends, making package installations way faster by skipping unnecessary disk writes. Normally, you’d have to remember to wrap apt commands manually, like this:
eatmydata apt install texlive-full
But who has time for that? apt-eatmydata takes care of this automagically by integrating eatmydata seamlessly into apt itself! That means every package install is now turbocharged—no extra typing required.
How to Get It
Debian
If you’re on Debian unstable/testing (or possibly soon in stable-backports), you can install it directly with:
sudo apt install apt-eatmydata
Ubuntu
Ubuntu users already enjoy faster package installation thanks to zstd-compressed packages, and to switch into an even higher gear I’ve backported apt-eatmydata to all supported Ubuntu releases. Just add this PPA and install:
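# (PPA name below is my best reconstruction; verify it on Launchpad)
$ sudo add-apt-repository ppa:firebuild/apt-eatmydata
$ sudo apt update
$ sudo apt install apt-eatmydata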
And boom! Your apt install times get a serious upgrade. Let’s run some tests…
# pre-download package to measure only the installation
$ sudo apt install -d linux-headers-6.8.0-53-lowlatency
...
# installation time is 9.35s without apt-eatmydata:
$ sudo time apt install linux-headers-6.8.0-53-lowlatency
...
2.30user 2.12system 0:09.35elapsed 47%CPU (0avgtext+0avgdata 174680maxresident)k
32inputs+1495216outputs (0major+196945minor)pagefaults 0swaps

$ sudo apt install apt-eatmydata
...
$ sudo apt purge linux-headers-6.8.0-53-lowlatency

# installation time is 3.17s with apt-eatmydata:
$ sudo time eatmydata apt install linux-headers-6.8.0-53-lowlatency
2.30user 0.88system 0:03.17elapsed 100%CPU (0avgtext+0avgdata 174692maxresident)k
0inputs+205664outputs (0major+198099minor)pagefaults 0swaps
apt-eatmydata just made installing Linux headers 3x faster!
But Wait, There’s More!
If you’re automating CI builds, there’s even a GitHub Action to make your workflows faster. It does essentially what apt-eatmydata does, and sets itself up in less than a second! Check it out here: GitHub Marketplace: apt-eatmydata
Should You Use It?
Warning: apt-eatmydata is not for all production environments. If your system crashes mid-install, you might end up with a broken package database. But for throwaway VMs, containers, and CI pipelines? It’s an absolute game-changer. I use it on my laptop, too.
So go forth and install recklessly fast!
If you run into any issues, feel free to file a bug or drop a comment. Happy hacking!
Everyone's got a newsletter these days (like everyone's got a podcast). In general, I think this is OK: instead of going through a middleman publisher, have a direct connection from you to the people who want to read what you say, so that that audience can't be taken away from you.
On the other hand, I don't actually like newsletters. I don't really like giving my email address to random people1, and frankly an email app is not a great way to read long-form text! There are many apps which are a lot better at this.
There is a solution to this and the solution is called RSS. Andy Bell explains RSS and this is exactly how I read newsletters. If I want to read someone's newsletter and it's on Substack, or ghost.io, or buttondown.email, what I actually do is subscribe to their newsletter but what I'm actually subscribing to is their RSS feed. This sections off newsletter stuff into a completely separate app that I can catch up on when I've got the time, it means that the newsletter owner (or the site they're using) can't decide to "upsell" me on other stuff they do that I'm not interested in, and it's a better, nicer reading experience than my mail app.2
I use NetNewsWire on my iOS phone, but there are a bunch of other newsreader apps for every platform and you should choose whichever one you want. Andy lists a bunch, above.
The question, of course, then becomes: how do you find the RSS feed for a thing you want to read?3 Well, it turns out... you don't have to.
When you want to subscribe to a newsletter, you literally just put the web address of the newsletter itself into your RSS reader, and that reader will take care of finding the feed and subscribing to it, for you. It's magic. Hooray! I've tested this with substack, with ghost.io, with buttondown.email, and it works with all of them. You don't need to do anything.
If that doesn't work, then there is one neat alternative you can try, though. Kill The Newsletter will give you an email address for any site you name, and provide the incoming emails to that as an RSS feed. So, if you've found a newsletter which doesn't exist on the web (boo hiss!) and doesn't provide an RSS feed, then you go to KTN, it gives you some randomly-generated email address, you subscribe to the intransigent newsletter with that email address, and then you can subscribe to the resultant feed in your RSS reader. It's dead handy.
If you run a newsletter and it doesn't have an RSS feed and you want it to have, then have a look at whatever newsletter software you use; it will almost certainly provide a way to create one, and you might have to tick a box. (You might also want to complain to the software creators that that box wasn't ticked by default.) If you've got an RSS feed for the newsletter that you write, but putting your site's address into an RSS reader doesn't find that RSS feed, then what you need is RSS autodiscovery, which is the "magic" alluded to above; you add a line to your site's HTML in the <head> section which reads <link rel="alternate" type="application/rss+xml" title="RSS" href="https://URL/of/your/feed"> and then it'll work.
I like this. Read newsletters at my pace, in my choice of app, on my terms. More of that sort of thing.
despite how it's my business to do so and it's right there on the front page of the website, I know, I know ↩
Is all of this doable in my mail client? Sure. I could set up filters, put newsletters into their own folders/labels, etc. But that's working around a problem rather than solving it ↩
I suggested to Andy that he ought to write this post explaining how to do this and then realised that I should do it myself and stop being such a lazy snipe, so here it is ↩
Lubuntu Plucky Puffin is the current development branch of Lubuntu, which will become 25.04. Since the release of 24.10, we have been hard at work polishing the experience and fixing bugs in the upcoming release. Below, we detail some of the changes you can look forward to in 25.04. Two Minute Minimal Install When installing […]
The latest thing circulating around people still blogging is the Blog Questions Challenge; Jon did it (and asked if I was) and so have Jeremy and Ethan and a bunch of others, so clearly it is time I should get on board, fractionally late as ever.1
Why did you start blogging in the first place?
Some other people I admired were doing it. I think the person I was most influenced by to start doing it was Simon Willison, who is also still at it2, but a whole bunch of people got on board at around that same time, back in the early days when you could be a medium-sized fish in a small pool just by participating. Mark Pilgrim springs to mind as well -- that's a good example of having influence, when the "standard format" of permalinks got sort of hashed out collectively to be /2025/02/03/blog-questions-challenge, which a lot of places still adhere to (although it feels faintly quaint, these days).
Interestingly, a lot of the early posts on this site are short two-sentence half-paragraph things, throwaway thoughts, and that all got sucked up by social media... but social media hadn't been invented, back in 2002.
What platform are you using to manage your blog and why did you choose it? Have you blogged on other platforms before?
Cor. When it started, this site was being run by Castalian, which was basically "classic ASP but Python instead of VBScript", a thing I built. This is because I was using ASP at work on Windows machines, so that was the model for "dynamic web pages" that I understood, but I wasn't on Windows5 and so I built it myself. No idea if it still works and I very much doubt it since it's old enough to buy all the drinks these days.
After that it was Movable Type for a bit and then, because I'd discovered the idea of funky caching6, it was Vellum: that same model, but (a) in Python and (b) written by me. Then for a while it was "Thort", which was based on CouchDB7, and then it was WordPress, and then in 2014 I switched from WP to a static build based on Pelican, which it still is to this day. Crikey, that was over ten years ago!8 I like static site generators: I even wrote 10 Popular Static Site Generators a few years ago for WebsiteSetup which I think is still pretty good.
How do you write your posts? For example, in a local editing tool, or in a panel/dashboard that’s part of your blog?
In my text editor, which is Sublime Text. The static setup is here on my machine; I write a post, I type make kryogenix, and it runs a whole little series of scripts which invoke Pelican to build the static HTML for the blog, do a few things that I've added (such as add footnote handling9, make og:image links and images10, and sort of handle webmentions but that's broken at the moment) and then copy it up to my actual website (via git) to be published.
It's all a bit lashed together, to be honest, but this whole website is like that. It is something like an ancient city, such as London or Rome; what this site is mostly built on is the ruins of the previous history of the city. Sometimes the older bits poke through because they're still actually OK, or they never got updated; sometimes they've been replaced with the new shiny. You should see the .htaccess file, which operates a bewildering set of redirects through about six different generations of URLs so all the old links still work.11
When do you feel most inspired to write?
When the muse seizes me. Sometimes that's a lot; sometimes not. I do quite a lot of paid writing as part of my various day jobs for others, and quite a lot of creative writing as part of running a play-by-post D&D campaign, and that sucks up a reasonable amount of the writing energy, but there are things which just demand going on the website. Normally these days it's things where I want them to be a reference of some kind -- maybe of a useful tech thing, or some important thought, or something interesting -- for myself or for others.
Alternatively you might think the answer is "while in the pub, which leads to making random notes in an email to myself from my phone and then writing a blog post when I get home" and while this is not true, it's not not true either. I do not want to do a histogram of posting times from this site because I am worried that I will find that the majority are at, like, 11.15pm.
Do you publish immediately after writing, or do you let it simmer a bit as a draft?
Always post immediately. I have discovered about myself that, for semi-ephemeral stuff like posts here or projects that I do for fun, I need to get them done as part of that initial burst of inspiration and energy. If I don't get it done, then my enthusiasm will fade and they will linger half-finished for ever and never get completed. I don't necessarily like this, but I've learned to live with it. If I think of an idea for a post and write a note about it and then don't do it, when I rediscover the note a week later it will not seem anything like as compelling. So posts are mostly written as one long stream-of-consciousness to capitalise on the burning of the creative fire before it gets doused by time or work or everything going on in the world. Carpe diem, I guess.12
Any future plans for your blog? Maybe a redesign, a move to another platform, or adding a new feature?
Not really at the moment, but, as above, these things tend to arrive in a blizzard of excitement and implementation and then linger forever once done. But right now... it all seems to work OK. Ask me when I get back from the pub.
Next?
Well, I should probably point back at some of the people who inspired me to do this or other things and keep doing so to this day. So Simon, Remy, and Bruce, perhaps!
although no longer at simon.incutio.com -- what even was Incutio? ↩
I resisted the word "blog" for a long time, calling it a "weblog", and the activity being "weblogging", because "blog" is such an ugly word. Like most of the fights I was picking in the mid 2000s, this also seems faintly antiquated and passé now. Sic transit gloria mundi and all that. ↩
or "nihil sub sole novum", since we're doing Latin quotes today ↩
and Windows's relationship with Python has always been a bit unsteady, although it's better these days now that Microsoft are prepared to acknowledge that other people can have ideas ↩
you write the pages in an online form, but then a server process builds a static HTML version of them; the advanced version of this where pages were only built on request was called "funky caching" back then ↩
if a disinterested observer were to consider this progression, they might unfairly but accurately conclude that whatever this site runs on is basically a half-arsed system I built based on the latest thing I'm interested in, mightn't they? ↩
Some of the Incus maintainers will be present at FOSDEM 2025, helping run both the containers and kernel devrooms. For those arriving in town early, there will be a “Friends of Incus” gathering sponsored by FuturFusion on Thursday evening (January 30th), you can find the details of that here.
And as always, my company is offering commercial support on Incus, ranging from by-the-hour support contracts to one-off services on things like initial migration from LXD, review of your deployment to squeeze the most out of Incus or even feature sponsorship. You’ll find all details of that here: https://zabbly.com/incus
Donations towards my work on this and other open source projects are also always appreciated; you can find me on GitHub Sponsors, Patreon and Ko-fi.
For several years, DigitalOcean has been an important sponsor of Ubuntu Budgie. They provide the infrastructure we need to host our website at https://ubuntubudgie.org and our Discourse community forum at https://discourse.ubuntubudgie.org. Maybe you are familiar with them. Maybe you use them in your personal or professional life. Or maybe, like me, you didn’t really see how they would benefit you.
Whenever something touches the red cap, the system wakes up from suspend/s2idle.
I used a ThinkPad T14 Gen 3 AMD for 2 years, and I recently purchased a T14 Gen 5 AMD. The previous system, the Gen 3, annoyed me so much because the laptop randomly woke up from suspend on its own, even inside a backpack, heated up the confined air in it, and drained the battery pretty fast as a consequence. Basically, it was too sensitive to any event. For example, the system woke up from suspend whenever a USB Type-C cable was plugged in as a power source, or whenever something touched the TrackPoint, even when the display on the closed lid made slight contact with the red cap. It was uncontrollable.
I was hoping that Gen 5 would make a difference, and it did when it comes to the power source event. However, frequent wakeups due to the TrackPoint event remained the same so I started to dig in.
Disabling touchpad as a wakeup source on T14 Gen 5 AMD
Disabling touchpad events as a wakeup source is straightforward. The touchpad device, ELAN0676:00 04F3:3195 Touchpad, can be found in the udev device tree, and you can list all of its attributes, including those of parent devices, as follows.
$ udevadm info --attribute-walk -p /devices/platform/AMDI0010:01/i2c-1/i2c-ELAN0676:00/0018:04F3:3195.0001/input/input12
...
looking at device '/devices/platform/AMDI0010:01/i2c-1/i2c-ELAN0676:00/0018:04F3:3195.0001/input/input12':
    KERNEL=="input12"
    SUBSYSTEM=="input"
    DRIVER==""
    ...
    ATTR{name}=="ELAN0676:00 04F3:3195 Touchpad"
    ATTR{phys}=="i2c-ELAN0676:00"
    ...
looking at parent device '/devices/platform/AMDI0010:01/i2c-1/i2c-ELAN0676:00':
    KERNELS=="i2c-ELAN0676:00"
    SUBSYSTEMS=="i2c"
    DRIVERS=="i2c_hid_acpi"
    ATTRS{name}=="ELAN0676:00"
    ...
    ATTRS{power/wakeup}=="enabled"
The line I’m looking for is ATTRS{power/wakeup}=="enabled". By using the identifiers of the parent device that has ATTRS{power/wakeup}, I can make sure that /sys/devices/platform/AMDI0010:01/i2c-1/i2c-ELAN0676:00/power/wakeup is always disabled with the custom udev rule as follows.
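For illustration, here is a minimal sketch of such a rule (the file name is arbitrary, and the match keys are my assumption, derived from the attribute walk above):

$ sudo tee /etc/udev/rules.d/99-disable-touchpad-wakeup.rules <<'EOF'
# match the i2c parent device that owns power/wakeup and keep it disabled
ACTION=="add", SUBSYSTEM=="i2c", KERNEL=="i2c-ELAN0676:00", ATTR{power/wakeup}="disabled"
EOF
$ sudo udevadm control --reload && sudo udevadm trigger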
Disabling TrackPoint as a wakeup source on T14 Gen 5 AMD
Having already seen the pattern above, I should be able to apply the same method. The TrackPoint device, TPPS/2 Elan TrackPoint, can be found in the udev device tree.
$ udevadm info --attribute-walk -p /devices/platform/i8042/serio1/input/input5
...
looking at device '/devices/platform/i8042/serio1/input/input5':
    KERNEL=="input5"
    SUBSYSTEM=="input"
    DRIVER==""
    ...
    ATTR{name}=="TPPS/2 Elan TrackPoint"
    ATTR{phys}=="isa0060/serio1/input0"
    ...
looking at parent device '/devices/platform/i8042/serio1':
    KERNELS=="serio1"
    SUBSYSTEMS=="serio"
    DRIVERS=="psmouse"
    ATTRS{bind_mode}=="auto"
    ATTRS{description}=="i8042 AUX port"
    ATTRS{drvctl}=="(not readable)"
    ATTRS{firmware_id}=="PNP: LEN0321 PNP0f13"
    ...
    ATTRS{power/wakeup}=="disabled"
I hit a wall here. ATTRS{power/wakeup}=="disabled" was already set for the i8042 AUX port, yet the TrackPoint still woke the system from suspend. I had to bisect the remaining wakeup sources.
Wakeup sources:
│ [/sys/devices/platform/USBC000:00/power_supply/ucsi-source-psy-USBC000:001/wakeup66]: enabled
│ [/sys/devices/platform/USBC000:00/power_supply/ucsi-source-psy-USBC000:002/wakeup67]: enabled
│ ACPI Battery [PNP0C0A:00]: enabled
│ ACPI Lid Switch [PNP0C0D:00]: enabled
│ ACPI Power Button [PNP0C0C:00]: enabled
│ ACPI Sleep Button [PNP0C0E:00]: enabled
│ AT Translated Set 2 keyboard [serio0]: enabled
│ Advanced Micro Devices, Inc. [AMD] ISA bridge [0000:00:14.3]: enabled
│ Advanced Micro Devices, Inc. [AMD] Multimedia controller [0000:c4:00.5]: enabled
│ Advanced Micro Devices, Inc. [AMD] PCI bridge [0000:00:02.1]: enabled
│ Advanced Micro Devices, Inc. [AMD] PCI bridge [0000:00:02.2]: enabled
│ Advanced Micro Devices, Inc. [AMD] PCI bridge [0000:00:03.1]: enabled
│ Advanced Micro Devices, Inc. [AMD] PCI bridge [0000:00:04.1]: enabled
│ Advanced Micro Devices, Inc. [AMD] USB controller [0000:c4:00.3]: enabled
│ Advanced Micro Devices, Inc. [AMD] USB controller [0000:c4:00.4]: enabled
│ Advanced Micro Devices, Inc. [AMD] USB controller [0000:c6:00.3]: enabled
│ Advanced Micro Devices, Inc. [AMD] USB controller [0000:c6:00.4]: enabled
│ Advanced Micro Devices, Inc. [AMD] USB controller [0000:c6:00.5]: enabled
│ Advanced Micro Devices, Inc. [AMD] USB controller [0000:c6:00.6]: enabled
│ Mobile Broadband host interface [mhi0]: enabled
│ Plug-n-play Real Time Clock [00:01]: enabled
│ Real Time Clock alarm timer [rtc0]: enabled
│ Thunderbolt domain [domain0]: enabled
│ Thunderbolt domain [domain1]: enabled
│ USB4 host controller [0-0]: enabled
└─USB4 host controller [1-0]: enabled
Somehow, disabling SLPB “ACPI Sleep Button” stopped undesired wakeups by the TrackPoint.
looking at parent device '/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00':
    KERNELS=="PNP0C0E:00"
    SUBSYSTEMS=="acpi"
    DRIVERS=="button"
    ATTRS{hid}=="PNP0C0E"
    ATTRS{path}=="\_SB_.SLPB"
    ...
    ATTRS{power/wakeup}=="enabled"
The final udev rule is the following. It also disables wakeup events from the keyboard as a side effect, but opening the lid or pressing the power button can still wake up the system so it works for me.
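For illustration, a minimal sketch of such a rule, matching the sleep button by its ACPI path (the file name is arbitrary and the match keys are my assumption based on the attribute walk above):

$ sudo tee /etc/udev/rules.d/99-disable-slpb-wakeup.rules <<'EOF'
# disable the ACPI sleep button (SLPB) as a wakeup source; on this machine
# this also stops the TrackPoint and keyboard from waking the system
ACTION=="add", SUBSYSTEM=="acpi", DRIVER=="button", ATTR{path}=="\_SB_.SLPB", ATTR{power/wakeup}="disabled"
EOF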
After solving the headache of frequent wakeups on the T14 Gen 5 AMD, I was curious whether I could apply the same fix to the Gen 3 AMD retroactively. Gen 3 has the following wakeup sources active out of the box.
Wakeup sources:
│ ACPI Battery [PNP0C0A:00]: enabled
│ ACPI Lid Switch [PNP0C0D:00]: enabled
│ ACPI Power Button [LNXPWRBN:00]: enabled
│ ACPI Power Button [PNP0C0C:00]: enabled
│ ACPI Sleep Button [PNP0C0E:00]: enabled
│ AT Translated Set 2 keyboard [serio0]: enabled
│ Advanced Micro Devices, Inc. [AMD] ISA bridge [0000:00:14.3]: enabled
│ Advanced Micro Devices, Inc. [AMD] PCI bridge [0000:00:02.1]: enabled
│ Advanced Micro Devices, Inc. [AMD] PCI bridge [0000:00:02.2]: enabled
│ Advanced Micro Devices, Inc. [AMD] USB controller [0000:04:00.3]: enabled
│ Advanced Micro Devices, Inc. [AMD] USB controller [0000:04:00.4]: enabled
│ Advanced Micro Devices, Inc. [AMD] USB controller [0000:05:00.0]: enabled
│ Advanced Micro Devices, Inc. [AMD] USB controller [0000:05:00.3]: enabled
│ Advanced Micro Devices, Inc. [AMD] USB controller [0000:05:00.4]: enabled
│ ELAN0678:00 04F3:3195 Mouse [i2c-ELAN0678:00]: enabled
│ Mobile Broadband host interface [mhi0]: enabled
│ Plug-n-play Real Time Clock [00:01]: enabled
└─Real Time Clock alarm timer [rtc0]: enabled
Disabling the touchpad event was straightforward. The only difference from Gen 5 was the ID of the device.
When it came to the TrackPoint and power source events, though, nothing stopped them from waking the system, even after disabling all wakeup sources. Then I came across a hidden gem named amd_s2idle.py. This “S0i3/s2idle analysis script for AMD systems” is full of domain knowledge about s2idle, like where to look in /proc or /sys, how to enable debugging, and which parts of the logs are important.
By running the script, I got the following output around the unexpected wakeup.
$ sudo python3 ./amd_s2idle.py --debug-ec --duration 30
Debugging script for s2idle on AMD systems
💻 LENOVO 21CF21CFT1 (ThinkPad T14 Gen 3) running BIOS 1.56 (R23ET80W (1.56 )) released 10/28/2024 and EC 1.32
🐧 Ubuntu 24.04.1 LTS
🐧 Kernel 6.11.0-12-generic
🔋 Battery BAT0 (Sunwoda ) is operating at 90.91% of design
Checking prerequisites for s2idle
✅ Logs are provided via systemd
✅ AMD Ryzen 7 PRO 6850U with Radeon Graphics (family 19 model 44)
...
Suspending system in 0:00:02
Suspending system in 0:00:01
Started at 2025-01-04 00:46:53.063495 (cycle finish expected @ 2025-01-04 00:47:27.063532)
Collecting data in 0:00:02
Collecting data in 0:00:01
Results from last s2idle cycle
💤 Suspend count: 1
💤 Hardware sleep cycle count: 1
○ GPIOs active: ['0']
🥱 Wakeup triggered from IRQ 9: ACPI SCI
🥱 Wakeup triggered from IRQ 7: GPIO Controller
🥱 Woke up from IRQ 7: GPIO Controller
❌ Userspace suspended for 0:00:14.031448 (< minimum expected 0:00:27)
💤 In a hardware sleep state for 0:00:10.566894 (75.31%)
🔋 Battery BAT0 lost 10000 µWh (0.02%) [Average rate 2.57W]
Explanations for your system
🚦 Userspace wasn't asleep at least 0:00:30
The system was programmed to sleep for 0:00:30, but woke up prematurely.
This typically happens when the system was woken up from a non-timer based source.
If you didn't intentionally wake it up, then there may be a kernel or firmware bug
I compared all the logs generated by the power button, power source, TrackPoint, and touchpad events. Except for the touchpad event, everything came from GPIO pin #0, and there was no further information to distinguish the wakeup triggers. I ended up with the drastic approach of ignoring wakeup triggers from GPIO pin #0 completely, with the following kernel option.
gpiolib_acpi.ignore_wake=AMDI0030:00@0
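One way to make the option persistent across reboots (a sketch assuming GRUB is the bootloader; the existing "quiet splash" options are just an example):

# in /etc/default/grub:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash gpiolib_acpi.ignore_wake=AMDI0030:00@0"

$ sudo update-grub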
And I get the line on each boot.
kernel: amd_gpio AMDI0030:00: Ignoring wakeup on pin 0
Ignoring the pin comes with obvious downsides. The system no longer wakes up when it shouldn’t, which is good; however, nothing can wake it up after it suspends. Opening the lid, pressing the power button or hitting any key is simply ignored, since all of those go through GPIO pin #0. In the end, I had to explicitly re-enable the touchpad as a wakeup source so the system can be woken by tapping the touchpad. It’s far from ideal, but the touchpad is less sensitive than the TrackPoint, so I will keep it that way.
I often need to keep my system from going to sleep, either indefinitely or until a process finishes. I can’t recall how I came across systemd-inhibit, but here’s my approach and a bit of motivation.
After some fiddling (not much, really), it starts directly once I log in, and I’ll be using it instead of a fully fledged Plex or the like; I just want to stream some videos from time to time from my home PC to my iPad using VLC. :D
The Hack
systemd-inhibit --who=foursixnine --why="maybe there be dragons" --mode=block \
    bash -c 'while systemctl --user is-active -q rygel.service; do sleep 1h; done'
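While the loop is running, you can check from another terminal that the lock is actually held (systemd-inhibit without --what defaults to the idle:sleep:shutdown locks):

$ systemd-inhibit --list

This should show the "foursixnine" inhibitor holding a block-mode lock until rygel.service stops.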
Last week I was bitten by an interesting C gotcha. The following terminate function was expected to exit if okay was zero (false); however, it instead exited when a non-zero value was passed to it. The reason is a missing semicolon after the return statement.
The interesting part is that it compiles fine, because the void function terminate is allowed to return a void value: in this case, the void return value from exit().
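The code itself isn’t shown above, so here is my reconstruction of the gotcha; the exact function body is an educated guess, but it reproduces the described behaviour (gcc accepts it silently unless you add -pedantic):

$ cat > terminate.c <<'EOF'
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/*
 * Intended behaviour: return when okay is true, exit when okay is false.
 * The missing semicolon after "return" glues the next line onto it, so it
 * parses as "if (okay) return exit(EXIT_FAILURE);" - the exact opposite:
 * exit when okay is true, silently fall through when okay is false.
 */
static void terminate(const bool okay)
{
	if (okay)
		return
	exit(EXIT_FAILURE);
}

int main(void)
{
	terminate(true);	/* exits here because of the bug */
	puts("never reached");
	return 0;
}
EOF
$ gcc terminate.c -o terminate && ./terminate; echo "exit status: $?"
exit status: 1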
OCI (Open Container Initiative) images are the standard container image format, based on the original Docker format. Each container image is represented as an array of ‘layers’, each of which is a .tar.gz. To unpack the container image, you untar the first layer, then untar the second on top of the first, and so on.
Several years ago, while we were working on a product which ships its root filesystem (and of course containers) as OCI layers, Tycho Andersen (https://tycho.pizza/) came up with the idea of ‘atomfs’ as a way to avoid some of the deficiencies of tar (https://www.cyphar.com/blog/post/20190121-ociv2-images-i-tar). In ‘atomfs’, the .tar.gz layers are replaced by squashfs (now optionally erofs) filesystems with dm-verity root hashes specified. Mounting an image now consists of mounting each squashfs, then merging them with overlay. Since we have the dm-verity root hash, we can ensure that the filesystem has not been corrupted without having to checksum the files before mounting, and there is no tar unpacking step.
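Conceptually, mounting a two-layer image then looks something like the following sketch. This is not the actual stacker/atomfs CLI; the file names, hashes and offsets are made up for illustration:

$ # open each squashfs through dm-verity so all reads are integrity-checked
$ veritysetup open layer1.squashfs layer1 layer1.squashfs "$ROOT_HASH_1" --hash-offset "$OFFSET_1"
$ veritysetup open layer2.squashfs layer2 layer2.squashfs "$ROOT_HASH_2" --hash-offset "$OFFSET_2"
$ mount -t squashfs -o ro /dev/mapper/layer1 /mnt/layers/1
$ mount -t squashfs -o ro /dev/mapper/layer2 /mnt/layers/2
$ # merge the layers with overlayfs; the leftmost lowerdir is the topmost layer
$ mount -t overlay overlay -o lowerdir=/mnt/layers/2:/mnt/layers/1 /mnt/image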
This past week, Ram Chinchani presented atomfs at the OCI weekly discussion, which you can see here https://www.youtube.com/watch?v=CUyH319O9hM starting at about 28 minutes. He showed a full use cycle, starting with a Dockerfile, building atomfs images using stacker, mounting them using atomfs, and then executing a container with lxc. Ram mentioned his goal is to have a containerd snapshotter for atomfs soon. I’m excited to hear that, as it will make it far easier to integrate into e.g. kubernetes.
I’m pleased to introduce uCareSystem 24.12.11, the latest version of the all-in-one system maintenance tool for Ubuntu, Linux Mint, Debian and its derivatives. This release brings some major changes in UI, fixes and improvements under the hood. Continuing on the path of the earlier release, in this release after many many … many … did […]
The new bug templates feature in Launchpad aims to streamline the bug reporting process, making it more efficient for both users and project maintainers.
In the past, Launchpad provided only a basic description field for filing bug reports. This often led to incomplete or vague submissions, as users might not include essential details or steps to reproduce an issue. This could slow down the debugging process when fixing bugs.
To improve this, we are introducing bug templates. These allow project maintainers to guide users when reporting bugs. By offering a structured template, users are prompted to provide all the necessary information, which helps to speed up the development process.
To start using bug templates in your project, simply follow these steps:
Access your project’s bug page view.
Select ‘Configure bugs’.
A field showing the bug template will prompt you to fill in your desired template.
Save the changes. The template will now be available to users when they report a new bug for your project.
For now, only a default bug template can be set per project. Looking ahead, the idea is to expand this by introducing multiple bug templates per project, as well as templates for other content types such as merge proposals or answers. This will allow project maintainers to define various templates for different purposes, making the open-source collaboration process even more efficient.
Additionally, we will introduce Markdown support, allowing maintainers to create structured and visually clear templates using features such as headings, lists, or code blocks.
I’m pleased to introduce uCareSystem 24.11.17, the latest version of the all-in-one system maintenance tool. This release brings some minor fixes and improvements with visual changes that you will love. I’m excited to share the details of the latest update to uCareSystem! With this release, the focus is on refining the user experience and modernizing […]
In basically every engineering organization I’ve ever regarded as particularly
high functioning, I’ve sat through one specific recurring conversation: a
conversation about “complexity”. Things are good or bad because they
are or aren’t complex, architectures need to be redone because they’re too
complex – some refactor of whatever it is won’t work because it’s too complex.
You may have even been a part of some of these conversations – or even been
the one advocating for simple light-weight solutions. I’ve done it. Many times.
Rarely, if ever, do we talk about complexity within its rightful context –
complexity for whom. Is a solution complex because it’s complex for the
end user? Is it complex if it’s complex for an API consumer? Is it complex if
it’s complex for the person maintaining the API service? Is it complex if it’s
complex for someone outside the team maintaining it to understand?
Complexity within a problem domain, I’ve come to believe, is fairly zero-sum –
there’s a fixed amount of complexity in the problem to be solved, and you can
choose either to solve it yourself or to leave it for those downstream of you
to solve on their own.
That being said, while I believe there is a lower bound of complexity to
contend with for a problem, I do not believe there is an upper bound to the
complexity of possible solutions. It is always possible, and in fact very
likely, that teams create problems for themselves while trying to solve a
problem. The rest of this post speaks to that lower bound. When getting
feedback on an early draft of this blog post, I was informed that Fred
Brooks coined a term for what I call “lower bound complexity” – “Essential
Complexity”, in the paper
“No Silver Bullet—Essence and Accident in Software Engineering”,
which is a better term and can be used interchangeably.
Complexity Culture
In a large enough organization, where the team is high functioning enough to
have and maintain trust amongst peers, members of the team will specialize.
People will begin to engage with subsets of the work to be done, and begin to
have their efficacy measured against that part of the organization’s problems.
Incentives shift, and over time it becomes increasingly likely that two
engineers may have two very different priorities when working on the same
system together. Someone accountable for uptime and tasked with responding to
outages will begin to resist changes. Someone accountable for rapidly
delivering features will resist gates between them and their users. Companies
(either wittingly or unwittingly) will deal with this by tasking engineers with
both production (feature development) and operational tasks (maintenance), so
the difference in incentives isn’t usually as bad as it could be.
When we get a bunch of folks from far-flung corners of an organization in a
room, fire up a slide deck and throw up some aspirational to-be architecture
diagram in order to get a sign-off to solve some problem (be it someone needs a
credible promotion packet, new feature needs to get delivered, or the system
has begun to fail and needs fixing), the initial reaction will, more often than
I’d like, start to devolve into a discussion of how this is going to introduce
a bunch of complexity, going to be hard to maintain, why can’t you make it
less complex?
Right around here is when I start to try to contextualize the conversation
happening around me – understand what complexity is being discussed, and
who is taking on that burden. Think about who should be owning
that problem, and work through the tradeoffs involved. Is it best solved here,
or left to consumers (be they other systems, developers, or users)? Should
something become an API call’s optional param, taking on all the edge-cases and
maintenance, or should users have to implement the logic using the data you
return (leaving everyone else to take on all the edge-cases and maintenance)?
Should you process the data, or require the user to preprocess it for you?
Frequently it’s right to make an active and explicit decision to simplify and
leave problems to be solved downstream, since they may not actually need to be
solved – or perhaps you expect consumers will want to own the specifics of
how the problem is solved, in which case you leave lots of documentation and
examples. Many other times, especially when it’s something downstream consumers
are likely to hit, it’s best solved internal to the system, since the only
thing that can come of leaving it unsolved are bugs, frustration and
half-correct solutions. This is a grey-space of tradeoffs, not a clear decision
tree. No one wants the software manifestation of a katamari ball or a junk
drawer, nor does anyone want a half-baked service unable to handle the simplest
use-case.
Head-in-sand as a Service
Popoffs about how complex something is are, to a first approximation, best
understood as meaning “complicated for the person making the comment”. A lot of
the #thoughtleadership believe that an AWS hosted EKS k8s cluster running
images built by CI talking to an AWS hosted PostgreSQL RDS is not complex.
They’re right. Mostly right. This is less complex – less complex for them.
It’s not, however, without complexity and its own tradeoffs – it’s just
complexity that they do not have to deal with. Now they don’t have to
maintain machines that have pesky operating systems or hard drive failures.
They don’t have to deal with updating the version of k8s, nor ensuring the
backups work. No one has to push some artifact to prod manually. Deployments
happen unattended. You click a button and get a cluster.
On the other hand, developers outside the ops function need to deal with
troubleshooting CI, debugging access control rules encoded in turing complete
YAML, permissions issues inside the cluster due to whatever the fuck a service
mesh is, everyone needs to learn how to use some k8s tools they only actually
use during a bad day, likely while doing some x.509 troubleshooting to
connect to the cluster (an internal only endpoint; just port forward it) – not
to mention all sorts of rules to route packets to their project (a single
repo’s binary being run in 3 containers on a single vm host).
Beyond that, there’s the invisible complexity – complexity on the interior of
a service you depend on. I think about the dozens of teams maintaining the EKS
service (which is either run on EC2 instances, or alternately, EC2 instances in
a trench coat, moustache and even more shell scripts), the RDS service (also
EC2 and shell scripts, but this time accounting for redundancy, backups,
availability zones), scores of hypervisors pulled off the shelf (xen, kvm)
smashed together with the ones built in-house (firecracker, nitro, etc)
running on hardware that has to be refreshed and maintained continuously. Every
request processed by network ACL rules, AWS IAM rules, security group rules,
using IP space announced to the internet wired through IXPs directly into ISPs.
I don’t even want to begin to think about the complexity inherent in how those
switches are designed. Shitloads of complexity to solve problems you may or
may not have, or even know you had.
What’s more complex? An app running in an in-house 4u server racked in the
office’s telco closet in the back running off the office Verizon line, or an
app running four hypervisors deep in an AWS datacenter? Which is more complex
to you? What about to your organization? In total? Which is more prone to
failure? Which is more secure? Is the complexity good or bad? What type of
Complexity can you manage effectively? Which threaten the system? Which
threaten your users?
COMPLEXIVIBES
This extends beyond Engineering. Decisions regarding “what tools are we able to
use” – be them existing contracts with cloud providers, CIO mandated SaaS
products, a list of the only permissible open source projects – will incur
costs in terms of expressed “complexity”. Pinning open source projects to a
fixed set makes SBOM production “less complex”. Using only one SaaS provider’s
product suite (even if it’s terrible, because it has all the types of tools you
need) makes accreditation “less complex”. If all you have is a contract with
Pauly T’s lowest price technically acceptable artisanal cloudary and haberdashery,
haberdashery, the way you pay for your compute is “less complex” for the CIO
shop, though you will find yourself building your own hosted database template,
mechanism to spin up a k8s cluster, and all the operational and technical
burden that comes with it. Or you won’t and make it everyone else’s problem in
the organization. Nothing you can do will solve for the fact that you must
now deal with this problem somewhere because it was less complicated for the
business to put the workloads on the existing contract with a cut-rate vendor.
Suddenly, the decision to “reduce complexity” because of an existing contract
vehicle has resulted in a huge amount of technical risk and maintenance burden
being onboarded. Complexity you would otherwise externalize has now been taken
on internally. With large enough organizations (specifically, in this case,
I’m talking about you, bureaucracies), this is largely ignored or accepted as
normal since the personnel cost is understood to be free to everyone involved.
Doing it this way is more expensive, more work, less reliable and less
maintainable, and yet, somehow, is, in a lot of ways, “less complex” to the
organization. It’s particularly bad with bureaucracies, since screwing up a
contract will get you into much more trouble than delivering a broken product,
leaving basically no reason for anyone to care to fix this.
I can’t shake the feeling that for every story of technical mandates gone
awry, somewhere just
out of sight there’s a decisionmaker optimizing for what they believe to be the
least amount of complexity – least hassle, fewest unique cases, most
consistency – as they can. They freely offload complexity from their
accreditation and risk acceptance functions through mandates. They will never
have to deal with it. That does not change the fact that someone does.
TC;DR (TOO COMPLEX; DIDN’T REVIEW)
We wish to rid ourselves of systemic Complexity – after all, complexity is
bad, simplicity is good. Removing upper-bound own-goal complexity (“accidental
complexity” in Brooks’s terms) is important, but once you hit the lower bound
complexity, the tradeoffs become zero-sum. Removing complexity from one part of
the system means that somewhere else - maybe outside your organization or in a
non-engineering function - must grow it back. Sometimes, the opposite is the
case, such as when a previously manual business process is automated. Maybe that’s a
good idea. Maybe it’s not. All I know is that what doesn’t help the situation
is conflating complexity with everything we don’t like – legacy code,
maintenance burden or toil, cost, delivery velocity.
Complexity is not the same as proclivity to failure. The most reliable
systems I’ve interacted with are unimaginably complex, with layers of internal
protection to prevent complete failure. This has its own set of costs which
other people have written about extensively.
Complexity is not cost. Sometimes the cost of taking all the complexity
in-house is less, for whatever value of cost you choose to use.
Complexity is not absolute. Something simple from one perspective may
be wildly complex from another. The impulse to burn down complex sections of
code is helpful to have generally, but
sometimes things are complicated for a reason,
even if that reason exists outside your codebase or organization.
Complexity is not something you can remove without introducing complexity
elsewhere. Just as not making a decision is a decision itself, choosing to
require someone else to deal with a problem rather than dealing with it
internally is a choice that needs to be considered in its full context.
Next time you’re sitting through a discussion and someone starts to talk about
all the complexity about to be introduced, I want to pop up in the back of your
head, politely asking what complex means in this context. Is it lower
bound complexity? Is this complexity desirable? Does what they’re saying mean
something along the lines of “I don’t understand the problems being solved”, or
does it mean something along the lines of “this problem should be solved
elsewhere”? Do they believe this will result in more work for them in a way that
you don’t see? Should this not be solved at all, by changing the bounds of what
we accept or redefining the understood limits of this system? Is the perceived
complexity a result of a decision made elsewhere? Who’s taking this complexity
on – or, more to the point, is failing to address the complexity required by the
problem just leaving it to others? Does it impact others? How, specifically?
What are you not seeing?
I decided to be more selective and remove those that did very poorly at 1.5G, which was most.
Ubuntu - booted, but the desktop was not stable and it took 1.5 minutes to load Firefox
Xubuntu-minimal - does not include a web browser, so I couldn't test further. Snap is preinstalled even though no apps are; installing a web browser worked, but it couldn't start
Manjaro KDE - desktop loads, but the browser doesn't
Xubuntu - laggy when Firefox is opened, can't load sites
Ubuntu Mate - laggy when Firefox is opened, can't load sites
Kubuntu - laggy when Firefox is opened, can't load sites
Linux Mint 22 - desktop loads, but the browser isn't responsive
Fedora - video is a bit laggy, but watchable. EndlessOS with Chromium is the smoothest and most responsive watching YouTube.
For fun, let's look at startup time with 2GB (with me hitting buttons as needed to open a folder).
Conclusion
Lubuntu lowered its memory usage for loading a desktop from 585MB in 2020 to 450MB! Kudos to the Lubuntu team!
Both the Fedora and Endless desktops also worked in less memory than in 2020!
Lubuntu, Fedora and Endless all used zram.
Chromium has definitely improved its memory usage; last time, Endless got dinged for using it. Now it appears to work better than Firefox.
Networking is a complex topic, and there is lots of confusion around the definition of an “online” system. Sometimes the boot process gets delayed up to two minutes, because the system still waits for one or more network interfaces to be ready. Systemd provides the network-online.target that other service units can rely on, if they are deemed to require network connectivity. But what does “online” actually mean in this context, is a link-local IP address enough, do we need a routable gateway and how about DNS name resolution?
The requirements for an “online” network interface depend very much on the services using an interface. For some services it might be good enough to reach their local network segment (e.g. to announce Zeroconf services), while others need to reach domain names (e.g. to mount a NFS share) or reach the global internet to run a web server. On the other hand, the implementation of network-online.target varies, depending on which networking daemon is in use, e.g. systemd-networkd-wait-online.service or NetworkManager-wait-online.service. For Ubuntu, we created a specification that describes what we as a distro expect an “online” system to be. Having a definition in place, we are able to tackle the network-online-ordering issues that got reported over the years and can work out solutions to avoid delayed boot times on Ubuntu systems.
In essence, we want systems to reach the following networking state to be considered online:
Do not wait for “optional” interfaces to receive network configuration
Have IPv6 and/or IPv4 “link-local” addresses on every network interface
Have at least one interface with a globally routable connection
Have functional domain name resolution on any routable interface
A common implementation
NetworkManager and systemd-networkd are two very common networking daemons used on modern Linux systems. But they originate from different contexts and therefore show different behaviours in certain scenarios, such as wait-online. Luckily, on Ubuntu we already have Netplan as a unification layer on top of those networking daemons, that allows for common network configuration, and can also be used to tweak the wait-online logic.
With the recent release of Netplan v1.1 we introduced initial functionality to tweak the behaviour of the systemd-networkd-wait-online.service, as used on Ubuntu Server systems. When Netplan is used to drive the systemd-networkd backend, it will emit an override configuration file in /run/systemd/system/systemd-networkd-wait-online.service.d/10-netplan.conf, listing the specific non-optional interfaces that should receive link-local IP configuration. In parallel to that, it defines a list of network interfaces that Netplan detected to be potential global connections, and waits for any of those interfaces to reach a globally routable state.
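On the configuration side, excluding an interface from wait-online is a matter of marking it “optional” in the Netplan YAML; here is a minimal sketch (the interface and file names are made up):

$ cat /etc/netplan/01-example.yaml
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
    eth1:
      dhcp4: true
      optional: true

With this in place, the generated wait-online override will wait for eth0 to come up, but won’t delay boot for eth1.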
In addition to the new features implemented in Netplan, we reached out to upstream systemd, proposing an enhancement to the systemd-networkd-wait-online service, integrating it with systemd-resolved to check for the availability of DNS name resolution. Once this is implemented upstream, we’re able to fully control the systemd-networkd backend on Ubuntu Server systems, to behave consistently and according to the definition of an “online” system that was lined out above.
Future work
The story doesn’t end there, because Ubuntu Desktop systems use NetworkManager as their networking backend. This daemon provides its very own nm-online utility, utilized by the NetworkManager-wait-online systemd service. It implements a much higher-level approach, looking at the networking daemon in general instead of the individual network interfaces. By default, it considers a system to be online once every “autoconnect” profile got activated (or failed to activate), meaning that either an IPv4 or IPv6 address got assigned.
Considerable enhancements still need to be implemented in this tool for it to be controllable in a fine-granular way, similar to systemd-networkd-wait-online, so that it can be instructed to wait for specific networking states on selected interfaces.
A note of caution
Making a service depend on network-online.target is considered an antipattern in most cases. This is because networking on Linux systems is very dynamic and the systemd target can only ever reflect the networking state at a single point in time. It cannot guarantee that this state will be maintained over the uptime of your system, and it has the potential to delay the boot process considerably. Cables can be unplugged, wireless connectivity can drop, or remote routers can go down at any time, affecting the connectivity state of your local system. Therefore, “instead of wondering what to do about network.target, please just fix your program to be friendly to dynamically changing network configuration.” [source]
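For reference, this is the unit-file pattern being cautioned against; these are the standard systemd directives a service would use to order itself after the network is up:

[Unit]
Wants=network-online.target
After=network-online.target

If your service can instead react to address changes at runtime, you can usually drop both lines.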