The Call for Participation (CFP) for All Systems Go! 2025 will close tomorrow, on the 13th of June! We’d like to invite you to submit your proposals for consideration to the CFP submission site quickly!
Last week we released systemd v257 into the wild.
In the weeks leading up to this release (and the week after) I have posted a series of posts to Mastodon about key new features in this release, under the #systemd257 hashtag. In case you aren't using Mastodon but would like to read up, here's a list of all 37 posts:
- Post #1: Fully Locked Accounts with systemd-sysusers
- Post #2: Combined Signed PCR and Locally Managed PCR Policies for Disk Encryption
- Post #3: Progress Indication via Terminal ANSI Sequence
- Post #4: Multi-Profile UKIs
- Post #5: The New sd-varlink & sd-json APIs in libsystemd
- Post #6: Querying for Passwords in User Scope
- Post #7: Secure Attention Key Logic in systemd-logind
- Post #8: systemd-nspawn --bind-user= Now Copies User's SSH Key
- Post #9: The New DeferReactivation= Switch in .timer Units
- Post #10: Support for the New IPE LSM
- Post #11: Environment Variables for Shell Prompt Prefix/Suffix
- Post #12: sysctl Conflict Detection via eBPF
- Post #13: initrd and µcode UKI Add-Ons
- Post #14: SecureBoot Signing with the New systemd-sbsign Tool
- Post #15: Managed Access to hidraw devices in systemd-logind
- Post #16: Fuzzy Filtering in userdbctl
- Post #17: MAC Address Based Alternative Network Interface Names
- Post #18: Conditional Copying/Symlinking in tmpfiles.d/
- Post #19: Automatic Service Restarts in Debug Mode
- Post #20: Filtering by Invocation ID in journalctl
- Post #21: Supplement Partitions in repart.d/
- Post #22: DeviceTree Matching in UKIs
- Post #23: The New ssh-exec: Protocol in varlinkctl
- Post #24: SecureBoot Key Enrollment Preparation with bootctl
- Post #25: Automatically Installing confext/sysext/portable/VMs/container Images at Boot
- Post #26: Designated Maintenance Time in systemd-logind
- Post #27: PID Namespacing in Service Management
- Post #28: Marking Experimental OS Releases in /etc/os-release
- Post #29: Decoding Capability Masks with systemd-analyze
- Post #30: Investigating Passed SMBIOS Type #11 Data
- Post #31: Initializing Partitions from Character Devices in repart.d/
- Post #32: Entering Namespaces to Generate Stacktraces
- Post #33: ID Mapped Mounts for Per-Service Directories
- Post #34: A Daemon for systemd-sysupdate
- Post #35: User Record Modifications without Administrator Consent in systemd-homed
- Post #36: DNR DHCP Support
- Post #37: Name Based AF_VSOCK ssh Access
I intend to do a similar series of posts for the next systemd release (v258), hence if you haven't left tech Twitter for Mastodon yet, now is the time.
Yesterday evening we released systemd v256 into the wild. While other projects, such as Firefox, are just about to leave the 7-bit world and enter 8-bit territory, we have already entered 9-bit version territory! For details about the release, see our announcement mail.
In the weeks leading up to this release I have posted a series of post threads to Mastodon about key new features in this release. Mastodon has its goods and its bads. Among the latter is probably that it isn't that great for posting listings of post threads. Hence let me provide you with a list of the first post of each thread here:
- Post #1: .v/ Directories
- Post #2: User-Scoped Encrypted Service Credentials
- Post #3: X_SYSTEMD_UNIT_ACTIVE= sd_notify() Messages
- Post #4: System-wide ProtectSystem=
- Post #5: run0 as sudo Replacement
- Post #6: System Credentials
- Post #7: Unprivileged DDI Mounts + Unprivileged systemd-nspawn
- Post #8: ssh into systemd-homed Accounts
- Post #9: systemd-vmspawn
- Post #10: Mutable systemd-sysext
- Post #11: Network Device Ownership
- Post #12: systemctl sleep
- Post #13: systemd-ssh-generator
- Post #14: systemd-cryptenroll without device argument
- Post #15: dlopen() ELF Metadata
- Post #16: Capsules
I intend to do a similar series of posts for the next systemd release (v257), hence if you haven't left tech Twitter for Mastodon yet, now is the time.
And while I have you: note that the All Systems Go 2024 Conference (Berlin) Call for Papers ends 😲 THIS WEEK 🤯! Hence, HURRY, and get your submissions in now, for the best low-level Linux userspace conference around!
This is a guest post written by Daan De Meyer, systemd and mkosi maintainer
Almost 7 years ago, Lennart first wrote about mkosi on this blog. Some years ago, I took over development and there's been a huge amount of changes and improvements since then. So I figure this is a good time to re-introduce mkosi.
mkosi stands for Make Operating System Image. It generates OS images that can be used for a variety of purposes.
If you prefer watching a video over reading a blog post, you can also watch my presentation on mkosi at All Systems Go 2023.
What is mkosi?
mkosi
was originally written as a tool to simplify hacking on systemd
and for experimenting with images using many of the new concepts being
introduced in systemd at the time. In the meantime, it has evolved into
a general purpose image builder that can be used in a multitude of
scenarios.
Instructions to install mkosi
can be found in its
readme. We
recommend running the latest version to take advantage of all the latest
features and bug fixes. You'll also need bubblewrap
and the package
manager of your favorite distribution to get started.
At its core, the workflow of mkosi
can be divided into 3 steps:
- Generate an OS tree for some distribution by installing a set of packages.
- Package up that OS tree in a variety of output formats.
- (Optionally) Boot the resulting image in qemu or systemd-nspawn.
Images can be built for any of the following distributions:
- Fedora Linux
- Ubuntu
- OpenSUSE
- Debian
- Arch Linux
- CentOS Stream
- RHEL
- Rocky Linux
- Alma Linux
And the following output formats are supported:
- GPT disk images built with
systemd-repart
- Tar archives
- CPIO archives (for building initramfs images)
- USIs (Unified System Images which are full OS images packed in a UKI)
- Sysext, confext and portable images
- Directory trees
For example, to build an Arch Linux GPT disk image and boot it in qemu, you can run the following command:
$ mkosi -d arch -p systemd -p udev -p linux -t disk qemu
To instead boot the image in systemd-nspawn, replace qemu with boot:
$ mkosi -d arch -p systemd -p udev -p linux -t disk boot
The actual image can be found in the current working directory, named image.raw. However, using a separate output directory is recommended, which is as simple as running mkdir mkosi.output.
To rebuild the image after it has already been built once, add -f to the command line before the verb. Any arguments passed after the verb are forwarded to systemd-nspawn or qemu itself. To build the image without booting it, pass build instead of boot or qemu, or don't pass a verb at all.
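For example, assuming the image has already been built once, the following hypothetical invocations illustrate this (the -m 4G option is simply forwarded to qemu):
$ mkosi -f qemu -m 4G   # rebuild the image, then boot it in qemu with 4G of RAM
$ mkosi -f build        # just rebuild the image, don't boot it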
By default, the disk image will have an appropriately sized root partition and an ESP partition, but the partition layout and contents can be fully customized using systemd-repart by creating partition definition files in mkosi.repart/. This allows you to customize the partitions as you see fit (see the sketch after this list):
- The root partition can be encrypted.
- Partition sizes can be customized.
- Partitions can be protected with signed dm-verity.
- You can opt out of having a root partition and only have a /usr partition instead.
- You can add various other partitions, e.g. an XBOOTLDR partition or a swap partition.
- ...
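For example, a minimal sketch of such a definition requesting an encrypted root partition could look like this (the file name and the exact settings are illustrative; see repart.d(5) for the full set of options):
$ mkdir -p mkosi.repart
$ cat > mkosi.repart/10-root.conf <<'EOF'
[Partition]
Type=root
Format=ext4
CopyFiles=/
Encrypt=key-file
EOF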
As part of building the image, we'll run various tools such as systemd-sysusers, systemd-firstboot, depmod, systemd-hwdb and more to make sure the image is set up correctly.
Configuring mkosi image builds
Naturally with extended use you don't want to specify all settings on
the command line every time, so mkosi
supports configuration files
where the same settings that can be specified on the command line can be
written down.
For example, the command we used above can be written down in a
configuration file mkosi.conf
:
[Distribution]
Distribution=arch
[Output]
Format=disk
[Content]
Packages=
systemd
udev
linux
Like systemd, mkosi
uses INI configuration files. We also support
dropins which can be placed in mkosi.conf.d
. Configuration files can
also be conditionalized using the [Match]
section. For example, to
only install a specific package on Arch Linux, you can write the
following to mkosi.conf.d/10-arch.conf
:
[Match]
Distribution=arch
[Content]
Packages=pacman
Because not everything you need will be supported in mkosi
, we support
running scripts at various points during the image build process where
all extra image customization can be done. For example, if it is found,
mkosi.postinst
is called after packages have been installed. Scripts
are executed on the host system by default (in a sandbox), but can be
executed inside the image by suffixing the script with .chroot
, so if
mkosi.postinst.chroot
is found it will be executed inside the image.
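For instance, a minimal sketch of a mkosi.postinst.chroot script, running inside the image, could look like this (the enabled service is just an example):
#!/bin/sh
set -e
# Runs inside the image after packages have been installed
systemctl enable systemd-networkd.service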
To add extra files to the image, you can place them in mkosi.extra
in
the source directory and they will be automatically copied into the
image after packages have been installed.
Bootable images
If the necessary packages are installed, mkosi
will automatically
generate a UEFI/BIOS bootable image. As mkosi
is a systemd project, it
will always build
UKIs
(Unified Kernel Images), except if the image is BIOS-only (since UKIs
cannot be used on BIOS). The initramfs is built like a regular image by
installing distribution packages and packaging them up in a CPIO archive
instead of a disk image. Specifically, we do not use dracut
,
mkinitcpio
or initramfs-tools
to generate the initramfs from the
host system. ukify
is used to assemble all the individual components
into a UKI.
If you don't want mkosi
to generate a bootable image, you can set
Bootable=no
to explicitly disable this logic.
Using mkosi for development
The main requirement for using mkosi for development is that we can build our source code against the image we're building and install it into that image. mkosi supports this via build scripts.
If a script named mkosi.build
(or mkosi.build.chroot
) is found,
we'll execute it as part of the build. Any files put by the build script
into $DESTDIR
will be installed into the image. Required build
dependencies can be installed using the BuildPackages=
setting. These
packages are installed into an overlay which is put on top of the image
when running the build script so the build packages are available when
running the build script but don't end up in the final image.
An example mkosi.build.chroot
script for a project using meson
could
look as follows:
#!/bin/sh
set -e
# Configure the build directory only on the first run, so that incremental
# builds in a persistent $BUILDDIR keep working.
[ -f "$BUILDDIR/build.ninja" ] || meson setup "$BUILDDIR" "$SRCDIR"
ninja -C "$BUILDDIR"
# $WITH_TESTS is set by mkosi to indicate whether tests should be run.
if [ "$WITH_TESTS" = "1" ]; then
    meson test -C "$BUILDDIR"
fi
# Files installed into $DESTDIR are copied into the image.
meson install -C "$BUILDDIR"
Now, every time the image is built, the build script will be executed and the results will be installed into the image.
The $BUILDDIR
environment variable points to a directory that can be
used as the build directory for build artifacts to allow for incremental
builds if the build system supports it.
Of course, downloading all packages from scratch every time and
re-installing them again every time the image is built is rather slow,
so mkosi
supports two modes of caching to speed things up.
The first caching mode caches all downloaded packages so they don't have
to be downloaded again on subsequent builds. Enabling this is as simple
as running mkdir mkosi.cache
.
The second mode of caching caches the image after all packages have been
installed but before running the build script. On subsequent builds,
mkosi
will copy the cache instead of reinstalling all packages from
scratch. This mode can be enabled using the Incremental=
setting.
While there is some rudimentary cache invalidation, the cache can also
forcibly be rebuilt by specifying -ff
on the command line instead of
-f
.
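For example, enabling both caching modes could look like this (a sketch; the --incremental switch corresponds to the Incremental= setting):
$ mkdir mkosi.cache          # package cache, picked up automatically
$ mkosi --incremental -f     # also cache the image state from before the build script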
Note that when running on a btrfs filesystem, mkosi
will automatically
use subvolumes for the cached images which can be snapshotted on
subsequent builds for even faster rebuilds. We'll also use reflinks to
do copy-on-write copies where possible.
With this setup, by running mkosi -f qemu
in the systemd repository,
it takes about 40 seconds to go from a source code change to a root
shell in a virtual machine running the latest systemd with your change
applied. This makes it very easy to test changes to systemd in a safe
environment without risk of breaking your host system.
Of course, while 40 seconds is not a very long time, it's still more
than we'd like, especially if all we're doing is modifying the kernel
command line. That's why we have the KernelCommandLineExtra=
option to
configure kernel command line options that are passed to the container
or virtual machine at runtime instead of being embedded into the image.
These extra kernel command line options are picked up when the image is
booted with qemu's direct kernel boot (using -append
), but also when
booting a disk image in UEFI mode (using SMBIOS). The same applies to
systemd credentials (using the Credentials=
setting). These settings
allow configuring the image without having to rebuild it, which means
that you only have to run mkosi qemu
or mkosi boot
again afterwards
to apply the new settings.
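As a sketch, and assuming these settings are also exposed on the command line as --kernel-command-line-extra= and --credential=, booting with extra runtime configuration could look like this (the specific values are just examples):
$ mkosi --kernel-command-line-extra="systemd.log_level=debug" \
        --credential="firstboot.locale=en_US.UTF-8" \
        qemu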
Building images without root privileges and loop devices
By using newuidmap
/newgidmap
and systemd-repart
, mkosi
is able to
build images without needing root privileges. As long as proper subuid
and subgid mappings are set up for your user in /etc/subuid
and
/etc/subgid
, you can run mkosi
as your regular user without having
to switch to root
.
Note that as of the writing of this blog post this only applies to the
build
and qemu
verbs. Booting the image in a systemd-nspawn
container with mkosi boot
still needs root privileges. We're hoping to
fix this in a future systemd release.
Regardless of whether you're running mkosi
with root or without root,
almost every tool we execute is invoked in a sandbox to isolate as much
of the build process from the host as possible. For example, /etc
and
/var
from the host are not available in this sandbox, to avoid host
configuration inadvertently affecting the build.
Because systemd-repart
can build disk images without loop devices,
mkosi
can run from almost any environment, including containers. All
that's needed is a UID range with 65536 UIDs available, either via
running as the root user or via /etc/subuid
and newuidmap
. In a
future systemd release, we're hoping to provide an alternative to
newuidmap
and /etc/subuid
to allow running mkosi
from all
containers, even those with only a single UID available.
Supporting older distributions
mkosi depends on very recent versions of various systemd tools (v254 or
newer). To support older distributions, we implemented so-called tools
trees. In short, mkosi
can first build a tools image for you that
contains all required tools to build the actual image. This can be
enabled by adding ToolsTree=default
to your mkosi configuration.
Building a tools image does not require a recent version of systemd.
In the systemd mkosi configuration, we automatically use a tools tree if we detect your distribution does not have the minimum required systemd version installed.
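For a one-off build, and assuming your mkosi version also exposes this setting on the command line as --tools-tree=, this could look like:
$ mkosi --tools-tree=default -f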
Configuring variants of the same image using profiles
Profiles can be defined in the mkosi.profiles/
directory. The profile
to use can be selected using the Profile=
setting (or --profile=
) on
the command line. A profile allows you to bundle various settings behind
a single recognizable name. Profiles can also be matched on if you want
to apply some settings only to a few profiles.
For example, you could have a bootable
profile that sets
Bootable=yes
, adds the linux
and systemd-boot
packages and
configures Format=disk
to end up with a bootable disk image when
passing --profile bootable
on the mkosi command line.
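Selecting such a profile could then look like this (a sketch, assuming a bootable profile has been defined in mkosi.profiles/ as described):
$ mkosi --profile bootable -f qemu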
Building system extension images
System extension images may, dynamically at runtime, extend the base system with an overlay containing additional files.
To build system extensions with mkosi
, we need a base image on top of
which we can build our extension.
To keep things manageable, we'll make use of mkosi
's support for
building multiple images so that we can build our base image and system
extension in one go.
We start by creating a temporary directory with a base configuration
file mkosi.conf
with some shared settings:
[Output]
OutputDirectory=mkosi.output
CacheDirectory=mkosi.cache
Now let's continue with the base image definition by writing the
following to mkosi.images/base/mkosi.conf
:
[Output]
Format=directory
[Content]
CleanPackageMetadata=no
Packages=systemd
udev
We use the directory
output format here instead of the disk
output
so that we can build our extension without needing root privileges.
Now that we have our base image, we can define a sysext that builds on
top of it by writing the following to mkosi.images/btrfs/mkosi.conf
:
[Config]
Dependencies=base
[Output]
Format=sysext
Overlay=yes
[Content]
BaseTrees=%O/base
Packages=btrfs-progs
BaseTrees= points to our base image, and Overlay=yes instructs mkosi to only package the files added on top of the base tree.
We can't sign the extension image without a key. We can generate one
by running mkosi genkey
which will generate files that are
automatically picked up when building the image.
Finally, you can build the base image and the extensions by running
mkosi -f
. You'll find btrfs.raw
in mkosi.output
which is the
extension image.
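To try the extension out on a compatible running system (the extension's os-release metadata has to match the host), a sketch using systemd-sysext could look like this:
$ sudo mkdir -p /run/extensions
$ sudo cp mkosi.output/btrfs.raw /run/extensions/
$ sudo systemd-sysext merge
$ systemd-sysext status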
Various other interesting features
- To sign any generated UKIs for secure boot, put your secure boot key and certificate in mkosi.key and mkosi.crt and enable the SecureBoot= setting. You can also run mkosi genkey to have mkosi generate a key and certificate itself.
- The Ephemeral= setting can be enabled to boot the image in an ephemeral copy that is thrown away when the container or virtual machine exits.
- ShimBootloader= and BiosBootloader= settings are available to configure shim and grub installation if needed.
- mkosi can boot directory trees in a virtual machine using virtiofsd. This is very useful for quickly rebuilding an image and booting it as the image does not have to be packed up as a disk image.
- ...
There are many more features that we won't go over in detail in this blog post. Learn more about those by reading the documentation.
Conclusion
I'll finish with a bunch of links to more information about mkosi
and
related tooling:
The Call for Participation (CFP) for All Systems Go! 2023 will close in three days, on 7th of July! We’d like to invite you to submit your proposals for consideration to the CFP submission site quickly!
All topics relevant to foundational open-source Linux technologies are welcome. In particular, however, we are looking for proposals on the topics listed below.
The CFP will close on July 7th, 2023. A response will be sent to all submitters on or before July 14th, 2023. The conference takes place in 🗺️ Berlin, Germany 🇩🇪 on Sept. 13-14th.
All Systems Go! 2023 is all about foundational open-source Linux technologies. We are primarily looking for deeply technical talks by and for developers, engineers and other technical roles.
We focus on the userspace side of things, so while kernel topics are welcome they must have clear, direct relevance to userspace. The following is a non-comprehensive list of topics encouraged for 2023 submissions:
- Image-Based Linux 🖼️
- Secure and Measured Boot 📏
- TPM-Based Local/Remote Attestation, Encryption, Authentication 🔑
- Low-level container executors and infrastructure ⚙️
- IoT, embedded and server Linux infrastructure
- Reproducible builds 🔧
- Package management, OS, container 📦, image delivery and updating
- Building Linux devices and applications 🏗️
- Low-level desktop 💻 technologies
- Networking 🌐
- System and service management 🚀
- Tracing and performance measuring 🔍
- IPC and RPC systems 🦜
- Security 🔐 and Sandboxing 🏖️
For more information please visit our conference website!
💽 Linux Boot Partitions and How to Set Them Up 🚀
Let’s have a look at how traditional Linux distributions set up /boot/ and the ESP, and how this could be improved.
How Linux distributions traditionally have been setting up their
“boot” file systems has been varying to some degree, but the most
common choice has been to have a separate partition mounted to
/boot/
. Usually the partition is formatted as a Linux file system
such as ext2/ext3/ext4. The partition contains the kernel images, the
initrd and various boot loader resources. Some distributions, like
Debian and Ubuntu, also store ancillary files associated with the
kernel here, such as kconfig
or System.map
. Such a traditional
boot partition is only defined within the context of the distribution,
and typically not immediately recognizable as such when looking just
at the partition table (i.e. it uses the generic Linux partition type
UUID).
With the arrival of UEFI a new partition relevant for boot appeared, the EFI System Partition (ESP). This partition is defined by the firmware environment, but typically accessed by Linux to install or update boot loaders. The choice of file system is not up to Linux, but effectively mandated by the UEFI specifications: vFAT. In theory it could be formatted as other file systems too. However, this would require the firmware to support file systems other than vFAT. This is rare and firmware specific though, as vFAT is the only file system mandated by the UEFI specification. In other words, vFAT is the only file system which is guaranteed to be universally supported.
There’s a major overlap of the type of the data typically stored in the ESP and in the traditional boot partition mentioned earlier: a variety of boot loader resources as well as kernels/initrds.
Unlike the traditional boot partition, the ESP is easily recognizable in the partition table via its GPT partition type UUID. The ESP is also a shared resource: all OSes installed on the same disk will share it and put their boot resources into it (as opposed to the traditional boot partition, of which there is one per installed Linux OS, and only that one OS will put resources there).
To summarize, the most common setup on typical Linux distributions is something like this:
Type | Linux Mount Point | File System Choice |
---|---|---|
Linux “Boot” Partition | /boot/ | Any Linux File System, typically ext2/ext3/ext4 |
ESP | /boot/efi/ | vFAT |
As mentioned, not all distributions or local installations agree on this. For example, it’s probably worth mentioning that some distributions decided to put kernels onto the root file system of the OS itself. For this setup to work the boot loader itself [sic!] must implement a non-trivial part of the storage stack. This may have to include RAID, storage drivers, networked storage, volume management, disk encryption, and Linux file systems. Leaving aside the conceptual argument that complex storage stacks don’t belong in boot loaders there are very practical problems with this approach. Reimplementing the Linux storage stack in all its combinations is a massive amount of work. It took decades to implement what we have on Linux now, and it will take a similar amount of work to catch up in the boot loader’s reimplementation. Moreover, there’s a political complication: some Linux file system communities made clear they have no interest in supporting a second file system implementation that is not maintained as part of the Linux kernel.
What’s interesting is that the /boot/efi/
mount point is nested
below the /boot/
mount point. This effectively means that to access
the ESP the Boot partition must exist and be mounted first. A system
with just an ESP and without a Boot partition hence doesn’t fit well
into the current model. The Boot partition will also have to carry an
empty “efi” directory that can be used as the inner mount point, and
serves no other purpose.
Given that the traditional boot partition and the ESP may carry similar data (i.e. boot loader resources, kernels, initrds) one may wonder why they are separate concepts. Historically, this was the easiest way to make the pre-UEFI way in which Linux systems were booted compatible with UEFI: conceptually, the ESP can be seen as just a minor addition to the status quo ante that way. Today, primarily two reasons remain:
-
Some distributions see a benefit in support for complex Linux file system concepts such as hardlinks, symlinks, SELinux labels/extended attributes and so on when storing boot loader resources. – I personally believe that making use of features in the boot file systems that the firmware environment cannot really make sense of is very clearly not advisable. The UEFI file system APIs know no symlinks, and what is SELinux to UEFI anyway? Moreover, putting more than the absolute minimum of simple data files into such file systems immediately raises questions about how to authenticate them comprehensively (including all fancy metadata) cryptographically on use (see below).
-
On real-life systems that ship with non-Linux OSes the ESP often comes pre-installed with a size too small to carry multiple Linux kernels and initrds. As growing the size of an existing ESP is problematic (for example, because there’s no space available immediately after the ESP, or because some low-quality firmware reacts badly to the ESP changing size) placing the kernel in a separate, secondary partition (i.e. the boot partition) circumvents these space issues.
File System Choices
We already mentioned that the ESP effectively has to be vFAT, as that is what UEFI (more or less) guarantees. The file system choice for the boot partition is not quite as restricted, but using arbitrary Linux file systems is not really an option either. The file system must be accessible by both the boot loader and the Linux OS. Hence only file systems that are available in both can be used. Note that such secondary implementations of Linux file systems in the boot environment – limited as they may be – are not typically welcomed or supported by the maintainers of the canonical file system implementation in the upstream Linux kernel. Modern file systems are notoriously complicated and delicate and simply don’t belong in boot loaders.
In a trusted boot world, the two file systems for the ESP and the
/boot/
partition should be considered untrusted: any code or
essential data read from them must be authenticated cryptographically
before use. And even more, the file system structures themselves are
also untrusted. The file system driver reading them must be careful
not to be exploitable by a rogue file system image. Effectively this
means a simple file system (for which a driver can be more easily
validated and reviewed) is generally a better choice than a complex
file system (Linux file system communities made it pretty clear that
robustness against rogue file system images is outside of their scope
and not what is being tested for).
Some approaches tried to address the fact that boot partitions are untrusted territory by encrypting them via a mechanism compatible to LUKS, and adding decryption capabilities to the boot loader so it can access it. This misses the point though, as encryption does not imply authentication, and only authentication is typically desired. The boot loader and kernel code are typically Open Source anyway, and hence there’s little value in attempting to keep secret what is already public knowledge. Moreover, encryption implies the existence of an encryption key. Physically typing in the decryption key on a keyboard might still be acceptable on desktop systems with a single human user in front, but outside of that scenario unlock via TPM, PKCS#11 or network services are typically required. And even on the desktop FIDO2 unlocking is probably the future. Implementing all the technologies these unlocking mechanisms require in the boot loader is not realistic, unless the boot loader shall become a full OS on its own as it would require subsystems for FIDO2, PKCS#11, USB, Bluetooth network, smart card access, and so on.
File System Access Patterns
Note that traditionally both mentioned partitions were read-only during most parts of the boot. Only later, once the OS is up, write access was required to implement OS or boot loader updates. In today’s world things have become a bit more complicated. A modern OS might want to require some limited write access already in the boot loader, to implement boot counting/boot assessment/automatic fallback (e.g., if the same kernel fails to boot 3 times, automatically revert to older kernel), or to maintain an early storage-based random seed. This means that even though the file system is mostly read-only, we need limited write access after all.
vFAT cannot compete with modern Linux file systems such as btrfs
when it comes to data safety guarantees. It’s not a journaled file
system, does not use CoW or any form of checksumming. This means when
used for the system boot process we need to be particularly careful
when accessing it, and in particular when making changes to it (i.e.,
trying to keep changes local to single sectors). It is essential to
use write patterns that minimize the chance of file system
corruption. Checking the file system (“fsck
”) before modification
(and probably also reading) is important, as is ensuring the file
system is put into a “clean” state as quickly as possible after each
modification.
Code quality of the firmware in typical systems is known to not always be great. When relying on the file system driver included in the firmware it’s hence a good idea to limit use to operations that have a better chance of being implemented correctly. For example, when writing from the UEFI environment it might be wise to avoid any operation that requires allocation algorithms, but instead focus on access patterns that only overwrite already written data, and do not require allocation of new space for the data.
Besides write access from the boot loader code (as described above) these file systems will require write access from the OS, to facilitate boot loader and kernel/initrd updates. These types of accesses are generally not fully random accesses (i.e., never partial file updates) but usually mean adding new files as whole, and removing old files as a whole. Existing files are typically not modified once created, though they might be replaced wholly by newer versions.
Boot Loader Updates
Note that the update cycle frequencies for boot loaders and for kernels/initrds are probably similar these days. While kernels are still vastly more complex than boot loaders, security issues are regularly found in both. In particular, boot loaders (through “shim” and similar components) carry certificate/keyring and denylist information, which typically requires frequent updates. Update cycles hence have to be expected regularly.
Boot Partition Discovery
The traditional boot partition was not recognizable by looking just at the partition table. On MBR systems it was directly referenced from the boot sector of the disk, and on EFI systems from information stored in the ESP. This is less than ideal since by losing this entrypoint information the system becomes unbootable. It’s typically a better, more robust idea to make boot partitions recognizable as such in the partition table directly. This is done for the ESP via the GPT partition type UUID. For traditional boot partitions this was not done though.
Current Situation Summary
Let’s try to summarize the above:
-
Currently, typical deployments use two distinct boot partitions, often using two distinct file system implementations
-
Firmware effectively dictates existence of the ESP, and the use of vFAT
-
In userspace view: the ESP mount is nested below the general Boot partition mount
-
Resources stored in both partitions are primarily kernel/initrd, and boot loader resources
-
The mandatory use of vFAT brings certain data safety challenges, as does quality of firmware file system driver code
-
During boot limited write access is needed, during OS runtime more comprehensive write access is needed (though still not fully random).
-
Less restricted but still limited write patterns from OS environment (only full file additions/updates/removals, during OS/boot loader updates)
-
Boot loaders should not implement complex storage stacks.
-
ESP can be auto-discovered from the partition table, traditional boot partition cannot.
-
ESP and the traditional boot partition are not cryptographically protected, either in structure or in contents. It is expected that loaded files are individually authenticated after being read.
-
The ESP is a shared resource — the traditional boot partition a resource specific to each installed Linux OS on the same disk.
How to Do it Better
Now that we have discussed many of the issues with the status quo ante, let’s see how we can do things better:
-
Two partitions for essentially the same data is a bad idea. Given they carry data very similar or identical in nature, the common case should be to have only one (but see below).
-
Two file system implementations are worse than one. Given that vFAT is more or less mandated by UEFI and the only format universally understood by all players, and thus has to be used anyway, it might as well be the only file system that is used.
-
Data safety is unnecessarily bad so far: both ESP and boot partition are continuously mounted from the OS, even though access is pretty restricted: outside of update cycles access is typically not required.
-
All partitions should be auto-discoverable/self-descriptive
-
The two partitions should not be exposed as nested mounts to userspace
To be more specific, here’s how I think a better way to set this all up would look:
-
Whenever possible, only have one boot partition, not two. On EFI systems, make it the ESP. On non-EFI systems use an XBOOTLDR partition instead (see below). Only have both in the case where a Linux OS is installed on a system that already contains an OS with an ESP that is too small to carry sufficient kernels/initrds. When a system contains an XBOOTLDR partition, put kernels/initrds on that, otherwise on the ESP.
-
Instead of the vaguely defined, traditional Linux “boot” partition use the XBOOTLDR partition type as defined by the Discoverable Partitions Specification. This ensures the partition is discoverable, and can be automatically mounted by things like
systemd-gpt-auto-generator
. Use XBOOTLDR only if you have to, i.e., when dealing with systems that lack UEFI (and where the ESP hence has no value) or to address the mentioned size issues with the ESP. Note that unlike the traditional boot partition the XBOOTLDR partition is a shared resource, i.e., shared between multiple parallel Linux OS installations on the same disk. Because of this it is typically wise to place a per-OS directory at the top of the XBOOTLDR file system to avoid conflicts. -
Use vFAT for both partitions, it’s the only thing universally understood among relevant firmwares and Linux. It’s simple enough to be useful for untrusted storage. Or to say this differently: writing a file system driver that is not easily vulnerable to rogue disk images is much easier for vFAT than for let’s say btrfs. – But the choice of vFAT implies some care needs to be taken to address the data safety issues it brings, see below.
-
Mount the two partitions via the “automount” logic. For example, via systemd’s automount units, with a very short idle time-out (one second or so). This improves data safety immensely, as the file systems will remain mounted (and thus possibly in a “dirty” state) only for very short periods of time, when they are actually accessed – and the fact that they are not continuously mounted goes mostly unnoticed by applications, since the file system paths remain available the whole time. Given that the backing file system (vFAT) has poor data safety properties, it is essential to keep the time window during which the file system is in an unclean state as short as possible. In fact, this is what the aforementioned
systemd-gpt-auto-generator
logic actually does by default. -
Whenever mounting one of the two partitions, do a file system check (fsck; in fact this is also what
systemd-gpt-auto-generator
does by default, hooked into the automount logic, to run on first access). This ensures that even if the file system is in an unclean state it is restored to be clean when needed, i.e., on first access. -
Do not mount the two partitions nested, i.e., no more
/boot/efi/
. First of all, as mentioned above, it should be possible (and is desirable) to only have one of the two. Hence it is simply a bad idea to require the other as well, just to be able to mount it. More importantly though, by nesting them, automounting is complicated, as it is necessary to trigger the first automount to establish the second automount, which defeats the point of automounting them in the first place. Use the two distinct mount points/efi/
(for the ESP) and/boot/
(for XBOOTLDR) instead. You might have guessed, but that too is whatsystemd-gpt-auto-generator
does by default. -
When making additions or updates to ESP/XBOOTLDR from the OS, make sure to create a file and write it in full, then syncfs() the whole file system, then rename to give it its final name, and syncfs() again. Do the same when removing files. (See the sketch after this list.)
-
When writing from the boot loader environment/UEFI to ESP/XBOOTLDR, do not append to files or create new files. Instead overwrite already allocated file contents (for example to maintain a random seed file) or rename already allocated files to include information in the file name (and ideally do not increase the file name in length; for example to maintain boot counters).
-
Consider adopting UKIs, which minimize the number of files that need to be updated on the ESP/XBOOTLDR during OS/kernel updates (ideally down to 1)
-
Consider adopting
systemd-boot
, which minimizes the number of files that need to be updated on boot loader updates (ideally down to 1) -
Consider removing any mention of ESP/XBOOTLDR from
/etc/fstab
, and just letsystemd-gpt-auto-generator
do its thing. -
Stop implementing file systems, complex storage, disk encryption, … in your boot loader.
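To illustrate the ESP/XBOOTLDR update pattern from the OS recommended in the list above, here is a minimal sketch using coreutils' sync -f (which issues syncfs() for the file system containing the given path); the file names are hypothetical:
$ cp mykernel.efi /efi/EFI/Linux/mykernel.efi.tmp   # write the new file in full
$ sync -f /efi                                      # syncfs() the whole file system
$ mv /efi/EFI/Linux/mykernel.efi.tmp /efi/EFI/Linux/mykernel.efi
$ sync -f /efi                                      # and again after the rename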
By following the recommendations above, you gain:
-
Simplicity: only one file system implementation, typically only one partition and mount point
-
Robust auto-discovery of all partitions, no need to even configure
/etc/fstab
-
Data safety guarantees as good as possible, given the circumstances
To summarize this in a table:
Type | Linux Mount Point | File System Choice | Automount |
---|---|---|---|
ESP | /efi/ | vFAT | yes |
XBOOTLDR | /boot/ | vFAT | yes |
A note regarding modern boot loaders that implement the Boot Loader Specification: both partitions are explicitly listed in the specification as sources for both Type #1 and Type #2 boot menu entries. Hence, if you use such a modern boot loader (e.g. systemd-boot) these two partitions are the preferred location for boot loader resources, kernels and initrds anyway.
Addendum: You got RAID?
You might wonder, what about RAID setups and the ESP? This comes up regularly in discussions: how to set up the ESP so that (software) RAID1 (mirroring) can be done on the ESP. Long story short: I’d strongly advise against using RAID on the ESP. Firmware typically doesn’t have native RAID support, and given that firmware and boot loader can write to the file systems involved, any attempt to use software RAID on them will mean that a boot cycle might corrupt the RAID sync, and immediately requires a re-synchronization after boot. If RAID1 backing for the ESP is really necessary, the only way to implement that safely would be to implement this as a driver for UEFI – but that creates certain bootstrapping issues (i.e., where to place the driver if not the ESP, a file system the driver is supposed to be used for), and also reimplements a considerable component of the OS storage stack in firmware mode, which seems problematic.
So what to do instead? My recommendation would be to solve this via userspace tooling. If redundant disk support shall be implemented for the ESP, then create separate ESPs on all disks, and synchronize them on the file system level instead of the block level. Or in other words, the tools that install/update/manage kernels or boot loaders should be taught to maintain multiple ESPs instead of one. Copy the kernels/boot loader files to all of them, and remove them from all of them. Under the assumption that the goal of RAID is a more reliable system this should be the best way to achieve that, as it doesn’t pretend the firmware could do things it actually cannot do. Moreover it minimizes the complexity of the boot loader, shifting the syncing logic to userspace, where it’s typically easier to get right.
Addendum: Networked Boot
The discussion above focuses on booting up from a local disk. When thinking about networked boot I think two scenarios are particularly relevant:
-
PXE-style network booting. I think in this mode of operation focus should be on directly booting a single UKI image instead of a boot loader. This sidesteps the whole issue of maintaining any boot partition at all, and simplifies the boot process greatly. In scenarios where this is not sufficient, and an interactive boot menu or other boot loader features are desired, it might be a good idea to take inspiration from the UKI concept, and build a single boot loader EFI binary (such as systemd-boot), and include the UKIs for the boot menu items and other resources inside it via PE sections. Or in other words, build a single boot loader binary that is “supercharged” and contains all auxiliary resources in its own PE sections. (Note: this does not exist, it’s an idea I intend to explore with systemd-boot). Benefit: a single file has to be downloaded via PXE/TFTP, not more. Disadvantage: unused resources are downloaded unnecessarily. Either way: in this context there is no local storage, and the ESP/XBOOTLDR discussion above is without relevance.
-
Initrd-style network booting. In this scenario the boot loader and kernel/initrd (better: UKI) are available on a local disk. The initrd then configures the network and transitions to a network share or file system on a network block device for the root file system. In this case the discussion above applies, and in fact the ESP or XBOOTLDR partition would be the only partition available locally on disk.
And this is all I have for today.
🔐 Brave New Trusted Boot World 🚀
This document looks at the boot process of general purpose Linux distributions. It covers the status quo and how we envision Linux boot to work in the future with a focus on robustness and simplicity.
This document will assume that the reader has comprehensive
familiarity with TPM 2.0 security chips and their capabilities (e.g.,
PCRs, measurements, SRK), boot loaders, the shim
binary, Linux,
initrds, UEFI Firmware, PE binaries, and SecureBoot.
Problem Description
Status quo ante of the boot logic on typical Linux distributions:
-
Most popular Linux distributions generate
initrds
locally, and they are unsigned, thus not protected through SecureBoot (since that would require local SecureBoot key enrollment, which is generally not done), nor TPM PCRs. -
Boot chain is typically Firmware →
shim
→grub
→ Linux kernel →initrd
(dracut
or similar) → root file system -
Firmware’s UEFI SecureBoot protects shim, shim’s key management protects grub and kernel. No code signing protects initrd. initrd acquires the key for encrypted root fs from the user (or TPM/FIDO2/PKCS11).
-
shim
/grub
/kernel is measured into TPM PCR 4, among other stuff -
EFI TPM event log reports measured data into TPM PCRs, and can be used to reconstruct and validate state of TPM PCRs from the used resources.
-
No userspace components are typically measured, except for what IMA measures
-
New kernels require locally generating new boot loader scripts and generating a new initrd each time. OS updates thus mean fragile generation of multiple resources and copying multiple files into the boot partition.
Problems with the status quo ante:
-
initrd typically unlocks root file system encryption, but is not protected whatsoever, and trivial to attack and modify offline
-
OS updates are brittle: PCR values of grub are very hard to pre-calculate, as grub measures chosen control flow path, not just code images. PCR values vary wildly, and OS provided resources are not measured into separate PCRs. Grub’s PCR measurements might be useful up to a point to reason about the boot after the fact, for the most basic remote attestation purposes, but useless for calculating them ahead of time during the OS build process (which would be desirable to be able to bind secrets to future expected PCR state, for example to bind secrets to an OS in a way that it remain accessible even after that OS is updated).
-
Updates of a boot loader are not robust, require multi-file updates of ESP and boot partition, and regeneration of boot scripts
-
No rollback protection (no way to cryptographically invalidate access to TPM-bound secrets on OS updates)
-
Remote attestation of running software is needlessly complex since initrds are generated locally and thus basically are guaranteed to vary on each system.
-
Locking resources maintained by arbitrary user apps to TPM state (PCRs) is not realistic for general purpose systems, since PCRs will change on every OS update, and there’s no mechanism to re-enroll each such resource before every OS update, and remove the old enrollment after the update.
-
There is no concept to cryptographically invalidate/revoke secrets for an older OS version once updated to a new OS version. An attacker thus can always access the secrets generated on old OSes if they manage to exploit an old version of the OS — even if a newer version already has been deployed.
Goals of the new design:
-
Provide a fully signed execution path from firmware to userspace, no exceptions
-
Provide a fully measured execution path from firmware to userspace, no exceptions
-
Separate out TPM PCRs assignments, by “owner” of measured resources, so that resources can be bound to them in a fine-grained fashion.
-
Allow easy pre-calculation of expected PCR values based on booted kernel/initrd, configuration, local identity of the system
-
Rollback protection
-
Simple & robust updates: one updated file per concept
-
Updates without requiring re-enrollment/local preparation of the TPM-protected resources (no more “brittle” PCR hashes that must be propagated into every TPM-protected resource on each OS update)
-
System ready for easy remote attestation, to prove validity of booted OS, configuration and local identity
-
Ability to bind secrets to specific phases of the boot, e.g. the root fs encryption key should be retrievable from the TPM only in the initrd, but not after the host transitioned into the root fs.
-
Reasonably secure, automatic, unattended unlocking of disk encryption secrets should be possible.
-
“Democratize” use of PCR policies by defining PCR register meanings, and making binding to them robust against updates, so that external projects can safely and securely bind their own data to them (or use them for remote attestation) without risking breakage whenever the OS is updated.
-
Build around TPM 2.0 (with graceful fallback for TPM-less systems if desired, but TPM 1.2 support is out of scope)
Considered attack scenarios and considerations:
-
Evil Maid: neither online nor offline (i.e. “at rest”) should physical access to a storage device enable an attacker to read the user’s plaintext data on disk (confidentiality); neither online nor offline should physical access to a storage device allow undetected modification/backdooring of user data or OS (integrity), or exfiltration of secrets.
-
TPMs are assumed to be reasonably “secure”, i.e. can securely store/encrypt secrets. Communication to TPM is not “secure” though and must be protected on the wire.
-
Similarly, the CPU is assumed to be reasonably “secure”
-
SecureBoot is assumed to be reasonably “secure” to permit validated boot up to and including shim+boot loader+kernel (but see discussion below)
-
All user data must be encrypted and authenticated. All vendor and administrator data must be authenticated.
-
It is assumed all software involved regularly contains vulnerabilities and requires frequent updates to address them, plus regular revocation of old versions.
-
It is further assumed that key material used for signing code by the OS vendor can reasonably be kept secure (via use of HSM, and similar, where secret key information never leaves the signing hardware) and does not require frequent roll-over.
Proposed Construction
Central to the proposed design is the concept of a Unified Kernel Image (UKI). These UKIs are the combination of a Linux kernel image, an initrd, a UEFI boot stub program (and further resources, see below) into one single UEFI PE file that can either be directly invoked by the UEFI firmware (which is useful in particular in some cloud/Confidential Computing environments) or through a boot loader (which is generally useful to implement support for multiple kernel versions, with interactive or automatic selection of image to boot into, potentially with automatic fallback management to increase robustness).
UKI Components
Specifically, UKIs typically consist of the following resources:
-
A UEFI boot stub that is a small piece of code still running in UEFI mode and that transitions into the Linux kernel included in the UKI (e.g., as implemented in
sd-stub
, see below) -
The Linux kernel to boot in the
.linux
PE section -
The initrd that the kernel shall unpack and invoke in the
.initrd
PE section -
A kernel command line string, in the
.cmdline
PE section -
Optionally, information describing the OS this kernel is intended for, in the
.osrel
PE section (derived from/etc/os-release
of the booted OS). This is useful for presentation of the UKI in the boot loader menu, and ordering it against other entries, using the included version information. -
Optionally, information describing kernel release information (i.e.
uname -r
output) in the.uname
PE section. This is also useful for presentation of the UKI in the boot loader menu, and ordering it against other entries. -
Optionally, a boot splash to bring to screen before transitioning into the Linux kernel in the
.splash
PE section -
Optionally, a compiled Devicetree database file, for systems which need it, in the
.dtb
PE section -
Optionally, the public key in PEM format that matches the signatures of the
.pcrsig
PE section (see below), in a.pcrpkey
PE section. -
Optionally, a JSON file encoding expected PCR 11 hash values seen from userspace once the UKI has booted up, along with signatures of these expected PCR 11 hash values, matching a specific public key in the
.pcrsig
PE section. (Note: we use plural for “values” and “signatures” here, as this JSON file will typically carry a separate value and signature for each PCR bank for PCR 11, i.e. one pair of value and signature for the SHA1 bank, and another pair for the SHA256 bank, and so on. This ensures when enrolling or unlocking a TPM-bound secret we’ll always have a signature around matching the banks available locally (after all, which banks the local hardware supports is up to the hardware). For the sake of simplifying this already overly complex topic, we’ll pretend in the rest of the text there was only one PCR signature per UKI we have to care about, even if this is not actually the case.)
Given UKIs are regular UEFI PE files, they can thus be signed as one
for SecureBoot, protecting all of the individual resources listed
above at once, and their combination. Standard Linux tools such as
sbsigntool
and pesign
can be used to sign
UKI files.
UKIs wrap all of the above data in a single file, hence all of the above components can be updated in one go through single file atomic updates, which is useful given that the primary expected storage place for these UKIs is the UEFI System Partition (ESP), which is a vFAT file system, with its limited data safety guarantees.
UKIs can be generated via a single, relatively simple objcopy invocation, that glues the listed components together, generating one PE binary that then can be signed for SecureBoot. (For details on building these, see below.)
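As a sketch, with a recent systemd whose ukify provides the build verb, assembling and SecureBoot-signing a UKI could look like this (file names and the key/certificate paths are illustrative):
$ ukify build \
    --linux=vmlinuz \
    --initrd=initrd.cpio.zst \
    --cmdline="root=LABEL=root ro quiet" \
    --output=mykernel.efi
$ sbsign --key db.key --cert db.crt --output mykernel.signed.efi mykernel.efi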
Note that the primary location to place UKIs in is the EFI System Partition (or an otherwise firmware accessible file system). This typically means a VFAT file system of some form. Hence an effective UKI size limit of 4GiB is in place, as that’s the largest file size a FAT32 file system supports.
Basic UEFI Stub Execution Flow
The mentioned UEFI stub program will execute the following operations
in UEFI mode before transitioning into the Linux kernel that is
included in its .linux
PE section:
-
The PE sections listed are searched for in the invoked UKI the stub is part of, and superficially validated (i.e. general file format is in order).
-
All PE sections listed above of the invoked UKI are measured into TPM PCR 11. This TPM PCR is expected to be all zeroes before the UKI initializes. Pre-calculation is thus very straight-forward if the resources included in the PE image are known. (Note: as a single exception the
.pcrsig
PE section is excluded from this measurement, as it is supposed to carry the expected result of the measurement, and thus cannot also be input to it, see below for further details about this section.) -
If the
.splash
PE section is included in the UKI it is brought onto the screen -
If the
.dtb
PE section is included in the UKI it is activated using the Devicetree UEFI “fix-up” protocol -
If a command line was passed from the boot loader to the UKI executable it is discarded if SecureBoot is enabled and the command line from the
.cmdline
used. If SecureBoot is disabled and a command line was passed it is used in place of the one from.cmdline
. Either way the used command line is measured into TPM PCR 12. (This of course removes any flexibility of control of the kernel command line of the local user. In many scenarios this is probably considered beneficial, but in others it is not, and some flexibility might be desired. Thus, this concept probably needs to be extended sooner or later, to allow more flexible kernel command line policies to be enforced via definitions embedded into the UKI. For example: allowing definition of multiple kernel command lines the user/boot menu can select one from; allowing additional allowlisted parameters to be specified; or even optionally allowing any verification of the kernel command line to be turned off even in SecureBoot mode. It would then be up to the builder of the UKI to decide on the policy of the kernel command line.) -
It will set a couple of volatile EFI variables to inform userspace about executed TPM PCR measurements (and which PCR registers were used), and other execution properties. (For example: the EFI variable
StubPcrKernelImage
in the4a67b082-0a4c-41cf-b6c7-440b29bb8c4f
vendor namespace indicates the PCR register used for the UKI measurement, i.e. the value “11”). -
An initrd cpio archive is dynamically synthesized from the
.pcrsig
and.pcrpkey
PE section data (this is later passed to the invoked Linux kernel as additional initrd, to be overlaid with the main initrd from the .initrd section). These files are later available in the/.extra/
directory in the initrd context. -
The Linux kernel from the
.linux
PE section is invoked with with a combined initrd that is composed from the blob from the.initrd
PE section, the dynamically generated initrd containing the.pcrsig
and.pcrpkey
PE sections, and possibly some additional components like sysexts or syscfgs.
TPM PCR Assignments
In the construction above we take possession of two PCR registers previously unused on generic Linux distributions:
-
TPM PCR 11 shall contain measurements of all components of the UKI (with exception of the
.pcrsig
PE section, see above). This PCR will also contain measurements of the boot phase once userspace takes over (see below). -
TPM PCR 12 shall contain measurements of the used kernel command line. (Plus potentially other forms of parameterization/configuration passed into the UKI, not discussed in this document)
On top of that we intend to define two more PCR registers like this:
-
TPM PCR 15 shall contain measurements of the volume encryption key of the root file system of the OS.
-
[TPM PCR 13 shall contain measurements of additional extension images for the initrd, to enable a modularized initrd – not covered by this document]
(See the Linux TPM PCR Registry for an overview how these four PCRs fit into the list of Linux PCR assignments.)
For all four PCRs the assumption is that they are zero before the UKI initializes, and only the data that the UKI and the OS measure into them is included. This makes pre-calculating them straightforward: given a specific set of UKI components, it is immediately clear what PCR values can be expected in PCR 11 once the UKI booted up. Given a kernel command line (and other parameterization/configuration) it is clear what PCR values are expected in PCR 12.
Note that these four PCRs are defined by the conceptual “owner” of the resources measured into them. PCR 11 only contains resources the OS vendor controls. Thus it is straight-forward for the OS vendor to pre-calculate and then cryptographically sign the expected values for PCR 11. The PCR 11 values will be identical on all systems that run the same version of the UKI. PCR 12 only contains resources the administrator controls, thus the administrator can pre-calculate PCR values, and they will be correct on all instances of the OS that use the same parameters/configuration. PCR 15 only contains resources inherently local to the local system, i.e. the cryptographic key material that encrypts the root file system of the OS.
Separating out these three roles does not imply they actually need to be separate when used. However, the assumption is that in many popular environments these three roles should be separate.
By separating out these PCRs by the owner’s role, it becomes straightforward to remotely attest, individually, on the software that runs on a node (PCR 11), the configuration it uses (PCR 12) or the identity of the system (PCR 15). Moreover, it becomes straightforward to robustly and securely encrypt data so that it can only be unlocked on a specific set of systems that share the same OS, or the same configuration, or have a specific identity – or a combination thereof.
Note that the mentioned PCRs are so far not typically used on generic Linux-based operating systems, to our knowledge. Windows uses them, but given that Windows and Linux should typically not be included in the same boot process this should be unproblematic, as Windows’ use of these PCRs should thus not conflict with ours.
To summarize:
PCR | Purpose | Owner | Expected Value before UKI boot | Pre-Calculable |
---|---|---|---|---|
11 | Measurement of UKI components and boot phases | OS Vendor | Zero | Yes (at UKI build time) |
12 | Measurement of kernel command line, additional kernel runtime configuration such as systemd credentials, systemd syscfg images | Administrator | Zero | Yes (when system configuration is assembled) |
13 | System extension images for the initrd (and possibly more) | (Administrator) | Zero | Yes |
15 | Measurement of root file system volume key (possibly later more: measurement of root file system UUIDs and labels and of the machine ID /etc/machine-id) | Local System | Zero | Yes (after first boot, once all such IDs are determined) |
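To inspect what actually ended up in these PCRs on a running system, the registers can simply be dumped. A small sketch using the tpm2_pcrread tool from tpm2-tools (bank and selection shown are illustrative):

```
# Dump the SHA-256 bank of the PCRs assigned above: 11 (OS vendor),
# 12 (administrator), 13 (initrd extensions), 15 (local system).
tpm2_pcrread sha256:11,12,13,15
```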
Signature Keys
In the model above in particular two sets of private/public key pairs are relevant:
-
The SecureBoot key to sign the UKI PE executable with. This controls permissible choices of OS/kernel
-
The key to sign the expected PCR 11 values with. Signatures made with this key will end up in the
.pcrsig
PE section. The public key part will end up in the .pcrpkey
PE section.
Typically the key pair for the PCR 11 signatures should be chosen with a narrow focus, reused for exactly one specific OS (e.g. “Fedora Desktop Edition”) and the series of UKIs that belong to it (all the way through all the versions of the OS). The SecureBoot signature key can be used with a broader focus, if desired. By keeping the PCR 11 signature key narrow in focus one can ensure that secrets bound to the signature key can only be unlocked on the narrow set of UKIs desired.
TPM Policy Use
Depending on the intended access policy to a resource protected by the TPM, one or more of the PCRs described above should be selected to bind TPM policy to.
For example, the root file system encryption key should likely be bound to TPM PCR 11, so that it can only be unlocked if a specific set of UKIs is booted (it should then, once acquired, be measured into PCR 15, as discussed above, so that later TPM objects can be bound to it, further down the chain). With the model described above this is reasonably straight-forward to do:
-
When userspace wants to bind disk encryption to a specific series of UKIs (“enrollment”), it looks for the public key passed to the
initrd
in the /.extra/
directory (which as discussed above originates in the .pcrpkey
PE section of the UKI). The relevant userspace component (e.g. systemd
) is then responsible for generating a random key to be used as the symmetric encryption key for the storage volume (let’s call it the disk encryption key, DEK, here). The TPM is then used to encrypt (“seal”) the DEK with its internal Storage Root Key (TPM SRK). A TPM2 policy is bound to the encrypted DEK. The policy enforces that the DEK may only be decrypted if a valid signature is provided that matches the state of PCR 11 and the public key provided in the /.extra/
directory of the initrd
. The plaintext DEK is passed to the kernel to implement disk encryption (e.g. LUKS/dm-crypt). (Alternatively, hardware disk encryption can be used too, i.e. Intel MKTME, AMD SME or even OPAL, all of which are outside of the scope of this document.) The TPM-encrypted version of the DEK which the TPM returned is written to the encrypted volume’s superblock. -
When userspace wants to unlock disk encryption on a specific UKI, it looks for the signature data passed to the initrd in the
/.extra/
directory (which as discussed above originates in the .pcrsig
PE section of the UKI). It then reads the encrypted version of the DEK from the superblock of the encrypted volume. The signature and the encrypted DEK are then passed to the TPM. The TPM then checks if the current PCR 11 state matches the supplied signature from the .pcrsig
section and the public key used during enrollment. If all checks out it decrypts (“unseals”) the DEK and passes it back to the OS, where it is then passed to the kernel which implements the symmetric part of disk encryption.
Note that in this scheme the encrypted volume’s DEK is not bound to specific literal PCR hash values, but to a public key which is expected to sign PCR hash values.
Also note that the state of PCR 11 only matters during unlocking. It is not used or checked when enrolling.
In this scenario:
-
Input to the TPM part of the enrollment process are the TPM’s internal SRK, the plaintext DEK provided by the OS, and the public key later used for signing expected PCR values, also provided by the OS. – Output is the encrypted (“sealed”) DEK.
-
Input to the TPM part of the unlocking process are the TPM’s internal SRK, the current TPM PCR 11 values, the public key used during enrollment, a signature that matches both these PCR values and the public key, and the encrypted DEK. – Output is the plaintext (“unsealed”) DEK.
Note that sealing/unsealing is done entirely on the TPM chip; the host OS just provides the inputs (well, only the inputs that the TPM chip doesn’t know already on its own), and receives the outputs. With the exception of the plaintext DEK, none of the inputs/outputs are sensitive, and can safely be stored in the open. On the wire the plaintext DEK is protected via TPM parameter encryption (not discussed in detail here because, though important, it is not in scope for this document).
TPM PCR 11 is the most important of the mentioned PCRs, and its use is thus explained in detail here. The other mentioned PCRs can be used in similar ways, but signatures/public keys must be provided via other means.
This scheme builds on the functionality Linux’ LUKS2 provides, i.e. key management supporting multiple slots, and the ability to embed arbitrary metadata in the encrypted volume’s superblock. Note that this means the TPM2-based logic explained here doesn’t have to be the only way to unlock an encrypted volume. For example, in many setups it is wise to enroll both this TPM-based mechanism and an additional “recovery key” (i.e. a high-entropy computer generated passphrase the user can provide manually in case they lose access to the TPM and need to access their data), of which either can be used to unlock the volume.
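To illustrate, here's a minimal enrollment sketch using systemd-cryptenroll, assuming a LUKS2 volume under an illustrative partition label, and assuming the public key from the UKI's .pcrpkey section has been written out to a PEM file (the paths used are hypothetical):

```
# Bind the volume to PCR 11 via the public key that signs the expected
# PCR 11 values, i.e. the key shipped in the UKI's .pcrpkey section.
systemd-cryptenroll \
    --tpm2-device=auto \
    --tpm2-public-key=/path/to/tpm2-pcr-public-key.pem \
    --tpm2-public-key-pcrs=11 \
    /dev/disk/by-partlabel/root-luks

# Additionally enroll a high-entropy recovery key as fallback, as
# suggested above.
systemd-cryptenroll --recovery-key /dev/disk/by-partlabel/root-luks

# At boot, unlocking can then be requested via /etc/crypttab, e.g.:
#   root  /dev/disk/by-partlabel/root-luks  -  tpm2-device=auto
```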
Boot Phases
Secrets needed during boot-up (such as the root file system encryption key) should typically not be accessible anymore afterwards, to protect them from access if a system is attacked during runtime. To implement this the scheme above is extended in one way: at certain milestones of the boot process additional fixed “words” should be measured into PCR 11. These milestones are placed at conceptual security boundaries, i.e. whenever code transitions from a higher privileged context to a less privileged context.
Specifically:
-
When the initrd initializes (“
initrd-enter
”) -
When the initrd transitions into the root file system (“
initrd-leave
”) -
When the early boot phase of the OS on the root file system has completed, i.e. all storage and file systems have been set up and mounted, immediately before regular services are started (“
sysinit
”) -
When the OS on the root file system completed the boot process far enough to allow unprivileged users to log in (“
complete
”) -
When the OS begins shut down (“
shutdown
”) -
When the service manager is mostly finished with shutting down and is about to pass control to the final phase of the shutdown logic (“
final
”)
By measuring these additional words into PCR 11 the distinct phases of the boot process can be distinguished in a relatively straight-forward fashion and the expected PCR values in each phase can be determined.
The phases are measured into PCR 11 (as opposed to some other PCR) mostly because available PCRs are scarce, and the boot phases defined are typically specific to a chosen OS, and hence fit well with the other data measured into PCR 11: the UKI which is also specific to the OS. The OS vendor generates both the UKI and defines the boot phases, and thus can safely and reliably pre-calculate/sign the expected PCR values for each phase of the boot.
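The systemd-pcrphase component mentioned later in this document is what performs these phase measurements in practice. Conceptually, each milestone boils down to an invocation roughly like the following (the binary path and phase word are shown for illustration only; the shipped unit files take care of the actual invocations at the right moments):

```
# Extend TPM PCR 11 with the literal phase word, marking the boot
# milestone that has just been reached.
/usr/lib/systemd/systemd-pcrphase sysinit
```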
Revocation/Rollback Protection
In order to secure secrets stored at rest, in particular in environments where unattended decryption shall be possible, it is essential that an attacker cannot use old, known-buggy – but properly signed – versions of software to access them.
Specifically, if disk encryption is bound to an OS vendor (via UKIs that include expected PCR values, signed by the vendor’s public key) there must be a mechanism to lock out old versions of the OS or UKI from accessing TPM based secrets once it is determined that the old version is vulnerable.
To implement this we propose making use of one of the “counters” TPM 2.0 devices provide: integer registers that are persistent in the TPM and can only be increased on request of the OS, but never be decreased. When sealing resources to the TPM, a policy may be declared to the TPM that restricts how the resources can later be unlocked: here we use one that requires that along with the expected PCR values (as discussed above) a counter integer range is provided to the TPM chip, along with a suitable signature covering both, matching the public key provided during sealing. The sealing/unsealing mechanism described above is thus extended: the signature passed to the TPM during unsealing now covers both the expected PCR values and the expected counter range. To be able to use a signature associated with an UKI provided by the vendor to unseal a resource, the counter thus must be at least increased to the lower end of the range the signature is for. By doing so the ability is lost to unseal the resource for signatures associated with older versions of the UKI, because their upper end of the range disables access once the counter has been increased far enough. By carefully choosing the upper and lower end of the counter range whenever the PCR values for an UKI shall be signed it is thus possible to ensure that updates can invalidate prior versions’ access to resources. By placing some space between the upper and lower end of the range it is possible to allow a controlled level of fallback UKI support, with clearly defined milestones where fallback to older versions of an UKI is not permitted anymore.
Example: a hypothetical distribution FooOS releases a regular stream of UKI kernels 5.1, 5.2, 5.3, … It signs the expected PCR values for these kernels with a key pair it maintains in a HSM. When signing UKI 5.1 it includes information directed at the TPM in the signed data declaring that the TPM counter must be above 100, and below 120, in order for the signature to be used. Thus, when the UKI is booted up and used for unlocking an encrypted volume the unlocking code must first increase the counter to 100 if needed, as the TPM will otherwise refuse unlocking the volume. The next release of the UKI, i.e. UKI 5.2 is a feature release, i.e. reverting back to the old kernel locally is acceptable. It thus does not increase the lower bound, but it increases the upper bound for the counter in the signature payload, thus encoding a valid range 100…121 in the signed payload. Now a major security vulnerability is discovered in UKI 5.1. A new UKI 5.3 is prepared that fixes this issue. It is now essential that UKI 5.1 can no longer be used to unlock the TPM secrets. Thus UKI 5.3 will bump the lower bound to 121, and increase the upper bound by one, thus allowing a range 121…122. Or in other words: for each new UKI release the signed data shall include a counter range declaration where the upper bound is increased by one. The lower range is left as-is between releases, except when an old version shall be cut off, in which case it is bumped to one above the upper bound used in that release.
UKI Generation
As mentioned earlier, UKIs are the combination of various resources
into one PE file. For most of these individual components there are
pre-existing tools to generate the components. For example the
included kernel image can be generated with the usual Linux kernel
build system. The initrd included in the UKI can be generated with
existing tools such as dracut
and similar. Once the basic components
(.linux
, .initrd
, .cmdline
, .splash
, .dtb
, .osrel
,
.uname
) have been acquired the combination process works roughly
like this:
-
The expected PCR 11 hashes (and signatures for them) for the UKI are calculated. The tool for that takes all basic UKI components and a signing key as input, and generates a JSON object as output that includes both the literal expected PCR hash values and a signature for them. (For all selected TPM2 banks)
-
The EFI stub binary is now combined with the basic components, the generated JSON PCR signature object from the first step (in the
.pcrsig
section) and the public key for it (in the .pcrpkey
section). This is done via a simple “objcopy
” invocation resulting in a single UKI PE binary. -
The resulting EFI PE binary is then signed for SecureBoot (via a tool such as
sbsign
or similar).
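A rough sketch of that combination process, with illustrative file names, key paths and section offsets (the precise systemd-measure switches and the VMA offsets are assumptions and may need adjustment for a real build):

```
# 1. Pre-calculate and sign the expected PCR 11 values for the UKI
#    components; the output is a JSON object with hashes + signatures.
systemd-measure sign \
    --linux=vmlinuz \
    --osrel=os-release \
    --cmdline=cmdline.txt \
    --initrd=initrd.cpio \
    --bank=sha256 \
    --private-key=pcr-private-key.pem \
    --public-key=pcr-public-key.pem > uki-pcrsig.json

# 2. Glue stub + components + signature + public key into one PE image
#    (section offsets are illustrative and must not overlap).
objcopy \
    --add-section .osrel=os-release           --change-section-vma .osrel=0x20000 \
    --add-section .cmdline=cmdline.txt        --change-section-vma .cmdline=0x30000 \
    --add-section .pcrpkey=pcr-public-key.pem --change-section-vma .pcrpkey=0x40000 \
    --add-section .pcrsig=uki-pcrsig.json     --change-section-vma .pcrsig=0x50000 \
    --add-section .linux=vmlinuz              --change-section-vma .linux=0x2000000 \
    --add-section .initrd=initrd.cpio         --change-section-vma .initrd=0x3000000 \
    linuxx64.efi.stub fooos.efi

# 3. Sign the resulting PE binary for SecureBoot.
sbsign --key secureboot.key --cert secureboot.crt \
    --output fooos.efi.signed fooos.efi
```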
Note that the UKI model implies pre-built initrds. How to generate these (and securely extend and parameterize them) is outside of the scope of this document, but a related document will be provided highlighting these concepts.
Protection Coverage of SecureBoot Signing and PCRs
The scheme discussed here touches both SecureBoot code signing and TPM PCR measurements. These two distinct mechanisms cover separate parts of the boot process.
Specifically:
-
Firmware/Shim SecureBoot signing covers bootloader and UKI
-
TPM PCR 11 covers the UKI components and boot phase
-
TPM PCR 12 covers admin configuration
-
TPM PCR 15 covers the local identity of the host
Note that this means SecureBoot coverage ends once the system transitions from the initrd into the root file system. It is assumed that trust and integrity have been established before this transition by some means, for example LUKS/dm-crypt/dm-integrity, ideally bound to PCR 11 (i.e. UKI and boot phase).
A robust and secure update scheme for PCR 11 (i.e. UKI) has been described above, which allows binding TPM-locked resources to a UKI. For PCR 12 no such scheme is currently designed, but might be added later (use case: permit access to certain secrets only if the system runs with configuration signed by a specific set of keys). Given that resources measured into PCR 15 typically aren’t updated (or if they are updated loss of access to other resources linked to them is desired) no update scheme should be necessary for it.
This document focuses on the three PCRs discussed above. Disk encryption and other userspace may choose to also bind to other PCRs. However, doing so reintroduces the PCR brittleness issue that this design is supposed to remove. The PCRs defined by the various UEFI/TPM firmware specifications generally do not provide any concept of signatures of expected PCR values.
It is known that the industry-adopted SecureBoot signing keys are too broad to act as more than a denylist for known bad code. It is thus probably a good idea to enroll vendor SecureBoot keys wherever possible (e.g. in environments where the hardware is very well known, and VM environments), to raise the bar on preparing rogue UKI-like PE binaries that will result in PCR values that match expectations but actually contain bad code. Discussion about that is however outside of the scope of this document.
Whole OS embedded in the UKI
The above is written under the assumption that the UKI embeds an initrd whose job it is to set up the root file system: find it, validate it, cryptographically unlock it and similar. Once the root file system is found, the system transitions into it.
While this is the traditional design and likely what most systems will use, it is also possible to embed a regular root file system into the UKI and avoid any transition to an on-disk root file system. In this mode the whole OS would be encapsulated in the UKI, and signed/measured as one. In such a scenario the whole of the OS must be loaded into RAM and remain there, which typically restricts the general usability of such an approach. However, for specific purposes this might be the design of choice, for example to implement self-sufficient recovery or provisioning systems.
Proposed Implementations & Current Status
The toolset for most of the above is already implemented in systemd and related projects in one way or another. Specifically:
-
The
systemd-stub
(or short: sd-stub
) component implements the discussed UEFI stub program -
The
systemd-measure
tool can be used to pre-calculate expected PCR 11 values given the UKI components and can sign the result, as discussed in the UKI Generation section above. -
The
systemd-cryptenroll
and systemd-cryptsetup
tools can be used to bind a LUKS2 encrypted file system volume to a TPM and PCR 11 public key/signatures, according to the scheme described above. (The two components also implement a “recovery key” concept, as discussed above) -
The
systemd-pcrphase
component measures specific words into PCR 11 at the discussed phases of the boot process. -
The
systemd-creds
tool may be used to encrypt/decrypt data objects called “credentials” that can be passed into services and booted systems, and are automatically decrypted (if needed) immediately before service invocation. Encryption is typically bound to the local TPM, to ensure the data cannot be recovered elsewhere.
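As a small illustration of the latter, here's a sketch of encrypting a secret as a credential and consuming it from a service (names and paths are made up; the exact switches are assumptions):

```
# Encrypt a secret, binding it to the local TPM2, and store it where
# the service manager can find it.
echo -n "hunter2" | systemd-creds encrypt --with-key=tpm2 \
    --name=db-password - /etc/credstore.encrypted/db-password.cred

# In the consuming service unit (drop-in sketch):
#   [Service]
#   LoadCredentialEncrypted=db-password:/etc/credstore.encrypted/db-password.cred
```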
Note that
systemd-stub
(i.e. the UEFI code glued into the UKI) is distinct from
systemd-boot
(i.e. the UEFI boot loader that can manage multiple UKIs and other
boot menu items and implements automatic fallback, an interactive menu
and a programmatic interface for the OS among other things). One can
be used without the other – both sd-stub
without sd-boot
and vice
versa – though they integrate nicely if used in combination.
Note that the mechanisms described are relatively generic, and can be implemented and consumed by other software too; systemd should be considered a reference implementation, though one that has found comprehensive adoption across Linux distributions.
Some concepts discussed above are currently not implemented. Specifically:
-
The rollback protection logic is currently not implemented.
-
The mentioned measurement of the root file system volume key to PCR 15 is implemented, but not merged into the systemd main branch yet.
The UAPI Group
We recently started a new group for discussing concepts and specifications of basic OS components, including UKIs as described above. It's called the UAPI Group. Please have a look at the various documents and specifications already available there, and expect more to come. Contributions welcome!
Glossary
TPM
Trusted Platform Module; a security chip found in many modern systems, both physical systems and increasingly also in virtualized environments. Traditionally a discrete chip on the mainboard but today often implemented in firmware, and lately directly in the CPU SoC.
PCR
Platform Configuration Register; a set of registers on a TPM that are initialized to zero at boot. The firmware and OS can “extend” these registers with hashes of data used during the boot process and afterwards. “Extension” means the supplied data is first cryptographically hashed. The resulting hash value is then combined with the previous value of the PCR and the combination hashed again. The result will become the new value of the PCR. By doing this iteratively for all parts of the boot process (always with the data that will be used next during the boot process) a concept of “Measured Boot” can be implemented: as long as every element in the boot chain measures (i.e. extends into the PCR) the next part of the boot like this, the resulting PCR values will prove cryptographically that only a certain set of boot components can have been used to boot up. A standards compliant TPM usually has 24 PCRs, but more than half of those are already assigned specific meanings by the firmware. Some of the others may be used by the OS, of which we use four in the concepts discussed in this document.
Measurement
The act of “extending” a PCR with some data object.
SRK
Storage Root Key; a special cryptographic key generated by a TPM that never leaves the TPM, and can be used to encrypt/decrypt data passed to the TPM.
UKI
Unified Kernel Image; the concept this document is about. A
combination of kernel, initrd
and other resources. See above.
SecureBoot
A mechanism where every software component involved in the boot process is cryptographically signed and checked against a set of public keys stored in the mainboard hardware, implemented in firmware, before it is used.
Measured Boot
A boot process where each component measures (i.e., hashes and extends into a TPM PCR, see above) the next component it will pass control to before doing so. This serves two purposes: it can be used to bind security policy for encrypted secrets to the resulting PCR values (or signatures thereof, see above), and it can be used to reason about used software after the fact, for example for the purpose of remote attestation.
initrd
Short for “initial RAM disk”, which – strictly speaking – is a
misnomer today, because no RAM disk is involved anymore, but a tmpfs
file system instance. Also known as “initramfs
”, which is also
misleading, given the file system is not ramfs
anymore, but tmpfs
(both of which are in-memory file systems on Linux, with different
semantics). The initrd
is passed to the Linux kernel and is
basically a file system tree in a cpio
archive. The kernel unpacks the
image into a tmpfs
(i.e., into an in-memory file system), and then
executes a binary from it. It thus contains the binaries for the first
userspace code the kernel invokes. Typically, the initrd
’s job is to
find the actual root file system, unlock it (if encrypted), and
transition into it.
UEFI
Short for “Unified Extensible Firmware Interface”, it is a widely adopted standard for PC firmware, with native support for SecureBoot and Measured Boot.
EFI
More or less synonymous to UEFI, IRL.
Shim
A boot component originating in the Linux world, which in a way extends the public key database SecureBoot maintains (which is under the control of Microsoft) with a second layer (which is under the control of the Linux distributions and of the owner of the physical device).
PE
Portable Executable; a file format for executable binaries, originally from the Windows world, but also used by UEFI firmware. PE files may contain code and data, categorized in labeled “sections”.
ESP
EFI System Partition; a special partition on a storage medium in which the firmware looks for UEFI PE binaries to execute at boot.
HSM
Hardware Security Module; a piece of hardware that can generate and store secret cryptographic keys, and execute operations with them, without the keys leaving the hardware (though this is configurable). TPMs can act as HSMs.
DEK
Disk Encryption Key; a symmetric cryptographic key used for unlocking disk encryption, i.e. passed to LUKS/dm-crypt for activating an encrypted storage volume.
LUKS2
Linux Unified Key Setup Version 2; a specification for a superblock
for encrypted volumes widely used on Linux. LUKS2 is the default
on-disk format for the cryptsetup
suite of tools. It provides
flexible key management with multiple independent key slots and allows
embedding arbitrary metadata in a JSON format in the superblock.
Thanks
I’d like to thank Alain Gefflaut, Anna Trikalinou, Christian Brauner, Daan de Meyer, Luca Boccassi, Zbigniew Jędrzejewski-Szmek for reviewing this text.
TLDR: Hermetic /usr/
is awesome; let's popularize image-based OSes
with modernized security properties built around immutability,
SecureBoot, TPM2, adaptability, auto-updating, factory reset,
uniformity – built from traditional distribution packages, but
deployed via images.
Over the past years, systemd gained a number of components for building Linux-based operating systems. While these components individually have been adopted by many distributions and products for specific purposes, we did not publicly communicate a broader vision of how they should all fit together in the long run. In this blog story I hope to provide that from my personal perspective, i.e. explain how I personally would build an OS and where I personally think OS development with Linux should go.
I figure this is going to be a longer blog story, but I hope it will be equally enlightening. Please understand though that everything I write about OS design here is my personal opinion, and not one of my employer.
For the last 12 years or so I have been working on Linux OS
development, mostly around systemd
. In all those years I had a lot
of time thinking about the Linux platform, and specifically
traditional Linux distributions and their strengths and weaknesses. I
have seen many attempts to reinvent Linux distributions in one way or
another, with varying success. After all this, most would probably
agree that the traditional RPM or dpkg/apt-based distributions still
define the Linux platform more than others (for 25+ years now), even
though some Linux-based OSes (Android, ChromeOS) probably outnumber
them in installations overall.
And over all those 12 years I kept wondering how I would actually build an OS for a system or an appliance, and what components are necessary to achieve that. And most importantly, how can we make these components generic enough that they are useful in generic/traditional distributions too, and in other use cases than my own?
The Project
Before figuring out how I would build an OS it's probably good to figure out what type of OS I actually want to build, what purpose I intend to cover. I think a desktop OS is probably the most interesting. Why is that? Well, first of all, I use one of these for my job every single day, so I care immediately: it's my primary tool of work. But more importantly: I think building a desktop OS is one of the most complex overall OS projects you can work on, simply because desktops are so much more versatile and variable than servers or embedded devices. If one figures out the desktop case, I think there's a lot more to learn from, and reuse in the server or embedded case, than going the other way. After all, there's a reason why so much of the widely accepted Linux userspace stack comes from people with a desktop background (including systemd, BTW).
So, let's see how I would build a desktop OS. If you press me hard, and ask me why I would do that given that ChromeOS already exists and more or less is a Linux desktop OS: there's plenty I am missing in ChromeOS, but most importantly, I am a lot more interested in building something people can easily and naturally rebuild and hack on, i.e. Google-style over-the-wall open source with its skewed power dynamic is not particularly attractive to me. I much prefer building this within the framework of a proper open source community, out in the open, and basing all this strongly on the status quo ante, i.e. the existing distributions. I think it is crucial to provide a clear avenue to build a modern OS based on the existing distribution model, if there shall ever be a chance to make this interesting for a larger audience.
(Let me underline though: even though I am going to focus on a desktop here, most of this is directly relevant for servers as well, in particular container host OSes and suchlike, or embedded devices, e.g. car IVI systems and so on.)
Design Goals
-
First and foremost, I think the focus must be on an image-based design rather than a package-based one. For robustness and security it is essential to operate with reproducible, immutable images that describe the OS or large parts of it in full, rather than operating always with fine-grained RPM/dpkg style packages. That's not to say that packages are not relevant (I actually think they matter a lot!), but I think they should be less a tool for deploying code and more one for building the objects to deploy. A different way to see this: any OS built like this must be easy to replicate in a large number of instances, with minimal variability. Regardless of whether we talk about desktops, servers or embedded devices: the focus for my OS should be on "cattle", not "pets", i.e. that from the start it's trivial to reuse the well-tested, cryptographically signed combination of software over a large set of devices the same way, with a maximum of bit-exact reuse and a minimum of local variances.
-
The trust chain matters, from the boot loader all the way to the apps. This means all code that is run must be cryptographically validated before it is run. All storage must be cryptographically protected: public data must be integrity checked; private data must remain confidential.
This is in fact where big distributions currently fail pretty badly. I would go as far as saying that SecureBoot on Linux distributions is mostly security theater at this point, if you so will. That's because the initrd that unlocks your FDE (i.e. the cryptographic concept that protects the rest of your system) is not signed or protected in any way. It's trivial for an attacker with access to your hard disk to modify it in an undetectable way and collect your FDE passphrase. The bureaucracy involved in the big distributions' implementation of UEFI SecureBoot is to a large degree pointless if you ask me, given that once the kernel is assumed to be in a good state, the system's next step is to invoke completely unsafe code with full privileges.
This is a fault of current Linux distributions though, not of SecureBoot in general. Other OSes use this functionality in more useful ways, and we should correct that too.
-
Pretty much the same thing: offline security matters. I want my data to be reasonably safe at rest, i.e. cryptographically inaccessible even when I leave my laptop in my hotel room, suspended.
-
Everything should be cryptographically measured, so that remote attestation is supported for as much software shipped on the OS as possible.
-
Everything should be self-descriptive, with single sources of truth that are closely attached to the objects themselves, instead of stored externally.
-
Everything should be self-updating. Today we know that software is never bug-free, and thus requires a continuous update cycle. Not only the OS itself, but also any extensions, services and apps running on it.
-
Everything should be robust with respect to aborted OS operations, power loss and so on. It should be robust against hosed OS updates (regardless of whether the download process failed or the image was buggy), and not require user interaction to recover from them.
-
There must always be a way to put the system back into a well-defined, guaranteed safe state ("factory reset"). This includes ensuring that all sensitive data from earlier uses becomes cryptographically inaccessible.
-
The OS should enforce clear separation between vendor resources, system resources and user resources: conceptually and when it comes to cryptographic protection.
-
Things should be adaptive: the system should come up and make the best of the system it runs on, adapt to the storage and hardware. Moreover, the system should support execution on bare metal equally well as execution in a VM environment and in a container environment (i.e.
systemd-nspawn
). -
Things should not require explicit installation, i.e. every image should be a live image. For installation it should be sufficient to
dd
an OS image onto disk. Thus, strong focus on "instantiate on first boot", rather than "instantiate before first boot". -
Things should be reasonably minimal. The image the system starts its life with should be quick to download, and not include resources that can as well be created locally later.
-
System identity, local cryptographic keys and so on should be generated locally, not pre-provisioned, so that no sensitive data can leak during transport onto the system.
-
Things should be reasonably democratic and hackable. It should be easy to fork an OS, to modify an OS and still get reasonable cryptographic protection. Modifying your OS should not necessarily imply that your "warranty is voided" and you lose all good properties of the OS, if you so will.
-
Things should be reasonably modular. The privileged part of the core OS must be extensible, including on the individual system. It's not sufficient to support extensibility just through high-level UI applications.
-
Things should be reasonably uniform, i.e. ideally the same formats and cryptographic properties are used for all components of the system, regardless if for the host OS itself or the payloads it receives and runs.
-
Even taking all these goals into consideration, it should still be close to traditional Linux distributions, and take advantage of what they are really good at: integration and security update cycles.
Now that we know our goals and requirements, let's start designing the OS along these lines.
Hermetic /usr/
First of all the OS resources (code, data files, …) should be
hermetic in an immutable /usr/
. This means that a /usr/
tree
should carry everything needed to set up the minimal set of
directories and files outside of /usr/
to make the system work. This
/usr/
tree can then be mounted read-only into the writable root file
system that then will eventually carry the local configuration, state
and user data in /etc/
, /var/
and /home/
as usual.
Thankfully, modern distributions are surprisingly close to working
without issues in such a hermetic context. Specifically, Fedora works
mostly just fine: it has adopted the /usr/
merge and the declarative
systemd-sysusers
and
systemd-tmpfiles
components quite comprehensively, which means the directory trees
outside of /usr/
are automatically generated as needed if missing.
In particular /etc/passwd
and /etc/group
(and related files) are
appropriately populated, should entries be missing.
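For illustration, this is roughly what such declarative definitions shipped in /usr/ look like (the service and user names are made up); systemd-sysusers and systemd-tmpfiles apply them at boot if the corresponding entries and directories are missing:

```
# sysusers.d: declare a system user, created at boot if absent.
cat >/usr/lib/sysusers.d/myservice.conf <<'EOF'
u  myservice  -  "My Service Daemon"  /var/lib/myservice
EOF

# tmpfiles.d: declare directories outside /usr/ to (re)create as needed.
cat >/usr/lib/tmpfiles.d/myservice.conf <<'EOF'
d  /var/lib/myservice  0750  myservice  myservice  -
EOF
```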
In my model a hermetic OS is hence comprehensively defined within
/usr/
: combine the /usr/
tree with an empty, otherwise unpopulated
root file system, and it will boot up successfully, automatically
creating the files and resources strictly necessary
to boot up.
Monopolizing vendor OS resources and definitions in an immutable
/usr/
opens multiple doors to us:
-
We can apply
dm-verity
to the whole/usr/
tree, i.e. guarantee structural, cryptographic integrity on the whole vendor OS resources at once, with full file system metadata. -
We can implement updates to the OS easily: by implementing an A/B update scheme on the
/usr/
tree we can update the OS resources atomically and robustly, while leaving the rest of the OS environment untouched. -
We can implement factory reset easily: erase the root file system and reboot. The hermetic OS in
/usr/
has all the information it needs to set up the root file system afresh — exactly like in a new installation.
Initial Look at the Partition Table
So let's have a look at a suitable partition table, taking a hermetic
/usr/
into account. Let's conceptually start with a table of four
entries:
-
An UEFI System Partition (required by firmware to boot)
-
Immutable, Verity-protected, signed file system with the
/usr/
tree in version A -
Immutable, Verity-protected, signed file system with the
/usr/
tree in version B -
A writable, encrypted root file system
(This is just for initial illustration here, as we'll see later it's going to be a bit more complex in the end.)
The Discoverable Partitions
Specification provides
suitable partition types UUIDs for all of the above partitions. Which
is great, because it makes the image self-descriptive: simply by
looking at the image's GPT table we know what to mount where. This
means we do not need a manual /etc/fstab
, and a multitude of tools
such as systemd-nspawn
and similar can operate directly on the disk
image and boot it up.
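For example, with a disk image following the Discoverable Partitions Specification (the image name is hypothetical), something like this should work without any mount configuration at all:

```
# Show what the image contains and what would be mounted where,
# derived purely from the GPT partition type UUIDs.
systemd-dissect fooos.raw

# Boot the very same image as a full OS container, again without
# any fstab or manual mount setup.
systemd-nspawn --image=fooos.raw --boot
```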
Booting
Now that we have a rough idea how to organize the partition table, let's look a bit at how to boot into that. Specifically, in my model "unified kernels" are the way to go, specifically those implementing Boot Loader Specification Type #2. These are basically kernel images that have an initial RAM disk attached to them, as well as a kernel command line, a boot splash image and possibly more, all wrapped into a single UEFI PE binary. By combining these into one we achieve two goals: they become extremely easy to update (i.e. drop in one file, and you update kernel+initrd) and more importantly, you can sign them as one for the purpose of UEFI SecureBoot.
In my model, each version of such a kernel would be associated with
exactly one version of the /usr/
tree: both are always updated at
the same time. An update then becomes relatively simple: drop in one
new /usr/
file system plus one kernel, and the update is complete.
The boot loader used for all this would be systemd-boot, of course. It's a very simple loader, and implements the aforementioned boot loader specification. This means it requires no explicit configuration or anything: it's entirely sufficient to drop in one such unified kernel file, and it will be picked up, and be made a candidate to boot into.
You might wonder how to configure the root file system to boot from
with such a unified kernel that contains the kernel command line and
is signed as a whole and thus immutable. The idea here is to use the
usrhash=
kernel command line option implemented by
systemd-veritysetup-generator
and
systemd-fstab-generator. It
does two things: it will search and set up a dm-verity
volume for
the /usr/
file system, and then mount it. It takes the root hash
value of the dm-verity
Merkle tree as the parameter. This hash is
then also used to find the /usr/
partition in the GPT partition
table, under the assumption that the partition UUIDs are derived from
it, as per the suggestions in the discoverable partitions
specification (see above).
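A minimal sketch of how these pieces connect, with illustrative device names (veritysetup prints the root hash when formatting the hash partition):

```
# Generate the dm-verity hash data for the /usr/ file system and note
# the "Root hash:" value it prints.
veritysetup format \
    /dev/disk/by-partlabel/usr-A \
    /dev/disk/by-partlabel/usr-A-verity

# That root hash is then baked into the UKI's kernel command line,
# roughly like this (value abbreviated):
#   usrhash=7b0f4a…
```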
systemd-boot
(if not told otherwise) will do a version sort of the
kernel image files it finds, and then automatically boot the newest
one. Picking a specific kernel to boot will also fixate which version
of the /usr/
tree to boot into, because — as mentioned — the Verity
root hash of it is built into the kernel command line the unified
kernel image contains.
In my model I'd place the kernels directly into the UEFI System
Partition (ESP), in order to simplify things. (systemd-boot
also
supports reading them from a separate boot partition, but let's not
complicate things needlessly, at least for now.)
So, with all this, we now already have a boot chain that goes
something like this: once the boot loader is run, it will pick the
newest kernel, which includes the initial RAM disk and a secure
reference to the /usr/
file system to use. This is already
great. But a /usr/
alone won't make us happy, we also need a root
file system. In my model, that file system would be writable, and the
/etc/
and /var/
hierarchies would be located directly on it. Since
these trees potentially contain secrets (SSH keys, …) the root file
system needs to be encrypted. We'll use LUKS2 for this, of course. In
my model, I'd bind this to the TPM2 chip (for compatibility with
systems lacking one, we can find a suitable fallback, which then
provides weaker guarantees, see below). A TPM2 is a security chip
available in most modern PCs. Among other things it contains a
persistent secret key that can be used to encrypt data, in a way that
you can only decrypt it again if you have access to the chip and can
prove you are using validated software. The cryptographic measuring I
mentioned earlier is what allows this to work. But … let's not get
lost too much in the details of TPM2 devices, that'd be material for a
novel, and this blog story is going to be way too long already.
What does using a TPM2 bound key for unlocking the root file system get us? We can encrypt the root file system with it, and you can only read or make changes to the root file system if you also possess the TPM2 chip and run our validated version of the OS. This protects us against an evil maid scenario to some level: an attacker cannot just copy the hard disk of your laptop while you leave it in your hotel room, because unless the attacker also steals the TPM2 device it cannot be decrypted. The attacker also cannot simply modify the root file system, because such changes would be detected on the next boot, as they were not made with the right cryptographic key.
So, now we have a system that already can boot up somewhat completely,
and run userspace services. All code that is run is verified in some
way: the /usr/
file system is Verity protected, and the root hash of
it is included in the kernel that is signed via UEFI SecureBoot. And
the root file system is locked to the TPM2 where the secret key is
only accessible if our signed OS + /usr/
tree is used.
(One brief intermission here: so far all the components I am
referencing here exist already, and have been shipped in systemd
and
other projects already, including the TPM2 based disk
encryption. There's one thing missing here however at the moment that
still needs to be developed (happy to take PRs!): right now TPM2 based
LUKS2 unlocking is bound to PCR hash values. This is hard to work with
when implementing updates — what we'd need instead is unlocking by
signatures of PCR hashes. TPM2 supports this, but we don't support it
yet in our systemd-cryptsetup
+ systemd-cryptenroll
stack.)
One of the goals mentioned above is that cryptographic key material
should always be generated locally on first boot, rather than
pre-provisioned. This of course has implications for the encryption
key of the root file system: if we want to boot into this system we
need the root file system to exist, and thus a key already generated
that it is encrypted with. But where precisely would we generate it if
we have no installer which could generate it while installing (as is
done in traditional Linux distribution installers)? My proposed
solution here is to use
systemd-repart
,
which is a declarative, purely additive repartitioner. It can run from
the initrd to create and format partitions on boot, before
transitioning into the root file system. It can also format the
partitions it creates and encrypt them, automatically enrolling a
TPM2-bound key.
So, let's revisit the partition table we mentioned earlier. Here's what in my model we'd actually ship in the initial image:
-
An UEFI System Partition (ESP)
-
An immutable, Verity-protected, signed file system with the
/usr/
tree in version A
And that's already it. No root file system, no B /usr/
partition,
nothing else. Only two partitions are shipped: the ESP with the
systemd-boot
loader and one unified kernel image, and the A version
of the /usr/
partition. Then, on first boot systemd-repart
will
notice that the root file system doesn't exist yet, and will create
it, encrypt and format it, and enroll the key into the TPM2. It
will also create the second /usr/
partition (B) that we'll need for
later A/B updates (which will be created empty for now, until the
first update operation actually takes place, see below). Once done the
initrd will combine the fresh root file system with the shipped
/usr/
tree, and transition into it. Because the OS is hermetic in
/usr/
and contains all the systemd-tmpfiles
and systemd-sysusers
information it can then set up the root file system properly and
create any directories and symlinks (and maybe a few files) necessary
to operate.
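Sketched as systemd-repart definitions shipped inside the image (the file names and options shown are illustrative, not an authoritative configuration):

```
# Shipped in /usr/lib/repart.d/ of the image; systemd-repart creates
# whatever is missing on first boot, sized to the backing storage.

cat >/usr/lib/repart.d/20-usr-B.conf <<'EOF'
[Partition]
Type=usr
Label=usr-B
# Created empty now; filled by the first A/B update.
EOF

cat >/usr/lib/repart.d/30-root.conf <<'EOF'
[Partition]
Type=root
Format=btrfs
# Generate the key locally on first boot and enroll it into the TPM2.
Encrypt=tpm2
# Erase (and recreate) this partition when a factory reset is requested.
FactoryReset=yes
EOF
```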
Besides the fact that the root file system's encryption keys are
generated on the system we boot from and never leave it, it is also
pretty nice that the root file system will be sized dynamically,
taking into account the physical size of the backing storage. This is
perfect, because on first boot the image will automatically adapt to what
it has been dd
'ed onto.
Factory Reset
This is a good point to talk about the factory reset logic, i.e. the
mechanism to place the system back into a known good state. This is
important for two reasons: in our laptop use case, once you want to
pass the laptop to someone else, you want to ensure your data is fully
and comprehensively erased. Moreover, if you have reason to believe
your device was hacked you want to revert the device to a known good
state, i.e. ensure that exploits cannot persist. systemd-repart
already has a mechanism for it. In the declarations of the partitions
the system should have, entries may be marked to be candidates for
erasing on factory reset. The actual factory reset is then requested
by one of two means: by specifying a specific kernel command line
option (which is not too interesting here, given we lock that down via
UEFI SecureBoot; but then again, one could also add a second kernel to
the ESP that is identical to the first, with the only difference that it
lists this command line option: thus when the user selects this entry
it will initiate a factory reset) — and via an EFI variable that can
be set and is honoured on the immediately following boot. So here's
how a factory reset would then go down: once the factory reset is
requested it's enough to reboot. On the subsequent boot
systemd-repart
runs from the initrd, where it will honour the
request and erase the partitions marked for erasing. Once that is
complete, the system is back in the state it was originally shipped in:
only the ESP and the /usr/
file system will exist, but the root file
system is gone. And from here we can continue as on the original first
boot: create a new root file system (and any other partitions), and
encrypt/set it up afresh.
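A sketch of how that could look in practice (the entry name and the exact switch are assumptions): ship a second UKI in the ESP whose built-in command line additionally requests the reset, and select it once for the next boot:

```
# The factory-reset boot entry's command line additionally carries
# (switch name assumed):
#   systemd.factory_reset=yes
#
# Select that entry for the next boot only, then reboot:
bootctl set-oneshot fooos-5.1-factoryreset.efi
reboot

# After the reboot, systemd-repart in the initrd erases all partitions
# marked FactoryReset=yes and recreates them, as on the very first boot.
```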
So now we have a nice setup, where everything is either signed or encrypted securely. The system can adapt to the system it is booted on automatically on first boot, and can easily be brought back into a well defined state identical to the way it was shipped in.
Modularity
But of course, such a monolithic, immutable system is only useful for
very specific purposes. If /usr/
can't be written to, – at least in
the traditional sense – one cannot just go and install a new software
package that one needs. So here two goals are superficially
conflicting: on one hand one wants modularity, i.e. the ability to
add components to the system, and on the other immutability, i.e. that
precisely this is prohibited.
So let's see what I propose as a middle ground in my model. First, what's the precise use case for such modularity? I see a couple of different ones:
-
For some cases it is necessary to extend the system itself at the lowest level, so that the components added in extend (or maybe even replace) the resources shipped in the base OS image, so that they live in the same namespace, and are subject to the same security restrictions and privileges. Exposure to the details of the base OS and its interface for this kind of modularity is at the maximum.
Example: a module that adds a debugger or tracing tools into the system. Or maybe an optional hardware driver module.
-
In other cases, more isolation is preferable: instead of extending the system resources directly, additional services shall be added in that bring their own files, can live in their own namespace (but with "windows" into the host namespaces), however still are system components, and provide services to other programs, whether local or remote. Exposure to the details of the base OS for this kind of modularity is restricted: it mostly focuses on the ability to consume and provide IPC APIs from/to the system. Components of this type can still be highly privileged, but the level of integration is substantially smaller than for the type explained above.
Example: a module that adds a specific VPN connection service to the OS.
-
Finally, there's the actual payload of the OS. This stuff is relatively isolated from the OS and definitely from each other. It mostly consumes OS APIs, and generally doesn't provide OS APIs. This kind of stuff runs with minimal privileges, and in its own namespace of concepts.
Example: a desktop app, for reading your emails.
Of course, the lines between these three types of modules are blurry, but I think distinguishing them does make sense, as I think different mechanisms are appropriate for each. So here's what I'd propose in my model to use for this.
-
For the system extension case I think the
systemd-sysext
images are appropriate. This tool operates on system extension images that are very similar to the host's disk image: they also contain a /usr/
partition, protected by Verity. However, they just include additions to the host image: binaries that extend the host. When such a system extension image is activated, it is merged via an immutable overlayfs
mount into the host's /usr/
tree. Thus any file shipped in such a system extension will suddenly appear as if it was part of the host OS itself. For optional components that should more or less be considered part of the OS, this is a very simple and powerful way to combine an immutable OS with an immutable extension. Note that most likely extensions for an OS matching this tool should be built at the same time within the same update cycle scheme as the host OS itself. After all, the files included in the extensions will have dependencies on files in the system OS image, and care must be taken that these dependencies remain in order.
For adding in additional somewhat isolated system services in my model, Portable Services are the proposed tool of choice. Portable services are in most ways just like regular system services; they could be included in the system OS image or an extension image. However, portable services use
RootImage=
to run off separate disk images, thus within their own namespace. Images set up this way have various ways to integrate into the host OS, as they are in most ways regular system services, which just happen to bring their own directory tree. Also, unlike regular system services, for them sandboxing is opt-out rather than opt-in. In my model, here too the disk images are Verity protected and thus immutable. Just like the host OS they are GPT disk images that come with a/usr/
partition and Verity data, along with signing. -
Finally, the actual payload of the OS, i.e. the apps. To be useful in real life it is important to hook into existing ecosystems here, so that a large set of apps is available. Given that on Linux flatpak (or, on servers, OCI containers) is the established format that pretty much won, it is probably the way to go. That said, I think both of these mechanisms have relatively weak properties, in particular when it comes to security, since immutability/measurements and similar are not provided. This means that, unlike for system extensions and portable services, a complete trust chain with attestation and per-app cryptographically protected data is much harder to implement sanely.
What I'd like to underline here is that the main system OS image, as well as the system extension images and the portable service images are put together the same way: they are GPT disk images, with one immutable file system and associated Verity data. The latter two should also contain a PKCS#7 signature for the top-level Verity hash. This uniformity has many benefits: you can use the same tools to build and process these images, but most importantly: by using a single way to validate them throughout the stack (i.e. Verity, in the latter cases with PKCS#7 signatures), validation and measurement is straightforward. In fact it's so obvious that we don't even have to implement it in systemd: the kernel has direct support for this Verity signature checking natively already (IMA).
So, by composing a system at runtime from a host image, extension images and portable service images we have a nicely modular system where every single component is cryptographically validated on every single IO operation, and every component is measured, in its entire combination, directly in the kernel's IMA subsystem.
(Of course, once you add the desktop apps or OCI containers on top, then these properties are lost further down the chain. But well, a lot is already won, if you can close the chain that far down.)
Note that system extensions are not designed to replicate the fine
grained packaging logic of RPM/dpkg. Of course, systemd-sysext
is a
generic tool, so you can use it for whatever you want, but there's a
reason it does not bring support for a dependency language: the goal
here is not to replicate traditional Linux packaging (we have that
already, in RPM/dpkg, and I think they are actually OK for what they
do) but to provide delivery of larger, coarser sets of functionality,
in lockstep with the underlying OS' life-cycle and in particular with
no interdependencies, except on the underlying OS.
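To make the mechanism a bit more tangible, here's a minimal sketch using a plain directory-based extension (production extensions would be signed, Verity-protected disk images; names and paths are made up):

```
# A system extension that adds a debugging tool under /usr/.
mkdir -p /run/extensions/strace/usr/bin
cp /some/build/of/strace /run/extensions/strace/usr/bin/

# Every extension identifies which OS it extends; "_any" skips the
# matching here purely for the sake of the example.
mkdir -p /run/extensions/strace/usr/lib/extension-release.d
cat >/run/extensions/strace/usr/lib/extension-release.d/extension-release.strace <<'EOF'
ID=_any
EOF

# Merge the extension into the (read-only) /usr/ tree via overlayfs,
# and detach it again when done.
systemd-sysext merge
systemd-sysext unmerge
```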
Also note that depending on the use case it might make sense to also
use system extensions to modularize the initrd
step. This is
probably less relevant for a desktop OS, but for server systems it
might make sense to package up support for specific complex storage in
a systemd-sysext
system extension, which can be applied to the
initrd that is built into the unified kernel. (In fact, we have been
working on implementing signed yet modular initrd support to general
purpose Fedora this way.)
Note that portable services are composable from system extensions too,
by the way. This makes them even more useful, as you can share a
common runtime between multiple portable services, or even use the host
image as a common runtime for portable services.
runtime image is shared between one or more system extensions, and
composed at runtime via an overlayfs
instance.
More Modularity: Secondary OS Installs
Having an immutable, cryptographically locked down host OS is great I think, and if we have some moderate modularity on top, that's also great. But oftentimes it's useful to be able to depart/compromise for some specific use cases from that, i.e. provide a bridge for example to allow workloads designed around RPM/dpkg package management to coexist reasonably nicely with such an immutable host.
For this purpose in my model I'd propose using systemd-nspawn
containers. The containers are focused on OS containerization,
i.e. they allow you to run a full OS with init system and everything
as payload (unlike for example Docker containers which focus on a
single service, and where running a full OS in it is a mess).
Running systemd-nspawn
containers for such secondary OS installs has
various nice properties. One of course is that systemd-nspawn
supports the same level of cryptographic image validation that we rely
on for the host itself. Thus, to some level the whole OS trust chain
is reasonably recursive if desired: the firmware validates the OS, and the OS can
validate a secondary OS installed within it. In fact, we can run our
trusted OS recursively on itself and get similar security guarantees!
Besides these security aspects, systemd-nspawn
also has really nice
properties when it comes to integration with the host. For example the
--bind-user= switch permits binding a host user record and their home
directory into a container as a simple one-step operation. This makes it
extremely easy to have a single user and $HOME
but share it
concurrently with the host and a zoo of secondary OSes in
systemd-nspawn
containers, which each could run different
distributions even.
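As a quick sketch of what such a secondary OS install could look like in practice (the image path and user name are made up; systemd-nspawn discovers Verity data in the image automatically if present):

# boot a disk image as a full OS container and bind the host user into it
sudo systemd-nspawn \
    --image=/var/lib/machines/secondary-os.raw \
    --bind-user=lennart \
    -b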
Developer Mode
Superficially, an OS with an immutable /usr/
appears much less
hackable than an OS where everything is writable. Moreover, an OS
where everything must be signed and cryptographically validated makes
it hard to insert your own code, given you are unlikely to possess
access to the signing keys.
To address this issue other systems have supported a "developer" mode: once entered, the security guarantees are disabled, and the system can be freely modified, without cryptographic validation. While that's a great concept to have, I doubt it's what most developers really want: the cryptographic properties of the OS are great after all, and it sucks having to give them up once developer mode is activated.
In my model I'd thus propose two different approaches to this problem. First of all, I think there's value in allowing users to additively extend/override the OS via local developer system extensions. With this scheme the underlying cryptographic validation would remain intact, but — if this form of development mode is explicitly enabled — the developer could add in more resources from local storage that are not tied to the OS builder's chain of trust, but to a local one (i.e. simply backed by encrypted storage of some form).
The second approach is to make it easy to extend (or in fact replace) the set of trusted validation keys with local ones that are under the control of the user, in order to make it easy to operate with kernel, OS, extension, portable service or container images signed by the local developer without involvement of the OS builder. This is relatively easy to do for components further down the trust chain: the elements further up the chain should optionally accept additional certificates to validate against.
(Note that systemd currently has no explicit support for a "developer" mode like this. I think we should add that sooner or later however.)
Democratizing Code Signing
Closely related to the question of developer mode is the question of code signing. If you ask me, the status quo of UEFI SecureBoot code signing in the major Linux distributions is pretty sad. The work to get stuff signed is massive, but in effect it delivers very little in return: because initrds are entirely unprotected and reside on partitions lacking any form of cryptographic integrity protection, any attacker can trivially modify the boot process of any such Linux system and freely collect the FDE passphrases entered. There's little value in signing the boot loader and kernel in a complex bureaucracy if it then happily loads entirely unprotected code that processes the actually relevant security credentials: the FDE keys.
In my model, through use of unified kernels this important gap is closed, hence UEFI SecureBoot code signing becomes an integral part of the boot chain from firmware to the host OS. Unfortunately, code signing and having something a user can locally hack are to some level conflicting goals. However, I think we can improve the situation here, and put more emphasis on enrolling developer keys in the trust chain easily. Specifically, I see one relevant approach here: enrolling keys directly in the firmware is something that we should make less of a theoretical exercise and more something we can realistically deploy. See this work in progress making this more automatic and eventually safe. Other approaches are conceivable (including some that build on existing MokManager infrastructure), but given the politics involved, they are harder to conclusively implement.
Running the OS itself in a container
What I explain above is put together with running on a bare metal
system in mind. However, one of the stated goals is to make the OS
adaptive enough to also run in a container environment (specifically:
systemd-nspawn
) nicely. Booting a disk image on bare metal or in a
VM generally means that the UEFI firmware validates and invokes the
boot loader, and the boot loader invokes the kernel which then
transitions into the final system. This is different for containers:
here the container manager immediately calls the init system, i.e. PID
1. Thus the validation logic must be different: cryptographic
validation must be done by the container manager. In my model this is
solved by shipping the OS image not only with a Verity data partition
(as is already necessary for the UEFI SecureBoot trust chain, see
above), but also with another partition, containing a PKCS#7 signature
of the root hash of said Verity partition. This of course is exactly
what I propose for both the system extension and portable service
image. Thus, in my model the images for all three uses are put
together the same way: an immutable /usr/
partition, accompanied by
a Verity partition and a PKCS#7 signature partition. The OS image
itself then has two ways "into" the trust chain: either through the
signed unified kernel in the ESP (which is used for bare metal and VM
boots) or by using the PKCS#7 signature stored in the partition
(which is used for container/systemd-nspawn
boots).
Parameterizing Kernels
A fully immutable and signed OS has to establish trust in the user
data it makes use of before doing so. In the model I describe here,
for /etc/
and /var/
we do this via disk encryption of the root
file system (in combination with integrity checking). But the point
where the root file system is mounted comes relatively late in the
boot process, and thus cannot be used to parameterize the boot
itself. In many cases it's important to be able to parameterize the
boot process however.
For example, for the implementation of the developer mode indicated above it's useful to be able to pass this fact safely to the initrd, in combination with other fields (e.g. a hashed root password for allowing in-initrd logins for debug purposes). After all, if the initrd is pre-built by the vendor and signed as a whole together with the kernel, it cannot be modified to carry such data directly (which is in fact how parameterization of the initrd was traditionally done, to a large degree).
In my model this is achieved through system credentials, which allow passing parameters to systems (and services for the matter) in an encrypted and authenticated fashion, bound to the TPM2 chip. This means that we can securely pass data into the initrd so that it can be authenticated and decrypted only on the system it is intended for and with the unified kernel image it was intended for.
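To give an idea of the mechanics, here's a small sketch using systemd-creds (the credential name and content are made up; how the resulting file is then handed to the initrd depends on the particular setup):

# encrypt a parameter so that it can only be decrypted on this host, via its TPM2
echo -n "developer-mode=1" | \
    sudo systemd-creds encrypt --with-key=tpm2 --name=devmode - devmode.cred
# for testing, decrypt it again locally
sudo systemd-creds decrypt devmode.cred -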
Swap
In my model the OS would also carry a swap partition, for the simple
reason that only then can systemd-oomd.service
provide the best results. Also see In defence of swap: common
misconceptions.
Updating Images
We have a rough idea how the system shall be organized now, let's next focus on the deployment cycle: software needs regular update cycles, and software that is not updated regularly is a security problem. Thus, I am sure that any modern system must be automatically updated, without this requiring avoidable user interaction.
In my model, this is the job for systemd-sysupdate. It's a relatively simple A/B image updater: it operates either on partitions, on regular files in a directory, or on subdirectories in a directory. Each entry has a version (which is encoded in the GPT partition label for partitions, and in the filename for regular files and directories): whenever an update is initiated the oldest version is erased, and the newest version is downloaded.
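As a rough illustration of what such an A/B transfer definition could look like (the URL, project name, file names and patterns are all hypothetical; check the sysupdate.d documentation for the authoritative option set):

# /usr/lib/sysupdate.d/50-usr.conf: download a new /usr/ file system image
# and drop it into a free partition slot
[Source]
Type=url-file
Path=https://download.example.com/fooOS/
MatchPattern=fooOS_@v.usr.raw.xz

[Target]
Type=partition
Path=auto
MatchPattern=fooOS_@v
MatchPartitionType=usr

An update run would then be triggered with systemd-sysupdate update; matching definitions for the Verity partition, the signature partition and the unified kernel image would be added analogously.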
With the setup described above a system update becomes a really simple
operation. On each update the systemd-sysupdate
tool downloads a
/usr/
file system partition, an accompanying Verity partition, a
PKCS#7 signature partition, and drops it into the host's partition
table (where it possibly replaces the oldest version so far stored
there). Then it downloads a unified kernel image and drops it into
the EFI System Partition's /EFI/Linux
(as per Boot Loader
Specification; possibly erasing the oldest such file there). And that's
already the whole update process: four files are downloaded from the
server, unpacked and put in the most straightforward of ways into the
partition table or file system. Unlike in other OS designs there's no
mechanism required to explicitly switch to the newer version, the
aforementioned systemd-boot
logic will automatically pick the newest
kernel once it is dropped in.
Above we talked a lot about modularity, and how to put systems
together as a combination of a host OS image, system extension images
for the initrd and the host, portable service images and
systemd-nspawn
container images. I already emphasized that these
image files are actually always the same: GPT disk images with
partition definitions that match the Discoverable Partition
Specification. This comes in very handy when thinking about updating: we
can use the exact same systemd-sysupdate
tool for updating these
other images as we use for the host image. The uniformity of the
on-disk format allows us to update them uniformly too.
Boot Counting + Assessment
Automatic OS updates do not come without risks: if they happen
automatically and an update goes wrong, your system
might be updated into a brick. This of course is less
than ideal. Hence it is essential to address this reasonably
automatically. In my model, there's systemd's Automatic Boot
Assessment for
that. The mechanism is simple: whenever a new unified kernel image is
dropped into the system it will be stored with a small integer counter
value included in the filename. Whenever the unified kernel image is
selected for booting by systemd-boot
, the counter is decreased by one. Once
the system booted up successfully (which is determined by userspace)
the counter is removed from the file name (which indicates "this entry
is known to work"). If the counter ever hits zero, this indicates that
the system tried to boot this kernel a couple of times and failed each
time, and that the entry is thus apparently "bad". In this case systemd-boot
will not consider the kernel anymore, and will fall back to the
next-best entry that doesn't have a counter of zero.
By sticking the boot counter into the filename of the unified kernel
we can directly attach this information to the kernel, and thus need
not concern ourselves with cleaning up secondary information about the
kernel when the kernel is removed. Updating with a tool like
systemd-sysupdate
hence remains a very simple operation: drop one
old file, add one new file.
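To illustrate the naming scheme with made-up versions (assuming the ESP is mounted at /efi/):

/efi/EFI/Linux/fooOS_0.7.efi       # known good: the counter was already removed
/efi/EFI/Linux/fooOS_0.8+3.efi     # freshly installed: 3 boot attempts left
# after one unsuccessful attempt systemd-boot renames the new entry to:
/efi/EFI/Linux/fooOS_0.8+2-1.efi   # 2 attempts left, 1 attempt failed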
Picking the Newest Version
I already mentioned that systemd-boot
automatically picks the newest
unified kernel image to boot, by looking at the version encoded in the
filename. This is done via a simple
strverscmp()
call (well, truth be told, it's a modified version of that call,
different from the one implemented in libc, because real-life package
managers use more complex rules for comparing versions these days, and
hence it made sense to do that here too). Having
multiple entries of some resource in a directory, and automatically
picking the newest one, is a powerful concept, I think. It means
adding/removing new versions is extremely easy (as we discussed above,
in systemd-sysupdate
context), and allows stateless determination of
what to use.
If systemd-boot
can do that, what about system extension images,
portable service images, or systemd-nspawn
container images that do
not actually use systemd-boot
as the entrypoint? All these tools
actually implement the very same logic, but on the partition level: if
multiple suitable /usr/
partitions exist, then the newest is determined
by comparing their GPT partition labels.
This is in a way the counterpart to the systemd-sysupdate
update
logic described above: we always need a way to determine which
partition to actually use after an update took place, and this is
always easy: enumerate the possible entries, pick the
newest as per the (modified) strverscmp()
result.
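If you want to see how this ordering behaves for concrete strings, newer systemd versions expose the same comparison logic on the command line (assuming your systemd-analyze is recent enough to know this verb):

# reports the ordering of the two version strings, using systemd's
# modified strverscmp(); 0.10 sorts newer than 0.9 here
systemd-analyze compare-versions fooOS_0.9 fooOS_0.10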
Home Directory Management
In my model the device's users and their home directories are managed
by
systemd-homed
. This
means they are relatively self-contained and can be migrated easily
between devices. The numeric UID assignment for each user is done at
the moment of login only, and the files in the home directory are
mapped as needed via a uidmap
mount. It also allows us to protect
the data of each user individually with a credential that belongs to
the user themselves, i.e. instead of binding confidentiality of the user's
data to the system-wide full-disk-encryption each user gets their own
encrypted home directory where the user's authentication token
(password, FIDO2 token, PKCS#11 token, recovery key…) is used as
authentication and decryption key for the user's data. This brings
a major improvement for security as it means the user's data is
cryptographically inaccessible except when the user is actually logged
in.
It also allows us to correct another major issue with traditional Linux systems: the way data encryption works during system suspend. Traditionally on Linux the disk encryption credential (e.g. the LUKS passphrase) is kept in memory even while the system is suspended. This is a bad choice for security, since many (most?) of us probably never turn off their laptop but suspend it instead. And if the decryption key is always present in unencrypted form while the system is suspended, it could potentially be read from there by a sufficiently equipped attacker.
By encrypting the user's home directory with the user's authentication token we can first safely "suspend" the home directory before going to the system suspend state (i.e. flush out the cryptographic keys needed to access it). This means any process currently accessing the home directory will be frozen for the time of the suspend, but that's expected anyway during a system suspend cycle. Why is this better than the status quo ante? In this model the home directory's cryptographic key material is erased during suspend, but it can be safely reacquired on resume, from system code. If the system is only encrypted as a whole however, then the system code itself couldn't reauthenticate the user, because it would be frozen too. By separating home directory encryption from the root file system encryption we can avoid this problem.
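As a rough sketch of what that looks like in practice (user name and sizes are made up), such a home directory is created via homectl:

# create a LUKS-encrypted, btrfs-backed home directory managed by systemd-homed;
# the user's password doubles as the decryption credential for it
sudo homectl create lennart --storage=luks --fs-type=btrfs --disk-size=100G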
Partition Setup
So we discussed the partition organization of the OS images multiple times in the above, each time focusing on a specific aspect. Let's now summarize how this should all look together.
In my model, the initial, shipped OS image should look roughly like this:
- (1) A UEFI System Partition, with systemd-boot as boot loader and one unified kernel
- (2) A /usr/ partition (version "A"), with a label fooOS_0.7 (under the assumption we called our project fooOS and the image version is 0.7).
- (3) A Verity partition for the /usr/ partition (version "A"), with the same label
- (4) A partition carrying the Verity root hash for the /usr/ partition (version "A"), along with a PKCS#7 signature of it, also with the same label
On first boot this is augmented by systemd-repart like this (a rough sketch of a matching repart.d definition follows the list):
- (5) A second /usr/ partition (version "B"), initially with a label _empty (which is the label systemd-sysupdate uses to mark partitions that currently carry no valid payload)
- (6) A Verity partition for that (version "B"), similar to the above case, also labelled _empty
- (7) And ditto a Verity root hash partition with a PKCS#7 signature (version "B"), also labelled _empty
- (8) A root file system, encrypted and locked to the TPM2
- (9) A home file system, integrity protected via a key also in TPM2 (encryption is unnecessary, since systemd-homed adds that on its own, and it's nice to avoid duplicate encryption)
- (10) A swap partition, encrypted and locked to the TPM2
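As a rough sketch of what the definition for one of these first-boot partitions could look like (the file name and exact values are made up; the home and swap partitions would get analogous drop-ins):

# /usr/lib/repart.d/60-root.conf: create the root file system on first boot,
# bind its encryption to the TPM2, and allow it to be wiped on factory reset
[Partition]
Type=root
Format=btrfs
Encrypt=tpm2
FactoryReset=yes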
Then, on the first OS update the partitions 5, 6, 7 are filled with a
new version of the OS (let's say 0.8
) and thus get their label
updated to fooOS_0.8
. After a boot, this version is active.
On a subsequent update the three partitions fooOS_0.7
get wiped and
replaced by fooOS_0.9
and so on.
On factory reset, the partitions 8, 9, 10 are deleted, so that
systemd-repart
recreates them, using a new set of cryptographic
keys.
Here's a graphic that hopefully illustrates the partition table's evolution from the shipped image, through first boot, multiple update cycles and an eventual factory reset:
Trust Chain
So let's summarize the intended chain of trust (for bare metal/VM boots) that ensures every piece of code in this model is signed and validated, and any system secret is locked to TPM2.
- First, firmware (or possibly shim) authenticates systemd-boot.
- Once systemd-boot picks a unified kernel image to boot, it is also authenticated by firmware/shim.
- The unified kernel image contains an initrd, which is the first userspace component that runs. It finds any system extensions passed into the initrd, and sets them up through Verity. The kernel will validate the Verity root hash signature of these system extension images against its usual keyring.
- The initrd also finds credentials passed in, then securely unlocks (which means: decrypts + authenticates) them with a secret from the TPM2 chip, locked to the kernel image itself.
- The kernel image also contains a kernel command line which contains a usrhash= option that pins the root hash of the /usr/ partition to use.
- The initrd then unlocks the encrypted root file system, with a secret bound to the TPM2 chip.
- The system then transitions into the main system, i.e. the combination of the Verity protected /usr/ and the encrypted root file system. It then activates two more encrypted (and/or integrity protected) volumes for /home/ and swap, also with a secret tied to the TPM2 chip.
Here's an attempt to illustrate the above graphically:
This is the trust chain of the basic OS. Validation of system
extension images, portable service images, systemd-nspawn
container
images always takes place the same way: the kernel validates these
Verity images along with their PKCS#7 signatures against the kernel's
keyring.
File System Choice
In the above I left the choice of file systems unspecified. For the
immutable /usr/
partitions squashfs
might be a good candidate, but
any other that works nicely in a read-only fashion and generates
reproducible results is a good choice, too. The home directories as managed
by systemd-homed
should certainly use btrfs
, because it's the only
general purpose file system supporting online grow and shrink, which
systemd-homed
can take benefit of, to manage storage.
For the root file system btrfs
is likely also the best idea. That's
because we intend to use LUKS/dm-crypt
underneath, which by default
only provides confidentiality, not authenticity of the data (unless
combined with dm-integrity
). Since btrfs
(unlike xfs/ext4) does
full data checksumming it's probably the best choice here, since it
means we don't have to use dm-integrity
(which comes at a higher
performance cost).
OS Installation vs. OS Instantiation
In the discussion above a lot of focus was put on setting up the OS
and completing the partition layout and such on first boot. This means
installing the OS becomes as simple as dd
-ing (i.e. "streaming") the
shipped disk image into the final HDD medium. Simple, isn't it?
Of course, such a scheme is just too simple for many setups in real
life. Whenever multi-boot is required (i.e. co-installing an OS
implementing this model with another unrelated one), dd
-ing a disk
image onto the HDD is going to overwrite user data that was supposed
to be kept around.
In order to cover for this case, in my model, we'd use
systemd-repart
(again!) to allow streaming the source disk image
into the target HDD in a smarter, additive way. The tool after all is
purely additive: it will add in partitions or grow them if they are
missing or too small. systemd-repart
already has all the necessary
provisions to not only create a partition on the target disk, but also
copy blocks from a raw installer disk. An install operation would then
become a two-step process: one invocation of systemd-repart
that
adds in the /usr/
, its Verity and the signature partition to the
target medium, populated with a copy of the same partition of the
installer medium. And one invocation of bootctl
that installs the
systemd-boot
boot loader in the ESP. (Well, there's one thing
missing here: the unified OS kernel also needs to be dropped into the
ESP. For now, this can be done with a simple cp
call. In the long
run, this should probably be something bootctl
can do as well, if
told so.)
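Here's a rough sketch of what that install step could look like (device name, file names and values are made up; the Verity and signature partitions would need analogous definitions):

# repart.d definition shipped on the installer medium: create the /usr/ A slot
# on the target disk and populate it from the matching installer partition
[Partition]
Type=usr
Label=fooOS_0.7
CopyBlocks=auto

# then, pointed at the target disk, plus boot loader installation:
sudo systemd-repart --dry-run=no /dev/sda
sudo bootctl install --esp-path=/path/to/target/esp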
So, with this we have a simple scheme to cover all bases: we
can either just dd
an image to disk, or we can stream an image onto
an existing HDD, adding a couple of new partitions and files to the
ESP.
Of course, in reality things are more complex than that even: there's
a good chance that the existing ESP is simply too small to carry
multiple unified kernels. In my model, the way to address this is by
shipping two slightly different systemd-repart
partition definition
file sets: the ideal case when the ESP is large enough, and a
fallback case, where it isn't and where we then add in an additional
XBOOTLDR partition (as per the Discoverable Partitions
Specification). In that mode the ESP carries the boot loader, but the
unified kernels are stored in the XBOOTLDR partition. This scenario is
not quite as simple as the XBOOTLDR-less scenario described first, but
is equally well supported in the various tools. Note that
systemd-repart
can be told size constraints on the partitions it
shall create or augment, thus to implement this scheme it's enough to
invoke the tool with the fallback partition scheme if invocation with
the ideal scheme fails.
Either way: regardless how the partitions, the boot loader and the
unified kernels ended up on the system's hard disk, on first boot the
code paths are the same again: systemd-repart
will be called to
augment the partition table with the root file system, and properly
encrypt it, as was already discussed earlier here. This means: all
cryptographic key material used for disk encryption is generated on
first boot only, the installer phase does not encrypt anything.
Live Systems vs. Installer Systems vs. Installed Systems
Traditionally on Linux three types of systems were common: "installed" systems, i.e. those that are stored on the main storage of the device and are the primary place people spend their time in; "installer" systems which are used to install them and whose job is to copy and set up the packages that make up the installed system; and "live" systems, which were a middle ground: a system that behaves like an installed system in most ways, but lives on removable media.
In my model I'd like to remove the distinction between these three
concepts as much as possible: each of these three images should carry
the exact same /usr/
file system, and should be suitable to be
replicated the same way. Once installed the resulting image can also
act as an installer for another system, and so on, creating a certain
"viral" effect: if you have one image or installation it's
automatically something you can replicate 1:1 with a simple
systemd-repart
invocation.
Building Images According to this Model
The above explains what the image should look like and how its first boot and update cycle will modify it. But this leaves one question unanswered: how to actually build the initial image for OS instances according to this model?
Note that there's nothing too special about the images following this model: they are ultimately just GPT disk images with Linux file systems, following the Discoverable Partition Specification. This means you can use any set of tools of your choice that can put together GPT disk images for compliant images.
I personally would use mkosi
for
this purpose though. It's designed to generate compliant images, and
has a rich toolset for SecureBoot and signed/Verity file systems
already in place.
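To give a rough idea, a minimal mkosi configuration for such an image could look something like this (a sketch only; section and option names have shifted between mkosi versions, so treat this as illustrative rather than authoritative):

# mkosi.conf (illustrative)
[Distribution]
Distribution=fedora

[Output]
Format=disk

[Content]
Bootable=yes
Packages=systemd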
What is key here is that this model doesn't depart from RPM and dpkg, instead it builds on top of that: in this model they are excellent for putting together images on the build host, but deployment onto the runtime host does not involve individual packages.
I think one should not underestimate the value traditional distributions bring, regarding security, integration and general polishing. The concepts I describe above are inherited from this, but depart from the idea that distribution packages are a runtime concept and make it a build-time concept instead.
Note that the above is pretty much independent from the underlying distribution.
Final Words
I have no illusions, general purpose distributions are not going to adopt this model as their default any time soon, and it's not even my goal that they do that. The above is my personal vision, and I don't expect people to buy into it 100%, and that's fine. However, what I am interested in is finding the overlaps, i.e. work with people who buy 50% into this vision, and share the components.
My goals here thus are to:
- Get distributions to move to a model where images like this can be built from the distribution easily. Specifically this means that distributions make their OS hermetic in /usr/.
- Find the overlaps, share components with other projects to revisit how distributions are put together. This is already happening, see systemd-tmpfiles and systemd-sysusers support in various distributions, but I think there's more to share.
- Make people interested in building actual real-world images based on general purpose distributions adhering to the model described above. I'd love a "GnomeBook" image with full trust properties, that is built from true Linux distros, such as Fedora or ArchLinux.
FAQ
- What about ostree? Doesn't ostree already deliver what this blog story describes?
ostree is fine technology, but in respect to security and robustness properties it's not too interesting I think, because unlike image-based approaches it cannot really deliver integrity/robustness guarantees over the whole tree easily. To be able to trust an ostree setup you have to establish trust in the underlying file system first, and the complexity of the file system makes that challenging. To provide an effective offline-secure trust chain through the whole depth of the stack it is essential to cryptographically validate every single I/O operation. In an image-based model this is trivially easy, but in the ostree model it's not possible with current file system technology, and even if this is added in one way or another in the future (though I am not aware of anyone doing on-access file-based integrity that spans a whole hierarchy of files and is compatible with ostree's hardlink farm model) I think validation would still happen at too high a level, since Linux file system developers made very clear their implementations are not robust to rogue images. (There's this stuff planned, but doing structural authentication ahead of time instead of on access makes the idea too weak — and, I'd expect, too slow — in my eyes.)
With my design I want to deliver similar security guarantees as ChromeOS does, but ostree is much weaker there, and I see no perspective of this changing. In a way ostree's integrity checks are similar to RPM's and enforced on download rather than on access. In the model I suggest above, it's always on access, and thus safe against offline attacks (i.e. evil maid attacks). In today's world, I think offline security is absolutely necessary though.
That said, ostree does have some benefits over the model described above: it naturally shares file system inodes if many of the modules/images involved share the same data. It's thus more space efficient on disk (and thus also in RAM/cache to some degree) by default. In my model it would be up to the image builders to minimize shipping overly redundant disk images, by making good use of suitably composable system extensions.
- What about configuration management?
At first glance immutable systems and configuration management don't go that well together. However, do note that in the model I propose above the root file system with all its contents, including /etc/ and /var/, is actually writable and can be modified like on any other typical Linux distribution. The only exception is /usr/, where the immutable OS is hermetic. That means configuration management tools should work just fine in this model – up to the point where they are used to install additional RPM/dpkg packages, because that's something not allowed in the model above: packages need to be installed at image build time and thus on the image build host, not the runtime host.
- What about non-UEFI and non-TPM2 systems?
The above is designed around the feature set of contemporary PCs, and this means UEFI and TPM2 being available (simply because the PC is pretty much defined by the Windows platform, and current versions of Windows require both).
I think it's important to make the best of the features of today's PC hardware, and then find suitable fallbacks on more limited hardware. Specifically this means: if there's a desire to implement something like this on non-UEFI or non-TPM2 hardware, we should look for suitable fallbacks for the individual functionality, but generally try to add glue to the old systems so that conceptually they behave more like the new systems instead of the other way round. Or in other words: most of the above is not strictly tied to UEFI or TPM2, and in many cases there are already reasonable fallbacks in place for more limited systems. Of course, without TPM2 many of the security guarantees will be weakened.
- How would you name an OS built that way?
I think a desktop OS built this way, if it has the GNOME desktop, should of course be called GnomeBook, to mimic the ChromeBook name. ;-)
But in general, I'd call hermetic, adaptive, immutable OSes like this "particles".
How can you help?
- Help making Distributions Hermetic in /usr/!
One of the core ideas of the approach described above is to make the OS hermetic in /usr/, i.e. make it carry a comprehensive description of what needs to be set up outside of it when instantiated. Specifically, this means that system users that are needed are declared in systemd-sysusers snippets, and skeleton files and directories are created via systemd-tmpfiles. Moreover, additional partitions should be declared via systemd-repart drop-ins.
At this point some distributions (such as Fedora) are (probably more by accident than on purpose) already mostly hermetic in /usr/, at least for the most basic parts of the OS. However, this is not complete: many daemons require specific resources to be set up in /var/ or /etc/ before they can work, and the relevant packages do not carry systemd-tmpfiles descriptions that add them if missing. So there are two ways you could help here: politically, it would be highly relevant to convince distributions that an OS that is hermetic in /usr/ is highly desirable and that it's a worthy goal for packagers to get there. More specifically, it would be desirable if RPM/dpkg packages would ship with enough systemd-tmpfiles information so that configuration files the packages strictly need for operation are symlinked (or copied) from /usr/share/factory/ if they are missing (even better of course would be if packages, straight from their upstream sources, would just work with an empty /etc/ and /var/, and create what they need themselves, defaulting to good defaults in the absence of configuration files).
Note that distributions that adopted systemd-sysusers, systemd-tmpfiles and the /usr/ merge are already quite close to providing an OS that is hermetic in /usr/. These were the big, major advancements: making the image fully hermetic should be less controversial – at least that's my guess.
Also note that making the OS hermetic in /usr/ is not just useful in scenarios like the above. It also means that stuff like this and like this can work well.
- Fill in the gaps!
I already mentioned a couple of missing bits and pieces in the implementation of the overall vision. In the systemd project we'd be delighted to review/merge any PRs that fill in the voids.
- Build your own OS like this!
Of course, while we built all these building blocks and they have been adopted to various levels and for various purposes in the various distributions, no one so far has built an OS that puts things together just like that. It would be excellent if we had communities that work on building images like what I propose above, i.e. if you want to work on making a secure GnomeBook as I suggest above a reality, that would be more than welcome.
What could this look like specifically? Pick an existing distribution, write a set of mkosi descriptions plus some additional drop-in files, and then build this on some build infrastructure. While doing so, report the gaps, and help us address them.
Further Documentation of Used Components and Concepts
- systemd-tmpfiles
- systemd-sysusers
- systemd-boot
- systemd-stub
- systemd-sysext
- systemd-portabled, Portable Services Introduction
- systemd-repart
- systemd-nspawn
- systemd-sysupdate
- systemd-creds, System and Service Credentials
- systemd-homed
- Automatic Boot Assessment
- Boot Loader Specification
- Discoverable Partitions Specification
- Safely Building Images
Earlier Blog Stories Related to this Topic
- The Strange State of Authenticated Boot and Disk Encryption on Generic Linux Distributions
- The Wondrous World of Discoverable GPT Disk Images
- Unlocking LUKS2 volumes with TPM2, FIDO2, PKCS#11 Security Hardware on systemd 248
- Portable Services with systemd v239
- mkosi — A Tool for Generating OS Images
And that's all for now.
I recently
blogged
about how to run a volatile systemd-nspawn
container from your
host's /usr/
tree, for quickly testing stuff in your host
environment, sharing your home directory, but all that without making a
single modification to your host, and on an isolated node.
The one-liner discussed in that blog story is great for testing during
system software development. Let's have a look at another systemd
tool that I regularly use to test things during systemd
development,
in a relatively safe environment, but still taking full benefit of my
host's setup.
For a while now, systemd has been shipping with a simple component
called
systemd-sysext
. Its
primary use case goes something like this: on the one hand, OSes with
immutable /usr/
hierarchies are fantastic for security, robustness,
updating and simplicity, but on the other hand not being able to
quickly add stuff to /usr/
is just annoying.
systemd-sysext
is supposed to bridge this contradiction: when
invoked it will merge a bunch of "system extension" images into
/usr/
(and /opt/
as a matter of fact) through the use of read-only
overlayfs
, making all files shipped in the image instantly and
atomically appear in /usr/
during runtime — as if they always had
been there. Now, let's say you are building your locked down OS, with
an immutable /usr/
tree, and it comes without the ability to log in, without debugging
tools, without anything you want and need when
trying to debug and fix something in the system. With systemd-sysext
you could use a system extension image that contains all this, drop it
into the system, and activate it with systemd-sysext
so that it
genuinely extends the host system.
(There are many other use cases for this tool: for example, you could build systems that at their base use a generic image, but that by installing one or more system extensions get extended with additional, more specific functionality, or drivers, or similar. The tool is generic, use it for whatever you want, but for now let's not get lost in listing all the possibilities.)
What's particularly nice about the tool is that it supports
automatically discovered dm-verity
images, with signatures and
everything. So you can even do this in a fully authenticated,
measured, safe way. But I am digressing…
Now that we (hopefully) have a rough understanding what
systemd-sysext
is and does, let's discuss how specifically we can
use this in the context of system software development, to safely use
and test bleeding edge development code — built freshly from your
project's build tree – in your host OS without having to risk that the
host OS is corrupted or becomes unbootable by stuff that didn't quite
yet work the way it was envisioned:
The images systemd-sysext
merges into /usr/
can be of two kinds:
disk images with a file system/verity/signature, or simple, plain
directory trees. To make these images available to the tool, they can
be placed or symlinked into /usr/lib/extensions/
,
/var/lib/extensions/
, /run/extensions/
(and a bunch of
others). So if we now install our freshly built development software
into a subdirectory of those paths, then that's entirely sufficient to
make them valid system extension images in the sense of
systemd-sysext
, and thus can be merged into /usr/
to try them out.
To be more specific: when I develop systemd
itself, here's what I do
regularly, to see how my new development version would behave on my
host system. As preparation I checked out the systemd development git
tree first of course, hacked around in it a bit, then built it with
meson/ninja. And now I want to test what I just built:
sudo DESTDIR=/run/extensions/systemd-test meson install -C build --quiet --no-rebuild &&
sudo systemd-sysext refresh --force
Explanation: first, we'll install my current build tree as a system
extension into /run/extensions/systemd-test/
. And then we apply it
to the host via the systemd-sysext refresh
command. This command
will search for all installed system extension images in the
aforementioned directories, then unmount (i.e. "unmerge") any
previously merged dirs from /usr/
and then freshly mount
(i.e. "merge") the new set of system extensions on top of /usr/
. And
just like that, I have installed my development tree of systemd
into
the host OS, and all that without actually modifying/replacing even a
single file on the host at all. Nothing here actually hit the disk!
Note that all this works on any system really, it is not necessary
that the underlying OS even is designed with immutability in
mind. Just because the tool was developed with immutable systems in
mind it doesn't mean you couldn't use it on traditional systems where
/usr/
is mutable as well. In fact, my development box actually runs
regular Fedora, i.e. is RPM-based and thus has a mutable /usr/
tree. As long as system extensions are applied the whole of /usr/
becomes read-only though.
Once I am done testing, when I want to revert to how things were without the image installed, it is sufficient to call:
sudo systemd-sysext unmerge
And there you go, all files my development tree generated are gone
again, and the host system is as it was before (and /usr/
mutable
again, in case one is on a traditional Linux distribution).
Also note that a reboot (regardless if a clean one or an abnormal
shutdown) will undo the whole thing automatically, since we installed
our build tree into /run/
after all, i.e. a tmpfs
instance that is
flushed on boot. And given that the overlayfs
merge is a runtime
thing, too, the whole operation was executed without any
persistence. Isn't that great?
(You might wonder why I specified --force
on the systemd-sysext
refresh
line earlier. That's because systemd-sysext
actually does
some minimal version compatibility checks when applying system
extension images. For that it will compare the host's
/etc/os-release
file with the image's
/usr/lib/extension-release.d/extension-release.<name>
, and refuse
operation if the image is not actually built for the host OS
version. Here we don't want to bother with dropping that file in
there, we know already that the extension image is compatible with
the host, as we just built it on it. --force
allows us to skip the
version check.)
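(Alternatively, instead of using --force, one can make the extension image explicitly claim compatibility with the host by shipping a matching extension-release file in it. A rough sketch, reusing the directory from above:)

# make the extension claim the host's OS version as its compatibility target
sudo mkdir -p /run/extensions/systemd-test/usr/lib/extension-release.d
sudo cp /etc/os-release \
    /run/extensions/systemd-test/usr/lib/extension-release.d/extension-release.systemd-test
sudo systemd-sysext refresh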
You might wonder: what about the combination of the idea from the
previous blog story (regarding running containers off the host
/usr/
tree) with system extensions? Glad you asked. Right now we
have no support for this, but it's high on our TODO list (patches
welcome, of course!). i.e. a new switch for systemd-nspawn
called
--system-extension=
that would allow merging one or more such
extensions into the booted container tree would be stellar. With that,
with a single command I could run a container off my host OS but with
a development version of systemd dropped in, all without any
persistence. How awesome would that be?
(Oh, and in case you wonder, all of this only works with distributions
that have completed the /usr/
merge. On legacy distributions that
didn't do that and still place parts of /usr/
all over the hierarchy
the above won't work, since merging /usr/
trees via overlayfs
is
pretty pointless if the OS is not hermetic in /usr/
.)
And that's all for now. Happy hacking!
Apparently, in some parts of this
world, the /usr/
-merge
transition is still ongoing. Let's take the opportunity to have a look
at one specific way to take benefit of the /usr/
-merge (and
associated work) IRL.
I develop system-level software as you might know. Oftentimes I want
to run my development code on my PC but be reasonably sure it cannot
destroy or otherwise negatively affect my host system. Now I could set
up a container tree for that, and boot into that. But often I am too
lazy for that, I don't want to bother with a slow package manager
setting up a new OS tree for me. So here's what I often do instead —
and this only works because of the /usr/
-merge.
I run a command like the following (without any preparatory work):
systemd-nspawn \
--directory=/ \
--volatile=yes \
-U \
--set-credential=passwd.hashed-password.root:$(mkpasswd mysecret) \
--set-credential=firstboot.locale:C.UTF-8 \
--bind-user=lennart \
-b
And then I very quickly get a login prompt on a container that runs
the exact same software as my host — but is also isolated from the
host. I do not need to prepare any separate OS tree or anything
else. It just works. And my host user lennart
is just there,
ready for me to log into.
So here's what these
systemd-nspawn
options specifically do:
- --directory=/ tells systemd-nspawn to run off the host OS' file hierarchy. That smells like danger of course, running two OS instances off the same directory hierarchy. But don't be scared, because:
- --volatile=yes enables volatile mode. Specifically this means what we configured with --directory=/ as root file system is slightly rearranged. Instead of mounting that tree as it is, we'll mount a tmpfs instance as the actual root file system, and then mount the /usr/ subdirectory of the specified hierarchy into the /usr/ subdirectory of the container file hierarchy in read-only fashion – and only that directory. So now we have a container directory tree that is basically empty, but imports all host OS binaries and libraries into its /usr/ tree. All software installed on the host is also available in the container with no manual work. This mechanism only works because on /usr/-merged OSes vendor resources are monopolized at a single place: /usr/. It's sufficient to share that one directory with the container to get a second instance of the host OS running. Note that this means /etc/ and /var/ will be entirely empty initially when this second system boots up. Thankfully, forward looking distributions (such as Fedora) have adopted systemd-tmpfiles and systemd-sysusers quite pervasively, so that system users and files/directories required for operation are created automatically should they be missing. Thus, even though at boot the mentioned directories are initially empty, once the system is booted up they are sufficiently populated for things to just work.
- -U means we'll enable user namespacing, in fully automatic mode. This does three things: it picks a free host UID range dynamically for the container, then sets up user namespacing for the container processes, mapping the host UID range to UIDs 0…65534 in the container. It then sets up a similar UID mapped mount on the /usr/ tree of the container. Net effect: file ownerships as set on the host OS tree appear as if they belong to the very same users inside of the container environment, except that we use user namespacing for everything, and thus the users are actually neatly isolated from the host.
- --set-credential=passwd.hashed-password.root:$(mkpasswd mysecret) passes a credential to the container. Credentials are bits of data that you can pass to systemd services and whole systems. They are actually awesome concepts (e.g. they support TPM2 authentication/encryption that just works!) but I am not going to go into details around that, given it's off-topic in this specific scenario. Here we just take benefit of the fact that systemd-sysusers looks for a credential called passwd.hashed-password.root to initialize the root password of the system from. We set it to mysecret. This means once the system is booted up we can log in as root with the supplied password. Yay. (Remember, /etc/ is initially empty on this container, and thus also carries no /etc/passwd or /etc/shadow, and thus has no root user record, and thus no root password.) mkpasswd is a tool that converts a plain text password into a UNIX hashed password, which is what this specific credential expects.
- Similarly, --set-credential=firstboot.locale:C.UTF-8 tells the systemd-firstboot service in the container to initialize /etc/locale.conf with this locale.
- --bind-user=lennart binds the host user lennart into the container, also as user lennart. This does two things: it mounts the host user's home directory into the container. It also copies a minimal user record of the specified user into the container, which nss-systemd then picks up and includes in the regular user database. This means, once the container is booted up I can log in as lennart with my regular password, and once I logged in I will see my regular host home directory, and can make changes to it. Yippieh! (This does a couple more things, such as UID mapping and so on, but let's not get lost in too many details.)
So, if I run this, I will very quickly get a login prompt, where I can
log into as my regular user. I have full access to my host home
directory, but otherwise everything is nicely isolated from the host,
and changes outside of the home directory are either prohibited or are
volatile, i.e. go to a tmpfs
instance whose lifetime is bound to the
container's lifetime: when I shut down the container I just started,
then any changes outside of my user's home directory are lost.
Note that while here I use --volatile=yes
in combination with
--directory=/
you can actually use it on any OS hierarchy, i.e. just
about any directory that contains OS binaries.
Similar, the --bind-user=
stuff works with any OS hierarchy too (but
do note that only systemd 249 and newer will pick up the user records
passed to the container that way, i.e. this requires at least v249
both on the host and in the container to work).
Or in short: the possibilities are endless!
Requirements
For this all to work, you need:
- A recent kernel (5.15 should suffice, as it brings UID mapped mounts for the most common file systems, so that -U and --bind-user= can work well.)
- A recent systemd (249 should suffice, which brings --bind-user=, and a -U switch backed by UID mapped mounts).
- A distribution that adopted the /usr/-merge, systemd-tmpfiles and systemd-sysusers so that the directory hierarchy and user databases are automatically populated when empty at boot. (Fedora 35 should suffice.)
Limitations
While a lot of today's software actually out of the box works well on
systems that come up with an unpopulated /etc/
and /var/
, and
either fall back to reasonable built-in defaults, or deploy
systemd-tmpfiles
to create what is missing, things aren't perfect:
some software typically installed on desktop OSes will fail to start
when invoked in such a container, and be visible as ugly failed
services, but it won't stop me from logging in and using the system
for what I want to use it for. It would be excellent to get that fixed,
though. This can either be fixed in the relevant software upstream
(i.e. if opening your configuration file fails with ENOENT
, then
just default to reasonable defaults), or in the distribution packaging
(i.e. add a
tmpfiles.d/
file that copies or symlinks in skeleton configuration from
/usr/share/factory/etc/
via the C
or L
line types).
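As a rough sketch of the latter (the package and file names are made up):

# /usr/lib/tmpfiles.d/foo.conf: copy the skeleton configuration into /etc/
# if it is missing (or use an L line to symlink it instead)
C /etc/foo.conf - - - - /usr/share/factory/etc/foo.conf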
And then there's certain software dealing with hardware management and
similar that simply cannot reasonably work in a container (as device
APIs on Linux are generally not virtualized for containers). It
would be excellent if software like that would be updated to carry
ConditionVirtualization=!container
or
ConditionPathIsReadWrite=/sys
conditionalization in their unit
files, so that it is automatically – cleanly – skipped when executed
in such a container environment.
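As a sketch, such a conditionalization can also be added locally via a drop-in, without touching the upstream unit (the service name is made up):

# /etc/systemd/system/foo-hw-daemon.service.d/50-container.conf
[Unit]
ConditionVirtualization=!container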
And that's all for now.