Introduction
NixOS-based firmware image with atomic A/B OTA updates, automatic rollback, and watchdog integration (currently disabled on Rock64 during development).
AtomicNix is a purpose-built firmware platform for Rock64 (RK3328, aarch64) edge gateway devices. Each device serves as a network security boundary compliant with EN18031, isolating legacy LAN devices from the internet while supporting provisioned application containers and Nixstasis-hosted remote management.
Why AtomicNix?
Remote embedded devices that receive over-the-air updates face a fundamental reliability problem: if an update fails
mid-write or the new image doesn’t boot correctly, the device is bricked. Traditional package-manager approaches (e.g.,
apt upgrade) have a measurable failure rate from power loss, partial writes, and dependency conflicts.
AtomicNix eliminates this class of failure through:
- Atomic A/B updates – installs to the inactive slot pair while the active slot stays online; no partial state
- Automatic rollback – U-Boot boot-count logic falls back to the previous working slot after 3 consecutive boot failures
- Hardware watchdog (currently disabled on Rock64) – integration and tests are in place; runtime enablement is pending final boot-stability validation on hardware
- Local health-check confirmation – commits new slots only after verifying that all services and containers are healthy for a sustained 60-second window
- Signed RAUC bundles – reproducible, CA-signed .raucb artifacts built from the Nix flake
- Read-only root filesystem – squashfs rootfs with OverlayFS (tmpfs upper layer) prevents runtime drift; every boot starts from a known-good state
Supported Hardware
| Board | SoC | Architecture | Storage |
|---|---|---|---|
| Rock64 | RK3328 | aarch64 | 16 GB eMMC |
Key Properties
- Reproducible – the entire system image is built from a single Nix flake with pinned inputs; same flake, same image
- Immutable – the squashfs root filesystem is read-only; writable state lives on a dedicated /data partition
- Testable – a NixOS VM integration test suite covers the update lifecycle, provisioning paths, forensic log durability, network security, and rollback behavior without physical hardware
- EN18031 compliant – ships without default credentials; per-device credentials are provisioned at factory time; IP forwarding is disabled by default
Network Role
Each AtomicNix device acts as a gateway between an isolated LAN and the internet:
- WAN (eth0): DHCP client, deny-by-default inbound; application/VPN ports are provisioned explicitly
- LAN (eth1): Provisioned static IP, runs DHCP/DNS server (dnsmasq) and NTP server (chrony) for local devices
- No routing: IP forwarding is disabled; LAN devices have zero internet access
- Remote management: Nixstasis-hosted management and SSH key-only access; bootstrap stays LAN-local
Quick Start
# Build the flashable disk image set
mise run build
# Flash to eMMC (macOS)
mise run flash /dev/disk4
# Run all E2E tests
mise run e2e
# Run all E2E tests inside a Lima VM
mise run e2e --lima
See Building and Provisioning for detailed instructions.
Architecture
AtomicNix combines several architectural patterns to achieve reliable over-the-air updates on embedded hardware:
- A/B partition scheme with paired boot and rootfs slots
- Read-only squashfs rootfs with OverlayFS root (squashfs lower + tmpfs upper) for runtime state
- U-Boot boot-count rollback with watchdog integration (currently disabled on Rock64 during development)
- Network isolation with no IP forwarding between WAN and LAN interfaces
- EN18031-compliant authentication with no embedded credentials
This chapter covers each of these in detail. For the rationale behind specific design choices, see Design Decisions.
Partition Layout
The Rock64’s 16 GB eMMC uses a fixed A/B partition layout with raw U-Boot at the beginning and a persistent data
partition at the end. The flash image carries slot A only; initrd systemd-repart creates slot B and /data on first
boot.
General host and application logging stays tmpfs-first during runtime and is
forwarded through an rsyslog RAM queue before buffered appends land in
/data/logs.
Layout
| Offset | Size | Content | Filesystem | Notes |
|---|---|---|---|---|
| 0 | 16 MB | U-Boot | raw | idbloader @ sector 64, u-boot.itb @ sector 16384 |
| 16 MB | 128 MB | boot-a | vfat | kernel Image, initrd, DTB, boot.scr |
| 144 MB | 1024 MB | rootfs-a | squashfs | zstd compressed, 1 MB blocks; used as OverlayFS lower layer |
| 1168 MB | 128 MB | boot-b | vfat | created on first boot by initrd systemd-repart |
| 1296 MB | 1024 MB | rootfs-b | -- | created on first boot by initrd systemd-repart |
| 2320 MB | remaining | data | f2fs | created on first boot by initrd systemd-repart |
Slot Pairing
RAUC manages two slot pairs. Each pair contains a boot partition and a rootfs partition that are always written together atomically:
| Slot | Boot Partition | Rootfs Partition |
|---|---|---|
| A | boot-a (p1) | rootfs-a (p2) |
| B | boot-b (p3) | rootfs-b (p4) |
An update writes the new kernel/DTB to the inactive boot partition and the new squashfs to the inactive rootfs partition. The active slot pair is never modified during an update.
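The pairing above maps naturally onto RAUC's slot configuration. A minimal sketch of the corresponding `system.conf` slot sections, assuming the partition numbering from the table (the compatible string and slot classes are illustrative, not the shipped file):

```ini
[system]
compatible=atomicnix-rock64   ; illustrative compatible string
bootloader=uboot

[slot.boot.0]
device=/dev/mmcblk0p1         ; boot-a (p1)
type=vfat
parent=rootfs.0               ; written together with its rootfs slot

[slot.rootfs.0]
device=/dev/mmcblk0p2         ; rootfs-a (p2)
type=raw
bootname=A

[slot.boot.1]
device=/dev/mmcblk0p3         ; boot-b (p3)
type=vfat
parent=rootfs.1

[slot.rootfs.1]
device=/dev/mmcblk0p4         ; rootfs-b (p4)
type=raw
bootname=B
```

The `parent=` relationship is what makes each boot partition travel with its rootfs slot during an install.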
U-Boot Region
U-Boot occupies the first 16 MB of the eMMC as raw data (no partition). The RK3328 boot ROM loads the initial bootloader from fixed sector offsets:
| Component | Sector Offset | Byte Offset | Description |
|---|---|---|---|
| idbloader.img | 64 | 32 KB | First-stage loader (TPL + SPL) |
| u-boot.itb | 16384 | 8 MB | U-Boot proper (FIT image) |
U-Boot environment is stored in SPI flash exposed to Linux as /dev/mtd0 at offset 0x140000 with size 0x2000.
AtomicNix uses this single SPI environment for RAUC boot variables instead of raw eMMC environment writes.
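Userspace tools such as libubootenv's fw_printenv/fw_setenv locate the environment through /etc/fw_env.config; given the offsets above, a matching sketch would be:

```
# /etc/fw_env.config (sketch; matches the SPI offsets stated above)
# Device      Env offset   Env size
/dev/mtd0     0x140000     0x2000
```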
Data Partition
The flashable image leaves the space after rootfs-a unallocated. On first boot,
initrd systemd-repart creates boot-b, rootfs-b, and /data there before the
live system is mounted. This avoids repartitioning from the switched-root system
while still preserving the inactive slot and /data across all updates and
rollbacks.
Contents created during provisioning:
/data/
.completed_first_boot First-boot sentinel
config/
ssh-authorized-keys/admin Operator's SSH public key
nixstasis/ Planned enrollment key and agent state
openvpn/client.conf OpenVPN recovery tunnel config (optional)
containers/ Reserved for future application workloads
logs/ Buffered general host and application logs appended from rsyslog
Logging Tiers
AtomicNix uses two runtime logging tiers with different durability goals:
| Tier | Location | Durability and content |
|---|---|---|
| Tier 1 | journald runtime | General host and container logs, tmpfs-first (Storage=volatile, runtime capped) |
| Tier 2 | /data/logs | Buffered rsyslog appends for bounded durable host and application diagnostics |
Network Topology
Each AtomicNix device has two Ethernet interfaces forming a security boundary between the internet and an isolated LAN.
Interface Roles
flowchart LR
WAN["WAN<br/>internet"] --> ETH0["eth0<br/>DHCP client<br/>Deny-by-default inbound"]
LAN["LAN<br/>isolated devices"] --> ETH1["eth1<br/>Provisioned static IP<br/>DHCP/DNS: dnsmasq<br/>NTP: chrony"]
subgraph DEVICE["AtomicNix device"]
direction TB
ETH0
CORE["No IP forwarding<br/>FORWARD chain: DROP all"]
ETH1
APPS["Provisioned application containers<br/>No packet forwarding"]
end
ETH0 -. provisioned inbound ports .-> APPS
APPS -. local service access .-> ETH1
WAN Interface (eth0)
- Mapped to the onboard RK3328 GMAC via a systemd .link file (platform path platform-ff540000.ethernet)
- DHCPv4 client via systemd-networkd
- Uses DHCP-provided DNS servers
- Firewall drops new inbound traffic by default
- Provisioned firewall state may open application or VPN ports from /data/config/firewall-inbound.json
LAN Interface (eth1)
- USB Ethernet adapter (any supported chipset: r8152, ax88179, cdc_ether)
- Static IP: provisioned LAN gateway, falling back to 172.20.30.1/24
- Runs dnsmasq DHCP server from the provisioned range, with fallback 172.20.30.10–172.20.30.254
- Runs chrony NTP server for the provisioned LAN subnet, with fallback 172.20.30.0/24
- Runs gateway-local DNS only: dnsmasq serves local names on port 53 and does not forward upstream
Isolation Model
IP forwarding is explicitly disabled at the kernel level:
boot.kernel.sysctl = {
"net.ipv4.ip_forward" = 0;
"net.ipv6.conf.all.forwarding" = 0;
};
The nftables FORWARD chain has a drop policy with no exceptions. LAN devices get DHCP, DNS, NTP, SSH, and first-boot
bootstrap access on eth1, but no packet-level internet routing. WAN application or VPN exposure is created only from
provisioned firewall state.
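In nftables terms, the forwarding posture amounts to a FORWARD chain with a drop policy and no accept rules, roughly (table and chain names illustrative):

```
table inet filter {
  chain forward {
    type filter hook forward priority 0; policy drop;
    # no rules: nothing is ever forwarded between eth0 and eth1
  }
}
```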
NIC Naming
Deterministic interface naming uses systemd .link files rather than udev rules:
| Link File | Match | Name |
|---|---|---|
| 10-onboard-eth | Platform path platform-ff540000.ethernet | eth0 |
| 20-usb-eth | USB Ethernet drivers (r8152, ax88179, cdc_ether; enabled as modules in the Rock64 kernel config) | kernel-assigned (eth1, eth2, …) |
| WiFi | Unsupported until hardware selection | not part of current Rock64 image |
The onboard Ethernet is always eth0 regardless of USB device enumeration order. USB Ethernet adapters receive
kernel-assigned names (e.g., eth1, eth2).
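A systemd .link file pinning the onboard GMAC to eth0 would look roughly like this; the match path comes from the table above, but the exact shipped file may differ:

```
# 10-onboard-eth.link (sketch)
[Match]
Path=platform-ff540000.ethernet

[Link]
Name=eth0
```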
Firewall Summary
| Interface | Direction | Allowed Ports |
|---|---|---|
| eth0 (WAN) | Inbound | provisioned firewall ports only |
| eth0 (WAN) | Inbound | TCP 22 (SSH) – only with flag file |
| eth1 (LAN) | Inbound | UDP 53, UDP 67-68, UDP 123, TCP 22, TCP 53, TCP 8080 |
| tun0 (VPN) | Inbound | TCP 22 (SSH) |
| any | Forward | DROP (no exceptions) |
Provisioned WAN ports come from /data/config/firewall-inbound.json. SSH on WAN is controlled by the presence of
/data/config/ssh-wan-enabled. See the Firewall module for implementation
details.
Update & Rollback Flow
AtomicNix uses RAUC for A/B slot management combined with U-Boot boot-count logic and watchdog integration for automatic recovery from failed updates.
Normal Update Cycle
sequenceDiagram
participant Upgrade as os-upgrade.service
participant RAUC
participant Boot as U-Boot
participant Verify as os-verification.service
Upgrade->>RAUC: Poll update server with compact X-Device-ID and install new bundle
RAUC->>RAUC: Write boot + rootfs to the inactive slot pair
RAUC->>Boot: Set BOOT_ORDER=B A and BOOT_B_LEFT=3
Boot->>Boot: Reboot into updated slot and decrement BOOT_B_LEFT
Boot->>Verify: Start updated system
Verify->>Verify: Check network, services, and 60s stability
alt Checks pass
Verify->>RAUC: rauc status mark-good
RAUC->>Boot: Commit updated slot
else Checks fail across 3 boots
Verify-->>Boot: Exit non-zero
Boot->>Boot: Keep decrementing boot counter
Boot->>Boot: Fall back to previous good slot
end
Boot-Count Mechanism
U-Boot maintains three environment variables for slot selection:
| Variable | Purpose | Example |
|---|---|---|
| BOOT_ORDER | Slot priority (first = preferred) | "A B" |
| BOOT_A_LEFT | Remaining boot attempts for slot A | 3 |
| BOOT_B_LEFT | Remaining boot attempts for slot B | 3 |
On each boot, U-Boot RAUC bootmeth selects the slot and decrements the boot attempt counter before loading
boot.scr. The script:
- Reads the bootmeth-provided boot partition and root partition variables
- Sets rauc.slot and atomicnix.lowerdev for the selected slot
- Loads kernel, initrd, and DTB from that slot’s boot partition
- Boots with root=fstab so initrd mounts the selected squashfs by lower device
If a slot’s counter reaches 0, RAUC bootmeth skips it and tries the next slot in BOOT_ORDER. This ensures automatic
rollback after 3 consecutive boot failures.
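For example, immediately after an install to slot B, the environment described above would hold (values illustrative):

```
BOOT_ORDER=B A
BOOT_B_LEFT=3
BOOT_A_LEFT=3
```

Each boot of slot B decrements BOOT_B_LEFT; a successful health check resets it, while three consecutive failures exhaust it and U-Boot falls back to slot A.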
Health Check Details
The os-verification.service performs these checks before committing a slot:
- Service checks: dnsmasq and chronyd must be active
- Network checks: eth0 must have a WAN IP; eth1 must have the expected LAN gateway IP
- Sustained check: all above conditions must hold for 60 seconds (checked every 5 seconds) to catch restart loops
Only after all checks pass does the service run rauc status mark-good, which resets the boot counter and commits the
slot.
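The sustained-window logic can be sketched in shell as a helper that re-runs a check command until the window elapses. This is an illustrative shape, not the actual os-verification script; the function name and arguments are invented:

```shell
# sustained_ok WINDOW INTERVAL CMD... : succeed only if CMD keeps
# passing for WINDOW seconds, polled every INTERVAL seconds.
sustained_ok() {
  window=$1; interval=$2; shift 2
  end=$(( $(date +%s) + window ))
  while [ "$(date +%s)" -lt "$end" ]; do
    "$@" || return 1          # any single failure aborts the window
    sleep "$interval"
  done
}

# Real usage would resemble:
#   sustained_ok 60 5 systemctl is-active --quiet dnsmasq chronyd \
#     && rauc status mark-good
```

A restart loop that briefly looks healthy still fails here, because the check must pass on every poll across the whole window.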
First Boot Exception
On initial device provisioning, first-boot.service writes the sentinel file
(/data/.completed_first_boot) after successful provisioning import/validation and marks the slot good only when RAUC
is enabled. After this, all subsequent boots use the full health-check path.
Watchdog Integration (currently disabled on Rock64 during development)
The RK3328 hardware watchdog (dw_wdt) integration is implemented with these target settings:
- Runtime watchdog: 30 seconds – if systemd hangs, the device reboots
- Reboot watchdog: 10 minutes – if a reboot hangs, the watchdog forces a hard reset
These target settings are not enabled in the current release. When enabled later, both scenarios feed into the boot-count rollback path: repeated unsuccessful boots decrement the selected slot counter until U-Boot returns to the previous slot.
Update Polling
The os-upgrade.service runs on a systemd timer:
- First check: 5 minutes after boot
- Subsequent checks: every 1 hour (configurable)
- Random delay: up to 10 minutes (prevents thundering herd across fleet)
The service queries the update server with the compact lowercase 12-hex eth0 MAC as X-Device-ID and the current
version. If a newer bundle is available, it downloads to /data, installs via rauc install, and reboots. The hawkBit
path is reserved for future implementation and is not an operational update mode in the current image.
Authentication (EN18031)
AtomicNix ships with no embedded credentials. EN18031 compliance requires that each device has unique credentials provisioned at factory time – there are no default passwords or shared secrets.
Provisioning State
Persisted device-local state lives on /data:
| Item | Storage Path | Notes |
|---|---|---|
| SSH public key | /data/config/ssh-authorized-keys/admin | Local operator key for LAN/VPN access |
| Nixstasis registration key | /data/config/nixstasis/registration-key | Planned persistent device enrollment credential |
| Nixstasis agent state | /data/config/nixstasis/ | Planned client state, tunnel config, and metadata |
Authentication Flows
SSH Access
- LAN (eth1): Key-only authentication via SSH public key
- VPN (tun0): Key-only authentication via SSH public key
- WAN (eth0): Disabled by default; enabled only when the /data/config/ssh-wan-enabled flag file exists
When DEVELOPMENT=1 is enabled at build time, first boot can still seed
/data/config/ssh-wan-enabled to simplify SSH testing, but SSH remains
key-only.
Physical Recovery
Rock64 keeps a separate physical break-glass path. If _RUT_OH_=1 is set in
U-Boot, the next boot starts a serial-only root autologin on ttyS2 and clears
that flag after use. This is a local recovery mechanism, not part of normal
network authentication.
Nixstasis Enrollment
The target remote-management model is Nixstasis-managed enrollment and access:
- The device identifies itself to Nixstasis using the eth0 MAC address.
- Nixstasis checks that MAC against an approved inventory list.
- Approved devices receive a registration key and persist it on /data.
- Future device requests authenticate with that registration key.
- Nixstasis issues short-lived SSH credentials and establishes remote sessions over the reverse tunnel managed by the Nixstasis client.
The MAC address is an eligibility identifier, not a secret. The registration key is the first durable credential in the management flow.
Remote Management
Remote web access is intended to run from the Nixstasis environment rather than from services hosted directly on the device. The device remains responsible for SSH, LAN gateway services, update logic, and the Nixstasis client.
Device Identity
Each device is identified by the compact lowercase 12-hex MAC address of its onboard Ethernet (eth0). For example,
aa:bb:cc:dd:ee:ff becomes aabbccddeeff in the X-Device-ID header when polling for updates.
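The transformation is just stripping colons and lowercasing; a sketch (on the device the MAC would come from the eth0 interface rather than a literal):

```shell
# Derive the compact device ID from a MAC address (sketch).
mac="AA:BB:CC:DD:EE:FF"
device_id=$(printf '%s' "$mac" | tr -d ':' | tr 'A-F' 'a-f')
echo "$device_id"    # aabbccddeeff
```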
SSH Configuration
services.openssh = {
enable = true;
settings = {
PermitRootLogin = "no";
PasswordAuthentication = false;
};
};
The admin user’s authorized keys are read from /data/config/ssh-authorized-keys/admin, which is populated during
provisioning.
Building
All build outputs target aarch64-linux. Builds require an aarch64-linux builder – either the nix-darwin
linux-builder (recommended on macOS), a Lima VM, or a native Linux system.
Prerequisites
- Nix with flakes enabled
- mise for task running (recommended)
- An aarch64-linux builder (nix-darwin linux-builder, Lima VM, or native)
Building with mise
# Install tools and hooks
mise install
# Check the flake evaluates cleanly
mise run check
# Build individual artifacts
mise run build:squashfs # result-squashfs/
mise run build:rauc-bundle # result-rauc-bundle/
mise run build:boot-script # result-boot-script/
# Build everything and retain the latest image/bundle roots under .gcroots/
mise run build
mise run build refreshes the rooted build outputs under .gcroots/, keeps the
latest two distinct images and the latest two RAUC bundles, and can optionally
copy the newest .img to an explicit output path with -o <path>.
Building via Lima VM
All build tasks accept --lima to run inside a Lima VM. This is useful when the Lima VM has a warm Nix store cache or
when the nix-darwin linux-builder is not configured.
# Build the retained artifacts inside the default Lima VM
mise run build -- --lima
# Use a specific Lima VM
mise run build -- --lima --vm my-builder
The task ensures the Lima VM is started before building. The macOS home directory is mounted at the same path inside Lima, so the flake path works unchanged.
Build Artifacts
| Artifact | mise Task | Nix Output | Description |
|---|---|---|---|
| Squashfs rootfs | build:squashfs | packages.aarch64-linux.squashfs | Compressed root filesystem (~300 MB) |
| RAUC bundle | build:rauc-bundle | packages.aarch64-linux.rauc-bundle | Signed .raucb for OTA updates |
| Boot script | build:boot-script | packages.aarch64-linux.boot-script | Compiled U-Boot boot.scr |
| Disk image | build | packages.aarch64-linux.image | Latest .img rooted under .gcroots/images/image.1/ |
Building with Nix Directly
# Build the flashable image
nix build .#image -o result-image
# Build only the squashfs
nix build .#squashfs -o result-squashfs
Image Naming
The flashable image filename includes the pinned NixOS release series from flake.nix:
- Current: atomicnix-25.11.img (from nixpkgs.url = "github:NixOS/nixpkgs/nixos-25.11")
- Pattern: atomicnix-<series>.img
When you move to a new NixOS series (e.g., nixos-26.05), update flake.nix/flake.lock and rebuild. The image name
updates automatically.
Squashfs Size Constraint
The squashfs image must fit within the 1 GB rootfs partition slot. The build script enforces this with a size check – the build fails if the image exceeds the limit. The current NixOS closure compresses to approximately 300-400 MB.
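The guard itself is simple. A sketch of the check, with the limit derived from the 1 GB slot size; the artifact path and GNU-style stat flags are illustrative, not the actual build script:

```shell
# Fail the build if the squashfs cannot fit the 1 GB rootfs slot.
limit=$((1024 * 1024 * 1024))             # 1 GiB slot size in bytes
img="result-squashfs/rootfs.squashfs"     # illustrative artifact path
size=$(stat -c%s "$img" 2>/dev/null || echo 0)
if [ "$size" -gt "$limit" ]; then
  echo "error: squashfs ($size bytes) exceeds slot size ($limit bytes)" >&2
  exit 1
fi
```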
To keep the closure small, the flake uses an overlay to strip unnecessary dependencies:
- crun is built without CRIU support (removes criu + python3, saving ~102 MB)
- Documentation, man pages, fonts, and XDG utilities are all disabled
- security.sudo is disabled (uses run0 instead)
- environment.defaultPackages is emptied
Testing
The core mise run e2e task runs 9 NixOS VM integration tests that validate the RAUC update lifecycle, network
security, and rollback behavior. Additional provisioning and forensics checks are also available directly under the
flake checks.* outputs. Tests run on both Linux (TCG software emulation) and macOS (Apple Virtualization Framework).
Running Tests
All tests
mise run e2e
# Run all tests inside a Lima VM
mise run e2e --lima
mise run e2e --lima --vm my-builder
Individual tests
mise run e2e:rauc-slots # RAUC sees all 4 A/B slots after boot
mise run e2e:rauc-update # Bundle install writes to inactive slot pair, slot switches A->B
mise run e2e:rauc-rollback # Install to B, mark bad, verify rollback to A
mise run e2e:rauc-confirm # os-verification health checks pass, slot marked good (~3 min)
mise run e2e:rauc-power-loss # Crash VM mid-install, verify slot A intact after reboot
mise run e2e:rauc-watchdog # Freeze systemd to trigger watchdog, verify boot-count rollback
mise run e2e:firewall # 2-node test: WAN allows HTTPS/VPN, LAN allows SSH/DHCP/NTP
mise run e2e:network-isolation # 2-node test: LAN gets DHCP/NTP, cannot reach WAN
mise run e2e:ssh-wan-toggle # Flag file enables/disables SSH on WAN via nftables reload
# Run an individual test inside Lima
mise run e2e:rauc-slots --lima
Test Descriptions
| Test | Nodes | What it validates |
|---|---|---|
| rauc-slots | 1 | RAUC detects all 4 A/B slots after first-boot repartitioning creates boot-b/rootfs-b |
| rauc-update | 1 | Bundle install writes to inactive slot pair; slot switches from A to B |
| rauc-rollback | 1 | Install to slot B, mark bad, verify automatic rollback to slot A |
| rauc-confirm | 1 | Health checks pass within timeout, slot committed as good |
| rauc-power-loss | 1 | Crash VM mid-install, verify slot A is intact after reboot |
| rauc-watchdog | 1 | Freeze systemd to trigger watchdog reboot, verify boot-count rollback |
| firewall | 2 | WAN node can reach HTTPS (443) and VPN (1194); LAN node can reach SSH, DHCP, NTP; all other ports blocked |
| network-isolation | 2 | LAN node gets DHCP lease and NTP, cannot reach WAN addresses |
| ssh-wan-toggle | 1 | SSH on WAN blocked by default; enabled when flag file created; disabled when removed |
Platform Performance
The mise task wrappers auto-detect the platform and select the correct flake output.
| Test | macOS (apple-virt) | Linux (TCG, Lima) | Speedup |
|---|---|---|---|
| rauc-slots | 34s | 132s | 3.9x |
| rauc-update | 25s | 137s | 5.5x |
| rauc-rollback | 22s | 120s | 5.5x |
| rauc-confirm | 95s | 171s | 1.8x |
| rauc-power-loss | 46s | 184s | 4.0x |
| rauc-watchdog | 57s | 315s | 5.5x |
| firewall | 65s | 205s | 3.2x |
| network-isolation | 68s | – | – |
| ssh-wan-toggle | 35s | – | – |
| Total | ~7.5 min | ~21 min | ~3.7x |
The rauc-confirm test has the smallest speedup because most of its runtime is a fixed 60-second sustained health check
timer.
Interactive Debugging
Launch an interactive QEMU VM with a Python REPL:
# Debug the default test (rauc-slots)
mise run e2e:debug
# Debug a specific test
mise run e2e:debug -t update
mise run e2e:debug -t confirm
mise run e2e:debug -t watchdog
# Keep VM state between runs
mise run e2e:debug -t slots --keep
Available test short names: slots, update, rollback, confirm, power-loss, watchdog, firewall, net-iso,
ssh-toggle.
Inside the REPL:
gateway.start() # boot the VM
gateway.wait_for_unit("multi-user.target")
gateway.succeed("rauc status") # run a command
gateway.shell_interact() # drop into a root shell
gateway.screenshot("name") # save a screenshot
# Ctrl+D to exit
Running Tests with Nix Directly
# Linux (TCG, no KVM required)
nix build .#checks.aarch64-linux.rauc-slots --no-link -L
# macOS (requires nix-darwin with linux-builder enabled)
nix build .#checks.aarch64-darwin.rauc-slots --no-link -L
# Local Darwin eval/builds that depend on nix/tests/rauc-qemu-config.nix should
# use a path flake ref so local files remain visible even if they are untracked.
nix build "path:$PWD#checks.aarch64-darwin.rauc-slots" --no-link -L
When iterating on a single Darwin check locally, evaluate and build the exact
derivation with the same path: flake ref:
drv=$(nix eval --raw "path:$PWD#checks.aarch64-darwin.rauc-slots.drvPath")
nix-store -r "$drv"
Test Architecture
Tests use the NixOS test framework (nixos-lib.runTest). Each test:
- Defines one or two virtual machines with the full AtomicNix service stack (using hardware-qemu.nix instead of hardware-rock64.nix)
- Boots the VM(s) and runs a Python test script that interacts via QEMU’s monitor interface
- Asserts on command output, service states, and network behavior
The QEMU target uses a custom RAUC backend that simulates U-Boot’s slot selection using files instead of environment
variables, allowing the full A/B update lifecycle to be tested without real hardware. The shared slot mapping for the
RAUC tests lives in nix/tests/rauc-qemu-config.nix.
Provisioning
Deploy AtomicNix to a Rock64 device by building a flashable disk image and writing it
to eMMC with dd (or mise run flash).
After Provisioning
On first boot:
- U-Boot loads boot.scr from boot-a, echoes the build ID, and boots the kernel with initrd
- The initrd mounts the selected squashfs slot at /run/rootfs-base, then sysroot.mount assembles / as OverlayFS with a tmpfs-backed upper/work directory under /run/overlay-root
- Initrd systemd-repart creates the /data partition (f2fs) on first boot using the remaining eMMC space
- Initrd persists a fresh-flash marker so switched-root provisioning can distinguish a new flash from a later reprovisioned /data wipe
- first-boot.service looks for /boot/config.toml only on a fresh flash, then USB config.toml, then starts the bootstrap web console on 172.20.30.1:8080
- The imported config is validated, persisted under /data/config/, rendered into canonical Quadlet files, and synced into the active rootful and rootless Quadlet paths
- first-boot.service applies Quadlets, LAN settings, and provisioned firewall rules, then marks the RAUC slot as good only if those runtime apply steps succeed
- Network interfaces come up (eth0 via DHCP, eth1 static); systemd-networkd-wait-online uses a 30s timeout with anyInterface=true
- Services start: dnsmasq, chrony, sshd, and the RAUC update timer when RAUC is enabled
The device is then ready to receive OTA updates and serve LAN clients.
For the canonical persisted state and runtime schemas, see Firmware Data Flow and Runtime Boundaries.
Reprovisioning
Wiping /data returns the device to the unprovisioned state without changing the A/B slot layout.
On the next boot:
- Initrd sees that boot-b already exists, so it does not mark the boot as a fresh flash
- /boot/config.toml is not replayed
- first-boot.service searches USB config.toml sources first
- If no USB seed is found, the bootstrap web console starts on 172.20.30.1:8080
Imported operator state remains bounded to /data/config/, including the imported config.toml, rendered Quadlet
files, admin SSH authorized keys, and other provisioning-derived runtime inputs.
USB Recovery Mode
If the reset button is held from power-on for 10 seconds, U-Boot enters USB mass storage mode instead of booting Linux. The Rock64 OTG USB port then exposes the full eMMC as a removable disk, allowing the host to write a fresh image directly.
Flashable Disk Image
Build a complete .img file that can be written to eMMC (or SD card) using dd or any raw disk writer.
Build the Image
# Build with mise (stores the latest image under .gcroots/images/image.1)
mise run build
# Copy the latest image to a specific output path
mise run build -- -o atomicnix-25.11.img
# Build via Lima VM
mise run build -- --lima
# Or with Nix directly (result stays in Nix store, symlinked to result-image/)
nix build .#image -o result-image
Flash to eMMC
macOS
Connect the eMMC module via a USB adapter. Identify the device (usually /dev/disk4):
diskutil list
Flash using the mise task:
# Auto-detect image, specify target disk
mise run flash /dev/disk4
# Specify image explicitly
mise run flash -i atomicnix-25.11.img /dev/disk4
# Skip confirmation prompt
mise run flash -y /dev/disk4
The flash task automatically:
- Converts /dev/diskN to /dev/rdiskN (raw device for faster writes)
- Unmounts all partitions on the target disk
- Refuses to write to the macOS boot disk
- Runs dd with bs=4M and progress reporting
- Syncs and ejects when done
Linux
# With mise
mise run flash -y /dev/mmcblk0
# With dd directly
sudo dd if=atomicnix-25.11.img of=/dev/mmcblk0 bs=4M status=progress
sudo sync
What’s in the Image
The flashable image contains:
| Region | Content |
|---|---|
| Raw (0-16 MB) | U-Boot (idbloader + u-boot.itb) |
| Partition 1 (boot-a) | Kernel Image, initrd, DTB, boot.scr (vfat) |
| Partition 2 (rootfs-a) | Squashfs root filesystem |
The image intentionally does not include slot B or /data. On first boot,
initrd systemd-repart creates boot-b (vfat), rootfs-b, and /data
(f2fs) using the remaining eMMC space before the real system mounts it.
First Boot Provisioning
The flashable image method does not embed credentials in the image. After
flashing, the device boots into the local provisioning flow and imports operator
configuration into /data/config/ from one of these sources:
- /boot/config.toml on a fresh flash
- USB config.toml or a supported config bundle
- the local bootstrap web console on 172.20.30.1:8080
When a new config.toml is applied through one of those paths, the device
persists it under /data/config/, writes admin SSH authorized keys, renders the
declared Quadlet units, and continues first boot without requiring a second
reboot.
Reprovisioning is done by wiping /data and rebooting. Because initrd only
treats /boot/config.toml as a seed on a true fresh flash, reprovisioning uses
USB config.toml first and then falls back to the bootstrap UI instead of
replaying an old /boot/config.toml.
For local development only, you can opt into a build-time development mode with
a gitignored .env file:
cat > .env <<'EOF'
DEVELOPMENT=1
EOF
When DEVELOPMENT=1 is set during the image build, first boot's only extra behavior is to seed the WAN SSH flag file /data/config/ssh-wan-enabled for easier testing.
Operator access still comes from normal SSH-key provisioning.
The image keeps both root and admin passwords locked. On Rock64,
_RUT_OH_=1 enables a deterministic serial-only root recovery path on UART2
(ttyS2, 1.5 Mbaud) for the next boot.
LAN Range Configuration
The default LAN subnet is 172.20.30.0/24 with the gateway at 172.20.30.1. To change this, use the config:lan-range
mise task, which updates all configuration files in a single command.
Usage
mise run config:lan-range \
--gateway-cidr 10.50.0.1/24 \
--dhcp-start 10.50.0.10 \
--dhcp-end 10.50.0.254
What it Updates
The task modifies the following files to keep the LAN configuration consistent:
| File | What Changes |
|---|---|
| modules/networking.nix | eth1 static Address |
| modules/lan-gateway.nix | dnsmasq dhcp-range, gateway DHCP option (3), NTP DHCP option (42), chrony allow subnet |
| scripts/os-verification.sh | Expected eth1 IP in health checks |
After Changing
Rebuild:
mise run check
mise run build
Constraints
- Only /24 subnets are currently supported
- DHCP start and end addresses must be within the specified subnet
- The gateway address (first part of --gateway-cidr) is used as the static IP for eth1
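Under the /24-only constraint, validating the range reduces to a prefix comparison. A sketch; the helper name is invented and this is not the task's actual validation code:

```shell
# Check that two IPv4 addresses share a /24 prefix by comparing
# the first three dotted octets.
same_24() {
  [ "${1%.*}" = "${2%.*}" ]
}

gateway=10.50.0.1
same_24 "$gateway" 10.50.0.10 && same_24 "$gateway" 10.50.0.254 \
  && echo "range ok"
```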
Firmware Data Flow
AtomicNix keeps immutable firmware, provisioned runtime state, and update state in separate paths so A/B slot switches do not rewrite operator data.
Boot Flow
- U-Boot RAUC bootmeth selects the slot using BOOT_ORDER and BOOT_x_LEFT from the SPI environment.
- boot.scr loads kernel, initrd, and DTB from the selected boot partition.
- boot.scr passes root=fstab, rauc.slot, and atomicnix.lowerdev to Linux.
- Initrd mounts the selected squashfs rootfs as /run/rootfs-base.
- sysroot.mount assembles / as OverlayFS with squashfs lowerdir and tmpfs upper/work dirs.
- Initrd systemd-repart creates missing boot-b, rootfs-b, and /data partitions on a fresh flash.
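The parameters boot.scr passes can be pictured as a kernel command line like this (slot A case; the lower device path is assumed from the partition layout):

```
root=fstab rauc.slot=A atomicnix.lowerdev=/dev/mmcblk0p2
```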
Provisioning Flow
Provisioning imports exactly one operator configuration into /data/config/ from /boot/config.toml on fresh flash, a
USB seed, a supported seed bundle, or the LAN bootstrap console.
Persisted outputs are:
| Output | Path |
|---|---|
| Imported source config | /data/config/config.toml |
| Admin SSH keys | /data/config/ssh-authorized-keys/admin |
| WAN inbound policy | /data/config/firewall-inbound.json |
| LAN runtime settings | /data/config/lan-settings.json |
| Required health units | /data/config/health-required.json |
| Rendered Quadlets | /data/config/quadlet/*.container |
| Quadlet runtime metadata | /data/config/quadlet-runtime.json |
| Bundle payload files | /data/config/files/ |
first-boot.service fails before RAUC slot confirmation if Quadlet sync, LAN runtime apply, or provisioned firewall apply
fails.
Update Flow
os-upgrade.service sends the compact lowercase 12-hex eth0 MAC in X-Device-ID, compares available bundle metadata
with the booted version, downloads the bundle to /data, installs it with RAUC, and reboots into the newly selected slot.
os-verification.service commits a slot only after service, network, LAN, and required-unit checks remain healthy through
the sustained verification window.
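The device identifier sent in `X-Device-ID` can be derived as in the following sketch. The normalization shown (strip separators, lowercase) matches the "compact lowercase 12-hex eth0 MAC" description; the function name is illustrative, not from the codebase:

```python
def device_id_from_mac(mac: str) -> str:
    """Normalize an eth0 MAC address to the compact lowercase
    12-hex form sent in the X-Device-ID header."""
    compact = mac.replace(":", "").replace("-", "").lower()
    if len(compact) != 12 or any(c not in "0123456789abcdef" for c in compact):
        raise ValueError(f"not a MAC address: {mac!r}")
    return compact


print(device_id_from_mac("8A:7B:1C:2D:3E:4F"))  # 8a7b1c2d3e4f
```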
Firewall and LAN Apply Flow
lan-gateway-apply.service consumes /data/config/lan-settings.json, writes the eth1 network drop-in, updates dnsmasq
and chrony runtime snippets, and restarts the affected services. provisioned-firewall-inbound.service consumes
/data/config/firewall-inbound.json and adds only the requested WAN inbound nftables rules. The baseline firewall keeps
new eth0 inbound traffic denied unless provisioned state opens a port.
Application Runtime Flow
Provisioned Quadlets are rendered under /data/config/quadlet/, mirrored into the active rootful or rootless systemd
Quadlet search path, and described by /data/config/quadlet-runtime.json. Rootless containers are constrained to pasta
networking with loopback publish rewrites; privileged rootful containers use host networking.
Runtime Boundaries
AtomicNix separates immutable platform code from operator-provisioned runtime behavior.
Immutable Platform
The image owns boot, kernel, initrd, RAUC, firewall defaults, SSH policy, local provisioning, LAN gateway services, OpenVPN recovery plumbing, and update confirmation logic. These live in the active squashfs slot and are replaced only by RAUC updates.
Persistent Operator State
/data/config/ owns runtime configuration imported during provisioning. RAUC slot writes do not modify /data.
The bootstrap API is LAN-local and exposes POST /api/config for complete config.toml files or supported config
bundles. It uses the same validation and persistence path as the web console and returns JSON success or validation
errors for programmatic clients.
The accepted config.toml schema is:
version = 1
[admin]
ssh_keys = ["ssh-ed25519 ..."]
[firewall.inbound]
tcp = [443]
udp = [1194]
[lan]
gateway_cidr = "172.20.30.1/24"
dhcp_start = "172.20.30.10"
dhcp_end = "172.20.30.254"
domain = "local"
gateway_aliases = ["atomicnix"]
hostname_pattern = "atomicnix-{mac}"
[health]
required = ["myapp"]
[container.myapp]
privileged = false
[container.myapp.Container]
Image = "ghcr.io/example/myapp:latest"
PublishPort = ["10080:8080"]
[firewall.inbound] is required and must contain at least one TCP or UDP port. [lan] is optional; omitted fields use
the fallback LAN gateway contract.
Firewall JSON
/data/config/firewall-inbound.json is a JSON object with optional tcp and udp arrays of integer ports in
1..65535.
{
"tcp": [443],
"udp": [1194]
}
Provisioned rules are added only on WAN eth0. The baseline firewall remains deny-by-default for new eth0 inbound
traffic and drops all forwarding.
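A validator for this shape can be sketched as follows (an illustrative model of the documented schema, not the service's actual code):

```python
import json


def validate_firewall_inbound(text: str) -> dict:
    """Validate the documented shape: a JSON object with optional
    "tcp" and "udp" arrays of integer ports in 1..65535."""
    data = json.loads(text)
    if not isinstance(data, dict):
        raise ValueError("top level must be a JSON object")
    for proto in ("tcp", "udp"):
        ports = data.get(proto, [])
        if not isinstance(ports, list):
            raise ValueError(f"{proto} must be an array")
        for port in ports:
            # bool is a subclass of int in Python, so reject it explicitly
            if not isinstance(port, int) or isinstance(port, bool) \
                    or not 1 <= port <= 65535:
                raise ValueError(f"invalid {proto} port: {port!r}")
    return data


print(validate_firewall_inbound('{"tcp": [443], "udp": [1194]}'))
```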
LAN JSON
/data/config/lan-settings.json is generated from config.toml and includes the validated runtime fields consumed by
lan-gateway-apply.py.
{
"gateway_cidr": "172.20.30.1/24",
"gateway_ip": "172.20.30.1",
"subnet_cidr": "172.20.30.0/24",
"netmask": "255.255.255.0",
"dhcp_start": "172.20.30.10",
"dhcp_end": "172.20.30.254",
"domain": "local",
"hostname_pattern": "atomicnix-{mac}",
"gateway_aliases": ["atomicnix"]
}
The DHCP range must stay inside the /24 gateway subnet, must be ordered, and must not include the gateway IP.
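These three range constraints can be expressed directly with the standard-library `ipaddress` module. A sketch of the checks as documented (illustrative; the real `lan-gateway-apply.py` may differ in detail):

```python
import ipaddress


def check_dhcp_range(gateway_cidr: str, dhcp_start: str, dhcp_end: str) -> None:
    """Check the documented constraints: the DHCP range must stay inside
    the /24 gateway subnet, must be ordered, and must not include the
    gateway IP. Raises ValueError on any violation."""
    iface = ipaddress.ip_interface(gateway_cidr)  # e.g. 172.20.30.1/24
    if iface.network.prefixlen != 24:
        raise ValueError("only /24 subnets are supported")
    start = ipaddress.ip_address(dhcp_start)
    end = ipaddress.ip_address(dhcp_end)
    if start not in iface.network or end not in iface.network:
        raise ValueError("DHCP range must be inside the gateway subnet")
    if start > end:
        raise ValueError("DHCP range must be ordered (start <= end)")
    if start <= iface.ip <= end:
        raise ValueError("DHCP range must not include the gateway IP")


check_dhcp_range("172.20.30.1/24", "172.20.30.10", "172.20.30.254")  # passes
```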
Quadlet Safety Boundary
Provisioned containers are rendered into canonical Quadlet files under /data/config/quadlet/ before being synced into
Podman systemd search paths.
Rootful containers require privileged = true and are forced onto Network=host. Rootless containers use the appsvc
user, are forced onto Network=pasta, and non-loopback PublishPort binds are rewritten to 127.0.0.1.
Bundle imports may include files/; Quadlet values may reference ${CONFIG_DIR} and ${FILES_DIR} to bind files from
/data/config/ without embedding host-specific absolute paths in the seed.
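The loopback publish rewrite for rootless containers can be sketched as below. This is an illustrative model of the documented behavior (bare `host:container` and non-loopback `addr:host:container` forms are pinned to `127.0.0.1`); the real renderer may handle additional Podman publish syntaxes:

```python
def rewrite_publish(publish: str) -> str:
    """Rewrite a rootless-container PublishPort value so non-loopback
    binds land on 127.0.0.1, per the documented safety boundary."""
    parts = publish.split(":")
    if len(parts) == 2:
        # "hostPort:containerPort" binds all addresses -> pin to loopback
        return f"127.0.0.1:{parts[0]}:{parts[1]}"
    if len(parts) == 3 and parts[0] != "127.0.0.1":
        # replace a non-loopback bind address with loopback
        return f"127.0.0.1:{parts[1]}:{parts[2]}"
    return publish  # already loopback-bound


print(rewrite_publish("10080:8080"))          # 127.0.0.1:10080:8080
print(rewrite_publish("0.0.0.0:10080:8080"))  # 127.0.0.1:10080:8080
```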
Operational Unknowns
These items are intentionally outside the current firmware contract and must be resolved before changing the contract.
| Area | Current State | Resolution Needed |
|---|---|---|
| Active watchdog enforcement | Hardware driver is present; systemd manager watchdog settings are disabled | Complete Rock64 boot reliability validation, then enable RuntimeWatchdogSec=30s and RebootWatchdogSec=10min |
| USB WiFi | Kernel WiFi and Bluetooth stacks are disabled in the current image | Select supported hardware and firmware, then update kernel config, tests, and docs |
| hawkBit updates | useHawkbit disables polling and installs rauc-hawkbit-updater only | Define server configuration, credentials, systemd unit, and verification tests |
| Nixstasis client | Device-side state paths and management model are documented | Implement enrollment client, tunnel lifecycle, and credential rotation |
| Provisioned applications | AtomicNix renders and starts Quadlets from operator config | Define fleet policy for image provenance, registry auth, and rollout approval |
Hardware Testing
Source:
HARDWARE-TEST-PLAN.md
This chapter provides the physical verification plan for Rock64 hardware testing. These tests cannot be run in QEMU and require a physical Rock64 board with eMMC, serial console, and network connectivity.
Prerequisites
- Rock64 v2 board with 16 GB eMMC module
- USB-to-serial adapter connected to UART2 (1.5 Mbaud)
- Supported USB Ethernet adapter for eth1/LAN (`r8152`, `ax88179_178a`, or `cdc_ether`)
- Built disk image (`atomicnix-25.11.img`)
- Built RAUC bundle (`rock64.raucb`)
- Network with DHCP and internet access (for WAN/eth0)
- A second device on the LAN subnet for client testing
Phase 1: Provisioning & First Boot
Test 1.1: Flash image and verify U-Boot output
# Flash the image
mise run flash /dev/disk4 # macOS
# or
sudo dd if=atomicnix-25.11.img of=/dev/mmcblk0 bs=4M status=progress
# Connect serial console
screen /dev/tty.usbserial-DM02496T 1500000
Pass criteria:
- U-Boot banner appears on serial console
- `bootflow scan` finds `boot.scr` on boot-a
- Kernel loads and prints boot messages
- System reaches `multi-user.target`
- If `/boot/config.toml` or a USB seed is present, `first-boot.service` completes provisioning
- Without a seed, the bootstrap UI appears on `172.20.30.1:8080` and first boot waits for operator input
Test 1.2: Verify first-boot service
systemctl status first-boot
[ -f /data/.completed_first_boot ] && cat /data/.completed_first_boot
[ -x "$(command -v rauc)" ] && rauc status
Pass criteria:
- With a seed config present, `first-boot.service` completed successfully
- Without a seed config, the bootstrap UI is reachable and `first-boot.service` remains waiting
- After provisioning succeeds, the sentinel exists at `/data/.completed_first_boot`
- On RAUC-enabled images, `rauc status` shows the booted slot as “good” after provisioning succeeds
Phase 2: Kernel & Hardware Detection
Test 2.1: eMMC and core hardware
dmesg | grep -i mmc
dmesg | grep -i dwmac
dmesg | grep -i ehci
dmesg | grep -i watchdog
lsblk
Pass criteria:
- eMMC detected as `/dev/mmcblk1` (or `mmcblk0` depending on boot media)
- Ethernet MAC driver (DWMAC/STMMAC) loaded
- USB host controller (EHCI/OHCI/XHCI) initialized
- Watchdog device (`dw_wdt`) registered
Test 2.2: USB Ethernet module
modprobe r8152 # or ax88179_178a/cdc_ether for your adapter
ip link show
Pass criteria:
- Supported USB Ethernet module loads without errors
- A second Ethernet interface appears in `ip link`
- USB WiFi and Bluetooth are not part of the current image contract
Phase 3: Network Configuration
Test 3.1: eth0 is onboard Ethernet
udevadm info /sys/class/net/eth0 | grep ID_PATH
ip addr show eth0
Pass criteria:
- `eth0` matches the onboard GMAC (platform path `platform-ff540000.ethernet`)
- eth0 has a DHCP-assigned IP address
Test 3.2: DHCP server on LAN
Connect a client device to eth1 (USB Ethernet adapter).
# On the gateway
systemctl status dnsmasq
journalctl -u dnsmasq | tail -20
# On the LAN client
dhclient eth0 # or equivalent
ip addr show
Pass criteria:
- Client receives an IP in the `172.20.30.10-254` range
- Gateway is `172.20.30.1`
- dnsmasq logs the DHCP transaction
Test 3.3: NTP server on LAN
# On the gateway
chronyc tracking
chronyc clients
# On the LAN client
ntpdate -q 172.20.30.1
Pass criteria:
- Chrony is synced to upstream NTP (or using local stratum 10 fallback)
- LAN client can query NTP from `172.20.30.1`
Test 3.4: LAN isolation
# On the LAN client
ping -c 3 8.8.8.8 # should fail
curl https://example.com # should fail
ping -c 3 172.20.30.1 # should succeed
Pass criteria:
- LAN client cannot reach any internet address
- LAN client can reach the gateway
Phase 4: Firewall Verification
Test 4.1: WAN baseline port access
From an external machine (or the WAN side):
# These should fail until explicitly provisioned
curl -k https://<wan-ip>:443
nc -uz <wan-ip> 1194
# This should fail (connection refused/timeout)
ssh <wan-ip>
Pass criteria:
- HTTPS (443) is blocked until provisioned
- OpenVPN (1194) is blocked until provisioned
- SSH (22) is blocked
Test 4.2: SSH-on-WAN toggle
# Enable SSH on WAN
touch /data/config/ssh-wan-enabled
systemctl start ssh-wan-reload
# Test from WAN side
ssh admin@<wan-ip> # should now work
# Disable SSH on WAN
rm /data/config/ssh-wan-enabled
systemctl start ssh-wan-reload
# Test from WAN side
ssh admin@<wan-ip> # should fail again
Pass criteria:
- SSH is blocked by default
- Creating the flag file and reloading enables SSH
- Removing the flag file and reloading disables SSH
Phase 5: Services
Test 5.1: Update confirmation
systemctl restart os-verification
journalctl -u os-verification -f
Pass criteria:
- Local service and network checks pass
- 60-second sustained check completes
- Slot is marked as “good”
Phase 6: Authentication
Test 6.1: SSH key authentication
# From an external machine on the LAN
ssh -i ~/.ssh/id_ed25519 admin@172.20.30.1
# Password auth should remain disabled
auth_line="$({ ssh -vv -o PreferredAuthentications=none -o PubkeyAuthentication=no \
-o BatchMode=yes -o NumberOfPasswordPrompts=0 \
-o StrictHostKeyChecking=accept-new \
-o UserKnownHostsFile=/tmp/atomicnix-rock64-known_hosts \
-o ConnectTimeout=10 admin@172.20.30.1 true; } \
2>&1 | grep 'Authentications that can continue:' | tail -n 1)"
[ -n "$auth_line" ] && ! printf '%s\n' "$auth_line" | grep -Fq 'password'
Pass criteria:
- Key-based authentication succeeds
- The auth-method probe exits successfully, confirming `password` is excluded
Test 6.2: Serial root recovery
# On the device
fw_setenv _RUT_OH_ 1
reboot
# `_RUT_OH_` should remain a serial-only recovery path
# On UART2/ttyS2 at 1500000 baud, expect serial root autologin on the next boot.
# From an external machine on the LAN after the reboot
ssh -i ~/.ssh/id_ed25519 admin@172.20.30.1
auth_line="$({ ssh -vv -o PreferredAuthentications=none -o PubkeyAuthentication=no \
-o BatchMode=yes -o NumberOfPasswordPrompts=0 \
-o StrictHostKeyChecking=accept-new \
-o UserKnownHostsFile=/tmp/atomicnix-rock64-known_hosts \
-o ConnectTimeout=10 admin@172.20.30.1 true; } \
2>&1 | grep 'Authentications that can continue:' | tail -n 1)"
[ -n "$auth_line" ] && ! printf '%s\n' "$auth_line" | grep -Fq 'password'
# On the device after boot completes
fw_printenv -n _RUT_OH_ # expect: empty / unset
Pass criteria:
- `_RUT_OH_` enables one-shot serial root autologin on UART2 only
- SSH behavior on the network is unchanged after the recovery boot
- `_RUT_OH_` is cleared after use
Phase 7: RAUC Update Lifecycle
Test 7.1: RAUC status
rauc status
Pass criteria:
- Shows 4 slots (boot.0, rootfs.0, boot.1, rootfs.1)
- One pair is marked as booted and good
Test 7.2: Bundle install
# Copy bundle to device
scp rock64.raucb admin@172.20.30.1:/data/
# Install
rauc install /data/rock64.raucb
Pass criteria:
- Install completes without errors
- `rauc status` shows the inactive slot has been written
- `BOOT_ORDER` reflects the new slot priority
Test 7.3: Boot-count rollback
# After installing to slot B, intentionally corrupt it
dd if=/dev/zero of=/dev/mmcblk1p4 bs=1M count=1
# Reboot 3 times and observe the serial console
reboot
Pass criteria:
- Each boot attempt decrements `BOOT_B_LEFT`
- After 3 failures, U-Boot falls back to slot A
- Slot A boots successfully with the previous working image
Phase 8: Watchdog
Test 8.1: Hardware watchdog presence
dmesg | grep -i watchdog
ls /dev/watchdog*
Pass criteria:
- `dw_wdt` driver is loaded
- `/dev/watchdog` device exists
Test 8.2: Watchdog-triggered reboot
Deferred: active watchdog enforcement is disabled in the current release. Run this only after enabling the deferred `RuntimeWatchdogSec=30s` target on a test device.
# Freeze PID 1 (systemd) to stop watchdog kicks
kill -STOP 1
# Wait 30+ seconds -- the hardware watchdog should force a reboot when enabled
Pass criteria:
- With the deferred target enabled, device reboots within ~30 seconds of the SIGSTOP
- Serial console shows watchdog reset
- U-Boot boot-count is decremented for the current slot
Task Checklist
| # | Test | Status |
|---|---|---|
| 1.1 | Flash + U-Boot output | |
| 1.2 | First-boot service | |
| 2.1 | eMMC + core hardware | |
| 2.2 | USB Ethernet module | |
| 3.1 | eth0 is onboard | |
| 3.2 | DHCP server on LAN | |
| 3.3 | NTP server on LAN | |
| 3.4 | LAN isolation | |
| 4.1 | WAN port access | |
| 4.2 | SSH-on-WAN toggle | |
| 5.1 | Update confirmation | |
| 6.1 | SSH key auth | |
| 6.2 | Serial root recovery | |
| 7.1 | RAUC status | |
| 7.2 | Bundle install | |
| 7.3 | Boot-count rollback | |
| 8.1 | Watchdog presence | |
| 8.2 | Watchdog reboot | |
Nix Flake Configuration
Source:
openspec/changes/rock64-ab-image/specs/nix-flake-config/spec.md
Requirements
ADDED: Flake defines Rock64 NixOS configuration
The flake provides nixosConfigurations.rock64 targeting aarch64-linux (RK3328). The configuration includes all
service modules: systemd, openssh, chrony, dnsmasq, RAUC, nftables, watchdog, and the health-check/update services.
Scenario: Rock64 system evaluates cleanly
- Given the flake is checked with `nix flake check`
- Then `nixosConfigurations.rock64` evaluates without errors
- And the system target is `aarch64-linux`
ADDED: Produces squashfs rootfs image
The flake builds a compressed squashfs root filesystem via packages.aarch64-linux.squashfs. The image must not exceed
the partition slot size (1 GB).
Scenario: Squashfs image fits within slot
- Given the squashfs is built with `nix build .#squashfs`
- Then the resulting image is less than or equal to 1 GB
- And it uses zstd compression with 1 MB block size
ADDED: Produces signed RAUC bundle
The flake builds a multi-slot RAUC bundle (.raucb) containing both boot
(kernel + initrd + DTB + boot.scr) and rootfs (squashfs) images,
signed with the project’s CA key.
Scenario: RAUC bundle is valid
- Given the bundle is built with `nix build .#rauc-bundle`
- Then the `.raucb` file passes `rauc info --no-verify`
- And it contains entries for both `boot` and `rootfs` slots
- And it is signed with the development CA certificate
ADDED: Stripped kernel with modular USB support
The kernel is configured with built-in drivers for essential hardware (eMMC, Ethernet, USB host, watchdog, squashfs, f2fs) and loadable modules for selected USB Ethernet and USB-serial peripherals. USB WiFi is unsupported until specific hardware and firmware are selected.
Scenario: Kernel has required drivers
- Given the NixOS configuration is evaluated
- Then the kernel includes `MMC_DW_ROCKCHIP=y`, `STMMAC_ETH=y`, `DW_WATCHDOG=y`, `SQUASHFS=y`
- And selected USB Ethernet drivers (`USB_RTL8152`, `USB_NET_AX88179_178A`, `USB_NET_CDCETHER`) are built as modules
Scenario: USB serial works for debugging
- Given a USB-serial adapter is plugged in
- When the `ftdi_sio` or `cp210x` module is loaded
- Then `/dev/ttyUSB0` appears
ADDED: OpenVPN as system service
OpenVPN is included in the rootfs for recovery tunnel access. It does not auto-start; it requires a config file at
/data/config/openvpn/client.conf.
Scenario: OpenVPN service is conditional
- Given no OpenVPN config file exists on `/data`
- Then `openvpn-recovery.service` does not start
- When a config file is placed at `/data/config/openvpn/client.conf`
- And the service is started manually
- Then a `tun0` interface appears
ADDED: QEMU testing target
The flake provides nixosConfigurations.rock64-qemu targeting aarch64-virt with virtio block devices. It shares the
full service configuration from base.nix but uses a custom RAUC backend (file-based) instead of U-Boot.
Scenario: QEMU target boots and runs tests
- Given the QEMU configuration is built
- When a test VM is started
- Then all services from `base.nix` are present
- And RAUC uses the custom file-based backend
Partition Layout Specification
Source:
openspec/changes/rock64-ab-image/specs/partition-layout/spec.md
Requirements
ADDED: eMMC A/B layout
The 16 GB eMMC uses a fixed partition layout with raw U-Boot at the beginning, two pairs of A/B slots (boot + rootfs),
and a persistent data partition using all remaining space. The flash image contains slot A only; initrd
systemd-repart creates slot B and /data on first boot.
Scenario: Partition table matches specification
- Given a provisioned eMMC
- Then `sfdisk -d` shows 2 GPT partitions in the flashable image (boot-a, rootfs-a)
- And the raw region (0-16 MB) contains U-Boot
- And after the first successful boot, initrd `systemd-repart` has created three additional GPT partitions labeled `boot-b`, `rootfs-b`, and `data`
- And `/data` (f2fs) uses the remaining space
ADDED: Per-slot boot partitions
Each slot pair has its own boot partition (vfat) containing the kernel, initrd, DTB, and boot script. This ensures boot and rootfs are always consistent for a given slot.
Scenario: Boot partition contents match slot
- Given slot A is active
- Then boot-a contains `Image`, `initrd`, `rk3328-rock64.dtb`, and `boot.scr`
- And boot-b is absent before first boot or contains the other slot’s kernel afterward
ADDED: Flashable disk image
The build task produces a flashable .img file containing U-Boot, boot slot A, rootfs slot A, and a
remaining-space region reserved for first-boot creation of boot slot B, rootfs slot B, and /data by initrd
systemd-repart.
ADDED: U-Boot at RK3328 offsets
U-Boot is written as raw data (no partition) at the offsets expected by the RK3328 boot ROM:
- `idbloader.img` at sector 64 (byte offset 32 KB)
- `u-boot.itb` at sector 16384 (byte offset 8 MB)
Both come from the custom Rock64 U-Boot package built by this flake.
Scenario: U-Boot loads from eMMC
- Given U-Boot is written at the correct offsets
- When the Rock64 powers on
- Then the serial console shows U-Boot initialization
- And `bootflow scan` finds `boot.scr` on boot-a
ADDED: /data survives updates
The /data partition is never modified by RAUC slot writes or slot switches. Container data, configuration, and
credentials persist across all updates and rollbacks.
Scenario: Persist data survives update
- Given a file exists at `/data/config/test-file`
- When a RAUC update installs a new image and the device reboots
- Then `/data/config/test-file` still exists with the same content
ADDED: U-Boot env for slot selection
U-Boot environment variables (BOOT_ORDER, BOOT_A_LEFT, BOOT_B_LEFT) control which slot boots and how many attempts
remain. On Rock64 these are stored in SPI flash and are seeded safely from Linux on first boot when missing.
Scenario: Environment survives power loss
- Given `BOOT_ORDER` is set to `"B A"`
- When power is lost during env write
- Then U-Boot falls back to its compiled defaults or a previously valid SPI environment
RAUC Integration
Source:
openspec/changes/rock64-ab-image/specs/rauc-integration/spec.md
Requirements
ADDED: A/B multi-slot configuration
RAUC system.conf defines two slot pairs (A and B), each containing a boot partition and a rootfs partition. The boot
partition is the parent; the rootfs partition inherits its slot assignment.
Scenario: RAUC sees all slots
- Given the device has booted
- When `rauc status` is run
- Then 4 slots are listed: boot.0 (A), rootfs.0 (A), boot.1 (B), rootfs.1 (B)
- And one pair is marked as “booted”
ADDED: U-Boot bootloader backend
RAUC uses the uboot backend to communicate slot priority and boot-count via U-Boot environment variables. On the QEMU
target, a custom backend simulates the same behavior using files.
Scenario: RAUC can switch slots
- Given slot A is active
- When `rauc install` writes a bundle to slot B
- Then RAUC sets `BOOT_ORDER=B A` and `BOOT_B_LEFT=3`
- And the next boot loads from slot B
ADDED: Bundle signature verification
RAUC verifies bundle signatures against the CA keyring (dev.ca.cert.pem). Unsigned or invalidly signed bundles are
rejected.
Scenario: Invalid bundle is rejected
- Given a `.raucb` bundle signed with a different key
- When `rauc install` is attempted
- Then the install fails with a signature verification error
- And no slot data is modified
ADDED: Writes to inactive slot only
RAUC only writes to the slot pair that is not currently booted. The active slot is never modified during an update.
Scenario: Active slot is protected
- Given slot A is booted
- When `rauc install` runs
- Then data is written to boot-b and rootfs-b only
- And boot-a and rootfs-a remain unchanged
ADDED: Bundle contains boot and rootfs
Each RAUC bundle contains two images: a vfat boot image (kernel + initrd + DTB + boot.scr) and the squashfs rootfs. Both are installed atomically to the target slot pair.
Scenario: Bundle structure
- Given a bundle is built with `nix build .#rauc-bundle`
- When `rauc info` is run on the bundle
- Then it shows an image for `boot` (type: raw) and an image for `rootfs` (type: raw)
- And the `compatible` field is `rock64`
ADDED: Update polling service
The os-upgrade.service polls an update server on a timer, downloads new bundles, and installs them via RAUC. The
hawkBit path is reserved for future server-push updates.
Scenario: Polling finds new version
- Given the update server has a newer bundle
- When the timer fires
- Then the bundle is downloaded to `/data`
- And `rauc install` is run
- And the device reboots into the new slot
ADDED: Swappable with hawkBit
The os-upgrade module has a useHawkbit option that disables the polling service and installs the
rauc-hawkbit-updater package. AtomicNix does not configure an operational hawkBit service in the current image.
Scenario: hawkBit mode
- Given `os-upgrade.useHawkbit = true`
- Then the `os-upgrade` polling timer is not created
- And the `rauc-hawkbit-updater` package is included in the system
- And no configured `rauc-hawkbit-updater` systemd service is created by AtomicNix
ADDED: NixOS RAUC module
RAUC is enabled via the upstream NixOS services.rauc module and wired from atomicnix.rauc.* options. The rauc
client is available in the system environment.
Scenario: RAUC is available
- Given the device has booted
- When `rauc --version` is run
- Then a valid version string is returned
- And `rauc.service` is active
Boot & Rollback
Source:
openspec/changes/rock64-ab-image/specs/boot-rollback/spec.md
Requirements
ADDED: U-Boot tracks boot attempts per slot
U-Boot maintains BOOT_A_LEFT and BOOT_B_LEFT counters (initial value: 3). RAUC bootmeth selects the slot and
decrements the counter before loading boot.scr.
Scenario: Counter decrements on boot
- Given `BOOT_A_LEFT=3` and slot A is first in `BOOT_ORDER`
- When the device boots
- Then `BOOT_A_LEFT` is decremented to 2 before `boot.scr` loads the kernel
- And the SPI flash environment is updated
Scenario: Counter reaches zero
- Given `BOOT_A_LEFT=0`
- When U-Boot attempts to boot slot A
- Then slot A is skipped
- And U-Boot tries the next slot in `BOOT_ORDER`
ADDED: Boot order reflects RAUC slot priority
When RAUC installs an update to slot B, it sets BOOT_ORDER=B A so the updated slot is tried first. When slot A is
installed, it sets BOOT_ORDER=A B.
Scenario: RAUC sets boot order
- Given slot A is active
- When a RAUC bundle is installed
- Then `BOOT_ORDER` changes to `"B A"`
- And `BOOT_B_LEFT` is set to 3
ADDED: Successful boot commits slot
After the health-check service passes, rauc status mark-good resets the boot counter for the current slot. This
prevents further rollback attempts.
Scenario: Health check commits slot
- Given the device booted into slot B with `BOOT_B_LEFT=2`
- When `os-verification.service` passes all checks
- Then `rauc status mark-good` is called
- And `BOOT_B_LEFT` is reset to 3
ADDED: Rollback recovers previous image
After 3 consecutive failed boots (counter reaches 0), U-Boot skips the failing slot and boots the previous working slot. The failed slot’s data is preserved for diagnostics but is not booted.
Scenario: Automatic rollback after 3 failures
- Given slot B was just installed with `BOOT_ORDER=B A`
- And slot B fails to boot 3 times (kernel panic, hang, or health check failure)
- Then `BOOT_B_LEFT` reaches 0
- And U-Boot boots slot A (the previous working image)
- And slot A still has its original content
ADDED: SPI flash U-Boot environment
The U-Boot environment is stored in SPI flash exposed to Linux as /dev/mtd0 at offset 0x140000 with size 0x2000.
AtomicNix does not store redundant U-Boot environment copies on eMMC.
Scenario: Userspace tools address SPI env
- Given the device has booted
- When `/etc/fw_env.config` is inspected
- Then it points to `/dev/mtd0 0x140000 0x2000 0x1000`
Watchdog
Source:
openspec/changes/rock64-ab-image/specs/watchdog/spec.md
Requirements
Current status: implementation hooks are present, but the Rock64 runtime watchdog is intentionally disabled during development. The scenarios below define the current release behavior and the deferred target settings.
ADDED: Hardware watchdog target is deferred
The RK3328 hardware watchdog (dw_wdt) target is documented, but systemd manager watchdog settings are not enabled in
the current release.
Scenario: Watchdog triggers on hang
- Given the current Rock64 image boots
- Then AtomicNix leaves `RuntimeWatchdogSec` unset
- And the deferred target remains `RuntimeWatchdogSec=30s`
ADDED: Reboot watchdog
A separate reboot watchdog (RebootWatchdogSec) remains deferred until Rock64 boot reliability validation approves active
watchdog enforcement.
Scenario: Reboot hang recovery
- Given the current Rock64 image boots
- Then AtomicNix leaves `RebootWatchdogSec` unset
- And the deferred target remains `RebootWatchdogSec=10min`
ADDED: Configurable timeouts
The watchdog timeouts are set in modules/watchdog.nix:
systemd.settings.Manager = {
# RuntimeWatchdogSec = "30s";
# RebootWatchdogSec = "10min";
};
- Runtime: 30 seconds – aggressive enough to catch hangs quickly, long enough to avoid false triggers during normal operation
- Reboot: 10 minutes – generous because clean shutdown may need time to stop containers
ADDED: Watchdog interacts with boot-count rollback
A watchdog reboot is indistinguishable from any other abnormal reboot from U-Boot’s perspective. Each watchdog-triggered reboot:
- Decrements the boot counter for the current slot
- If the counter reaches 0, the slot is skipped
- The previous working slot boots instead
This means a systemd hang on a newly updated slot will trigger automatic rollback within 3 watchdog cycles (approximately 90 seconds total).
Scenario: Watchdog-triggered rollback
- Given slot B was just installed
- And slot B causes a systemd hang on every boot
- Then the watchdog reboots 3 times (30s each)
- And `BOOT_B_LEFT` decrements from 3 to 0
- And U-Boot falls back to slot A
Update Confirmation
Source:
openspec/changes/rock64-ab-image/specs/update-confirmation/spec.md
Requirements
ADDED: Local health-check service
The os-verification.service runs after multi-user.target on every boot (except the first). It validates that the
system is healthy before committing the RAUC slot. No external network dependency is required for the check itself.
Scenario: Health check runs on update boot
- Given `/data/.completed_first_boot` exists (not first boot)
- When the device reaches `multi-user.target`
- Then `os-verification.service` starts
- And it checks service health
ADDED: Sustained health check
After initial checks pass, the service monitors for 60 seconds (checking every 5 seconds) to catch restart loops, transient service failures, network regressions, and required provisioned-unit failures.
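The sustain loop can be sketched as a small pure function. This models the documented 60-second / 5-second behavior; the parameter names and injectable `sleep`/`clock` hooks are illustrative, not the service's real interface:

```python
import time


def sustained_check(check, window=60, interval=5,
                    sleep=time.sleep, clock=time.monotonic):
    """Run check() every `interval` seconds until `window` seconds of
    continuous health have elapsed. Returns False on the first failure
    (slot stays uncommitted), True once the window completes."""
    deadline = clock() + window
    while clock() < deadline:
        if not check():
            return False  # restart loop / regression caught mid-window
        sleep(interval)
    return True  # caller may now run `rauc status mark-good`


# Example with fake time so the sketch runs instantly:
now = [0.0]
ok = sustained_check(lambda: True,
                     sleep=lambda s: now.__setitem__(0, now[0] + s),
                     clock=lambda: now[0])
print(ok)  # True
```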
Scenario: Restart loop detected
- Given `dnsmasq.service` passes the initial check
- But it crashes and restarts during the 60-second sustain window
- Then the sustained health check fails
- And the slot is not committed
Scenario: Network or required unit regression detected
- Given eth0, eth1, and provisioned required units pass the initial check
- But one check fails during the 60-second sustain window
- Then the sustained health check fails
- And the slot is not committed
ADDED: Successful confirmation commits slot
When all checks pass (services and sustained), the service runs rauc status mark-good to commit the current
slot. This resets the boot counter and prevents further rollback.
Scenario: Slot committed on success
- Given all health checks pass for 60 seconds
- When `rauc status mark-good` is called
- Then the booted slot is committed as “good”
- And `BOOT_x_LEFT` is reset to the maximum value
ADDED: Failed confirmation leaves slot uncommitted
If any check fails, the service exits non-zero. The slot remains uncommitted, and the boot counter continues to decrement on each subsequent boot until rollback occurs.
Scenario: Gradual rollback on failure
- Given health checks fail on every boot of slot B
- Then each boot decrements `BOOT_B_LEFT`
- And after 3 boots, U-Boot rolls back to slot A
LAN Gateway
Source:
openspec/changes/rock64-ab-image/specs/lan-gateway/spec.md
Requirements
ADDED: Deterministic NIC naming
The onboard RK3328 GMAC is always named eth0 via a systemd .link file matching the platform path
(platform-ff540000.ethernet). USB Ethernet adapters receive kernel-assigned names. USB WiFi dongles are unsupported
until specific hardware and firmware are selected.
Scenario: Onboard Ethernet is eth0
- Given the Rock64 boots with the onboard Ethernet connected
- Then `ip link` shows `eth0` as the onboard GMAC
- Regardless of USB device enumeration order
ADDED: eth0 as WAN (DHCP client)
The WAN interface acquires its address via DHCP v4. IPv6 RA is disabled. The DHCP-provided DNS servers are used.
Scenario: WAN gets DHCP address
- Given eth0 is connected to a network with a DHCP server
- When the device boots
- Then eth0 acquires an IPv4 address
- And DNS resolution works
ADDED: eth1 as LAN (static IP)
The LAN interface has a static IP from the provisioned LAN config. When no provisioned LAN config exists or it is
malformed, the fallback static IP is 172.20.30.1/24. It does not run a DHCP client.
Scenario: LAN has static IP
- Given the device has booted
- And `/data/config/lan-settings.json` contains `gateway_ip`
- Then `ip addr show eth1` shows the configured `gateway_ip` with its provisioned prefix
Scenario: LAN uses fallback static IP
- Given the device has booted
- And no valid provisioned LAN config is available
- Then `ip addr show eth1` shows `172.20.30.1/24`
ADDED: IP forwarding disabled
IP forwarding is disabled at the kernel level for both IPv4 and IPv6. The nftables FORWARD chain has a drop policy
with no exceptions. This creates a hard network boundary compliant with EN18031.
Scenario: No packet forwarding
- Given a LAN client sends a packet destined for the internet
- Then the packet is dropped at the gateway
- And it never reaches eth0
ADDED: DHCP server on LAN
dnsmasq runs on eth1 only. It assigns addresses from the provisioned DHCP range with a 24-hour lease and serves gateway-local DNS names without forwarding queries upstream.
Scenario: LAN client gets DHCP lease
- Given a client is connected to eth1
- When it sends a DHCP discover
- Then it receives an address in the provisioned DHCP range
- And the gateway is the provisioned `gateway_ip`
- And the DNS server is the provisioned `gateway_ip`
Scenario: LAN DNS stays local-only
- Given a client on the LAN queries the gateway DNS server
- When the query is for a configured gateway-local name
- Then dnsmasq returns the local gateway address
- And dnsmasq does not forward unknown names to upstream resolvers
ADDED: NTP server on LAN
chrony acts as both an NTP client (syncing from pool.ntp.org via WAN) and an NTP server for LAN clients. The
provisioned LAN subnet is allowed to query. When no valid provisioned LAN config exists, the fallback subnet is
172.20.30.0/24.
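The equivalent chrony setup can be sketched with the NixOS chrony module (the real `lan-gateway.nix` substitutes the provisioned subnet and uses a `pool ... iburst` directive; this fragment shows only the fallback values):

```nix
{
  services.chrony = {
    enable = true;
    servers = [ "pool.ntp.org" ];
    extraConfig = ''
      # Serve time to the LAN (fallback subnet when unprovisioned)
      allow 172.20.30.0/24
      # Keep serving (lower-accuracy) time when the WAN is down
      local stratum 10
    '';
  };
}
```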
Scenario: LAN client syncs time
- Given a client on the LAN queries NTP at the provisioned `gateway_ip`
- Then it receives a valid time response
- And chrony is synced to an upstream NTP pool
Scenario: Offline fallback
- Given the device has no WAN connectivity
- Then chrony uses `local stratum 10` as a fallback
- And LAN clients still receive time (lower accuracy)
ADDED: nftables firewall
The firewall uses nftables with per-interface rules:
| Interface | Allowed Inbound |
|---|---|
| eth0 (WAN) | established/related; provisioned inbound only |
| eth1 (LAN) | UDP 53, UDP 67-68, UDP 123, TCP 22, TCP 53, TCP 8080, established/related |
| tun0 (VPN) | TCP 22, established/related |
| FORWARD | DROP all |
WAN application and VPN ports are opened only from /data/config/firewall-inbound.json by
provisioned-firewall-inbound.service. SSH on WAN is controlled by a dynamic nftables rule toggled via
/data/config/ssh-wan-enabled.
Scenario: WAN ports are closed before provisioning
- Given no provisioned firewall state exists
- When a connection is attempted to eth0 on TCP 443 or UDP 1194
- Then the packet is dropped
Scenario: Provisioned WAN ports are allowed
- Given `/data/config/firewall-inbound.json` includes TCP 443 and UDP 1194
- And `provisioned-firewall-inbound.service` has applied it
- Then inbound traffic on eth0 TCP 443 and UDP 1194 is accepted
Scenario: WAN SSH blocked by default
- Given no flag file exists at `/data/config/ssh-wan-enabled`
- When an SSH connection is attempted from the WAN
- Then the connection is rejected
Scenario: WAN SSH enabled with flag
- Given `/data/config/ssh-wan-enabled` is created
- And `ssh-wan-reload.service` is triggered
- Then SSH connections from the WAN are accepted
ADDED: WAN SSH toggle is manual only
Enabling SSH on WAN requires creating a flag file on the device (via LAN SSH or physical console). There is no automated mechanism to enable it remotely – this is a deliberate security constraint.
ADDED: Device identity via MAC address
The device is identified by the MAC address of eth0 (the onboard Ethernet). This MAC is stable across reboots and
updates, and is used as the X-Device-ID header when polling for updates.
Scenario: MAC-based identity
- Given `eth0` has MAC `aa:bb:cc:dd:ee:ff`
- When `os-upgrade.service` polls for updates
- Then the request includes `X-Device-ID: aabbccddeeff`
Design Decisions
Source:
openspec/changes/rock64-ab-image/design.md
This chapter documents the key architectural decisions made during the design of AtomicNix, including the rationale, alternatives considered, and known trade-offs.
Context
AtomicNix is a greenfield project targeting thousands of Rock64 (RK3328, aarch64) devices deployed remotely as network gateways. The devices have 16 GB eMMC storage and must comply with EN18031 security requirements. The previous Debian-based system had a ~3% update failure rate from power loss and partial writes, bricking devices in the field.
Decision 1: RAUC over SWUpdate
Choice: RAUC for A/B slot management.
Rationale: RAUC has native U-Boot integration, well-documented slot configuration, and a straightforward NixOS module. SWUpdate offers more flexibility (scripted handlers, delta updates) but adds complexity that isn’t needed for the current use case.
Trade-off: RAUC’s update model is image-based (full slot writes), which means no delta updates. A full rootfs write (~300 MB) takes longer than a delta, but is simpler and more reliable.
Decision 2: Squashfs rootfs
Choice: Read-only squashfs root filesystem with OverlayFS (tmpfs upper layer).
Rationale: Squashfs eliminates runtime drift – every boot starts from a known-good state. It compresses well (zstd,
1 MB blocks), fitting the NixOS closure into the 1 GB slot with room to spare. A single OverlayFS (squashfs lower +
tmpfs upper) set up in the initrd provides a unified writable root, which is required for systemd’s mount namespace
sandboxing (PrivateTmp, ProtectHome, etc.) to work correctly. Writable state lives on /data (f2fs).
Trade-off: Any runtime state not explicitly persisted to /data is lost on reboot. This is intentional for an
appliance but requires careful placement of writable directories.
Decision 3: Per-slot boot partitions
Choice: Each A/B slot has its own boot partition (vfat) containing the kernel, initrd, DTB, and boot script.
Rationale: Pairing boot and rootfs in the same slot ensures they are always consistent. If kernel and rootfs were in different slot pairs, a failed update could leave mismatched versions.
Alternative considered: Single shared boot partition with both kernels. Rejected because it creates a single point of failure and complicates the U-Boot boot script.
Decision 4: eMMC partition layout
Choice: Fixed layout: 16 MB raw U-Boot, 128 MB boot A/B, 1 GB rootfs A/B, remaining space for /data.
Rationale: 128 MB per boot slot provides ample space for the kernel (~25 MB compressed), initrd, DTB, and boot
script. 1 GB per rootfs slot gives 2-3x headroom over the current squashfs size (~300-400 MB). The /data partition
(~13.3 GB) holds containers, logs, and configuration.
Risk: If the NixOS closure grows beyond 1 GB, the rootfs slot size must be increased, which reduces /data space
and requires re-provisioning all devices.
Decision 5: U-Boot from nixpkgs
Choice: Use pkgs.ubootRock64 from nixpkgs rather than a custom U-Boot build.
Rationale: The nixpkgs U-Boot package is tested, reproducible, and tracks upstream releases. Custom patches are applied via the kernel config (not U-Boot patches), keeping the build simple.
Trade-off: Limited to the U-Boot version and configuration in nixpkgs. The current version (2025.10) lacks
`setexpr`, requiring a manual if/elif chain for boot counter decrement.
Decision 6: Watchdog strategy
Choice: Defer active systemd hardware watchdog enforcement while keeping 30s runtime / 10min reboot timeouts as the target settings.
Rationale: Rock64 boot reliability validation is not complete. The target values remain documented, but the current
release leaves `systemd.settings.Manager = { }` empty to avoid watchdog-triggered reset loops during development.
Integration: Once enabled, watchdog reboots feed directly into the boot-count rollback path.
Decision 7: Local health-check (no phone-home)
Choice: os-verification.service runs local checks only. No external server is contacted for update confirmation.
Rationale: The device must be self-sufficient. If the WAN is down after an update, the device should still be able to commit the slot (or roll back) based on local service health. Phoning home would create a dependency on network availability during the critical confirmation window.
Decision 8: Nixstasis-hosted remote management
Choice: Move remote web management out of the device image and treat Nixstasis as the primary control plane.
Rationale: The Nixstasis client already establishes reverse tunnels and receives short-lived SSH credentials from the server. Hosting remote web management and the auth layer in Nixstasis removes first-boot registry pulls, reduces device complexity, and keeps the device focused on local gateway and update responsibilities.
Trade-off: Remote management now depends on successful Nixstasis enrollment and tunnel establishment. Local recovery falls back to SSH rather than an on-device HTTPS UI.
Decision 9: OpenVPN in rootfs
Choice: Include OpenVPN in the root filesystem (not as a container).
Rationale: OpenVPN provides a recovery tunnel for remote management. If it ran as a container and the container runtime failed, there would be no remote access. Including it in the rootfs ensures it survives container-layer failures.
Decision 10: Network isolation (no IP forwarding)
Choice: Disable IP forwarding at the kernel level. The nftables FORWARD chain drops all packets.
Rationale: EN18031 requires a hard network boundary. LAN devices must not be able to reach the internet. WAN application or VPN ports are opened only by provisioned firewall state. Packet forwarding between WAN and LAN stays disabled.
Decision 11: NIC naming via .link files
Choice: Use systemd .link files for deterministic NIC naming rather than udev rules.
Rationale: .link files are the native systemd-networkd mechanism and are processed earlier in boot than udev
rules. They match on stable platform paths (e.g., platform-ff540000.ethernet for the onboard GMAC), ensuring eth0 is
always the onboard Ethernet regardless of USB enumeration order.
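A `.link` file of this kind can be declared directly through the NixOS `systemd.network.links` option; the fragment below is a sketch consistent with the naming table in the networking module section (the actual `networking.nix` may add further match or link settings):

```nix
{
  # Deterministic naming: the onboard GMAC is always eth0,
  # regardless of USB NIC enumeration order.
  systemd.network.links."10-onboard-eth" = {
    matchConfig.Path = "platform-ff540000.ethernet";
    linkConfig.Name = "eth0";
  };
}
```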
Decision 12: nftables firewall
Choice: nftables with per-interface rules, replacing iptables.
Rationale: nftables is the modern Linux firewall framework with better performance and a cleaner rule syntax. The
NixOS networking.nftables module provides native integration.
Decision 13: hawkBit-ready architecture
Choice: Design the update system to be swappable between polling and hawkBit push models.
Rationale: The initial deployment uses simple HTTP polling (os-upgrade.service). As the fleet scales, migration to
hawkBit can provide centralized update management, rollout policies, and device inventory. The os-upgrade.useHawkbit
option currently reserves this path and installs the package, but AtomicNix does not configure an operational hawkBit
service yet.
Decision 14: QEMU testing target
Choice: Provide a rock64-qemu NixOS configuration that shares the full service stack with the hardware target but
uses virtio devices and a file-based RAUC backend.
Rationale: Hardware testing is slow and requires physical devices. QEMU testing validates all software behavior (RAUC lifecycle, firewall rules, health checks) in CI-friendly VMs. The custom RAUC backend simulates U-Boot’s slot selection using files.
Decision 15: EN18031 authentication
Choice: No default passwords, locked local root/admin passwords, SSH key-only access, serial break-glass recovery, and Nixstasis-hosted remote management.
Rationale: The base image does not host the web management/authentication stack. SSH key-only access and locked passwords prevent brute-force attacks on the device, while Nixstasis handles remote management credentials outside the device image.
Decision 16: Squashfs closure optimization
Choice: Aggressive closure size reduction through overlays, disabled features, and stripped dependencies.
Techniques applied:
- `crun` without CRIU (removes python3, saving ~102 MB)
- Disabled: documentation, man pages, fonts, XDG, sudo, bash completion
- Emptied `defaultPackages` and `fsPackages`
- Disabled: bcache, kexec, LVM
Result: Approximately 27% reduction in closure size compared to a default NixOS system with the same services.
Decision 17: Two-tier runtime logging model
Choice: Use tmpfs-first journald during runtime for host and container log ingress, then drain it through an
rsyslog RAM queue that appends buffered logs to /data/logs.
Rationale: Making the full journal always persistent would increase steady-state eMMC wear. The selected design keeps
runtime logging memory-first, caps journal memory use, routes Podman logs through the same path, and still retains
broader diagnostics durably on /data/logs in larger sequential batches instead of many small writes.
Trade-off: This is a bounded-loss durability model rather than an always-durable one. Sudden power loss can still drop the newest in-memory journal or rsyslog queue entries, but routine runtime writes remain much friendlier to eMMC than fully persistent journal storage.
Risks and Trade-offs
| Risk | Mitigation |
|---|---|
| eMMC wear from frequent writes | /data uses f2fs (wear-leveling aware); squashfs slots are written only during updates |
| U-Boot env corruption | Single-copy environment storage; corruption is handled through normal recovery and reprovisioning flows |
| 1 GB rootfs slot too small | Current closure is ~300-400 MB; aggressive optimization keeps headroom |
| Missing or empty health-required list | first-boot.service commits only when RAUC is enabled; os-verification uses gateway health checks alone unless /data/config/health-required.json names additional required units |
| Provisioned application failure | OpenVPN in rootfs and SSH key-only access provide alternate recovery paths |
| No delta updates | Full-image updates are ~300 MB; acceptable on broadband WAN connections |
| No automatic WAN SSH | Deliberate security constraint; manual flag file required |
Task Reference
All tasks are run with `mise run <task>`. Run `mise tasks` to list them.
Build Tasks
All `build:*` tasks accept `--lima` to run inside a Lima VM and `--vm <name>` to specify which VM (default: `default`).
| Task | Description |
|---|---|
| `check` | Verify flake evaluates cleanly (`nix flake check`) |
| `build` | Build and retain image artifacts under `.gcroots/` |
| `build:squashfs` | Build squashfs rootfs → `result-squashfs/` |
| `build:rauc-bundle` | Build signed RAUC bundle → `result-rauc-bundle/` |
| `build:boot-script` | Build U-Boot boot script → `result-boot-script/` |
`build` also accepts `-o <path>` to copy the latest `.img` to a path.
E2E Test Tasks
| Task | Description |
|---|---|
| `e2e` | Run the core 9-task E2E suite sequentially |
| `e2e:rauc-slots` | RAUC slot detection after boot |
| `e2e:rauc-update` | Bundle install + slot switch A→B |
| `e2e:rauc-rollback` | Install → mark bad → rollback to previous slot |
| `e2e:rauc-confirm` | os-verification health check → mark-good (~3 min) |
| `e2e:rauc-power-loss` | Crash mid-install, verify recovery |
| `e2e:rauc-watchdog` | Watchdog + boot-count rollback |
| `e2e:firewall` | WAN/LAN/VPN port allow/deny (2-node VLAN) |
| `e2e:network-isolation` | DHCP/NTP/WAN isolation (2-node VLAN) |
| `e2e:ssh-wan-toggle` | SSH-on-WAN flag enable/disable |
| `e2e:debug` | Interactive QEMU VM for debugging (`-t <test>`, `--keep`) |
Provisioning Tasks
| Task | Description |
|---|---|
| `flash` | Flash image to disk device with `dd` + progress (macOS/Linux) |
Configuration Tasks
`config:lan-range`: Update LAN gateway/DHCP range across all config files.
Utility Tasks
| Task | Description |
|---|---|
| `gc` | Delete old generations and collect unrooted store paths (`--lima`; `--vm <name>` when using `--lima`) |
| `serial:capture` | Capture serial output (1.5 Mbaud, auto-reconnect); `--bg` for background |
| `serial:shell` | Interactive serial shell via minicom (1.5 Mbaud) |
Flake Outputs
The Nix flake (flake.nix) provides the following outputs:
NixOS Configurations
| Output | Description |
|---|---|
| `nixosConfigurations.rock64` | Real hardware NixOS system (RK3328, eMMC, all service modules) |
| `nixosConfigurations.rock64-qemu` | QEMU aarch64-virt testing target (virtio devices, custom RAUC backend) |
Both configurations share modules/base.nix and all service modules. They differ only in hardware-specific
configuration (kernel drivers, device paths, boot method).
Packages
All packages target aarch64-linux. An aarch64-darwin alias is provided so that nix build .#image works directly
from macOS when a linux-builder is available (the alias points to the same aarch64-linux package set):
| Output | Description |
|---|---|
| `packages.aarch64-linux.squashfs` | Compressed squashfs root filesystem (~300-400 MB) |
| `packages.aarch64-linux.rauc-bundle` | Signed multi-slot `.raucb` bundle for OTA updates |
| `packages.aarch64-linux.boot-script` | Compiled U-Boot `boot.scr` |
| `packages.aarch64-linux.uboot` | Custom Rock64 U-Boot package providing the bootloader artifacts |
| `packages.aarch64-linux.uboot-env-tools` | `fw_printenv` / `fw_setenv` binaries used with the Rock64 SPI env |
| `packages.aarch64-linux.image` | Flashable eMMC disk image (U-Boot + boot-a + rootfs-a, ~1.2 GB) |
Apps
| Output | Description |
|---|---|
| `apps.aarch64-linux.rock64-qemu-vm` | QEMU VM runner (`nix run .#rock64-qemu-vm`) |
Checks (Tests)
Tests are available for both Linux and macOS:
| Output | Description |
|---|---|
| `checks.aarch64-linux.*` | E2E tests running under TCG (software emulation) |
| `checks.aarch64-darwin.*` | Same tests running natively on macOS via Apple Virtualization Framework |
Available test names: rauc-slots, rauc-update, rauc-rollback, rauc-confirm, rauc-power-loss, rauc-watchdog,
firewall, initrd-fresh-flash-marker, first-boot-provision, first-boot-source-discovery, forensics-podman-log-path,
forensics-rsyslog-path, forensics-rsyslog-buffering, forensics-shutdown-flush, network-isolation, ssh-wan-toggle.
Overlay
The flake includes an `embeddedOverlay` that strips unnecessary dependencies to reduce closure size:
- `crun` is built without CRIU support (removes `criu` + `python3`, saving ~102 MB)
This overlay is applied to both NixOS configurations via the `overlayModule`.
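Conceptually, the overlay looks like the fragment below. Note the `criuSupport` argument name is an assumption about the nixpkgs `crun` derivation; the real overlay in the flake may disable CRIU differently:

```nix
# Sketch of the closure-trimming overlay; criuSupport is an assumed
# override argument, not confirmed against the flake's source.
final: prev: {
  # Drop CRIU (and its python3 dependency) from crun
  crun = prev.crun.override { criuSupport = false; };
}
```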
Project Structure
flake.nix Main flake (pinned nixpkgs release, aarch64-linux)
flake.lock Pinned nixpkgs
mise.toml Tool versions, build tasks, hooks
modules/
base.nix Shared NixOS config (systemd, ssh, auth, closure opts)
hardware-rock64.nix RK3328 kernel, DTB, eMMC/watchdog drivers
hardware-qemu.nix QEMU aarch64-virt target for testing
networking.nix NIC naming (.link files), eth0/eth1 config
firewall.nix nftables rules (WAN/LAN/VPN/FORWARD)
lan-gateway.nix dnsmasq DHCP, chrony NTP, IP forwarding off
rauc.nix RAUC system.conf, slot definitions
watchdog.nix systemd watchdog config
os-verification.nix Post-update health check service
os-upgrade.nix Update polling + reserved hawkBit package path
first-boot.nix First-boot provisioning import + slot commit
logging.nix journald ingress + buffered rsyslog durability
boot-storage-debug.nix Boot-partition mount helpers for debugging
openvpn.nix OpenVPN recovery tunnel
nix/
squashfs.nix Squashfs image derivation (closureInfo + mksquashfs)
rauc-bundle.nix Multi-slot RAUC bundle derivation
boot-script.nix U-Boot boot.scr compilation
image.nix Flashable eMMC disk image derivation
tests/ NixOS VM integration tests (nixos-lib.runTest)
rauc-slots.nix RAUC slot detection + custom backend
rauc-update.nix Bundle install + slot switch
rauc-rollback.nix Install -> mark-bad -> rollback
rauc-confirm.nix os-verification health check -> mark-good
rauc-power-loss.nix Crash mid-install, verify recovery
rauc-watchdog.nix Watchdog + boot-count rollback
firewall.nix 2-node WAN/LAN port allow/deny
initrd-fresh-flash-marker.nix Initrd fresh-flash detection
first-boot-provision.nix Provisioning import + Quadlet rendering
first-boot-source-discovery.nix USB/boot seed discovery rules
forensics-*.nix journald/rsyslog durability and log-path tests
network-isolation.nix 2-node DHCP/NTP/WAN isolation
ssh-wan-toggle.nix SSH-on-WAN flag enable/disable
scripts/
build-squashfs.sh Squashfs build template (Nix derivation)
build-rauc-bundle.sh RAUC bundle build template (Nix derivation)
build-image.sh Disk image assembly template (Nix derivation)
os-verification.sh Runtime health check script
os-upgrade.sh Runtime update polling script
ssh-wan-toggle.sh SSH-on-WAN flag check
ssh-wan-reload.sh SSH-on-WAN runtime reload
first-boot.sh First-boot provisioning import + mark-good
first-boot-provision.py Provisioning importer/bootstrap/Quadlet renderer
quadlet-sync.sh Rootful/rootless Quadlet sync + startup
watchdog-boot-count.sh Boot-count decrement and rollback journal logging
boot.cmd U-Boot A/B boot script source
fw_env.config U-Boot SPI env config
.mise/tasks/
flash Flash image to disk device (macOS/Linux)
serial/
capture Serial console capture (1.5 Mbaud, --bg for background)
shell Interactive serial console (minicom)
config/
lan-range Update LAN gateway/DHCP range across all configs
e2e/
rauc-slots ... ssh-wan-toggle Individual E2E test runners
debug Interactive QEMU debugging
docs/
build Build the documentation site
serve Serve docs locally with hot reload
certs/
dev.ca.cert.pem Development RAUC CA certificate (public)
dev.signing.cert.pem Development RAUC signing certificate (public)
dev.*.key.pem Development private keys (committed for dev/test only)
docs/
book.toml mdBook configuration
src/ Documentation source (this site)
_typos.toml Typos checker config
Code Reference
This section documents the internal interfaces of AtomicNix: the NixOS modules, Nix derivations, and shell scripts that make up the system.
- NixOS Modules – the NixOS configuration modules in `modules/`
- Nix Derivations – the build derivations in `nix/`
- Scripts – the shell scripts in `scripts/` and `.mise/tasks/`
NixOS Modules
All NixOS modules live in the modules/ directory. base.nix imports all service modules and is itself imported by the
hardware-specific modules (hardware-rock64.nix, hardware-qemu.nix).
Module Dependency Graph
flowchart TD
KERNEL["kernel-config.nix<br/>shared stripped kernel baseline"]
subgraph HARDWARE["hardware targets"]
direction LR
ROCK64["hardware-rock64.nix"]
QEMU["hardware-qemu.nix"]
end
ROCK64 --> BASE["base.nix"]
QEMU --> BASE
subgraph IMPORTS["base.nix imports"]
direction LR
LOGGING["logging.nix"]
NETWORKING["networking.nix"]
FIREWALL["firewall.nix"]
LAN["lan-gateway.nix"]
OPENVPN["openvpn.nix"]
RAUC["rauc.nix"]
FIRSTBOOT["first-boot.nix"]
VERIFY["os-verification.nix"]
UPGRADE["os-upgrade.nix"]
WATCHDOG["watchdog.nix"]
end
BASE --> NETWORKING
BASE --> LOGGING
BASE --> FIREWALL
BASE --> LAN
BASE --> OPENVPN
BASE --> RAUC
BASE --> FIRSTBOOT
BASE --> VERIFY
BASE --> UPGRADE
BASE --> WATCHDOG
KERNEL -. shared baseline .-> ROCK64
KERNEL -. shared baseline .-> QEMU
base.nix
Purpose: Shared NixOS configuration for both hardware and QEMU targets. Defines the core system layout, filesystem mounts, user accounts, and system packages.
Key configuration:
| Setting | Value | Notes |
|---|---|---|
| `system.stateVersion` | `"25.11"` | NixOS release |
| `networking.hostName` | `"gateway"` | |
| `nix.enable` | `false` | No Nix daemon on read-only rootfs |
| `documentation.enable` | `false` | Saves closure space |
| `security.sudo.enable` | `false` | Uses `run0` instead |
Filesystem layout (OverlayFS root):
The root filesystem uses a single OverlayFS assembled in the initrd from the selected squashfs slot and tmpfs-backed upper/work directories:
| Layer | Mount | Filesystem | Size | Description |
|---|---|---|---|---|
| overlay (combined) | / | overlay | – | Unified writable root presented to userspace |
| lower (read-only) | /run/rootfs-base | squashfs | – | Immutable NixOS system from the selected RAUC rootfs slot |
| upper (writable) | /run/overlay-root/* | tmpfs | runtime | Ephemeral writes, lost on reboot |
| persistent state | /data | f2fs | dynamic | Created on first boot (PARTLABEL=data, nofail, noatime) |
The overlay is assembled in the initrd before switch_root:
- `boot.scr` passes `root=fstab` and `atomicnix.lowerdev=/dev/...` for the selected squashfs slot
- `initrd-prepare-overlay-lower.service` mounts that slot read-only at `/run/rootfs-base`
- `sysroot.mount` mounts `/` as an overlay with `lowerdir=/run/rootfs-base`, `upperdir=/run/overlay-root/upper`, and `workdir=/run/overlay-root/work`
- `sysroot-run.mount` bind-mounts `/run` into the switched root
This approach replaces the older /sysroot mutation logic and keeps the root mount fstab-driven, which fits systemd’s
initrd model more cleanly.
The lower squashfs is selected by U-Boot/RAUC, while /data remains outside the A/B slots and survives updates.
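As a conceptual sketch (not the literal module code, since the real mount is assembled by initrd units rather than a `fileSystems` entry), the resulting root mount is equivalent to:

```nix
{
  # Conceptual equivalent of what sysroot.mount performs in the initrd
  fileSystems."/" = {
    device = "overlay";
    fsType = "overlay";
    options = [
      "lowerdir=/run/rootfs-base"        # read-only squashfs slot
      "upperdir=/run/overlay-root/upper" # tmpfs-backed writes
      "workdir=/run/overlay-root/work"   # overlayfs work area
    ];
  };
}
```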
Sandboxing note: nsncd (the NSS lookup daemon) runs as root due to permission issues on the overlay filesystem.
Network wait: systemd-networkd-wait-online is configured with a 30s timeout and anyInterface=true.
Build ID: The NixOS login banner (/etc/issue) displays the build ID for easy identification.
Data partition: Not included in the flashable image. Initrd systemd-repart creates it from the remaining eMMC space
on first boot.
tmpfiles.d rules (created on boot):
/var/empty, /var/lib, /var/lib/systemd/network, /var/lib/private,
/var/lib/private/systemd/resolve, /var/lib/chrony, /var/lib/dnsmasq,
/var/cache, /var/cache/nscd, /var/log, /var/log/journal, /var/db, /var/run
User accounts:
| User | Groups | Authentication |
|---|---|---|
| `root` | – | Locked by default; Rock64 serial-root recovery only when `_RUT_OH_=1` |
| `admin` | wheel | SSH key from `/data/config/ssh-authorized-keys/admin`; password remains locked |
System packages: nano, htop, curl, jq, f2fs-tools, kmod
logging.nix
Purpose: Configure the runtime logging path: volatile journald as
ingress, buffered rsyslog appends to /data/logs, and a shutdown flush
hook.
Key configuration:
| Setting | Value | Notes |
|---|---|---|
| journald storage | Storage=volatile | Keeps runtime logs in tmpfs-backed journal storage |
| journald cap | RuntimeMaxUse=32M | Bounds memory use for runtime logs |
| rsyslog output | buffered omfile appends to /data/logs/*.log | Uses async buffered writes instead of direct per-line sync |
| Podman log driver | journald | Routes container stdout/stderr into the same journald path |
Services:
| Service | Purpose |
|---|---|
| `syslog.service` | Runs rsyslogd and drains journald into buffered files |
| `logging-shutdown-flush.service` | Flushes journald and asks rsyslog to sync buffered output |
This module no longer installs slot-local forensic helpers. Runtime service and
script output is expected to go to stdout/stderr under systemd, which places
it into journald and then through the buffered rsyslog path.
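The journald side of this model can be sketched with standard NixOS options (the rsyslog queue configuration is omitted here, and routing Podman through journald via `containers.conf` is an assumption about how the module wires it up):

```nix
{
  # Volatile journald ingress: runtime logs live in tmpfs, capped at 32 MB
  services.journald.extraConfig = ''
    Storage=volatile
    RuntimeMaxUse=32M
  '';

  # Route container stdout/stderr into the same journald path
  virtualisation.containers.containersConf.settings.containers.log_driver = "journald";
}
```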
hardware-rock64.nix
Purpose: Rock64 (RK3328) hardware-specific kernel, device tree, and RAUC slot mapping.
Kernel configuration:
| Category | Drivers | Build |
|---|---|---|
| eMMC | MMC_DW, MMC_DW_ROCKCHIP | built-in (=y) |
| Ethernet | STMMAC_ETH, DWMAC_ROCKCHIP | built-in |
| USB | DWC2, USB_XHCI_HCD, USB_EHCI_HCD, USB_OHCI_HCD | built-in |
| Watchdog | DW_WATCHDOG | built-in |
| Filesystems | SQUASHFS, SQUASHFS_ZSTD, F2FS_FS, OVERLAY_FS | built-in |
| USB Ethernet | USB_RTL8152, USB_NET_AX88179_178A, USB_NET_CDCETHER | module (=m) |
| USB Serial | FTDI_SIO, CP210X | module |
| WiFi/BT | WLAN, CFG80211, MAC80211, RFKILL, BT | unsupported |
RAUC slot mapping:
atomicnix.rauc.slots = {
boot0 = "/dev/mmcblk1p1"; # boot-a
boot1 = "/dev/mmcblk1p3"; # boot-b
rootfs0 = "/dev/mmcblk1p2"; # rootfs-a
rootfs1 = "/dev/mmcblk1p4"; # rootfs-b
};
Serial console: ttyS2 at 1.5 Mbaud (Rock64 UART2), enabled via serial-getty@ttyS2.service.
kernel-config.nix
Purpose: Shared stripped kernel baseline used by both Rock64 and QEMU so the VM target stays close to the real device kernel.
Contents:
- `baseKernelConfig`: the common stripped ARM64 gateway kernel baseline
- `optionalKernelConfig`: isolated optional USB serial support
hardware-qemu.nix imports this file and layers only the minimal aarch64-virt, virtio, and test-harness-specific
requirements on top.
hardware-qemu.nix
Purpose: QEMU aarch64-virt configuration for development and testing.
Differences from hardware-rock64.nix:
| Setting | Rock64 | QEMU |
|---|---|---|
| Boot method | U-Boot boot.scr | extlinux |
| Block devices | /dev/mmcblk1pN | /dev/vdN (virtio) |
| RAUC backend | uboot | custom (file-based) |
| Kernel modules | Hardware-specific | virtio_pci, virtio_blk, etc. |
The QEMU RAUC tests share their slot mapping through nix/tests/rauc-qemu-config.nix:
atomicnix.rauc = {
slots = {
boot0 = "/dev/vdb";
boot1 = "/dev/vdc";
rootfs0 = "/dev/vdd";
rootfs1 = "/dev/vde";
};
bootloader = "custom";
};
networking.nix
Purpose: Deterministic NIC naming and systemd-networkd configuration.
Link files:
| Priority | Match | Result |
|---|---|---|
| `10-onboard-eth` | Platform `platform-ff540000.ethernet` | `Name = eth0` |
| `20-usb-eth` | Drivers `r8152`, `ax88179_178a`, `cdc_ether` | Enabled as modules in Rock64 kernel config |
| WiFi | Unsupported until hardware selection | Not part of current Rock64 image |
Network files:
| Priority | Interface | Configuration |
|---|---|---|
| `10-wan` | eth0 | DHCP v4, uses DHCP DNS, no NTP from DHCP |
| `20-lan` | eth1 | Static 172.20.30.1/24, no DHCP |
Sysctl: net.ipv4.ip_forward = 0, net.ipv6.conf.all.forwarding = 0
firewall.nix
Purpose: nftables firewall with per-interface rules and dynamic SSH-on-WAN toggle.
nftables rules (inet filter):
| Chain | Policy | Rules |
|---|---|---|
| `input` | drop | lo: accept; established: accept; eth1: UDP 53, UDP 67-68, UDP 123, TCP 22, TCP 53, TCP 8080; tun0: TCP 22 |
| `forward` | drop | (no exceptions) |
| `output` | accept | – |
Dynamic SSH toggle services:
| Service | When | What |
|---|---|---|
| `ssh-wan-toggle` | Boot (after nftables) | Reads flag file, adds SSH rule if present |
| `ssh-wan-reload` | On demand | Removes old rule, re-adds if flag file exists |
Flag file: /data/config/ssh-wan-enabled
Provisioned WAN inbound: /data/config/firewall-inbound.json is applied by provisioned-firewall-inbound.service.
The baseline eth0 policy remains closed for TCP/443 and UDP/1194 until those ports are provisioned.
lan-gateway.nix
Purpose: DHCP and NTP server for isolated LAN devices.
dnsmasq configuration:
| Setting | Value |
|---|---|
| Interface | eth1 (bind-dynamic) |
| DHCP range | provisioned range, fallback 172.20.30.10 – 172.20.30.254, 24h lease |
| Gateway option | provisioned gateway IP, fallback 172.20.30.1 |
| DNS option | provisioned gateway IP (gateway-local DNS only) |
| NTP option | provisioned gateway IP |
| DNS port | 53 (local-only, no upstream forwarding) |
chrony configuration:
| Setting | Value |
|---|---|
| Upstream | pool pool.ntp.org iburst |
| Serve to | provisioned LAN subnet, fallback 172.20.30.0/24 |
| Fallback | local stratum 10 |
rauc.nix
Purpose: RAUC A/B update system configuration. Defines project options (atomicnix.rauc.*) and maps them onto the
upstream NixOS services.rauc module.
Custom NixOS options (atomicnix.rauc.*):
| Option | Type | Default | Description |
|---|---|---|---|
| `compatible` | string | `"rock64"` | RAUC compatible string |
| `bootloader` | enum | `"uboot"` | Backend (`uboot`, `custom`, etc.) |
| `statusFile` | string | `/data/rauc/status.raucs` | RAUC status file |
| `bundleFormats` | list of strings | `[-plain, +verity]` | Allowed bundle formats |
| `slots.boot0` | string | (required) | Boot slot A device path |
| `slots.boot1` | string | (required) | Boot slot B device path |
| `slots.rootfs0` | string | (required) | Rootfs slot A device path |
| `slots.rootfs1` | string | (required) | Rootfs slot B device path |
When bootloader = "custom", a file-based shell script is generated that simulates U-Boot environment management using
files in /var/lib/rauc/.
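Putting the options together, a hardware configuration fills in these values as follows (the device paths match the Rock64 slot mapping shown earlier in this chapter):

```nix
{
  atomicnix.rauc = {
    compatible = "rock64";
    bootloader = "uboot";
    statusFile = "/data/rauc/status.raucs";
    slots = {
      boot0 = "/dev/mmcblk1p1";   # boot-a
      boot1 = "/dev/mmcblk1p3";   # boot-b
      rootfs0 = "/dev/mmcblk1p2"; # rootfs-a
      rootfs1 = "/dev/mmcblk1p4"; # rootfs-b
    };
  };
}
```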
watchdog.nix
Purpose: systemd hardware watchdog integration plus boot-count and rollback bookkeeping.
systemd.settings.Manager = {
# RuntimeWatchdogSec = "30s";
# RebootWatchdogSec = "10min";
};
The hardware watchdog manager settings remain disabled during development, but
watchdog-boot-count.service is installed so the real boot-count and rollback
path records lifecycle markers to the journal through normal service stdout.
os-verification.nix
Purpose: Post-update health-check service.
| Setting | Value |
|---|---|
| Type | oneshot |
| Condition | ConditionPathExists=/data/.completed_first_boot |
| Timeout | 180s |
| Script | scripts/os-verification.sh |
| PATH | rauc, jq, systemd, iproute2 |
os-upgrade.nix
Purpose: OTA update polling service.
Custom NixOS options (os-upgrade.*):
| Option | Type | Default | Description |
|---|---|---|---|
| `useHawkbit` | bool | `false` | Reserve hawkBit path and install package |
| `pollingInterval` | string | `"1h"` | Timer interval |
| `serverUrl` | string | `"http://localhost/updates"` | Update server URL |
Timer: `OnBootSec=5min`, `OnUnitActiveSec=<pollingInterval>`, `RandomizedDelaySec=10min`
When useHawkbit = true, AtomicNix disables the polling service and installs rauc-hawkbit-updater, but does not
configure an operational hawkBit systemd service in the current image.
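A typical configuration sets these options as below; the `serverUrl` value is a hypothetical placeholder, not a real endpoint:

```nix
{
  os-upgrade = {
    useHawkbit = false;      # keep the simple HTTP polling path
    pollingInterval = "1h";  # becomes the timer's OnUnitActiveSec
    # Hypothetical example URL; replace with the fleet's update server
    serverUrl = "https://updates.example.com/updates";
  };
}
```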
first-boot.nix
Purpose: One-time first-boot provisioning and optional slot confirmation.
| Setting | Value |
|---|---|
| Type | oneshot |
| Condition | ConditionPathExists=!/data/.completed_first_boot |
| Script | scripts/first-boot.sh |
| Effect | provision config, optionally rauc status mark-good, then write sentinel |
Mutually exclusive with os-verification.service via the sentinel file.
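The mutual exclusion rests on opposing `ConditionPathExists` checks against the sentinel, roughly as sketched here (the actual modules may attach the conditions through `serviceConfig` or helper options):

```nix
{
  # first-boot runs only while the sentinel is absent...
  systemd.services.first-boot.unitConfig.ConditionPathExists =
    "!/data/.completed_first_boot";

  # ...and os-verification only once it exists.
  systemd.services.os-verification.unitConfig.ConditionPathExists =
    "/data/.completed_first_boot";
}
```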
openvpn.nix
Purpose: OpenVPN recovery tunnel.
| Setting | Value |
|---|---|
| Config path | /data/config/openvpn/client.conf |
| Auto-start | false |
| Condition | ConditionPathExists=/data/config/openvpn/client.conf |
Nix Derivations
The nix/ directory contains four derivations that produce the build artifacts. Each is called from flake.nix via
pkgs.callPackage.
Build Pipeline
flowchart LR
SQUASHFS["squashfs.nix"] --> ROOTFS["rootfs.squashfs"]
BOOTSCRIPT["boot-script.nix"] --> BOOTSCR["boot.scr"]
ROOTFS --> IMAGE["image.nix"]
BOOTSCR --> IMAGE
IMAGE --> IMGOUT["flashable .img"]
ROOTFS --> RAUCBUNDLE["rauc-bundle.nix"]
BOOTSCR --> RAUCBUNDLE
RAUCBUNDLE --> BUNDLEOUT["signed .raucb for OTA"]
squashfs.nix
Purpose: Builds a read-only squashfs image from the full NixOS system closure.
Function signature:
{ stdenv, squashfsTools, closureInfo, nixosConfig, maxSquashfsSize }:
| Parameter | Source | Description |
|---|---|---|
| nixosConfig | rock64System.config | Evaluated NixOS configuration |
| maxSquashfsSize | flake.nix (1 GB) | Maximum allowed image size |
Delegates to: scripts/build-squashfs.sh
Build steps:
- Compute all Nix store paths from `closureInfo` of `system.build.toplevel`
- Copy all store paths into a pseudo-root directory
- Create `/init` and `/sbin/init` symlinks to the NixOS init
- Create empty mount-point directories (`/proc`, `/sys`, `/dev`, `/run`, `/etc`, `/var`, `/tmp`, etc.)
- Run `mksquashfs` with zstd compression (level 19), 1 MiB block size
- Fail if the image exceeds `maxSquashfsSize`
Output: $out/rootfs.squashfs
Compression options:
- Algorithm: zstd (level 19)
- Block size: 1 MiB (1048576)
- No xattrs
- All files owned by root
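The size-limit step above can be sketched in shell (function and variable names are assumptions; in the real derivation the limit arrives as the Nix-substituted @maxSize@):

```shell
# Sketch only — the real logic lives in scripts/build-squashfs.sh.
max_size=1073741824   # 1 GB, mirroring maxSquashfsSize

check_squashfs_size() {
  # $1 = path to rootfs.squashfs; non-zero return fails the derivation
  actual=$(( $(wc -c < "$1") ))
  if [ "$actual" -gt "$max_size" ]; then
    echo "error: rootfs.squashfs is $actual bytes (limit $max_size)" >&2
    return 1
  fi
  echo "rootfs.squashfs: $actual bytes (limit $max_size)"
}
```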
rauc-bundle.nix
Purpose: Builds a signed RAUC bundle containing boot (kernel + initrd + DTB + boot.scr) and rootfs (squashfs) images.
Function signature:
{ stdenv, rauc, dosfstools, mtools, squashfsTools,
nixosConfig, squashfsImage, bootScript, signingCert, signingKeyPath, caCert }:
| Parameter | Source | Description |
|---|---|---|
| nixosConfig | rock64System.config | Provides kernel/initrd/DTB paths |
| squashfsImage | packages.squashfs | The squashfs derivation output |
| bootScript | packages.boot-script | Compiled boot.scr |
| signingCert | ./certs/dev.signing.cert.pem | RAUC signing certificate |
| signingKeyPath | ./certs/dev.signing.key.pem | RAUC signing private key |
| caCert | ./certs/dev.ca.cert.pem | CA certificate for verification |
Delegates to: scripts/build-rauc-bundle.sh
Build steps:
- Create a 128 MB vfat image (`boot.vfat`)
- Copy kernel `Image`, `initrd`, DTB, and `boot.scr` into it using mtools
- Copy `rootfs.squashfs` into the bundle directory
- Generate `manifest.raucm` with `compatible=rock64` and image definitions
- Sign and package with `rauc bundle`
Output: $out/rock64.raucb
Manifest structure:
[update]
compatible=rock64
version=<nixosConfig.system.nixos.version>
[image.boot]
filename=boot.vfat
type=raw
[image.rootfs]
filename=rootfs.squashfs
type=raw
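Generation of the manifest above can be sketched as a heredoc (the signing call is shown for orientation only; the actual logic is in scripts/build-rauc-bundle.sh):

```shell
# Sketch only — mirrors the manifest layout shown above.
write_manifest() {
  # $1 = bundle directory, $2 = version string (from nixosConfig)
  cat > "$1/manifest.raucm" <<EOF
[update]
compatible=rock64
version=$2

[image.boot]
filename=boot.vfat
type=raw

[image.rootfs]
filename=rootfs.squashfs
type=raw
EOF
}
# Then sign and package, roughly:
#   rauc bundle --cert="$signingCert" --key="$signingKey" "$bundle_dir" rock64.raucb
```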
boot-script.nix
Purpose: Compiles the U-Boot boot script from source (boot.cmd -> boot.scr).
Function signature:
{ stdenv, ubootTools, buildId }:
| Parameter | Source | Description |
|---|---|---|
| buildId | flake.nix | Build identifier echoed during U-Boot boot |
Build step:
mkimage -C none -A arm64 -T script -d boot.cmd boot.scr
Output: $out/boot.scr (compiled) and $out/boot.cmd (source copy)
image.nix
Purpose: Assembles the complete flashable disk image for eMMC provisioning.
Function signature:
{ stdenv, dosfstools, mtools, util-linux,
ubootRock64, nixosConfig, squashfsImage, bootScript }:
| Parameter | Source | Description |
|---|---|---|
| ubootRock64 | nixpkgs | U-Boot package for Rock64 |
| nixosConfig | rock64System.config | Provides kernel, initrd, DTB |
| squashfsImage | packages.squashfs | Squashfs derivation |
| bootScript | packages.boot-script | Compiled boot.scr |
Delegates to: scripts/build-image.sh
Image layout (total ~1170 MiB sparse):
| Offset | Size | Content | Filesystem |
|---|---|---|---|
| 0 | 16 MB | U-Boot raw | – |
| 16 MB | 128 MB | boot-a | vfat |
| 144 MB | 1024 MB | rootfs-a | squashfs |
| 1168 MB | remaining | unallocated | – |
Output: $out/atomicnix-<series>.img
The image name is derived from the pinned NixOS release series (e.g., atomicnix-25.11.img). The image leaves the
remaining eMMC space unallocated so initrd systemd-repart can create boot-b, rootfs-b, and /data on first
boot.
GPT partition types: Boot partitions use the xbootldr GUID (BC13C2FF-...). Rootfs partitions use the Linux root
aarch64 GUID (B921B045-...), which is the architecturally correct type for aarch64 root filesystems.
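A repart.d-style sketch of the first-boot partition growth (file names, size policy, and type identifiers are assumptions based on systemd-repart conventions; each definition normally lives in its own file):

```ini
# 20-boot-b.conf (assumed name)
[Partition]
Type=xbootldr
Label=boot-b
SizeMinBytes=128M
SizeMaxBytes=128M

# 30-rootfs-b.conf
[Partition]
Type=root-arm64
Label=rootfs-b
SizeMinBytes=1G
SizeMaxBytes=1G

# 50-data.conf — claims the remaining eMMC space
[Partition]
Type=linux-generic
Label=data
```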
U-Boot raw writes:
- `idbloader.img` at sector 64 (32 KB)
- `u-boot.itb` at sector 16384 (8 MB)
boot-a contents: kernel Image, initrd, DTB (rockchip/rk3328-rock64.dtb), boot.scr
Scripts
Shell scripts in scripts/ and .mise/tasks/ implement the runtime services and build/provisioning tooling.
Build Scripts (Nix Derivation Templates)
These scripts run inside Nix derivations. Variables like @kernel@ are substituted by Nix at build time.
build-squashfs.sh
Location: scripts/build-squashfs.sh
Builds the squashfs rootfs image from a NixOS closure.
| Input | Description |
|---|---|
| @systemClosure@ | Path to system.build.toplevel |
| @closureInfo@ | Closure info (contains store-paths file) |
| @maxSize@ | Maximum image size in bytes |
Steps: Copy store paths to pseudo-root, create init symlinks and mount-point dirs, run mksquashfs with zstd/19,
check size limit.
build-rauc-bundle.sh
Location: scripts/build-rauc-bundle.sh
Builds a signed RAUC bundle (.raucb).
| Input | Description |
|---|---|
| @kernel@ | Kernel package (contains Image and dtbs/) |
| @initrd@ | Initrd package (contains initrd) |
| @dtbPath@ | Relative DTB path (e.g., rockchip/rk3328-rock64.dtb) |
| @squashfs@ | Squashfs image directory |
| @bootScript@ | Compiled U-Boot script (boot.scr) |
| @signingCert@ / @signingKey@ | RAUC signing credentials |
| @version@ | Bundle version string |
Steps: Create 128 MB vfat with kernel + initrd + DTB + boot.scr (mtools), generate manifest, sign with rauc bundle.
build-image.sh
Location: scripts/build-image.sh
Assembles the flashable disk image.
| Input | Description |
|---|---|
| @kernel@, @initrd@, @dtbPath@ | Kernel artifacts |
| @squashfs@ | Squashfs image |
| @bootScript@ | Compiled boot.scr |
| @uboot@ | U-Boot package |
| @imageName@ | Output filename |
Steps: Create sparse image, write U-Boot at raw offsets, create GPT with slot A partitions (boot-a, rootfs-a),
create the slot A vfat boot partition with mtools, and write squashfs to rootfs-a. Slot B and /data are created by
initrd systemd-repart on first boot.
Runtime Scripts
These scripts run on the device at runtime, invoked by systemd services.
watchdog-boot-count.sh
Location: scripts/watchdog-boot-count.sh
Records watchdog boot-count state and rollback decisions for the configured RAUC bootloader backend.
Responsibilities:
- Detect the active bootloader mode from `ATOMICNIX_RAUC_BOOTLOADER`
- For the `custom` backend, decrement `/var/lib/rauc/boot-count.<slot>` on boot
- Mark the failed slot bad and switch primary when the count is exhausted
- For the `uboot` backend, read the post-boot `BOOT_*_LEFT` value via `fw_printenv`
- Emit journal-visible lifecycle lines through normal stdout
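The custom-backend decrement can be sketched like this (the boot-count file path comes from the list above; the function name, default count, and follow-up commands are assumptions):

```shell
# Sketch only — the real logic lives in scripts/watchdog-boot-count.sh.
BOOT_COUNT_DIR=${BOOT_COUNT_DIR:-/var/lib/rauc}

decrement_boot_count() {
  # $1 = slot name (a or b); prints the remaining attempts
  f="$BOOT_COUNT_DIR/boot-count.$1"
  n=$(cat "$f" 2>/dev/null || echo 3)   # 3 attempts before rollback
  n=$((n - 1))
  printf '%s\n' "$n" > "$f"
  echo "$n"
}
# At 0 the script would mark the slot bad and switch primary, roughly:
#   rauc status mark-bad && rauc status mark-active other
```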
boot.cmd
Location: scripts/boot.cmd
U-Boot boot script loaded after RAUC bootmeth selects the slot and decrements the boot-count. Compiled to boot.scr by
mkimage.
Key logic:
- Echo build ID (squashfs store hash) to console for identification
- If the reset button (Linux `gpiochip3` line 4, U-Boot GPIO100) is held low for 10 seconds, run `ums 0 mmc 1` so the Rock64 OTG port exposes the full eMMC to a host computer
- Auto-detect the boot device number from `devnum`
- Override `ramdisk_addr_r=0x08000000` (avoids kernel overlap)
- Read RAUC bootmeth variables for the selected boot/root partitions
- Set `rauc.slot` and `atomicnix.lowerdev`
- Load kernel/initrd/DTB from the selected boot partition, set `root=fstab`, and `booti`
Console: ttyS2,1500000 (Rock64 UART2)
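A hedged sketch of the final load-and-boot sequence (variable names follow common U-Boot conventions and are assumptions, not the actual boot.cmd):

```
setenv ramdisk_addr_r 0x08000000
load mmc ${devnum}:${bootpart} ${kernel_addr_r} Image
load mmc ${devnum}:${bootpart} ${ramdisk_addr_r} initrd
setenv ramdisk_size ${filesize}
load mmc ${devnum}:${bootpart} ${fdt_addr_r} rockchip/rk3328-rock64.dtb
setenv bootargs "console=ttyS2,1500000 root=fstab rauc.slot=${raucslot}"
booti ${kernel_addr_r} ${ramdisk_addr_r}:${ramdisk_size} ${fdt_addr_r}
```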
fw_env.config
Location: scripts/fw_env.config
Configuration for fw_setenv / fw_printenv (userspace U-Boot env tools). The installed Rock64 config points to the
single SPI flash environment exposed through /dev/mtd0.
| Entry | Device | Offset | Size | Erase size |
|---|---|---|---|---|
| Primary env | /dev/mtd0 | 0x140000 | 0x2000 | 0x1000 |
The old raw eMMC environment offsets are not used.
os-verification.sh
Location: scripts/os-verification.sh
Post-update health check. Runs after every boot (except first).
Checks performed:
- RAUC slot status – skip if already committed
- `dnsmasq.service` is active
- `chronyd.service` is active
- `eth0` has a WAN IP
- `eth1` has the provisioned gateway IP, falling back to `172.20.30.1`
- Provisioned required units from `/data/config/health-required.json` are active
- Sustained 60s check (every 5s): all service, network, and required-unit checks still pass
- On success: `rauc status mark-good`
Logging: Emits progress and failure details through normal service output,
which is captured by journald and forwarded to /data/logs by rsyslog.
Dependencies: rauc, jq, systemctl, ip
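The sustained window can be sketched as a loop (all_checks stands in for the service/network/required-unit checks above; durations are parameters here, not hard-coded as on the device):

```shell
# Sketch only — the real checks live in scripts/os-verification.sh.
sustained_check() {
  # $1 = window in seconds (60 on the device), $2 = poll interval (5)
  deadline=$(( $(date +%s) + $1 ))
  while [ "$(date +%s)" -lt "$deadline" ]; do
    all_checks || return 1    # any regression aborts; slot stays uncommitted
    sleep "$2"
  done
  all_checks                  # final pass before `rauc status mark-good`
}
```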
os-upgrade.sh
Location: scripts/os-upgrade.sh
OTA update polling script. Checks for new RAUC bundles and installs them.
Environment: OS_UPGRADE_URL (update server base URL)
Steps:
- Get current version from `rauc status` and a compact lowercase 12-hex device ID from the eth0 MAC
- Query `$URL/api/v1/updates/latest` with version and device headers
- If a newer version is found: download to `/data/config/bundles/`, `rauc install`, reboot
- Non-fatal on network errors (the timer retries later)
Forensics: Emits Tier 0 install and managed reboot markers, but avoids noisy polling or “no update” chatter in the durable forensic log.
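The device-ID derivation can be sketched as follows (the endpoint path comes from the steps above; the header names in the query comment are assumptions):

```shell
# Sketch only — the real logic lives in scripts/os-upgrade.sh.
device_id_from_mac() {
  # aa:bb:cc:dd:ee:ff -> aabbccddeeff (compact lowercase 12-hex)
  printf '%s' "$1" | tr -d ':' | tr '[:upper:]' '[:lower:]'
}
# Query, roughly (header names assumed):
#   curl -fsS "$OS_UPGRADE_URL/api/v1/updates/latest" \
#        -H "X-Device-Id: $(device_id_from_mac "$mac")" \
#        -H "X-Current-Version: $version"
```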
first-boot.sh
Location: scripts/first-boot.sh
First-boot provisioning/import/bootstrap flow plus boot confirmation.
Steps:
- Check for `/data/.completed_first_boot` and exit if it already exists
- Discover provisioning input from a fresh-flash `/boot/config.toml`, USB media, or the LAN bootstrap console
- Validate and import the config into `/data/config/`
- Render and sync rootful and rootless Quadlet units
- Restart the Quadlet sync, LAN apply, and provisioned firewall apply services; fail before slot confirmation if LAN or firewall apply fails
- Mark the current RAUC slot good when RAUC is enabled
- Write a timestamp to `/data/.completed_first_boot`
ssh-wan-toggle.sh
Location: scripts/ssh-wan-toggle.sh
Boot-time SSH-on-WAN rule application.
Logic: If /data/config/ssh-wan-enabled exists, add nftables rule iifname "eth0" tcp dport 22 accept with
comment SSH-WAN-dynamic.
ssh-wan-reload.sh
Location: scripts/ssh-wan-reload.sh
Runtime SSH-on-WAN toggle (remove and re-add rule).
Logic: Find and delete existing SSH-WAN-dynamic rule by handle, then re-add if flag file exists. Idempotent.
mise Task Scripts
These are the .mise/tasks/ scripts invoked via mise run.
flash
Location: .mise/tasks/flash
Cross-platform disk flasher (macOS + Linux).
| Argument | Description |
|---|---|
| <disk> | Target device (e.g., /dev/disk4) |
| -i <path> | Image file (auto-detected if not specified) |
| -y | Skip confirmation |
macOS features: Converts /dev/diskN to /dev/rdiskN for unbuffered I/O; refuses to write to boot disk; ejects
after flash.
serial:capture
Location: .mise/tasks/serial/capture
Serial console capture wrapper with auto-reconnect.
| Flag | Default | Description |
|---|---|---|
| -p | /dev/cu.usbserial-DM02496T | Serial device |
| -l | /tmp/rock64-serial.log | Log file |
| -t | 0 (infinite) | Capture timeout |
| --bg | (flag) | Run in background |
Launches scripts/serial-capture.py in a nix-shell with pyserial.
serial:shell
Location: .mise/tasks/serial/shell
Interactive serial console via minicom (1.5 Mbaud, no hardware flow control). Uses nix build nixpkgs#minicom to
resolve the minicom binary.
config/lan-range
Location: .mise/tasks/config/lan-range
Updates LAN gateway/DHCP configuration across all files.
| Flag | Default | Description |
|---|---|---|
| --gateway-cidr | 172.20.30.1/24 | Gateway IP and subnet |
| --dhcp-start | 172.20.30.10 | DHCP pool start |
| --dhcp-end | 172.20.30.254 | DHCP pool end |
Modifies: modules/networking.nix, modules/lan-gateway.nix, scripts/os-verification.sh.