Exploring Docker containers on FreeBSD
1835 words, 9 minutes
When it comes to running (Linux) Docker containers on BSD hosts, I usually end
up spinning up an Alpine or Debian virtual machine with the Docker engine.
But I read that podman was available on FreeBSD and able to run Docker
containers.
Here’s what I’ve learned so far.
BTW, it’s probably worth reading the following before going further.
- Handbook Chapter 12. Linux Binary Compatibility
- Handbook Chapter 17. Jails and Containers
- FreeBSD Containers and Orchestration
- Installing podman on FreeBSD
- podman package message
- A brief introduction to OCI containers on FreeBSD
- Turn podman containers into jails
Install and configure podman
Once the FreeBSD system is up and running, enable Linux emulation on the host to be able to actually run Linux binaries.
# service linux enable
# service linux start
Dedicate a ZFS dataset to the containers if it makes sense for storage layout,
backup, or feature (compression, etc.) reasons. In my case, the main reason was
to avoid using zroot/ROOT/default.
# zfs create -o mountpoint=/var/db/containers znvme/podman
Install podman your preferred way; I went with binary packages. Then follow
the post-installation recommendations.
# pkg install podman
# service podman enable
# mount -t fdescfs fdesc /dev/fd
# echo 'fdesc /dev/fd fdescfs rw 0 0' >> /etc/fstab
Enable pf (if not already done) and configure it for podman using the content
of the /usr/local/etc/containers/pf.conf.sample file. Depending on the host
configuration, one can either replace pf.conf with the podman example or
append the relevant directives to an existing one.
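On a host with no pf configuration yet, a minimal sketch could look like this, reusing the podman sample verbatim (merge the directives instead if a pf.conf already exists, and adjust any interface names in the copied file to match the host if needed):
# service pf enable
# cp /usr/local/etc/containers/pf.conf.sample /etc/pf.conf
# service pf start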
Hello World container
The simple Hello World container can now be run:
# podman run --rm quay.io/dougrabson/hello
Trying to pull quay.io/dougrabson/hello:latest...
Getting image source signatures
Copying blob b13a5ec7f3d2 done |
Copying config f81c971736 done |
Writing manifest to image destination
!... Hello Podman World ...!
.--"--.
/ - - \
/ (O) (O) \
~~~| -=(,Y,)=- |
.---. /` \ |~~
~/ o o \~~~~.----. ~~
| =(X)= |~ / (O (O) \
~~~~~~~ ~| =(Y_)=- |
~~~~ ~~~| U |~~
Project: https://github.com/containers/podman
Website: https://podman.io
Documents: https://docs.podman.io
Twitter: @Podman_io
As with Docker, the --rm flag makes everything ephemeral.
The container image is fetched and locally assembled; then an instance is created, started, run and deleted. It’s just that simple, from a user’s perspective.
Uptime Kuma container
According to the documentation, running Uptime Kuma is all about passing the relevant parameters.
Using podman, a simple incantation would be:
# podman run --os=linux -p 3001:3001 \
--name uptime-kuma --rm docker.io/louislam/uptime-kuma:2
From there, browsing to http://host:3001 starts the configuration
wizard. Once done, sending a Ctrl-C to the podman command stops the
container and deletes the instance (because of --rm).
To have a real-life instance, a persistent volume can be attached to the
container and podman parameters can be adjusted.
# podman run --os=linux -p 3001:3001 \
--name uptime-kuma --volume uptime-kuma:/app/data \
docker.io/louislam/uptime-kuma:2
When the podman command ends, the data is still available.
# podman volume list
DRIVER VOLUME NAME
local uptime-kuma
# podman ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cc32aa285efe docker.io/louislam/uptime-kuma:2 node server/serve... About a minute ago Exited (0) 7 seconds ago 0.0.0.0:3001->3001/tcp uptime-kuma
Using the “podman start uptime-kuma” command will start the container
with all its persistent data.
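For instance, once the previous run has exited:
# podman start uptime-kuma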
Creating the container with a --restart=policy flag enables it to be
restarted automatically; a sketch of this is shown below. It is also possible
to update the policy if it needs to be changed.
# podman update --restart=always uptime-kuma
In my testing, “always” behaves like Docker’s “unless-stopped”.
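For reference, here is a hedged sketch of creating the container with the restart policy from the start, reusing the same image, port and volume as above; -d detaches it instead of keeping the terminal attached:
# podman run --os=linux -d -p 3001:3001 \
    --restart=always --name uptime-kuma \
    --volume uptime-kuma:/app/data \
    docker.io/louislam/uptime-kuma:2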
For such “simple” containers, using podman on FreeBSD really feels
like using docker (and probably also podman) on Linux.
Uptime Kuma jail
When you have a look at what happens on the host when using podman,
you can see that a container is run as a FreeBSD jail. You can see a
reference to it using jls, you can log into it using jexec, and you can
kill it using jail -r.
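For example (the JID is a placeholder for whatever jls reports for the running container; the shell path is an assumption about the image contents):
# jls
# jexec <JID> /bin/sh
# jail -r <JID>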
When you take a look at what happens on the ZFS side, you can see
that a dataset is created for the Docker image. Volumes are not visible
as datasets but appear as directories under
/var/db/containers/storage/volumes. There seem to be ways to use ZFS
datasets as volumes if they are set up before the container is started,
but I haven’t found a parameter that says “create a ZFS dataset for each
podman volume”.
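As an untested sketch of that idea, based on the volume layout observed above, one could mount a dedicated dataset at the path podman uses for the volume’s data before the first run that references it (the dataset name here is hypothetical):
# zfs create -o mountpoint=/var/db/containers/storage/volumes/uptime-kuma/_data \
    znvme/podman/uptime-kuma-volume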
Given those two observations, it seems pretty straightforward to convert
a Docker container into a Linux Jail. Write down the Jail parameters that
podman used; they may be needed later on when creating the FreeBSD jail
configuration.
# jls -s -j <JID> | tr ' ' '\n' > /tmp/uptime.conf
Write down what is actually run in the container.
# podman inspect --format 'WORKDIR={{.Config.WorkingDir}}\n\
ENTRYPOINT={{.Config.Entrypoint}}\nCMD={{.Config.Cmd}}' uptime-kuma
WORKDIR=/app
ENTRYPOINT=[/usr/bin/dumb-init --]
CMD=[node server/server.js]
Stop the container to access its storage in a stable state.
# podman stop uptime-kuma
Snapshot the container dataset (the snapshot name is arbitrary), then use the ZFS send/receive feature to create a Jail dataset that is a copy of it.
# grep '^path=' /tmp/uptime.conf
path=/var/db/containers/storage/zfs/graph/5dc6ecad748201378fd761852c79f65665b9f7af63661be7c5bb55f259b2351e
# zfs list | grep 5dc6ecad748201378fd761852c79f65665b9f7af63661be7c5bb55f259b2351e
znvme/podman/5dc6ecad748201378fd761852c79f65665b9f7af63661be7c5bb55f259b2351e 4.55M 2.41T 969M legacy
# zfs snapshot znvme/podman/5dc6ecad748201378fd761852c79f65665b9f7af63661be7c5bb55f259b2351e@export
# zfs send znvme/podman/5dc6ecad748201378fd761852c79f65665b9f7af63661be7c5bb55f259b2351e@export | \
    zfs receive znvme/jails/uptime-kuma
# zfs set mountpoint=/jails/uptime-kuma znvme/jails/uptime-kuma
Create a ZFS dataset that will host the persistent data, and copy in the data created during the previous container run.
# zfs create znvme/jails/uptime-kuma-data
# zfs set mountpoint=/jails/uptime-kuma/app/data \
znvme/jails/uptime-kuma-data
# cd /var/db/containers/storage/volumes/uptime-kuma/_data
# tar cpf - . | tar xpf - -C /jails/uptime-kuma/app/data/
# cd -
Create the jail configuration file based on the previously gathered information. Mine looks like:
# vi /etc/jail.conf.d/uptime-kuma.conf
uptime-kuma {
    path = "/jails/uptime-kuma";
    host.hostname = "${name}";
    ip4 = inherit;
    interface = igc0;
    enforce_statfs = 1;
    mount += "devfs $path/dev devfs rw 0 0";
    mount += "tmpfs $path/dev/shm tmpfs rw 0 0";
    mount += "fdescfs $path/dev/fd fdescfs rw,linrdlnk 0 0";
    mount += "linprocfs $path/proc linprocfs rw 0 0";
    mount += "linsysfs $path/sys linsysfs rw 0 0";
    exec.start = ". /etc/profile; cd /app; /usr/bin/dumb-init -- node server/server.js &";
    exec.stop = "/bin/pkill -P 1 || /usr/bin/true";
}
For some reason, using service jail on a Linux Jail always gives me:
Starting jails: cannot start jail “uptime-kuma”:
jail: uptime-kuma: getpwnam: No such file or directory
jail: uptime-kuma: /bin/sh -c cd /app; /bin/bash: failed
This does not happen when invoking jail directly on the command line, so I
guess it comes from the way the FreeBSD rc scripts prepare the environment
before running the jail command. Anyway, after some digging, it can be
solved by adding a generic password database in FreeBSD format to the jail.
# fetch -o /tmp/base.txz \
https://download.freebsd.org/releases/amd64/14.3-RELEASE/base.txz
# tar -xzpf /tmp/base.txz -C /jails/uptime-kuma/ ./etc/pwd.db
The container is now running as a classical FreeBSD Linux jail.
The update process would be something like:
- generate a new updated container using podman.
- create a new ZFS dataset from its content.
- stop the Jail and configure the new dataset as the Jail storage.
- start the Jail with the updated image content.
Uptime Kuma jail (alternate)
I have installed podman on my local FreeBSD test machine, but I don’t
want to install it onto my remote production server if I don’t have to.
Luckily, podman provides an equivalent of docker container export to get
the container’s filesystem as an archive, which can then be transferred to
a remote server and used to populate a ZFS dataset.
Grab a Docker container and turn its filesystem into a tar file:
# podman create --os=linux --name kuma docker.io/louislam/uptime-kuma:2
# podman export -o /tmp/uptime-kuma.tar kuma
# podman rm kuma
Transfer the archive to some remote host where Linux emulation and Jails services have been configured and started.
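Any transfer method will do, for instance scp (“remote” being a placeholder for the production host):
# scp /tmp/uptime-kuma.tar remote:/tmp/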
Create the system and data ZFS datasets:
# zfs create -o mountpoint=/jails/uptime-kuma znvme/jails/uptime-kuma
# zfs create -o mountpoint=/jails/uptime-kuma/app/data znvme/jails/uptime-kuma-data
Replicate the container filesystem onto the dataset. To make later updates easier, I just use rsync every time:
# TMPDIR="$(mktemp -d /tmp/uptime-kuma.XXXXXXXXXX)"
# tar xpf /tmp/uptime-kuma.tar -C "$TMPDIR/"
# cd "$TMPDIR" && rsync -aqvH --delete . /jails/uptime-kuma/ && \
cd - && rm -rf "$TMPDIR" && unset TMPDIR
Solve the getpwnam issue:
# fetch -o /tmp/base.txz \
https://download.freebsd.org/releases/amd64/14.3-RELEASE/base.txz
# tar -xzpf /tmp/base.txz -C /jails/uptime-kuma/ ./etc/pwd.db
Be sure it uses my DNS servers:
# cp -p /etc/resolv.conf /jails/uptime-kuma/etc
The Jail configuration is the same as the one in the previous section. You can just (re)use it as-is.
When the Jail is started (service jail start uptime-kuma), point a web
browser at http://host:3001. Et voilà !
Update the jail
Updating the Jail is much like deploying new Jail content while taking care not to overwrite the persistent data.
Stop the running Jail, create safety snapshots and unmount the data ZFS dataset:
# service jail stop uptime-kuma
# zfs snapshot znvme/jails/uptime-kuma@working
# zfs snapshot znvme/jails/uptime-kuma-data@working
# zfs umount znvme/jails/uptime-kuma-data
Create the archive for the new container version:
# podman create --os=linux --name kuma docker.io/louislam/uptime-kuma:2
# podman export -o /tmp/uptime-kuma.tar kuma
# podman rm kuma
Deploy the new software version onto the Jail dataset:
# TMPDIR="$(mktemp -d /tmp/uptime-kuma.XXXXXXXXXX)"
# tar xpf /tmp/uptime-kuma.tar -C "$TMPDIR/"
# cd "$TMPDIR" && rsync -aqvH --delete . /jails/uptime-kuma/ && \
cd - && rm -rf "$TMPDIR" && unset TMPDIR
Reapply final touches to the Jail:
# tar -xzpf /tmp/base.txz -C /jails/uptime-kuma/ ./etc/pwd.db
# cp -p /etc/resolv.conf /jails/uptime-kuma/etc
Mount the data ZFS dataset, start the Jail and check that everything went as expected and service is back online.
# zfs mount znvme/jails/uptime-kuma-data
# service jail start uptime-kuma
When you are confident that everything went well, the snapshots can be deleted.
# zfs destroy znvme/jails/uptime-kuma@working
# zfs destroy znvme/jails/uptime-kuma-data@working
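If something went wrong instead, the snapshots allow going back to the pre-update state; a hedged sketch (the zfs mount is only needed if the data dataset was still unmounted):
# service jail stop uptime-kuma
# zfs rollback znvme/jails/uptime-kuma@working
# zfs rollback znvme/jails/uptime-kuma-data@working
# zfs mount znvme/jails/uptime-kuma-data
# service jail start uptime-kuma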
Final thoughts
This Uptime Kuma implementation works well as it is very simple: a single container shipping all its dependencies.
podman is often highlighted because “rootless containers allow you to contain
privileges without compromising functionality”. Unfortunately, when you
try to run podman as an unprivileged user on FreeBSD, you get:
Error: rootless mode is not supported on FreeBSD
This is where Jails come to the rescue.
If I wanted to run several such instances with my current configuration,
I would get into trouble because the container expects to be exposed on
port 3001. There is no environment variable that can be set to tell the
container to listen on another port, and a quick recursive grep reveals
that port 3001 is hard-coded in many places… Using “podman run --os=linux -p 8666:3001 (...)”, I could manage it. But using the jail
configuration would require a different IP for each instance. No big deal,
as I could add some NAT, etc. But that’s something to plan for.
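As an illustration of the NAT idea, a hedged pf sketch that would redirect host port 8666 to a second instance bound to its own jail IP (the address is hypothetical, the interface matches the jail configuration above):
rdr pass on igc0 inet proto tcp from any to any port 8666 -> 192.168.1.51 port 3001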
Not all applications come as a single container. Many are composed of
several containers (web server, application runtime, database, etc.). In
such a case, I’m not sure yet how easy it would be to use them with
podman and/or convert them to Jails. Docker Compose takes care of
everything on Linux and such apps are quite easy to manage there. It seems
you need some extra tool (podman-compose, for instance) to handle
docker-compose files with podman. I guess turning such an application into
Jails would be as simple/complicated as creating a Jail for each container
and having them talk to each other. That’s probably when one would have to
learn about FreeBSD VNET Jails…
Until then, That’s All Folks!