Running OpenBSD on OmniOS using bhyve
3009 words, 15 minutes
The bhyve hypervisor has been ported to Illumos and provides an alternative to KVM. SmartOS created an OpenBSD image but it’s quite old. I don’t know (yet) how to upgrade it or build more up-to-date images. But I managed to run OpenBSD 7.4 on OmniOS.
Preparation
Have a look at zadm doc -b bhyve to see all the available options for bhyve virtual machines.
The OpenBSD 7.4 installation ISO does not support UEFI. This makes it impossible to use with bhyve to run the installer, AFAIK. This will soon be a thing of the past but, until then, an option is to use the disk image that you would normally write to a USB device to boot from.
I’m using a dedicated dataset to store ISOs.
# zfs create -o mountpoint=/zones/iso tank/iso
# cd /zones/iso
# wget https://cdn.openbsd.org/pub/OpenBSD/7.4/amd64/install74.img
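It may be worth checking the download before looping it onto a device. A hedged sketch: the verify_sha256 helper is my own, and the real checksum comes from the SHA256 file published alongside install74.img on the mirror.

```shell
#!/bin/sh
# Sketch: compare a file's SHA256 against the value published on the
# mirror. The hash in the usage line is a placeholder, not the real one.
verify_sha256() {
  _file=$1
  _expected=$2
  _actual=$(sha256sum "$_file" | awk '{print $1}')
  [ "$_actual" = "$_expected" ]
}

# Usage (placeholder hash -- take the real value from the SHA256 file):
# verify_sha256 /zones/iso/install74.img cafe...babe
```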
Turn the image file into a device that can be attached to a VM:
# lofiadm -r -a /zones/iso/install74.img
/dev/lofi/1
Create an OpenBSD virtual machine using zadm:
# zadm create -b bhyve openbsd74
{
"autoboot" : "false",
"attr" : [
{
"name" : "disk",
"type" : "string",
"value" : "/dev/lofi/1"
}
],
"bootdisk" : {
"blocksize" : "8k",
"path" : "tank/zones/openbsd74/root",
"size" : "10G",
"sparse" : "false"
},
"brand" : "bhyve",
"device" : [
{
"match" : "/dev/lofi/1"
}
],
"ip-type" : "exclusive",
"net" : [
{
"global-nic" : "igb0",
"physical" : "openbsd74"
}
],
"ram" : "2G",
"rng" : "on",
"type" : "generic",
"vcpus" : "2",
"vnc" : "off",
"zonename" : "openbsd74",
"zonepath" : "/zones/openbsd74"
}
A ZFS file system has been created for this zone.
There are various (old) posts online that point to using an “AMD”
hostbridge or alternate diskif / netif values. From what I could test,
the best parameter to change from its default is “type”. This solved
all my weird VM behaviour.
Edit 2024-01-20: I had a few issues with VMs where the network would stop
working after about 400GB of data were transferred. After a bunch of
tests with different hardware and configurations, it is not clear what
happened. It seems using the default bhyve parameters is enough to run
the OpenBSD VM properly.
Installation
Start the zone while attaching to the console. Tell the OpenBSD boot
loader to use com0.
# zadm start -c openbsd74
[Connected to zone 'openbsd74' console]
[NOTICE: Zone booting up]
probing: pc0 com0 com1 mem[640K 2025M 424K 16M 20K 3M]
disk: hd0 hd1*
>> OpenBSD/amd64 BOOTX64 3.65
boot> set tty com0
>> OpenBSD/amd64 BOOTX64 3.65
boot> <ENTER>
cannot open hd0a:/etc/random.seed: No such file or directory
booting hd0a:/7.4/amd64/bsd.rd: 3969732+1655808+3886664+0+708608
[109+444888+297417]=0xa76798
entry point at 0x1001000
Copyright (c) 1982, 1986, 1989, 1991, 1993
The Regents of the University of California. All rights
reserved.
Copyright (c) 1995-2023 OpenBSD. All rights reserved.
https://www.OpenBSD.org
OpenBSD 7.4 (RAMDISK_CD) #1322: Tue Oct 10 09:07:38 MDT 2023
deraadt@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/RAMDISK_CD
real mem = 2108211200 (2010MB)
avail mem = 2040360960 (1945MB)
random: good seed from bootblocks
mainbus0 at root
bios0 at mainbus0: SMBIOS rev. 2.8 @ 0x7eafa000 (11 entries)
bios0: vendor BHYVE version "14.0" date 10/10/2021
bios0: OmniOS OmniOS HVM
acpi0 at bios0: ACPI 4.0
(...)
Welcome to the OpenBSD/amd64 7.4 installation program.
(I)nstall, (U)pgrade, (A)utoinstall or (S)hell?
Proceed to install as usual.
With “ip-type” set to “exclusive”, the VM has the same network access as the OmniOS host, as if they were both connected to the same switch. In my case, the VM can get an IP from my LAN DHCP server.
Don’t forget to configure the console. I once used the “9600” value and it seems to work properly too. I’m not sure what the best value is, but so far “115200” works fine.
Change the default console to com0? [yes] yes
Available speeds are: 9600 19200 38400 57600 115200.
Which speed should com0 use? (or 'done') [9600] 115200
The zvol and install disk image appear as two disks. Don’t install OpenBSD on the wrong one :)
Available disks are: sd0 sd1.
Which disk is the root disk? ('?' for details) [sd0] ?
sd0: VirtIO, Block Device (10.0G)
sd1: VirtIO, Block Device (0.6G)
Available disks are: sd0 sd1.
Which disk is the root disk? ('?' for details) [sd0] <ENTER>
The bhyve VM expects a UEFI system. Don’t forget to select the (G)PT configuration if you want the VM to boot on its own.
No valid MBR or GPT.
Use (W)hole disk MBR, whole disk (G)PT or (E)dit? [whole] G
An EFI/GPT disk may not boot. Proceed? [no] yes
Setting OpenBSD GPT partition to whole sd0...done.
The auto-allocated layout for sd0 is:
# size offset fstype [fsize bsize cpg]
a: 1177.2M 532544 4.2BSD 2048 16384 1 # /
b: 256.0M 2943424 swap
c: 10240.0M 0 unused
d: 3072.0M 3467712 4.2BSD 2048 16384 1 # /usr
e: 2048.0M 9759168 4.2BSD 2048 16384 1 #
/home
i: 260.0M 64 MSDOS
Use (A)uto layout, (E)dit auto layout, or create (C)ustom layout? [a] a
/dev/rsd0a: 1177.2MB in 2410880 sectors of 512 bytes
6 cylinder groups of 202.50MB, 12960 blocks, 25920 inodes each
/dev/rsd0e: 2048.0MB in 4194304 sectors of 512 bytes
11 cylinder groups of 202.50MB, 12960 blocks, 25920 inodes each
/dev/rsd0d: 3072.0MB in 6291456 sectors of 512 bytes
16 cylinder groups of 202.50MB, 12960 blocks, 25920 inodes each
Available disks are: sd1.
Which disk do you wish to initialize? (or 'done') [done]
(...)
Saving configuration files... done.
Making all device nodes... done.
Multiprocessor machine; using bsd.mp instead of bsd.
fw_update: add intel; update none
Relinking to create unique kernel... done.
CONGRATULATIONS! Your OpenBSD install has been successfully completed!
Exit to (S)hell, (H)alt or (R)eboot? [reboot]
If you wish to use this installation as a template to deploy all your next OpenBSD instances, jump to the next section now.
If you plan to use this virtual machine as-is, select the (H)alt option,
quit the console (using ~~.), stop the zone and deallocate the block
device.
# zadm poweroff openbsd74
# zonecfg -z openbsd74 remove device match=/dev/lofi/1
# zonecfg -z openbsd74 remove attr value=/dev/lofi/1
# lofiadm -d /zones/iso/install74.img
Now, start the virtual machine and enjoy.
# zadm start openbsd74
# zadm console openbsd74
[Connected to zone 'openbsd74' console]
Run syspatch(8) to install:
001_xserver 002_msplit 003_patch 004_ospfd 005_tmux
006_httpd 007_perl 008_vmm 009_pf
starting local daemons: cron.
Tue Dec 12 20:16:03 CET 2023
OpenBSD/amd64 (openbsd74.home.arpa) (tty00)
login: root
A dmesg example is available here.
Template image
You can install all your OpenBSD bhyve instances using the previous steps. Or you can modify the installer to provide an unattended install. Or you can use this installed zone as a base to speed up future deployments.
While still in the installer, drop to (S)hell and chroot to the installed system. I like to add an authorized SSH key for root. One could also configure smtpd(8) to authenticate to the LAN’s relay, add and configure some monitoring packages, etc.
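For the smtpd(8) part, a minimal sketch of a relay configuration (mail.home.arpa and the “lanrelay” credential label are made-up examples; see smtpd.conf(5)):

```
# /etc/mail/smtpd.conf -- relay everything through the LAN smarthost
# (sketch: mail.home.arpa and the "lanrelay" label are examples)
table secrets file:/etc/mail/secrets
listen on lo0
action "outbound" relay host smtp+tls://lanrelay@mail.home.arpa auth <secrets>
match for any action "outbound"
```

/etc/mail/secrets then holds a line like “lanrelay user:password”, which is why its permissions are tightened during image creation.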
Image creation
As I’ll be using this installation as a template, I want the VM to use
another hostname and auto-apply syspatches at first boot. To do so, I
simply write an /etc/rc.firsttime. That script will ask for a new
hostname, modify the proper configuration files, delete the old SSH
keys, run syspatch and reboot.
CONGRATULATIONS! Your OpenBSD install has been successfully completed!
When you login to your new system the first time, please read your mail
using the 'mail' command.
Exit to (S)hell, (H)alt or (R)eboot? [reboot] s
To boot the new system, enter 'reboot' at the command prompt.
# chroot /mnt /bin/ksh
# echo "ssh-ed25519 (...)" > /root/.ssh/authorized_keys
# echo change_me@example > /root/.forward
# TERM=vt220 vi /etc/mail/smtpd.conf
# echo "(...)" > /etc/mail/secrets
# chown root:_smtpd /etc/mail/secrets
# chmod 0640 /etc/mail/secrets
# cp /etc/examples/doas.conf /etc/
# TERM=vt220 vi /etc/doas.conf
# cat >> /etc/rc.firsttime
echo "************************************************************************"
echo "This system was built from a template."
echo -n "System hostname? (short form, e.g. 'foo') "
read _hostname
/usr/bin/sed -E -i "s/openbsd74/$_hostname/g" /etc/myname
/bin/rm /etc/ssh/ssh_host*
echo "Applying syspatches..."
/usr/sbin/syspatch
echo "Rebooting in 5 seconds..."
/bin/sleep 5
/sbin/shutdown -r now
^D
# exit
# halt
syncing disks... done
The operating system has halted.
Please press any key to reboot.
The template is now ready to be used. Turn the zone off, remove the installer image and deallocate the block device.
# zadm poweroff openbsd74
# zonecfg -z openbsd74 remove device match=/dev/lofi/1
# zonecfg -z openbsd74 remove attr value=/dev/lofi/1
# lofiadm -d /zones/iso/install74.img
Create a copy of the installed Zvol. Create a template file out of the installed zone. Both will be used to create new OpenBSD instances.
# zfs send tank/zones/openbsd74/root > /zones/openbsd74.zvol
# zadm show openbsd74 > /zones/openbsd74.zadm
# vi /zones/openbsd74.zadm
{
"acpi" : "on",
"autoboot" : "false",
"bootdisk" : {
"blocksize" : "8K",
"path" : "tank/zones/__ZONENAME__/root",
"size" : "10G",
"sparse" : "false"
},
"bootrom" : "BHYVE",
"brand" : "bhyve",
"diskif" : "virtio",
"hostbridge" : "i440fx",
"ip-type" : "exclusive",
"net" : [
{
"global-nic" : "igb0",
"physical" : "__ZONENAME__0"
}
],
"netif" : "virtio",
"ram" : "2G",
"rng" : "on",
"type" : "openbsd",
"vcpus" : "2",
"vnc" : {
"enabled" : "off"
},
"xhci" : "on",
"zonename" : "__ZONENAME__",
"zonepath" : "/zones/__ZONENAME__"
}
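As far as I can tell, zadm replaces every __ZONENAME__ placeholder with the name given on the command line. The effect should be equivalent to this little sketch (render_template is my own helper, not a zadm command):

```shell
#!/bin/sh
# Sketch of the placeholder expansion: every __ZONENAME__ in the
# template becomes the zone name (an assumption about zadm's behaviour).
render_template() {
  _tpl=$1
  _name=$2
  sed "s/__ZONENAME__/${_name}/g" "$_tpl"
}
```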
You can now delete the template VM.
Image deployment
Use zadm to create a new OpenBSD instance from the Zvol and the config
file that were just created.
# zadm create -b bhyve -i /zones/openbsd74.zvol \
-t /zones/openbsd74.zadm puffy
Going to overwrite the boot disk 'tank/zones/puffy/root'
with the provided image. Do you want to continue [Y/n]? y
receiving full stream of tank/zones/openbsd74/root@--head-- into tank/zones/puffy/root@--head--
2.5GiB 00:00:04 [629.7MiB/s]
received 2,46GB stream in 6 seconds (420MB/sec)
The new zone can now be started.
# zadm start -c puffy
During the first boot, OpenBSD will ask for a new name, run syspatch and reboot. From there, kindergarten is open.
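If you end up deploying many instances, those two zadm arguments can be wrapped in a tiny helper. A sketch, assuming the file names from the previous section; setting DRYRUN=echo prints the command instead of running it.

```shell
#!/bin/sh
# Sketch: create a zone from the saved Zvol stream and zadm template.
# DRYRUN=echo turns this into a dry run that only prints the command.
deploy_openbsd() {
  _name=$1
  ${DRYRUN:-} zadm create -b bhyve \
    -i /zones/openbsd74.zvol \
    -t /zones/openbsd74.zadm "$_name"
}
```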
Notes on sizing changes
If the new zone has a different number of vCPUs, no problem: OpenBSD detects it after a clean boot. The same happens if the RAM size was changed from the template value. Check that the swap size matches your changes.
If the disk size was changed, your mileage may vary. If you shrank the boot disk, chances are the zone won’t boot properly. If you grew the boot disk, OpenBSD should detect it. In my testing, the kernel detected the new disk size properly and both disklabel(8) and growfs(8) helped recover the new space.
# dmesg | grep sd0
sd0 at scsibus1 targ 0 lun 0: <VirtIO, Block Device, >
sd0: 65536MB, 512 bytes/sector, 134217728 sectors
# df -h | egrep '^Filesystem|/home$'
Filesystem Size Used Avail Capacity Mounted on
/dev/sd0l 2.4G 2.0K 2.3G 1% /home
# umount /home
# disklabel -E sd0
sd0> p g
OpenBSD area: 532544-33554399; size: 15.7G; free: 0.0G
# size offset fstype [fsize bsize cpg]
a: 0.4G 532544 4.2BSD 2048 16384 6461 # /
b: 0.6G 1366272 swap # none
c: 64.0G 0 unused
(...)
l: 2.5G 28297888 4.2BSD 2048 16384 12960 # /home
sd0> b
Starting sector: [532544]
Size ('*' for entire disk): [33021855] *
sd0*> p g
OpenBSD area: 532544-134217728; size: 63.7G; free: 48.0G
(...)
sd0*> m l
offset: [28297888]
size: [5256480] *
FS type: [4.2BSD]
sd0*> p g
OpenBSD area: 532544-134217728; size: 63.7G; free: 0.0G
# size offset fstype [fsize bsize cpg]
a: 0.4G 532544 4.2BSD 2048 16384 6461 # /
b: 0.6G 1366272 swap # none
c: 64.0G 0 unused
(...)
l: 50.5G 28297888 4.2BSD 2048 16384 12960 # /home
sd0*> w
sd0> q
# growfs /dev/sd0l
We strongly recommend you to make a backup before growing the Filesystem
Did you backup your data (Yes/No) ? yes
new filesystem size is: 26479960 frags
Warning: 166240 sector(s) cannot be allocated.
growfs: 51637.5MB (105753600 sectors) block size 16384, fragment size 2048
using 255 cylinder groups of 202.50MB, 12960 blks, 25920 inodes.
super-block backups (for fsck -b #) at:
5391520, 5806240, 6220960, 6635680, 7050400, 7465120, 7879840, 8294560,
(...)
102021280, 102436000, 102850720, 103265440, 103680160, 104094880, 104509600,
104924320, 105339040
# fsck /dev/sd0l
** /dev/rsd0l
** Last Mounted on /home
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3 - Check Connectivity
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cyl groups
1 files, 1 used, 25608077 free (13 frags, 3201008 blocks, 0.0% fragmentation)
MARK FILE SYSTEM CLEAN? [Fyn?] y
***** FILE SYSTEM WAS MODIFIED *****
# mount /home
# df -h | egrep '^Filesystem|/home$'
Filesystem Size Used Avail Capacity Mounted on
/dev/sd0l 48.8G 2.0K 46.4G 1% /home
A lot of space is assigned to /usr/obj and /usr/src. That’s the
standard partitioning layout but I never use sources on my production
VMs. To recover the space and use it for /home:
# tar czpf /home.tar.gz /home
# umount /usr/src /usr/obj /home
# disklabel -E sd0
sd0> d j
sd0*> d k
sd0*> d l
sd0*> a j
offset: [14108320]
size: [19446079] *
FS type: [4.2BSD]
sd0*> w
sd0> q
# newfs /dev/rsd0j
/dev/rsd0j: 9495.1MB in 19446048 sectors of 512 bytes
(...)
# vi /etc/fstab
# mount /home
# tar xzpf /home.tar.gz -C /
Things will be more complicated if you want to resize any other partitions. In that case, you’re probably better off with a fresh install.
Add network interfaces
The virtual machine can be given several NICs so that it can be used as a firewall. Those network interfaces can either be linked to a physical interface on the host or to a virtual interface (aka etherstub). Quoting dladm(8):
An Ethernet stub can be used instead of a physical NIC to create VNICs. VNICs created on an etherstub will appear to be connected through a virtual switch, allowing complete virtual networks to be built without physical hardware.
As an example, I’ve added an interface bound to the physical interface of the host - it will behave like the initial one - and another interface bound to an etherstub - it will behave as if it were connected to another switch.
# dladm create-etherstub private0
# zonecfg -z openbsd74 "\
add net; set global-nic=igb0; set physical=openbsd741; end; \
add net; set global-nic=private0; set physical=openbsd742; end"
Boot the VM and configure the NICs as you wish.
omnios# zadm boot -c openbsd74
openbsd74# dmesg | grep "^vio[0-9]"
vio0 at virtio1: address 02:08:20:66:ed:ec
vio1 at virtio2: address 02:08:20:2e:a1:0f
vio2 at virtio3: address 02:08:20:7c:2f:e2
openbsd74# ifconfig vio1 autoconf
openbsd74# ifconfig vio2 inet 192.0.2.10 netmask 255.255.255.0 up
omnios# dladm show-link
LINK CLASS MTU STATE BRIDGE OVER
igb0 phys 1500 up -- --
(...)
private0 etherstub 9000 up -- --
openbsd740 vnic 1500 up -- igb0
openbsd741 vnic 1500 up -- igb0
openbsd742 vnic 9000 up -- private0
nemo0 vnic 9000 up -- private0
In my example, another OpenBSD VM (named nemo) has its unique NIC connected to private0. It can’t reach my LAN. It can only ping 192.0.2.10. I will have to turn the “openbsd74” VM into a router so that the “nemo” VM can access the Wild Wild Web.
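That router setup is plain OpenBSD: enable forwarding and NAT the private segment out through the LAN-facing NIC. A rough, untested sketch on the “openbsd74” VM (interface names match the dmesg output above; see pf.conf(5)):

```
# /etc/sysctl.conf
net.inet.ip.forwarding=1

# /etc/pf.conf -- NAT private0 traffic (vio2) out through the LAN NIC (vio0)
ext_if = "vio0"
int_if = "vio2"
set skip on lo
match out on $ext_if inet from $int_if:network nat-to ($ext_if)
pass
```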
Add storage
The template I built has quite small storage. It’s OK for a firewall, a DNS or DHCP server. But I need more storage for my cloud storage services.
Attaching a dataset
Attaching a dataset to an OpenBSD bhyve VM does not work. Well, it works but the VM has no access to it. That may be related to OpenBSD having no ZFS support. I haven’t tested this configuration (yet) with another OS so I’m not 100% sure it works at all.
Attaching a Zvol
Create a dataset and attach the corresponding volume to the VM:
# zfs create -V 64G tank/abyss
# zonecfg -z openbsd74 "\
add device;\
set match=/dev/zvol/dsk/tank/abyss;\
end;\
add attr;\
set name=disk;\
set type=string;\
set value=/dev/zvol/dsk/tank/abyss;\
end"
# zadm start -c openbsd74
Have a look at what the OS sees:
openbsd74# dmesg | grep '^sd[0-9]'
sd0 at scsibus1 targ 0 lun 0: <VirtIO, Block Device, >
sd0: 10240MB, 512 bytes/sector, 20971520 sectors
sd1 at scsibus2 targ 0 lun 0: <VirtIO, Block Device, >
sd1: 65536MB, 512 bytes/sector, 134217728 sectors
openbsd74# sysctl hw.disknames
hw.disknames=sd0:059c579d2454f74e,sd1:
Partition the disk. Then format and mount the filesystem(s).
openbsd74# disklabel -E sd1
Label editor (enter '?' for help at any prompt)
sd1> a a
offset: [0]
size: [134217728]
FS type: [4.2BSD]
sd1*> p g
OpenBSD area: 0-134217728; size: 64.0G; free: 0.0G
# size offset fstype [fsize bsize cpg]
a: 64.0G 0 4.2BSD 2048 16384 1
c: 64.0G 0 unused
sd1*> w
sd1> q
No label changes.
openbsd74# newfs sd1a
/dev/rsd1a: 65536.0MB in 134217728 sectors of 512 bytes
324 cylinder groups of 202.50MB, 12960 blocks, 25920 inodes each
super-block backups (for fsck -b #) at:
160, 414880, 829600, 1244320, 1659040, 2073760, 2488480, 2903200, 3317920,
3732640, 4147360, 4562080, 4976800, 5391520, 5806240, 6220960, 6635680,
(...)
132295840, 132710560, 133125280, 133540000, 133954720,
openbsd74# mount /dev/sd1a /mnt
openbsd74# df -h
Filesystem Size Used Avail Capacity Mounted on
/dev/sd0a 1.1G 134M 948M 13% /
/dev/sd0e 1.9G 2.0K 1.8G 1% /home
/dev/sd0d 2.9G 1.7G 1.1G 61% /usr
/dev/sd1a 62.0G 2.0K 58.9G 1% /mnt
Migrating a Zvol
If for some reason the VM has to be trashed while keeping the data, the Zvol can be detached from the VM and reattached to some other VM.
# zadm stop openbsd74
# zadm delete openbsd74
# zfs get zoned tank/abyss
NAME PROPERTY VALUE SOURCE
tank/abyss zoned - -
# zonecfg -z nemo "\
> add device;\
> set match=/dev/zvol/dsk/tank/abyss;\
> end;\
> add attr;\
> set name=disk;\
> set type=string;\
> set value=/dev/zvol/dsk/tank/abyss;\
> end"
# zadm start -c nemo
[Connected to zone 'nemo' console]
[NOTICE: Zone booting up]
(...)
OpenBSD/amd64 (nemo.home.arpa) (tty00)
login: root
Password:
nemo# sysctl hw.disknames
hw.disknames=sd0:6821b02cbebf43fa,sd1:cda7854a83e5396c
nemo# disklabel sd1
# /dev/rsd1c:
type: SCSI
disk: SCSI disk
label: Block Device
duid: cda7854a83e5396c
(...)
16 partitions:
# size offset fstype [fsize bsize cpg]
a: 134217728 0 4.2BSD 2048 16384 12960
c: 134217728 0 unused
nemo# mount /dev/sd1a /mnt
nemo# ls -alh /mnt/
total 2098216
drwxr-xr-x 2 root wheel 512B Dec 13 00:39 .
drwxr-xr-x 13 root wheel 512B Dec 13 01:06 ..
-rw-r--r-- 1 root wheel 1.0G Dec 13 00:46 TEST
Attaching an NFS share
Solarish systems can export datasets using SMB and NFS. And OpenBSD can mount an NFS share.
omnios# zfs set sharenfs="rw=@192.0.2.42/32" tank/home
nemo# mount -t nfs -o udp,soft,wsize=32768,rsize=32768 \
192.0.2.2:/home /home
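To make the mount survive reboots, the same options can go into the guest’s fstab(5). A sketch reusing the addresses above; I have not verified this exact line, so check mount_nfs(8) for the option spelling.

```
# /etc/fstab on the "nemo" guest (sketch, same options as the manual mount)
192.0.2.2:/home /home nfs rw,udp,soft,wsize=32768,rsize=32768 0 0
```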
That’s all folks! Bye for now :)