
I want to be able to mount a ZFS pool in a WSL2 Ubuntu distro to make it transparently available to native Windows programs. I have built a custom kernel with ZFS enabled with the help of this guide: https://wsl.dev/wsl2-kernel-zfs/.

Everything works correctly and I can import the zpool and access the data from Windows. Unfortunately, Windows does not persist WSL2 mounts, so this has to be done after every restart. I therefore created a Windows scheduled task which mounts the physical drives and imports the zpool in Ubuntu:

@echo off
wsl --mount \\.\PHYSICALDRIVE0 --bare
wsl --mount \\.\PHYSICALDRIVE1 --bare
wsl --mount \\.\PHYSICALDRIVE2 --bare
wsl --mount \\.\PHYSICALDRIVE3 --bare
wsl --mount \\.\PHYSICALDRIVE4 --bare

wsl -u root zpool import zstore

However, some startup programs rely on the zpool being mounted, and they can fail if it is not mounted before they load, so there is a race condition. Is there a way to delay the automatic startup of those programs, or a workaround to effectively make the WSL-mounted drives persist between restarts?

ᄂ ᄀ

1 Answer


There are one or two excellent threads over in /r/zfs, e.g. https://www.reddit.com/r/zfs/comments/lcdxs5/zfs_in_wsl2_raw_disk_access/


There is also an excellent summary of the importance of case sensitivity for WSL2 ZFS datasets here: https://stackoverflow.com/questions/51591091/apply-setcasesensitiveinfo-recursively-to-all-folders-and-subfolders

And a completely different option from /u/ShaRose: a file-based loopback for the zpool in WSL. Quote and code from her original post: https://www.reddit.com/r/zfs/comments/lcdxs5/comment/gms8056/

So, I thought of this when I first saw this thread, but never really had a chance to get around to trying it out. The idea is rather than use a raw disk, which requires a bit of finagling, if you just need a zfs pool to, say, back up a NAS, set up a disk image on a drive (or more). For some reason, it seems like zpool create on WSL2 doesn't like passing to a file, but setting up a loopback with losetup works.

Edit: Interesting tidbit: if I totally corrupt one 'drive' using dd and urandom, it just says it's thousands of checksum errors. If I detach one, suddenly it's corrupted data.

You can stick it in a script and automate it if you want. dkms autoinstall SHOULD install any modules that were set up as long as they are compatible, and even without that, if you run this before doing, say, apt install zfs-dkms, it should set it all up for you, just requiring modprobe zfs. Some modules don't work, however: wireguard isn't compatible, but virtualbox seems to work just fine.
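The loopback approach described above can be sketched as follows. The image paths, sizes, and raidz layout are illustrative, not from the original post, and a real run needs root, losetup, and a ZFS-enabled kernel, so this sketch defaults to a dry run that only prints the commands it would execute:

```shell
#!/bin/sh
# Sketch of the file-backed loopback approach (illustrative paths/sizes).
# DRY_RUN=1 (the default) only prints the commands; set DRY_RUN=0 to
# execute them for real (requires root and the zfs kernel module).
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

# Create three sparse 1 GiB image files and attach them to loop devices.
for i in 1 2 3; do
  run truncate -s 1G "/zdisks/disk$i.img"
  run losetup "/dev/loop$i" "/zdisks/disk$i.img"
done

# Per the quoted post, zpool create doesn't accept the files directly
# on WSL2, but works against the loop devices.
run zpool create testpool raidz /dev/loop1 /dev/loop2 /dev/loop3
```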

# Fetch the Microsoft WSL2 kernel source matching the running kernel
KERNVER=$(uname -r | cut -f 1 -d'-')
git clone --branch linux-msft-$KERNVER --depth 1 https://github.com/microsoft/WSL2-Linux-Kernel.git ~/kern-$KERNVER
# Reuse the running kernel's configuration
zcat /proc/config.gz > ~/kern-$KERNVER/.config
# Build the kernel and install its modules
make -C ~/kern-$KERNVER -j $(nproc)
make -C ~/kern-$KERNVER -j $(nproc) modules_install
# The built tree is installed with a trailing '+'; symlink it to the name
# `uname -r` reports so dkms and modprobe can find the modules
ln -s /lib/modules/$KERNVER-microsoft-standard-WSL2+ /lib/modules/$KERNVER-microsoft-standard-WSL2
# Rebuild any registered dkms modules (e.g. zfs) against this kernel
dkms autoinstall -k $KERNVER-microsoft-standard-WSL2

root@DESKTOP-FVKI47F:~# dkms status
virtualbox, 6.1.16, 5.4.72-microsoft-standard-WSL2+, x86_64: installed
virtualbox, 6.1.16, 5.4.72-microsoft-standard-WSL2, x86_64: installed
wireguard, 1.0.20201112: added
zfs, 2.0.2, 5.4.72-microsoft-standard-WSL2, x86_64: installed

root@DESKTOP-FVKI47F:~# zpool status
  pool: testpool
 state: ONLINE
config:

        NAME          STATE     READ WRITE CKSUM
        testpool      ONLINE       0     0     0
          raidz3-0    ONLINE       0     0     0
            loop1     ONLINE       0     0     0
            loop2     ONLINE       0     0     0
            loop3     ONLINE       0     0     0
            loop4     ONLINE       0     0     0
            loop5     ONLINE       0     0     0
            loop6     ONLINE       0     0     0
            loop7     ONLINE       0     0     0
            loop8     ONLINE       0     0     0
            loop9     ONLINE       0     0     0

errors: No known data errors
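As for the race condition in the question: one possible workaround (a sketch, not a tested setup) is to launch the dependent startup programs through a wrapper that polls until the pool is actually importable. The `wait_for` helper below is a generic hypothetical; in the scheduled task you could gate the dependent programs on something like `wsl -u root zpool list zstore` succeeding (`zstore` being the pool name from the question; the 60-second timeout is arbitrary):

```shell
#!/bin/sh
# Sketch: poll a command until it succeeds or a timeout expires.
# Example gate (hypothetical, adjust to your setup):
#   wait_for 60 wsl -u root zpool list zstore && start-dependent-programs
wait_for() {
  # $1 = seconds to keep trying; remaining args = command to poll
  timeout=$1; shift
  while [ "$timeout" -gt 0 ]; do
    if "$@" >/dev/null 2>&1; then
      return 0    # command succeeded, stop waiting
    fi
    sleep 1
    timeout=$((timeout - 1))
  done
  return 1        # timed out without success
}
```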