Email address tags in Postfix and Dovecot

What if you could tag the mail address you provide when registering for various services to simplify the management of the inevitable stream of unsolicited mail that follows? If you could register myname+theservicename@mydomain.tld it would make it very easy to recognize mail from that service – and it would make it easy to pinpoint common leaks, whether they’d got their customer database cracked or just sold it to the highest bidder.

The most famous provider of such a service might be Google’s Gmail. But if you run a Postfix server, this functionality is included and may actually already be turned on out of the box. In your main.cf it looks like this:

recipient_delimiter = +

The delimiter can basically be any character that’s valid in the local part of an email address, but obviously you want to avoid using characters that actually are in use in your environment (dots (.) and dashes (-) come to mind).
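How the delimiter splits an address can be illustrated with plain shell parameter expansion (purely illustrative; Postfix does this internally):

```shell
# Split a tagged address the way Postfix does with recipient_delimiter = +
addr="myname+theservicename@mydomain.tld"
local_part="${addr%@*}"        # everything before the @
mailbox="${local_part%%+*}"    # the mailbox the mail is delivered to
tag="${local_part#*+}"         # the tag you handed out to the service
echo "$mailbox $tag"
```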

By default, though, such mail won’t actually get delivered if you use Dovecot with a relatively default configuration for storing mail. The reason is that the + character needs to be explicitly allowed. To fix this, find the auth_username_chars setting and add the + character to it (remembering to uncomment the line):

auth_username_chars = abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ01234567890.-_@+
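To verify the whole chain end to end, a throwaway message can be sent with swaks, the SMTP test tool. The host and user names below are placeholders, and the doveadm query is just one way to check the result:

```shell
# Send a test mail to a tagged address (adjust names to your setup)
swaks --to myname+tagtest@mydomain.tld --server mail.mydomain.tld

# Then confirm it was delivered to the myname account
doveadm search -u myname mailbox INBOX subject test
```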

That’s it: A single step to enable some additional useful functionality on your mail server.

ZFS backups in Proxmox

I’ve been experimenting with using ZFS snapshots for on- and off-site backups of my Proxmox virtualization environment. For now I’m leaning towards using pve-zsync for backing up my bigger but non-critical machines, and then using syncoid to achieve incremental pull backups off-site. After the initial seed – which I perform over a LAN link – only block-level changes need to be transferred, which a regular home connection at a symmetric 100 Mbps should be more than capable of handling.
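As a sketch, the two tools can be wired together with cron. The dataset names, host name, and schedules below are assumptions, not a tested configuration:

```shell
# /etc/cron.d/zfs-backups (hypothetical paths and dataset names)
# Nightly local replication of VM 105 with pve-zsync
0 1 * * * root pve-zsync sync --source 105 --dest backuppool/zsync --name nightly --maxsnap 14

# On the off-site host instead: pull the replicated dataset incrementally
0 3 * * * root syncoid root@pve.example.com:backuppool/zsync/vm-105-disk-0 tank/offsite/vm-105-disk-0
```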

One limitation in pve-zsync I stumbled upon is that it will trip itself up if a VM has multiple disks stored on different ZFS pools. One of my machines was configured to have its EFI volume and root filesystem on SSD storage, while the bulk data drive was stored on a mechanical disk. This didn’t work at all, with an error message that wasn’t exactly crystal clear:

# pve-zsync create -source 105 -dest backuppool/zsync -name timemachinedailysync -maxsnap 14
Job --source 105 --name timemachinedailysync got an ERROR!!!
ERROR Message:
	zfs send -- datapool/vm-105-disk-0@rep_timemachinedailysync_2020-04-05_11:32:01 | zfs recv -F -- backuppool/zsync/vm-105-disk-0
	cannot receive new filesystem stream: destination has snapshots (eg. backuppool/zsync/vm-105-disk-0@rep_timemachinedailysync_2020-04-05_11:32:01)
must destroy them to overwrite it

Of course removing the snapshots in question didn’t help at all – but moving all disk images belonging to the machine to a single ZFS pool solved the issue immediately.
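A quick way to spot a multi-pool VM before creating a job is to list its disk entries with qm config. The output is simulated here; the storage name before the colon is what needs to match across all disk lines:

```shell
# Simulated `qm config 105` output piped through the same filter you
# would use on a real Proxmox host: qm config 105 | grep -E ...
printf 'boot: order=scsi0\nefidisk0: ssdpool:vm-105-disk-0,size=4M\nscsi0: datapool:vm-105-disk-1,size=32G\n' |
    grep -E '^(virtio|scsi|sata|ide|efidisk)[0-9]+:'
```

In this example the EFI disk lives on ssdpool while the data disk lives on datapool, exactly the situation that trips pve-zsync up.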

The other problem is that while this program is VM aware while backing up, it only performs ZFS snapshots on the actual dataset(s) backing the drive(s) of a VM or container – it doesn’t by itself back up the machine configuration. This means a potentially excellent recovery point objective (RPO), but the recovery time objective (RTO) will suffer as a result: a critical service won’t get back online until someone creates an appropriate machine and connects the backed-up drives.

I will be experimenting with variations of the tools available to me, to see if I can simplify the restore process somewhat.

Moving Proxmox /boot to USB stick

Some short notes I made along the way to benefit the future me.


On my new server, Proxmox was unable to boot directly to a ZFS file system on a drive connected via the HBA controller. UPDATE (2020-01-27): The SuperMicro X10SRH-CLN4F motherboard boots just fine from a root-on-ZFS disk in UEFI mode from the built-in SAS HBA. The only required change is the last step in the description below: adding a delay before attempting to mount ZFS volumes at boot time.

There is a potential drawback to installing Proxmox in root-on-ZFS mode in a UEFI system: The drive gets partitioned, so ZFS doesn’t get uninhibited access to the entire block storage. This may or may not make a difference for performance, but in terms of speed on an SSD solution, I haven’t really seen any cause for concern for my real-world use case. An alternative would be to install the underlying operating system to a separate physical drive.

Also note that the workaround below relies on a single vFAT volume for /boot. Since FAT doesn’t support symlinks, kernel or initramfs updates in Proxmox/Debian will require some manual work, which most sane people would likely wish to avoid.

I’m leaving the rest of my article intact for posterity:

My workaround was to place /boot – not the system – on a USB stick connected directly to the motherboard.


After installation, reboot with the Proxmox installation medium, but select Install Proxmox VE (Debug mode).

When the first shell appears, press Ctrl+D to have the system load the necessary drivers.

Check the name of the USB drive.


Partition it.

cfdisk /dev/sdb

Clear the disk, create an EFI System partition and write the changes. Then create a FAT filesystem on the new partition:

mkfs.vfat /dev/sdb1

Prepare to chroot into the installed Proxmox instance

mkdir /media/rescue
zpool import -fR /media/rescue rpool
mount -o bind /dev /media/rescue/dev
mount -o bind /sys /media/rescue/sys
mount -o bind /proc /media/rescue/proc
chroot /media/rescue

Make room for the new /boot

mv /boot /boot.bak

Edit /etc/fstab and add the following:

/dev/sdb1 /boot vfat defaults 0 0
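Since device names like /dev/sdb1 can change between boots, referencing the filesystem UUID is more robust. The UUID below is a placeholder; get the real one from blkid /dev/sdb1:

```shell
# /etc/fstab -- mount /boot by UUID instead of device name
UUID=ABCD-1234 /boot vfat defaults 0 0
```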

Make the stick bootable

mount -a
grub-install --efi-directory=/boot/efi /dev/sdb
grub-mkconfig -o /boot/grub/grub.cfg

Exit the chroot, export the ZFS pool (zpool export rpool) and reboot.

In my specific case I ran into a problem where I got stuck in an initramfs shell with the ZFS pool not imported. Importing it manually got things going again:

/sbin/zpool import -Nf rpool

Exit the shell to continue the boot process. Then edit /etc/default/zfs and add a delay before the system attempts to mount the ZFS file system at boot.
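On Debian-based systems the delay knobs live in /etc/default/zfs; to the best of my knowledge the relevant variable from the zfs-initramfs package is the one below (value in seconds):

```shell
# /etc/default/zfs -- wait a few seconds for devices behind the HBA to
# settle before the initramfs tries to import the pool
ZFS_INITRD_PRE_MOUNTROOT_SLEEP='5'
```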


Then apply the new configuration:

update-initramfs -u

Head: Meet Wall.

I spent way more time than I’m comfortable disclosing on troubleshooting an issue with an AD-attached Oracle Linux server that wouldn’t accept ssh logons by domain users.

We use the recommended sssd and realmd to ensure AD membership. Everything looked good, and I could log on using an account that’s a member of the Domain Admins group, and so I released the machine to our developers for further work.

Only they couldn’t log on.

After spending most of the morning looking through my logs and config files, and detaching and re-attaching the server to the domain after tweaking various settings, I suddenly saw the light.

Note to my future self:

Windows runs NetBIOS under the hood! A machine name longer than 15 characters on a domain-joined computer will cause trouble!

Naturally, after setting a more Windows-like hostname and re-joining the domain, everything worked as I expected.
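A pure-shell sanity check against the 15-character NetBIOS limit (the hostname below is made up; on a live box, use name=$(hostname -s) instead):

```shell
# NetBIOS computer names are limited to 15 characters
name="oraclelinuxdevserver01"
if [ "${#name}" -le 15 ]; then
    echo "OK for NetBIOS"
else
    echo "too long: ${#name} characters"
fi
```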

Simple DNS over HTTPS setup

I read that Mozilla had been named an Internet villain by a number of British ISPs, for supporting encrypted DNS queries using DNS over HTTPS. I guess the problem is that an ISP by default knows which sites you browse even though the traffic itself is usually encrypted nowadays, since the traditional lookup of a named service’s IP address is performed in plaintext.

The basic fact is that knowledge of what you do on the Internet can be monetized – but the official story naturally is a combination of “Terrorists!” and “Think about the children!”. As usual.

Well, I got a sudden urge to become an Internet villain too, so I put a DoH resolver in front of my Bind server at home. Cloudflare – whom I happen to trust when they say they don’t sell my data – provide a couple of tools to help here. I chose to go with Cloudflared. The process for installing the daemon is pretty well documented on their download page, but for the sake of posterity looks a bit like this:

First we’ll download and install the package. My DNS server is a Debian Stretch machine, so I chose the correct package for it:

dpkg -i cloudflared-stable-linux-amd64.deb

Next we need to configure the service. It doesn’t come with a config file out of the box, but it’s easy enough to read up on their distribution page what it needs to contain. I added a couple of things beyond the bare minimum. The file is stored as /etc/cloudflared/config.yml.

logfile: /var/log/cloudflared.log
proxy-dns: true
proxy-dns-port: 5353

After this we make sure the service is active, and that it starts automatically if we restart our server:

cloudflared service install
service cloudflared start
systemctl enable cloudflared.service

Next let’s try it out:

dig @localhost -p 5353 example.com

If we get an answer, it works.

The next step is to make Bind use our cloudflared instance as a DNS forwarder. We’ll edit /etc/bind/named.conf.options. The new forwarder section should look like this:

options {
	forwarders {
		127.0.0.1 port 5353;
	};
	// ... rest of the options stanza unchanged
};
Restart bind (service bind9 restart), and try it out by running dig @127.0.0.1 against a service you don’t usually visit. Note the absence of a port number in the latter command: if it keeps working, the whole chain is up and running.

PowerShell for Unix nerds

(This post was inspired by a question on ServerFault)

Windows has had an increasingly useful scripting language since 2006 in PowerShell. Since Microsoft apparently fell in love with backend developers a while back, they’ve even ported the core of it to GNU/Linux and macOS. This is actually a big deal for us who prefer our workstations to run Unix but have Windows servers to manage on a regular basis.

Coming from a background in Unix shell scripting, how do we approach the PowerShell mindset? Theoretically it’s simple to say that Unix shells are string-based while PowerShell is object oriented, but what does that mean in practice? Let me try to present a concrete example to illustrate the difference in philosophy between the two worlds.

We will parse some system logs on an Ubuntu server and on a Windows server respectively to get a feel for each system.

Task 1, Ubuntu

The first task we shall accomplish is to find events that reoccur between 04:00 and 04:30 every morning.

In Ubuntu, logs are regular text files. Each line clearly consists of predefined fields delimited by space characters. Each line starts with a timestamp with the date followed by the time in hh:mm:ss format. We can find anything that happens during the hour “04” of any day in our retention period with a naïve grep for ” 04:”:

zgrep " 04:" /var/log/syslog*

(Note that I use zgrep to also analyze the archived, rotated log files.)

On a busy server, this particular search results in twice as much data to sift through as we originally wanted. Let’s refine our command with a simple regular expression to filter the results:

zgrep " 04:[0-2][0-9]:[0-5][0-9]" /var/log/syslog*

Mission accomplished: We’re seeing all system log events between 04:00:00 and 04:29:59 for each day stored in our log retention period. To clarify the command, each bracket represents one position in our search string and defines the valid characters for this specific position.

Bonus knowledge:
[0-9] can be substituted with \d, which translates into “any digit”, when using Perl-compatible regular expressions (grep -P). I used the longer form here for clarity.
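The bracket expression can be sanity-checked against sample timestamps before running it over real logs:

```shell
# Only the first line falls inside the 04:00-04:29 window
printf 'Jan 23 04:17:36 host cron[1]: in range\nJan 23 04:45:00 host cron[1]: out of range\n' |
    grep " 04:[0-2][0-9]:[0-5][0-9]"
```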

Task 2, Ubuntu

Now let’s identify the process that triggered each event. We’ll look at a line from the output of the last command to get a feeling for how to parse it:

/var/log/syslog.7.gz:Jan 23 04:17:36 lbmail1 haproxy[12916]: [23/Jan/2019:04:08:36.405] ft_rest_tls~

This can be translated into a general form:

<filename>:<MMM DD hh:mm:ss> <hostname> <procname[procID]>: <message>

Let’s say we want to filter the output from the previous command and only see the process information and message. Since everything is a string, we’ll pipe grep’s output to a string-manipulation command. This particular job looks like a good use case for GNU cut. With this command we need to define a delimiter, which we know is a space character, and then count fields in our log format to see that we’re interested in field 5 (the process information) and everything from field 6 onwards (the message). The message part of each line may of course contain spaces, so once we reach that field we’ll want to show the entire rest of the line. The required command looks like this:

zgrep " 04:[0-2][0-9]:[0-5][0-9]" /var/log/syslog* | cut -d ' ' -f 5,6-

Now let’s do the same in Windows:

Task 1, Windows

Again our task is to find events between 04:00 and 04:30 on any day. As opposed to our Ubuntu server, Windows treats each line in our log as an object, and each field as a property of that object. This means that we will get no results at best and unpredictable results at worst if we treat our log as a searchable mass of text.
Two examples that won’t work:

Wrong answer 1

Get-EventLog -LogName System -After 04:00 -Before 04:30

This looks nice, but a bare time is interpreted as belonging to the current date, so it implicitly only gives us log events between the given times this day.

Wrong answer 2

Get-EventLog -LogName System | Select-String -Pattern "04:[0-2][0-9]:[0-5][0-9]"

Windows can use regular expressions just fine in this context, so that’s not a problem. What’s wrong here is that we’re searching the actual object instance for the pattern; not the contents of the object’s properties.

Right answer

If we remember that Powershell works with objects rather than plain text, the conclusion is that we should be able to query for properties within each line object. Enter the “where” or “?” command:

Get-EventLog -LogName System | ?{$_.TimeGenerated -match "04:[0-2][0-9]:[0-5][0-9]"}

What did we do here? The first few characters after the pipe can be read as “for each line, check whether this line’s TimeGenerated property matches…“.

One of the things we “just have to know” to understand what happened here is that the column name “Time” in the output of the Get-EventLog command doesn’t represent the actual name of the property. Looking at the output of Get-EventLog -LogName System | Format-List shows us that there’s one property called TimeWritten, and one property called TimeGenerated. We’re naturally looking for the latter.

This was it for the first task. Now let’s see how we pick up the process and message information in PowerShell.

Task 2, Windows

By looking at the headers from the previous command, we see that we’re probably interested in the Source and Message columns. Let’s try to extract those:

Get-EventLog -LogName System | ?{$_.TimeGenerated -match "04:[0-2][0-9]:[0-5][0-9]"} | ft Source, Message

The only addition here is that we call the Format-Table cmdlet for each query hit and tell it to include the contents of the Source and the Message properties of the passed object.


PowerShell is different from traditional Unix shells, and by trying to accomplish a specific task in both we’ve gained some understanding in how they differ:

  • When piping commands together in Unix, we’re sending one command’s string output to be parsed by the next command.
  • When piping cmdlets together in PowerShell, we’re instead sending entire objects with properties and all to the next cmdlet.

Anyone who has tried object oriented programming understands how the latter is potentially powerful, just as anyone who has “gotten” Unix understands how the former is potentially powerful. I would argue that it’s easier for a non-developer to learn Unix than to learn PowerShell, that Unix allows for a more concise syntax than PowerShell, and that Unix shells execute commands faster than PowerShell in many common cases. However I’m glad that there’s actually a useful, first-party scripting language available in Windows.

To get things done in PowerShell is mainly a matter of turning around and working with entire properties (whose values may but needn’t necessarily be strings) rather than with strings directly.

It’s so fluffy!

(Or: Backblaze B2 cloud backups from a Proxmox Virtual Environment)

Backups are one of those things that have a tendency to become unexpectedly expensive – at least through the eyes of a non-techie: Not only do you need enough space to store several generations of data, but you want at least twice that, since you want to protect your information not only from accidental deletion or corruption, but also from the kind of accidents that can render both the production data and the backup unreadable. Ultimately, you’ll also want to spend the resources to automate as much of the process as possible, because anything that requires manual work will be forgotten at some point, and by some perverse law of the Universe, that’s when it would have been needed.

In this post I’ll describe how I’ve solved it for full VM/container backups in my lab/home environment. It’s trivial to adapt the information from this post to apply to regular file system backups. Since I’m using a cloud service to store my backups, I’m applying a zero trust policy to them at the cost of increased storage (and network) requirements, but my primary dataset is small enough that this doesn’t really worry me.

Backblaze currently offers 10 GB of B2 object storage for free. This doesn’t sound like a lot today, but it will comfortably fit several compressed and encrypted copies of my reverse proxy, and my mail and web servers. That’s Linux containers for you.

First of all, we’ll need an account at Backblaze. Save your Master Application Key in your password manager! We’ll need it soon. Then we’ll want to create a Storage Bucket. In my case I gave it the wonderfully inventive name “pvebackup”.

Next, we shall install a program called rclone on our Proxmox server. The version in the apt repository as I write this seems to have a bug vis-à-vis B2, which will require us to use the Master Application Key rather than a more limited Application Key specifically for this bucket. Since we’re encrypting our cloud data anyway, I feel pretty OK with this compromise for home use.

EDIT 2018-10-30: Downloading the current .deb package of rclone directly from the project site solved this bug. In other words it’s possible, and preferable, to create a separate Application Key with access only to the backup bucket, at least if the B2 account will be used for other storage too.

# apt install rclone

Now we’ll configure the program:

# rclone config --config /etc/rclone.conf
Config file "/etc/rclone.conf" not found - using defaults
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config

Type n to create a new remote configuration. Name it b2, and select the appropriate number for Backblaze B2 storage from the list: In my case it was number 3.

The Account ID can be viewed in the Backblaze portal, and the Application Key is the master key we saved in our password manager earlier. Leave the endpoint blank and save your settings. Then we’ll just secure the file:

# chown root: /etc/rclone.conf && chmod 600 /etc/rclone.conf
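After the wizard finishes, the resulting /etc/rclone.conf should look roughly like this (the values are placeholders; the section name b2 matches what we chose above):

```ini
[b2]
type = b2
account = YOUR_ACCOUNT_ID
key = YOUR_APPLICATION_KEY
endpoint =
```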

We’ll want to encrypt the file before sending it to an online location. For this we’ll use gpg, for which the default settings should be enough. The command to generate a key is gpg --gen-key, and I created a key in the name of “proxmox” with the mail address I’m using for notification mails from my PVE instance. Don’t forget to store the passphrase in your password manager, or your backups will be utterly worthless.

Next, we’ll shamelessly steal and modify a script to be used for hooking into the Proxmox VE backup process (I took it from this github repository and repurposed it for my needs).

Edit 2018-10-30: I added the --b2-hard-delete option to the job-end phase of deleting old backups, since the regular delete command just hides files in the B2 storage, adding to the cumulative storage used.

#!/usr/bin/perl -w
# VZdump hook script for offsite backups to Backblaze B2 storage
use strict;

print "HOOK: " . join (' ', @ARGV) . "\n";

my $phase = shift;

if ($phase eq 'job-start' ||
        $phase eq 'job-end'  ||
        $phase eq 'job-abort') {

        my $dumpdir = $ENV{DUMPDIR};

        my $storeid = $ENV{STOREID};

        print "HOOK-ENV: dumpdir=$dumpdir;storeid=$storeid\n";

        if ($phase eq 'job-end') {
                # Delete backups older than 8 days
                system ("/usr/bin/rclone delete -vv --b2-hard-delete --config /etc/rclone.conf --min-age 8d b2:pvebackup") == 0 ||
                        die "Deleting old backups failed";
        }
} elsif ($phase eq 'backup-start' ||
        $phase eq 'backup-end' ||
        $phase eq 'backup-abort' ||
        $phase eq 'log-end' ||
        $phase eq 'pre-stop' ||
        $phase eq 'pre-restart' ||
        $phase eq 'post-restart') {
        my $mode = shift; # stop/suspend/snapshot
        my $vmid = shift;
        my $vmtype = $ENV{VMTYPE}; # lxc/qemu
        my $dumpdir = $ENV{DUMPDIR};
        my $storeid = $ENV{STOREID};
        my $hostname = $ENV{HOSTNAME};
        # tarfile is only available in phase 'backup-end'
        my $tarfile = $ENV{TARFILE} // '';
        # logfile is only available in phase 'log-end'
        my $logfile = $ENV{LOGFILE} // '';
        print "HOOK-ENV: vmtype=$vmtype;dumpdir=$dumpdir;storeid=$storeid;hostname=$hostname;tarfile=$tarfile;logfile=$logfile\n";
        # Encrypt backup and send it to B2 storage
        if ($phase eq 'backup-end') {
                my $gpgfile = $tarfile . ".gpg";
                system ("/usr/bin/gpg -e -r proxmox $tarfile") == 0 ||
                        die "Encrypting tar file failed";
                system ("/usr/bin/rclone copy -v --config /etc/rclone.conf $gpgfile b2:pvebackup") == 0 ||
                        die "Copying encrypted file to B2 storage failed";
        }
        # Copy backup log to B2
        if ($phase eq 'log-end') {
                system ("/usr/bin/rclone copy -v --config /etc/rclone.conf $logfile b2:pvebackup") == 0 ||
                        die "Copying log file to B2 storage failed";
        }
} else {
        die "got unknown phase '$phase'";
}

exit (0);

Store this script in /usr/local/bin/ and make it executable:

# chown root. /usr/local/bin/ && chmod 755 /usr/local/bin/

The last cli magic for today will be to ensure that Proxmox VE actually makes use of our fancy script:

# echo "script: /usr/local/bin/" >> /etc/vzdump.conf

To try it out, select a VM or container in the PVE web interface, select Backup -> Backup now. I use Snapshot as my backup method and GZIP as my compression method. Hopefully you’ll see no errors in the log, and the B2 console will display a new file with a name corresponding to the current timestamp and the machine ID.
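The restore path is worth rehearsing before it’s needed. A sketch with hypothetical file names (pct restore is for containers; qmrestore is the VM equivalent):

```shell
# Pull a backup down from B2, decrypt it, and restore container 101
rclone copy --config /etc/rclone.conf b2:pvebackup/vzdump-lxc-101.tar.gz.gpg /tmp/
gpg -d -o /tmp/vzdump-lxc-101.tar.gz /tmp/vzdump-lxc-101.tar.gz.gpg
pct restore 101 /tmp/vzdump-lxc-101.tar.gz
```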


The tradeoffs with this solution compared to, for example, an enterprise product from Veeam are obvious, but so is the difference in cost. For a small business or a home lab, this solution should cover the needs to keep the most important data recoverable even if something bad happens to the server location.

Replacing ZFS system drives in Proxmox

Running Proxmox in a root-on-zfs configuration in a RAID10 pool results in an interesting artifact: We need a boot volume from which to start our system and initialize the elements required to recognize a ZFS pool. In effect, each drive in the first mirror pair in our disk set will have (at least) two partitions: a small boot partition and a second partition that participates in the ZFS pool.

To see how it all works together, I tried failing a drive and replacing it with a different one.


If the drives would have had identical sector sizes, the operation would have been simple. In this case, sdb is the good mirror volume and sda is the new, empty drive. We want to copy the working partition table from the good drive to the new one, and then randomize the UUID of the new drive to avoid catastrophic confusion on the part of ZFS:

# sgdisk /dev/sdb -R /dev/sda
# sgdisk -G /dev/sda

After that, we should be able to use gdisk to view the partition table, to identify what partition does what, and simply copy the contents of the good partitions from the good mirror to the new drive:

# gdisk /dev/sda
GPT fdisk (gdisk) version 1.0.1

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.

Command (? for help): p
Disk /dev/sda: 5860533168 sectors, 2.7 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 5860533134
Partitions will be aligned on 8-sector boundaries
Total free space is 0 sectors (0 bytes)

Number  Start (sector)    End (sector)  Size       Code  Name
   1              34            2047   1007.0 KiB  EF02  
   2            2048      5860516749   2.7 TiB     BF01  zfs
   9      5860516750      5860533134   8.0 MiB     BF07  

Command (? for help): q
# dd if=/dev/sdb1 of=/dev/sda1
# dd if=/dev/sdb9 of=/dev/sda9

Then we would add the new disk to our ZFS pool and have it resilvered:

# zpool replace rpool /dev/sda2

To view the resilvering process:

# zpool status -v
  pool: rpool
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
	continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sat Sep  1 18:48:13 2018
	2.43T scanned out of 2.55T at 170M/s, 0h13m to go
	1.22T resilvered, 94.99% done

config:

	NAME             STATE     READ WRITE CKSUM
	rpool            DEGRADED     0     0     0
	  mirror-0       DEGRADED     0     0     0
	    replacing-0  DEGRADED     0     0     0
	      old        UNAVAIL      0    63     0  corrupted data
	      sda2       ONLINE       0     0     0  (resilvering)
	    sdb2         ONLINE       0     0     0
	  mirror-1       ONLINE       0     0     0
	    sdc          ONLINE       0     0     0
	    sdd          ONLINE       0     0     0
	  sde1           ONLINE       0     0     0
	  sde2           ONLINE       0     0     0

errors: No known data errors

The process is time consuming on large drives, but since ZFS both understands the underlying disk layout and the filesystem on top of it, resilvering will only occur on blocks that are in use, which may save us a lot of time, depending on the extent to which our filesystem is filled.

When resilvering is done, we’ll just make sure there’s something to boot from on the new drive:

# grub-install /dev/sda
Installing for i386-pc platform.
Installation finished. No error reported.

Real life intervenes

Unfortunately for me, the new drive I tried had the modern 4 KB sector size (“Advanced Format / 4Kn”), while my old drives were stuck with the older 512 B standard. This led to the interesting side effect that my new drive was too small to fit volumes according to the healthy mirror drive’s partition table:

# sgdisk /dev/sdb -R /dev/sda
Caution! Secondary header was placed beyond the disk's limits! Moving the header, but other problems may occur!

What I ended up doing was to use gdisk to create a new partition table with volume sizes for partitions 1 and 9 as similar as possible to those of the healthy mirror (but not smaller!), entirely skipping the steps involving the sgdisk utility. The rest of the steps were identical.

The next problem I encountered was a bit worse: Even though ZFS in the Proxmox VE installation managed 4Kn drives just fine, there was simply no way to get the HP MicroServer Gen7 host to boot from one, so back to the old 3 TB WD RED I went.


Running root-on-zfs in a striped mirrors (“RAID10”) configuration complicates the replacement of any of the drives in the first mirror pair slightly compared to a setup where the ZFS pool is used for storage only.

Fortunately the difference is minimal, and except for the truly dangerous syntax and unclear documentation of the sgdisk command, replacing a boot disk really boils down to four steps:

  1. Make sure the relevant partitions exist.
  2. Copy non-ZFS-data from the healthy drive to the new one.
  3. Resilver the ZFS volume.
  4. Install GRUB.

In a pure data disk, the only thing we have to think about is step 3.

On the other hand, running too new hardware components in old servers doesn’t always work as intended. Note to the future me: Any meaningful expansion of disk space will require newer server hardware than the N54L-based MicroServer.