Exporting mail to .PST files using PowerShell

We have one Exchange mailbox that has seen exponential growth, with no good way to rein it in. I started worrying when the single box passed 100 GB, and by 160 GB users were getting properly annoyed by the performance when browsing it. I spoke to the manager for the department and suggested exporting the data to a number of archive files that select users could access manually.

The basic command

As usual, for this kind of task we want to reach for our CLI. The command to copy data to a .PST is New-MailboxExportRequest. A typical use case for a small box could look something like this:

New-MailboxExportRequest -Mailbox "MailboxName" -Name JobIdentifier -FilePath \\servername\path\filename.pst

Two notes:
1) The Name of the job is how we may reference it later on. It doesn’t have to make sense, but in case we create multiple jobs our lives get easier if they have sane names.
2) The FilePath argument must point at a UNC share where the domain group Exchange Trusted Subsystem has Modify rights. Avoid writing to a system or database volume directly on the mail server, or bad things may happen when the volume fills up.

Advanced parameters

We will undoubtedly want to create more advanced queries, though. In the case of my giant mailbox, it makes sense to split the contents chronologically. The ContentFilter parameter lets us build queries as complex as we need:

New-MailboxExportRequest -ContentFilter {(((Sent -gt '2016-08-01') -and (Sent -lt '2016-10-01')) -or ((Received -gt '2016-08-01') -and
(Received -lt '2016-10-01')))} -Mailbox "MailboxName" -Name MailboxAugToSep2016dump -IsArchive -FilePath \\servername\path\filename.pst

To break the command down:
1) We want to filter for both the Sent and the Received properties of our mails, since we want to catch not only mails that were received during the period, but also outbound mail from the time.
2) Since we use the “greater than” and “less than” operators, it’s good to know how they treat dates: a date written this way actually means “that date, at 00:00”. In other words, -gt picks up the entire day of the date entered, but to catch the entire last day of a month with -lt, we must enter the first day of the following month. In this case we’ll dump everything up until 00:00 on October 1st, which is exactly what we want.
3) In this case I added the IsArchive directive. This tells the command to look in the online archive belonging to the mailbox instead of in the actual mailbox.
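The midnight cutoff in point 2 can be sanity-checked with GNU date – the same comparison logic, nothing Exchange-specific:

```shell
# A bare date compares as "that date at 00:00".
# So -lt '2016-10-01' includes all of September 30, and nothing later.
cutoff=$(date -d '2016-10-01 00:00' +%s)
sep30=$(date -d '2016-09-30 23:59:59' +%s)
oct1=$(date -d '2016-10-01 00:00:01' +%s)

[ "$sep30" -lt "$cutoff" ] && echo "Sep 30 23:59:59 falls inside the export window"
[ "$oct1"  -lt "$cutoff" ] || echo "Oct 1 00:00:01 falls outside it"
```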

Checking job status

We can check if an export request is queued, in progress, completed, or failed, simply by running Get-MailboxExportRequest. But as usual we can get some truly useful information by stringing some commands together. Why did a job fail? Did the supposedly “completed” job parse our command line the way we expected it to?

Get-MailboxExportRequest -Status Completed | Get-MailboxExportRequestStatistics -IncludeReport | fl > C:\report.txt

Here we first ask for a list of Completed jobs – but we might as well ask for failed jobs. Then we dump a pretty verbose list of the report(s) to a regular text file for easy reading.

Clearing jobs

Once we’re confident we know what we need to know about our completed or failed jobs, we can use the Remove-MailboxExportRequest cmdlet to clear our job list. Combine it with Get-MailboxExportRequest to clear many at a time. For example:

Get-MailboxExportRequest -Status Completed | Remove-MailboxExportRequest

This will simply remove all jobs with the status Completed.



Workaround for broken connection management in Exchange

For legacy reasons (don’t even ask…) we still have an old NLB-based Exchange 2010 mail server farm, with a CASArray consisting of two servers, in front of a DAG cluster at work.

The interesting thing, of course, is that when one of the CASes fails, Outlook clients don’t automatically start using the other CAS as you’d expect in a sane system. And which Outlook clients stopped working seemed to be somewhat arbitrary.

A couple of minutes with my preferred search engine gave me the tools to show what’s wrong:

Get-MailboxDatabase | ft Identity, RpcClientAccessServer

Identity      RpcClientAccessServer
--------      ---------------------
Mailbox DB05  CAS1.tld
Mailbox DB03  CAS2.tld

The above example output shows that each database has a preferred CAS, and explains the apparent arbitrariness of clients refusing to connect to the remaining CAS.

The funny thing is that even after an hour and a half – long after NLB Manager had stopped presenting the second CAS in its GUI – Exchange still hadn’t understood that one of the members of the CASArray was down. The workaround is to manually point each mailbox database at the healthy CAS:

Set-MailboxDatabase "Mailbox DB03" -RpcClientAccessServer CAS1.tld

Get-MailboxDatabase | ft Identity, RpcClientAccessServer

Identity      RpcClientAccessServer
--------      ---------------------
Mailbox DB05  CAS1.tld
Mailbox DB03  CAS1.tld

Fortunately it looks as though modern Exchange solutions with real load balancers in front of them don’t experience this issue.

Monitoring mounted Windows volumes using Zabbix

Sometimes it’s nice to mount a separate disk volume inside a directory structure. For a concrete example: at work we have a legacy system that writes copious amounts of data to subfolders of a network share. While vSphere allows for pretty large vdisks, they become cumbersome to manage once you pass 8 TB or so. By mounting smaller disks directly in this directory structure, each disk can be kept to a manageable size.

First the bad news: the built-in filesystem discovery rules for the Zabbix Windows agent can only automatically enumerate legacy drive letters, so we get to know the status of the root file system, but not of the respective mounted volumes.

The good news, however, is that it’s a piece of cake to make Zabbix understand what you mean if you manually create data collection items for these subdirectories.

The key syntax in Zabbix 3 looks like this:
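A manually created item key for a volume mounted under a directory might look like this (the mount path here is a hypothetical example):

```
vfs.fs.size[C:/mountedvolumes/data01,pfree]
```

Here pfree reports the percentage of free space; free, used, total and pused are the other valid modes for vfs.fs.size.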


The only thing to remember is that we use forward slashes in the item key even though the target host runs Windows.

vSAN Benchmark Analysis

As part of this year’s server upgrades, we put together a new vSAN cluster at work. The machines are Lenovo SR650 servers with dual Xeon Gold 6132 14 Core CPUs, and 768 GB of RAM. Each server is equipped with two disk groups consisting of one 800 GB write intensive SSD and three 3.84 TB SSDs for use as capacity drives. The servers are connected to the network using two of their four 10GbE interfaces, and to our existing storage solution using dual FC interfaces. The version of VMware vSphere we’re currently running is 6.5 u2.

As part of setting up the solution, we ran benchmarks using VMware’s HCIBench appliance, available as a VMware Fling. HCIBench was configured to clear the read/write cache before testing, but to re-use VMs if possible. The “Easy Run” setting was used, since it lets the benchmarking program create a workload based on the individual vSAN environment. The transaction test ran using 20 VMs with 8 data disks each, and the IOPS numbers represent a 100% random, 70% read load at a 4 KB block size.

The first run was pretty much the out-of-the-box configuration: The network between the hosts had not been tweaked at all, and we ran the workers with the stock storage policy, meaning basic data mirroring without striping. 

For the second run, we separated vSAN traffic to its own dedicated NIC, and allowed jumbo frames between the hosts. 

In the third run we tried to discern what striping virtual disks across capacity drives does to performance by creating a storage policy with a stripe width value of 2, and assigning it to all worker VMs.

Finally, in the fourth run, we turned on Compression and Deduplication on the vSAN and re-ran the same benchmark to see how performance and latency were affected.

(For clarity: We did perform several more benchmark tests to confirm that the values really were representative.)

Throughput

The raw throughput numbers tell us whether we’re getting data through the connection as fast as possible. As seen in runs 2 and 3 in the graph below, we’re pretty much bouncing against the physical limits of our 12 Gbps SAS controllers and the 10GbE inter-host network. This value isn’t particularly relevant in real life, other than that unexpectedly low numbers tell us we have a problem – see the result from run number 1 for a perfect example of that.

Throughput in MB/s. Higher is better.

Transaction performance

The transaction performance in benchmark form is another one of those numbers that give you an idea of whether something is seriously wrong, but otherwise is a rather hypothetical exercise. Once again we are hitting numbers approaching what the hardware is capable of in the two middle runs.

Input/Output operations per second. Higher is better.

Latency

Finally, a number that has a serious bearing on how our storage will feel: how long does it take from issuing a request to the storage system until the system confirms that the task is done? The blue line represents an average for the test period – but remember that this is under extreme load that the vSAN is unlikely to see in actual use. The 95th percentile bar tells us that 95% of storage operations take less time than this to complete.

Latency in milliseconds. Lower is better.
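To make the percentile read-out concrete, here’s a toy calculation on made-up latency samples (the numbers are invented for the illustration, not benchmark data):

```shell
# The 95th percentile: the value that 95% of the samples fall at or below.
samples=$(seq 1 20)                 # twenty fake latency readings in ms
n=20
idx=$(( (95 * n + 99) / 100 ))      # ceiling of 0.95 * n, i.e. sample no. 19
p95=$(printf '%s\n' $samples | sort -n | sed -n "${idx}p")
echo "p95 = ${p95} ms"              # 19 of the 20 samples are at or below this
```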

Thoughts on the results

The first run really sticks out, as it should: It’s an exposition of what not to do in production. Storage really should have its own dedicated network. Interestingly, though, from my admittedly limited experience, going up to jumbo frames (MTU=9000) didn’t by itself make a huge difference in performance, but it should result in a bit less strain on the hardware putting network packets together.

Curiously enough, I saw no relevant difference between just mirroring and striping + mirroring virtual machine disks once the cluster had settled. The numbers are very close, percentage-wise. This echoes VMware’s own words:

In most real world use cases, we do not see significant performance increases from changing the striping policy. It is available and you should weigh the added complexity against the need before changing it from the default.


Finally we come to the run I haven’t really commented on yet: how much does performance suffer with the compression + deduplication option in VMware vSAN? The simplified answer: about 20%, counted both in throughput and in transactional performance, which doesn’t sound bad at all. But the latency numbers tell a slightly different tale: average latency jumps by a quarter, and 95th percentile latency by more than half. I can see how the space savings could make up for the drop in performance in some use cases, but I would be wary of putting a heavily used production database on top of a storage layer that displays this sort of intermittent latency peak.

In summary, vSAN on affordable hardware is slightly slower than a dedicated storage system like our IBM FlashSystem V9000, but that really says more about the wicked speed of the latter than being a negative against the former. For most real-world workloads in our environment the difference should be negligible, and well offset by the benefits of a fully software defined storage layer working hand-in-hand with the virtualization platform.

Configuring Lenovo SR650 nodes for running vSphere

As usual nowadays, Lenovo SR650 servers come with energy-saving presets that may seem “green” but kill virtualization performance.

The regular way to get them running the way they should is to enter the UEFI setup at boot, go to UEFI Settings -> System Settings -> Operating Modes and choose “Maximum Performance”. Unfortunately, on these servers this removes the ability to set VMware EVC: the Enhanced vMotion Compatibility functionality that allows live migration of virtual servers between hosts of different generations, for example when introducing a new cluster into a datacenter.

It turns out that what’s missing is one specific setting: “MONITOR/MWAIT” must be set to “Enabled”. It should be possible to first choose the “Maximum Performance” scheme, then switch to the “Custom” scheme and change only this single setting under Operating Modes. In addition, we should also go to System Settings -> Devices and I/O Ports and set PCI 64-bit Resource Allocation to “Disabled”. For reference, the complete checklist is available from Lenovo:

Power.PowerPerformanceBias=Platform Controlled
Power.PlatformControlledType=Maximum Performance

After making these changes, we should be able to both run our workload at maximum performance and enable EVC to migrate workloads between server clusters utilizing CPUs from different generations.

It’s so fluffy!

(Or: Backblaze B2 cloud backups from a Proxmox Virtual Environment)

Backups are one of those things that have a tendency to become unexpectedly expensive – at least through the eyes of a non-techie: Not only do you need enough space to store several generations of data, but you want at least twice that, since you want to protect your information not only from accidental deletion or corruption, but also from the kind of accidents that can render both the production data and the backup unreadable. Ultimately, you’ll also want to spend the resources to automate as much of the process as possible, because anything that requires manual work will be forgotten at some point, and by some perverse law of the Universe, that’s when it would have been needed.

In this post I’ll describe how I’ve solved it for full VM/container backups in my lab/home environment. It’s trivial to adapt the information from this post to apply to regular file system backups. Since I’m using a cloud service to store my backups, I’m applying a zero trust policy to them at the cost of increased storage (and network) requirements, but my primary dataset is small enough that this doesn’t really worry me.

Backblaze currently offers 10 GB of B2 object storage for free. This doesn’t sound like a lot today, but it will comfortably fit several compressed and encrypted copies of my reverse proxy, and my mail and web servers. That’s Linux containers for you.

First of all, we’ll need an account at Backblaze. Save your Master Application Key in your password manager! We’ll need it soon. Then we’ll want to create a Storage Bucket. In my case I gave it the wonderfully inventive name “pvebackup”.

Next, we shall install a program called rclone on our Proxmox server. The version in the apt repository as I write this seems to have a bug with B2 that requires us to use the Master Application Key rather than a more limited Application Key created specifically for this bucket. Since we’re encrypting our cloud data anyway, I feel pretty OK with this compromise for home use.

EDIT 2018-10-30: Downloading the current deb package of rclone directly from the project site solves this bug. In other words, it’s possible and preferable to create a separate Application Key with access only to the backup bucket, at least if the B2 account will be used for other storage too.

# apt install rclone

Now we’ll configure the program:

# rclone config --config /etc/rclone.conf
Config file "/etc/rclone.conf" not found - using defaults
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config

Type n to create a new remote configuration. Name it b2, and select the appropriate number for Backblaze B2 storage from the list: In my case it was number 3.
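Once the wizard is done, /etc/rclone.conf should end up looking roughly like this – account and key are placeholders for your own values:

```
[b2]
type = b2
account = <your account id>
key = <your application key>
```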

The Account ID can be viewed in the Backblaze portal, and the Application Key is the master key we saved in our password manager earlier. Leave the endpoint blank and save your settings. Then we’ll just secure the file:

# chown root. /etc/rclone.conf && chmod 600 /etc/rclone.conf

We’ll want to encrypt the file before sending it to an online location. For this we’ll use gpg, for which the default settings should be enough. The command to generate a key is gpg --gen-key, and I created a key in the name of “proxmox” with the mail address I’m using for notification mails from my PVE instance. Don’t forget to store the passphrase in your password manager, or your backups will be utterly worthless.

Next, we’ll shamelessly steal and modify a script to hook into the Proxmox VE backup process (I took it from a GitHub repository and repurposed it for my needs).

Edit 2018-10-30: I added the --b2-hard-delete option to the job-end phase that deletes old backups, since the regular delete command just hides files in the B2 storage, adding to the cumulative storage used.

#!/usr/bin/perl -w
# VZdump hook script for offsite backups to Backblaze B2 storage
use strict;

print "HOOK: " . join (' ', @ARGV) . "\n";

my $phase = shift;

if ($phase eq 'job-start' ||
        $phase eq 'job-end'  ||
        $phase eq 'job-abort') {

        my $dumpdir = $ENV{DUMPDIR};

        my $storeid = $ENV{STOREID};

        print "HOOK-ENV: dumpdir=$dumpdir;storeid=$storeid\n";

        if ($phase eq 'job-end') {
                # Delete backups older than 8 days
                system ("/usr/bin/rclone delete -vv --b2-hard-delete --config /etc/rclone.conf --min-age 8d b2:pvebackup") == 0 ||
                        die "Deleting old backups failed";
        }
} elsif ($phase eq 'backup-start' ||
        $phase eq 'backup-end' ||
        $phase eq 'backup-abort' ||
        $phase eq 'log-end' ||
        $phase eq 'pre-stop' ||
        $phase eq 'pre-restart' ||
        $phase eq 'post-restart') {
        my $mode = shift; # stop/suspend/snapshot
        my $vmid = shift;
        my $vmtype = $ENV{VMTYPE}; # lxc/qemu
        my $dumpdir = $ENV{DUMPDIR};
        my $storeid = $ENV{STOREID};
        my $hostname = $ENV{HOSTNAME};
        # tarfile is only available in phase 'backup-end'
        my $tarfile = $ENV{TARFILE};
        my $gpgfile = $tarfile . ".gpg";
        # logfile is only available in phase 'log-end'
        my $logfile = $ENV{LOGFILE};
        print "HOOK-ENV: vmtype=$vmtype;dumpdir=$dumpdir;storeid=$storeid;hostname=$hostname;tarfile=$tarfile;logfile=$logfile\n";
        # Encrypt backup and send it to B2 storage
        if ($phase eq 'backup-end') {
                system ("/usr/bin/gpg -e -r proxmox $tarfile") == 0 ||
                        die "Encrypting tar file failed";
                system ("/usr/bin/rclone copy -v --config /etc/rclone.conf $gpgfile b2:pvebackup") == 0 ||
                        die "Copying encrypted file to B2 storage failed";
        }
        # Copy backup log to B2
        if ($phase eq 'log-end') {
                system ("/usr/bin/rclone copy -v --config /etc/rclone.conf $logfile b2:pvebackup") == 0 ||
                        die "Copying log file to B2 storage failed";
        }
} else {
        die "got unknown phase '$phase'";
}

exit (0);

Store this script in /usr/local/bin/vzclouddump.pl and make it executable:

# chown root. /usr/local/bin/vzclouddump.pl && chmod 755 /usr/local/bin/vzclouddump.pl

The last bit of CLI magic for today is to ensure that Proxmox VE actually makes use of our fancy script:

# echo "script: /usr/local/bin/vzclouddump.pl" >> /etc/vzdump.conf

To try it out, select a VM or container in the PVE web interface and choose Backup -> Backup now. I use Snapshot as my backup method and GZIP as my compression method. Hopefully you’ll see no errors in the log, and the B2 console will display a new file with a name corresponding to the current timestamp and the machine ID.


The tradeoffs with this solution compared to, for example, an enterprise product from Veeam are obvious, but so is the difference in cost. For a small business or a home lab, this solution should cover the needs to keep the most important data recoverable even if something bad happens to the server location.

Back on (tunnelled) IPv6

On principle, I dislike not being able to present my Internet-facing services over IPv6. The reasoning is simple: Unless services exist that use IPv6, ISPs have no reason to provide it. I’m obviously microscopic in this context, but I’m doing my thing to help the cause.

As mentioned earlier, I first experimented with Hurricane Electric’s tunneling service, which caused issues with Netflix because of silly geofencing rules.

After that I tried Telia, who at the time did not provide IPv6 natively, but who have a 6rd service, which generates a /64 subnet for you based on your (DHCP-issued) IPv4 address. For home use I could accept that, but when I got my fibre connection I switched ISPs. Unfortunately, neither the new ISP nor their service provider does IPv6 in my area, so I lost access to Telia’s 6rd service, and for practical reasons I couldn’t route client traffic from my home over HE’s tunnel service.

PfSense and Proxmox VE to the rescue: I set up the Hurricane Electric tunnel as per the regular pfSense instructions, but I assigned that network to a separate internal NIC on my firewall instead of routing it to the regular LAN.

I then set up a new network bridge in Proxmox VE, assigned a hitherto unused NIC to it, and connected the two ports. Voilà! I now have a trouble-free client network where Netflix and similar services work well, and an IPv6-capable server network to which I’ve added the relevant machines.

In other words, while a functioning native IPv6 solution is not available to me, I now have a workaround for IPv6 server connectivity until my service providers get with the times…

Fixing lack of console video in Proxmox on HP MicroServer Gen7

After my latest experiment, the lack of console video output from Proxmox on my N54L-based HP MicroServer Gen7 became a serious issue: I would see the Grub menu, then the screen would turn blank and enter power-save mode; I’d see the disk activity light blink a few times, but the system wouldn’t start up.

Naturally, without seeing the error message I couldn’t do anything about the issue, but I had seen a similar symptom earlier, namely when installing Proxmox for the first time: the installer USB image behaves the same way on this computer, and the workaround there is simply to press Enter to launch the graphical install environment, after which the screen works.

This time I re-created a Proxmox 5.2 USB stick, booted the server from it, and correctly assumed that arrow down followed by Enter would likely get me into some sort of rescue environment. Sure enough I was soon greeted by a root prompt. At this stage, the ZFS modules weren’t loaded, so again I guessed that pressing Ctrl+D to exit the rescue environment would start the install environment where I know ZFS is available, and pressing the Abort button there luckily got me back to a shell.

From here I mounted my ZFS environment and chrooted into it:

# zpool import -f -a -N -R /mnt
# zfs mount rpool/ROOT/pve-1
# zfs mount -a
# mount --rbind /dev /mnt/dev
# mount --rbind /proc /mnt/proc
# mount --rbind /sys /mnt/sys
# chroot /mnt /bin/bash --login

I could now confirm that I was within my regular Proxmox file system, and so I got to work on the Grub configuration:

In /etc/default/grub, I found the commented-out line #GRUB_GFXMODE=640x480 and uncommented it, so that portion of the file now looks like this:

GRUB_GFXMODE=640x480
I then ran update-grub and rebooted the server, after which I could see the boot process including the issue that prevented the system from booting fully: It turns out my ZFS pool didn’t want to mount after replacing the drive. I quickly ran zpool import -f, and exited the shell, and then the system successfully booted the rest of the way. An additional reboot confirmed that the system was functional.


Troubleshooting gets a lot harder when you’re blind. The solution is to attempt to become less blind.

Replacing ZFS system drives in Proxmox

Running Proxmox in a root-on-zfs configuration in a RAID10 pool results in an interesting artifact: We need a boot volume from which to start our system and initialize the elements required to recognize a ZFS pool. In effect, the first mirror pair in our disk set will have (at least) two partitions: a regular filesystem on the first partition and a second partition to participate in the ZFS pool.

To see how it all works together, I tried failing a drive and replacing it with a different one.


Had the drives had identical sector sizes, the operation would have been simple. In this case, sdb is the good mirror volume and sda is the new, empty drive. We want to copy the working partition table from the good drive to the new one (note sgdisk’s counterintuitive argument order: the source disk comes first, and -R points at the disk being written to), and then randomize the UUID of the new drive to avoid catastrophic confusion on the part of ZFS:

# sgdisk /dev/sdb -R /dev/sda
# sgdisk -G /dev/sda

After that, we should be able to use gdisk to view the partition table, to identify what partition does what, and simply copy the contents of the good partitions from the good mirror to the new drive:

# gdisk /dev/sda
GPT fdisk (gdisk) version 1.0.1

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.

Command (? for help): p
Disk /dev/sda: 5860533168 sectors, 2.7 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 5860533134
Partitions will be aligned on 8-sector boundaries
Total free space is 0 sectors (0 bytes)

Number  Start (sector)    End (sector)  Size       Code  Name
   1              34            2047   1007.0 KiB  EF02  
   2            2048      5860516749   2.7 TiB     BF01  zfs
   9      5860516750      5860533134   8.0 MiB     BF07  

Command (? for help): q
# dd if=/dev/sdb1 of=/dev/sda1
# dd if=/dev/sdb9 of=/dev/sda9

Then we would add the new disk to our ZFS pool and have it resilvered:

# zpool replace rpool /dev/sda2

To view the resilvering process:

# zpool status -v
  pool: rpool
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
	continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sat Sep  1 18:48:13 2018
	2.43T scanned out of 2.55T at 170M/s, 0h13m to go
	1.22T resilvered, 94.99% done

	rpool            DEGRADED     0     0     0
	  mirror-0       DEGRADED     0     0     0
	    replacing-0  DEGRADED     0     0     0
	      old        UNAVAIL      0    63     0  corrupted data
	      sda2       ONLINE       0     0     0  (resilvering)
	    sdb2         ONLINE       0     0     0
	  mirror-1       ONLINE       0     0     0
	    sdc          ONLINE       0     0     0
	    sdd          ONLINE       0     0     0
	  sde1           ONLINE       0     0     0
	  sde2           ONLINE       0     0     0

errors: No known data errors

The process is time consuming on large drives, but since ZFS both understands the underlying disk layout and the filesystem on top of it, resilvering will only occur on blocks that are in use, which may save us a lot of time, depending on the extent to which our filesystem is filled.

When resilvering is done, we’ll just make sure there’s something to boot from on the new drive:

# grub-install /dev/sda
Installing for i386-pc platform.
Installation finished. No error reported.

Real life intervenes

Unfortunately for me, the new drive I tried had the modern 4 KB sector size (“Advanced Format / 4Kn”), while my old drives were stuck with the older 512 B standard. This led to the interesting side effect that my new drive was too small to fit volumes according to the healthy mirror drive’s partition table:

# sgdisk /dev/sdb -R /dev/sda
Caution! Secondary header was placed beyond the disk's limits! Moving the header, but other problems may occur!

In the end, I used gdisk to create a new partition table with sizes for partitions 1 and 9 as close as possible to those on the healthy mirror (but not smaller!), entirely skipping the steps involving the sgdisk utility. The rest of the steps were identical.

The next problem I encountered was a bit worse: Even though ZFS in the Proxmox VE installation managed 4Kn drives just fine, there was simply no way to get the HP MicroServer Gen7 host to boot from one, so back to the old 3 TB WD RED I went.


Running root-on-zfs in a striped mirrors (“RAID10”) configuration complicates the replacement of any of the drives in the first mirror pair slightly compared to a setup where the ZFS pool is used for storage only.

Fortunately the difference is minimal, and except for the truly dangerous syntax and unclear documentation of the sgdisk command, replacing a boot disk really boils down to four steps:

  1. Make sure the relevant partitions exist.
  2. Copy non-ZFS-data from the healthy drive to the new one.
  3. Resilver the ZFS volume.
  4. Install GRUB.

In a pure data disk, the only thing we have to think about is step 3.

On the other hand, running too new hardware components in old servers doesn’t always work as intended. Note to the future me: Any meaningful expansion of disk space will require newer server hardware than the N54L-based MicroServer.

IKEv2 IPsec VPN with pfSense and Apple devices

Part 2: Apple VPN clients

(Part 1)

In the first part, we configured the pfSense firewall to allow clients to establish secure VPN connections to it. Now we’ll look at what needs to be done to get the clients to actually connect.

Specifically, we’ll create an Apple configuration profile that we can deliver to devices that we want to use as VPN clients.

We’ll start by getting the necessary certificates.

CA and Server certificates

As usual with a PKI-based solution, we need to trust the Root certificate to trust any certificates signed by the Root. Then we need a copy of the Server certificate’s public key to be able to establish an encrypted connection to it from the client. The VPN host in this case already has the client’s public key since we generated the client key-pair locally on the host.

In System – Cert. manager, select the “CAs” tab. Next to the “mydomain VPN-root-CA [year-month]” certificate we created earlier, there’s a row of blue icons. We’re interested in the middle one, which represents a round seal. Press it, and your browser will download a .crt file named something like “mydomain+VPN-root-CA+[year-month].crt”.

Then select the “Certificates” tab and do the same for the server certificate we created earlier. You will now have an additional file called “mydomain+VPN-server+[year-month].crt” in your Downloads directory.

Now for the only bit of shell magic we’ll need to do:

Client certificate

In System – Cert. manager, select the “Certificates” tab. This time download both the certificate (represented by the round seal icon) and the private key (represented by a key icon). This will store “mydomain+VPN-client+[year-month].crt” and “mydomain+VPN-client+[year-month].key” in your Downloads directory.

Open a Terminal and run the following two commands:

$ cd ~/Downloads
$ openssl pkcs12 -export \
-in mydomain+VPN-client+[year-month].crt \
-inkey mydomain+VPN-client+[year-month].key \
-out mydomain+VPN-client+[year-month].p12

You will be asked for an export passphrase. Generate a secure one and store it in your password manager along with the certificate files.
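Before moving on, it can be worth checking that a .p12 bundle reads back with the passphrase you stored. This throwaway round-trip uses scratch files (demo.key and friends are made up for the demonstration, not your real client certificate):

```shell
# Create a scratch key pair, bundle it into a .p12, and read it back.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-client" \
  -keyout demo.key -out demo.crt -days 1 2>/dev/null

openssl pkcs12 -export -in demo.crt -inkey demo.key \
  -out demo.p12 -passout pass:exportpass

# -info exits non-zero if the file or the passphrase is bad
openssl pkcs12 -info -in demo.p12 -passin pass:exportpass -noout \
  && echo "bundle verified"
```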

Create an Apple Configuration Profile

This step requires a Mac with Apple Configurator 2 installed.

Start the program and create a new profile. Store it as “[year-month]-mydomain.tld-VPN.mobileconfig”.


Name: mydomain.tld VPN
Identifier: [Reverse FQDN of the VPN gateway, e.g. “tld.mydomain.vpn”]
[The rest of the fields are optional]


Using the “+” button, add the Root CA certificate (“mydomain+VPN-root-CA+[year-month].crt”), the Server certificate (“mydomain+VPN-server+[year-month].crt”), and the client certificate bundle we generated earlier (“mydomain+VPN-client+[year-month].p12”). When adding the latter, we also need to enter the export passphrase.


Connection Name: mydomain.tld VPN
Connection Type: IKEv2
Always-on VPN: Unchecked
Server: [The Common Name from the Server certificate]
Remote Identifier: [The Common Name from the Server certificate]
Local Identifier: [The Common Name from the Client certificate]
Machine Authentication: Certificate
Certificate Type: RSA
Server Certificate Issuer Common Name: [The Common Name from the Root CA]
Server Certificate Common Name: [The Common Name from the Server certificate]
Enable EAP: Checked
Disconnect on Idle: Optional – I have it set to Never
EAP Authentication: Certificate
Identity Certificate: Select your Client certificate
Dead Peer Detection Rate: Medium
[Note: The following checkboxes may be changed depending on requirements, but that is outside the scope for this article]
Disable redirects: Unchecked
Disable Mobility and Multihoming: Unchecked
Use IPv4 / IPv6 Internal Subnet Attributes: Unchecked
Enable perfect forward secrecy: Unchecked
Enable certificate revocation check: Unchecked

Select the “IKE SA Params” tab and fill in the following:
First set the Integrity Algorithm to SHA2-384
Then set the Encryption Algorithm to AES-256-GCM
Diffie-Hellman Group: 20
Lifetime In Minutes: 720
Proxy Setup: [Optional]

Select the “Child SA Params” tab and fill in the following:
First set the Integrity Algorithm to SHA2-256
Then set the Encryption Algorithm to AES-256-GCM
Diffie-Hellman Group: 20
Lifetime In Minutes: 60
Proxy Setup: [Optional]
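For reference, Apple Configurator serializes these choices into the profile’s IKEv2 payload dictionary. A trimmed sketch of the XML it produces – the server name is a placeholder, and only the IKE SA parameters are shown:

```xml
<key>IKEv2</key>
<dict>
    <key>RemoteAddress</key>
    <string>vpn.mydomain.tld</string> <!-- Common Name from the server certificate -->
    <key>RemoteIdentifier</key>
    <string>vpn.mydomain.tld</string>
    <key>AuthenticationMethod</key>
    <string>Certificate</string>
    <key>IKESecurityAssociationParameters</key>
    <dict>
        <key>EncryptionAlgorithm</key>
        <string>AES-256-GCM</string>
        <key>IntegrityAlgorithm</key>
        <string>SHA2-384</string>
        <key>DiffieHellmanGroup</key>
        <integer>20</integer>
        <key>LifeTimeInMinutes</key>
        <integer>720</integer>
    </dict>
</dict>
```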

Save the .mobileconfig.

Using the profile


The profile can be installed on a Mac by double-clicking the file and entering administrative credentials to allow it to install. When installed, System Preferences – Network will contain a new “network device” called mydomain.tld VPN, with a padlock as an icon. It’s possible to start the VPN connection from here. It’s also possible to check the “Show VPN status in menu bar” checkbox, and manage the VPN by clicking the resulting icon.


The simplest way to install the profile on an iOS device is by mailing it and tapping the file from within Mail. After providing the device password to allow system changes, there will be a new “mydomain.tld VPN” profile in Settings – VPN. Select it and change Status to Connected.


We have enabled a simple and secure way to reach our home network and to reach the Internet via a known and trusted gateway from our Apple devices even when on the move.
With the proper client configuration, the same principles should be applicable to a client running any modern operating system.