Exchange – another lesson learned

This is why we test things before going live:
After migrating a test box from the old Exchange environment, it could receive mail just fine, and sending mail out of the organization worked flawlessly too. Unfortunately any mail sent from this account to recipients within the old Exchange environment got stuck in the mail queue.

The fix, logical as usual, was to complement the default receive connectors on the old servers with the explicit addresses of the new Exchange servers, even though those addresses naturally fell well within the connectors’ existing range. Way to go, Microsoft!

Load Balancing Exchange 2016 behind HAProxy

I recently started the upgrade to Exchange 2016 at work. A huge benefit over Exchange 2010 is that REST-based client connections are truly stateless. In effect this means that if a server goes down, clients shouldn’t really notice any issues as long as something redirects them to a working server. In my setup, that something is HAProxy.

The guys at HAProxy have their own excellent walkthroughs for setting up their load balancer in front of Exchange 2013, which can pretty much be lifted verbatim for Exchange 2016, but I want to add a few key points to think about:

Service health checks

Each web service presents a virtual file, called HealthCheck.htm, that reports its state. Let HAProxy use the contents of this file for the server health check. That way it’ll know to redirect clients if one of the services is down, even though the Exchange server in question may still be listening on port 443.

Example config:

    option httpchk GET /owa/HealthCheck.htm
    http-check expect string 200\ OK
    # replace exchange1.example.com with your real server address
    server Exchange1 exchange1.example.com:443 maxconn 10000 ssl ca-file cacert.pem weight 20 check

This example shows a test of the Outlook Web Access service state. Naturally, the config can be extended to test each of the REST services an Exchange server presents.
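For instance, a dedicated backend per service can carry its own check. A sketch for MAPI-over-HTTP – the hostnames here are placeholders, not values from my production config:

```
backend bk_mapi
    option httpchk GET /mapi/HealthCheck.htm
    http-check expect string 200\ OK
    server Exchange1 exchange1.example.com:443 ssl ca-file cacert.pem check
    server Exchange2 exchange2.example.com:443 ssl ca-file cacert.pem check
```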

Exchange server default firewall rules

Our design puts the load balancer in a DMZ outside of our server networks. Clients connecting through the load balancer will be dropped by Windows firewall rules generated by Exchange; specifically the edge traversal rules for the POP3 and IMAP protocols. Make sure you allow edge traversal for these protocols, letting the network firewall take care of limiting external client connections to them. Also take note there are multiple firewall rules for IMAP and POP3 traffic. Only the ones concerned with client traffic are relevant for this change. There’s no point in kicking open holes in your firewall for no good reason.
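The change can be scripted – a sketch in PowerShell. Note that the display names below are examples only; Exchange’s generated rule names can vary, so list the rules first and pick the client-facing ones from the output:

```powershell
# List the IMAP/POP3 rules Exchange generated (names vary between versions)
Get-NetFirewallRule -DisplayName "*IMAP4*", "*POP3*" |
    Select-Object DisplayName, Enabled, Direction

# Allow edge traversal on a specific client-facing rule (example name)
Set-NetFirewallRule -DisplayName "MSExchangeIMAP4 (TCP-In)" -EdgeTraversalPolicy Allow
```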

Exchange and Outlook suck at IMAP

We use IMAP for an internal order management system. Outlook and Exchange aren’t the best tools for this protocol, but unfortunately we have to live with those due to sins committed long ago. I spent quite some time troubleshooting our IMAP connections:
No matter how I configured Outlook I couldn’t get it to open an IMAP connection to the Exchange servers. Error messages varied depending on the client settings, but essentially I couldn’t log on, couldn’t establish a secure connection, or couldn’t synchronize my folders.

I would get the regular banner when telnetting from the client machine, so I knew traffic was getting through all the way from Exchange via the load balancer.
Mozilla Thunderbird could connect perfectly well and sync accounts, both using STARTTLS on port 143 and over a TLS-encrypted connection on port 993.

After mulling it over, I turned on debug logging in Outlook and quickly saw that the client was trying and failing to perform an NTLM logon to Exchange. Using the error messages as search terms, I found others who had experienced the same issue. Their solution had been to turn off NTLM authentication for the IMAP protocol on the Exchange server. This appears to be a regression in Exchange Server 2016 of a bug previously seen in Exchange 2013.
The command in the Exchange Management Shell:

Set-IMAPSettings -EnableGSSAPIAndNTLMAuth $false
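The setting doesn’t kick in until the IMAP services have been restarted – the service names below assume a default Exchange 2016 installation:

```powershell
# Restart the Client Access (frontend) and backend IMAP4 services
Restart-Service MSExchangeIMAP4
Restart-Service MSExchangeIMAP4BE
```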

After this, Outlook is still incapable of logging on using TLS over port 993, but at least it consistently manages to run STARTTLS over port 143, which is good enough for my use case.

All in all, the most complicated part here wasn’t making HAProxy do its magic, but getting Exchange and Outlook to do what they should.

Apple AirPods first impressions

I’ve had the Apple AirPods for a few days now, and thought I’d record a few of my thoughts on them.

EDIT 2017-07-17: I’ve added an update to the end of this article.
EDIT 2017-07-20: A second update has been added after going through the rounds with Apple’s support.

First of all, the sound quality: Wow.

What I liked about the regular EarPods, was that they let other sounds through, making wearing them in populated areas non-suicidal in comparison to wearing in-ear headphones with a better seal: I regularly shake my head at pedestrians and cyclists wearing in-ears and obviously having great faith in surrounding traffic as they step right out into zebra-crossings or high-traffic streets. Unfortunately, in the case of the original wired EarPods, this added situational awareness came at the cost of radically reduced “oomph” in the music: Bass and dynamic range seemed to suffer in anything but rather quiet environments.

While the AirPods have a pretty much identical fit, letting similar amounts of ambient sound through, the design team has managed to give them the additional range and power to sound about as good as I imagine such small drivers can in an open design. That said, some magic is very hard to pull off: you won’t be well off using these without additional hearing protection when mowing the lawn or angle-grinding metal bars.

Technically, I second what most other people seem to be saying about the combination of Bluetooth and the proprietary W1 chip: switching sound output to the AirPods once the initial pairing with a device has been made seems to work flawlessly, but both on my Mac and on my iPad it took a few tries to see the AirPods in the first place. Under the hood, information about your pair of AirPods is shared across your Apple devices using iCloud, and obviously this information needs to be updated in some way. On the Mac, restarting the computer seems to have done the trick. This is obviously an area where Apple has some work to do to smooth out the experience in the future.

One thing to watch out for: enabling the AirPods when playing with GarageBand, you get a warning about the ’Pods introducing latency. Sure enough: playing the on-screen keyboard, I got somewhere between a quarter and half a second of latency instead of the immediate response I’m used to from Apple’s music tools. If music production is something you do on your Apple devices, make sure to keep a wired pair of headphones or in-ears around.

All in all: are the AirPods worth their price? It depends. Can you spare a bunch of money for a smoother, nicer experience than the cheapest available product that solves your problem? If you’re an Apple user, the answer is probably yes. Now that I have them, I don’t really think about the money I saved up to spend on them. For now I’m extremely happy with them.

Update 1 (2017-07-17)

I’ve encountered two major annoyances in how the AirPods work with my Mac (a late 2013 15″ MacBook Pro):
Apparently, when anything in macOS uses the microphones in the AirPods, they switch to phone call mode, lowering sound quality and making all sounds slightly distorted and lo-fi. This can be temporarily mitigated by switching the sound input for the system or for the specific application to another device, like the internal mic, but this of course isn’t a viable long-term solution.

The other problem on the Mac is recurring sound interruptions and glitches on music playback. Switching to the internal speakers or wired headphones, no such glitches can be heard, so it definitely has to do with the AirPods or their Bluetooth implementation.

Frankly, I’m disappointed that the AirPods were released with such glitches still in place; then again, Apple did have trouble getting them to market on time in the first place. I will speak to Apple’s support to try to get some more information. It may be a problem with the Bluetooth stack itself, as implemented in the Mac’s hardware or in macOS, and in that case there may not be a lot Apple can do.

In view of this, I have to change my recommendation:
At this point in time (mid-July 2017), do not purchase the AirPods expecting to use them for good-quality music playback and convenient voice calls in macOS. For use with iOS devices, however, they remain an excellent choice.

Update 2 (2017-07-20)

Apple’s support gave me a technical explanation for the lo-fi sound quality when the microphone is used in macOS.

The facts

When the AirPods are only used for listening, the source device can send relatively high-quality AAC-encoded sound to them. When the microphones are in use – that is, when a return channel is active – the Bluetooth standard specifies a lower-quality protocol, resulting in noticeably lower dynamic range and sound quality.

The problem exists on iOS devices too, but it’s simply less likely that one would be listening to music and simultaneously using the microphone in that system.

My speculations

It looks to me as though my iOS devices (9.7″ iPad Pro and iPhone 6s) support a newer version of the Bluetooth hands-free profile than macOS does on my 2013 15″ MacBook Pro, since call sound quality is radically better on the former. This may be due to the Bluetooth chip used in my computer, or to software limitations in the operating system. If the former – which I suspect – the issue won’t get fixed on my current computer. If the latter, a later patch may mitigate the issue, though perhaps not solve it entirely.

A problem with the age of the Bluetooth chip and its available codecs may also explain the stuttering in macOS.


As I wrote in Update 1 to this post, my recommendations are as follows:
Beware of purchasing a pair of AirPods if you intend to use them primarily with a Mac. They’re probably no worse than other Bluetooth headsets for this purpose; the same problems exist with these as with any other Bluetooth headset. If music or voice call quality is an issue, a wired headset is still the way to go on the computer side of things.

For iOS devices and the Apple Watch, however, a pair of AirPods is probably one of the best upgrades to your experience you can get if you want to go wireless.

iOS 11 drops 32-bit app support – do we care?

In the upcoming months and until a short while after Apple’s inevitable autumn event where they’ll publicly release their new operating systems, computer magazines and news sites will try to create headlines about how Apple is killing off tens or hundreds of thousands of apps. What’s true and what’s not about this?

Well, yes: iOS 11 kills the support for 32-bit apps. Any such apps on your iPhone or iPad will stop working the day you upgrade to the upcoming operating system. I had a discussion with a friend the other day, regarding Apple’s decision to drop 32-bit OS and app support. He didn’t really like that decision, but I would like to put it in perspective with this beautiful table:

Year  | 2012 | 2013 | 2014 | 2015 | 2016 | 2017
Model |   5  |  5s  |   6  |  6s  |   7  | 7s/8

The iPhone 5 (2012) is the last iPhone that requires a 32-bit OS; the 2017 model will be the first iPhone launched without OS support for 32-bit apps.

What I’m trying to indicate is that we have two conflicting ways of approaching the problem of legacy software:
One way would be to try to avoid rocking the boat, keeping backwards compatibility even at cost. The good thing about this is what we see in the Windows ecosystem: As long as the computer’s CPU is capable of running in emulation mode for the bitness required[1], software just keeps on working. Particularly in business applications, not breaking backwards compatibility may be worth significant sums of money.
The bad thing is a lack of incentive on the part of software manufacturers to update their programs. A “Change is Bad” attitude easily develops when changes are few and far between: people don’t get enough practice in performing change in a safe way, and change management and reliability suffers as a result.

The other way to approach legacy software is to enforce changes for users who want to stay up to date. This is the approach Apple has chosen in many areas, for good and for bad. Since they control both their software and their hardware platforms, Apple are in a very good position to simply stop supporting old ways of doing things, and provided they wait a reasonable amount of time, this shouldn’t cause a lot of problems. As evident from the table earlier in this post, the last iPhone unable to run a 64-bit environment turns five this year. Considering the evolution of mobile hardware, I’d say anybody who still uses an iPhone 5 has gotten pretty decent mileage out of it – remember that every new software update up until this fall will have worked on that device.

But suppose you actually are heavily invested in some older app; how can you know whether it supports 64-bit iOS versions?
Look at the Version history field in the App Store. If an app was first published in January 2015 or later, or if it was last updated later than June 1 2015, it had to be able to run in a 64-bit environment.
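If you want to audit a whole list of apps, the rule of thumb above is easy to script. A sketch using the cutoff dates mentioned here (they’re taken from this post, not from authoritative Apple documentation):

```shell
# ISO dates compare correctly as plain strings, so lexicographic
# comparison works for the two cutoffs described above
must_be_64bit() {
    first="$1"; updated="$2"
    if [[ "$first" > "2014-12-31" || "$updated" > "2015-06-01" ]]; then
        echo "64-bit required"
    else
        echo "possibly 32-bit only"
    fi
}

must_be_64bit 2014-05-01 2014-12-15   # old app, never meaningfully updated
must_be_64bit 2014-05-01 2016-03-01   # old app, but updated recently
```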

There’s no stopping the wheels of time – iOS and Apple hardware will move on. I could recommend freezing a device, not upgrading it beyond a certain OS version. I won’t, because I consider that a terrible idea, at least for any device connected to the Internet, and for any device used for production work.

Luckily, we won’t see another bitness update in the foreseeable future. The two latest ones were exciting enough.


[1] An x86-compatible CPU cannot, by design, run 16-, 32-, and 64-bit code simultaneously; it runs in either legacy (32/16-bit) or long (64/32-bit) mode, selected at boot.

SFTP revelations

I got myself into a situation where I had to copy some files from my computer to a server that presented sftp but not scp. Since I’d never needed to use the SFTP protocol from a CLI-only machine, I hadn’t really thought about how it works in non-interactive mode. Batch mode allows you to create a batch file of sftp commands to execute on the server, but what if you just want to do a single operation?

Pipes to the rescue:

$ echo put filename.tgz | sftp -i private.key -b - user@example.com

Putting a dash after the -b option causes the command to take its batch input from stdin, so any text piped to the command is swallowed by the sftp client (private.key and user@example.com are placeholders for your own identity file and destination). Nice and simple.
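The same trick scales to several operations. A sketch that builds a multi-command batch on stdin – the file names, key, and destination are again placeholders:

```shell
# Build a multi-command batch; $(...) strips the trailing newline
batch=$(printf 'put filename.tgz\nchmod 640 filename.tgz\nbye\n')

# Show what the sftp client would receive on stdin
echo "$batch"

# Feed it to the client like so (placeholder key and host):
#   echo "$batch" | sftp -i private.key -b - user@example.com
```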

Securing an Internet accessible server – Part 3

This post is part of a series. Part 1, Part 2.

In the last part I briefly mentioned load balancers and proxies. After thinking about it for a while, I realized I see no reason not to run one, since it simplifies things a bit when setting up secure web services. In this part, we will be setting up a HAProxy server which won’t actually load balance anything, but which will act as a kind of extensible gatekeeper for our web services. In addition, the HAProxy instance will act as the TLS termination point for secure traffic between clients on the Internet and services hosted on our server(s).

This article is written from the perspective of running HAProxy on a separate virtual machine. That’s just for my own convenience, though. If you’re running pfSense for a firewall, you already have HAProxy as a module. It is also possible to run HAProxy directly on your web server, just logically putting it in front of whatever web server software you’re running.

Let’s get started. This post will be a rather long one.

Continue reading “Securing an Internet accessible server – Part 3”

WordPress behind HAProxy with TLS termination

My current project has been to set up a publicly accessible web server with a decent level of security. It has been an interesting exercise in applying “old” knowledge and gathering some new.

This weekend I finished this project for now. The current setup is as follows:
Behind my firewall, where I NAT port 80 and 443 for http and https traffic, I have set up a HAProxy instance. This allows me to do some interesting magic with incoming traffic.

In addition to the traffic manipulation, I also use the HAProxy server for contacting Let’s Encrypt to renew my TLS certificates, and for terminating TLS traffic. I do the latter for two reasons: a) I’m frankly too lazy to automate installing updated certificates on the web server, and b) the entire solution runs on such limited hardware that I’m a little worried about putting too much strain on it should the machine ever see more traffic.

The web server is an Nginx running this very WordPress instance.

Let’s Encrypt configuration

I took the best parts from two different solutions to automate the relatively frequent certificate renewals that Let’s Encrypt enforces. I began by installing the HAProxy ACME Domain Validation Lua Plugin into HAProxy, which ensures that there’s a valid listener to show that I own my domain when I trigger the letsencrypt client program. The beauty of running this through HAProxy is that the process requires no downtime.

For the configuration of the letsencrypt client, I basically stole the scripts from Martijn Braam’s blog BrixIT, just adapting them to the listener already provided through the Lua script. The benefit of doing it this way is that the BrixIT method is considerably more flexible than the Lua script alone when you expect HAProxy to use more than one certificate.

Example config:

    lua-load /etc/haproxy/acme-http01-webroot.lua

    frontend web-http
        acl url_acme_http01 path_beg /.well-known/acme-challenge/
        http-request use-service lua.acme-http01 if METH_GET url_acme_http01
        redirect scheme https code 301 if !{ ssl_fc }
The last line also shows how to redirect regular http traffic to a https listener.



The renewal script itself, with the elided values filled in as placeholders (adapt the paths and the e-mail address to your own installation):

#!/bin/bash

# Path to the letsencrypt-auto tool (placeholder)
LE_TOOL=/opt/letsencrypt/letsencrypt-auto

# Directory where the acme client puts the generated certs (placeholder)
LE_OUTPUT=/etc/letsencrypt/live

# Concat the requested domains
DOMAINS=""
for DOM in "$@"
do
    DOMAINS+=" -d $DOM"
done

# Create or renew certificate for the domain(s) supplied for this tool
$LE_TOOL --agree-tos --renew-by-default certonly $DOMAINS --text --webroot --webroot-path /var/lib/haproxy --email admin@example.com

# Cat the certificate chain and the private key together for haproxy
cat $LE_OUTPUT/$1/{fullchain.pem,privkey.pem} > /etc/haproxy/ssl/${1}.pem

# Reload the haproxy daemon to activate the cert
systemctl reload haproxy

TLS termination configuration

The problem with terminating TLS traffic before the web server is that any good web application should be able to recognize that a client is coming in over an insecure connection. Luckily, we can use HAProxy to tell WordPress that the connection was secure up until the load balancer and can be trusted the rest of the way. Be aware that this is an extremely bad idea if there is any way to reach the web server other than via your HAProxy:


In wp-config.php:

/** Make sure WordPress understands it's behind an SSL terminator */
define('FORCE_SSL_ADMIN', true);
define('FORCE_SSL_LOGIN', true);
if ($_SERVER['HTTP_X_FORWARDED_PROTO'] == 'https')
    $_SERVER['HTTPS'] = 'on';

And in the HAProxy frontend:

frontend web-https
    option http-server-close
    http-request set-header X-Forwarded-Proto https if { ssl_fc }

As a final touch, I copied the brute force sandboxing scheme straight from a blog post by Baptiste Assmann over at HAProxy Technologies.


The paravirtual SCSI controller and the blue screen of death

For driver reasons, the default disk controller in VMware guests is an emulated LSI card. However, once you install VMware Tools in Windows (and immediately after installing the OS in most modern Linux distributions), it’s possible to slightly lower the overhead for disk operations by switching to the paravirtual SCSI controller (“pvscsi”).

I’m all for lower overhead, so my server templates have already been converted to use the more efficient controller, but I still have quite a lot of older Windows servers running the LSI controller, so I’ve made it a habit to switch controllers when I have them down for manual maintenance. There is a perfectly good, well-documented way of switching Windows system drives to a pvscsi controller in VMware, and up until a couple of days ago I had never encountered any issues.

Continue reading “The paravirtual SCSI controller and the blue screen of death”

Securing an Internet accessible server – Part 2

In part 1 we made it significantly harder to gain access to our server once it is opened up to the Internet – but we’re not quite ready for that yet. In this post we’re exploring a firewall in Ubuntu, ufw, which stands for “uncomplicated firewall”, and we’ll set up some additional hardening using Fail2Ban to protect ourselves from some common repeated attacks.

Continue reading “Securing an Internet accessible server – Part 2”

Securing an Internet accessible server – Part 1

This article is part of a series. Part 2.

Let’s look at a simple scenario, and see how common tools in the Linux and BSD world can help us:

We want to be able to remote control a server from wherever in the world, but we really don’t want others to be able to log on to it.

In the real world, this is common enough. Understandably, though, anyone who even has a slight understanding of the risks involved will be somewhat nervous about creating a potential hole in the barricades protecting their network. With a little knowledge, we can achieve the relevant access while minimizing the risks.

In this first part, we’re configuring the Secure Shell for asymmetric key logon rather than the generally less secure username/password combination we’re used to.
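As a small preview of what the key-based setup looks like, here’s a sketch – the key file names are arbitrary, the host is a placeholder, and in real use you’d protect the key with a passphrase:

```shell
# Generate an Ed25519 key pair (no passphrase here purely for illustration)
ssh-keygen -t ed25519 -N '' -q -f ./demo_key -C 'demo key'

# Two files are produced: the private key and the public key
ls demo_key demo_key.pub

# Install the public half on the server, then turn off password logons:
#   ssh-copy-id -i ./demo_key.pub user@server.example.com
#   (on the server, in /etc/ssh/sshd_config: PasswordAuthentication no)
```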

Continue reading “Securing an Internet accessible server – Part 1”