File system rights on mounted drives in Windows

As I repeatedly state, the same object-oriented design that makes PowerShell potentially powerful in complex tasks also makes it require ridiculous verbosity on our part to accomplish simple ones. Today’s post is a perfect example.

Consider a volume mounted to an NTFS mountpoint in a directory. Since this is an obvious afterthought in the file system design, setting access rights on the mountpoint directory won’t do you any good if you expect these rights to propagate down through the mounted file system. While the reason may be obvious once you think about the limitations in the design, it certainly breaks the principle of least astonishment. The correct way to set permissions on such a volume is to configure the proper ACL on the partition object itself.

In the legacy Computer Management MMC-based interface, this was simply a matter of right-clicking in the Disk Management module to change the drive properties, and then setting the correct values in the Security tab. In PowerShell, however, this isn’t a simple command, but a script with three main components:

  • Populate an ACL object with the partition object’s current security settings
  • Modify the properties of the ACL object
  • Commit the contents of the ACL object back into the partition object

Here’s how it’s done:

First we need to find the volume identifier. For this we can use get-partition | fl, optionally narrowed down with a where (or ?) query if we know additional details about the volume. What we’re looking for is something like the following example in the AccessPaths property:

\\?\Volume{f0e7b028-8f53-42fa-952b-dc3e01c161d8}
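If we know, for instance, the directory the volume is mounted in, a where filter narrows the list down. A sketch (the mount path below is made up for the example):

Get-Partition | ?{ $_.AccessPaths -like "*E:\Databases\Archive*" } | fl AccessPaths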

Armed with that we can now fill an object with the ACL for our volume:

$acl = [io.directory]::GetAccessControl("\\?\Volume{f0e7b028-8f53-42fa-952b-dc3e01c161d8}\")

We then create a new access control entry (ACE):

$newace = New-Object -TypeName System.Security.AccessControl.FileSystemAccessRule -ArgumentList "DOMAIN\testuser", "ReadAndExecute, Traverse", "ContainerInherit, ObjectInherit", "None", "Allow"

The arguments must be supplied in this exact order because that is how the constructor for the access control entry object is defined. There’s really no way of figuring this out from within the interactive scripting environment; you just have to have a bunch of patience and read dry documentation, or learn from code snippets found through searching the web.
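For reference, a sketch of the constructor overload being used here (parameter names paraphrased from the .NET documentation):

# FileSystemAccessRule(String identity, FileSystemRights fileSystemRights,
#                      InheritanceFlags inheritanceFlags, PropagationFlags propagationFlags,
#                      AccessControlType type)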

The next step is to load our new ACE into the ACL object:

$acl.SetAccessRule($newace)
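As a side note, SetAccessRule() replaces any existing explicit entries for the same identity and access type, which is usually what we want here. If the intention is instead to merge the new rights with whatever the account already holds, AddAccessRule() should do that:

$acl.AddAccessRule($newace)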

What if we want to remove rights – for example the usually present Everyone entry? In that case we need to find every ACE referencing that user or group in our ACL, and remove it:

$acl.access | ?{$_.IdentityReference.Value -eq "Everyone"} | ForEach-Object { $acl.RemoveAccessRule($_)}
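An alternative, assuming we simply want to drop every rule tied to that identity in one go, is the PurgeAccessRules() method, which takes an identity reference object directly:

$everyone = New-Object -TypeName System.Security.Principal.NTAccount -ArgumentList "Everyone"
$acl.PurgeAccessRules($everyone)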

If we’ve done this job interactively, we can take a final look at our ACL to confirm it still looks sane by running $acl | fl.
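To see just the access entries rather than the full ACL dump, something along these lines also works:

$acl.Access | ft IdentityReference, FileSystemRights, IsInherited, AccessControlType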

Finally we’ll commit the ACL into the file system again:

[io.directory]::SetAccessControl("\\?\Volume{f0e7b028-8f53-42fa-952b-dc3e01c161d8}\",$acl)
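Putting it all together, the whole procedure condenses into this little script, using the example volume GUID and account from above (replace both with your own values):

# Find the volume GUID path of the mounted partition (listed under AccessPaths)
Get-Partition | fl AccessPaths

$volume = "\\?\Volume{f0e7b028-8f53-42fa-952b-dc3e01c161d8}\"

# Read the current ACL from the volume
$acl = [io.directory]::GetAccessControl($volume)

# Build a new ACE and load it into the ACL
$newace = New-Object -TypeName System.Security.AccessControl.FileSystemAccessRule -ArgumentList "DOMAIN\testuser", "ReadAndExecute, Traverse", "ContainerInherit, ObjectInherit", "None", "Allow"
$acl.SetAccessRule($newace)

# Optionally drop every ACE referencing Everyone
$acl.Access | ?{ $_.IdentityReference.Value -eq "Everyone" } | ForEach-Object { $acl.RemoveAccessRule($_) }

# Commit the modified ACL back to the volume
[io.directory]::SetAccessControl($volume, $acl)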

And there we go: We’ve basically had to write an entire little program to get it done, and the poor inventors of the KISS principle and the principle of least astonishment are slowly rotating like rotisserie chickens in their graves, but we’ve managed to set permissions on a mounted NTFS volume through PowerShell.

PowerShell for Unix nerds

(This post was inspired by a question on ServerFault)

Windows has had an increasingly useful scripting language since 2006 in PowerShell. Since Microsoft apparently fell in love with backend developers a while back, they’ve even ported the core of it to GNU/Linux and macOS. This is actually a big deal for us who prefer our workstations to run Unix but have Windows servers to manage on a regular basis.

Coming from a background in Unix shell scripting, how do we approach the PowerShell mindset? Theoretically it’s simple to say that Unix shells are string-based while PowerShell is object oriented, but what does that mean in practice? Let me try to present a concrete example to illustrate the difference in philosophy between the two worlds.

We will parse some system logs on an Ubuntu server and on a Windows server respectively to get a feel for each system.

Task 1, Ubuntu

The first task we shall accomplish is to find events that recur between 04:00 and 04:30 every morning.

In Ubuntu, logs are regular text files. Each line clearly consists of predefined fields delimited by space characters. Each line starts with a timestamp consisting of the date followed by the time in hh:mm:ss format. We can find anything that happens during hour “04” of any day in our retention period with a naïve grep for " 04:":

zgrep " 04:" /var/log/syslog*

(Note that I use zgrep to also analyze the archived, rotated log files.)

On a busy server, this particular search gives us twice as much data to sift through as we actually want, since it matches the whole hour rather than just its first half. Let’s complement our command with a simple regular expression to filter the results:

zgrep " 04:[0-2][0-9]:[0-5][0-9]" /var/log/syslog*

Mission accomplished: We’re seeing all system log events between 04:00:00 and 04:29:59 for each day stored in our log retention period. To clarify the command, each bracket represents one position in our search string and defines the valid characters for this specific position.

Bonus knowledge:
[0-9] can be substituted with \d, which translates into “any digit” (with GNU grep this requires Perl-compatible mode, i.e. grep -P). I used the longer form here for clarity.

Task 2, Ubuntu

Now let’s identify the process that triggered each event. We’ll look at a line from the output of the last command to get a feeling for how to parse it:

/var/log/syslog.7.gz:Jan 23 04:17:36 lbmail1 haproxy[12916]: xx.xxx.xx.xxx:39922 [23/Jan/2019:04:08:36.405] ft_rest_tls~

This can be translated into a general form:

<filename>:<MMM DD hh:mm:ss> <hostname> <procname[procID]>: <message>

Let’s say we want to filter the output from the previous command and only see the process information and the message. Since everything is a string, we’ll pipe grep’s output to a string manipulation command. This particular job looks like a good use case for GNU cut. With this command we need to define a delimiter, which we know is a space character, and then count fields in our log format to see that we’re interested in “fields” number 5 and 6. The message part of each line may of course contain spaces, so once we reach that field we’ll want to show the entire rest of the line. The required command looks like this:

zgrep " 04:[0-2][0-9]:[0-5][0-9]" /var/log/syslog* | cut -d ' ' -f 5,6-

Now let’s do the same in Windows:

Task 1, Windows

Again our task is to find events between 04:00 and 04:30 on any day. As opposed to our Ubuntu server, Windows treats each line in our log as an object, and each field as a property of that object. This means that we will get no results at best and unpredictable results at worst if we treat our log as a searchable mass of text.
Two examples that won’t work:

Wrong answer 1

get-EventLog -LogName System -After 04:00 -Before 04:30

This looks nice, but -After and -Before expect full timestamps, so the bare times are interpreted as today’s date and we implicitly only get log events between the given times this day.
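For illustration, bounding the window explicitly would look something like the line below (the date is just an example), and it still only covers a single morning rather than every day in the log:

Get-EventLog -LogName System -After ([datetime]"2019-01-23 04:00") -Before ([datetime]"2019-01-23 04:30")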

Wrong answer 2

get-EventLog -LogName System | Select-String -Pattern "04:[0-2][0-9]:[0-5][0-9]"

Windows can use regular expressions just fine in this context, so that’s not the problem. What’s wrong here is that Select-String operates on strings, so each event object is matched against its string representation rather than the contents of its properties, and the pattern never finds the data we’re after.

Right answer

If we remember that PowerShell works with objects rather than plain text, the conclusion is that we should be able to query for properties within each line object. Enter the Where-Object cmdlet, aliased “where” or “?”:

Get-EventLog -LogName System | ?{$_.TimeGenerated -match "04:[0-2][0-9]:[0-5][0-9]"}

What did we do here? The first few characters after the pipe can be read as “for each line, check whether this line’s TimeGenerated property matches…”.

One of the things we “just have to know” to understand what happened here is that the column name “Time” in the output of the Get-EventLog command doesn’t represent the actual name of the property. Looking at the output of Get-EventLog -LogName System | fl shows us that there’s one property called TimeWritten, and one property called TimeGenerated. We’re naturally looking for the latter one.
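As an aside, since TimeGenerated holds an actual DateTime object, we could skip the string matching entirely and compare its components instead. A sketch of that variant:

Get-EventLog -LogName System | ?{ $_.TimeGenerated.Hour -eq 4 -and $_.TimeGenerated.Minute -lt 30 }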

This was it for the first task. Now let’s see how we pick up the process and message information in PowerShell.

Task 2, Windows

By looking at the headers from the previous command, we see that we’re probably interested in the Source and Message columns. Let’s try to extract those:

Get-EventLog -LogName System | ?{$_.TimeGenerated -match "04:[0-2][0-9]:[0-5][0-9]"} | ft Source, Message

The only addition here is that we pipe the query hits to the Format-Table cmdlet (ft) and tell it to show only the contents of the Source and Message properties of each passed object.
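If we want to keep processing the results rather than just display them, Select-Object gives us trimmed-down objects instead of formatted text:

Get-EventLog -LogName System | ?{$_.TimeGenerated -match "04:[0-2][0-9]:[0-5][0-9]"} | Select-Object Source, Message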

Summary

PowerShell is different from traditional Unix shells, and by trying to accomplish a specific task in both we’ve gained some understanding of how they differ:

  • When piping commands together in Unix, we’re sending one command’s string output to be parsed by the next command.
  • When piping cmdlets together in PowerShell, we’re instead sending entire objects with properties and all to the next cmdlet.

Anyone who has tried object-oriented programming understands how the latter is potentially powerful, just as anyone who has “gotten” Unix understands how the former is potentially powerful. I would argue that it’s easier for a non-developer to learn Unix than to learn PowerShell, that Unix allows for a more concise syntax than PowerShell, and that Unix shells execute commands faster than PowerShell in many common cases. However, I’m glad that there’s actually a useful, first-party scripting language available in Windows.

Getting things done in PowerShell is mainly a matter of turning around and working with entire objects and their properties (whose values may or may not be strings) rather than with strings directly.

Monitoring mounted Windows volumes using Zabbix

Sometimes it’s nice to mount a separate disk volume inside a directory structure. For a concrete example: At work we have a legacy system that writes copious amounts of data to subfolders of a network share. While vSphere allows for pretty large vdisks, after you pass 8 TB or so, they become cumbersome to manage. By mounting smaller disks directly in this directory structure, each disk can be kept to a manageable size. 

First the bad news: the built-in filesystem discovery rules for the Zabbix Windows agent can only automatically enumerate drive letters, so we get the status of each drive-lettered file system, but not of the volumes mounted inside them.

The good news, however, is that it’s a piece of cake to make Zabbix understand what you mean if you manually create data collection items for these subdirectories.

The key syntax in Zabbix 3 looks like this:

vfs.fs.size[G:/topdir/subdir,pfree]

The only thing to remember is that we send forward slashes in our query to the agent even though it’s running on Windows.
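The other modes of the same item key should work the same way against the mounted path, and a key can also be tested directly against the agent with zabbix_get (the hostname below is a placeholder):

vfs.fs.size[G:/topdir/subdir,used]
vfs.fs.size[G:/topdir/subdir,total]
zabbix_get -s winhost.example.com -k "vfs.fs.size[G:/topdir/subdir,pfree]"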