Configure NVDIMM-N on a Dell PowerEdge R740 with Windows Server 2019

Cloud and Virtualization Architect. Didier is an IT veteran with over 20 years of expertise in Microsoft technologies, storage, virtualization, and networking. Didier primarily works as an expert advisor and infrastructure architect.

Introduction

Dell EMC introduced support for NVDIMM-N with their 14th generation of PowerEdge servers. NVDIMM-N is one of the variants of Persistent Memory (PMEM).

[Image: location of the status LEDs on the NVDIMM-N]

Microsoft also uses the term Storage Class Memory. It is not exactly the same as the NVDIMM variants, but it is used in naming the bus driver on Windows. In essence, it is a Non-Volatile DIMM (NVDIMM), which means it retains its data even when the system suffers an unexpected power loss, a crash (BSOD), or a shutdown/restart. NVDIMM-N modules have a battery pack to help achieve this (like a storage controller) and flash storage to persist the data.

[Image: the NVDIMM battery]

An NVDIMM-N module sits in a standard CPU memory slot, a lot closer to the CPU than any disk type has ever been. This way we get the benefits of memory (very high speed, very low latency) combined with the persistence of storage.

Hyper-V and NVDIMM-N

Both VMware and Microsoft are heavily investing in ever better storage performance, both for virtual disks and for their HCI storage offerings. VMware introduced support for NVDIMM-N in vSphere ESXi 6.7. Microsoft brings support for NVDIMM-N mainstream with Windows Server 2019 LTSC. With SAC builds they can deliver features faster, but for the virtualization/HCI use case, LTSC is the branch we're interested in. We can use NVDIMM-N with S2D or present it to VMs directly, to be used with either DAX or block access depending on the needs of your workloads.

As you know, I tend to invest in cost-effective, high-value ways to let Hyper-V shine when it comes to performance, and as such this capability is something I have to investigate. Just like most of you, I am a novice at this technology. So, we are going on an adventure.

The first step is configuring NVDIMM-N on the server

Before we can leverage the NVDIMM-N modules, we need to configure them for that use case in the BIOS settings. Make sure you have updated the BIOS and all firmware to the latest and greatest. The same goes for the drivers and Windows Server updates.

Boot into the BIOS and navigate to System Setup / System BIOS / Memory Settings / Persistent memory.

In that menu make sure your settings are as listed below.

The Battery Status should show the NVDIMM battery as present and healthy.

Back out and it will ask you to save any changes. When the server has rebooted, we will find the NVDIMM-N modules in Device Manager under Memory devices. The number there depends on how many you have in your server. It speaks for itself that if you did not enable them in the BIOS, no NVDIMM-N devices will be shown. These are the physical devices in your server's memory slots.

What if my NVDIMM battery is dead or set as read-only (BIOS)?

Windows Server 2019 will still function and be able to read and write data, but after a restart, a crash, or power loss, that data is lost. If you want the OS to mark the NVDIMMs as read-only in these cases, you need to create a registry DWORD value under HKLM\System\CurrentControlSet\Services\pmem called ReadOnlyOnPersistenceLoss with a value of 1.
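
A minimal PowerShell equivalent of that registry edit (path and value name taken straight from the paragraph above):

# Mark PMEM disks read-only after a loss of persistence (dead battery, read-only BIOS setting)
New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\pmem" -Name "ReadOnlyOnPersistenceLoss" -PropertyType DWord -Value 1 -Force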

How to use NVDIMM-N modules in Windows

In Windows Server 2019 we have support for label and namespace management. This basically means that clean NVDIMM-N modules will not show up as usable disks by default; you have to create those. That's why you don't see the logical persistent memory disks until you configure them. Once you've done that, you can bring them online, initialize them, and format them. To get started we use PowerShell. We run Get-Command -Module PersistentMemory to list the available cmdlets.
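
On a Windows Server 2019 host that gives you a short list of Pmem cmdlets; a quick sketch of the call and what you can expect back (the exact list may vary by build):

Get-Command -Module PersistentMemory

# Cmdlets you can expect to see, among others:
# Get-PmemDisk, Get-PmemPhysicalDevice, Get-PmemUnusedRegion,
# New-PmemDisk, Remove-PmemDisk, Initialize-PmemPhysicalDevice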

Creating a logical PMEM disk

First, we list the physical memory devices with Get-PmemPhysicalDevice
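
The call itself is trivial; each NVDIMM-N shows up with a device ID, its health status, and its capacity:

Get-PmemPhysicalDevice

# One line per NVDIMM-N module; the DeviceId values are what
# the unused regions listed below refer back to.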

We see six, which corresponds to the six modules we have in the host, and to what we saw under Memory devices in Device Manager.

We then list the regions usable for creating logical PMEM disks via Get-PmemUnusedRegion.
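
For reference, the call and what it reports:

Get-PmemUnusedRegion

# Each unused region has a RegionId (used by New-PmemDisk below), the
# DeviceId(s) of the NVDIMM-N module(s) backing it, and its total size.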

Again, we see six: as we are not using interleaving, each region contains one physical PMEM device, an NVDIMM-N module. We will use all of these to create six logical PMEM disks. But before we do so, we run Get-PmemDisk to confirm there are none yet.

We now create just one using New-PmemDisk -RegionId 1 -AtomicityType None

Now let’s run Get-PmemUnusedRegion and Get-PmemDisk again. You see we have one less unused region than before …
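
In short:

Get-PmemUnusedRegion   # five unused regions remain
Get-PmemDisk           # one logical PMEM disk now exists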

… and we see one PMEM disk

We then create PMEM disks from the remaining five unused regions and voilà, we now have six disks at our disposal. Be patient, this takes a while.
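
Rather than typing New-PmemDisk five times, a small loop does the same (a sketch, assuming each region object exposes the RegionId its default output shows):

# Create a logical PMEM disk from every remaining unused region
Get-PmemUnusedRegion | ForEach-Object {
    New-PmemDisk -RegionId $_.RegionId -AtomicityType None
}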

[Image: listing all PMEM disks with Get-PmemDisk]

Note that once you have created a PMEM disk for every available physical device, you will have no more unused regions; running Get-PmemUnusedRegion returns nothing at this point, as we have consumed all the available regions.

In “Device Manager” we see the persistent memory disks. I did note that friendly naming doesn’t seem to work properly, so I did not bother to use it.

These persistent memory disks are now available for the OS to bring online, initialize, and format (as DAX), storage on which we can create Hyper-V .vhdpmem virtual disks. We’ll write about that later on.
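
As a hedged sketch of that next step (the disk selection and drive letter handling are illustrative, it assumes the DiskNumber property that Get-PmemDisk displays, and DAX requires NTFS):

# Take the first logical PMEM disk, initialize it, and format it as a DAX volume
$pmem = Get-PmemDisk | Select-Object -First 1
Initialize-Disk -Number $pmem.DiskNumber -PartitionStyle GPT
New-Partition -DiskNumber $pmem.DiskNumber -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -IsDAX $true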

Interleaving

Didier, I need more space than that miserly 16 GB! Hold on, hold on, we’ll get you there, within reason for now, until we get larger NVDIMMs. Also, note that you can split up a log file in SQL Server to help deal with smaller disks.

Windows Server 2019 also brings us support for NVDIMM-N interleaving, which can enhance performance. The big gain is when interleaving is done for NVDIMMs across memory channels; the gain for interleaving within a channel is less significant.

To enable it you boot into the BIOS Setup and navigate to System Setup / System BIOS / Memory Settings / Persistent memory and change interleaving to enabled.

You’ll get a warning that this will delete all data. No worries: we already deleted the logical PMEM disks on the host to make sure we don’t run into issues when the regions’ layout changes. If anything worth saving was on there, you should have a backup somewhere you can restore from.
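
For completeness, clearing the logical PMEM disks beforehand is a one-liner (confirm the prompts, and only do this if the data is expendable):

# Remove every logical PMEM disk so the region layout can change safely
Get-PmemDisk | Remove-PmemDisk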

When you save these changes and reboot, you’ll see that from the Memory devices point of view nothing has changed: there are still six. But look at Get-PmemUnusedRegion and you’ll see two regions now, each consisting of three physical PMEM devices.

Great, more space when we create a logical PMEM disk out of those! That made my DBAs a bit more enthusiastic, and they started envisioning larger log or tempdb LUNs for their SQL Server VMs.

So let’s create logical PMEM disks out of the unused regions. This time I’m using BlockTranslationTable as the atomicity type.
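
The same loop as before works here; only the atomicity type changes:

# Create a logical PMEM disk from each of the two interleaved regions
Get-PmemUnusedRegion | ForEach-Object {
    New-PmemDisk -RegionId $_.RegionId -AtomicityType BlockTranslationTable
}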