
Entire database in memory: Fact or fiction?

HP Servers and Persistent Memory

Advances in hardware and software have converged to allow storing your entire database in memory (depending on how large it is), even if you don’t use Microsoft’s In-Memory OLTP feature.

HP Gen9 servers support NVDIMM-N (known as Persistent Memory), which at the time had a maximum size of 8GB, and with 16 slots, offered a total server capacity of 128GB. Hardly large enough to run today’s mega-sized databases, and there was also no way to actually store your database there. So the use case for SQL Server 2016 was to store transaction log blocks there. This could be beneficial in general, but particularly when using durable memory-optimized tables. That’s because WRITELOG waits on the transaction log can be a scalability bottleneck, which reduces the benefit of migrating to In-Memory OLTP.
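
For reference, here’s roughly what that looks like in practice: on Windows Server 2016 with an NVDIMM-N volume formatted as DAX, you add a small second log file on that volume, and SQL Server 2016 (SP1, if I recall correctly) uses it as the tail-of-log cache. The database name, drive letter, and path below are hypothetical:

    -- Minimal sketch: add a small log file on a DAX-formatted NVDIMM-N volume (P:\).
    ALTER DATABASE SalesDB
    ADD LOG FILE
    (
        NAME     = SalesDB_log_dax,
        FILENAME = 'P:\SQLLOG\SalesDB_log_dax.ldf',  -- file lives on the DAX volume
        SIZE     = 20MB                              -- only a small file is needed
    );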

There were other potential issues when using Persistent Memory, detailed in this blog post. But what’s not covered in that post is the fact that deploying NVDIMM-N reduced memory speed and/or capacity, because NVDIMM-N modules are not compatible with LRDIMM. This forces you to use RDIMM, which reduces capacity, and because NVDIMM-N operates at a slower speed than RDIMM, it also affects total memory speed.

HP has since released Gen10 servers, and they have changed the landscape for those seeking reduced latency by storing larger data sets in memory. For one thing, they raise the bar with what’s now referred to as Scalable Persistent Memory, with a total server capacity of 1TB. To be clear, NVDIMM-N is not used in this configuration. Instead, regular DIMMs are used, and they are persisted to flash via a power source (this was also the case for NVDIMM-N, but the flash, DIMM, and power source were all located on the NVDIMM-N itself).

In this video, Bob Ward demonstrates a ~5x performance increase for the industry’s first “diskless” database, using an HPE Gen10 server, SUSE Linux, Scalable Persistent Memory, and columnstore (presumably on a “traditional/formerly on-disk” table, not a memory-optimized table, although that’s not specifically detailed in the video).

Brett Gibbs, Persistent Memory Category Manager for HP servers, states in this video that even databases that use In-Memory OLTP can benefit from Scalable Persistent Memory, because the time required to restart the database can be significantly reduced. He states that a 200GB memory-optimized database that took 20 minutes to restart on SAS drives took 45 seconds using Scalable Persistent Memory. However, no details are provided about the circumstances under which those results were obtained.

We are left to guess about the number of containers used, and the IOPS available from storage. It may be that in both cases they tested with a single container, which would be a worst practice. And if that’s correct, then to reduce database restart time, all you’d have to do is spread the containers across more volumes, to “parallelize” the streaming from storage to memory.

I’m assuming that the 45 seconds specified represents the amount of time required to get durable memory-optimized data from flash storage back into memory. If that’s correct, then the reduction of time required to restart the database has nothing to do with the Scalable Persistent Memory (other than memory speed), and everything to do with how fast flash storage can read data.

Licensing

The HP video also details how there might be a licensing benefit. It’s stated that if your workload requires 32 cores to perform well, and you reduce latency through the use of Scalable Persistent Memory, then you might be able to handle the same workload with fewer cores. I’d love to see independent test results of this.

In-Memory OLTP

If you are considering placing a database entirely in memory, and don’t want to be tied to a specific hardware vendor’s solution, In-Memory OLTP might be an option to consider.

This is an extremely vast topic that I’ve been interested in for quite a while, and I’ll summarize some of the potential benefits:

  • Maintaining referential integrity – Microsoft recommends keeping cold data in on-disk tables, and hot data in memory-optimized tables. But there’s just one problem with that: FOREIGN KEY constraints are not supported between on-disk and memory-optimized tables. Migrating all data to memory-optimized tables solves this specific issue.
  • Native compilation – native compilation can only be used against memory-optimized tables. If you can deal with the potential T-SQL surface area restrictions, migrating all data to memory-optimized tables might allow greater use of native compilation.
  • Single table structure – if you were to keep cold data on disk, and hot data in-memory, you would need to use two different table names, and perhaps reference them through a view. Migrating all data to memory-optimized tables solves this problem.
  • Unsupported isolation levels for cross-container transactions – it’s possible to reference both on-disk and memory-optimized tables in a single query, but memory-optimized tables only support a subset of the isolation levels that are available for on-disk tables, and some combinations are not supported (SNAPSHOT, for example).
  • Near-zero index maintenance – other than potentially reconfiguring the bucket count for HASH indexes, HASH and RANGE indexes don’t require any type of index maintenance. FILLFACTOR and fragmentation don’t exist for any of the indexes that are supported for memory-optimized tables (see the sketch after this list).
  • Very large memory-optimized database size – Windows Server 2016 supports 24TB of memory, and most of that could be assigned to In-Memory OLTP, if you are using Enterprise Edition. This is way beyond the capacity supported by the current line of HP servers using Scalable Persistent Memory.
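
To make the index-maintenance point concrete, here’s a minimal sketch of a durable memory-optimized table with a HASH index and a NONCLUSTERED (range) index – note that there’s no FILLFACTOR anywhere, and the only routine “maintenance” is rebuilding a HASH index with a new bucket count. All object names below are made up:

    CREATE TABLE dbo.OrderHot
    (
        OrderID     BIGINT    NOT NULL PRIMARY KEY NONCLUSTERED,    -- range index
        CustomerID  INT       NOT NULL
            INDEX ix_Customer HASH WITH (BUCKET_COUNT = 1048576),   -- hash index
        OrderDate   DATETIME2 NOT NULL,
        INDEX ix_OrderDate NONCLUSTERED (OrderDate)                 -- range index
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

    -- The only routine "index maintenance": resize a hash index's bucket count
    -- if the row count grows well beyond the original estimate.
    ALTER TABLE dbo.OrderHot
        ALTER INDEX ix_Customer REBUILD WITH (BUCKET_COUNT = 4194304);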

One extremely crucial point to make is that if you decide to migrate an entire database to In-Memory OLTP, then database recovery time must be rigorously tested. You will need to have enough containers spread across enough volumes to meet your RTO SLA.
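
In DDL terms, spreading containers across volumes simply means adding more files to the memory-optimized filegroup, each on a different volume. A hypothetical sketch (the database name, filegroup name, and paths are all made up):

    -- Each FILENAME is a folder, and each should live on a different volume.
    ALTER DATABASE SalesDB
        ADD FILE (NAME = SalesDB_imoltp_c3, FILENAME = 'G:\IMOLTP\SalesDB_imoltp_c3')
        TO FILEGROUP SalesDB_imoltp_fg;

    ALTER DATABASE SalesDB
        ADD FILE (NAME = SalesDB_imoltp_c4, FILENAME = 'H:\IMOLTP\SalesDB_imoltp_c4')
        TO FILEGROUP SalesDB_imoltp_fg;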


In-Memory OLTP Resources, Part 1: The Foundation

This multi-part blog post will cover various resource conditions that can affect memory-optimized workloads. We’ll first lay the foundation for what types of resources are required for In-Memory OLTP, and why.

The following topics will be covered:

  • causes of OOM (Out of Memory)
  • how files that persist durable memory-optimized data affect backup size
  • how memory is allocated, including resource pools, if running Enterprise Edition
  • potential effect on disk-based workloads (buffer pool pressure)
  • what happens when volumes that store durable memory-optimized data run out of free space
  • what you can and cannot do when a memory-optimized database runs out of resources
  • database restore/recovery
  • garbage collection (GC) for row versions and files (file merge)
  • BPE (buffer pool extension)

Like most everything in the database world, In-Memory OLTP requires the following resources:

  • storage
  • IOPS
  • memory
  • CPU

Let’s take storage first – why would a memory-optimized database require storage, what is it used for, and how much storage is required?

Why and What?

You’ll need more storage than you might expect, to hold both the files that persist your durable memory-optimized data and your backups.

How much storage? 

No one can answer that question exactly, as we’ll explain over the next few blog posts. However, Microsoft’s recommendation is to use 4x the size of your durable memory-optimized data as a starting point for storage capacity planning.
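
As a very rough sketch of how you might apply that guideline, the query below sums the in-memory footprint of durable (SCHEMA_AND_DATA) tables and multiplies by 4. Treat it only as a starting point:

    -- Rough starting point for storage capacity planning (run in the target database).
    SELECT
        SUM(ms.memory_used_by_table_kb) / 1048576.0     AS durable_data_gb,
        4 * SUM(ms.memory_used_by_table_kb) / 1048576.0 AS suggested_storage_gb
    FROM sys.dm_db_xtp_table_memory_stats AS ms
    JOIN sys.tables AS t
        ON t.object_id = ms.object_id
    WHERE t.durability_desc = 'SCHEMA_AND_DATA';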

Architecture

A memory-optimized database must have a special filegroup designated for memory-optimized data, known as a memory-optimized filegroup. This special filegroup is logically associated with one or more “containers”. What the heck is a “container”? Well, it’s just a fancy word for “folder”, nothing more, nothing less. But what is actually stored in those fancy folders?

Containers hold files known as “checkpoint file pairs”, which are also known as “data and delta files”, and these files persist durable memory-optimized data (in this blog post series, I’ll use the terms CFP and data/delta files interchangeably). You’ll note that the following image clearly states, in bold red letters, “NO MAXSIZE” and “STREAMING”. “NO MAXSIZE” means that you can’t specify how large these files will grow, nor can you specify how large the container that houses them can grow (unless you set a quota, but you should NOT do that). There’s also no way at the database level to control the size of anything having to do with In-Memory OLTP storage – you simply must have enough available free space for the data and delta files to grow.

This is the first potential resource issue for In-Memory OLTP: certain types of data modifications are no longer allowed if the volume your container resides upon runs out of free space. I’ll cover workload recovery from resource depletion in a future blog post.

“STREAMING” means that the data stored within these files is different than what’s stored in MDF/LDF/NDF files. Data files for disk-based tables store data rows on 8K pages, a group of which is known as an extent. Data for durable memory-optimized tables is not stored on pages or extents. Instead, memory-optimized data is written in a sequential, streaming fashion, like the FILESTREAM feature (it should be noted that you do not have to enable the FILESTREAM feature in order to use In-Memory OLTP, and that statement has been true since In-Memory OLTP was first released in SQL 2014).   

  [Image: Storage1 – the memory-optimized filegroup, its containers, and data/delta files, annotated “NO MAXSIZE” and “STREAMING”]

How do these data/delta files get populated? All that is durable in SQL Server is written to the transaction log, and memory-optimized tables are no exception. After first being written to the transaction log, a process known as “offline checkpoint” harvests changes related to memory-optimized tables, and persists those changes in the data/delta files. In SQL 2014, there was a single offline checkpoint thread, but as of SQL 2016, there are multiple offline checkpoint threads. 

[Image: Storage2 – the offline checkpoint process harvesting changes for memory-optimized tables from the transaction log into data/delta files]

Let’s create a sample database:
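
(The DDL below is a reconstruction based on the folder and container names that follow – the drive letter, file sizes, and filegroup name are my assumptions.)

    CREATE DATABASE OOM_DB
    ON PRIMARY
        (NAME = OOM_DB_data,   FILENAME = 'C:\InMemOOMTest\OOM_DB_data.mdf', SIZE = 512MB),
    FILEGROUP OOM_DB_inmem_fg CONTAINS MEMORY_OPTIMIZED_DATA
        (NAME = OOM_DB_inmem1, FILENAME = 'C:\InMemOOMTest\OOM_DB_inmem1'),
        (NAME = OOM_DB_inmem2, FILENAME = 'C:\InMemOOMTest\OOM_DB_inmem2')
    LOG ON
        (NAME = OOM_DB_log,    FILENAME = 'C:\InMemOOMTest\OOM_DB_log.ldf', SIZE = 1GB);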

After creating the database, the InMemOOMTest folder looks like this:

[Image: the InMemOOMTest folder, showing the OOM_DB_inmem1 and OOM_DB_inmem2 container folders]

OOM_DB_inmem1 and OOM_DB_inmem2 are containers (folders), and they’ll be used to hold checkpoint file pairs. You’ll note in the DDL listed above that, under the memory-optimized filegroup, each container has both a name and a filename entry. The name is the logical name of the container, while the filename is actually the container name, which represents the folder that gets created on disk. Initially there are no CFPs in the containers, but as soon as you create your first memory-optimized table, CFPs get created in both containers.

If we have a look in one of the containers, we can see files that have GUIDs as names, and are created with different sizes.

[Image: the contents of a container – files named with GUIDs, created at different sizes]

This is definitely not human-readable, but luckily, Microsoft has created a DMV to allow us to figure out what these files represent.
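
That DMV is sys.dm_db_xtp_checkpoint_files. A minimal query – using the SQL 2016 column names – that produces output like the screenshot below:

    -- Run in the context of the memory-optimized database.
    SELECT container_id,
           file_type_desc,     -- e.g. DATA or DELTA
           state_desc,         -- PRECREATED, UNDER CONSTRUCTION, ACTIVE, ...
           relative_file_path,
           file_size_in_bytes
    FROM sys.dm_db_xtp_checkpoint_files
    ORDER BY container_id, state_desc, file_type_desc;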

Below we can clearly see that there are different types of files, and that files can have different “states”, which is central to the discussion of the storage footprint for memory-optimized databases, and backups of those databases. There are different values for container_id – remember we said that a memory-optimized database can have one or more containers. Next, we should pay attention to the fact that all entries for the “relative_file_path” column begin with “$HKv2\”. This means that in each container, we have a folder with the name “$HKv2”, and all data/delta files for that container are located there.

[Image: sys.dm_db_xtp_checkpoint_files output, showing different file types and states, multiple container_id values, and relative_file_path values beginning with “$HKv2\”]

At this point, it’s time for a discussion of the various file states. I’ll stick to SQL 2016+ (because SQL 2014 had more file states).

The possible file states are:

  • PRECREATED
  • UNDER CONSTRUCTION
  • ACTIVE
  • MERGE TARGET
  • WAITING FOR LOG TRUNCATION

We’ll discuss the first three now, and save MERGE TARGET and WAITING FOR LOG TRUNCATION for later.

PRECREATED: as a performance optimization technique, the In-Memory engine will “precreate” files. These precreated files have nothing in them – they are completely empty, from a durable data perspective. A file in this state cannot yet be populated.

UNDER CONSTRUCTION: when the engine starts adding data to a file, the state of the file changes from PRECREATED to UNDER CONSTRUCTION. Data and delta files are shared by all durable memory-optimized tables, so it’s entirely possible that the first entry is for TableA, the next entry for TableB, and so on. “UNDER CONSTRUCTION” could be interpreted as “able to be populated”.

ACTIVE: when a file that was previously UNDER CONSTRUCTION gets closed, the state transitions to ACTIVE. That means it has entries in it, but it is no longer able to be populated. What causes a file to be closed? The CHECKPOINT process will close the checkpoint, changing all UNDER CONSTRUCTION files to ACTIVE.
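
A simple way to observe that transition – assuming the sample database and a durable memory-optimized table from earlier in this post – is to compare file states before and after a manual CHECKPOINT:

    -- Count files by state, force a checkpoint, then count again; files that were
    -- UNDER CONSTRUCTION should now show as ACTIVE.
    SELECT state_desc, COUNT(*) AS file_count
    FROM sys.dm_db_xtp_checkpoint_files
    GROUP BY state_desc;

    CHECKPOINT;

    SELECT state_desc, COUNT(*) AS file_count
    FROM sys.dm_db_xtp_checkpoint_files
    GROUP BY state_desc;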

That’s the basic rundown of the file states we need to know about at this point.

In Part 2, we’ll dive deeper into the impact of data/delta file states and the storage footprint for memory-optimized databases.