Monthly Archives: January 2018

PASS Summit 2017

What a difference a tweet makes

I’ve been involved with SQL Server longer than most folks, but was always a sort of lone wolf. In the early days, there was no community to speak of, and even when the community started to gain momentum, I shied away from it.

That changed after the 2015 Summit, when I made a conscious decision to become involved in the SQL Server community. I spoke at my first user group, began applying to SQL Saturdays, and last year, I submitted abstracts to speak at the 2017 Summit. I was not at all surprised that my abstracts were not selected, but after reviewing the schedule, I was a bit disappointed to see my favorite feature, In-Memory OLTP, somewhat under-represented.

So I took to twitter, and wrote this:

[Image: the tweet]

And so it began, my roundabout path to presenting at PASS Summit 2017.

Niko Neugebauer – who I had been in touch with for a while, but had never actually met in person – replied to my tweet, and a dialog ensued.

After a bit of back and forth with folks at PASS, a panel was formed specifically to discuss In-Memory OLTP, and I would be a member of that panel. It of course included Niko himself, who was a fantastic MC, as well as Bob Ward, Kevin Farlee, Tejas Shah, and Jos de Bruijn. Sunil Agarwal was also supposed to join us, but due to scheduling issues, couldn’t make it. Many of the Microsoft panel members were responsible for actually delivering the In-Memory OLTP feature, and to say that I was honored to be on a panel with them would be a great understatement. It was really thrilling – definitely the highlight of my presenting life!

Presenting takes you deep

Think you know a topic well? Presenting will prove that you don’t! The simple act of organizing your thoughts so that they can be imparted to others forces you to drill down into a topic in a way you never otherwise would. All of the facts about SQL Server are written in the documentation, but delivering those facts to a room full of people requires a variety of skills: creating slides, demos, and scripts, and anticipating questions.

It’s “Live”

My life as a jazz musician before I got into SQL Server included a lot of public performance, so I was pretty comfortable being out in front of a room full of people. There’s some common ground between jazz and presenting – they are both “live”, and anything can happen at any time. Projectors fail, other issues arise, and you have to find a way forward no matter what.

You

I encourage all readers of this post to submit abstracts for the 2018 PASS Summit – my experience is proof that you never know what will happen!

In-Memory OLTP Resources, Part 4: OOM, the most feared acronym in all of In-Memory OLTP

Earlier parts of this series can be found here:

Part 1: The Foundation

Part 2: Checkpoint File Pairs

Part 3: OOS (Out of Storage)

This post will cover memory requirements and usage, and what happens if you actually reach OOM, also known as “Out Of Memory”, a condition that strikes fear in the hearts of DBAs supporting memory-optimized databases. We’ll also cover CPU-bound conditions.

How memory is allocated to the In-Memory OLTP engine

At a high level, the memory that’s allocated to the In-Memory OLTP engine comes from the SQL Server ‘max memory’ setting, as does everything else within SQL Server. But beneath that level, we need to be aware of memory pools.

[Image: SQL Server memory pools]

The pool that can be used for allocating memory to the In-Memory OLTP engine depends on which edition you are running:

  1. if you are running Enterprise Edition, you can use Resource Governor to configure a Resource Pool. Memory-optimized databases can be bound to separate pools, or multiple databases can be bound to a single pool. If you don’t bind a memory-optimized database to a pool created with Resource Governor, then all memory allocations for In-Memory OLTP for that database come from the Default pool.
  2. if you are NOT running Enterprise Edition, all memory for In-Memory OLTP is allocated from the Default pool.

If the Default pool is used, then deploying the In-Memory OLTP feature can cause performance issues for on-disk workloads.

The following image shows that as we add rows to memory-optimized tables – and put pressure on the buffer pool – the buffer pool responds by shrinking, and that can affect disk-based workloads. If we then delete rows from memory-optimized tables, the buffer pool can expand. But what if we don’t delete rows from memory-optimized tables? Then the buffer pool will stay in its reduced state (or shrink even more), and that can cause problems due to buffer churn (continually having to do physical I/Os to retrieve pages from storage, for disk-based workloads).

[Image: buffer pool shrinking and expanding in response to memory-optimized table growth]
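If you want to watch this tug-of-war on your own system, one approach is to compare the buffer pool and In-Memory OLTP memory clerks over time. A minimal sketch (clerk names as they appear in sys.dm_os_memory_clerks):

SELECT [type],
       SUM(pages_kb) / 1024 AS memory_mb
FROM sys.dm_os_memory_clerks
WHERE [type] IN (N'MEMORYCLERK_SQLBUFFERPOOL', N'MEMORYCLERK_XTP')
GROUP BY [type];

Running this before and after loading memory-optimized tables should show the buffer pool clerk giving up memory as the XTP clerk grows.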

Astute readers will consider using Buffer Pool Extension (BPE), which is available in Standard Edition only. Yes, you could do that, but BPE retrieves a single 8KB page at a time, and can actually make performance worse. And in case you’re wondering, no, it’s not possible to compress memory-optimized data that’s stored in memory. Think Windows will actually page out any of the memory allocated to In-Memory OLTP? That’s simply not possible.

Resource Governor

If you are running Enterprise Edition, then this problem gets solved by creating a resource pool. Now, to be clear, that doesn’t mean you can’t run out of memory for memory-optimized objects. It only means that your In-Memory workload can’t affect the on-disk workload, unless of course you configure the resource pool incorrectly. I’ve got a blog post on how to monitor resource pools here.

Let’s create a resource pool, with an artificially low upper bound, and insert rows until we hit the limit.
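My script isn’t reproduced here, but the general shape of it looks something like the following sketch (pool and database names are hypothetical, and 5 percent is simply an arbitrarily low cap):

-- create a pool with an artificially low memory cap
CREATE RESOURCE POOL InMemPool
    WITH (MIN_MEMORY_PERCENT = 5, MAX_MEMORY_PERCENT = 5);
ALTER RESOURCE GOVERNOR RECONFIGURE;

-- bind the memory-optimized database to the pool
EXEC sp_xtp_bind_db_resource_pool N'InMemDB', N'InMemPool';

-- the binding only takes effect the next time the database is brought online
ALTER DATABASE InMemDB SET OFFLINE WITH ROLLBACK IMMEDIATE;
ALTER DATABASE InMemDB SET ONLINE;

-- then INSERT rows into a memory-optimized table in a loop until the pool is exhausted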

On my server, I was able to INSERT 305 rows before the pool ran out of memory and error 41805 was raised:

[Image: error 41805]

Causes of OOM

What can cause a memory-optimized database to run out of memory? It could be that resource consumption (memory) exceeded:

  • the relevant percentage of committed_target_kb from the sys.dm_os_sys_info DMV (explained in a moment)
  • MAX_MEMORY_PERCENT value of a Resource Pool that the database is bound to (if running Enterprise Edition and using Resource Governor)

or:

  • garbage collection is not operational (the purpose of GC is to reclaim memory consumed by stale row versions)
  • updates to memory-optimized table variables caused row versions to be created, and because GC does not operate on table variables, you ran out of memory (this applies to table variables that hold a very large number of rows)

The only thing that can prevent GC from working is a long-running transaction.
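If you suspect that GC is being held up, a quick check for long-running transactions might look like this (a sketch using the general-purpose transaction DMVs, nothing specific to In-Memory OLTP):

SELECT st.session_id,
       at.transaction_id,
       at.transaction_begin_time,
       DATEDIFF(MINUTE, at.transaction_begin_time, GETDATE()) AS minutes_open
FROM sys.dm_tran_active_transactions AS at
INNER JOIN sys.dm_tran_session_transactions AS st
        ON st.transaction_id = at.transaction_id
ORDER BY at.transaction_begin_time;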

committed_target_kb

We are supposed to base our estimate of how much memory is available for our memory-optimized databases on committed_target_kb from the sys.dm_os_sys_info DMV. Memory available for In-Memory OLTP is expressed as a percentage of committed_target_kb, based on total system memory, which is detailed here. Prior to SQL 2016/SP1, the In-Memory OLTP feature was only supported on Enterprise Edition, and the amount of memory allocated to SQL Server was limited only by what the operating system supported.

But in a post-SQL 2016/SP1 world, things are different, because the In-Memory OLTP feature is now supported on non-enterprise editions. This means that people will start deploying In-Memory OLTP on servers with a lot less memory than is possible with Enterprise, and therein lies a potential issue.

The problem is that committed_target_kb is a moving target. 

From the documentation:

Applies to: SQL Server 2012 through SQL Server 2017.
Represents the amount of memory, in kilobytes (KB), that can be consumed by SQL Server memory manager. The target amount is calculated using a variety of inputs like:
– the current state of the system including its load
– the memory requested by current processes
– the amount of memory installed on the computer
– configuration parameters
If committed_target_kb is larger than committed_kb, the memory manager will try to obtain additional memory. If committed_target_kb is smaller than committed_kb, the memory manager will try to shrink the amount of memory committed. The committed_target_kb always includes stolen and reserved memory.

Those parts about “the current state of the system including its load” and “the memory requested by current processes” concern me. If there is x amount of memory available on a server, and you check the value of committed_target_kb when the server is “at rest”, then under load there might in fact be much less memory available. I believe this is one of the main causes of OOM for memory-optimized workloads, especially when people do a POC on under-provisioned machines (like laptops).
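Because the value moves, it’s worth sampling it both at rest and under load. A minimal query:

SELECT committed_kb / 1024        AS committed_mb,
       committed_target_kb / 1024 AS committed_target_mb,
       physical_memory_kb / 1024  AS physical_memory_mb
FROM sys.dm_os_sys_info;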

Database restore and recovery

The process of recovering a database is different for databases with durable memory-optimized data.

Step 1: the backup file is read, and the various types of files are created: all MDF/NDF/LDF files, as well as data and delta files.

Step 2: data is copied from the backup into the files created in Step 1. If you restore a database WITH NORECOVERY, you have completed both Step 1 and Step 2.

Step 3: for databases with durable memory-optimized data, there is one additional step, and that’s to stream data from the Checkpoint File Pairs (data/delta files) back into memory.

It should be noted that if the backup contains both on-disk and memory-optimized tables, none of the on-disk data is available until all of the memory-optimized data has finished streaming. When restoring a backup – whether the database has memory-optimized data or not – the process short-circuits if there isn’t enough free space to create the files in Step 1. Unfortunately, no such validation of available memory is done for Step 3. That means you can spend a long time creating files on disk, then spend an additional lengthy amount of time streaming data to memory, only to find that you don’t have enough memory. If you think Microsoft should change this, please upvote my Connect item.

When data is streamed into memory, the wait type will be WAIT_XTP_RECOVERY.
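While the streaming is in progress, you can watch for that wait from another session. A minimal sketch using sys.dm_exec_requests (the command values shown are typical for restore and database startup, but not exhaustive):

SELECT r.session_id,
       r.command,
       r.wait_type,
       r.wait_time,
       r.percent_complete
FROM sys.dm_exec_requests AS r
WHERE r.wait_type = N'WAIT_XTP_RECOVERY'
   OR r.command IN (N'RESTORE DATABASE', N'DB STARTUP');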

The unwary DBA would logically think that the only time you can see WAIT_XTP_RECOVERY is when actually restoring a database with memory-optimized data, but unfortunately that’s not correct. The Microsoft documentation doesn’t list all of the possible “recovery events” that can cause restreaming, but through my own testing, I’ve come up with the following list:

setting a database:

  • OFFLINE
  • READ_ONLY when it was READ_WRITE
  • READ_WRITE when it was READ_ONLY

Also, setting Read Committed Snapshot Isolation ON or OFF will cause restreaming.

Additionally, the speed of restreaming is directly influenced by the number of volumes that you have created containers on, and the IOPS available from those volumes.

Potential solutions to OOM

  1. Open a DAC (Dedicated Admin Connection). Then delete rows, and/or move data from memory to disk.
  2. Increase system memory
  3. If Garbage Collection for row versions is not operational (due to long running transactions), clear up those long-running transactions so that GC can proceed

If you attempt to move data from memory-optimized tables to disk-based tables, e.g. using SELECT INTO, please note that it’s possible to create a schema for a memory-optimized table that can’t simply be migrated to disk.

For example, the following type of CREATE TABLE is perfectly legal for memory-optimized tables, but will fail for disk-based tables (and SELECT * INTO an on-disk table FROM the memory-optimized table fails for the same reason):
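The script itself isn’t reproduced here, but a table along these lines (hypothetical object and column names) illustrates the point, because the two fixed-length columns push the row size well past the 8,060-byte limit that still applies to disk-based tables:

CREATE TABLE dbo.InMemWideRow
(
    RowID INT IDENTITY NOT NULL PRIMARY KEY NONCLUSTERED,
    Col1  CHAR(5000)   NOT NULL,
    Col2  CHAR(5000)   NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

-- the equivalent disk-based CREATE TABLE (or SELECT * INTO a disk-based table)
-- fails, because the minimum row size exceeds 8,060 bytes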

The ability to create tables like this is detailed at this link, with the relevant section being:

“…you can have a memory-optimized table with a row size > 8060 bytes, even when no column in the table uses a LOB type. There is no run-time limitation on the size of rows or the data in individual columns; this is part of the table definition.”

What happens if you hit OOM

So how does hitting OOM affect workloads for memory-optimized databases?

SELECT, DELETE, and DROP still work, but of course INSERT and UPDATE will fail.

CPU bound

Last but not least, I wanted to touch on potential CPU issues for memory-optimized databases. Database recovery can be CPU bound under the following circumstances:

  • many indexes on large memory-optimized tables (2014, 2016)
  • too many LOB columns (2016+)
  • incorrect bucket count set for HASH indexes (2014, 2016, 2017)

The first item in this list, “many indexes on large memory-optimized tables (2014, 2016)” has supposedly been addressed in SQL 2017.

LOB columns are actually stored as separate memory-optimized tables, and as noted by Dmitri Korotkevitch (blog) in this post, can impact performance.

The “incorrect bucket count for HASH indexes” issue persists to this day. If the bucket count is too low, there will be many sets of key columns that hash to the same value, increasing the chain length, which has a terrible effect not only on performance in general, but on database recovery in particular.
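To see whether your bucket counts are adequate, sys.dm_db_xtp_hash_index_stats reports chain lengths per hash index. A minimal sketch, run in the context of the memory-optimized database:

SELECT OBJECT_NAME(his.object_id) AS table_name,
       i.name                     AS index_name,
       his.total_bucket_count,
       his.empty_bucket_count,
       his.avg_chain_length,
       his.max_chain_length
FROM sys.dm_db_xtp_hash_index_stats AS his
INNER JOIN sys.indexes AS i
        ON i.object_id = his.object_id
       AND i.index_id  = his.index_id;

Very few empty buckets combined with long average chains is the sign that a bucket count was set too low.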

Wrapping up

Hopefully this mini-series about resource consumption for memory-optimized workloads has given you a clear understanding of why Microsoft recommends the following:

  • 2x data set in memory for starting memory allocation (only for In-Memory, does not include memory for on-disk workload)
  • 3x workload IOPS from disks where containers are stored (handles operational workload plus read/write File Merge workload)
  • 4x durable memory-optimized data size for initial storage footprint

These are rough guides, but should be observed at first, and then tuned as required.

This concludes the series on resources issues for In-Memory OLTP.

In-Memory OLTP Resources, Part 3: OOS (Out of Storage)

Zero free space

This is a continuation of Part 1 and Part 2 of this blog post series, related to resource issues/requirements for memory-optimized databases.

In this post, we’ll continue with simulating what happens to a memory-optimized database when all volumes run out of free space.

In my lab, I’m running Windows Server 2012. Let’s use Powershell to install the File System Resource Manager, which will allow us to create a quota for the relevant folder:

Add-WindowsFeature -Name FS-Resource-Manager -IncludeManagementTools

After installing the Windows feature we can set the quota for the folder, but we shouldn’t enable it just yet, because first we have to verify the current size of the folder.

On my server, I created a quota of 1.5GB, and then enabled it.

Now let’s INSERT rows into the table, in batches of 1000, until we reach the limit (the INSERT script is listed in Part 2, I’m trying to keep this post from getting too long).

Once the quota has been reached, we receive the dreaded 41822 error – this is what you’ll see when all of the volumes where your containers reside run out of free space (if even one of the volumes has free space, your workload can still execute).

[Image: error 41822]

Just out of curiosity, we’ll verify how many rows actually got inserted. On my server, I’ve got 4,639 rows in that table, and the folder consumes 1.44GB. So theoretically, there was enough space on the drive to create more checkpoint files, but it seems as though the engine won’t just create what it can to fit in the available space. It’s more likely that the engine attempts to precreate a set of files, and it either succeeds or fails all at once, but I’ve not confirmed that.

I disabled the quota, executed a manual CHECKPOINT, and ran the diagnostic queries again:

[Image: output of the diagnostic queries after disabling the quota]

File Merge

Data files persist rows that reside in durable memory-optimized tables, and delta files store references to logically deleted rows. As more and more rows become logically deleted across different sets of CFPs, two things happen:

  1. the storage footprint increases (imagine that all data files have 50% of their rows logically deleted)
  2. query performance gets worse, because result sets must be filtered by entries in the delta files, which are increasing in size

Microsoft killed both of these birds with one stone: File Merge (aka Garbage Collection for data/delta files)

In the background – while your workload is running – the File Merge process attempts to combine adjacent sets of CFPs, and this is where we get to one of the file states that we didn’t cover in Part 1: MERGE TARGET

A file that has the fileType of MERGE TARGET is the new set of combined data/delta files from the File Merge process. Once the merge has completed, the MERGE TARGET transitions to ACTIVE, and as we stated earlier in this series, ACTIVE files can no longer be populated.

But what about the source files that the MERGE TARGET is derived from? After a CHECKPOINT, these files transition to WAITING FOR LOG TRUNCATION, and can be removed. It should be noted that it can take several checkpoints and transaction log backups for CFPs to transition to a state where they can actually be removed. That’s why Microsoft recommends 4x durable memory-optimized data size for the initial storage footprint.
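If you want to observe these state transitions yourself, sys.dm_db_xtp_checkpoint_files exposes the state of each checkpoint file. A minimal sketch, run in the memory-optimized database (column names as of SQL 2016):

SELECT file_type_desc,
       state_desc,
       COUNT(*)                          AS file_count,
       SUM(file_size_in_bytes) / 1048576 AS total_size_mb
FROM sys.dm_db_xtp_checkpoint_files
GROUP BY file_type_desc, state_desc
ORDER BY file_type_desc, state_desc;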

In the images that follow, we can see that the formerly distinct transaction ranges of 101 to 200, and 201 to 300, have been combined into a single CFP, which has the range of 101 to 300.

[Images: CFP transaction ranges 101 to 200 and 201 to 300, merged into a single CFP covering 101 to 300]

Effect on backup size

File Merge – and the requisite file state changes that CFPs must go through – explain why backups for memory-optimized databases can be considerably larger than the amount of data stored in memory. Until CFPs go through the required state changes, they must be included in backups.

IOPS

The File Merge process requires both storage and IOPS, as it reads from both sets of CFPs, and writes to a new set. Let’s say your workload requires 500 IOPS to perform well. We’ve just added another 1,000 IOPS as a requirement for your workload to maintain the same level of performance: 500 IOPS each for the read and write components of File Merge. That’s why Microsoft recommends 3x workload IOPS for your memory-optimized storage.

Potential remedies, real and imagined

What happens to your memory-optimized database when all volumes run out of free space?

In my testing of inserts that breached the quota for the folder, I saw no effect on database status. However, if I created the database, set the quota to a much lower value, and then created a memory-optimized table, the database status became SUSPECT. In a real-world situation, with hundreds of gigabytes or more of memory-optimized data, the last thing you want to do is a database restore just to return your database to a usable state.

I was able to set the database OFFLINE, and then ONLINE, and that cleared the SUSPECT status. But keep in mind that setting the database OFFLINE/ONLINE will restream all of your data, so database recovery will take that much longer.

What can you do if your volumes run out of free space?

Well, in SQL 2014, your database went into “SUSPENDED” mode (not SUSPECT), and it was offline, until perhaps you added more space and restarted the database (not sure, I didn’t test that). In SQL 2016+, the database goes into what’s known as “delete-only mode”, where you can still SELECT data, but modifying data is limited to deleting rows and/or dropping indexes/tables. Of course, SELECT, DELETE, and DROP do nothing to solve your problem: you need more free space.

When a database transitions to delete-only mode, that fact is written to the SQL errorlog:

[WARNING] Database ID: [9]. Checkpoint hit an error code 0x8300000a. Database is now in DeleteOnlyMode
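You can search for that message from T-SQL with the (undocumented, but widely used) xp_readerrorlog procedure. A sketch, where the first parameter is the errorlog file number and the second selects the SQL Server errorlog rather than the Agent log:

EXEC xp_readerrorlog 0, 1, N'DeleteOnlyMode';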

You might think that you can issue CHECKPOINT manually, and do transaction log backups, hoping that File Merge will kick in. Or you could manually execute File Merge, with this uber-long thing:

EXEC sys.sp_xtp_checkpoint_force_garbage_collection <dbname>

But keep in mind that if there was no additional free space on the volumes to precreate CFPs, then it’s not likely that there will be enough free space to write a new set of CFPs for DBA-initiated File Merge.

The only thing you can do to remedy this situation is to either free up some space on the existing volumes, or create a new container on a new volume that has free space.

In Part 4, we’ll discuss memory in the same ways we’ve discussed storage – how it’s allocated, and what happens to your memory-optimized workload when you run out of it.