
In-Memory OLTP: Optimizing data load

Inserting large sets of data into memory-optimized tables might be required when initially migrating data from harddrive-based or memory-optimized tables in:

  • the same database
  • a separate database (not directly supported)

Some of the ways to load data into memory-optimized tables are:

  • SSIS
  • BULK INSERT
  • bcp
  • INSERT/SELECT

SELECT INTO is not supported for memory-optimized tables.

Harddrive-based tables

Let’s review the basic requirements to optimally load data to harddrive-based tables.

(slide: basic requirements for loading data into harddrive-based tables)

Recovery model: Most, if not all, OLTP databases run with the recovery model set to FULL. DBAs are taught from birth that the recovery model should be set to BULK_LOGGED when loading data, so that the transaction log doesn’t explode. The next transaction log backup will still include all the data that was loaded, but if you set the recovery model to BULK_LOGGED, you won’t require the extra storage to accommodate transaction log growth.

Itzik Ben-Gan wrote an excellent article on minimal logging here. It covers Trace Flag 610 and many other aspects of loading data into harddrive-based tables.

Indexes: For harddrive-based tables, we should have the minimum number of indexes in place or enabled, because all index modifications are fully logged, which slows down the data load (TF 610 changes this behavior). You’ll still have to rebuild/create those indexes afterwards, and that will be logged, but it’s often faster to do that than to load data with indexes in place, if for some reason TF 610 can’t be used.

Clustered indexes: For harddrive-based tables, we want to load the data sorted by the clustering key, so that we can eliminate any sorting.

Memory-optimized tables

Basic requirements to optimally load data to memory-optimized tables:

(slide: basic requirements for loading data into memory-optimized tables)

Most DBAs are surprised to learn that DML changes to memory-optimized tables are always fully logged, regardless of the database recovery model. For INSERT/UPDATE/DELETE on memory-optimized tables, there is no such thing as “minimally logged”.

In SQL Server 2016 we finally have the ability to use the ALTER TABLE command to change memory-optimized tables. Most ALTER TABLE operations are executed in parallel and have the benefit of being minimally logged.

I did the following to verify that index creation is indeed minimally logged (based on SQL 2016 RC3**):

  • Create a memory-optimized table and load 15 million rows
  • Execute BACKUP LOG and CHECKPOINT (a few times)
  • Execute SELECT COUNT(*) FROM fn_dblog(NULL, NULL), result is 30 rows
  • ALTER TABLE/ADD NOT NULL column: 7 seconds
  • Execute SELECT COUNT(*) FROM fn_dblog(NULL, NULL), result is 308 rows
  • Execute BACKUP LOG and CHECKPOINT (a few times)
  • Execute SELECT COUNT(*) FROM fn_dblog(NULL, NULL), result is 35 rows
  • ALTER TABLE ADD INDEX: 13 seconds
  • Execute SELECT COUNT(*) FROM fn_dblog(NULL, NULL), result is 118 rows
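For reference, the pattern I used looks roughly like the following. This is a sketch only – the database name (InMemDemo), table name (dbo.MyInMemTable), column name, and backup path are placeholders, not the actual test objects:

-- flush the log so fn_dblog starts from a small baseline
BACKUP LOG InMemDemo TO DISK = N'G:\Backup\InMemDemo_log.trn';
CHECKPOINT;
SELECT COUNT(*) FROM fn_dblog(NULL, NULL);   -- baseline log record count

ALTER TABLE dbo.MyInMemTable ADD INDEX IX_Column1 NONCLUSTERED (Column1);

SELECT COUNT(*) FROM fn_dblog(NULL, NULL);   -- compare against the baseline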

**If an index column is currently off-row, creating an index that references this column causes the column to be moved in-row. If the index is dropped, the column is again moved off-row. In both of these scenarios, ALTER TABLE is fully logged and single-threaded.

Then I executed a command that is definitely not minimally logged:

  • ALTER TABLE/ADD NOT NULL nvarchar(max) column: 6 minutes, 52 seconds
  • Execute SELECT COUNT(*) FROM fn_dblog(NULL, NULL), result is 210,280 rows

So from a logging perspective, it probably doesn’t make a lot of difference if non-clustered indexes are in place when data is loaded to memory-optimized tables. But concurrency will definitely suffer when creating indexes with ALTER TABLE/ADD INDEX, as the table is offline for the entire duration of any ALTER command. That might be somewhat mitigated by the fact that you can now create multiple indexes, constraints, etc., with a single ALTER TABLE statement:

ALTER TABLE dbo.MyInMemTable ADD INDEX IX_Column1 (Column1), INDEX IX_Column2 (Column2)

“Clustered” indexes

Sadly, using the label “clustered” to describe any index on memory-optimized tables will confuse many people. For harddrive-based tables, a clustered index determines the physical order of data pages on disk, and clustered indexes for harddrive-based tables are the primary source of data – they are in fact the actual data for the table.

With regard to how data for memory-optimized tables is stored in memory, it’s not possible to have any form of ordering. Yes, you can create a “clustered” index on a memory-optimized table, but it is not the primary source of data for that table. The primary source of data is still the memory-optimized table in memory.

Loading

You should determine a way to break up the data loading process so that multiple clients can be executed in parallel. By client I mean SSMS, Powershell, SQLCMD, etc. This is no different than the approach you would take for loading data to harddrive-based tables.

When reviewing the following chart, remember that natively compiled stored procedures won’t work for any scenario that includes both harddrive-based and memory-optimized tables.

Source: harddrive-based, same db
Method: INSERT/SELECT
Notes: Supported, but excruciatingly painful with large data sets (single INSERT/SELECT statement), even if using a HASH index with the bucket count properly configured. I succeeded in locking up my server several times with this approach.

Source: harddrive-based, different db
Method: INSERT/SELECT
Notes: Not supported. You can use tempdb to stage the data, i.e. SELECT INTO ##temptable, then process the data with multiple clients.

Source: harddrive-based, files
Method: bcp out / bcp in
Notes: Supported.

Source: harddrive-based, different db
Method: indexed memory-optimized table variable
Notes: Supported, but not “transactional”. Modifications to rows in a memory-optimized table variable create row versions (see note below).

BULK INSERT is also supported, with the same restrictions as INSERT/SELECT (can’t go cross-database).

Different Source and Destination databases

a. If you are copying data between databases, i.e. Database A is the source for harddrive-based data you want to migrate, and Database B is the destination for memory-optimized data, you can’t use INSERT/SELECT. That’s because if there is a memory-optimized table as the source or destination of the INSERT/SELECT, you’ll be going “cross-database”, and that’s not allowed. You’ll either need to copy harddrive-based data to a global table (##) in TempDB, to an external file and then use BCP, or to a memory-optimized table variable (further explanation below).

b. Next, you’ll have to get the data into the memory-optimized tables. If using a ##TempTable, you can use stored procedures to process distinct key value ranges, allowing the procedures to be executed in parallel (see the sketch at the end of this section). For performance reasons, before calling these stored procedures, you’ll need to create an index on the primary key of the ##TempTable. If using stored procedures, you should determine the optimal batch size for your server/storage (see chart at the end of this post for my results using this method).

c. Natively compiled stored procedures won’t work in this scenario, because you can’t reference disk-based tables or TempDB from natively compiled stored procedures.

d. Instead of using a ##TempTable, it’s possible to insert data into an indexed memory-optimized table variable from the source database, and then use INSERT/SELECT from that variable into the destination database. That would solve the issue of making a second copy on disk, but be careful if you need to transform the data in the memory-optimized table variables, because updating data in memory-optimized table variables creates row versions, which will consume memory. That’s in addition to the memory required for the memory-optimized table variable itself.

e. Garbage collection is a process that frees memory consumed by row versions, which were created as a result of changes to data in memory-optimized tables. Unfortunately, the garbage collection process does not free up memory consumed by memory-optimized table variables – those row versions will consume additional memory (until the memory-optimized table variable goes out of scope).

In order to use a natively compiled stored procedure for copying data from one table to another, the source and destination tables must both be memory-optimized, and both must reside in the same database.
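To make the ##TempTable approach described above concrete, here is a rough sketch. All object names, and the assumption of an INT IDENTITY primary key, are illustrative only:

-- in the source (harddrive-based) database: stage the data where both databases can see it
SELECT *
INTO ##SourceStaging
FROM SourceDB.dbo.SourceTable;

CREATE UNIQUE CLUSTERED INDEX IX_PK ON ##SourceStaging (PrimaryKey);

-- in the destination (memory-optimized) database: an interop procedure that loads one key range,
-- so multiple clients can call it in parallel with different ranges
CREATE PROCEDURE dbo.LoadBatch @From INT, @To INT
AS
BEGIN
    SET NOCOUNT ON;
    SET IDENTITY_INSERT dbo.MyInMemTable ON;

    INSERT dbo.MyInMemTable (PrimaryKey, Column1)
    SELECT PrimaryKey, Column1
    FROM ##SourceStaging
    WHERE PrimaryKey BETWEEN @From AND @To;

    SET IDENTITY_INSERT dbo.MyInMemTable OFF;
END;

-- e.g. from client 1: EXEC dbo.LoadBatch @From = 1, @To = 1000000;
-- from client 2:      EXEC dbo.LoadBatch @From = 1000001, @To = 2000000;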

Hardware/software used for testing

Software

  • Windows Server 2012 Datacenter
  • SQL 2016 RC3
  • sp_configure max memory: 51200 MB
  • Resource pool of 70%

Hardware

  • Make/model: custom built
  • Physical memory: 64GB
  • Memory stick: Samsung M386A4G40DM0 32GB x 2
  • Dual Intel Xeon E5-2630 v3 CPU
  • Transaction log on Intel 750 PCIe SSD
  • Checkpoint File Pairs on OWC Mercury Accelsior PCIe SSD

Testing details:

  • SELECT INTO ##TempTable was used to make the source data visible from within the memory-optimized database.
  • An index was created on the primary key for ##TempTable (INT IDENTITY). The “table on SSD” in the chart below was stored on the Intel 750 PCIe SSD
  • All inserts were done by calling an interpreted TSQL stored procedure which processed rows in batches, using “PrimaryKey BETWEEN val1 and val2”. No key generation was involved, because in the procedure, SET IDENTITY_INSERT was ON.
  • There was a single HASH index on the memory-optimized table, with BUCKET_COUNT set to 10 million, in order to handle the initial data set of 5 million rows. Increasing the BUCKET_COUNT TO 30 million did not make any appreciable difference in the final test (with three sessions loading 5 million rows each).
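The destination table might have looked roughly like this. This is a simplified sketch with placeholder column names; only the HASH primary key with BUCKET_COUNT = 10 million reflects the test description above:

CREATE TABLE dbo.MyInMemTable
(
    PrimaryKey INT IDENTITY NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 10000000),
    Column1 NVARCHAR(100) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);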

(chart: data load test results)

In-Memory OLTP relationship status: “it’s complicated”

Because partitioning is not supported for memory-optimized tables, Microsoft has posted workarounds here and here.

These workarounds describe how to use:

a. application-level partitioning

b. table partitioning for on-disk tables that contain cold data, in combination with memory-optimized tables for hot data.

Both of these workarounds maintain separate tables with identical schema. The first workaround would not require app changes, but the second workaround would require changes in order to know which table to insert/update/delete rows in. Technologists are not crazy about changing existing applications.

Even if we accept that these are viable solutions for existing applications, there are other potential problems with using either of these approaches.

Parent/Child issues

An OLTP database schema is usually highly normalized, with lots of parent/child relationships, and those relationships are usually enforced with PRIMARY KEY and FOREIGN KEY constraints. SQL 2016 allows us to implement PK/FK constraints for memory-optimized tables, but only if all participating tables are memory-optimized.

That leads us to an interesting problem:

How can we enforce PK and FK relationships if a database contains both disk-based and memory-optimized tables, when each table requires the same validation?

Sample scenario

In a simplified scenario, let’s say we have the following tables:

Parent table: memory-optimized, States_InMem

Child table 1: memory-optimized, contains hot data, Addresses_InMem

Child table 2: disk-based, contains cold data, Addresses_OnDisk

We must satisfy at least three conditions:

a. Condition 1: an insert/update on the memory-optimized child table must validate StateID

b. Condition 2: an insert/update on the disk-based child table must validate StateID

c. Condition 3: deleting a row from the parent table must not create orphaned child records

Example 1:

Condition 1

Assume Addresses_InMem has a column named StateID that references States_InMem.StateID.

If we create the States_InMem table as memory-optimized, the Addresses_InMem table can define a FOREIGN KEY that references it. Condition 1 is satisfied.
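A minimal sketch of that arrangement (column definitions are illustrative):

CREATE TABLE dbo.States_InMem
(
    StateID   INT NOT NULL PRIMARY KEY NONCLUSTERED,
    StateName NVARCHAR(100) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

CREATE TABLE dbo.Addresses_InMem
(
    AddressID INT IDENTITY NOT NULL PRIMARY KEY NONCLUSTERED,
    StateID   INT NOT NULL,
    INDEX IX_StateID NONCLUSTERED (StateID),
    CONSTRAINT FK_Addresses_States
        FOREIGN KEY (StateID) REFERENCES dbo.States_InMem (StateID)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);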

Condition 2

The disk-based Addresses_OnDisk table can use a trigger to validate the StateID for inserts or updates. Condition 2 is satisfied.
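One way such a trigger could look (a sketch only; the SNAPSHOT hint is there because the trigger body is a cross-container transaction that reads a memory-optimized table):

CREATE TRIGGER dbo.trg_Addresses_OnDisk_ValidateState
ON dbo.Addresses_OnDisk
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- reject the change if any inserted/updated row references a StateID
    -- that does not exist in the memory-optimized parent table
    IF EXISTS
    (
        SELECT 1
        FROM inserted AS i
        WHERE NOT EXISTS
        (
            SELECT 1
            FROM dbo.States_InMem AS s WITH (SNAPSHOT)
            WHERE s.StateID = i.StateID
        )
    )
    BEGIN
        RAISERROR('Invalid StateID', 16, 1);
        ROLLBACK TRANSACTION;
    END;
END;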

Condition 3

If we want to delete a record from the memory-optimized Parent table (States_InMem), the FK from memory-optimized Addresses_InMem will prevent the delete if child records exist (assuming we don’t cascade).

Triggers on memory-optimized tables must be natively compiled, and that means they cannot reference disk-based tables. Therefore, when you want to delete a record from the memory-optimized parent table, triggers cannot be used to enforce referential integrity to the disk-based child table.

Without a trigger or a parent/child relationship enforced at the database level, it will be possible to delete a record from States_InMem that references Addresses_OnDisk, thereby creating an orphaned child record. Condition 3 is NOT satisfied.

This “memory-optimized triggers cannot reference disk-based tables” issue also prevents the parent table from being disk-based (described next).

Example 2:

Parent table: disk-based, States_OnDisk

Child table 1: Hot data in memory-optimized table, Addresses_InMem

Child table 2: Cold data in disk-based table, Addresses_Disk

We can only define PK/FK between memory-optimized tables, so that won’t work for validating Addresses_InMem.StateID

As just described, we cannot use triggers on Addresses_InMem to enforce referential integrity, because triggers on memory-optimized tables must be natively compiled, and that means they cannot reference disk-based tables (States_OnDisk).

One solution might be to have all DML for this type of lookup table occur through interop stored procedures. But this has some drawbacks:

1. if a stored procedure must access both disk-based and memory-optimized tables, it cannot be natively compiled

2. Without PRIMARY and FOREIGN KEY rules enforced at the database engine level, invalid data can be introduced

Ideally we would like to have only a single copy of the parent table that can be referenced from either disk-based or memory-optimized child tables.

Separate “lookup” database

You might think that you can simply put reference tables in a separate database, but this approach won’t work, because memory-optimized tables don’t support cross-database queries. Also, the example of the States lookup table is overly simplified – it’s a single table that is a parent to child tables, but itself has no parent.

What if the tables were not Addresses and States, but instead Orders and OrderDetails? Orders might have a parent record, which can also have a parent record, and so on. Even if it was possible to place referenced tables in a separate database, this complexity will likely prevent you from doing so.

Double entry

For small lookup tables with no “parent”, one potential solution would be to store the reference data twice (on disk and in memory). In this scenario you would modify only the disk-based table, and use triggers on the disk-based table to keep the memory-optimized lookup table in sync.
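A rough sketch of such a synchronization trigger (illustrative names; delete-then-reinsert is just one simple way to mirror the changes, and the SNAPSHOT hints are required because this is a cross-container transaction):

CREATE TRIGGER dbo.trg_States_OnDisk_Sync
ON dbo.States_OnDisk
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- remove the old versions of any deleted or updated rows from the in-memory copy
    DELETE s
    FROM dbo.States_InMem AS s WITH (SNAPSHOT)
    WHERE s.StateID IN (SELECT StateID FROM deleted);

    -- add the current versions of any inserted or updated rows
    INSERT dbo.States_InMem WITH (SNAPSHOT) (StateID, StateName)
    SELECT StateID, StateName
    FROM inserted;
END;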

Entire table in memory

Of course if you put entire tables in memory (a single table that holds both hot and cold data), all of these problems go away. Depending on the complexity of the data model, this solution might work. However, placing both hot and cold data in memory will affect recovery time, and therefore RTO (see my other blog post on recovery for databases with memory-optimized data here).

All data in memory

You could also put your entire database in memory, but In-Memory OLTP isn’t designed for this. Its purpose is to locate tables with the highest activity to memory (or a subset of data for those hot tables). Putting your entire database in memory has even more impact on RTO than placing hot/cold data for a few tables in memory.

Also, cold data won’t benefit from most of what In-Memory OLTP has to offer, as by definition cold data rarely changes. However, there will likely be some benefit from querying data that resides solely in memory-optimized tables (no latching/locking).

Temporal

If your data is temporal in nature, it’s possible to use the new Temporal table feature of SQL 2016 to solve some of the issues discussed. It would work only for memory-optimized tables that are reference tables, like the States table.

You could define both the memory-optimized reference table and your memory-optimized referencing tables to be temporal, and that way the history of both over time is captured. At a given point in time, an Addresses record referenced a specific version of the States record (this will also work for disk-based tables, but the subject of this blog post is how In-Memory OLTP can be used to handle hot/cold data).

It’s recommended to use a clustered columnstore index on the history table to minimize the storage footprint and maximize query performance. Partitioning of the history table is also supported.
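A sketch of what a system-versioned version of the States_InMem reference table might look like. Treat this as an outline under assumptions (object names, column definitions, and the pre-created history table are all illustrative):

-- the history table is disk-based; a clustered columnstore index keeps its footprint small
CREATE TABLE dbo.States_History
(
    StateID   INT NOT NULL,
    StateName NVARCHAR(100) NOT NULL,
    ValidFrom DATETIME2 NOT NULL,
    ValidTo   DATETIME2 NOT NULL,
    INDEX CCI_States_History CLUSTERED COLUMNSTORE
);

CREATE TABLE dbo.States_InMem
(
    StateID   INT NOT NULL PRIMARY KEY NONCLUSTERED,
    StateName NVARCHAR(100) NOT NULL,
    ValidFrom DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo   DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH
(
    MEMORY_OPTIMIZED = ON,
    DURABILITY = SCHEMA_AND_DATA,
    SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.States_History)
);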

Archival data

If due to regulatory requirements multiple years of data must be retained, then you could create a view that encompassed both archival and hot data in memory-optimized temporal tables. And removing large amounts of data from the archival tables can easily be done with partitioning. But adding large amounts of data to the archival tables cannot be done seamlessly, because as mentioned earlier, partitioning is not supported for memory-optimized tables.

Down the road

With the current limitations on triggers, foreign keys, and partitioning for memory-optimized tables, enforcing referential integrity with a mix of hot and cold schemas/tables remains a challenge.

Row version lifecycle for In-Memory OLTP

    In this post we’re going to talk about a crucial element of the In-Memory database engine: the row version life cycle.

    We’ll cover:

    1. why row versions are part of the In-Memory engine
    2. which types of memory-optimized objects create row versions
    3. potential impact on production workloads of using row versioning
    4. and finally, we’ll talk about what happens to row versions after they’re no longer needed

    In a world without row versions – as was the case until SQL 2005 – due to the pessimistic nature of the SQL engine, readers and writers that tried to access the same row at the same time would block each other. This affected the scalability of workloads that had a large number of concurrent users, and/or with data that changed often.

    Creating row versions switches the concurrency model from pessimistic to optimistic, which resolves contention issues for readers and writers. This is achieved by using a process called Multi-Version-Concurrency-Control, which allows queries to see data as of a specific point in time – the view of the data is consistent, and this level of consistency is achieved by creating and referencing row versions.

Harddrive-based tables only have row versions created when specific database options are set, and those row versions are always stored in TempDB. For memory-optimized tables, however, row versions are stored in memory and are created under the following conditions, regardless of database settings:

    DML memory consumption:

    1. INSERT: a row version is created and consumes memory

    2. UPDATE: a row version is created, and consumes memory (logically a DELETE followed by an INSERT)

    3. DELETE: a row version is NOT created, and therefore no additional memory is consumed (the row is only logically deleted in the Delta file)

    Why must we be aware of row versions for memory-optimized tables? Because row versions affect the total amount of memory that’s used by the In-Memory engine, and so you need to allow for that as part of capacity planning.

    Let’s have a quick look at how row versioning works. On the following slide you can see that there are two processes that reference the same row – the row that has the pk value of 1.

    Before any data is changed, the value of col is 99.

(slide: two processes referencing the same row, pk = 1)

    A new row version is created each time a row is modified, but queries issued before the modification commits see a version of the row as it existed before the modification.

    Process 1 updates the value of col to 100, and row version A is created. Because this version is a copy of the row as it existed before the update, row version A has a col value of 99.

    Then Process 2 issues a SELECT. It can only see committed data, and since Process 1 has not yet committed, Process 2 sees row version A, which has a col value of 99, not the value of 100 from the UPDATE.

Next, Process 1 commits. At this point, the value of col in the database is 100, but it’s important to remember that row version A is still in use by the SELECT from Process 2, and that means that row version A cannot be discarded. Imagine this happening on a much larger scale, and think about the amount of memory all those row versions will consume. At the extreme end of this scenario, the In-Memory engine can actually run out of memory, and SQL Server itself can become unstable.
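Here’s a minimal way to observe this behavior yourself, assuming a memory-optimized table dbo.MyInMemTable with columns pk and col (names are illustrative):

-- Session 1: update the row but don't commit yet
BEGIN TRAN;
UPDATE dbo.MyInMemTable WITH (SNAPSHOT)
SET col = 100
WHERE pk = 1;

-- Session 2: runs while Session 1 is still open;
-- it sees row version A, i.e. col = 99, not the uncommitted value of 100
SELECT col
FROM dbo.MyInMemTable WITH (SNAPSHOT)
WHERE pk = 1;

-- Session 1: commit; version A can only be discarded once no query still references it
COMMIT;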

    Things to note:

  • Memory allocated to the In-Memory engine can never be paged out under any circumstance
  • Memory-optimized tables don’t support compression

    That’s why there must be a separate process to reclaim memory used by row versions after they’re no longer needed. A background process called Garbage Collection takes care of this, and it’s designed to allow the memory consumed by row versions to be deallocated, and therefore re-used.

    Garbage Collection is designed to be:

  • Non-blocking
  • Responsive
  • Cooperative
  • Scalable

The following slide shows various stages of memory allocation for an instance of SQL Server, and assumes that both disk-based and memory-optimized tables exist in the database. To avoid the performance penalty of doing physical IOs, data for harddrive-based tables should be cached in the buffer pool. But an ever-increasing footprint for the In-Memory engine puts pressure on the buffer pool, causing it to shrink. As a result, performance for harddrive-based tables can suffer from the ever-growing footprint of the In-Memory engine. In fact, the entire SQL Server instance can be impacted. 

(slide: stages of memory allocation for a SQL Server instance)

    We need to understand how Garbage Collection works, so that we can determine what might cause it to fail – or perform below expected levels.

    There are two types of objects that can hold rows in memory:

  • Memory-optimized tables
  • Memory-optimized table variables

Modifications to data in both types of objects will create row versions, and those row versions will of course consume memory. Unfortunately, row versions for memory-optimized table variables are not handled by the Garbage Collection process – the memory consumed by them is only released when the variable goes out of scope. If changes are made to memory-optimized table variables that affect many rows – especially if the table variable has a NONCLUSTERED index – a large amount of memory can be consumed by row versions (see Connect item here).
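For reference, a memory-optimized table variable is declared through a memory-optimized table type, roughly like this (type and column names are made up):

-- a memory-optimized table type is required in order to declare a memory-optimized table variable
CREATE TYPE dbo.StagingRowType AS TABLE
(
    RowID  INT NOT NULL INDEX IX_RowID NONCLUSTERED,
    Amount DECIMAL(10, 2) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON);
GO

DECLARE @staging dbo.StagingRowType;

INSERT @staging (RowID, Amount) VALUES (1, 10.00);

-- this UPDATE creates a row version that Garbage Collection cannot reclaim;
-- the memory is only released when @staging goes out of scope
UPDATE @staging SET Amount = 20.00 WHERE RowID = 1;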

The Garbage Collection process

    By default, the main garbage collection thread wakes up once every minute, but this frequency changes with the number of completed transactions.

    Garbage Collection occurs in two phases:

  • Unlinking rows from all relevant indexes
  • Deallocating rows from memory

1. Unlinking rows from all relevant indexes

Before: Index references stale row versions

(slide: index referencing stale row versions, before unlinking)

After: Index no longer references stale row versions. As part of user activity, indexes are scanned for rows that qualify for garbage collection, so stale row versions are easily identified if they reside in an active index range. But if an index range has low activity, a separate process is required to identify stale row versions. That process is called a “dusty corner” sweep – and it has to do much more work than the user-activity processes to identify stale rows. This can affect the performance of Garbage Collection, and allow the footprint of the In-Memory engine to grow.

(slide: index after stale row versions have been unlinked)

2. Deallocating rows from memory

Each CPU scheduler has a garbage collection queue, and the main garbage collection thread places items on those queues. There is one scheduler for each queue, and after a user transaction commits, it selects all queued items on the scheduler it ran on, and deallocates memory for those items. If there are no items in the queue on its scheduler, the user transaction will search on any queue in the current NUMA node that’s not empty.

(slide: garbage collection queues per CPU scheduler)

If transaction activity is low and there’s memory pressure, the main garbage-collection thread can deallocate rows from any queue.

    So the two triggers for Garbage Collection are memory pressure and/or transactional activity. Conversely, that means if there’s no memory pressure – or transactional activity is low – it’s perfectly reasonable to have row versions that aren’t garbage collected. There’s also no way to force garbage collection to occur.

    Monitoring memory usage per table

We can use the sys.dm_db_xtp_table_memory_stats DMV to see how much memory is in use by a memory-optimized table. Row versions exist as rows in the table, so the memory_used_by_table_kb column represents the total amount of memory in use by the table, including the amount consumed by row versions. There’s no way to see the amount of memory consumed by row versions alone, at either the table or database level.

    SELECT CONVERT(CHAR(20), OBJECT_NAME(object_id)) 
          ,* 
    FROM sys.dm_db_xtp_table_memory_stats 

(screenshot: sys.dm_db_xtp_table_memory_stats output)

    Monitoring the Garbage Collection process

    To verify the current state of garbage collection, we can look at the output from the sys.dm_xtp_gc_queue_stats DMV. The output contains one row for each logical CPU on the server.

    SELECT * 
    FROM sys.dm_xtp_gc_queue_stats
    
    

(screenshot: sys.dm_xtp_gc_queue_stats output)

        If Garbage Collection is operational, we’ll see that there are non-zero values in the current_queue_depth column, and those values change every time we select from the queue stats DMV. If entries in the current_queue_depth column are not being processed or if no new items are being added to current_queue_depth for some of the queues, it means that garbage collection is not actively reclaiming memory, and as stated before, that might be ok, depending on memory pressure and/or transactional activity.

        Also remember that if we were modifying rows in a memory-optimized table variable, Garbage Collection could not have cleaned up any row versions.

        Blocking Garbage Collection

        The only thing that can prevent Garbage Collection from being operational is a long running transaction. That’s because long running transactions can create long chains of row versions, and they can’t be cleaned up until all of the queries that reference them have completed – Garbage Collection will simply have to wait.

        So – if you expect Garbage Collection to be active, and it’s not, the first thing you should check is if there are any long running transactions.

        Summing up

        Now you know about how the Garbage Collection process works for row versions, which types of memory-optimized objects you expect it to work with, and how to determine if it’s operational. There’s also a completely separate Garbage Collection process for handling data/delta files, and I’ll cover that in a separate post.

         

      Backup and Recovery for SQL Server databases that contain durable memory-optimized data

With regard to backup and recovery, databases that contain durable memory-optimized tables are treated differently than databases that contain only disk-based tables. DBAs must be aware of the differences so that they don’t mistakenly affect production environments and impact SLAs.

      The following image describes files/filegroups for databases that contain durable memory-optimized data:

(diagram: files and filegroups for a database with durable memory-optimized data)

Data/delta files are required so that memory-optimized tables can be durable, and they reside in containers, which are a special type of folder. Containers can reside on different drives (more about why you’d want to do that in a bit).
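For example, a database with two containers on separate drives might be created like this (database name and file paths are hypothetical):

CREATE DATABASE InMemDemo
ON PRIMARY
    (NAME = InMemDemo_data, FILENAME = N'D:\Data\InMemDemo.mdf'),
FILEGROUP InMemDemo_mod CONTAINS MEMORY_OPTIMIZED_DATA
    (NAME = InMemDemo_mod1, FILENAME = N'E:\IMOLTP\InMemDemo_mod1'),   -- container on drive E
    (NAME = InMemDemo_mod2, FILENAME = N'F:\IMOLTP\InMemDemo_mod2')    -- container on drive F
LOG ON
    (NAME = InMemDemo_log, FILENAME = N'L:\Log\InMemDemo_log.ldf');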

      Database recovery occurs due to the following events:

      • Database RESTORE
      • Database OFFLINE/ONLINE
      • Restart of SQL Server service
      • Server boot
      • Failover, including
          • FCI
          • Availability Groups*
          • Log Shipping
          • Database mirroring

      The first thing to be aware of is that having durable memory-optimized data in a database can affect your Recovery Time Objective (RTO).

      Why?

      Because for each of the recovery events listed above, SQL Server must stream data from the data/delta files into memory as part of recovery.

      There’s no getting around the fact that if you have lots of durable memory-optimized data, even if you have multiple containers on different volumes, recovery can take a while. That’s especially true in SQL 2016 because Microsoft has raised the limit on the amount of memory-optimized data per database from 256GB to multiple TB (yes, terabytes, limited only by the OS). Imagine waiting for your multi-terabytes of data to stream into memory, and how that will impact your SLAs (when SQL Server streams data to memory, you’ll see a wait type of WAIT_XTP_RECOVERY).
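You can watch for this wait type with a simple check against the instance-level wait stats, along these lines:

SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = N'WAIT_XTP_RECOVERY';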

      *One exception to the impact that failover can have is when you use Availability Groups with a Secondary replica. In that specific scenario, the REDO process keeps memory-optimized tables up to date in memory on the Secondary, which greatly reduces failover time.

      Indexes for memory-optimized tables have no physical representation on disk. That means they must be created as part of database recovery, further extending the recovery timeline.

      CPU bound recovery

      The recovery process for memory-optimized data uses one thread per logical CPU, and each thread handles a set of data/delta files. That means that simply restoring a database can cause the server to be CPU bound, potentially affecting other databases on the server.

      During recovery, SQL Server workloads can be affected by increased CPU utilization due to:

      • low bucket count for hash indexes – this can lead to excessive collisions, causing inserts to be slower
      • nonclustered indexes – unlike static HASH indexes, the size of nonclustered indexes will grow as the data grows. This could be an issue when SQL Server must create those indexes upon recovery.
      • LOB columns – new in SQL 2016, SQL Server maintains a separate internal table for each LOB column. LOB usage is exposed through the sys.memory_optimized_tables_internal_attributes and sys.dm_db_xtp_memory_consumers views. LOB-related documentation for these views has not yet been released.

      You can see from the following output that SQL 2016 does indeed create a separate internal table per LOB column. The Items_nvarchar table has a single NVARCHAR(MAX) column. It will take additional time during the recovery phase to recreate these internal per-column tables.
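A query along these lines should list the internal tables for a given memory-optimized table (Items_nvarchar is the table from my test; treat the exact columns as an assumption, since the documentation for these views was incomplete at the time of writing):

SELECT OBJECT_NAME(object_id) AS table_name,
       type_desc,
       minor_id   -- for LOB consumers, this should correspond to the column_id of the LOB column
FROM sys.memory_optimized_tables_internal_attributes
WHERE object_id = OBJECT_ID(N'dbo.Items_nvarchar');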

(screenshot: internal tables created for LOB columns)

      Corruption

      Because they don’t have any physical representation on disk (except for durability, if you so choose), memory-optimized tables are completely ignored by both CHECKDB and CHECKTABLE. There is no allocation verification, or any of the myriad other benefits that come from running CHECKDB/CHECKTABLE on disk-based tables. So what is done to verify that everything is ok with your memory-optimized data?

      CHECKSUM of data/delta files

      When a write occurs to a file, a CHECKSUM for the block is calculated and stored with the block. During database backup, the CHECKSUM is calculated again and compared to the CHECKSUM value stored with the block. If the comparison fails, the backup fails (no backup file gets created).

      Restore/Recovery

      If a backup file contains durable memory-optimized data, there is currently no way to interrogate that backup file to determine how much memory is required to successfully restore.

      I did the following to test backup/recovery for a database that contained durable memory-optimized data:

      • Created a database with only one durable memory-optimized table
      • Generated an INSERT only workload (no merging of delta/delta files)
      • INSERTed rows until the size of the table in memory was 20GB
      • Created a full database backup
      • Executed RESTORE FILELISTONLY for that backup file
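The FILELISTONLY step is just the standard command against the backup file (the path is hypothetical):

RESTORE FILELISTONLY
FROM DISK = N'G:\Backup\InMemDemo_full.bak';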

      The following are the relevant columns from the FILELISTONLY output. Note the last row, the one that references the memory-optimized filegroup:

(screenshot: RESTORE FILELISTONLY output)

      There are several things to be aware of here:

      • The size of the memory-optimized data in the backup is 10GB larger than memory allocated for the table (the combined size of the data/delta files is 30GB, hence the extra 10GB)
      • The Type for the memory-optimized filegroup is ‘S’. Within backup files, Filestream, FileTable and In-Memory OLTP all have the same value for Type, which means that database backups that contain two or more types of streaming data don’t have a way to differentiate resource requirements for restoring. A reasonable naming convention should help with that.
      • It is not possible to determine how much memory is required to restore this database. Usually the amount of memory is about the same size as the data/delta storage footprint, but in this case the storage footprint was overestimated by 50%, perhaps due to file pre-creation. There should be a fix in SQL 2016 RC0 to reduce the size of pre-created data/delta files for initial data load. However, this does not help with determining memory requirements for a successful restore.

      Now let’s have a look at a slightly different scenario — imagine that you have a 1TB backup file, and that you are tasked with restoring it to a development server. The backup file is comprised of the following:

      • 900GB disk-based data
      • 100GB memory-optimized data

      The restore process will create all of the files that must reside on disk, including files for disk-based data (mdf/ndf/ldf) and files for durable memory-optimized data (data/delta files). The general steps that the restore process performs are:

      • Create files to hold disk-based data (size = 900GB, so this can take quite a while)
      • Create files for durable memory-optimized data (size = 100GB)
      • After all files are created, 100GB of durable memory-optimized data must be streamed from the data files into memory

      But what if the server you are restoring to only has 64GB of memory for the entire SQL Server instance? In that case, the process of streaming data to memory will fail when there is no more memory available to stream data. Wouldn’t it have been great to know that before you wasted precious time creating 1TB worth of files on disk?

      When you ask SQL Server to restore a database, it determines if there is enough free space to create the required files from the backup, and if there isn’t enough free space, the restore fails immediately. If you think that Microsoft should treat databases containing memory-optimized data the same way (fail immediately if there is not enough memory to restore), please vote for this Azure UserVoice item.