
Memory-optimized table variable gotcha

In-Memory OLTP can increase performance for a variety of workloads. For example, if your workload creates many #TempTables, ##TempTables, or @TableVariables, they all have to be allocated in TempDB, and it’s possible that TempDB itself becomes a bottleneck. Some DBAs/Developers mistakenly believe that @TableVariables reside only in memory; that’s not true, as has been proven many times in blog posts like this and this, by Wayne Sheffield and Gail Shaw respectively.

Microsoft has described the ways in which temp tables and table variables can be replaced by memory-optimized objects here. It’s true that we can now have truly memory-resident temporary objects, and if your workload was bottlenecked by TempDB I/O or allocation contention (GAM/SGAM/PFS), using memory-optimized table variables can increase workload throughput. However, what’s not mentioned in that article is the impact of the type of index you choose for the table type: the wrong choice can double the memory consumed by the table variable. For large numbers of rows this can even result in an out-of-memory condition. This is particularly relevant if you are migrating a large number of rows from harddrive-based tables to memory-optimized tables and the source and destination databases are different.

Creating a memory-optimized table variable is a two-step process:

1. create a table type

2. create a variable of that type

Example (note that the PK column of the table type is defined as PRIMARY KEY NONCLUSTERED HASH):
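A minimal sketch of those two steps (the original post’s script isn’t reproduced here; object names are hypothetical, the bucket count matches the one described below, and the database needs a MEMORY_OPTIMIZED_DATA filegroup):

-- Step 1: create a memory-optimized table type with a HASH primary key
CREATE TYPE dbo.InMemType AS TABLE
(
    PK   INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 20000000),
    Col1 INT NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON);
GO

-- Step 2: declare a variable of that type and use it
DECLARE @InMemTV dbo.InMemType;
INSERT @InMemTV (PK, Col1) VALUES (1, 99);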

In the following script, the ’64K page pool’ memory consumer indicates the amount of memory allocated to memory-optimized table variables.
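That script isn’t reproduced here; a sketch that should surface the same information, assuming a query against sys.dm_db_xtp_memory_consumers (my assumption, the post doesn’t name the DMV), run in the database that owns the table type:

-- look for the page pool consumer; memory-optimized table variable
-- allocations are reported there
SELECT memory_consumer_type_desc,
       memory_consumer_desc,
       allocated_bytes / 1024 AS allocated_kb,
       used_bytes / 1024      AS used_kb
FROM sys.dm_db_xtp_memory_consumers
ORDER BY allocated_bytes DESC;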

The PK column of the table type is defined as PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 20000000)

If you instead define the PK column to use a RANGE index (non-HASH index), my testing has shown that memory allocation for the variable is almost exactly double that of using the HASH index.
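For comparison, a sketch of the same table type defined with a RANGE primary key (again, hypothetical names):

CREATE TYPE dbo.InMemTypeRange AS TABLE
(
    PK   INT NOT NULL PRIMARY KEY NONCLUSTERED,  -- RANGE (non-hash) index instead of HASH
    Col1 INT NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON);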

[Image: ’64K page pool’ allocation with the HASH index]

[Image: ’64K page pool’ allocation with the RANGE index, roughly double that of the HASH index]

Not related to index choice – but still significant – is that the memory allocated to memory-optimized table variables (and their row versions, if any) is not released until the variable goes out of scope. Garbage collection for row versions ignores memory-optimized table variables.

Updating all rows in the variable will create row versions, although at least in this case, the row versions did not consume a lot of additional memory. I blogged about row versions here.

If you think Microsoft should fix this bug with RANGE indexes on memory-optimized table variables, please upvote this Connect item.

In-Memory OLTP: Optimizing data load

Inserting large sets of data to memory-optimized tables might be required when initially migrating data from:

harddrive-based or memory-optimized tables in

  • the same database
  • a separate database (not directly supported)

Some of the ways to load data into memory-optimized tables are:

  • SSIS
  • BULK INSERT
  • bcp
  • INSERT/SELECT

SELECT INTO is not supported for memory-optimized tables.

Harddrive-based tables

Let’s review the basic requirements to optimally load data to harddrive-based tables.


Recovery model: Most if not all OLTP databases run with the recovery model set to FULL. DBAs are taught from birth that when loading data, the recovery model should be set to BULK_LOGGED so that the transaction log doesn’t explode. The next transaction log backup will still include all the data that was loaded, but with BULK_LOGGED in effect you won’t require the extra storage to accommodate transaction log growth.
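A typical load window might be wrapped like this sketch (database name and backup path are hypothetical; keep in mind that point-in-time restore isn’t possible within a log backup that contains minimally logged operations):

-- switch to BULK_LOGGED for the duration of the load
ALTER DATABASE SalesDB SET RECOVERY BULK_LOGGED;

-- ... perform the bulk load here ...

-- switch back to FULL and take a log backup to resume point-in-time recoverability
ALTER DATABASE SalesDB SET RECOVERY FULL;
BACKUP LOG SalesDB TO DISK = N'X:\SQLBackups\SalesDB_log.trn';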

Itzik Ben-Gan wrote an excellent article on minimal logging here. It covers Trace Flag 610 and many other aspects of loading data into harddrive-based tables.

Indexes: For harddrive-based tables, we should have the minimum number of indexes in place or enabled, because all index modifications are fully logged, which slows down the data load (TF 610 changes this behavior). You’ll still have to rebuild or create those indexes afterwards, and that will be logged, but it’s often faster to do that than to load data with the indexes in place, if for some reason TF 610 can’t be used.
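A sketch of that pattern, with hypothetical table and index names (disabling a nonclustered index drops its data; the REBUILD recreates and re-enables it):

-- disable nonclustered indexes before the load
ALTER INDEX IX_SalesOrder_CustomerId ON dbo.SalesOrder DISABLE;

-- ... load the data ...

-- rebuild (re-enable) the indexes afterwards
ALTER INDEX IX_SalesOrder_CustomerId ON dbo.SalesOrder REBUILD;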

Clustered indexes: For harddrive-based tables, we want to load the data sorted by the clustering key, so that we can eliminate any sorting.
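For example, if loading from a file that is already sorted by the clustering key, a BULK INSERT can declare that order so the engine can skip the sort (a sketch; table, column, and file names are hypothetical, and the file format options are left at their defaults):

BULK INSERT dbo.SalesOrder
FROM 'C:\Data\SalesOrder.dat'
WITH (
    ORDER (SalesOrderId ASC),  -- the file is pre-sorted by the clustering key
    TABLOCK
);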

Memory-optimized tables

Basic requirements to optimally load data to memory-optimized tables:

[Slide: basic requirements for optimally loading data to memory-optimized tables]

Most DBAs are surprised to learn that DML changes to memory-optimized tables are always fully logged, regardless of the database recovery model. For INSERT/UPDATE/DELETE on memory-optimized tables, there is no such thing as “minimally logged”.

In SQL Server 2016 we finally have the ability to use the ALTER TABLE command to change memory-optimized tables. Most ALTER TABLE operations are executed in parallel and have the benefit of being minimally logged.

I did the following to verify that index creation is indeed minimally logged (based on SQL 2016 RC3**):

  • Create a memory-optimized table and load 15 million rows
  • Execute BACKUP LOG and CHECKPOINT (a few times)
  • Execute SELECT COUNT(*) FROM fn_dblog(NULL, NULL), result is 30 rows
  • ALTER TABLE/ADD NOT NULL column: 7 seconds
  • Execute SELECT COUNT(*) FROM fn_dblog(NULL, NULL), result is 308 rows
  • Execute BACKUP LOG and CHECKPOINT (a few times)
  • Execute SELECT COUNT(*) FROM fn_dblog(NULL, NULL), result is 35 rows
  • ALTER TABLE ADD INDEX: 13 seconds
  • Execute SELECT COUNT(*) FROM fn_dblog(NULL, NULL), result is 118 rows

**If an index column is currently off-row, creating an index that references this column causes the column to be moved in-row. If the index is dropped, the column is again moved off-row. In both of these scenarios, ALTER TABLE is fully logged and single-threaded.

Then I executed a command that is definitely not minimally logged:

  • ALTER TABLE/ADD NOT NULL nvarchar(max) column: 6 minutes, 52 seconds
  • Execute SELECT COUNT(*) FROM fn_dblog(NULL, NULL), result is 210,280 rows
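A condensed sketch of that test (object names are hypothetical, and fn_dblog is an undocumented function, so treat it purely as an ad-hoc diagnostic):

-- clear out the log as much as possible to get a clean baseline
BACKUP LOG InMemDB TO DISK = N'X:\SQLBackups\InMemDB_log.trn';
CHECKPOINT;
SELECT COUNT(*) FROM fn_dblog(NULL, NULL);   -- baseline: a small number of log records

-- add a NOT NULL column (minimally logged, executed in parallel)
ALTER TABLE dbo.BigInMemTable
    ADD NewCol INT NOT NULL CONSTRAINT DF_BigInMemTable_NewCol DEFAULT (0);
SELECT COUNT(*) FROM fn_dblog(NULL, NULL);   -- still only a few hundred records, per the results above

-- add an index (also minimally logged)
ALTER TABLE dbo.BigInMemTable ADD INDEX IX_NewCol (NewCol);
SELECT COUNT(*) FROM fn_dblog(NULL, NULL);   -- again, only a small number of additional records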

So from a logging perspective, it probably doesn’t make a lot of difference if non-clustered indexes are in place when data is loaded to memory-optimized tables. But concurrency will definitely suffer when creating indexes with ALTER TABLE/ADD INDEX, as the table is offline for the entire duration of any ALTER commands. That might be somewhat mitigated by the fact that you can now create multiple indexes, constraints, etc, with a single ALTER TABLE statement:

ALTER TABLE dbo.MyInMemTable ADD INDEX IX_Column1 (Column1), INDEX IX_Column2 (Column2);

“Clustered” indexes

Sadly, using the label “clustered” to describe any index on memory-optimized tables will confuse many people. For harddrive-based tables, a clustered index determines the physical order of data pages on disk, and clustered indexes for harddrive-based tables are the primary source of data – they are in fact the actual data for the table.

Data for memory-optimized tables is not stored in memory in any particular order; there is no such thing as physical ordering for these tables. Yes, you can create a “clustered” index on a memory-optimized table, but it is not the primary source of data for that table. The primary source of data is still the memory-optimized table in memory.

Loading

You should determine a way to break up the data loading process so that multiple clients can be executed in parallel. By client I mean SSMS, Powershell, SQLCMD, etc. This is no different than the approach you would take for loading data to harddrive-based tables.

When reviewing the following chart, remember that natively compiled stored procedures won’t work for any scenario that includes both harddrive-based and memory-optimized tables.

Source: harddrive-based, same db
Method: INSERT/SELECT
Notes: Supported, but excruciatingly painful with large data sets (a single INSERT/SELECT statement), even if using a HASH index with the bucket count properly configured. I succeeded in locking up my server several times with this approach.

Source: harddrive-based, different db
Method: INSERT/SELECT
Notes: Not supported. You can use tempdb to stage the data, i.e. SELECT INTO ##temptable, then process the data with multiple clients.

Source: harddrive-based, files
Method: bcp out / bcp in
Notes: Supported.

Source: harddrive-based, different db
Method: indexed memory-optimized table variable
Notes: Supported, but not “transactional”. Modifications to rows in a memory-optimized table variable create row versions (see note below).

BULK INSERT is also supported, with the same restrictions as INSERT/SELECT (can’t go cross-database).

Different Source and Destination databases

a. If you are copying data between databases, i.e. Database A is the source for harddrive-based data you want to migrate, and Database B is the destination for memory-optimized data, you can’t use INSERT/SELECT. That’s because if there is a memory-optimized table as the source or destination of the INSERT/SELECT, you’ll be going “cross-database”, and that’s not allowed. You’ll either need to copy harddrive-based data to a global table (##) in TempDB, to an external file and then use BCP, or to a memory-optimized table variable (further explanation below).

b. Next, you’ll have to get the data into the memory-optimized tables. If using a ##TempTable, you can use stored procedures to process distinct key value ranges, allowing the procedures to be executed in parallel. For performance reasons, before calling these stored procedures, you’ll need to create an index on the primary key of the ##TempTable. If using stored procedures, you should determine the optimal batch size for your server/storage (see chart at the end of this post for my results using this method).
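A rough sketch of such a procedure (all names and the default batch size are hypothetical; each client session would call it with a different, non-overlapping key range):

CREATE PROCEDURE dbo.LoadKeyRange
    @RangeStart INT,
    @RangeEnd   INT,
    @BatchSize  INT = 50000
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @BatchStart INT = @RangeStart, @BatchEnd INT;

    -- keys are pre-assigned, so allow explicit values for the IDENTITY primary key
    -- (as in the test described at the end of this post)
    SET IDENTITY_INSERT dbo.MyInMemTable ON;

    WHILE @BatchStart <= @RangeEnd
    BEGIN
        SET @BatchEnd = CASE WHEN @BatchStart + @BatchSize - 1 > @RangeEnd
                             THEN @RangeEnd
                             ELSE @BatchStart + @BatchSize - 1 END;

        INSERT dbo.MyInMemTable (PrimaryKey, Col1)
        SELECT PrimaryKey, Col1
        FROM ##StagedRows
        WHERE PrimaryKey BETWEEN @BatchStart AND @BatchEnd;

        SET @BatchStart = @BatchEnd + 1;
    END;

    SET IDENTITY_INSERT dbo.MyInMemTable OFF;
END;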

c. Natively compiled stored procedures won’t work in this scenario, because you can’t reference disk-based tables or TempDB from natively compiled stored procedures.

d. Instead of using a ##TempTable, it’s possible to insert data into an indexed memory-optimized table variable from the source database, and then use INSERT/SELECT from that variable into the destination database. That would solve the issue of making a second copy on disk, but be careful if you need to transform the data in the memory-optimized table variables, because updating data in memory-optimized table variables creates row versions, which will consume memory. That’s in addition to the memory required for the memory-optimized table variable itself.
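A sketch of that technique (names are hypothetical; the memory-optimized table type must already exist in the destination database, and every row placed in the variable consumes memory until the variable goes out of scope):

USE DestinationDB;
GO
-- dbo.InMemStageType is assumed to be a memory-optimized table type in DestinationDB
DECLARE @staging dbo.InMemStageType;

-- table variables aren't transactional, so they can participate in a cross-database query
INSERT @staging (PrimaryKey, Col1)
SELECT PrimaryKey, Col1
FROM SourceDB.dbo.SourceTable;

-- then insert from the variable into the memory-optimized destination table
INSERT dbo.MyInMemTable (PrimaryKey, Col1)
SELECT PrimaryKey, Col1
FROM @staging;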

e. Garbage collection is a process that frees memory consumed by row versions, which were created as a result of changes to data in memory-optimized tables. Unfortunately, the garbage collection process does not free up memory consumed by memory-optimized table variables – those row versions will consume additional memory (until the memory-optimized table variable goes out of scope).

In order to use a natively compiled stored procedure for copying data from one table to another, the source and destination tables must both be memory-optimized, and both must reside in the same database.

Hardware/software used for testing

Software

  • Windows Server 2012 Datacenter
  • SQL 2016 RC3
  • sp_configure max memory: 51200 MB
  • Resource pool of 70%

Hardware

  • Make/model: custom built
  • Physical memory: 64GB
  • Memory stick: Samsung M386A4G40DM0 32GB x 2
  • Dual Intel Xeon E5-2630 v3 CPU
  • Transaction log on Intel 750 PCIe SSD
  • Checkpoint File Pairs on OWC Mercury Accelsior PCIe SSD

Testing details:

  • SELECT INTO ##TempTable was used to make the source data visible from within the memory-optimized database.
  • An index was created on the primary key for ##TempTable (INT IDENTITY). The “table on SSD” in the chart below was stored on the Intel 750 PCIe SSD
  • All inserts were done by calling an interpreted TSQL stored procedure which processed rows in batches, using “PrimaryKey BETWEEN val1 and val2”. No key generation was involved, because in the procedure, SET IDENTITY_INSERT was ON.
  • There was a single HASH index on the memory-optimized table, with BUCKET_COUNT set to 10 million, in order to handle the initial data set of 5 million rows. Increasing the BUCKET_COUNT TO 30 million did not make any appreciable difference in the final test (with three sessions loading 5 million rows each).

[Chart: data load timing results]

Row version lifecycle for In-Memory OLTP

    In this post we’re going to talk about a crucial element of the In-Memory database engine: the row version life cycle.

    We’ll cover:

    1. why row versions are part of the In-Memory engine
    2. which types of memory-optimized objects create row versions
    3. potential impact on production workloads of using row versioning
    4. and finally, we’ll talk about what happens to row versions after they’re no longer needed

    In a world without row versions – as was the case until SQL 2005 – due to the pessimistic nature of the SQL engine, readers and writers that tried to access the same row at the same time would block each other. This affected the scalability of workloads that had a large number of concurrent users, and/or with data that changed often.

    Creating row versions switches the concurrency model from pessimistic to optimistic, which resolves contention issues for readers and writers. This is achieved through Multi-Version Concurrency Control (MVCC), which allows queries to see data as of a specific point in time – the view of the data is consistent, and this level of consistency is achieved by creating and referencing row versions.

    Harddrive-based tables only have row versions created when specific database options are set, and those row versions are always stored in TempDB. For memory-optimized tables, however, row versions are stored in memory and are created under the following conditions, regardless of database settings:

    DML memory consumption:

    1. INSERT: a row version is created and consumes memory

    2. UPDATE: a row version is created, and consumes memory (logically a DELETE followed by an INSERT)

    3. DELETE: a row version is NOT created, and therefore no additional memory is consumed (the row is only logically deleted in the Delta file)

    Why must we be aware of row versions for memory-optimized tables? Because row versions affect the total amount of memory that’s used by the In-Memory engine, and so you need to allow for that as part of capacity planning.
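A quick way to see the effect (a sketch with hypothetical names; the exact numbers depend on row size and on when garbage collection runs):

-- memory used by the table before the update
SELECT memory_used_by_table_kb
FROM sys.dm_db_xtp_table_memory_stats
WHERE object_id = OBJECT_ID(N'dbo.MyInMemTable');

-- updating every row creates one row version per row
UPDATE dbo.MyInMemTable SET Col1 = Col1 + 1;

-- memory used afterwards includes those row versions until they are garbage collected
SELECT memory_used_by_table_kb
FROM sys.dm_db_xtp_table_memory_stats
WHERE object_id = OBJECT_ID(N'dbo.MyInMemTable');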

    Let’s have a quick look at how row versioning works. On the following slide you can see that there are two processes that reference the same row – the row that has the pk value of 1.

    Before any data is changed, the value of col is 99.

    [Slide: two processes referencing the row with pk = 1]

    A new row version is created each time a row is modified, but queries issued before the modification commits see a version of the row as it existed before the modification.

    Process 1 updates the value of col to 100, and row version A is created. Because this version is a copy of the row as it existed before the update, row version A has a col value of 99.

    Then Process 2 issues a SELECT. It can only see committed data, and since Process 1 has not yet committed, Process 2 sees row version A, which has a col value of 99, not the value of 100 from the UPDATE.

    Next, Process 1 commits. At this point, the value of col in the database is 100, but it’s important to remember that row version A is still in use by the SELECT from Process 2, and that means that row version A cannot be discarded. Imagine this happening on a much larger scale, and think about the amount of memory all those row versions will consume. At the extreme end of this scenario, the In-Memory engine can actually run out of memory, and SQL Server itself can become unstable.

    Things to note:

  • Memory allocated to the In-Memory engine can never be paged out under any circumstance
  • Memory-optimized tables don’t support compression

    That’s why there must be a separate process to reclaim memory used by row versions after they’re no longer needed. A background process called Garbage Collection takes care of this, and it’s designed to allow the memory consumed by row versions to be deallocated, and therefore re-used.

    Garbage Collection is designed to be:

  • Non-blocking
  • Responsive
  • Cooperative
  • Scalable

The following slide shows various stages of memory allocation for an instance of SQL Server, and assumes that both disk-based and memory-optimized tables exist in the database. To avoid the performance penalty of doing physical IOs, data for harddrive-based tables should be cached in the buffer pool. But an ever-increasing footprint for the In-Memory engine puts pressure on the buffer pool, causing it to shrink. As a result, performance for harddrive-based tables can suffer from the ever-growing footprint of the In-Memory engine. In fact, the entire SQL Server instance can be impacted. 

[Slide: stages of memory allocation for a SQL Server instance hosting both disk-based and memory-optimized tables]

    We need to understand how Garbage Collection works, so that we can determine what might cause it to fail – or perform below expected levels.

    There are two types of objects that can hold rows in memory:

  • Memory-optimized tables
  • Memory-optimized table variables

Modifications to data in both types of objects will create row versions, and those row versions will of course consume memory. Unfortunately, row versions for memory-optimized table variables are not handled by the Garbage Collection process – the memory consumed by them is only released when the variable goes out of scope. If changes are made to memory-optimized table variables that affect many rows – especially if the table variable has a NONCLUSTERED index – a large amount of memory can be consumed by row versions (see Connect item here).

The Garbage Collection process

    By default, the main garbage collection thread wakes up once every minute, but this frequency changes with the number of completed transactions.

    Garbage Collection occurs in two phases:

  • Unlinking rows from all relevant indexes
  • Deallocating rows from memory

1. Unlinking rows from all relevant indexes

Before: Index references stale row versions


After: Index no longer references stale row versions. As part of user activity, indexes are scanned for rows that qualify for garbage collection, so stale row versions are easily identified if they reside in an active index range. But if an index range has low activity, a separate process is required to identify stale row versions. That process is called a “dusty corner” sweep – and it has to do much more work than the scans driven by user activity to identify stale rows. This can affect the performance of Garbage Collection, and allow the footprint of the In-Memory engine to grow.


2. Deallocating rows from memory

Each CPU scheduler has a garbage collection queue, and the main garbage collection thread places items on those queues. There is one scheduler for each queue, and after a user transaction commits, it selects all queued items on the scheduler it ran on, and deallocates memory for those items. If there are no items in the queue on its scheduler, the user transaction will search on any queue in the current NUMA node that’s not empty.


If transaction activity is low and there’s memory pressure, the main garbage-collection thread can deallocate rows from any queue.

    So the two triggers for Garbage Collection are memory pressure and/or transactional activity. Conversely, that means if there’s no memory pressure – or transactional activity is low – it’s perfectly reasonable to have row versions that aren’t garbage collected. There’s also no way to force garbage collection to occur.

    Monitoring memory usage per table

    We can use the sys.dm_db_xtp_table_memory_stats DMV to see how much memory is in use by a memory-optimized table. Row versions exist as rows in the table, so when we SELECT from sys.dm_db_xtp_table_memory_stats, the memory_used_by_table_kb column represents the total amount of memory in use by the table, including the amount consumed by row versions. There’s no way to see the amount of memory consumed by row versions alone, at either the table or database level.

    SELECT CONVERT(CHAR(20), OBJECT_NAME(object_id)) AS table_name
          ,*
    FROM sys.dm_db_xtp_table_memory_stats;

    [Screenshot: sys.dm_db_xtp_table_memory_stats output]

    Monitoring the Garbage Collection process

    To verify the current state of garbage collection, we can look at the output from the sys.dm_xtp_gc_queue_stats DMV. The output contains one row for each logical CPU on the server.

    SELECT * 
    FROM sys.dm_xtp_gc_queue_stats
    
    

        [Screenshot: sys.dm_xtp_gc_queue_stats output]

        If Garbage Collection is operational, we’ll see that there are non-zero values in the current_queue_depth column, and those values change every time we select from the queue stats DMV. If entries in the current_queue_depth column are not being processed or if no new items are being added to current_queue_depth for some of the queues, it means that garbage collection is not actively reclaiming memory, and as stated before, that might be ok, depending on memory pressure and/or transactional activity.

        Also remember that if we’re modifying rows in a memory-optimized table variable, Garbage Collection cannot clean up any of those row versions.

        Blocking Garbage Collection

        The only thing that can prevent Garbage Collection from being operational is a long-running transaction. That’s because long-running transactions can create long chains of row versions, and those versions can’t be cleaned up until all of the queries that reference them have completed – Garbage Collection simply has to wait.

        So – if you expect Garbage Collection to be active, and it’s not, the first thing you should check is if there are any long running transactions.
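A simple starting point is to look for the oldest active transactions and the sessions that own them (a sketch using the standard transaction DMVs):

SELECT s.session_id,
       t.transaction_id,
       t.transaction_begin_time,
       DATEDIFF(SECOND, t.transaction_begin_time, SYSDATETIME()) AS duration_seconds
FROM sys.dm_tran_active_transactions AS t
JOIN sys.dm_tran_session_transactions AS s
    ON s.transaction_id = t.transaction_id
ORDER BY t.transaction_begin_time;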

        Summing up

        Now you know about how the Garbage Collection process works for row versions, which types of memory-optimized objects you expect it to work with, and how to determine if it’s operational. There’s also a completely separate Garbage Collection process for handling data/delta files, and I’ll cover that in a separate post.