Category Archives: SQL Server

The subtleties of In-Memory OLTP Indexing

For this post, I wanted to cover some of the indexing subtleties for memory-optimized tables, with an accent on columnstore indexes.

Let’s create a memory-optimized table:

CREATE TABLE [dbo].[InMem]
(
    [PK] [INT] IDENTITY(1, 1) NOT NULL
   ,[col1] [CHAR](100) NOT NULL
   ,PRIMARY KEY NONCLUSTERED HASH ([PK]) WITH (BUCKET_COUNT = 1000)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

Now, let’s attempt to create a NONCLUSTERED COLUMNSTORE INDEX:

ALTER TABLE dbo.InMem 
ADD INDEX NCCI_InMem NONCLUSTERED COLUMNSTORE (col1);

Msg 10794, Level 16, State 76, Line 76
The feature ‘NONCLUSTERED COLUMNSTORE’ is not supported with memory optimized tables.

It fails because we can only create a CLUSTERED columnstore index (CCI). For 25 years, Microsoft SQL Server differentiated between indexes that physically ordered data on storage (CLUSTERED) and those that did not (NONCLUSTERED). Unfortunately, they chose to ignore that pattern when creating the syntax for memory-optimized tables; using the word CLUSTERED is required when creating a columnstore index on memory-optimized tables.

Can we create a clustered columnstore index on a memory-optimized table that is defined as SCHEMA_ONLY?

Only one way to find out:

DROP TABLE IF EXISTS [dbo].[InMem];
GO

CREATE TABLE [dbo].[InMem]
(
    [PK] [INT] IDENTITY(1, 1) NOT NULL
   ,[col1] [CHAR](100) NOT NULL
   ,PRIMARY KEY NONCLUSTERED HASH ([PK]) WITH (BUCKET_COUNT = 1000)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);

ALTER TABLE dbo.InMem ADD INDEX CCI_InMem CLUSTERED COLUMNSTORE

Msg 35320, Level 16, State 1, Line 39
Column store indexes are not allowed on tables for which the durability option SCHEMA_ONLY is specified.

That won’t work, so let’s create our table with SCHEMA_AND_DATA:

DROP TABLE IF EXISTS [dbo].[InMem];
GO
CREATE TABLE [dbo].[InMem]
(
    [PK] [INT] IDENTITY(1, 1) NOT NULL PRIMARY KEY NONCLUSTERED
   ,[col1] [CHAR](100) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

Now, let’s create a clustered columnstore index:

ALTER TABLE dbo.InMem ADD INDEX CCI_InMem CLUSTERED COLUMNSTORE

Success! Let’s attempt to create a NONCLUSTERED index….

ALTER TABLE dbo.InMem ADD INDEX IX_Index1 (col1);

Msg 10794, Level 16, State 15, Line 117
The operation ‘ALTER TABLE’ is not supported with memory optimized tables that have a column store index.

Ooops – no can do. Once you add a clustered columnstore index to a memory-optimized table, the schema is totally locked down.

What about if we create the CCI and nonclustered index inline?

DROP TABLE IF EXISTS [dbo].[InMem];
GO
CREATE TABLE [dbo].[InMem]
(
    [PK] [INT] IDENTITY(1, 1) NOT NULL
   ,[col1] [CHAR](100) NOT NULL
   ,PRIMARY KEY NONCLUSTERED HASH ([PK]) WITH (BUCKET_COUNT = 1000)
   ,INDEX CCI_InMem CLUSTERED COLUMNSTORE
   ,INDEX IX_InMem1 (col1)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

Awesome! We’ve proven that we can create both clustered columnstore and nonclustered indexes, but we must create them inline.

Now that we’ve got our indexes created, let’s try to add a column:

ALTER TABLE dbo.InMem ADD col2 INT NULL;

Msg 12349, Level 16, State 1, Line 68
Operation not supported for memory optimized tables having columnstore index.

Hey, when I said that the schema is locked down once you add a clustered columnstore index, I meant it!

What type of index maintenance is possible for indexes on memory-optimized tables?

For HASH indexes there is only one possible type of index maintenance, and that’s to modify/adjust the bucket count. There is zero index maintenance for RANGE/NONCLUSTERED indexes.

Let’s create a memory-optimized table with a HASH index, and verify the syntax for rebuilding the bucket count.

DROP TABLE IF EXISTS [dbo].[InMem];
GO
CREATE TABLE [dbo].[InMem]
(
    [PK] [INT] IDENTITY(1, 1) NOT NULL
   ,[col1] [CHAR](100) NOT NULL
   ,PRIMARY KEY NONCLUSTERED HASH ([PK]) WITH (BUCKET_COUNT = 1000)
   ,INDEX IX_InMem1 HASH (col1) WITH (BUCKET_COUNT = 1000)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

Here’s the syntax for rebuilding the bucket count for a HASH INDEX:

ALTER TABLE dbo.InMem 
ALTER INDEX IX_InMem1 
REBUILD WITH(BUCKET_COUNT = 1001)
GO

We can add a column, as long as we don’t have a CCI in place:

ALTER TABLE dbo.InMem ADD col2 INT NULL;
GO

How about trying to rebuild the bucket count if we created the memory-optimized table with inline CCI and HASH indexes?

DROP TABLE IF EXISTS [dbo].[InMem];
GO

CREATE TABLE [dbo].[InMem]
(
    [PK] [INT] IDENTITY(1, 1) NOT NULL
   ,[col1] [CHAR](100) NOT NULL
   ,PRIMARY KEY NONCLUSTERED HASH ([PK]) WITH (BUCKET_COUNT = 1000)
   ,INDEX CCI_InMem CLUSTERED COLUMNSTORE
   ,INDEX IX_InMem1 HASH (col1) WITH (BUCKET_COUNT = 1000)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

ALTER TABLE dbo.InMem ALTER INDEX IX_InMem1 REBUILD WITH(BUCKET_COUNT = 1001)
GO

Msg 10794, Level 16, State 13, Line 136
The operation ‘ALTER TABLE’ is not supported with memory optimized tables that have a column store index.

You can’t rebuild that index if you also have a columnstore index on the table. We would have to drop the columnstore index, reconfigure the bucket count for the HASH index, and then recreate the columnstore index. Both the drop and the create of the columnstore index will be fully logged, and executed serially. Not a huge problem if the amount of data is not too large, but it’s a potentially much larger problem if you’ve got a lot of data.
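
An untested sketch of that workaround (assuming the drop of the columnstore index itself is allowed, as described above), using the inline-created table from the previous example:

ALTER TABLE dbo.InMem DROP INDEX CCI_InMem;
GO
ALTER TABLE dbo.InMem ALTER INDEX IX_InMem1 REBUILD WITH (BUCKET_COUNT = 1001);
GO
ALTER TABLE dbo.InMem ADD INDEX CCI_InMem CLUSTERED COLUMNSTORE;
GO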

We can create a clustered columnstore index on a #temp table (on-disk):

DROP TABLE IF EXISTS #temp
GO

CREATE TABLE #temp
(
    col1 INT NOT NULL
);
GO

-- succeeds
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_InMem ON #temp (col1);

--OR clustered, which also succeeds (run one or the other - a table can't have both)
CREATE CLUSTERED COLUMNSTORE INDEX CCI_temp ON #temp;

Can we create a columnstore index on a memory-optimized table variable?

DROP TYPE IF EXISTS dbo.typeTableMem
GO

CREATE TYPE dbo.typeTableMem AS TABLE
(
    [PK] INT NOT NULL PRIMARY KEY NONCLUSTERED
   ,[col1] NVARCHAR(255) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON);
GO

DECLARE @Table1 dbo.typeTableMem;
-- fails--Msg 102, Level 15, State 1--Incorrect syntax near '@Table1'.
ALTER TABLE @Table1 ADD INDEX CCI_InMem CLUSTERED COLUMNSTORE

Create a table that includes a LOB column with a MAX datatype, then add a clustered columnstore index:

DROP TABLE IF EXISTS [dbo].[InMemLOB];
GO
CREATE TABLE [dbo].[InMemLOB]
(
    [OrderId] [INT] IDENTITY NOT NULL
   ,[StoreID] INT NOT NULL
   ,[CustomerID] INT NOT NULL
   ,[OrderDate] [DATETIME] NOT NULL
   ,[DeliveryDate] DATETIME NULL
   ,[Amount] FLOAT NOT NULL
   ,[Notes] [NVARCHAR](MAX) NULL
   ,[ValidFrom] [DATETIME2](7) NOT NULL
   ,[ValidTo] [DATETIME2](7) NOT NULL
   ,CONSTRAINT [PK_InMemLOB_OrderID] PRIMARY KEY NONCLUSTERED (OrderId)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

ALTER TABLE dbo.[InMemLOB] 
ADD INDEX CCI_InMemLOB CLUSTERED COLUMNSTORE

Msg 35343, Level 16, State 1, Line 22
The statement failed. Column ‘Notes’ has a data type that cannot participate in a columnstore index. Omit column ‘Notes’.

Msg 1750, Level 16, State 1, Line 22
Could not create constraint or index. See previous errors.

For memory-optimized tables, LOB columns prevent creation of a clustered columnstore index.

Now let’s try creating a table using CHAR(8000). Astute readers will notice that the following table would create rows that are 32,060 bytes wide – this would fail for on-disk tables, but is perfectly valid for memory-optimized tables:

DROP TABLE IF EXISTS [dbo].[InMemLOB];
GO
CREATE TABLE [dbo].[InMemLOB]
(
    [OrderId] [INT] IDENTITY NOT NULL
   ,[StoreID] INT NOT NULL
   ,[CustomerID] INT NOT NULL
   ,[OrderDate] [DATETIME] NOT NULL
   ,[DeliveryDate] DATETIME NULL
   ,[Amount] FLOAT
   ,[Notes1] [CHAR](8000) NULL
   ,[Notes2] [CHAR](8000) NULL
   ,[Notes3] [CHAR](8000) NULL
   ,[Notes4] [CHAR](8000) NULL
   ,[ValidFrom] [DATETIME2](7) NOT NULL
   ,[ValidTo] [DATETIME2](7) NOT NULL
   ,CONSTRAINT [PK_InMemLOB_OrderID] PRIMARY KEY NONCLUSTERED (OrderId)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO
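
To be clear, the CREATE TABLE itself succeeds – the error below is raised when we then try to add the columnstore index, using the same ALTER as before:

ALTER TABLE dbo.[InMemLOB] 
ADD INDEX CCI_InMemLOB CLUSTERED COLUMNSTORE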

Msg 41833, Level 16, State 1, Line 29    Columnstore index ‘CCI_InMemLOB’ cannot be created, because table ‘InMemLOB’ has columns stored off-row.   
Columnstore indexes can only be created on memory-optimized table if the columns fit within the 8060 byte limit for in-row data.   
Reduce the size of the columns to fit within 8060 bytes.

Now let’s create a table where all of the columns fit in-row (this time a single CHAR(8000) column), then add a clustered columnstore index:

DROP TABLE IF EXISTS [dbo].[InMemLOB];
GO
CREATE TABLE [dbo].[InMemLOB]
(
    [OrderId] [INT] IDENTITY NOT NULL
   ,[StoreID] INT NOT NULL
   ,[Notes1] [CHAR](8000) NULL
   ,[ValidFrom] [DATETIME2](7) NOT NULL
   ,[ValidTo] [DATETIME2](7) NOT NULL
   ,CONSTRAINT [PK_InMemLOB_OrderID] PRIMARY KEY NONCLUSTERED (OrderId)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

-- works
ALTER TABLE InMemLOB ADD INDEX IndOrders_StoreID (StoreID);
GO

-- works
ALTER TABLE dbo.[InMemLOB] ADD INDEX CCI_InMemLOB CLUSTERED COLUMNSTORE
GO

--Add data to table:
SET NOCOUNT ON
GO
INSERT dbo.[InMemLOB]
SELECT
    3 AS [StoreID]
   ,'ABC' AS [Notes1]
   ,GETDATE() AS [ValidFrom]
   ,GETDATE() AS [ValidTo]
GO 1000

SELECT *
FROM dbo.[InMemLOB]

Let’s create a natively compiled module that selects from this table:

DROP PROCEDURE IF EXISTS dbo.InMem_native_sp
GO
CREATE PROCEDURE dbo.InMem_native_sp
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    SELECT
        [OrderId]
       ,[StoreID]
       ,[Notes1]
       ,[ValidFrom]
       ,[ValidTo]
    FROM dbo.[InMemLOB];
END;
GO

Enable “Include Actual Execution Plan” and run the following SELECT – which index is used?

SELECT
    [OrderId]
   ,[StoreID]
   ,[Notes1]
   ,[ValidFrom]
   ,[ValidTo]
FROM dbo.[InMemLOB];

CCIPlan1

Now highlight the EXEC statement, and click “Estimated Plan” – which index is used?

EXEC dbo.InMem_native_sp

CCIPlan2

The SELECT statement uses the columnstore index, but the natively compiled procedure does not (that’s because natively compiled procedures ignore columnstore indexes).

Summing up

In this post, we’ve covered some of the finer points of indexing memory-optimized tables. Never know when they might come in handy….

SQL Server on Linux, Part 1

SQL 2017 is just about to be released, and one of the big ticket items is that SQL Server is now supported on the Linux platform.

In subsequent posts, I’ll be reporting on In-Memory OLTP on Linux, but first we’ll need to cover some Linux basics. I flirted with Unix ages ago, and I’ll be the first to admit that my brain doesn’t really work that way (perhaps no one’s brain does).

First, a note about environments – I usually like to work on a server in my home lab, because it has a lot of cores, 64GB of memory, and there’s no hourly cost for using it (and also because I built it….).

So I downloaded a copy of Ubuntu, CentOS, and a trial copy of Redhat Enterprise Linux, and attempted to install each one in my VMware Workstation environment. I spun my wheels for a few hours, and could not get any of them up and running in the way that I required. So, in the interest of saving time, I hit my Azure account, created a VM running Redhat, and proceeded to install SQL 2017 CTP2. Instructions for installing SQL 2017 on Linux can be found at this link. It should be noted that the installation varies by Linux distribution.

Those of us who don’t know Linux commands by heart, and are used to firing up GUI-based virtual machines, are in for a bit of a rude awakening. While it is possible to install GNOME on RHEL, you can’t simply RDP into the VM without a lot of Linux admin setup for xrdp (I never did get it to work). So how do you connect to your Linux VM running SQL Server to do basic tasks? The answer is: PuTTY

PuTTY can be downloaded from this link, and after you install it on your client machine (your laptop or home workstation), connecting to your Azure VM is very easy. When you run PuTTY, you’re presented with the following window, and you can simply enter your IP address into the “Host Name (or IP address)” section, and click the “Open” button:

PUTTY

(you might receive a warning to confirm you want to connect).

Once you connect to the Azure VM, you are prompted for your user name and password, and after logging in to the VM, you arrive at the home directory for your login.

HomeDir

Once you’ve installed SQL Server according to the instructions at this link, you can use SSMS from your desktop to connect over the public internet, and manage your SQL Server environment. It’s a really good idea to limit the inbound connections for your VM to only your IP address, otherwise bots from all over the globe will attempt to hack your machine (you have been warned….).

Now that SQL Server is installed and running, we can attempt to connect, and create a database.

In SSMS, click connect, choose “Database Engine”, and when prompted, enter the user name and password. Make sure “SQL Server Authentication” is chosen, and not “Windows Authentication”.

The first thing I did was to determine where the system databases were stored, so I executed:

sp_helpdb master

master

I used the same path as the master database files to create a test database:

USE master
GO
CREATE DATABASE [TestDB]
ON PRIMARY
       (
           NAME = N'TestDBData'
          ,FILENAME = N'/var/opt/mssql/data/TestDB.mdf'
          ,SIZE = 100MB
          ,MAXSIZE = UNLIMITED
          ,FILEGROWTH = 100MB
       )
LOG ON
    (
        NAME = N'TestDBLog'
       ,FILENAME = N'/var/opt/mssql/data/TestDB.ldf'
       ,SIZE = 100MB
       ,MAXSIZE = 2048GB
       ,FILEGROWTH = 100MB
    );
GO

That worked fine, but what if we want to create a database in a separate folder?

Using PuTTY, we can create a folder using the mkdir command (xp_cmdshell is not currently supported for SQL Server running on Linux):

mkdir /var/opt/sqldata

mkdir1

Unfortunately, that didn’t go as planned! We don’t have permission to create that folder, so we’ll try using sudo (more on sudo at this link):

sudo mkdir /var/opt/sqldata

sudomkdir

sudo prompts you for your password, after which it will create the directory.

Now that the directory has been created, we can attempt to create a new database there.

USE master
GO
CREATE DATABASE [TestDB2]
ON PRIMARY
       (
           NAME = N'TestDB2Data'
          ,FILENAME = N'/var/opt/sqldata/TestDB2.mdf'
          ,SIZE = 100MB
          ,MAXSIZE = UNLIMITED
          ,FILEGROWTH = 100MB
       )
LOG ON
    (
        NAME = N'TestDB2Log'
       ,FILENAME = N'/var/opt/sqldata/TestDB2.ldf'
       ,SIZE = 100MB
       ,MAXSIZE = 2048GB
       ,FILEGROWTH = 100MB
    );
GO

error1

Still no luck – what could be the issue?

Let’s check the security context of the mssql service:

ps aux | grep mssql

mssql service

So, the sqlserver process executes under the mssql user account. Let’s check permissions in the sqldata directory with:

stat --format "%A" /var/opt/sqldata

On my VM, the results are:

rwxr-xr-x

Permissions for Linux files are separated into three sections:

  • owner
  • group (for the file or directory)
  • others

Each section can have the following attributes:

  • (r)ead
  • (w)rite
  • e(x)ecute

For more information on these attributes, please visit this link.

It’s easier to interpret the output if we break it up:

[rwx] [r-x] [r-x]

  • the directory owner has read, write, and execute permission
  • the directory group has read and execute permission
  • others have read and execute permission

When we create a directory, it’s owned by root. The problem with creating a database in this directory should be obvious: only the owner of the directory has write permission.

Let’s make the mssql user the owner of the sqldata directory:

sudo chown mssql:mssql /var/opt/sqldata

chown
And finally, we’ll check the permissions for the sqldata folder:

final

Now let’s retry our CREATE DATABASE statement.

USE master
GO
CREATE DATABASE [TestDB2]
ON PRIMARY
       (
           NAME = N’TestDB2Data’
          ,FILENAME = N’/var/opt/sqldata/TestDB2.mdf’
          ,SIZE = 100MB
          ,MAXSIZE = UNLIMITED
          ,FILEGROWTH = 100MB
       )
LOG ON
    (
        NAME = N’TestDB2Log’
       ,FILENAME = N’/var/opt/sqldata/TestDB2.ldf’
       ,SIZE = 100MB
       ,MAXSIZE = 2048GB
       ,FILEGROWTH = 100MB
    );
GO

Voila! We successfully created a database in the intended folder.

Seasoned DBAs might be wondering about Instant File Initialization (IFI), a best practice on Windows that greatly increases the speed of creating or extending data files.

When IFI is not configured, data files must be zeroed when created or extended. Does Linux have something akin to IFI? The answer is that IFI does not exist as something you can configure on the file systems that SQL Server on Linux supports (EXT4, available on all distributions, or XFS, available only on Redhat).

However, the good news is that on the Linux platform, data files are not initialized with zeros when created or extended – Linux takes care of this without any intervention from DBAs.

Anthony Nocentino (@centinosystems) just blogged about the internals of file initialization on the Linux platform in this post.

SQL 2017 In-Memory roundup

SQL Server 2017 includes enhancements to many features, and some of those enhancements include In-Memory OLTP.

  • Microsoft states that ALTER TABLE for memory-optimized tables is now “usually substantially faster”. I asked for clarity about that – whether it means that ALTER TABLE is faster for the same operations that could already be executed in parallel and minimally logged in SQL 2016, or whether there are new ALTER TABLE operations that now execute in parallel. They replied that there is no change to the set of operations that execute in parallel. So the ALTER TABLE commands that already executed fast now (theoretically) execute faster.
  • Up to and including SQL 2016, the maximum number of nonclustered indexes on a memory-optimized table was eight, but that limitation has been removed for SQL 2017. I’ve tested this with almost 300 indexes, and it worked. With this many supported indexes, it’s no wonder they had to….
  • Enhance the index rebuild performance for nonclustered indexes during database recovery. I confirmed with Microsoft that the database does not have to be in SQL 2017 compatibility mode (140) to benefit from the index rebuild enhancement. This type of rebuild happens not only for database restore and failover, but also for other “recovery events” – see my blog post here.
  • In SQL 2017, memory-optimized tables now support JSON in native modules (functions, procedures and check constraints).
  • Computed columns, and indexes on computed columns, are now supported
  • TSQL enhancements for natively compiled modules include CASE, CROSS APPLY, and TOP (N) WITH TIES
  • Transaction log redo of memory-optimized tables is now done in parallel. This has been the case for on-disk tables since SQL 2016, so it’s great that this potential bottleneck for REDO has been removed.
  • Memory-optimized filegroup files can now be stored on Azure Storage, and you can also backup and restore memory-optimized files on Azure Storage.
  • sp_spaceused is now supported for memory-optimized tables
  • And last but definitely not least,  drum roll, please…….we can now rename memory-optimized tables and natively compiled modules
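
Renaming is done with the familiar sp_rename; a quick sketch (the object names below are just examples):

EXEC sys.sp_rename @objname = N'dbo.InMem', @newname = N'InMem_Renamed';
EXEC sys.sp_rename @objname = N'dbo.InMem_native_sp', @newname = N'InMem_native_sp_renamed';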

While Microsoft continues to improve columnstore indexes for on-disk tables, unfortunately columnstore for memory-optimized tables gets left further and further behind. Case in point would be support for LOB columns for on-disk tables in SQL 2017, but no such support for memory-optimized tables. And my good friend Niko Neugebauer (b|t) just reminded me that computed columns for on-disk CCI are supported in SQL 2017, but they are not supported for in-memory CCI. For an in-depth comparison of columnstore differences between on-disk and memory-optimized tables, see my  post here.

In addition to what’s listed above, I tested the following functionality for natively compiled stored procedures:

My wish list for the In-Memory OLTP feature is huge, but it’s great to see Microsoft continually improve and extend it.

Which events cause durable memory-optimized data to be streamed to memory?

Those who have studied In-Memory OLTP are aware that in the event of “database restart”, durable memory-optimized data must be streamed from disk to memory. But that’s not the only time data must be streamed, and the complete set of events that cause this is not intuitive. To be clear, if your database had to stream data back to memory, that means all your memory-optimized data was cleared from memory. The amount of time it takes to do this depends on:

  • the amount of data that must be streamed
  • the number of indexes that must be rebuilt (SQL 2017 has a much faster index rebuild process, see below)
  • the number of containers in the memory-optimized database, and how many volumes they’re spread across
  • the number of LOB columns
  • whether the BUCKET_COUNT is properly configured for HASH indexes

The following list is for standalone SQL Server instances (under some circumstances, the streaming requirements are different between FCIs and AGs).

Database RESTORE – this is a no brainer – if you restore a database with durable memory-optimized data, of course your data must be streamed from disk into memory. And if you are under the impression that SQL Server verifies if your server has enough memory to complete the RESTORE, you would be mistaken. See my post here.

SQL Service restart – in this case, all databases will go through the recovery process, and all memory-optimized databases will stream durable memory-optimized data to memory.

Server reboot – same as “SQL Service restart”

In addition to the list above, there are a few database settings that cause data to be streamed.

  • Changing a database from READ_ONLY to READ_WRITE, or from READ_WRITE to READ_ONLY
  • Setting READ_COMMITTED_SNAPSHOT OFF or ON
  • Taking a database OFFLINE and then ONLINE

A database that contains durable memory-optimized data will not be online until all memory-optimized data is finished streaming, which affects the availability of “traditional” tables (while a database is waiting for streaming to complete, the wait type is “WAIT_XTP_RECOVERY”). There’s nothing you can do to speed up the process, other than having initially defined enough containers on enough volumes, so that streaming executes in parallel.
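
If you want to see whether sessions are currently stuck behind streaming, a simple check (just a sketch) is to look for that wait type in sys.dm_exec_requests:

SELECT session_id, database_id, wait_type, wait_time
FROM sys.dm_exec_requests
WHERE wait_type = N'WAIT_XTP_RECOVERY';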

SQL 2017 enhancements

Because modifications to HASH and NONCLUSTERED/RANGE indexes are not logged for memory-optimized tables, they must be rebuilt when data is streamed to memory. Both SQL 2014 and SQL 2016 have a limit of 8 NONCLUSTERED indexes per memory-optimized table (any combination of HASH and RANGE). Microsoft has designed a new process for enhancing index rebuild speed in SQL 2017. This dovetails perfectly with the removal of the 8-indexes-per-table limit in SQL 2017 (I have personally created a table with 298 NONCLUSTERED indexes in SQL 2017).

Migrating tables to In-Memory OLTP

One of the first things you might do when considering migrating a table to In-Memory OLTP, is to run the “Transaction Performance Analysis Overview” report:

BlogTPAO

Then you arrive at the following window:

BlogTPAnalysis

If you click on “Tables Analysis”, a sort of “magic quadrant” appears:

BlogQuadrant

The chart attempts to display both the best candidates and the degree of difficulty for migration. But there are a few problems with the “difficulty” rating of this internal query, and as a result, if we really want to know how easy/difficult the process might be, we’ll have to roll our own query.

The original query counts the following conditions (the list is out of order on purpose):

nullable columns
LOB data types, i.e. NVARCHAR(MAX)
unique indexes/constraints
default constraints
check constraints
UDT
foreign keys
triggers
index with type > 2
computed columns
sparse columns
IDENTITY <> 1, 1
assembly
unsupported data types, i.e. DATETIMEOFFSET
filestream columns

Some of those conditions are 100% supported in SQL 2016, without any issue, while others have variable levels of migration difficulty, and still others are not supported at all. But even if we remove the items that are completely supported, all of the remaining items have the same weight. That could be pretty misleading, and might cause you to rule out migrating a table to In-Memory that could potentially be an excellent candidate.

Now let’s look at the list in another way:

**************************
supported without any issues
**************************
nullable columns
LOB data types, i.e NVARCHAR(MAX)
unique indexes/constraints

**********************************************************************
supported with a range of migration difficulty (from no difficulty to very difficult)
**********************************************************************
default constraints
check constraints
UDT
foreign keys
triggers

index with type > 2
0 = Heap
1 = Clustered
2 = Nonclustered
3 = XML
4 = Spatial
5 = Clustered columnstore index
6 = Nonclustered columnstore index
7 = Nonclustered hash index

********************
unsupported/blocking
********************
computed columns
sparse columns
IDENTITY <> 1, 1
assembly
unsupported data types, i.e. DATETIMEOFFSET
filestream columns

My version of the script removes the checks for nullable and LOB columns, and also for UNIQUE indexes/constraints.

And for the remaining conditions, since they’re all weighted the same by virtue of counting them, I wanted to place them in different buckets. After running my script on a sample database, I can see that the AuditTrail table has the following potential migration “issues”:

BlogIssues

There are a total of 8 issues, although migrating default constraints, user-defined data types, and LOB columns will be easy. It’s the foreign keys that might prove difficult, potentially leading to a long chain of tables that would have to be migrated (because foreign key constraints on a memory-optimized table can only reference other memory-optimized tables).

We definitely have a much clearer picture of the potential migration difficulties. With this version of the script, you can make informed choices about which tables you might want to migrate to In-Memory OLTP.
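
To give a flavor of the approach (this is only a bare-bones sketch, not the actual script), the bucketed checks for a single table, assumed here to be dbo.AuditTrail, might look something like this:

DECLARE @object_id INT = OBJECT_ID(N'dbo.AuditTrail');

SELECT issue, difficulty, issueCount
FROM
(
    SELECT 'foreign keys' AS issue, 'varies' AS difficulty, COUNT(*) AS issueCount
    FROM sys.foreign_keys
    WHERE parent_object_id = @object_id
    UNION ALL
    SELECT 'triggers', 'varies', COUNT(*)
    FROM sys.triggers
    WHERE parent_id = @object_id
    UNION ALL
    SELECT 'default constraints', 'easy', COUNT(*)
    FROM sys.default_constraints
    WHERE parent_object_id = @object_id
    UNION ALL
    SELECT 'computed columns', 'blocking (SQL 2016)', COUNT(*)
    FROM sys.computed_columns
    WHERE object_id = @object_id
    UNION ALL
    SELECT 'sparse columns', 'blocking', COUNT(*)
    FROM sys.columns
    WHERE object_id = @object_id AND is_sparse = 1
    UNION ALL
    SELECT 'IDENTITY <> (1,1)', 'blocking', COUNT(*)
    FROM sys.identity_columns
    WHERE object_id = @object_id
          AND (CAST(seed_value AS BIGINT) <> 1 OR CAST(increment_value AS BIGINT) <> 1)
    UNION ALL
    SELECT 'index with type > 2', 'varies', COUNT(*)
    FROM sys.indexes
    WHERE object_id = @object_id AND type > 2
) AS checks
WHERE issueCount > 0;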

Also note that computed columns are supported in SQL 2017, so this script could have some intelligence added to allow for that.

Availability Groups and Native Compilation

For disk-based tables, query plans for interpreted/traditional stored procedures will be recompiled when statistics have changed. That’s because when you update statistics, cached query plans for interpreted stored procedures are invalidated, and will automatically recompile the next time they’re executed. That’s true for an interpreted stored procedure that references disk-based tables, and/or memory-optimized tables.

As of SQL 2016, the database engine automatically updates statistics for memory-optimized tables (documentation here), but recompilation of native modules must still be performed manually. But hey, that’s way better than SQL 2014, when you couldn’t recompile at all; you had to drop/recreate the native module. And natively compiled stored procedures don’t reside in the plan cache, because they are executed directly by the database engine.

This post attempts to determine if the requirement to manually recompile native modules is any different for AG secondary replicas.

Stats on the primary

Statistics that are updated on the primary replica will eventually make their way to all secondary replicas. This blog post by Sunil Agarwal details what happens on the secondary replica if the statistics are stale (relative to any temporary statistics that were created on the secondary).

How do we…?

The first question we must answer is: how do you determine when the last time a natively compiled stored procedure was compiled?

We can do that by checking the value of the cached_time column from the following query:
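
The original query isn’t reproduced here, but something along these lines (a sketch) returns cached_time for natively compiled procedures in the current database:

SELECT OBJECT_NAME(ps.object_id) AS procedureName
      ,ps.cached_time
      ,ps.last_execution_time
      ,ps.execution_count
FROM sys.dm_exec_procedure_stats AS ps
INNER JOIN sys.sql_modules AS m
        ON m.object_id = ps.object_id
WHERE m.uses_native_compilation = 1
      AND ps.database_id = DB_ID();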

The query is simple, but you won’t get any results unless you enable the collection of stored procedure execution statistics for natively compiled procedures. Execution statistics can be collected at the object level or instance level.
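
Enabling instance-level collection for natively compiled procedures might look like this (a sketch; please read the warning that follows before running it anywhere that matters):

EXEC sys.sp_xtp_control_proc_exec_stats @new_collection_value = 1;

-- query-level statistics can be enabled in a similar way with sys.sp_xtp_control_query_exec_stats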

NOTE: Enabling the collection of stored procedure statistics for natively compiled procedures can crush your server, potentially resulting in disastrous performance impact. You must be extremely careful with this method of troubleshooting.

Once you’ve enabled stats collection for native procedures, you should get results from the query above.

How I tested

Here are the steps I executed, after creating an AG that used synchronous mode (containing a single database with a memory-optimized filegroup):

  1. Create a sample table
  2. Insert some rows
  3. Create a natively compiled procedure that selects from the sample table
  4. Execute the native procedure on the primary and secondary (it must be executed at least once in order to have usage stats collected)
  5. Enable collection of stored procedure execution statistics on the primary and secondary replicas
  6. Again execute the native procedure on the primary and secondary
  7. Note the value of sys.dm_exec_procedure_stats.cached_time on the primary and secondary
  8. Recompile the native procedure on the primary
  9. Execute the native procedure on the primary and secondary
  10. Again note the value of sys.dm_exec_procedure_stats.cached_time on the primary and secondary

Results

The cached_time value on the secondary did not get updated when the native module was recompiled on the primary.

What does this mean for DBAs that are responsible for maintaining AGs that use native compilation?  It means that when you recompile native modules on the primary replica (which you would always do after updating statistics on the primary), those modules must be recompiled on all secondary replicas. The recompilation on the secondary can be performed manually or perhaps through some automated mechanism. For example, if you have a SQL Agent job on the primary replica to update statistics, one of the job steps might be for marking all natively compiled stored procedures on the secondary for recompilation, using sp_recompile.
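
One possible sketch of such a job step (not a supported Microsoft template) marks every natively compiled module in the current database for recompilation:

DECLARE @name NVARCHAR(776);

DECLARE native_modules CURSOR LOCAL FAST_FORWARD FOR
    SELECT QUOTENAME(OBJECT_SCHEMA_NAME(object_id)) + N'.' + QUOTENAME(OBJECT_NAME(object_id))
    FROM sys.sql_modules
    WHERE uses_native_compilation = 1;

OPEN native_modules;
FETCH NEXT FROM native_modules INTO @name;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- mark the module for recompilation on its next execution
    EXEC sys.sp_recompile @objname = @name;
    FETCH NEXT FROM native_modules INTO @name;
END;

CLOSE native_modules;
DEALLOCATE native_modules;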

How would that job step handle the recompile for all secondary replicas?

Perhaps after defining linked servers, you could do something like:

EXEC SecondaryServer1.msdb.dbo.sp_start_job @job_name = N'Recompile native procs';

EXEC SecondaryServer2.msdb.dbo.sp_start_job @job_name = N'Recompile native procs';

But it might be involved to define this for all secondary replicas – it sounds like a topic for another post…..

Happy recompiling –

Clustered columnstore: on-disk vs. in-mem

This post will highlight the fairly wide gap in functionality between clustered columnstore indexes for on-disk tables compared to memory-optimized tables, for SQL 2016.

The first thing to note – which is not specific to columnstore for either type of table – is that Microsoft chose a really poor name for this type of index.

Why do I say that?

For the following reasons:

1. Traditional/regular clustered indexes physically sort data, but data within a columnstore index is unordered (that’s true for clustered and nonclustered columnstore indexes).

2. For a traditional/regular clustered index, the key columns are added to all nonclustered indexes. Columnstore indexes don’t have a “key” so nothing about them gets copied to any nonclustered indexes.

Now that we’ve established the issues with the name – and the potential confusion as a result – let’s get to the differences between clustered columnstore indexes for on-disk tables compared to memory-optimized tables.

The only type of columnstore index you are allowed to create on a memory-optimized table is a clustered columnstore. But as you’ll find out in a moment, that naming convention has its own set of issues when discussing memory-optimized tables…..

1. Primary data source

On-disk: When you create a clustered columnstore index on a disk-based table, it actually replaces the rowstore, and becomes the primary data source. Columnstore compression typically reduces the storage footprint greatly (often by ~90% or so), depending on the specific data types represented in the table.

Memory-optimized: When you create a clustered columnstore index on a memory-optimized table, the rows in memory remain the primary data source. Creating a “clustered columnstore” index on a memory-optimized table actually increases the storage footprint (the same is true for nonclustered HASH and RANGE indexes) by about 10%. That’s why I say that all indexes on memory-optimized tables are nonclustered (because they are). Keep in mind that “storage footprint” in this case means “additional memory”. Despite what you may have read or heard elsewhere, clustered columnstore indexes on memory-optimized tables are persisted to disk, so they also take up space on your persistent storage. The reason this is done is to reduce the impact on RTO (see my blog post here for more info).

2. Schema changes

On-disk: After adding a clustered columnstore index to an on-disk table, you can use ALTER TABLE to modify the schema, adding or dropping both columns and nonclustered indexes.

Memory-optimized: After adding a clustered columnstore index to a memory-optimized table, you cannot use ALTER TABLE to modify the schema. The schema becomes read-only, but of course you can still perform DML on the table.

3. LOB data

On-disk: You can create a clustered columnstore index on an on-disk table that has LOB data in vNext (not SQL 2014 or 2016).

Memory-optimized: As of vNext RC1, LOBs are not supported for memory-optimized tables, although this might change by the time we get to RTM.

(Update: 27 Aug 2017 – SQL 2017 does not support LOB columns for CCI on memory-optimized tables)

4. NC indexes

On-disk: After you create a clustered columnstore index on an on-disk table, you can use ALTER TABLE to create nonclustered indexes.

Memory-optimized: After you create a clustered columnstore index on a memory-optimized table, you cannot use ALTER TABLE to create nonclustered indexes, but they can be created inline as part of the CREATE TABLE statement.

5. Parallelism

On-disk: When you query an on-disk table that has a columnstore index, the database engine can use parallelism to process the results more quickly.

Memory-optimized: When you query a memory-optimized table that has a columnstore index, the database engine can use parallelism to process the results more quickly, BUT, that statement is only true if you use interop. Natively compiled modules are always executed serially.

6. Archival compression

On-disk: There are two compression options for on-disk tables: columnstore and columnstore archival.

Memory-optimized: There is only one compression option for memory-optimized tables: columnstore.

7. Index maintenance

On-disk: You can use REBUILD or REORGANIZE for clustered columnstore indexes for on-disk tables.

Memory-optimized: REORGANIZE is not supported for memory-optimized tables, but in order to migrate rows from the virtual delta rowgroup into compressed rowgroups, you can use: sys.sp_memory_optimized_cs_migration. REBUILD is supported, but only for changing the bucket count for HASH indexes.
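
A sketch of the migration call, assuming the documented @object_id parameter and the dbo.InMem table used earlier in this post:

EXEC sys.sp_memory_optimized_cs_migration @object_id = OBJECT_ID(N'dbo.InMem');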

8. Filtered indexes

On-disk: For on-disk tables, it’s possible to create a nonclustered columnstore index with a filter.

Memory-optimized: For memory-optimized tables, it’s not possible to create a clustered columnstore index with a filter (and you cannot create a nonclustered columnstore index).

9. Temporary structures

On-disk: I’m not sure why you’d want to do it, but it’s possible to create a clustered columnstore index on a #temp table.

Memory-optimized: The equivalent of #temp tables would be memory-optimized table variables. It’s not possible to create clustered columnstore indexes on them, and besides, memory-optimized table variables are only able to be created within the body of a natively compiled module, and as we discussed earlier in this post, natively compiled modules cannot make use of columnstore indexes.

10. Other items to note

There are a few more wrinkles in the memory-optimized universe for columnstore indexes…..

Durability: The memory-optimized table must be durable (created with DURABILITY = SCHEMA_AND_DATA)

Off-row: You cannot create a clustered columnstore index on a memory-optimized table that has off-row columns

11. (special thanks to Niko Neugebauer)

On-disk: Columnstore contains Vertipaq optimisations (this changes for CCI when we build secondary nonclustered rowstore indexes).

Memory-optimized: Columnstore does not contain Vertipaq optimisations.

12. Computed columns (special thanks to Niko Neugebauer)

On-disk: CCI supports computed columns

Memory-optimized: CCI does not support computed columns (but memory-optimized tables support computed columns and indexes on computed columns)

Repro script:

--##############################
DROP TABLE IF EXISTS [dbo].[InMem];
GO
CREATE TABLE [dbo].[InMem]
(
	 [PK] [INT] IDENTITY(1, 1) NOT NULL
	,[col1] [CHAR](100) NOT NULL
	,PRIMARY KEY NONCLUSTERED HASH ([PK]) WITH (BUCKET_COUNT = 1000)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
 
-- fails
/*
Msg 10794, Level 16, State 76, Line 76
The feature ‘NONCLUSTERED COLUMNSTORE’ is not supported with memory optimized tables.
*/
ALTER TABLE dbo.InMem
ADD INDEX NCCI_InMem NONCLUSTERED COLUMNSTORE (col1);
 
--##############################
DROP TABLE IF EXISTS [dbo].[InMem];
GO
CREATE TABLE [dbo].[InMem]
(
	 [PK] [INT] IDENTITY(1, 1) NOT NULL
	,[col1] [CHAR](100) NOT NULL
	,PRIMARY KEY NONCLUSTERED HASH ([PK]) WITH (BUCKET_COUNT = 1000)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
 
-- fails if DURABILITY = SCHEMA_ONLY
--Msg 35320, Level 16, State 1, Line 39
--Column store indexes are not allowed on tables for which the durability option SCHEMA_ONLY is specified.
 
ALTER TABLE dbo.InMem ADD INDEX CCI_InMem CLUSTERED COLUMNSTORE
 
--##############################
DROP TABLE IF EXISTS [dbo].[InMem];
GO
CREATE TABLE [dbo].[InMem]
(
	 [PK] [INT] IDENTITY(1, 1) NOT NULL PRIMARY KEY NONCLUSTERED
	,[col1] [CHAR](100) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO
-- works
ALTER TABLE dbo.InMem ADD INDEX CCI_InMem CLUSTERED COLUMNSTORE
GO
-- now let’s add a NC index….
ALTER TABLE dbo.InMem ADD INDEX IX_Index1 (col1);
/*
Msg 10794, Level 16, State 15, Line 117
The operation ‘ALTER TABLE’ is not supported with memory optimized tables that have a column store index.
*/
 
--##############################
DROP TABLE IF EXISTS [dbo].[InMem];
GO
CREATE TABLE [dbo].[InMem]
(
	 [PK] [INT] IDENTITY(1, 1) NOT NULL
	,[col1] [CHAR](100) NOT NULL
	,PRIMARY KEY NONCLUSTERED HASH ([PK]) WITH (BUCKET_COUNT = 1000)
	,INDEX CCI_InMem CLUSTERED COLUMNSTORE
	,INDEX IX_InMem1 (col1)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO
 
-- fails if memory-optimized table has a clustered columnstore index
--Msg 12349, Level 16, State 1, Line 68
--Operation not supported for memory optimized tables having columnstore index.
ALTER TABLE dbo.InMem ADD col2 INT NULL;
 
-- fails if memory-optimized table has a clustered columnstore index
--Msg 10794, Level 16, State 15, Line 83
--The operation ‘ALTER TABLE’ is not supported with memory optimized tables that have a column store index.
 
ALTER TABLE dbo.InMem ADD INDEX IX_InMem_Col1 (col1);
 
--##############################
DROP TABLE IF EXISTS [dbo].[InMem];
GO
CREATE TABLE [dbo].[InMem]
(
	 [PK] [INT] IDENTITY(1, 1) NOT NULL
	,[col1] [CHAR](100) NOT NULL
	,PRIMARY KEY NONCLUSTERED HASH ([PK]) WITH (BUCKET_COUNT = 1000)
	--,INDEX CCI_InMem CLUSTERED COLUMNSTORE
	,INDEX IX_InMem1 HASH (col1) WITH (BUCKET_COUNT = 1000)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO
 
-- can’t REBUILD an index on memory-optimized tables, except for modifying the bucket count for hash indexes
-- but you can’t rebuild that index if you also have a columnstore index on the table….
ALTER TABLE dbo.InMem
ALTER INDEX IX_InMem1
REBUILD WITH (BUCKET_COUNT = 1001)
 
--##############################
DROP TABLE IF EXISTS [dbo].[InMem];
GO
CREATE TABLE [dbo].[InMem]
(
	 [PK] [INT] IDENTITY(1, 1) NOT NULL
	,[col1] [CHAR](100) NOT NULL
	,PRIMARY KEY NONCLUSTERED HASH ([PK]) WITH (BUCKET_COUNT = 1000)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
 
GO
 
-- add column
ALTER TABLE dbo.InMem ADD col2 INT NULL;
 
--##############################
/*
We can create a columnstore index on a #temp table
*/
 
CREATE TABLE #temp
(
	col1 INT NOT NULL
);
GO
-- succeeds
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_InMem
ON #temp (col1);
 
/*
Can we create a columnstore index on a memory-optimized table variable?
*/
 
CREATE TYPE dbo.typeTableMem AS TABLE
(
	 [PK] INT NOT NULL PRIMARY KEY NONCLUSTERED
	,[col1] NVARCHAR(255) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON);
GO
DECLARE @Table1 dbo.typeTableMem;
 
-- fails
--Msg 102, Level 15, State 1
--Incorrect syntax near ‘@Table1’.
CREATE CLUSTERED COLUMNSTORE INDEX CCI_Table1 ON @Table1 (col1)
 
-- fails
ALTER TABLE @Table1
ADD INDEX CCI_InMem CLUSTERED COLUMNSTORE
 
--###################################
 
/*
LOBs
Create a table that includes a LOB column with a MAX datatype,
then add a clustered columnstore index
*/
 
DROP TABLE IF EXISTS [dbo].[InMemLOB];
GO
CREATE TABLE [dbo].[InMemLOB]
(
	 [OrderId] [INT] IDENTITY NOT NULL
	,[StoreID] INT NOT NULL
	,[CustomerID] INT NOT NULL
	,[OrderDate] [DATETIME] NOT NULL
	,[DeliveryDate] DATETIME NULL
	,[Amount] FLOAT NOT NULL
	,[Notes] [NVARCHAR](MAX) NULL
	,[ValidFrom] [DATETIME2](7) NOT NULL
	,[ValidTo] [DATETIME2](7) NOT NULL
    ,CONSTRAINT [PK_InMemLOB_OrderID] PRIMARY KEY NONCLUSTERED (OrderId)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO
 
/*
Msg 35343, Level 16, State 1, Line 22
The statement failed. Column ‘Notes’ has a data type that cannot participate in a columnstore index. Omit column ‘Notes’.
Msg 1750, Level 16, State 1, Line 22
Could not create constraint or index. See previous errors.
*/
 
--ALTER TABLE dbo.[InMemLOB] ADD INDEX CCI_InMemLOB CLUSTERED COLUMNSTORE;
 
/*
Off-row
Create a table with non-MAX LOB columns, but they are stored off-row,
then add a clustered columnstore index on a LOB column
*/
--##############################
DROP TABLE IF EXISTS [dbo].[InMemLOB];
GO
CREATE TABLE [dbo].[InMemLOB]
(
	 [OrderId] [INT] IDENTITY NOT NULL
	,[StoreID] INT NOT NULL
	,[CustomerID] INT NOT NULL
	,[OrderDate] [DATETIME] NOT NULL
	,[DeliveryDate] DATETIME NULL
	,[Amount] FLOAT
	,[Notes1] [VARCHAR](8000) NULL
	,[Notes2] [VARCHAR](8000) NULL
	,[Notes3] [VARCHAR](8000) NULL
	,[Notes4] [VARCHAR](8000) NULL
	,[ValidFrom] [DATETIME2](7) NOT NULL
	,[ValidTo] [DATETIME2](7) NOT NULL
    ,CONSTRAINT [PK_InMemLOB_OrderID] PRIMARY KEY NONCLUSTERED (OrderId)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO
-- works
ALTER TABLE InMemLOB ADD INDEX IndOrders_StoreID (StoreID);
GO
-- fails
/*
Msg 41833, Level 16, State 1, Line 29
Columnstore index ‘CCI_InMemLOB’ cannot be created, because table ‘InMemLOB’ has columns stored off-row.
Columnstore indexes can only be created on memory-optimized table if the columns fit within the 8060 byte limit for in-row data.
Reduce the size of the columns to fit within 8060 bytes.
*/
ALTER TABLE dbo.[InMemLOB] ADD INDEX CCI_InMemLOB CLUSTERED COLUMNSTORE
GO

How NOT to benchmark In-Memory OLTP

In forums over the last few months, I’ve seen a number of posts like the following:

  • “I tested native compilation, and it’s not much faster than using interpreted TSQL”
  • “I’m seeing performance issues with memory-optimized tables”

Tools and latency

Sometimes the bottleneck is the tool that’s used for testing. One person was using Ostress.exe and logging output to a file, using the -o parameter. This caused the benchmark they ran for memory-optimized tables to actually perform worse than disk-based tables! The overhead of logging Ostress output to disk created a high degree of latency, but once they removed the -o parameter, In-Memory OLTP performed super-fast for their workload.

Across the wire

Client/server messaging has overhead, and this cannot be improved by using In-Memory OLTP. Whether you’re returning one million rows from a disk-based table or a memory-optimized table, you’re still sending one million rows across the wire, which is not a valid test of In-Memory OLTP performance.

Core count

When you do a proof of concept, you should keep in mind that In-Memory OLTP is designed to work with many cores, and many concurrent processes. If you do your POC on a laptop with a single-threaded workload, In-Memory OLTP is not likely to deliver orders-of-magnitude performance benefits.

Simple queries used for testing Native Compilation

If you test with a query like:

SELECT col1
FROM table1

 

then native compilation will probably not be much faster than disk-based tables. Native compilation will show the greatest benefit when encapsulating complex business logic.

“Test” workloads

Doing a proof of concept with a contrived workload will not accurately determine if your real workload would benefit from migrating some or all data to In-Memory OLTP. The best way to do a proof of concept would be to use a copy of your production database with a realistic workload. You could run against disk-based tables first, and after migrating data to In-Memory, you could re-run and compare the results.

Deploying In-Memory OLTP can increase workload performance in several ways:

  • latch/lock free architecture
  • reduced/enhanced logging – modifications to indexes are not logged, and the entire logging process has been redesigned for memory-optimized tables
  • removal of interpreted TSQL overhead (via native compilation)
  • reduced temp table/tempdb overhead
  • reduced overhead for computation-heavy logic (again via native compilation)

Obviously you’d need to have a reasonable amount of concurrent activity in order to determine if In-Memory OLTP would achieve performance gains for your workload.

Monitoring In-Memory OLTP Resource Pools

For databases that contain memory-optimized data, it’s considered a best practice to create a separate resource pool that limits memory consumption for memory-optimized objects. If you don’t use Resource Governor to constrain the amount of memory allocated to memory-optimized objects, it’s possible that the buffer pool will respond to memory pressure, shrink, and performance for traditional/on-disk tables will be impacted (using Resource Governor also allows you to track memory consumption specific to that database). We must carefully monitor the amount of memory used by the resource pool, and respond when it goes beyond a certain threshold. If we don’t, then we can reach an out-of-memory condition (OOM).
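
Setting that up might look something like this (a minimal sketch; the pool name, database name, and 70% cap are placeholders):

CREATE RESOURCE POOL InMemPool WITH (MAX_MEMORY_PERCENT = 70);
ALTER RESOURCE GOVERNOR RECONFIGURE;

-- bind the database that contains memory-optimized data to the pool;
-- the binding takes effect after the database is taken offline and brought back online
EXEC sys.sp_xtp_bind_db_resource_pool
     @database_name = N'InMemDB'
    ,@pool_name = N'InMemPool';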

This blog post details the various approaches I considered.

Option 1

SQL Agent Performance Condition Alerts can respond to changes on the “Resource Pool Stats” object. This gets us pretty close to the stated goal, but unfortunately you have to hard code a threshold value for “Used memory (KB)”. If you needed to deploy this type of monitoring on many servers, and over time some of them upgrade physical memory, you have to remember to go back and adjust the hard-coded value for the new threshold. I was hoping to be able to evaluate this as a percentage of “Target Memory (KB)”, and then alert above x%, but that doesn’t seem to be possible with Performance Condition Alerts.

image

Option 2

SQL Server Event Alerts can take action when a specific error number occurs. I checked sys.sysmessages for references to the word ‘pool’, but as far as I can tell, none of them allow you to proactively respond to memory conditions before you run out of memory. We have errors like:

  • Disallowing page allocations for database ‘%.*ls’ due to insufficient memory in the resource pool ‘%ls’.
  • There is insufficient memory in the resource pool ‘%ls’ to run this operation on memory-optimized tables.
  • There is insufficient system memory in resource pool ‘%ls’ to run this query.

image

Option 3

Another possibility would be to have a TSQL Agent job that constantly polls sys.dm_os_performance_counters:
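
A sketch of such a polling query, returning used and target memory side by side for each pool (you’d filter instance_name to the pool you created):

SELECT rp.instance_name AS poolName
      ,MAX(CASE WHEN rp.counter_name = 'Used memory (KB)' THEN rp.cntr_value END) AS usedMemoryKB
      ,MAX(CASE WHEN rp.counter_name = 'Target memory (KB)' THEN rp.cntr_value END) AS targetMemoryKB
FROM sys.dm_os_performance_counters AS rp
WHERE rp.object_name LIKE '%Resource Pool Stats%'
      AND rp.counter_name IN ('Used memory (KB)', 'Target memory (KB)')
GROUP BY rp.instance_name;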

This would allow you to calculate the ratio you want to alert on, and then send an email. I like this approach for its simplicity, but you’d have to compare the value of cntr_value for two different rows to calculate the percentage.

Option 4

I looked into WMI, and the documentation is pretty bad about exactly what’s supported for SQL Server. I was surprised to find that the class I needed was part of root\cimv2, not in the SQL Server branch. Note that there can be security issues with accessing WMI, so that might be a concern for your environment (NOTE: WMI is pretty old, so it’s not recommended to keep using it. Instead, we should use CIM, the Common Information Model).

This does exactly what we need:

No matter how much memory is assigned to the pool, it calculates the percentage that’s used. If the value of Used memory exceeds a pre-defined percentage of Target memory, you might send an email, write to the SQL Errorlog (and then notify based on that error number), or use other options for logging and notification.

Transactional Replication meets In-Memory OLTP

Transactional replication hasn’t changed much since it was re-written for SQL 2005. However, with the release of SQL 2014, there is at least one new possibility: memory-optimized tables at the subscriber.

With the release of SQL 2016, some of the restrictions for memory-optimized subscriber tables have been lifted:

  • snapshot schema files that create the memory-optimized tables no longer have to be manually modified (but see “Gotcha #6, silent schema killer” below)
  • tables replicated to memory-optimized tables on a subscriber are no longer limited to the SQL 2014 max row length of 8060 bytes for memory-optimized tables. This seems sort of moot, because published tables cannot themselves be memory-optimized, and are therefore still restricted to 8060 bytes. However, if for some reason you needed to add a lot of columns to the subscriber table that cause it to be greater than 8060 bytes, you can do it. Note that there is no limit on how large a row can be for memory-optimized tables in SQL 2016. The following statement is perfectly valid:
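
The original statement isn’t reproduced here, but as an illustration, a memory-optimized table defined like the following (well over 8060 bytes per row, with a placeholder name) creates without error on SQL 2016:

CREATE TABLE dbo.WideSubscriberTable
(
     PK INT IDENTITY NOT NULL PRIMARY KEY NONCLUSTERED
    ,col1 CHAR(8000) NOT NULL
    ,col2 CHAR(8000) NOT NULL
    ,col3 CHAR(8000) NOT NULL
    ,col4 CHAR(8000) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);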

 

Why would you want to use memory-optimized tables in a subscriber database? There can only be one answer: speed.

Subscriber latency due to data volume could be a result of the following, in combination with each other, or individually:

  • excessive logging – changes to indexes are not logged for memory-optimized tables, and in general logging is much more efficient than for traditional/on-disk tables
  • locking – no locks are taken for DML statements that touch memory-optimized tables
  • blocking – blocking as a result of a transaction making changes to rows is not possible for memory-optimized tables
  • latching – no latches are taken on memory-optimized tables

The design of the In-Memory OLTP engine can alleviate latency due to these issues – BUT – before you start jumping for joy, you’ll need to be aware of the impact of deploying In-Memory OLTP in general.

DBAs love to tune things (indexes, queries, etc.), and subscriber tables are no exception. Until SQL 2014, when memory-optimized subscriber tables were introduced, some of the things that DBAs tuned on the subscriber included:

  • compression settings
  • different ways that the data in subscriber tables can be reinitialized, i.e. TRUNCATE TABLE, DELETE, DROP/CREATE table, or do nothing (these choices are for the ‘Action if name is in use’ section of the ‘Destination Object’, see the next screen shot).
  • custom indexes
  • snapshot isolation

For reinitializing, being able to use TRUNCATE TABLE is a great benefit, because all custom indexes and compression settings are retained for the destination table. If you choose drop/create, all compression settings and custom indexing must be reapplied to the subscriber table upon (re)initialization.

Deployment considerations

Article properties

On the dialog for Article Properties, you’ll need to make sure that both “Enable Memory Optimization” and “Convert clustered index to nonclustered index for memory optimized article” are set to “True”. Despite what you might have read, there is no concept of a “clustered” index for a memory-optimized table. If you have a clustered index on the published table, the DDL will fail when applied on the subscriber unless you set this option.

ReplicationArticleMemOptHighlighted

Subscription Properties

The Subscription Properties can be configured when initially creating the subscription:

ReplicationSubscriptionMemOptHighlight

or from the Subscription Properties dialog, if the subscription already exists:

ReplicationSubscriptionHighlilghted

Gotcha #1, DML support

Reinitialization is likely to happen at some point in the future, and so we’ll need to make the correct choice for “Action if name is in use”, on the same Article Properties dialog.

TRUNCATE TABLE is not supported for memory-optimized tables. If the table must be dropped, you’ll have to reapply scripts to handle any subscriber-level customization.

ReplicationArticleNameInUse_dropdown

Gotcha #2, compression

On-disk tables are stored in pages. Memory-optimized tables are not stored in pages, and therefore don’t support any form of compression (columnstore indexes on memory-optimized tables create a separate compressed copy of the rows, but the primary data source remains the rows in memory).

Gotcha #3, potential WRITELOG bottleneck

All DML operations on durable memory-optimized tables are fully logged, regardless of database-level recovery settings (for more details, see my post on “Optimizing Data Load” here). If deploying In-Memory OLTP solves the latency issues your app was experiencing, WRITELOG is likely to become one of the top waits. This prevents realizing the full potential of deploying In-Memory OLTP, but fear not – as of SQL 2016/SP1, NVDIMM is supported for the transaction log, reducing/eliminating the log as a performance bottleneck. See the link here for more detail.

Gotcha #4, impact on RTO

If by chance you must restore a subscriber database that contains a lot of durable memory-optimized data (I realize that “a lot” is subjective), RTO will be affected. That’s because the number and placement of containers has a significant effect on the amount of time required to recover a database that contains durable memory-optimized data. See my post “In-Memory OLTP: The moving target that is RTO” here for more details. You might also be interested in “Backup and Recovery for SQL Server databases that contain durable memory-optimized data” here.

Gotcha #5, resource consumption

Updates on memory-optimized tables are performed as DELETE + INSERT, and INSERTs create row versions, and the newly inserted row becomes the current version. Older versions consume additional memory, and must be retained as long as any processes that reference them are still executing (like queries running on the subscriber). It’s possible to have long chains of versioned rows, and that means your environment might require additional memory. For a detailed explanation of row versioning, including the Garbage Collection process, see my post on “Row version lifecycle for In-Memory OLTP” here. There are additional considerations if your workload uses memory-optimized table variables (also detailed in that post).

Gotcha #6, silent schema killer

Let’s say you’ve done your homework, and that your configuration for memory-optimized subscriber tables is perfect. There is additional database configuration that must be done to support memory-optimized tables, and without that, your subscriber tables will not be initialized/reinitialized as memory-optimized (they’ll still be created on the subscriber, but will be traditional/on-disk tables). In the stored procedure that executes on the subscriber, there is validation to determine if there is a memory-optimized filegroup for the subscriber database (there are other conditions, but this is the one we’re interested in).

If you lookup the definition of sys.filegroups, it relates to sys.data_spaces, and there we see a column named “type” that can have the following values:

FG = Filegroup
FD = FILESTREAM data filegroup
FX = Memory-optimized tables filegroup
PS = Partition scheme

If the query finds a filegroup of type “FX”, the table is created as memory-optimized, and if not (along with some other conditions), it’s created as a traditional/on-disk table.
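
A quick way to check this yourself on the subscriber database:

SELECT name, type, type_desc
FROM sys.filegroups
WHERE type = 'FX';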

While it seems obvious that you should have already configured your database to have a memory-optimized filegroup, if you skipped that step, there is no warning, error, or other type of message received, stating that the subscriber database is not memory-optimized. Of course, simply having a memory-optimized filegroup is not enough to create memory-optimized tables, because you must also have containers that belong to that memory-optimized filegroup. The “memory-optimized filegroup exists” validation will pass, but the (re)initialization will fail because no containers exist, and you’ll receive an error about that.

Index limitations

As of this writing (SQL 2016, SP1), a memory-optimized table can have a maximum of 9 indexes (if one of them is a columnstore index). That may or may not be an issue for your environment, but it’s a much lower number than traditional/on-disk tables.

Stored procedure execution

A quick review of Interop vs. Native Compilation:

  • Interop – interpreted TSQL as existed prior to SQL 2014. The full TSQL surface area is available with interop mode, and you can access both on-disk and memory-optimized tables.
  • Native Compilation – for maximum speed, you’ll want to use natively compiled stored procedures. There are restrictions for natively compiled modules, the most significant being that they can only reference memory-optimized tables, and the full TSQL surface area is not available. As of SQL 2016/SP1, natively compiled modules don’t support CASE statements or views, and there are many other restrictions. For more details, check “Transact-SQL Constructs Not Supported by In-Memory OLTP” here.

If you execute an UPDATE or DELETE that affects a large number of rows, then that statement is turned into individual UPDATE or DELETE statements that are sent to the distributor, and finally to the subscriber(s). To avoid the overhead of sending all those changes, it’s possible to publish the “execution” of a stored procedure. The documentation says: “..only the procedure execution is replicated, bypassing the need to replicate the individual changes for each row..” Please refer to the document about replicating stored procedure execution here.

The documentation also states that you can customize the stored procedure on the subscriber. Although the documentation doesn’t mention it, the stored procedure can be natively compiled, which should greatly increase performance on the subscriber for transactions that affect a large number of rows. Keep in mind that any changes made to the procedure at the publisher are sent to the subscriber. If this isn’t the behavior you want, disable the propagation of schema changes before executing ALTER PROCEDURE.
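
Disabling schema-change propagation might look something like this at the publisher (the publication name is a placeholder, and you should verify the parameters against the documentation for your topology):

EXEC sp_changepublication
     @publication = N'MyPublication'
    ,@property = N'replicate_ddl'
    ,@value = 0;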

IDENTITY crisis

You’ll likely be disappointed with native compilation if you’re trying to INSERT many rows at the subscriber, and the destination table includes an IDENTITY column. That’s because it’s not possible to insert a row that has an IDENTITY column in a natively compiled stored procedure. Even if you SET IDENTITY_INSERT on before calling the procedure, the insert still fails with: “The function ‘setidentity’ is not supported with natively compiled modules.”

Custom stored procedures

There is a difference between “replicating stored procedure execution”, and using “custom stored procedures”.  Microsoft does not support anything you might create as a “custom stored procedure”, whether or not it’s natively compiled.

Please check the documentation here.

Wrapping up

In-Memory OLTP is steadily making its way into the full feature set offered by SQL Server. If you’re running SQL 2016 SP1, In-Memory OLTP is now included with all editions of SQL 2016, except LocalDB.