
SQL 2019 In-Memory hotness

SQL 2019 is on track to become one of the most awesome releases – the product touches so many realms of the data platform that it’s truly mind-boggling.

Since I have such a keen interest in Hekaton/In-Memory OLTP, when the CTPs are released for a new version of SQL Server, I look forward to any potential announcements about that feature.

So far, there’s been only one publicly announced enhancement for In-Memory OLTP in SQL 2019: system tables in TempDB will be “Hekatonized”. This will forever solve the issue of system table contention in TempDB, which is a fantastic use of Hekaton. I’m told it will be “opt in”, so you can use this enhancement if you want to, but you can also back out of it, which would require a restart of the SQL Server service.
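The exact mechanism hasn’t been made public, but assuming it surfaces as a server-scoped setting, opting in (and backing out) might look something like the sketch below (the syntax here is purely illustrative, not announced):

-- purely illustrative sketch of a server-scoped opt-in switch (hypothetical syntax)
ALTER SERVER CONFIGURATION SET MEMORY_OPTIMIZED TEMPDB_METADATA = ON;

-- backing out would be the reverse; per the above, either change
-- would only take effect after a restart of the SQL Server service
ALTER SERVER CONFIGURATION SET MEMORY_OPTIMIZED TEMPDB_METADATA = OFF;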

But there’s at least one other enhancement that’s not been announced, although the details of its implementation are not yet known.

When people start to research the Hekaton feature, most are shocked to learn that CHECKDB does not verify anything about durable In-Memory tables: it silently ignores them.
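If you want to see this for yourself, a minimal setup looks something like the following (the database name, table name, and file paths are mine):

-- minimal setup to observe how DBCC CHECKDB treats a durable memory-optimized table
-- (database name, table name, and file paths are placeholders)
CREATE DATABASE HekatonTest
ON PRIMARY
    (NAME = HekatonTest_data, FILENAME = 'C:\SQLData\HekatonTest.mdf'),
FILEGROUP HekatonTest_mod CONTAINS MEMORY_OPTIMIZED_DATA
    (NAME = HekatonTest_mod, FILENAME = 'C:\SQLData\HekatonTest_mod')
LOG ON
    (NAME = HekatonTest_log, FILENAME = 'C:\SQLData\HekatonTest_log.ldf');
GO

USE HekatonTest;
GO

CREATE TABLE dbo.inmem_table
(
    ID INT IDENTITY NOT NULL PRIMARY KEY NONCLUSTERED,
    SomeValue VARCHAR(50) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

-- on SQL 2017 the output says the table was skipped;
-- on the SQL 2019 CTPs it claims to have processed rows and pages
DBCC CHECKDB (HekatonTest);
GO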

That appears to have changed in SQL 2019, although either the informational message about what it does is misleading, or behind the scenes it does something different.

This is the output for DBCC CHECKDB of a memory-optimized database in SQL 2017:

Object ID 949578421 (object ‘inmem_table’): The operation is not supported with memory optimized tables. This object has been skipped and will not be processed.

(the emphasis was added by me)

This is the output for DBCC CHECKDB of a memory-optimized database in SQL 2019:

DBCC results for ‘inmem_table’.
There are 101 rows in 3 pages for object “inmem_table”.

Why do I say the message is misleading?

Because durable data for memory-optimized tables is not stored in pages, but instead in a streaming fashion in files known as checkpoint file pairs (or data and delta files). Also, while it’s true that there are 101 rows in this table, the engine pre-creates a number of data and delta files, and DBAs would sleep a lot better at night if all of those files were verified as being corruption-free.
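Those pre-created files are visible through a DMV; a quick way to summarize them for the current database looks something like this:

-- summarize the checkpoint file pairs (data and delta files) for the current database
SELECT file_type_desc,
       state_desc,
       COUNT(*) AS file_count,
       SUM(file_size_in_bytes) / 1048576 AS total_size_mb
FROM sys.dm_db_xtp_checkpoint_files
GROUP BY file_type_desc, state_desc
ORDER BY file_type_desc, state_desc;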

We’ll just have to stay tuned to the future CTPs and RTM of SQL 2019 to see how all of this shakes out.

In Pursuit of Batch Mode on Rowstore

In her excellent blog post entitled “Batch Mode Hacks for Rowstore Queries in SQL Server”, Kendra Little (b|t) pays homage to Itzik Ben-Gan, Niko Neugebauer, and others.

The solutions she details will indeed result in batch mode for rowstore queries. I had already seen the solution proposed by Mr. Ben-Gan, and as is typically the case, a simple example is given to illustrate the concept, and these types of examples are almost always single-threaded.

I have a client that used Itzik Ben-Gan’s solution of creating a filtered nonclustered columnstore index to achieve batch mode on a rowstore (in fact I proposed that the client consider it). They have an OLTP system, and often perform YTD calculations. When they tested, processing time was reduced by 30 to 50 percent, without touching a single line of application code. If that ain’t low-hanging fruit, I don’t know what is.
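For anyone who hasn’t seen the trick: you create a nonclustered columnstore index whose filter can never be true, so the index stays empty, but its mere presence makes batch mode available to the optimizer. A sketch against a hypothetical table:

-- hypothetical table and column names; the filter can never be true,
-- so the index holds no rows but still enables batch mode for queries on the table
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_BatchModeTrick
ON dbo.SalesDetail (SalesDetailID)
WHERE SalesDetailID < 0 AND SalesDetailID > 0;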

However, during testing, I noticed some intermittent blocking that didn’t make sense to me. But I couldn’t nail it down, and they went live with the “filtered nonclustered columnstore index” solution.

Once they deployed – and there was a lot of concurrency – I could see what had eluded me during my proof of concept: blocking in tempdb.

The repro is super-simple:

  • Create a table, and insert some sample data
  • Create a stored procedure that does the following:
    SELECT from that table into a #temp table
    Create a filtered nonclustered columnstore index on the #temp table, using a filter that cannot possibly be true, i.e. IDcolumn < 0 and IDcolumn > 0
    SELECT from the #temp table (return results)

From the first connection, issue a BEGIN TRAN and execute the stored procedure. Note the spid for this connection. Then open a separate connection, issue a BEGIN TRAN, and execute the stored procedure. Note the spid for this connection.

You’ll notice that the first connection has no issues, but when you execute the proc in the second connection, it gets blocked.

When you peel back the layers, you can see that the first connection requests and obtains a schema modification lock on the #temp table (Sch-M).

The second connection requests a schema stability lock on the same object_id, and is blocked (Sch-S).
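While both transactions are still open, the locks are easy to see in sys.dm_tran_locks; a query along these lines shows the granted Sch-M from the first spid and the waiting Sch-S from the second:

-- observe the incompatible locks taken on the same tempdb object_id
SELECT request_session_id,
       resource_type,
       resource_associated_entity_id AS object_id,
       request_mode,
       request_status
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID('tempdb')
  AND resource_type = 'OBJECT'
ORDER BY request_session_id;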

To be clear, what’s happening here is that separate connections are placing incompatible locks on the same temporary object in tempdb, which is supposed to be impossible (but in fact the object_id is the same). My gut tells me that this is perhaps related to metadata when creating the NCCI, but I couldn’t prove that.

It should be noted that if you remove the filter on the NCCI, there is no blocking, and the issue also persists if you use a regular filtered nonclustered index (not columnstore). Of course, in the real world, removing the filter is not an option, because what we’re interested in is speed, and if there’s one thing that columnstore indexes are not fast at, it’s being created.

Hopefully if/when Microsoft fixes this, it will be back ported to earlier versions of SQL Server.

I can reproduce this on SQL 2016 and 2017 (and even 2019, but that’s not really fair, cause it’s not RTM yet…)

If you think that Microsoft should fix this, please upvote my Azure User Voice entry here.

Repro code:
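(What follows is a minimal sketch based on the steps above; the table, procedure, and column names are placeholders, not the exact code from my client’s system.)

-- 1) create a table and insert some sample data
CREATE TABLE dbo.BatchModeSource
(
    ID INT IDENTITY PRIMARY KEY,
    SomeDate DATE NOT NULL,
    Amount DECIMAL(18, 2) NOT NULL
);
GO

INSERT dbo.BatchModeSource (SomeDate, Amount)
SELECT DATEADD(DAY, CAST(n % 365 AS INT), '20180101'),
       CAST(n % 1000 AS DECIMAL(18, 2))
FROM (SELECT TOP (10000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
      FROM sys.all_columns AS a CROSS JOIN sys.all_columns AS b) AS x;
GO

-- 2) stored procedure: SELECT into a #temp table, create a filtered NCCI
--    with a filter that can never be true, then return the results
CREATE OR ALTER PROCEDURE dbo.BatchModeRepro
AS
BEGIN
    SELECT ID, SomeDate, Amount
    INTO #BatchModeTemp
    FROM dbo.BatchModeSource;

    CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_Temp
    ON #BatchModeTemp (ID)
    WHERE ID < 0 AND ID > 0;

    SELECT ID, SomeDate, Amount
    FROM #BatchModeTemp;
END;
GO

-- connection 1:
-- BEGIN TRAN;
-- EXEC dbo.BatchModeRepro;   -- note the spid; leave the transaction open

-- connection 2:
-- BEGIN TRAN;
-- EXEC dbo.BatchModeRepro;   -- blocked, waiting on a Sch-S lock in tempdb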