Category Archives: DBA

Trials and Tribulations of SQL Transactional Replication

I’ve dealt with SQL replication for decades, and in a sense, not a lot has changed. I mean this from a basic configuration and troubleshooting perspective, though it has in some ways been extended a bit through the years, for new SQL Server features (like In-Memory OLTP, Azure, etc.).

Many refer to replication as the Swiss Army Knife of SQL Server, and I can understand why, but with this “extreme flexibility” come “extreme shortcomings”, and this post will delve into some of the issues you should be aware of.

Many have blogged about SQL replication, but Kendra Little (b|t) stands out – her posts are clear and concise, and you will always learn something from her.

But despite having read all of her posts about replication, and those of many others, when I started my current role almost 3 years ago, I was woefully ignorant of many facets of the replication feature.

This post is my effort to save you some of the pain and head scratching that I’ve endured!

Gotcha #1: you want to go parallel for initial sync

Much of the pain of SQL replication occurs when tables/articles are large. Setting up subscribers to receive terabytes of data requires changing some very specific things in very specific places, in order to have a chance of going parallel.

Unfortunately, the out-of-the-box defaults for replication place you squarely in the single-threaded camp. The Snapshot Agent can be configured with a parameter named maxbcpthreads, which can make snapshot generation much faster by using parallel threads.

The catch?

It’s ignored unless the sync_method for the publication is native. You might have looked at the docs and determined that you’re already getting native snapshots because you’re using the default sync_method of concurrent – but with concurrent, your snapshot is still generated by a single thread. To get parallel snapshot generation, you must have a sync_method of native and use the maxbcpthreads switch on the Snapshot Agent job step. You have been warned!
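Here’s a rough sketch of what that looks like – the publication name and thread count are hypothetical:

-- sync_method must be native for maxbcpthreads to be honored
EXEC sp_addpublication
    @publication = N'MyPub',        -- hypothetical name
    @sync_method = N'native',
    @repl_freq   = N'continuous';

-- then edit the Snapshot Agent job's "Run agent." step and append the switch, e.g.:
--   -MaxBcpThreads 8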

Gotcha #2: Agent job naming sucks

Deploying replication generates tons of SQL Agent jobs, especially if you use push subscriptions and host the distribution database on the publisher. If you have long publication names (or even if you don’t), when it comes time to look for a specific job, it can be difficult to discern which ones generate snapshots, which handle distribution, and so on. That’s why I always rename my replication jobs according to function/category: I append “_Snapshot” or “_Distribution” to the relevant jobs, using a script.
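Something along these lines is what I mean – a sketch only, so test it somewhere safe first; it leans on the built-in REPL-Snapshot and REPL-Distribution job categories:

-- append a suffix to replication jobs based on their category
DECLARE @job_id UNIQUEIDENTIFIER, @name SYSNAME, @newName SYSNAME;

DECLARE jobs CURSOR LOCAL FAST_FORWARD FOR
    SELECT j.job_id, j.name,
           CASE c.name
                WHEN 'REPL-Snapshot'     THEN j.name + N'_Snapshot'
                WHEN 'REPL-Distribution' THEN j.name + N'_Distribution'
           END
    FROM msdb.dbo.sysjobs AS j
    JOIN msdb.dbo.syscategories AS c ON c.category_id = j.category_id
    WHERE c.name IN ('REPL-Snapshot', 'REPL-Distribution')
      AND j.name NOT LIKE N'%\_Snapshot' ESCAPE '\'
      AND j.name NOT LIKE N'%\_Distribution' ESCAPE '\';

OPEN jobs;
FETCH NEXT FROM jobs INTO @job_id, @name, @newName;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC msdb.dbo.sp_update_job @job_id = @job_id, @new_name = @newName;
    FETCH NEXT FROM jobs INTO @job_id, @name, @newName;
END
CLOSE jobs;
DEALLOCATE jobs;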

Renaming the jobs might make it impossible to bring up the proper job when drilling down from other places in the UI, such as Replication Monitor, but I don’t use the monitor for that – it’s a sacrifice I’m willing to make.

Gotcha #3: Password rotation can break replication

Most companies have a password rotation policy for domain users, as well as domain service accounts. If you use a domain account for anything having to do with replication agent security, obviously the SQL Server environment will have no knowledge of this password rotation. So while using domain accounts is sometimes unavoidable when using replication, the runbook you use for password rotation should include this often-overlooked area.

Gotcha #4: Replication cleanup issues

The default batch size for replication cleanup is 5,000 records. If you’re in charge of a very busy system, the replication cleanup process may never catch up, and your distribution database will continue to grow. Depending on where you have the distribution database located, this could affect your production databases, because the drive can run out of space.

The cleanup job runs often by default, and so if you change the batch size to something larger, it might cause blocking in your system.

A different approach – if you have the storage to handle it – is to schedule the cleanup job so that it doesn’t run at all during the week, and use a large batch size when it runs on the weekend (assuming you have a maintenance window to actually do this type of work on weekends).
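A sketch of that schedule change follows – the job name below is the typical default on a Distributor, but verify yours, and the frequency values are only an example:

DECLARE @schedule_id INT;

SELECT @schedule_id = s.schedule_id
FROM msdb.dbo.sysjobs AS j
JOIN msdb.dbo.sysjobschedules AS js ON js.job_id = j.job_id
JOIN msdb.dbo.sysschedules AS s ON s.schedule_id = js.schedule_id
WHERE j.name = N'Distribution clean up: distribution';

EXEC msdb.dbo.sp_update_schedule
     @schedule_id            = @schedule_id,
     @freq_type              = 8,     -- weekly
     @freq_interval          = 65,    -- Saturday (64) + Sunday (1)
     @freq_recurrence_factor = 1,
     @freq_subday_type       = 4,     -- every N minutes
     @freq_subday_interval   = 10;    -- keep the 10-minute cadence, weekends only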

Gotcha #5: Not likely to get compression for your snapshots

The rules around getting compression for snapshots are baffling and mysterious. There is a “2GB per file” rule that I can never seem to satisfy, and so I’ve never been successful at creating compressed snapshots. Mind you, generating compressed snapshots is not likely to be any faster than uncompressed snapshots, and in fact is likely to be slower. But if you have less-than-great bandwidth, it would be great to not send all that data across the wire.

The documentation for snapshot compression is located here (search for “Compressed snapshots”).

Gotcha #6: Initializing from a backup will probably not float your boat

Now, before you comment below and tell me that you’ve read the paper written by Robert Davis and Kenneth Fischer about initializing from a filegroup backup: most databases in the wild have tons of user tables on the PRIMARY filegroup, and so are not really candidates for what’s described in that paper.

The main issue is that you can’t really recover just a slice (or multiple slices) of your database to initialize a subscriber. You have to restore PRIMARY plus the filegroups that you’re interested in, and that’s too large 99% of the time. Or you restore your entire database backup and then delete what you don’t want – but then you’ll have to disable/remove foreign keys, etc.

Other than for demos and/or really small databases, I don’t see how this path will work.

Gotcha #7: No easy/good way to filter horizontally if > 1 table is involved

See my thread with the late, great Robert Davis here.

Gotcha #8: Log Reader woes

The Log Reader Agent will sometimes stop running, and if that happens, you won’t receive any warning or error unless you’re polling for it. A better approach is to add retry attempts to the job step. That works, but you’ll have to remember to re-add the retries for each Log Reader Agent job when you build a new environment.
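A sketch of adding retries via TSQL – the job name is hypothetical, and you should confirm which step number is the “Run agent.” step in your environment:

EXEC msdb.dbo.sp_update_jobstep
     @job_name       = N'PUBSERVER-PubDB-1',   -- your Log Reader Agent job
     @step_id        = 2,                      -- typically the "Run agent." step
     @retry_attempts = 2160,                   -- e.g. retry every minute for 36 hours
     @retry_interval = 1;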

Gotcha #9: DR sucks with SQL replication

How will you handle setting up replication in a DR environment? One possibility is to automagically script the entire PRD replication environment on some type of schedule, and then modify the server names in those scripts for deployment to DR. But no matter how you handle deploying replication to DR, you will still have to initialize the new DR subscribers, and as described in Gotcha #1, that can be painful.

Gotcha #10: Rows deleted at the subscriber will break distribution

For transactional replication, I can’t think of a good reason that writes should be allowed on subscription databases. But if they are allowed, eventually someone will delete a row that appears in the distribution database, and when the change to that row is attempted at the subscriber, it will of course fail. And all of the rows behind it in the distribution database will also fail to be distributed.

You can work around this – even on a temporary basis, just to get past it – by choosing a different profile for the Distribution Agent. There’s even a canned profile (“Continue on data consistency errors”) for getting past this specific error. Switching profiles will allow distribution to resume, so your distribution database will stop growing, but unless you want to leave that profile in place permanently, you’ll eventually have to reinitialize at least the offending article in the relevant publication, and switch back to the default/former profile.
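If you want to see which canned profiles exist on your Distributor, something like this lists the Distribution Agent profiles and their parameters:

SELECT p.profile_id, p.profile_name, p.description
FROM msdb.dbo.MSagent_profiles AS p
WHERE p.agent_type = 3;   -- 3 = Distribution Agent

SELECT par.profile_id, par.parameter_name, par.value
FROM msdb.dbo.MSagent_parameters AS par
JOIN msdb.dbo.MSagent_profiles AS p ON p.profile_id = par.profile_id
WHERE p.agent_type = 3;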

Gotcha #11: Nothing sucks worse than schema changes to replicated tables

Why? Because they force you to do the following:

  • remove the table from replication
  • make the schema change at the publisher
  • add the table back to the subscription
  • reinitialize the subscriber(s) (that’s right, you’re back to Gotcha #1 if your tables are large)

If done properly, the snapshot you generate will only include the table you removed from the subscription and added back. But if that table is large, it can still be painful. Also, it’s easy to make a mistake when removing a table/article from a subscription.
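A rough sketch of that cycle – publication, article, and subscriber names are all hypothetical, and you’ll want to script your existing article settings (sp_helparticle) before dropping anything:

-- 1) remove the article from the subscription(s) and the publication
EXEC sp_dropsubscription
    @publication = N'MyPub',
    @article     = N'BigTable',
    @subscriber  = N'all';

EXEC sp_droparticle
    @publication = N'MyPub',
    @article     = N'BigTable',
    @force_invalidate_snapshot = 1;

-- 2) make the schema change at the publisher here

-- 3) add the article back, and re-add the subscription for it
EXEC sp_addarticle
    @publication   = N'MyPub',
    @article       = N'BigTable',
    @source_object = N'BigTable',
    @force_invalidate_snapshot = 1;

EXEC sp_addsubscription
    @publication    = N'MyPub',
    @subscriber     = N'SUBSERVER',
    @destination_db = N'SubDB',
    @article        = N'BigTable';

-- 4) run the Snapshot Agent; it should generate a snapshot for just this article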

Of course, using AGs removes this hassle, because schema changes will simply flow through to secondary replicas via the log blocks that are sent to them (your send and redo queues will explode for large tables…). But hey, if you could be running AGs, you’d probably never have deployed replication in the first place.

Gotcha #12: CI/CD woes

Thought your CI/CD pipeline meant “push button” deployments? Think again, because CI/CD has no idea that you have tables that are replicated. This causes DBAs to get involved with deployments due to Gotcha #11. 

Gotcha #13: Monitoring for replication errors/latency

There’s nothing out-of-the-box that I’m aware of that properly handles monitoring the facets of transactional replication that most shops would be interested in.

For example – it might be fine during the day to have 10 million rows pending, but at night when ETL jobs are executing, that threshold might need to be increased to 20 million.

But what if the number of pending rows was within the allowable threshold but did not decrease after 20 minutes? Or after 40 minutes? Every shop will likely want to tweak this kind of monitoring, but you’ll probably have to roll your own and execute it across all environments where replication is deployed, and take appropriate action.

Sometimes, everything can look fine – no latency – but replication is “down”, because the log reader is down. My point is there are many things that can contribute to a replication failure.

Kendra has an excellent post on monitoring replication here.

Be forewarned that any script that literally counts the number of undistributed commands can be very slow in doing so. Imagine an environment where several publications have millions of rows pending in the distribution database – it would probably take longer to count them all than you have as a threshold for monitoring latency.

That’s why, when I had to solve this problem, I cycled through each publication, stored the output of sp_replmonitorsubscriptionpendingcmds in a table, and then alerted if required.
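Here’s a bare-bones sketch of that approach – the publisher/publication/subscriber names are hypothetical, and in real life you’d wrap the call in a loop over your publications:

-- run in the distribution database
CREATE TABLE dbo.ReplPendingCmdHistory
(
    CaptureTime          DATETIME2(0) NOT NULL DEFAULT SYSDATETIME(),
    Publication          SYSNAME      NOT NULL,
    PendingCmdCount      BIGINT       NOT NULL,
    EstimatedProcessTime INT          NOT NULL
);
GO
CREATE TABLE #pending (pendingcmdcount BIGINT, estimatedprocesstime INT);

INSERT #pending (pendingcmdcount, estimatedprocesstime)
EXEC sp_replmonitorsubscriptionpendingcmds
    @publisher         = N'PUBSERVER',
    @publisher_db      = N'PubDB',
    @publication       = N'MyPub',
    @subscriber        = N'SUBSERVER',
    @subscriber_db     = N'SubDB',
    @subscription_type = 0;     -- 0 = push

INSERT dbo.ReplPendingCmdHistory (Publication, PendingCmdCount, EstimatedProcessTime)
SELECT N'MyPub', pendingcmdcount, estimatedprocesstime
FROM #pending;

-- alert here if the count exceeds your threshold, or hasn't decreased after N minutes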

Gotcha #14: Immediate_sync is misunderstood

Most DBAs don’t really understand what this setting is used for, or that the out-of-the-box settings for it can cause you to have to…reinitialize.

If a subscription is set to sync immediately (immediate_sync = 1, which is the default), the data is kept in the distribution database for a specific period of time (72 hours by default). This allows new subscribers to be synced “immediately”, without having to create a new snapshot. But if a subscriber goes offline for too long, you will have to…..reinitialize.

In my current role, I set immediate_sync to 0 for all subscriptions. I’ve only got 2 subscribers, and I’m ok with having to generate a new snapshot for a new subscriber (which never happens). 

But I don’t want SQL Server telling me when I have to reinitialize.
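For reference, this is roughly how you turn it off for an existing publication (the publication name is hypothetical); allow_anonymous depends on immediate_sync, so it has to be switched off first:

EXEC sp_changepublication
    @publication = N'MyPub',
    @property    = N'allow_anonymous',
    @value       = N'false';

EXEC sp_changepublication
    @publication = N'MyPub',
    @property    = N'immediate_sync',
    @value       = N'false';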

Mohammed Moinudheen wrote an excellent post here about retention periods for replication.

Mohammed Moinudheen also wrote an excellent post here about immediate_sync.

Gotcha #15: Replication takes precedence over AGs, by default

If you replicate a subset of data from a database that belongs to an asynchronous AG, you should know that by default, the Log Reader won’t harvest transactions that have not yet been sent to the AG secondary replicas, so they can’t be delivered to subscribers.

That means your production database transaction log can’t be cleared of these transactions, and will continue to grow, potentially affecting your production environment. You’ll need Trace Flag 1448 to solve this, and luckily it takes effect immediately.
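Enabling it is just a global trace flag (and you’ll probably also want -T1448 as a startup parameter so it survives a service restart):

DBCC TRACEON (1448, -1);    -- -1 = global
DBCC TRACESTATUS (1448);    -- confirm it's enabled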

Andy Mallon wrote an excellent post on TF 1448 here.

TF 1448 only applies to asynchronous replicas. If you’re running synchronous replicas, the default behavior still applies – subscribers don’t receive new data until the transactions have been hardened on the synchronous replicas.

More “gotchas”

Undocumented “gotchas” of transactional replication by Robert Davis can be found here.

I’m sure there are tons more gotchas for transactional replication, but these are the most glaring I’ve tripped across.

Hekatonized Tempdb

At PASS Summit 2018, I attended a session led by Pam Lahoud (t) of the SQL Tiger Team, entitled “TempDB: The Good, The Bad, and The Ugly”. If you have access to the PASS recordings from 2018, I highly recommend watching this session.

It was a really fantastic presentation, detailing the full history of how the SQL Server engineering team has attempted to optimize TempDB in various ways. The two problems that busy servers can have with regard to TempDB are allocation page contention and metadata contention, and the engineering team should be applauded for its clever approaches to solving these types of contention throughout the years. To be clear, all of the optimizations relate to temp table usage in stored procedures, not scripts.

However, none of those solutions for contention scaled – some only relocated the issue. As part of Pam’s presentation, she did a demo with a single TempDB metadata table that was “Hekatonized”  – actually using the In-Memory OLTP engine – and the difference in throughput was significant. She said that Microsoft intends to convert the remaining system tables in TempDB to be memory-optimized (you’ll need SQL 2019 CTP 3.0 or later to test).

So once you’ve got it installed or have started a container running it – how do you automagically convert TempDB system tables to be memory-optimized? With TSQL, of course:

ALTER SERVER CONFIGURATION SET MEMORY_OPTIMIZED TEMPDB_METADATA = ON;

Like other changes to TempDB, a restart of the SQL Server service is required for the new memory-optimization to take effect. Once the service is restarted, the system tables in TempDB are memory-optimized (it should be that way for RTM, but in CTP 3.0, it could be the case that not all system tables have been converted to Hekaton). You can reverse this setting with the following command, followed by another restart of the SQL Server service:

ALTER SERVER CONFIGURATION SET MEMORY_OPTIMIZED TEMPDB_METADATA = OFF;
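To confirm which mode you’re in after the restart, you can check the server property (the property name below is what the SQL 2019 documentation lists; 1 means TempDB metadata is memory-optimized):

SELECT SERVERPROPERTY('IsTempdbMetadataMemoryOptimized') AS IsTempdbMetadataMemoryOptimized;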

Unless your workload was truly hammering TempDB, you probably won’t see much difference in TempDB performance.

We need to be careful with this new In-Memory power, because depending on workload characteristics, we might need a whole lot more memory just to handle what’s going on in TempDB. Also, if you have scripts and/or monitoring that interrogate system tables in TempDB, you might be affected by some of the restrictions that apply when TempDB system tables are memory-optimized. As the CTP release notes state:

“A single transaction may not access memory-optimized tables in more than one database. This means that any transactions that involve a memory-optimized table in a user database will not be able to access TempDB system views in the same transaction.”

Another thing I want to make clear is that this new TempDB optimization only affects system tables in TempDB – not the tables you create; #table and ##table do not become memory-optimized as a result of this new feature.

After all, its name is MEMORY_OPTIMIZED_TEMPDB_METADATA.

Database Administration: A Point of Departure

In this post, I want to delve into one aspect of managing your career as a DBA that’s not often discussed: being a DBA is likely best used as a point of departure for a different, but related, role.

Don’t believe me?

Just look at some of the folks who used to be DBAs and have moved on:

Erin Stellato, now a consultant for SQL Skills
Glen Berry, now a consultant for SQL Skills
Jonathan Kehayias, now a consultant for SQL Skills
Brent Ozar, created a consulting and training company
Argenis Fernandez, worked for Pure Storage, now works on the SQL Server Tiger Team
Brian Carrig, now works on the SQL Server Tiger Team
Mike Fal, database manager at Rubrik
Denny Cherry, started his own consulting company
Joey D’Antoni, now a consultant for Denny Cherry and Company
Chris Adkin, pre-sales for Pure Storage
Thomas Grohser, architect for a hedge fund
Kendra Little, worked for Brent Ozar Unlimited, now works for Redgate

and the list goes on….

Why have all of these great technologists abandoned the DBA role? I’m guessing that there are several possible reasons:

  • who wants to do the same thing for 40 years?
  • more financial opportunity in sales and/or consulting
  • they move up, becoming database managers

Under specific circumstances, you might have a long career as a DBA, but if you’re not smart about it, your options can be limited.

Not mentioned so far is the fact that the likelihood of getting good/great roles as a DBA diminishes as you get older. Here in the USA, it’s not legal to ask a job applicant their age, but employers often get around that by asking you what year you graduated high school (why on earth would that be relevant, except to determine your age?).

How many older, gray-haired DBAs do you see in the field? Not too many, I’m guessing. An exception to this might be a DBA that has been at the same company for a very long time. Or someone who was hired specifically because they have decades of experience.

So, is being a production DBA the exclusive domain of younger technologists? I suppose that depends on where you draw the line between “young” and “mature”. For example, one of the all-time greatest DBAs was Robert Davis, aka @SQLSoldier. While he may have had a stint or two outside the DBA role, for almost all of his career, he worked as a DBA. Unfortunately, Robert passed away in early 2018, so we’ll never know if he would have stayed on that path. He did have consulting jobs on the side, but once he started to work at a hedge fund here in NYC, the hoops he had to jump through to get “approval” weren’t worth the hassle, so he no longer did outside work. 

Consulting shops are often the next stop in a DBA’s career. But there can be a lot of travel when working for a consulting shop, and that lifestyle isn’t for everyone (especially if you have young kids).

A superset of the DBA skill set would be that of an architect, which requires deep expertise in a variety of areas, such as storage, networking, HA/DR, perhaps Azure and/or AWS.

If you have fewer working years left than you’ve already worked, you might consider staying in the DBA role for the remainder of your career. But that role is evolving, and you’ll probably go the way of the dinosaurs unless you also evolve.

If you’re a younger DBA – how sure are you that what you do on a daily basis will not be automated away by the cloud over the next decade?

Sharks must keep moving, or they’ll die. DBAs are pretty much the same, but have to be smarter than sharks about where they move, and what they move into.

What dedication and community engagement can do for your career

In July of 2012, I started a new role, but after a few months, I could see that there wouldn’t be much opportunity for me to learn there, and/or that the pace of learning was simply too slow. The biggest problem I faced was that I had to move forward in the professional development realm on my own time. A brief overview of my schedule looked like this:

  • Monday to Friday: work from 10am to 6pm, get home and study SQL Server until 2am
  • Saturday and Sunday: study SQL Server from 10 am until 2am

Yeah, that’s not much of a life – or to be brutally honest, that’s no life at all, and I did this from 2012 until just last week. I’d say that at least 45 to 48 weeks of the year, I stuck to that schedule.

My work role was split between SQL development and DBA tasks, and it was a pretty small company. I was trying to get a dedicated DBA role, but that type of role usually exists at larger companies, and without recent large-company experience, I was often not a good fit for the roles I was seeing. Add to that the fact that I have zero SSIS experience (many roles require it), and we have a stumbling block to moving forward/upward.

SET PERSEVERANCE ON

In the interest of attaining advanced knowledge of SQL Server, I attended the following training and conferences since 2011:

  • 2011, SQL Skills Immersion Event (Performance Tuning)
  • 2013, SQL Cruise – on this trip I met Aaron Bertrand, Mike Fal, Stacia Varga, Brandon Leach, Buck Woody, Tim Ford, and others
  • 2014, Brent Ozar – Senior SQL DBA
  • 2015, Allan Hirt , Mission Critical SQL Server
  • 2016, Edwin Sarmiento online HA class
  • PASS Summit, 2013, 2015, 2016, 2017, 2018

I devoured blog posts from Brent, Jonathan Kehayias, Robert Davis, Paul Randal, Kimberly Tripp, Paul White, Aaron Bertrand, Kendra Little, Edwin Sarmiento, Allan Hirt, and many others.

As is often said, if you really want to learn something, you’ve got to teach it, and that’s why since 2016 I’ve been blogging and presenting at many SQL Saturdays across the USA.

I always believed that my next role would come from engagement with the SQL community – that someone out there would recognize my dedication, passion for learning, and desire to help others. I came close to getting a new role a few times, but nothing panned out, although during the initial phone screen for one of the positions I applied for, the interviewer told me that he had solved a production problem from reading one of my blog posts.

Not long ago I saw a post from a colleague on the NYC SQL user group message board about needing to fill a role for a strong DBA, and I’m thrilled to write that I’ve got a new dedicated DBA role at an international financial powerhouse. What struck me during the interview process was that I was not asked a single technical question about SQL Server – it seems my reputation had preceded me.

There are benefits to dedicating yourself to a life of learning, and helping others — you just never know when it might pay off.

 

Frameworks O How I Hate Thee

I’ve seen a lot of tech come and go in my time, but nothing I’ve seen vexes me more than “framework generated SQL”.  No doubt I’m ignorant about some aspects of it, but its usage continues to confound many a DBA.

To troubleshoot one of these bad boys, you might consider Google Glass, but it will fail you. The first issue is that these crappy frameworks generate a code tsunami that’s almost (or actually) unreadable by humans. The tables you know and love are aliased with names such as “Extent1” and the like. Multiple nestings of that, and it’s all gobbledygook aka spaghetti code.

Developers love frameworks, because they don’t have to spend time coding/maintaining SQL queries. I’m guessing it’s mostly used for UI, cause if it’s used for much more than that, performance is likely to absolutely suck. So you wicked smart developers theoretically save a bunch of money generating SQL for your UI, but then – because you have a totally crap schema – you have to pay expensive DBAs to drill down and resolve performance issues. And in the end, if your code and/or schema is bad enough, you relent, and convert it to a stored procedure call, which is exactly what your sharp DBA told you to do 20 billable hours ago.

A typical response to why developers use frameworks for databases is that they want their code to be “portable”. How many times have you seen a shop change database platforms? I could understand that argument if you used frameworks for all your code, reports, UI, everything. But if you use frameworks for the UI, and stored procedures for reporting, I guarantee that you’d have a heck of a time making that stored procedure code “generic”, such that it could be used against Oracle, Sybase, SQL Server, or DB2.

The more I think about it, the more I realize I should totally love frameworks. I say that because if they were not in use, think of all the times I’d be stuck trying to improve performance, when now I can simply say: “Hey – that’s a framework query, and there’s absolutely nothing I can do about it – have a nice day….”

 

SQL 2019 In-Memory hotness

SQL 2019 is on track to become one of the most awesome releases – the product touches so many realms of the data platform, it’s truly mind boggling.

Since I have such a keen interest in Hekaton/In-Memory OLTP, when the CTPs are released for a new version of SQL Server, I look forward to any potential announcements about that feature.

So far, there’s been only one publicly announced enhancement for In-Memory OLTP in SQL 2019: system tables in TempDB will be “Hekatonized”. This will forever solve the issue of system table contention in TempDB, which is a fantastic use of Hekaton. I’m told it will be “opt in”, so you can use this enhancement if you want to, but you can also back out of it, which would require a restart of the SQL Server service.

But there’s at least one other enhancement that’s not been announced, although the details of its implementation are not yet known.

When people start to research the Hekaton feature, most are shocked to learn that CHECKDB does not verify anything about durable In-Memory tables: it silently ignores them.

That appears to have changed in SQL 2019, although either the informational message about what it does is misleading, or behind the scenes it does something different.
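If you want to compare for yourself, a minimal repro looks something like this (database, path, and table names are all hypothetical):

CREATE DATABASE InMemCheckDemo;
GO
ALTER DATABASE InMemCheckDemo
    ADD FILEGROUP InMemCheckDemo_mod CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE InMemCheckDemo
    ADD FILE (NAME = N'InMemCheckDemo_mod1', FILENAME = N'C:\SQLData\InMemCheckDemo_mod1')
    TO FILEGROUP InMemCheckDemo_mod;
GO
USE InMemCheckDemo;
GO
CREATE TABLE dbo.inmem_table
(
    id INT IDENTITY PRIMARY KEY NONCLUSTERED,
    payload VARCHAR(100) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO
INSERT dbo.inmem_table (payload)
SELECT TOP (101) 'test' FROM sys.all_objects;
GO
DBCC CHECKDB ('InMemCheckDemo');   -- compare the output on SQL 2017 vs SQL 2019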

This is the output for DBCC CHECKDB of a memory-optimized database in SQL 2017:

Object ID 949578421 (object ‘inmem_table’): The operation is not
supported with memory optimized tables. This object has been skipped and will not be processed.

(the emphasis was added by me)

This is the output for DBCC CHECKDB of a memory-optimized database in SQL 2019:

DBCC results for ‘inmem_table’.
There are 101 rows in 3 pages for object “inmem_table”.

Why do I say the message is misleading?

Because durable data for memory-optimized tables is not stored in pages, but instead in a streaming fashion in files known as checkpoint file pairs (or data and delta files). Also, while it’s true that there are 101 rows in this table, the engine pre-creates a number of data and delta files, and it would make DBAs sleep a lot better at night if all of those files were verified as corruption-free.

We’ll just have to stay tuned to the future CTPs and RTM of SQL 2019 to see how all of this shakes out.

In Pursuit of Batch Mode on Rowstore

In her excellent blog post entitled “Batch Mode Hacks for Rowstore Queries in SQL Server“, Kendra Little (b|t) pays homage to Itzik Ben-Gan, Niko Neugebauer, and others.

The solutions she details will indeed result in batch mode for rowstore queries. I had already seen the solution proposed by Mr. Ben-Gan, and as is typically the case, a simple example is given to illustrate the concept, and these types of examples are almost always single-threaded.

I have a client that used Itzik Ben-Gan’s solution of creating a filtered nonclustered columnstore index to achieve batch mode on a rowstore (in fact I proposed that the client consider it). They have an OLTP system, and often perform YTD calculations. When they tested, processing time was reduced by 30 to 50 percent, without touching a single line of application code. If that ain’t low hanging fruit, I don’t know what is —

However, during testing, I noticed some intermittent blocking that didn’t make sense to me. But I couldn’t nail it down, and they went live with the “filtered nonclustered columnstore index” solution.

Once they deployed – and there was a lot of concurrency – I could see what had eluded me during my proof of concept: blocking in tempdb.

The repro is super-simple:

  • Create a table, and insert some sample data
  • Create a stored procedure that does the following:
    SELECT from that table into a #temp table
    Create a filtered nonclustered columnstore index on the #temp table, using a filter that cannot possibly be true, i.e. IDcolumn < 0 and IDcolumn > 0
    SELECT from the #temp table (return results)

From the first connection, issue a BEGIN TRAN and execute the stored procedure. Note the spid for this connection. Then open a separate connection, issue a BEGIN TRAN, and execute the stored procedure. Note the spid for this connection.

You’ll notice that the first connection has no issues, but when you execute the proc in the second connection, it gets blocked.

When you peel back the layers, you can see that the first connection requests and obtains a schema modification lock on the #temp table (Sch-M).

The second connection requests a schema stability lock on the same object_id, and is blocked (Sch-S).

To be clear, what’s happening here is that separate connections are placing incompatible locks on the same temporary object in tempdb, which is supposed to be impossible (but in fact the object_id is the same). My gut tells me that this is perhaps related to metadata when creating the NCCI, but I couldn’t prove that.

It should be noted that if you remove the filter on the NCCI, there is no blocking, and also that if you use a regular filtered nonclustered index (not columnstore), this issue persists. Of course, in the real world, removing the filter is not an option, because what we’re interested in is speed, and if there’s one thing that columnstore indexes are not fast at, it’s being created.

Hopefully if/when Microsoft fixes this, it will be back ported to earlier versions of SQL Server.

I can reproduce this on SQL 2016 and 2017 (and even 2019, but that’s not really fair, cause it’s not RTM yet…)

If you think that Microsoft should fix this, please upvote my Azure User Voice entry here.

Repro code:
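(A sketch that follows the steps above; all object names are hypothetical.)

CREATE TABLE dbo.BatchModeDemo
(
    id        INT IDENTITY PRIMARY KEY CLUSTERED,
    OrderDate DATE          NOT NULL,
    Amount    DECIMAL(18,2) NOT NULL
);
GO
INSERT dbo.BatchModeDemo (OrderDate, Amount)
SELECT TOP (100000)
       DATEADD(DAY, ABS(CHECKSUM(NEWID())) % 365, '20180101'),
       ABS(CHECKSUM(NEWID())) % 1000
FROM sys.all_objects AS a
CROSS JOIN sys.all_objects AS b;
GO
CREATE OR ALTER PROCEDURE dbo.BatchModeDemoProc
AS
BEGIN
    SELECT id, OrderDate, Amount
    INTO #work
    FROM dbo.BatchModeDemo;

    -- a filter that can never be true; the index exists only to coax batch mode
    CREATE NONCLUSTERED COLUMNSTORE INDEX ncci_work
        ON #work (OrderDate, Amount)
        WHERE id < 0 AND id > 0;

    SELECT OrderDate, SUM(Amount) AS TotalAmount
    FROM #work
    GROUP BY OrderDate;
END;
GO
-- connection 1: BEGIN TRAN; EXEC dbo.BatchModeDemoProc;   (leave the transaction open)
-- connection 2: BEGIN TRAN; EXEC dbo.BatchModeDemoProc;   (blocked behind connection 1)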

Dangerous moves: Setting max size for In-Memory OLTP containers

I recently saw a thread on twitter, where the OP talked about setting the max size for an In-Memory OLTP container. I responded as I always do: it’s not possible to set a limit on anything having to do with storage for In-Memory OLTP.

Unfortunately, that’s not correct: through SSMS or TSQL, you can in fact set a max size for a container.

But you should not ever do that…..

Why?

Because if you do, and your checkpoint files exceed the max size of the container, your database can go into the In Recovery, Suspect, or OFFLINE state. The following code reproduces this issue:
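Something like the following demonstrates it – all names, paths, and sizes are hypothetical, and obviously this should only be run on a sacrificial test instance:

CREATE DATABASE InMemMaxSizeDemo
ON PRIMARY
    (NAME = N'InMemMaxSizeDemo_data', FILENAME = N'C:\SQLData\InMemMaxSizeDemo.mdf'),
FILEGROUP InMemMaxSizeDemo_mod CONTAINS MEMORY_OPTIMIZED_DATA
    (NAME = N'InMemMaxSizeDemo_mod1',
     FILENAME = N'C:\SQLData\InMemMaxSizeDemo_mod1',
     MAXSIZE = 100MB)        -- the dangerous part: capping the container
LOG ON
    (NAME = N'InMemMaxSizeDemo_log', FILENAME = N'C:\SQLData\InMemMaxSizeDemo.ldf');
GO
USE InMemMaxSizeDemo;
GO
CREATE TABLE dbo.inmem_filler
(
    id INT IDENTITY PRIMARY KEY NONCLUSTERED,
    payload CHAR(2000) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO
-- keep writing durable data until the checkpoint files exceed the container's MAXSIZE;
-- at that point the database can end up In Recovery, Suspect, or OFFLINE
INSERT dbo.inmem_filler (payload)
SELECT TOP (50000) REPLICATE('x', 2000)
FROM sys.all_objects AS a CROSS JOIN sys.all_objects AS b;
GO 10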

Note that I’ve not yet found a way around this. The OP from that thread on twitter said he had to actually restart the SQL Server service to resolve the issue with that database, but I don’t see why that would make any difference (when I tried it, the database attempted recovery, but eventually went offline).

Setting a max size for the container is a really, really really bad idea, because it guarantees that the database will have some form of outage when you hit the threshold. The bottom line is that containers must be free to grow, period. That’s part of the capacity planning good DBAs will do before deploying the In-Memory OLTP feature.

Trials and tribulations of learning Linux

Decades ago, before Microsoft SQL Server existed, I spent $500 (quite a hefty sum in those days) attempting to learn C language programming and Unix. It was the best $500 I ever spent, because it informed me that my brain simply does not work well with that technology (or at least, it didn’t back then). Fast forward to 2017, and voila!: SQL Server runs on Linux. But this time, there are some big differences. For one thing, Powershell can ease the burden of learning *nix commands. Also, Linux has the ability to install a desktop.

And so I’ve begun my deep dive into various aspects of running SQL Server on Linux, and Ubuntu was my distribution of choice.

Windows man

This life-long Windows SQL Server DBA depends on the ease with which one can copy and paste in either direction between a guest VM and the host, using RDP – it’s a huge time saver. Folks in the Linux world love to type stuff, and that’s ok with me, because I started in technology in the days before Windows existed, so I’m a seasoned MS-DOS/command line guy.

While researching various aspects of what’s possible on Linux, I read a lot of blog posts, and some of them had long lists of commands. While I could have collected those commands into a file on my Windows host and copied that file to the Linux guest, I simply wanted to copy and paste to and from my Ubuntu VM running on Hyper-V.

Alas, that was not to be.

If you search the web for “copy paste Ubuntu Hyper-V”, you’ll find loads of answers in forums, dispensing all types of advice that might have been good at that time. But now it’s 2018, and I tripped across this blog post from Craig Wilhite @ Microsoft:

Sneak Peek: Taking a Spin with Enhanced Linux VMs

There, it details how to go about setting up Enhanced Linux VMs, and so I downloaded Ubuntu Server 18.04, and got to work, following that blog post to the letter.

Denied

I spent the better part of a week after hours, trying to get this to work, plugging the error messages into search engines to see what came back.

After entering credentials into xrdp, I received the message: “Video remoting was disconnected”, and searching for that led me to this thread on github, which is related to Craig Wilhite’s post.

So clearly, others had experienced this issue, but there didn’t seem to be any resolution. I posted a message asking what next steps I might take, and followed the recommendations, but nothing panned out. Finally, Craig suggested that perhaps the difference was due to the fact that I was using Ubuntu server, while he had verified the steps using Ubuntu desktop. I just finished testing with Ubuntu desktop, and hallelujah, Enhanced Session Linux VMs work with it.

But the entire reason I wanted to experiment with the server version was to investigate Kubernetes, and I wanted to use Ubuntu server for that.

As luck would have it, the next day I attended a webinar given by Argenis Fernandez (b| t) on using SQL Server on containers, and during the presentation, Argenis mentioned MobaXterm, which allows copy/paste, and has a free version. So I reinstalled Ubuntu server, installed MobaXterm, and lo and behold, I now have bidirectional copy/paste between host and guest.

That’s how it is when you learn any new, unfamiliar technology – you spin your wheels, make mistakes, fail, and if you push through and leave your mind open, you can be rewarded with expertise.

New kid on the block: sp_BlitzInMemoryOLTP

In-Memory OLTP has been included in the last three releases of SQL Server (2014, 2016, and 2017), and now runs on Linux, Windows, Azure SQL Database, and Azure Managed Instances. Additionally, since SQL 2016 SP1, the In-Memory OLTP feature has been available in non-Enterprise editions.

What does this all mean?

It most likely means that it’s only a matter of time before a memory-optimized database lands on your doorstep, and you’ll probably have no idea how or why it’s different.

For a while now, I’ve been working on a script to evaluate a SQL Server environment for anything related to In-Memory OLTP, and I had help with testing, general suggestions, and final touches from Konstantin Taranov and Aleksey Nagorskiy; their assistance was invaluable. Konstantin suggested to Erik Darling and Brent Ozar that my script be included as part of their great Blitz series, and the result is…sp_BlitzInMemoryOLTP.

It is now part of the awesomeness known as the First Responder Kit, and the direct link to the script can be found here.

sp_BlitzInMemoryOLTP reports on two categories: instance level and database level.

First let’s discuss which parameters sp_BlitzInMemoryOLTP accepts, and then we’ll break out the results, section by section.

@instanceLevelOnly BIT

This flag determines whether or not to simply report on the server-level environment (if applicable, i.e. there is no server-level environment for Azure SQL Database). With this parameter, memory-optimized databases are ignored. If you specify @instanceLevelOnly and a database name, the database name is ignored.
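For example, a call that evaluates only the instance-level environment might look like this:

EXEC sp_BlitzInMemoryOLTP @instanceLevelOnly = 1;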

@dbName NVARCHAR(4000) = N'ALL'

If you don’t specify a database name, then sp_BlitzInMemoryOLTP reports on all memory-optimized databases within the instance that it executes in, or in the case of Azure SQL Database, the database that you provisioned. This is because the default for the @dbName parameter is N'ALL'.

Example:
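A hypothetical call, relying on the default @dbName = N'ALL':

EXEC sp_BlitzInMemoryOLTP;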

It’s also possible to report on a specific database name.

Example:
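Something like this (the database name is hypothetical):

EXEC sp_BlitzInMemoryOLTP @dbName = N'InMemDB';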

The results of calling sp_BlitzInMemoryOLTP this way are explained later in this post.

@tableName NVARCHAR(4000) = NULL

Example:
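A hypothetical table name:

EXEC sp_BlitzInMemoryOLTP @tableName = N'MyInMemTable';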

If you only want to report on a specific memory-optimized table, you would supply a value for the @tableName parameter, and sp_BlitzInMemoryOLTP will search through all memory-optimized databases, looking for memory-optimized user tables that match. There is currently no wildcard matching for the @tableName parameter.

@debug BIT

Using the @debug = 1 parameter tells sp_BlitzInMemoryOLTP to only print the TSQL statements that would have been executed. This allows you (or more likely, me) to resolve problems like missing quotes, or other potential issues that can occur when using dynamic SQL.

Example:
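For instance, to print the dynamic SQL without executing it:

EXEC sp_BlitzInMemoryOLTP @debug = 1;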

Supported platforms

This script has been tested on SQL 2014, SQL 2016, SQL 2017, and Azure SQL Database. It has not been tested against Azure Managed Instances.

In the comments, please let me know other things about memory-optimized environments and/or databases you’d like to see included in the script.

How to interpret the results for sp_BlitzInMemoryOLTP

When you execute sp_BlitzInMemoryOLTP, it runs several queries that pertain to the In-Memory OLTP environment. It should be noted that if there are no results for a given query, i.e. no temporal memory-optimized tables, sp_BlitzInMemoryOLTP does not return an empty result set (this keeps the clutter to a minimum).

For example, it could be that a memory-optimized filegroup has been added to a database, but no memory-optimized objects have been created. Depending on the version of SQL Server, there might not be details about the containers or files within them, so sp_BlitzInMemoryOLTP won’t return information on that.

Instance level

Instance level evaluates the following:

  • the version/edition of SQL server
  • SQL Server ‘max memory’ setting
  • memory clerks
  • XTP memory consumers, aggregated
  • XTP memory consumers, detailed
  • the value of the committed_target_kb column from sys.dm_os_sys_info
  • whether or not instance-level collection of execution statistics has been enabled for all natively compiled stored procedures (because this can kill their performance….)
  • when running Enterprise, if there are any resource pools defined, and which memory-optimized databases are bound to them
  • XTP and buffer pool memory allocations, because In-Memory OLTP can affect on-disk workloads
  • summary of memory used by XTP

Section 1: version/edition of SQL server

Documentation here.

Section 2: SQL Server ‘max memory’ setting

Documentation here.

Section 3: memory clerks

Documentation here.

Section 4: XTP memory consumers, aggregated

Documentation here.

Section 5: XTP memory consumers, detailed

Section 6: the value of the committed_target_kb column from sys.dm_os_sys_info. The amount of memory that SQL Server can use for the In-Memory OLTP feature is a percentage of the committed_target_kb value. But be forewarned, this value is not static. Details in my post here.

Section 7: whether or not instance-level collection of execution statistics has been enabled for all natively compiled stored procedures. Enabling this on a production server could be considered drastic. More details can be found in my post here.

Section 8: if running Enterprise, whether there are any resource pools defined, and which memory-optimized databases are bound to them. Binding a memory-optimized database to a Resource Pool (using Resource Governor) is considered a best practice, but unfortunately this capability is still Enterprise only. But if you’re on that edition, you should also be monitoring how close you’re getting to the out-of-memory threshold, and fire an alert when required. More details in my post here.

Section 9: XTP and buffer pool memory allocations, because In-Memory OLTP can affect on-disk workloads

Database level

For a given memory-optimized database (or all memory-optimized databases), database level evaluates the following:

  • all memory-optimized tables
  • all indexes on all memory-optimized tables
  • the average chain length for HASH indexes (and informs you if the bucket count is too low)
  • the number of indexes per memory-optimized table
  • all natively compiled stored procedures
  • which native modules are loaded (stored procedures only, and this is not relevant for Azure SQL Database)
  • the number of natively compiled procedures
  • whether or not the collection of execution statistics is enabled for any natively compiled procedures
  • if using the temporal feature for memory-optimized tables, the amount of memory consumed by hidden temporal internal tables (which are memory-optimized)
  • memory structures for LOB columns (off-row)
  • all memory-optimized table types
  • database layout, which includes mdf, ldf, ndf, and containers, and the size in various formats (KB/MB/GB). The totalSizeMB column is the total for the entire database (uses a Window Function).

Three separate result sets that describe containers:

  • Container details by container name
  • Container details by fileType and fileState
  • Container file details by container_id, fileType and fileState

For Azure SQL Database, sp_BlitzInMemoryOLTP:

  • verifies if you are running on the Premium tier (that’s the only tier that supports In-Memory OLTP)
  • displays all records for xtp_storage_percent, in descending order (more info here)
  • displays the status of XTP_PROCEDURE_EXECUTION_STATISTICS and XTP_QUERY_EXECUTION_STATISTICS (more info here)

The output described in the sections that follow was returned from executing sp_BlitzInMemoryOLTP for a database named OOM-DB. You can get information on all memory-optimized databases if you don’t supply a database name when calling sp_BlitzInMemoryOLTP.

Section 1: Listing of memory-optimized databases on this instance of SQL Server

Section 2: memory-optimized tables, including row counts

Section 3: indexes on memory-optimized tables. It’s helpful to know how many, and what type of indexes there are.

Section 4: average chain length for HASH indexes (if any). When a HASH index is created for a memory-optimized table, a value must be supplied for what’s known as the “bucket count”. But it doesn’t get adjusted automatically, and as a result, it can cause performance problems. More details here.

Section 5: Number of indexes per memory-optimized table. SQL 2014 and SQL 2016 have a limit of 8 nonclustered (RANGE) indexes per memory-optimized table. That ceiling was lifted in SQL 2017, and I’ve tested creating several hundred indexes on a single memory-optimized table (but please don’t do that in production!).


Sections 6 through 8:

  • natively compiled stored procedures
  • which natively compiled stored procedures are currently loaded
  • how many natively compiled stored procedures there are

Section 9: if using the temporal feature for memory-optimized tables, the amount of memory consumed by hidden temporal internal tables (which are memory-optimized). For temporal tables, there’s a difference between how things are handled if the temporal table is memory-optimized. I’ve written about that in this post.

Section 10: memory structures for LOB columns (off-row). For memory-optimized tables, LOB columns are actually stored as separate tables, and this can lead to performance problems. MCM Dimitri Korotkovitch has a great post on it here.

Section 11: memory-optimized table types. Yes, tables and table types can be memory-optimized, and you’ll want to be aware of the potential gotchas with those memory-optimized types, as detailed in my post.

Section 12: all database files, including the name, size, and location for each container.

Sections 13 through 15 pertain to the amount of storage consumed by durable memory-optimized tables. The files that persist durable data to storage go through several state changes over time. As a result, the storage footprint for memory-optimized databases that contain durable data can be surprisingly large, relative to the amount of data that’s stored in memory (Microsoft suggests 4x the memory-optimized data size as a starting point). So it’s a good idea to keep an eye on the storage footprint.

Section 13: Container details by container name

One row per container, listing the aggregated size of all files within that container, as well as how many files per container

Section 14: Container details by fileType and fileState

Here, the breakdown is a bit different, taking into account the type of file.

For each type of file, i.e. DATA or DELTA, aggregate the storage consumed and number of files for each file type, across ALL containers for this database. For example, there are a total of 11 files of fileType DATA with a fileState of ACTIVE, across all containers for this memory-optimized database. (Note that SQL 2014 has file types that don’t exist in later versions of SQL Server)

Section 15: Container file details by container_id, fileType and fileState

For each type of file, i.e. DATA or DELTA, aggregate the storage consumed and the number of files for each file type, PER CONTAINER.

In the prior example, we saw that there were a total of 11 files of fileType DATA with a fileState of ACTIVE, across all containers for this memory-optimized database.

This result shows the breakdown of each fileType and fileState PER CONTAINER. The container named InMemDB_inmem1 has 3 files that have a fileType of DATA and a fileState of ACTIVE. So we expect to see 8 more files with this type and state, in the remaining containers. Sure enough, we see that the container named InMemDB_inmem2 has an additional 8 files with a fileType of DATA and a fileState of ACTIVE.

Understanding how In-Memory OLTP works (with all of its various gotchas) can only be addressed by putting in the required time. If you read the documentation, and then study the real-world deployment concepts detailed in my extensive blog post series on In-Memory OLTP, you’ll be on the right path. Once you begin to wrap your brain around In-Memory OLTP, you’ll need some help evaluating memory-optimized environments and/or databases, and that’s where sp_BlitzInMemoryOLTP can help.