Dangerous moves: Setting max size for In-Memory OLTP containers

I recently saw a thread on Twitter where the OP talked about setting the max size for an In-Memory OLTP container. I responded as I always do: it’s not possible to set a limit on anything having to do with storage for In-Memory OLTP.

Unfortunately, that’s not correct: through SSMS or T-SQL, you can in fact set a max size for a container.

But you should never do that.

Why?

Because if you do, and your checkpoint files exceed the max size of the container, your database can go into the In Recovery, Suspect, or OFFLINE state. The following code reproduces this issue:
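The original repro script isn’t preserved in this copy of the post, but a minimal sketch of the setup is straightforward. Everything here — the database name, file paths, the 100MB cap, and the table definition — is hypothetical; the point is simply a capped container plus a durable memory-optimized table that you fill past the cap:

```sql
-- Hypothetical repro sketch: names, paths, and sizes are illustrative only.
CREATE DATABASE InMemContainerDemo
ON PRIMARY
    (NAME = InMemContainerDemo_Data,
     FILENAME = 'C:\SQLData\InMemContainerDemo.mdf');
GO

-- Add the memory-optimized filegroup and a container
-- with a max size -- the dangerous part.
ALTER DATABASE InMemContainerDemo
    ADD FILEGROUP InMemFG CONTAINS MEMORY_OPTIMIZED_DATA;
GO
ALTER DATABASE InMemContainerDemo
    ADD FILE (NAME = InMemContainer,
              FILENAME = 'C:\SQLData\InMemContainer',
              MAXSIZE = 100MB)
    TO FILEGROUP InMemFG;
GO

USE InMemContainerDemo;
GO
-- Durable memory-optimized table, so inserts generate checkpoint file data.
CREATE TABLE dbo.FillMe
(
    Id INT IDENTITY NOT NULL PRIMARY KEY NONCLUSTERED,
    Payload CHAR(8000) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

-- Insert until the checkpoint files outgrow the capped container;
-- at that point the database can go In Recovery, Suspect, or OFFLINE.
SET NOCOUNT ON;
DECLARE @i INT = 0;
WHILE @i < 100000
BEGIN
    INSERT dbo.FillMe (Payload) VALUES (REPLICATE('x', 8000));
    SET @i += 1;
END;
```

The exact row count needed will vary with your environment; the failure happens once checkpoint file pairs can no longer be allocated within the container’s MAXSIZE.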

Note that I’ve not yet found a way around this. The OP from that Twitter thread said he had to restart the SQL Server service to resolve the issue with that database, but I don’t see why that would make any difference (when I tried it, the database attempted recovery, but eventually went offline).

Setting a max size for the container is a really, really, really bad idea, because it guarantees that the database will have some form of outage when you hit the threshold. The bottom line is that containers must be free to grow, period. That’s part of the capacity planning good DBAs do before deploying the In-Memory OLTP feature.

5 thoughts on “Dangerous moves: Setting max size for In-Memory OLTP containers”

  1. Pingback: Don’t Set Max Size For Containers In In-Memory OLTP – Curated SQL

  2. Nick

    Just implemented this, and your explanation clarifies things much better than the little blurb warning of the same issue in the Microsoft docs. Cheers!

  3. Sangeeta

    In SQL Server 2017, I had to set the container to unlimited size on the drive and run CHECKPOINT. It still failed.

    I added a container on another drive; CHECKPOINT still failed.

    I had to take the database offline and bring it back online. Then it came online, but I saw 35+ files in the other container.
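    The sequence described above might look like the following in T-SQL. The database, file names, and paths are hypothetical, and as the comment notes, only the offline/online bounce ultimately worked:

    ```sql
    -- Lift the cap on the existing container (names are hypothetical).
    ALTER DATABASE InMemContainerDemo
        MODIFY FILE (NAME = InMemContainer, MAXSIZE = UNLIMITED);
    CHECKPOINT;  -- reportedly still failed

    -- Add a second container on another drive.
    ALTER DATABASE InMemContainerDemo
        ADD FILE (NAME = InMemContainer2,
                  FILENAME = 'D:\SQLData\InMemContainer2')
        TO FILEGROUP InMemFG;
    CHECKPOINT;  -- reportedly still failed

    -- Take the database offline and bring it back online.
    ALTER DATABASE InMemContainerDemo SET OFFLINE WITH ROLLBACK IMMEDIATE;
    ALTER DATABASE InMemContainerDemo SET ONLINE;
    ```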

