I recently saw a thread on Twitter where the OP talked about setting the max size for an In-Memory OLTP container. I responded as I always do: it’s not possible to set a limit on anything having to do with storage for In-Memory OLTP.
Unfortunately, that’s not correct: through SSMS or T-SQL, you can in fact set a max size for a container.
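To see what I mean, this is all it takes. It’s a single MODIFY FILE statement (the same one that appears in the full repro script below, using that script’s InMemTest database and Container1 container):

ALTER DATABASE InMemTest
MODIFY FILE
(
     NAME = 'Container1'
    ,MAXSIZE = 1000MB -- capping a memory-optimized container: don't do this
)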
But you should never do that.
Why?
Because if you do, and a CHECKPOINT needs to grow the checkpoint files beyond the container’s max size, your database can go into the In Recovery, Suspect, or OFFLINE state. The following code reproduces the issue:
USE master
GO

DROP DATABASE IF EXISTS InMemTest

CREATE DATABASE InMemTest

EXEC sp_helpdb InMemTest

USE InMemTest
GO

ALTER DATABASE InMemTest
ADD FILEGROUP InMemTestFG CONTAINS MEMORY_OPTIMIZED_DATA

ALTER DATABASE InMemTest
ADD FILE
(
     NAME = 'Container1'
    ,FILENAME = 'H:\SQLDATA\InMemTest_Container1'
)
TO FILEGROUP InMemTestFG

/*
#########################
sp_helpdb doesn't show the size of the containers for a
memory-optimized database, so we must reference
sys.dm_db_xtp_checkpoint_files
#########################
*/

DROP TABLE IF EXISTS dbo.InMemT1

CREATE TABLE dbo.InMemT1
(
     PKcol INT IDENTITY PRIMARY KEY NONCLUSTERED
    ,description VARCHAR(8000) NOT NULL
)
WITH (DURABILITY = SCHEMA_AND_DATA, MEMORY_OPTIMIZED = ON)

-- verify how much space the checkpoint files consume. On my system, it's 936MB,
-- so I set the max container size to 1000MB
SELECT FORMAT(SUM(file_size_in_bytes / 1048576.0), '####') AS fileSizeMBTotal
FROM sys.dm_db_xtp_checkpoint_files
GO

ALTER DATABASE InMemTest
MODIFY FILE
(
     NAME = 'Container1'
    ,MAXSIZE = 1000MB
)

USE InMemTest
GO

BACKUP DATABASE InMemTest TO DISK = 'nul' WITH STATS = 1
BACKUP LOG InMemTest TO DISK = 'nul' WITH STATS = 1

SELECT FORMAT(SUM(file_size_in_bytes / 1048576.0), '####') AS fileSizeMBTotal
FROM sys.dm_db_xtp_checkpoint_files

SET NOCOUNT ON

INSERT InMemT1 (description)
SELECT REPLICATE('A', 100)
GO 1000

-- we're good up to here
CHECKPOINT -- this CHECKPOINT succeeds

-- now I see 952MB
SELECT FORMAT(SUM(file_size_in_bytes / 1048576.0), '####') AS fileSizeMBTotal
FROM sys.dm_db_xtp_checkpoint_files

-- you might have to do this a few times, before the subsequent CHECKPOINT will fail
BACKUP LOG InMemTest TO DISK = 'nul' WITH STATS = 1

/*
###################
running this checkpoint causes the files in the container
to grow beyond 1000MB
###################
*/
CHECKPOINT

/*
Msg 9001, Level 21, State 4, Line 88
The log for database 'InMemTest' is not available. Check the event log for related error messages.
Resolve any errors and restart the database.
Msg 596, Level 21, State 1, Line 87
Cannot continue the execution because the session is in the kill state.
Msg 0, Level 20, State 0, Line 87
A severe error occurred on the current command. The results, if any, should be discarded.

database goes to "In Recovery" state
restart SQL Server service
database goes to "Suspect" state
after a while, the db is OFFLINE
*/

USE master
GO

ALTER DATABASE InMemTest SET SINGLE_USER WITH ROLLBACK IMMEDIATE
ALTER DATABASE InMemTest SET OFFLINE
-- db goes to In Recovery, Recovery Pending, and then finally OFFLINE state

ALTER DATABASE InMemTest SET ONLINE -- fails

ALTER DATABASE InMemTest
MODIFY FILE
(
     NAME = 'Container1'
    ,MAXSIZE = 2000MB
)

SELECT FORMAT(SUM(file_size_in_bytes / 1048576.0), '####') AS fileSizeMBTotal
      ,FORMAT(SUM(file_size_used_in_bytes / 1048576.0), '####') AS fileSizeMBUsed
      ,SUM(file_size_used_in_bytes / 1048576.0)
FROM sys.dm_db_xtp_checkpoint_files
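If you want to check whether anyone has already capped a container in one of your databases, sys.database_files is one place to look. A minimal sketch, under my assumption that memory-optimized containers show up there with type_desc = 'FILESTREAM' (In-Memory OLTP storage is built on FILESTREAM):

USE InMemTest
GO

-- max_size = -1 means the container can grow until the drive is full,
-- which is what you want; anything else is a cap
SELECT name
      ,physical_name
      ,max_size -- -1 = unlimited
FROM sys.database_files
WHERE type_desc = 'FILESTREAM'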
Note that I’ve not yet found a way around this. The OP from that Twitter thread said he actually had to restart the SQL Server service to resolve the issue with that database, but I don’t see why that would make any difference (when I tried it, the database attempted recovery but eventually went offline).
Setting a max size for a container is a really, really, really bad idea, because it guarantees that the database will have some form of outage when you hit the threshold. The bottom line is that containers must be free to grow, period. Making sure the drives that hold them have enough room is part of the capacity planning good DBAs will do before deploying the In-Memory OLTP feature.
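And if you inherit a database where someone has already capped a container, remove the cap before the checkpoint files get anywhere near it. A sketch, reusing the InMemTest/Container1 names from the repro script:

ALTER DATABASE InMemTest
MODIFY FILE
(
     NAME = 'Container1'
    ,MAXSIZE = UNLIMITED -- let the container grow as needed
)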