Options for smaller backups

I have a client that is running SQL 2016 Enterprise, and wants to get a full backup offsite every day. They’ve been doing it for over 5 years, and are now seeing scalability issues. 

In researching this blog post, I found a lot of useful information written by Dmitri Korotkevitch, who blogged about “Size does matter: 10 ways to reduce the database size and improve performance in SQL Server”. There is some overlap between his post and mine, but those who are interested in this topic will probably want to read both.

SPARSE columns

If a column contains mostly NULLs, then depending on the data type, you can achieve space savings by using the SPARSE property (documentation here). SPARSE columns can be used with filtered indexes to theoretically reduce storage space and increase query performance. But there are a boatload of gotchas, such as issues with query plan caching (filtered indexes), and the fact that if you use SPARSE columns, neither the table nor its indexes can have any form of compression (the documentation is clear about not supporting table compression, but does not mention index compression being an issue – but it is).

As the documentation clearly states, when converting a column from non-sparse to sparse, the following steps are taken:

  1. Adds a new column to the table in the new storage size and format
  2. For each row in the table, updates and copies the value stored in the old column to the new column
  3. Removes the old column from the table schema
  4. Rebuilds the table (if there is no clustered index) or rebuilds the clustered index to reclaim space used by the old column

For a large table with even a few columns to convert to SPARSE, this process could take forever, because the steps above must be repeated for each column you convert.
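
To make that concrete, here is a minimal sketch of declaring and converting a SPARSE column; dbo.OrderAttributes and GiftMessage are made-up names used only for illustration:

  -- Hypothetical table with a mostly-NULL column
  CREATE TABLE dbo.OrderAttributes
  (
      OrderID     INT           NOT NULL,
      GiftMessage NVARCHAR(200) NULL   -- mostly NULL in practice
  );

  -- Converting the column to SPARSE; behind the scenes SQL Server performs
  -- the add/copy/drop/rebuild steps described above, once per converted column
  ALTER TABLE dbo.OrderAttributes
      ALTER COLUMN GiftMessage NVARCHAR(200) SPARSE NULL;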

In SQL 2016+, if the conditions are right, we can get minimal logging plus parallelism for INSERT statements (see this CAT team blog post for more information). You might do something like the following (a sketch appears after the list):

  1. create a new table, adding SPARSE to the relevant columns
  2. use INSERT INTO <newtable> WITH (TABLOCK) SELECT … FROM <originaltable>
  3. recreate indexes
  4. drop original table
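
A rough sketch of that approach, using made-up table and column names (dbo.BigTable and dbo.BigTable_New), might look like this:

  -- 1. New table with SPARSE applied to the mostly-NULL columns
  CREATE TABLE dbo.BigTable_New
  (
      ID          BIGINT        NOT NULL,
      CreatedOn   DATE          NOT NULL,
      RarelyUsed1 NVARCHAR(100) SPARSE NULL,
      RarelyUsed2 INT           SPARSE NULL
  );

  -- 2. TABLOCK on the target allows minimal logging (and possibly a parallel
  --    insert) when the recovery model and plan conditions are right
  INSERT INTO dbo.BigTable_New WITH (TABLOCK)
          (ID, CreatedOn, RarelyUsed1, RarelyUsed2)
  SELECT  ID, CreatedOn, RarelyUsed1, RarelyUsed2
  FROM    dbo.BigTable;

  -- 3. Recreate the indexes on the new table
  CREATE UNIQUE CLUSTERED INDEX CIX_BigTable_New_ID
      ON dbo.BigTable_New (ID);

  -- 4. Swap: drop the original table and rename the new one
  DROP TABLE dbo.BigTable;
  EXEC sp_rename 'dbo.BigTable_New', 'BigTable';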

In my case, I decided to not use SPARSE columns, because of the restrictions related to using other forms of compression on tables/indexes.

Data and/or index compression

Compressing rowstore data and/or indexes used to be an Enterprise-only feature, but that changed as of SQL 2016/SP1. However, to get any real benefit from doing this (especially for an OLTP system), you need to use some form of partitioning (see below), which can be a monumental task. Some have stated that compression can fail when attempted on very wide tables (500+ columns), and in that case, SPARSE columns are your only option, assuming you can't use the other features described in this post.
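
If you do go down this road, a sketch of enabling PAGE compression might look like the following; the object names are hypothetical, and sp_estimate_data_compression_savings can help you gauge the benefit before committing:

  -- Estimate potential savings first
  EXEC sp_estimate_data_compression_savings
      @schema_name      = 'dbo',
      @object_name      = 'Orders',
      @index_id         = NULL,
      @partition_number = NULL,
      @data_compression = 'PAGE';

  -- Compress the table (its clustered index or heap)
  ALTER TABLE dbo.Orders
      REBUILD WITH (DATA_COMPRESSION = PAGE);

  -- Compress a nonclustered index
  ALTER INDEX IX_Orders_CustomerID ON dbo.Orders
      REBUILD WITH (DATA_COMPRESSION = PAGE);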

COMPRESS() and DECOMPRESS()

ROW and PAGE compression only work with in-row data. However, SQL 2016 introduced the ability to compress off-row data with the COMPRESS() function. Depending on how much off-row data your databases contain, you might get storage savings when using this, although it will require some form of application change to decompress the relevant column(s) when required.
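
Here is a minimal sketch of the pattern, using a hypothetical dbo.Notes table. COMPRESS() returns GZIP-compressed VARBINARY(MAX), and the application has to DECOMPRESS() and cast back to the original type on read:

  -- Hypothetical table storing a large text column as compressed VARBINARY
  CREATE TABLE dbo.Notes
  (
      NoteID         INT            NOT NULL,
      NoteCompressed VARBINARY(MAX) NULL
  );

  INSERT INTO dbo.Notes (NoteID, NoteCompressed)
  VALUES (1, COMPRESS(N'... a large block of text ...'));

  -- DECOMPRESS() returns VARBINARY(MAX), so cast back to the original type
  SELECT  NoteID,
          CAST(DECOMPRESS(NoteCompressed) AS NVARCHAR(MAX)) AS NoteText
  FROM    dbo.Notes;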

CLUSTERED COLUMNSTORE

Another formerly Enterprise-only feature, again included in other editions since SQL 2016/SP1. For the right type of workload, i.e. not too write-intensive, you might consider replacing a rowstore with a clustered columnstore. I want to be clear that when I write about clustered columnstore indexes replacing a rowstore, I’m referring to on-disk tables only. There’s a lot of confusion about this because memory-optimized tables also support clustered columnstore, but in that case, the columnstore does not replace the rowstore (please refer to my blog post on the differences between columnstore for on-disk vs. in-mem here).

When using partitioning with data compression, you can decide which partitions are compressed, if any, and what form of compression to deploy – the supported options are PAGE, ROW, and NONE. Columnstore is “all or nothing at all”, even when used with table partitioning. You can choose between ARCHIVAL and non-archival columnstore compression, but there is no way to designate specific partitions as uncompressed, as is the case with data compression. The deltastore (where inserts initially land) is an uncompressed rowstore.
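
To illustrate those compression choices, here is a sketch against a hypothetical partitioned table named dbo.Sales that currently has no clustered index:

  -- Replace the rowstore with a clustered columnstore
  CREATE CLUSTERED COLUMNSTORE INDEX CCI_Sales
      ON dbo.Sales;

  -- You can choose archival vs. non-archival columnstore compression per
  -- partition, but there is no NONE option as there is with data compression
  -- (assumes the table has at least 4 partitions)
  ALTER TABLE dbo.Sales
      REBUILD PARTITION = ALL
      WITH (DATA_COMPRESSION = COLUMNSTORE_ARCHIVE ON PARTITIONS (1 TO 4));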

One potential problem when using clustered columnstore is that you can’t deploy it on a table that has triggers. Also, LOB types (NVARCHAR(MAX)) are not supported for clustered columnstore indexes until SQL 2017.

Separating clustered and PRIMARY KEY

If you have an existing clustered rowstore that’s defined as a CONSTRAINT (for example with CREATE/ALTER TABLE), and you want to replace it with a clustered columnstore, then you’ll have to drop the constraint before creating the columnstore. That’s because the DROP_EXISTING = ON syntax is not supported for ALTER TABLE.

And because the key columns of a clustered index are also stored in every nonclustered index, it might be faster to drop nonclustered indexes before dropping a constraint that’s also the clustering key.

Keep in mind that even though a clustered columnstore contains the word “clustered” – which in the rowstore world means that it’s physically ordered – clustered columnstore indexes have no order. To achieve the best rowgroup elimination, you would first have to physically order your data using a regular clustered index, and then create the clustered columnstore with DROP_EXISTING = ON.
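
Putting this section together, the drop/reorder/convert sequence might look like the sketch below; all object names (dbo.Transactions, PK_Transactions, etc.) are hypothetical:

  -- Drop nonclustered indexes first; they store the clustering key
  DROP INDEX IX_Transactions_CustomerID ON dbo.Transactions;

  -- Drop the PRIMARY KEY constraint that backs the clustered rowstore
  ALTER TABLE dbo.Transactions DROP CONSTRAINT PK_Transactions;

  -- Physically order the data with a regular clustered index...
  CREATE CLUSTERED INDEX CIX_Transactions_TranDate
      ON dbo.Transactions (TranDate);

  -- ...then convert it to a clustered columnstore, keeping that order for
  -- better rowgroup elimination on TranDate
  CREATE CLUSTERED COLUMNSTORE INDEX CIX_Transactions_TranDate
      ON dbo.Transactions
      WITH (DROP_EXISTING = ON);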

Data types

Violating the fundamentals of database design can have far-reaching effects, long after the original designers have moved on. Common mistakes are using MAX for VARCHAR/NVARCHAR columns that don’t need it, like FirstName/LastName/Address, etc., and using DATETIME when you don’t need the time portion, like for a check date. You’re not likely to see the negative effects of this for a long time, but those who come after you will be left with headaches that are difficult to fix. Let’s say you have a CheckDate column on a table with billions of rows, and CheckDate is part of the clustering key. All nonclustered indexes store the clustering key internally, so instead of storing 3 bytes for a CheckDate column based upon the DATE datatype, each nonclustered index will store an extra 5 bytes (8 bytes total) for the DATETIME datatype.
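
A quick check of that arithmetic, plus a sketch of the better design (dbo.Checks is a hypothetical table):

  -- DATE is 3 bytes, DATETIME is 8; on a clustering key the difference is
  -- repeated in every nonclustered index row
  SELECT  DATALENGTH(CAST(GETDATE() AS DATE))     AS date_bytes,      -- 3
          DATALENGTH(CAST(GETDATE() AS DATETIME)) AS datetime_bytes;  -- 8

  CREATE TABLE dbo.Checks
  (
      CheckDate DATE   NOT NULL,   -- no time portion needed for a check date
      CheckID   BIGINT NOT NULL,
      CONSTRAINT PK_Checks PRIMARY KEY CLUSTERED (CheckDate, CheckID)
  );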

Other solutions

If you want to optimize the size of your backups, what’s been discussed so far can help. But eventually, you’ll probably hit some type of time and/or size constraint when doing backups, even if using compression. One solution to this issue is to use some form of partitioning, be it partitioned tables and/or partitioned views.

With partitioned tables, you can mark filegroups as read-only, back them up once, and from that point on do only full and differential filegroup backups. Even CHECKDB can be run for specific filegroups. But be forewarned – table partitioning was introduced in SQL 2005, and there hasn’t been a lot of investment in this feature in recent years. Partitioned views solve a lot of the problems that exist with partitioned tables, but they have their own gotchas, such as not being able to insert through a partitioned view if any of the base tables have columns that use the IDENTITY property.
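
A sketch of the read-only filegroup pattern, with made-up database and filegroup names (Sales, FG_2019, FG_Current):

  USE Sales;

  -- Mark the historical filegroup read-only and back it up once
  ALTER DATABASE Sales MODIFY FILEGROUP FG_2019 READ_ONLY;

  BACKUP DATABASE Sales
      FILEGROUP = 'FG_2019'
      TO DISK = N'X:\Backups\Sales_FG_2019.bak';

  -- Routine backups (and even CHECKDB) can then target the active filegroup
  BACKUP DATABASE Sales
      FILEGROUP = 'FG_Current'
      TO DISK = N'X:\Backups\Sales_FG_Current_diff.bak'
      WITH DIFFERENTIAL;

  DBCC CHECKFILEGROUP (N'FG_Current');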

As is often the case, choosing the best solution includes balancing requirements with feature limitations.
