Troubleshooting Long-Running SHRINK Operations

Ever had a SHRINK operation get “stuck” and didn’t know why, or how to fix it? It’s more common than you’d think. Check out this post for more info, tips and tricks.

TL;DR: This blog post, these two scripts, and also this script.

Disclaimer

Let me start off with a mandatory warning: as a rule of thumb, shrink operations should be avoided whenever possible. Lots of professionals have written about this topic and the reasons why it’s so bad, so I won’t waste space repeating them here. You can view the link above for more info.

With that being said, we must also acknowledge that there are some (rare) use cases where you would want to perform a shrink anyway…

Problem

Using the TRUNCATEONLY option would be preferable whenever possible. It will clear out unused space from the “tail” of your file without moving any pages. But, more often than not, it’s not good enough. Maybe you deleted or migrated a huge chunk of data which you know will never return. Maybe you activated data compression on a significant portion of your tables and now you want to reclaim that freed space for other uses. Or maybe you simply have a very serious shortage of disk space and you gotta scrape every gigabyte you can (is that scenario even possible in our day and age? Oh you’d be surprised).
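For reference, a tail-only shrink is a single command. Here is a minimal sketch (the database and logical file names are placeholders; look up your actual logical file name in sys.database_files):

```sql
-- Release unused space from the end of the data file without moving any pages.
-- N'MyDatabase_Data' is a hypothetical logical file name.
USE MyDatabase;
GO
DBCC SHRINKFILE (N'MyDatabase_Data', TRUNCATEONLY);
```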

Whatever the cause may be, you find yourself running DBCC SHRINKFILE without the TRUNCATEONLY option, and now you’re waiting… And waiting… And waiting… Until you start getting worried. So you run a quick query against sys.dm_exec_requests, or use something like sp_WhoIsActive, and you discover that your shrink session has been stuck at the same percentage of completion for the past half-hour, even though it seems to be doing… Well, something! Otherwise, why do the reads and writes counters keep going up??? It takes minutes, and then hours, and in some cases – even days! Good heavens, what to do?!!?

The Unusual Suspects

Hey, that’s a really good title for a movie!
Anyways, um…

If you happen to be an experienced SQL Server DBA, you’re probably familiar with this blog post from Paul S. Randal, or maybe this ancient blog post, also from Paul S. Randal, which explains that slow SHRINK operations can happen because the SQL Server storage engine has to perform a table scan for each LOB_DATA and ROW_OVERFLOW_DATA value that it wants to move around during SHRINK. If you didn’t know that, well, now you do!

When a text page is moved by data file shrink, a table scan must be performed for each LOB value to find the ‘owning’ data record so the pointer can be updated. […]

Paul S. Randal (source)
Paul really hates SHRINK with a burning passion. Like, seriously, he really does.

In any case, while this is a horrendous behavior in the SQL Server storage engine, it doesn’t seem like Microsoft is planning to fix this issue anytime soon. So, we’ve got to deal with it for the time being.

But wait! There’s more!

Remember heap tables? Remember what happens when you create non-clustered indexes on them? What’s stored at the leaf level of every such index? That’s right, it’s our old pal the RID, which represents the physical location of every row in the heap. This is what the RID Lookup operator in execution plans uses.

But wait… What happens if a row in such a table needs to move around during a SHRINK operation? You guessed it – SQL Server has to go to each and every non-clustered index and update the RID so that it points to the new row location. That is something that can definitely make your SHRINK operations very slow.
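To illustrate the point (the table and index names below are made up for the example), consider a heap with two non-clustered indexes. Each index stores the RID of the corresponding heap row at its leaf level, so relocating a single heap row during SHRINK means updating both indexes:

```sql
-- A heap: no clustered index, so rows are addressed by their physical RID.
CREATE TABLE dbo.OrdersHeap
(
    OrderId    INT NOT NULL,
    CustomerId INT NOT NULL,
    OrderDate  DATETIME2 NOT NULL
);

-- Both of these indexes store the heap RID (file:page:slot) at their leaf level.
-- If SHRINK moves a row of dbo.OrdersHeap, BOTH leaf levels must be updated.
CREATE NONCLUSTERED INDEX IX_OrdersHeap_CustomerId ON dbo.OrdersHeap (CustomerId);
CREATE NONCLUSTERED INDEX IX_OrdersHeap_OrderDate  ON dbo.OrdersHeap (OrderDate);
```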

Even if you have a heap table without any non-clustered indexes to maintain, there’s still one more use case that can slow down your SHRINK operations: table partitioning. If you happen to have a partitioned heap table, it will cause the same issues that a heap table with non-clustered indexes would have caused, even if the table has no indexes at all.

Solution

There are a few things that you can do to assist in your shrinking operation:

1. Solve the damn issue

Look, if you’ve read the paragraphs leading up to this point, then you should realize by now that we already know what causes shrink operations to run slow. That means we can detect these scenarios in advance, and that means we can solve them (most of them, at least). So why not do that?

I wrote the following script, which will find all the problematic tables in your database that may cause your shrink operations to run slow, so you’ll know what to look out for:

/*
Author: Eitan Blumin | https://www.eitanblumin.com
Create Date: 2020-03-18
Description:
This script will detect tables in your database that may cause DBCC SHRINK operations
to run really slow:
 - Tables with LOB_DATA or ROW_OVERFLOW_DATA
 - Heap tables with non-clustered indexes
 - Heap tables with partitions
You may adjust the @TableSizeThresholdMB parameter to filter the tables based on their size.
*/
DECLARE
@TableSizeThresholdMB INT = 500
;WITH TabsCTE
AS
(
SELECT DISTINCT
'Table with LOB or ROW-OVERFLOW data' AS Issue,
p.object_id
FROM sys.system_internals_allocation_units au
JOIN sys.partitions p ON au.container_id = p.partition_id
WHERE type_desc <> 'IN_ROW_DATA' AND total_pages > 8
AND p.rows > 0
UNION ALL
SELECT
'Heap with Non-clustered indexes',
p.object_id
FROM sys.partitions AS p
WHERE p.index_id = 0
AND p.rows > 0
AND EXISTS (SELECT NULL FROM sys.indexes AS ncix WHERE ncix.object_id = p.object_id AND ncix.index_id > 1)
UNION ALL
SELECT DISTINCT
'Partitioned Heap',
p.object_id
FROM sys.partitions AS p
WHERE p.index_id = 0
AND p.rows > 0
AND p.partition_number > 1
)
SELECT t.*,
OBJECT_SCHEMA_NAME(t.object_id) table_schema,
OBJECT_NAME(t.object_id) table_name,
SUM(p.rows) AS RowCounts,
CAST(ROUND((SUM(a.used_pages) / 128.00), 2) AS NUMERIC(36, 2)) AS Used_MB,
CAST(ROUND((SUM(a.total_pages) - SUM(a.used_pages)) / 128.00, 2) AS NUMERIC(36, 2)) AS Unused_MB,
CAST(ROUND((SUM(a.total_pages) / 128.00), 2) AS NUMERIC(36, 2)) AS Total_MB
FROM TabsCTE AS t
INNER JOIN sys.partitions p ON t.object_id = p.object_id
INNER JOIN sys.allocation_units a ON p.partition_id = a.container_id
GROUP BY t.Issue, t.object_id
HAVING SUM(a.used_pages) / 128.00 >= @TableSizeThresholdMB
ORDER BY Used_MB DESC
/*
Author: Eitan Blumin | https://www.eitanblumin.com
Create Date: 2020-03-18
Last Update: 2020-06-22
Description:
This script will detect currently running sessions in your database which are running DBCC SHRINK commands.
It will also output the name of any tables and indexes the session is currently locking.
Use this query to find out what causes a SHRINK to run for too long.
You may need to run it multiple times to "catch" the relevant info.
*/
SELECT DISTINCT
req.session_id,
req.start_time,
req.command,
req.status,
req.wait_type,
req.last_wait_type,
req.reads,
req.cpu_time,
req.blocking_session_id,
blockertext.text AS blocker_sql_batch,
ISNULL(req.database_id, rsc_dbid) AS dbid,
rsc_objid AS ObjId,
OBJECT_SCHEMA_NAME(rsc_objid, rsc_dbid) AS SchemaName,
OBJECT_NAME(rsc_objid, rsc_dbid) AS TableName,
rsc_indid As IndexId,
indexes.name AS IndexName
FROM sys.dm_exec_requests AS req
LEFT JOIN master.dbo.syslockinfo ON req_spid = req.session_id AND rsc_objid <> 0
LEFT JOIN sys.indexes ON syslockinfo.rsc_objid = indexes.object_id AND syslockinfo.rsc_indid = indexes.index_id
LEFT JOIN sys.dm_exec_connections AS blocker ON req.blocking_session_id = blocker.session_id
OUTER APPLY sys.dm_exec_sql_text(blocker.most_recent_sql_handle) as blockertext
WHERE req.command = 'DbccFilesCompact'
These two scripts are available in my Gists

You might notice that there’s also an extra script file there that does something different. While the first script has a “proactive” purpose (i.e. detecting potential troublemakers BEFORE they start making trouble), the second script has a “reactive” purpose (i.e. you use it once the trouble has already started):

The second script will query for any currently running shrink operations and detect which tables they’re currently holding locks on. I found that looking at the locks held by the session was more reliable than inspecting the transaction log file (even though shrink operations are fully logged). You may have to run the query multiple times until you “catch” the session at the right moment, when it’s holding the locks. It can be annoying at times, but eventually, it works.

Once you detect which table is causing all the fuss, now you know where to focus your efforts.
Let’s break it down by use-cases:

A. Un-Heap Your Heap Tables

Whether it’s a heap table with or without non-clustered indexes, with or without partitioning, the solution is the same: un-heap it. Simply adding a clustered index will definitely solve the issue and send your shrink operation flying like a champ.

You can use the script in the following blog post to help you out with un-heaping your heaps:

How to Un-Heap your Heaps (Resolving Tables Without Clustered Indexes)
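In its simplest form, un-heaping is a single statement. A sketch with hypothetical table and column names:

```sql
-- Adding a clustered index turns the heap into a B-tree; SQL Server automatically
-- rebuilds any non-clustered indexes to use the clustering key instead of RIDs.
CREATE CLUSTERED INDEX CIX_MyHeapTable
    ON dbo.MyHeapTable (Id)
    WITH (ONLINE = ON); -- ONLINE requires Enterprise edition or Azure SQL
```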

B. Reorganize or Rebuild the LOB and ROW OVERFLOW data

One of the things you could try is rebuilding or reorganizing your entire table. It’s not guaranteed to help, but there is a chance that, by moving the data elsewhere in the file, you’ll eliminate the need to move it during the SHRINK operation, and it’ll work better (hopefully, with the TRUNCATEONLY option). This will only help if you have enough unused space somewhere near the beginning of your file. Otherwise, you may just make things much worse by moving your whole table to the end of the file – right where the SHRINK operation will cause serious havoc.

To cause less impact on production activity, you can reorganize your indexes:

ALTER INDEX ALL ON <object> REORGANIZE

If you have Enterprise edition, or Azure SQL DB / Managed Instance, you can rebuild online:

ALTER INDEX ALL ON <object> REBUILD WITH (ONLINE=ON)

If the table is a Heap:

ALTER TABLE <object> REBUILD

Just to be safe, after you do this, run the SHRINK operation with TRUNCATEONLY first, to avoid moving the pages around again.

Is there a way for us to predict whether rebuilding/reorganizing our table would actually help? Possibly, by visualizing our data page allocation. Check out the following open-source project that I created for this purpose:

SQL Server Data Allocation Reports – Queries and reports to visualize your SQL data page allocations, which pages are reserved for which objects, and which pages are not used.

C. Move Out Your LOB and ROW OVERFLOW DATA

Another possible solution is to move your whole table somewhere outside the file entirely. Whether it be to a different filegroup, or even a different database.

To move your table to a different filegroup, you could do the following:

  1. Use SSMS to generate a CREATE script for all of the table’s indexes, including the clustered index.
  2. In the generated script, change the filegroup of each of the indexes (don’t forget to add a new one if you don’t have one already).
  3. Set the DROP_EXISTING=ON option for all of the indexes.

PRO TIP: If you have a combination of clustered and non-clustered indexes, it’s recommended to first drop all of your non-clustered indexes, rebuild the clustered index, and only then re-create the non-clustered indexes. Otherwise, you’d have to wait for SQL Server to rebuild all the clustered keys in your non-clustered indexes into RIDs and back into clustered keys again.
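Putting steps 1-3 together, here’s a minimal sketch (the database, filegroup, file path, and index names are all assumptions for the example):

```sql
-- 1. Create the target filegroup and add a file to it.
ALTER DATABASE MyDatabase ADD FILEGROUP FG_Secondary;
ALTER DATABASE MyDatabase ADD FILE
(
    NAME = N'MyDatabase_Secondary',
    FILENAME = N'D:\Data\MyDatabase_Secondary.ndf',
    SIZE = 1024MB
) TO FILEGROUP FG_Secondary;

-- 2+3. Re-create the clustered index on the new filegroup.
-- DROP_EXISTING = ON replaces the existing index in a single operation,
-- which also moves all of the table's in-row data to FG_Secondary.
CREATE CLUSTERED INDEX CIX_MyTable
    ON dbo.MyTable (Id)
    WITH (DROP_EXISTING = ON)
    ON FG_Secondary;
```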

To move the table to a different database, you could use the SELECT INTO syntax and then TRUNCATE the original table (wait, woah… did I really just say that?? 😬 I should probably say something here about how dangerous this is and prone to catastrophic user error! CAREFUL NOT TO LOSE YOUR DATA! Remember: Backups are your soul-mates).
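A sketch of the SELECT INTO approach (database and table names are hypothetical, and again: verify the copy and take a backup before truncating anything):

```sql
-- Copy the table out to another database...
SELECT * INTO StagingDB.dbo.BigTable_Copy
FROM dbo.BigTable;

-- ...VERIFY the copy first (row counts, checksums, a fresh backup)...

-- ...and only then reclaim the space in the source database:
TRUNCATE TABLE dbo.BigTable;
```

Keep in mind that SELECT INTO creates a plain heap: indexes, constraints, and triggers are not copied, so you’ll need to script them out separately if you intend to use the copy as-is.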

After that, you can try running your shrink operation again, and when you’re done, move the table back again (or don’t. I won’t judge. It’s your funeral 🤷‍♂️).

2. Shrink using smaller intervals

Moving your tables to different filegroups, dropping and creating indexes… Those are all serious schema changes, and they may not always be possible. But there might be a different solution that wouldn’t require any schema changes:

Instead of shrinking directly to your desired file size right away, perform the shrink in “chunks”. The smallest possible interval for SHRINK is 1 MB. You’d be surprised, but there are some scenarios where even such a small chunk can take upwards of 15 minutes to complete. This may not solve your issue entirely (or at all), but at least it’ll make your shrink operations more manageable.

To help you with this, I wrote the script below. Change the parameters to fine-tune it for your use case:

/*
----------------------------------------------------------------------------
Shrink a Database File in Specified Increments
----------------------------------------------------------------------------
Author: Eitan Blumin (t: @EitanBlumin | b: eitanblumin.com)
Creation Date: 2020-01-05
Last Update: 2020-08-23
----------------------------------------------------------------------------
Description:
This script uses small intervals to shrink a file (in the current database)
down to a specific size or percentage (of used space).
This can be useful when shrinking files with large heaps and/or LOBs
that can cause regular shrink operations to get stuck.
Change the parameter values below to customize the behavior.
----------------------------------------------------------------------------
!!! DON'T FORGET TO SET THE CORRECT DATABASE NAME !!!
----------------------------------------------------------------------------
Change log:
2020-08-23 - Added new parameters: @AGReplicaLinkedServer, @MaxReplicaRecoveryQueue, @RecoveryQueueSeverity, and @WhatIf
2020-06-22 - Added @RegrowOnError5240 parameter
2020-06-21 - Added @DelayBetweenShrinks and @IterationMaxRetries parameters
2020-03-18 - Added @DatabaseName parameter
2020-01-30 - Added @MinPercentFree, and made all parameters optional
2020-01-05 - First version
----------------------------------------------------------------------------
Parameters:
----------------------------------------------------------------------------
*/
DECLARE
 @DatabaseName SYSNAME = NULL -- Leave NULL to use current database context
,@FileName SYSNAME = NULL -- Leave NULL to shrink the file with the highest % free space
,@TargetSizeMB INT = 20000 -- Leave NULL to rely on @MinPercentFree exclusively.
,@MinPercentFree INT = 80 -- Leave NULL to rely on @TargetSizeMB exclusively.
-- Either @TargetSizeMB or @MinPercentFree must be specified.
-- If both @TargetSizeMB and @MinPercentFree are provided, the largest of them will be used.
,@IntervalMB INT = 50 -- Leave NULL to shrink the file in a single interval
,@DelayBetweenShrinks VARCHAR(12) = '00:00:01' -- Delay to wait between shrink iterations (in 'hh:mm[[:ss].mss]' format). Leave NULL to disable delay. For more info, see the 'time_to_execute' argument of WAITFOR DELAY: https://docs.microsoft.com/en-us/sql/t-sql/language-elements/waitfor-transact-sql?view=sql-server-ver15#arguments
,@IterationMaxRetries INT = 3 -- Maximum number of attempts per iteration to shrink a file, when cannot successfully shrink to desired target size
,@RegrowOnError5240 BIT = 1 -- Error 5240 may be resolved by temporarily increasing the file size before shrinking it again.
,@AGReplicaLinkedServer SYSNAME = NULL -- Linked Server name of the AG replica to check. Leave as NULL to ignore.
,@MaxReplicaRecoveryQueue INT = 10000 -- Maximum recovery queue of AG replica (in KB). Use this to prevent overload on the AG.
,@RecoveryQueueSeverity INT = 16 -- Error severity to raise when @MaxReplicaRecoveryQueue is breached.
,@WhatIf BIT = 0 -- Set to 1 to only print the commands but not run them.
--------------------------------------------------------------------------
-- DON'T CHANGE ANYTHING BELOW THIS LINE --
--------------------------------------------------------------------------
SET NOCOUNT, ARITHABORT, XACT_ABORT ON;
SET ANSI_WARNINGS OFF;
DECLARE @CurrSizeMB INT, @StartTime DATETIME, @sp_executesql NVARCHAR(1000), @CMD NVARCHAR(MAX), @SpaceUsedMB INT, @SpaceUsedPct VARCHAR(10), @TargetPct VARCHAR(10);
DECLARE @NewSizeMB INT, @RetryNum INT
SET @DatabaseName = ISNULL(@DatabaseName, DB_NAME());
SET @RetryNum = 0;
IF @DatabaseName IS NULL
BEGIN
RAISERROR(N'Database "%s" was not found on this server.',16,1,@DatabaseName);
GOTO Quit;
END
IF DATABASEPROPERTYEX(@DatabaseName, 'Updateability') <> 'READ_WRITE'
BEGIN
RAISERROR(N'Database "%s" is not writeable.',16,1,@DatabaseName);
GOTO Quit;
END
IF @TargetSizeMB IS NULL AND @MinPercentFree IS NULL
BEGIN
RAISERROR(N'Either @TargetSizeMB or @MinPercentFree must be specified!', 16, 1);
GOTO Quit;
END
IF @IntervalMB < 1
BEGIN
RAISERROR(N'@IntervalMB must be an integer value of 1 or higher (or NULL if you want to shrink using a single interval)', 16,1)
GOTO Quit;
END
SET @sp_executesql = QUOTENAME(@DatabaseName) + '..sp_executesql'
DECLARE @RecoveryQueueCheckCmd NVARCHAR(4000), @RecoveryQueueCheckParams NVARCHAR(4000), @PartnerServer SYSNAME, @RecoveryQueue INT
DECLARE @RecoveryQueueCheckExec NVARCHAR(4000);
IF @AGReplicaLinkedServer IS NOT NULL AND @MaxReplicaRecoveryQueue IS NOT NULL
BEGIN
IF NOT EXISTS (SELECT * FROM sys.servers WHERE server_id > 0 AND name = @AGReplicaLinkedServer)
BEGIN
RAISERROR(N'Specified linked server "%s" was not found.', 16,1, @AGReplicaLinkedServer);
GOTO Quit;
END
IF @RecoveryQueueSeverity IS NULL OR @RecoveryQueueSeverity NOT BETWEEN 0 AND 16
BEGIN
RAISERROR(N'@RecoveryQueueSeverity "%d" is invalid. Must be an integer between 0 and 16.', 16, 1, @RecoveryQueueSeverity);
GOTO Quit;
END
SET @RecoveryQueueCheckParams = N'@DBNAME SYSNAME, @CounterName VARCHAR(1000), @PartnerServer SYSNAME OUTPUT, @CounterValue INT OUTPUT'
SET @RecoveryQueueCheckCmd = N'SELECT @PartnerServer = @@SERVERNAME, @CounterValue = cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE ''%:Database Replica%''
AND counter_name = @CounterName AND instance_name = @DBNAME'
SET @RecoveryQueueCheckExec = QUOTENAME(@AGReplicaLinkedServer) + '.' + @sp_executesql
IF @WhatIf = 1 PRINT @RecoveryQueueCheckCmd;
EXEC @RecoveryQueueCheckExec @RecoveryQueueCheckCmd, @RecoveryQueueCheckParams, @DatabaseName, 'Recovery Queue', @PartnerServer OUTPUT, @RecoveryQueue OUTPUT
IF @RecoveryQueue IS NULL
BEGIN
RAISERROR(N'Unable to fetch "Recovery Queue" for database "%s" via linked server "%s".', 16, 1, @DatabaseName, @AGReplicaLinkedServer);
GOTO Quit;
END
ELSE
BEGIN
RAISERROR(N'Successfully connected to replica server "%s". Current recovery queue for database "%s": %d KB.', 0, 1, @PartnerServer, @DatabaseName, @RecoveryQueue) WITH NOWAIT;
END
END
SET @CMD = N'
SELECT TOP 1
@FileName = [name]
,@CurrSizeMB = size / 128
,@SpaceUsedMB = CAST(FILEPROPERTY([name], ''SpaceUsed'') AS int) / 128.0
FROM sys.database_files
WHERE ([name] = @FileName OR @FileName IS NULL)
AND ([size] / 128 > @TargetSizeMB OR @TargetSizeMB IS NULL OR [name] = @FileName)
AND type IN (0,1) -- data and log files only
ORDER BY CAST(FILEPROPERTY([name], ''SpaceUsed'') AS float) / size ASC;'
IF @WhatIf = 1 PRINT @CMD;
EXEC @sp_executesql @CMD, N'@FileName SYSNAME OUTPUT, @CurrSizeMB INT OUTPUT, @SpaceUsedMB INT OUTPUT, @TargetSizeMB INT'
, @FileName OUTPUT, @CurrSizeMB OUTPUT, @SpaceUsedMB OUTPUT, @TargetSizeMB
SET @TargetSizeMB = (
SELECT MAX(val)
FROM (VALUES
(@TargetSizeMB),(CEILING(@SpaceUsedMB / (CAST(@MinPercentFree as float) / 100.0)))
) AS v(val)
)
SET @SpaceUsedPct = CAST( CEILING(@SpaceUsedMB * 100.0 / @CurrSizeMB) as varchar(10)) + '%'
SET @TargetPct = CAST( CEILING(@SpaceUsedMB * 100.0 / @TargetSizeMB) as varchar(10)) + '%'
IF @SpaceUsedMB IS NOT NULL
RAISERROR(N'-- File "%s" current size: %d MB, used space: %d MB (%s), target size: %d MB (%s)',0,1,@FileName,@CurrSizeMB,@SpaceUsedMB,@SpaceUsedPct,@TargetSizeMB,@TargetPct) WITH NOWAIT;
IF @SpaceUsedMB IS NULL OR @CurrSizeMB <= @TargetSizeMB
BEGIN
PRINT N'-- Nothing to shrink'
GOTO Quit;
END
WHILE @CurrSizeMB > @TargetSizeMB
BEGIN
SET @CurrSizeMB = @CurrSizeMB - @IntervalMB
IF @CurrSizeMB < @TargetSizeMB OR @IntervalMB IS NULL SET @CurrSizeMB = @TargetSizeMB
SET @CMD = N'DBCC SHRINKFILE (N' + QUOTENAME(@FileName, N'''') + N' , ' + CONVERT(nvarchar, @CurrSizeMB) + N') WITH NO_INFOMSGS; -- ' + CONVERT(nvarchar(25),GETDATE(),121)
RAISERROR(N'%s',0,1,@CMD) WITH NOWAIT;
IF @WhatIf = 1
PRINT N'-- @WhatIf was set to 1. Skipping execution.'
ELSE
BEGIN
BEGIN TRY
EXEC @sp_executesql @CMD
END TRY
BEGIN CATCH
-- Error 5240: File ID %d of database ID %d cannot be shrunk as it is either being shrunk by another process or is empty.
IF @RegrowOnError5240 = 1 AND ERROR_NUMBER() = 5240
BEGIN
-- This error can be solved by increasing the file size a bit before shrinking again
SET @CMD = N'ALTER DATABASE ' + QUOTENAME(@DatabaseName) + N' MODIFY FILE (NAME = ' + QUOTENAME(@FileName, N'''') + N', SIZE = ' + CONVERT(nvarchar, @CurrSizeMB + @IntervalMB) + N'MB); -- ' + CONVERT(nvarchar(25),GETDATE(),121)
PRINT N'-- Error 5240 encountered. Regrowing:'
RAISERROR(N'%s',0,1,@CMD) WITH NOWAIT;
EXEC @sp_executesql @CMD
END
ELSE
THROW;
END CATCH
-- Re-check new file size
EXEC @sp_executesql N'SELECT @NewSizeInMB = [size]/128 FROM sys.database_files WHERE [name] = @FileName;'
, N'@FileName SYSNAME, @NewSizeInMB FLOAT OUTPUT', @FileName, @NewSizeMB OUTPUT
-- See if target size was successfully reached
IF @NewSizeMB > @CurrSizeMB
BEGIN
IF @RetryNum >= @IterationMaxRetries
BEGIN
RAISERROR(N'-- Unable to shrink below %d MB. Stopping operation after %d retries.', 12, 1, @NewSizeMB, @RetryNum) WITH NOWAIT;
BREAK;
END
ELSE
BEGIN
SET @RetryNum = @RetryNum + 1;
SET @CurrSizeMB = @NewSizeMB;
RAISERROR(N'-- Unable to shrink below %d MB. Retry attempt %d...', 0, 1, @NewSizeMB, @RetryNum) WITH NOWAIT;
END
END
ELSE
SET @RetryNum = 0;
-- Sleep between iterations
IF @DelayBetweenShrinks IS NOT NULL
WAITFOR DELAY @DelayBetweenShrinks;
-- Check recovery queue of AG partner
IF @AGReplicaLinkedServer IS NOT NULL AND @MaxReplicaRecoveryQueue IS NOT NULL
BEGIN
SET @RecoveryQueue = NULL;
EXEC @RecoveryQueueCheckExec @RecoveryQueueCheckCmd, @RecoveryQueueCheckParams, @DatabaseName, 'Recovery Queue', @PartnerServer OUTPUT, @RecoveryQueue OUTPUT
IF @RecoveryQueue > @MaxReplicaRecoveryQueue
BEGIN
RAISERROR(N'-- Stopping because the recovery queue in server "%s" has reached %d KB.', @RecoveryQueueSeverity, 1, @PartnerServer, @RecoveryQueue);
GOTO Quit;
END
END
END
END
PRINT N'-- Done - ' + CONVERT(nvarchar(25),GETDATE(),121)
Quit:
The script is available in my Gists

In general, it’s a good idea to always shrink huge files gradually, even if you don’t expect any trouble. After all, that’s when trouble tends to bite the hardest – when you least expect it.

Conclusion

You should be 100% certain that you want to perform that SHRINK operation.
Are you sure you wanna do it?
Are you sure you’re willing to suffer the consequences of fragmentation, and the possible performance issues resulting from file auto-growth? 🤔

If you do, I hope I gave you a few tools to, at the very least, make your SHRINK operation faster and easier to manage.

Good luck! 👍



Cover Image by Michal Jarmoluk from Pixabay

3 comments

  1. Hi Eitan: Is this script compatible with SQL Server 2008 R2? It throws me the following error…

    “Shrink a Database File in Specified Increments”

    Msg 102, Level 15, State 1, Line 136
    Incorrect syntax near ‘THROW’.

    • Thanks, Raul!
      The script is supposed to work on SQL 2008 R2, but it appears that I mistakenly used the THROW command, which was only introduced in newer versions.

      I’ll promptly fix the script with a proper backward-compatible version.
