
The Curious Case of… Collations and Performance in SQL Server

One fateful night, cold and dreary, I stumbled upon the way collations can significantly affect performance in SQL Server… Yes, really!

Problem

While normalizing a table in their database, one of my clients ran into weird and inconsistent performance problems, and couldn’t find a solution for quite a while.

They had two comparable queries that applied a LIKE filter on the column being normalized:

The Original (Un-Normalized) Query:
Scanned 2.5 million rows and ran for 3 seconds.
It looked something like this:

SELECT someColumns 
FROM MyTable 
WHERE StringColumn LIKE 'SomeText%'

The New (Normalized) Query:
Scanned 10 million rows and ran for 50 seconds.
It looked something like this:

SELECT someColumns 
FROM MyTableNormalized AS main 
JOIN MyLookup AS lu ON main.TextID = lu.ID 
WHERE lu.StringColumn LIKE 'SomeText%' 

Each query was executed against a different table, but the data in the tables was identical (the difference in rows scanned was due to the normalization’s effect on the execution plan).
The column we filtered on was varchar(255) in both tables.

For a while we couldn’t wrap our heads around the reason for the huge difference in performance. I mean, it’s true that 10 million is bigger than 2.5 million. But nowhere near a 50-to-3 scale. Something just didn’t seem right.


Solution

During my investigation I noticed an intriguing difference in the execution plans: in the first query (the fast one), an implicit conversion was performed on the varchar column, but no such conversion occurred in the slow query. At first I didn’t pay attention to it because, as a DBA, I was trained to think that implicit or explicit conversions HURT performance, not the other way around – so this couldn’t have been the cause.

But eventually (and with a little push from Shy Engelberg – Thank you Shy!) I took a closer look to find out why this conversion was happening in the first place.

We found that the collation of the varchar column was different between the two tables (most likely due to human error during the normalization process).

The first table had a Latin SQL collation, while the second table had an ANSI collation (or “Windows collation”, as Microsoft calls them). The implicit conversion in the first query happened due to a mismatch with the filter parameter’s data type.
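
By the way, finding this kind of mismatch is easy once you know to look for it. Here’s a quick sanity check – a sketch using the illustrative table names from above:

-- Compare the collation of the filtered column across the tables involved
SELECT OBJECT_NAME(c.object_id) AS TableName,
       c.name AS ColumnName,
       c.collation_name AS CollationName
FROM sys.columns AS c
WHERE c.object_id IN (OBJECT_ID('dbo.MyTable'), OBJECT_ID('dbo.MyLookup'))
AND c.name = 'StringColumn';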

We tried changing the collation of the second (normalized) table to the same SQL collation – and presto! The query now ran for 5 seconds – 10 times faster than before! (and only about twice as slow as the 3-second query, a much more sensible difference).
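
For reference, the change itself is a one-liner per column. This is only a sketch using the illustrative names from above, and the exact collation name here is illustrative too; note that any indexes or constraints referencing the column would have to be dropped and recreated around it:

-- Align the normalized lookup table's column with the original table's SQL collation
ALTER TABLE dbo.MyLookup
    ALTER COLUMN StringColumn VARCHAR(255) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL;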

Why did this happen?

I looked for materials regarding this issue, and couldn’t find much except this (apparently important) white paper: http://support.microsoft.com/kb/322112

Here are the relevant highlights:

Unicode sorting rules are much more complex than the rules for a non-Unicode SQL sort order. […]

If you are storing and handling your data by using non-Unicode data types (char, varchar, text), and [if] you are using a SQL collation, string comparisons will be performed with a non-Unicode SQL sort order.

If you are storing and handling your data by using non-Unicode data types (char, varchar, text), and [if] you are using a Windows collation, string comparisons will be performed with the Unicode sorting rules. This may cause certain operations that are unusually dependent on string sorting performance to take longer and to use more CPU than a similar operation that is performed with a SQL collation.

If you are using Unicode data types (nchar, nvarchar, ntext), there is no difference in the sorting behavior for the SQL and the Windows collations. Both will use Unicode sorting rules.

In short:

Windows collations perform as slowly as Unicode data types, while SQL collations perform much faster.
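
As a side note, if changing the column itself isn’t an option, a collation can also be forced per-query using the COLLATE clause. This is only a sketch of the idea, reusing the illustrative names from above – keep in mind it changes matching semantics (a binary collation is case- and accent-sensitive) and can prevent index seeks on that column:

-- Force a binary collation for this comparison only
SELECT someColumns
FROM MyTableNormalized AS main
JOIN MyLookup AS lu ON main.TextID = lu.ID
WHERE lu.StringColumn COLLATE Latin1_General_100_BIN2 LIKE 'SomeText%';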

However, in such cases the sorting order must be taken into consideration, because changing the collation could change query results. In this client’s case, though, the first table’s collation was the correct one to begin with, so we were actually “fixing” a sorting-order difference (if one existed at all).

Later we also tested the performance of a binary collation, and saw that it was even faster in queries that required comparison and grouping, but it performed worse in queries that required sorting. This makes sense, since binary collations must be parsed textually before being sorted – a heavier operation than doing the same for SQL collations – as opposed to a comparison operation, which can be as fast as XOR-ing two values.
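
To see why the sorting caveat matters, here’s a tiny self-contained demo (the sample values are mine) showing that the same data sorts differently under a SQL collation and a binary collation:

DECLARE @t TABLE (s VARCHAR(10));
INSERT INTO @t (s) VALUES ('apple'), ('Banana'), ('cherry');

-- SQL collation: case-insensitive dictionary order -> apple, Banana, cherry
SELECT s FROM @t ORDER BY s COLLATE SQL_Latin1_General_CP1_CI_AS;

-- Binary collation: byte-value order, so uppercase sorts first -> Banana, apple, cherry
SELECT s FROM @t ORDER BY s COLLATE Latin1_General_100_BIN2;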

Lab Experiment

The T-SQL script below can be used to simulate and demonstrate the performance impact of different collations in SQL Server.

The script will do the following:

  • Create 4 tables, each with a single textual column, and each with a different collation type: Unicode, Windows, SQL, and Binary.
  • Generate, let’s say, 1 MILLION RECORDS in one of the tables, and then COPY the exact same records into each of the other tables.
  • Turn on STATISTICS IO and STATISTICS TIME to start measuring IO and CPU impact.
  • On each of the 4 tables, run a query that can only be implemented using a table scan.
  • Turn off STATISTICS IO and STATISTICS TIME to stop measuring IO and CPU impact, and examine the results in the messages pane.

-- Recreate the 4 test tables, one per collation type
DROP TABLE IF EXISTS UnicodeTable;
DROP TABLE IF EXISTS WindowsTable;
DROP TABLE IF EXISTS SQLTable;
DROP TABLE IF EXISTS BinTable;

-- Unicode data type (its sorting rules don't depend on the collation type)
CREATE TABLE UnicodeTable
(StringColumn NVARCHAR(255) NOT NULL);

-- Windows collation on a non-Unicode column
CREATE TABLE WindowsTable
(StringColumn VARCHAR(255) COLLATE Hebrew_CI_AS NOT NULL);

-- SQL collation on a non-Unicode column
CREATE TABLE SQLTable
(StringColumn VARCHAR(255) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL);

-- Binary collation on a non-Unicode column
CREATE TABLE BinTable
(StringColumn VARCHAR(255) COLLATE Hebrew_100_BIN2 NOT NULL);
GO

SET NOCOUNT ON;

-- 10,000 random strings per batch x 100 batches (GO 100) = 1 million rows
INSERT INTO UnicodeTable
SELECT TOP 10000 NEWID()
FROM sys.all_columns a CROSS JOIN sys.all_columns b
GO 100

-- Copy the exact same records into the other 3 tables
INSERT INTO WindowsTable
SELECT StringColumn FROM UnicodeTable;
GO

INSERT INTO SQLTable
SELECT StringColumn FROM UnicodeTable;
GO

INSERT INTO BinTable
SELECT StringColumn FROM UnicodeTable;
GO

SET STATISTICS TIME, IO ON;

PRINT N'
===================
BinTable:
===================
'
SELECT COUNT(*) AS RowsFoundInBinTable
FROM BinTable
WHERE StringColumn LIKE '%ABCD%'
OPTION(MAXDOP 1);

PRINT N'
===================
SQLTable:
===================
'
SELECT COUNT(*) AS RowsFoundInSQLTable
FROM SQLTable
WHERE StringColumn LIKE '%ABCD%'
OPTION(MAXDOP 1);

PRINT N'
===================
WindowsTable:
===================
'
SELECT COUNT(*) AS RowsFoundInWindowsTable
FROM WindowsTable
WHERE StringColumn LIKE '%ABCD%'
OPTION(MAXDOP 1);

PRINT N'
===================
UnicodeTable:
===================
'
SELECT COUNT(*) AS RowsFoundInUnicodeTable
FROM UnicodeTable
WHERE StringColumn LIKE '%ABCD%'
OPTION(MAXDOP 1);

SET STATISTICS TIME, IO OFF;
GO

Also note how, just to prove a point, I started the measurements with the tables that we expect to run faster, so that you won’t be coming to me with excuses about “buffer cache” and whatnot causing the subsequent queries to run faster. I also added OPTION(MAXDOP 1) to all queries to eliminate any unexpected behavior caused by parallelism, making the results more consistent.

Here is the output for one such execution on my laptop:

=========================
BinTable:
=========================
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
Table 'BinTable'. Scan count 1, logical reads 8169, physical reads 0, page server reads 0, read-ahead reads 0, page server read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob page server reads 0, lob read-ahead reads 0, lob page server read-ahead reads 0.
SQL Server Execution Times:
CPU time = 297 ms, elapsed time = 300 ms.
=========================
SQLTable:
=========================
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
Table 'SQLTable'. Scan count 1, logical reads 8169, physical reads 0, page server reads 0, read-ahead reads 0, page server read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob page server reads 0, lob read-ahead reads 0, lob page server read-ahead reads 0.
SQL Server Execution Times:
CPU time = 391 ms, elapsed time = 412 ms.
=========================
WindowsTable:
=========================
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
Table 'WindowsTable'. Scan count 1, logical reads 8169, physical reads 0, page server reads 0, read-ahead reads 0, page server read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob page server reads 0, lob read-ahead reads 0, lob page server read-ahead reads 0.
SQL Server Execution Times:
CPU time = 1937 ms, elapsed time = 1939 ms.
=========================
UnicodeTable:
=========================
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
Table 'UnicodeTable'. Scan count 1, logical reads 18382, physical reads 0, page server reads 0, read-ahead reads 0, page server read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob page server reads 0, lob read-ahead reads 0, lob page server read-ahead reads 0.
SQL Server Execution Times:
CPU time = 2000 ms, elapsed time = 1992 ms.

A few interesting things to note here:

  • The logical reads for all queries except the one on the Unicode data show the exact same value: 8,169 reads.
  • The Unicode table uses twice as much storage, so it incurs twice as many logical reads: 18,382 (see the quick check after this list).
  • The CPU time differences are where this gets really interesting: the tables with the Binary and SQL collations performed quite well – 391 ms for the SQL collation, and 297 ms for the Binary collation.
  • But check out the Windows and Unicode collations: 1,937 ms and 2,000 ms! Almost identical, with Unicode being just a tiny little bit slower. And both are almost exactly 10 times slower than their SQL and Binary counterparts! Hey, what do you know – that’s exactly the ratio difference from my actual use case!
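The doubled reads, by the way, are simply the doubled storage of NVARCHAR at work, as this quick check illustrates:

-- NVARCHAR stores 2 bytes per character, VARCHAR stores 1 -
-- hence roughly twice the pages, and twice the logical reads
SELECT DATALENGTH('ABCD') AS VarcharBytes,   -- 4
       DATALENGTH(N'ABCD') AS NvarcharBytes; -- 8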

Your results may vary, of course, but the ratios will be more or less the same.

We could also run a similar experiment to measure sorting performance, by doing something like creating an index on the textual column in each of the tables:
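
Here’s roughly what such a script could look like (the index names are mine), reusing the four tables from the script above:

SET STATISTICS TIME, IO ON;

PRINT N'=================
BinTable:';
CREATE INDEX IX_BinTable_StringColumn ON BinTable (StringColumn);

PRINT N'=================
SQLTable:';
CREATE INDEX IX_SQLTable_StringColumn ON SQLTable (StringColumn);

PRINT N'=================
WindowsTable:';
CREATE INDEX IX_WindowsTable_StringColumn ON WindowsTable (StringColumn);

PRINT N'=================
UnicodeTable:';
CREATE INDEX IX_UnicodeTable_StringColumn ON UnicodeTable (StringColumn);

SET STATISTICS TIME, IO OFF;

And here is the output of that experiment: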

=================
BinTable:
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
Table 'BinTable'. Scan count 9, logical reads 7813, physical reads 0, page server reads 0, read-ahead reads 0, page server read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob page server reads 0, lob read-ahead reads 0, lob page server read-ahead reads 0.
SQL Server Execution Times:
CPU time = 2252 ms, elapsed time = 376 ms.
=================
SQLTable:
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
Table 'SQLTable'. Scan count 9, logical reads 7813, physical reads 0, page server reads 0, read-ahead reads 0, page server read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob page server reads 0, lob read-ahead reads 0, lob page server read-ahead reads 0.
SQL Server Execution Times:
CPU time = 2578 ms, elapsed time = 417 ms.
=================
WindowsTable:
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
Table 'WindowsTable'. Scan count 9, logical reads 7813, physical reads 0, page server reads 0, read-ahead reads 0, page server read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob page server reads 0, lob read-ahead reads 0, lob page server read-ahead reads 0.
SQL Server Execution Times:
CPU time = 5563 ms, elapsed time = 827 ms.
=================
UnicodeTable:
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
Table 'UnicodeTable'. Scan count 9, logical reads 12346, physical reads 0, page server reads 0, read-ahead reads 0, page server read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob page server reads 0, lob read-ahead reads 0, lob page server read-ahead reads 0.
SQL Server Execution Times:
CPU time = 2780 ms, elapsed time = 456 ms.

Again, take note of the differences in logical reads and CPU times.
This time, all tables performed very similarly in terms of CPU, with only minor differences – except the Windows collation!
That one took twice as long as the rest, even slower than the Unicode collation!

Conclusion

Choosing the correct collation appears to be another consideration to take into account when designing a database and investigating performance issues – but beware! Changing the collation may change the sorting order of your text, so if your application depends on a certain sort order – you need to make sure the new collation matches it.

The same white paper mentioned above also claims that changing indexes or rewriting a query will make a bigger difference than changing collations. But it’s surprising what a huge difference in performance a collation could make in this particular use case: we did try rewriting the query several times and changing indexes, and nothing made as much of a difference as the change in collation.

And the bottom line is, I guess…

Avoid Unicode and Windows collations if you care about performance.

This article was originally published by Eitan Blumin in November 2021, at www.madeiradata.com

Thanks again to Shy Engelberg for his very helpful advice on this one.
