Friday, October 27, 2006

Why "lock pages in memory" may increase I/O performance

I was reading SQL Server I/O Basics, Chapter 2 when I came across a very interesting section discussing the "Lock Pages In Memory" policy.

You all know that the AWE API is used to lock pages in memory, and this is true for 32-bit as well as 64-bit SQL Server installations. Do note that the "awe enabled" configuration setting exists on 64-bit installations, but it has no effect there. In order for this to work, the SQL Server service account needs the "Lock Pages In Memory" privilege. In a 32-bit environment the "awe enabled" configuration setting still has to be enabled.
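On a 32-bit installation the configuration setting itself is just an sp_configure call; the "Lock Pages In Memory" privilege, on the other hand, is granted to the service account through the Local Security Policy (secpol.msc), not through T-SQL. A minimal sketch (a restart of the service is needed before AWE takes effect):

-- 'awe enabled' is an advanced option, so expose it first
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Turn on AWE; this only has meaning on 32-bit installations
EXEC sp_configure 'awe enabled', 1;
RECONFIGURE;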

But the thing I really learned here is why I/O can be improved by this setting. Because the memory backing an I/O must not cause page faults, the memory normally has to be locked first, then the I/O goes through, and finally the memory has to be unlocked again. Having the "Lock Pages In Memory" privilege avoids this lock and unlock transition for every I/O and thus improves performance. This behavior can be disabled by removing the privilege or by enabling trace flag 835 (for 64-bit installations).
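Note that trace flag 835 is a startup trace flag, so it goes in the startup parameters of the instance (-T835) rather than in a DBCC TRACEON call; once the instance is running you can at least verify whether it is active:

-- Lists the status of trace flag 835 for all sessions
DBCC TRACESTATUS(835, -1);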

If you do decide to use AWE, and thus "Lock Pages In Memory", do test your solution thoroughly with this setting. It might cause memory pressure for other processes, which could lead to performance degradation. You might also want to consider specifying a max server memory setting when you enable this. One of the side effects of AWE is that the specified memory is grabbed as soon as SQL Server starts; as of Windows Server 2003 and SQL Server 2005 this has been solved through dynamic allocation of AWE memory.
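Capping the memory is again a simple sp_configure call; the 12288 MB below is just a placeholder value, pick one that leaves enough memory for the operating system and anything else running on the box:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- 12288 MB (12 GB) is only an example value
EXEC sp_configure 'max server memory (MB)', 12288;
RECONFIGURE;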

Another, unrelated, interesting fact mentioned in the paper is that using the Encrypting File System (EFS) stops I/O from being done in an asynchronous manner. This can of course hinder performance, especially for checkpoint operations. I can't say I ever thought about encrypting database files this way, but it's good to remember.

*EDIT*
I found KB article KB922121, which discusses the EFS behavior with SQL Server.

Thursday, October 26, 2006

The problem with ALTER TABLE

Recently we were trying to optimize some very large tables and noticed that a bigint was used where an int was perfectly suitable. In order to get rid of the wasted space we issued an ALTER TABLE to change the data type to int.
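As a minimal sketch, using the table name bigchange from the query further down and a made-up column name bignum:

-- bignum is a hypothetical column name; note that ALTER COLUMN
-- makes the column nullable unless you repeat NOT NULL
ALTER TABLE bigchange ALTER COLUMN bignum int NOT NULL;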

So far so good, until we checked the space allocated by the tables and noticed that the size hadn't changed. Another optimization we applied was creating a clustered index on the tables, and when we checked the size of the tables after this action we noticed that they had now become smaller than before.
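To reproduce what we saw, check the size, build the clustered index and check again (the index name and key column are made up for the example):

-- Reserved and used space before the index build
EXEC sp_spaceused 'bigchange';

-- Building the clustered index rewrites the table;
-- after this step we saw the allocated size shrink
CREATE CLUSTERED INDEX cix_bigchange ON bigchange (bignum);

-- Reserved and used space after the index build
EXEC sp_spaceused 'bigchange';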

Lucky for me I don't like SQL Server behavior I can't explain, so I started looking for answers. It was quite obvious that SQL Server wasn't changing the original size of the column, even though inserting a value larger than the allowed value resulted in "Arithmetic overflow error converting expression to data type int.", so the metadata was definitely altered correctly.
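A quick way to see that the metadata did change is inserting a value that fits in a bigint but not in an int (bignum is again the made-up column name):

-- 2147483648 is one more than the int maximum of 2147483647, so this
-- fails with "Arithmetic overflow error converting expression to data type int."
INSERT INTO bigchange (bignum) VALUES (2147483648);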

While looking for answers I came across a post by Kalen Delaney that explains our issue very well.

The post contains a very good query to check this behavior, and sure enough the offset of the column was still 4 bytes off (8-byte bigint vs. 4-byte int).

SELECT c.name AS column_name, c.column_id, pc.max_inrow_length,
       pc.system_type_id, pc.leaf_offset
FROM sys.system_internals_partition_columns pc
JOIN sys.partitions p ON p.partition_id = pc.partition_id
JOIN sys.columns c ON c.column_id = pc.partition_column_id
                  AND c.object_id = p.object_id
WHERE p.object_id = OBJECT_ID('bigchange');

So thank you very much Kalen for making me understand yet another SQL Server mystery :-)

SQL Server 2005 SP2

First rumours say it will be released in Q1 2007.

The userstore cache problem (which we know VERY well) should be fixed too :-)

It's quite necessary too, since SQL Server 2005 will only be supported on Windows Vista as of Service Pack 2.

*EDIT*

This seems to be not entirely true: it is supported with SP1, but you have to look carefully at how to install it on Vista.