Wednesday, August 01, 2007

Some new features in SQL Server 2008 - July CTP

Date and time

As many of you already know, we finally have separate date and time datatypes in SQL Server 2008. These are not the only new options for date and time, however; there are a couple of other new datatypes like datetime2 and datetimeoffset. A quick sketch using the new types follows the list below.

  • Date
    The date datatype takes 3 bytes and has a range from 0001-01-01 through 9999-12-31.
  • Time
    The time datatype takes 3 to 5 bytes depending on the fractional second precision. Time has a range from 00:00:00.0000000 through 23:59:59.9999999. The default precision is 7, meaning hh:mm:ss.nnnnnnn, and depending on the precision you specify - with time(x) where x is a number from 0 to 7 - the storage size differs. From 0 to 2 time will take 3 bytes, from 3 to 4 it will take 4 bytes and from 5 to 7 it will take 5 bytes.
  • Datetime2
    Although this makes me shiver because it makes me think of varchar2, it is a great datatype. Datetime2 takes from 6 to 8 bytes and has a range of 0001-01-01 through 9999-12-31. It also implements the same fractional second precision option as the time datatype. From 0 to 2 datetime2 will take 6 bytes, from 3 to 4 it will take 7 bytes and from 5 to 7 it will take 8 bytes.
  • Datetimeoffset
    The datetimeoffset datatype takes from 8 to 10 bytes and has the same properties as datetime2, but the difference with the other datatypes is that datetimeoffset is timezone aware. The fractional second precision again defines the storage size: from 0 to 2 datetimeoffset will take 8 bytes, from 3 to 4 it will take 9 bytes and from 5 to 7 it will take 10 bytes. There are also a couple of datetimeoffset related functions like SWITCHOFFSET and TODATETIMEOFFSET.
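
Here is a minimal sketch of how the new types can be declared and how the two offset functions fit together. This is based on the documented signatures; I have not verified every detail against this CTP.

--Declare the new types (precision arguments are optional)
DECLARE @d date
DECLARE @t time(3)
DECLARE @dt2 datetime2(7)
DECLARE @dto datetimeoffset(2)

SET @d = '2007-08-01'
SET @t = '23:59:59.123'
SET @dt2 = '2007-08-01 23:59:59.1234567'
SET @dto = '2007-08-01 23:59:59.12 +02:00' --timezone aware

--SWITCHOFFSET changes the offset while keeping the same UTC point in time,
--TODATETIMEOFFSET stamps an offset onto a date/time value
SELECT SWITCHOFFSET(@dto, '-05:00') AS switched,
TODATETIMEOFFSET(@dt2, '+02:00') AS stamped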

HierarchyID

Another new datatype is the hierarchyid datatype, which represents the position in a tree hierarchy. I did not have the time to play with it yet but from what I have read it implements two special indexing methods, depth-first and breadth-first. It also has a couple of related functions like GetAncestor, GetDescendant, IsDescendant, ...
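
A rough sketch of what a hierarchyid table could look like, based on the documented methods (GetRoot, GetDescendant, GetAncestor, ToString). I have not run this against the CTP yet, so treat the exact method names as an assumption.

--Assumed table and column names, for illustration only
CREATE TABLE Org
(Node hierarchyid PRIMARY KEY, Name varchar(50))
GO
DECLARE @root hierarchyid
SET @root = hierarchyid::GetRoot()
INSERT INTO Org VALUES (@root, 'CEO')
--GetDescendant(NULL, NULL) generates a first child under @root
INSERT INTO Org VALUES (@root.GetDescendant(NULL, NULL), 'Manager')
--GetAncestor(1) walks one level up the tree
SELECT Name, Node.ToString() AS NodePath, Node.GetAncestor(1).ToString() AS Parent
FROM Org
GO
DROP TABLE Org
GO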

Object Dependencies

The new management views sys.sql_expression_dependencies, sys.dm_sql_referencing_entities, and sys.dm_sql_referenced_entities give you a view of dependencies that is always up to date. They even allow you to see cross-server and cross-database dependencies.
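
A small sketch of how the new functions are meant to be called (the object names are made up, the parameters are the documented ones):

--Everything that dbo.myProc references (tables, views, other procedures, ...)
SELECT referenced_schema_name, referenced_entity_name, referenced_minor_name
FROM sys.dm_sql_referenced_entities('dbo.myProc', 'OBJECT')
--Everything that references dbo.myTable
SELECT referencing_schema_name, referencing_entity_name
FROM sys.dm_sql_referencing_entities('dbo.myTable', 'OBJECT')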

Page corruption

The new management view sys.dm_db_mirroring_auto_page_repair contains information about automatic repair attempts on mirrored databases (page restores from the mirror database).
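
Keeping an eye on it is as simple as querying the view:

--Check whether any automatic page repair attempts have occurred
SELECT * FROM sys.dm_db_mirroring_auto_page_repair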

SQL Server Extended Events

Built-in support for the ETW (Event Tracing for Windows) engine helps you trace, log and debug problems. For more information about using ETW with SQL Server, check out this post.

I recommend you play with SQL Server 2008 yourself since there is already a very long list of new things to learn in this CTP! There are still a couple of other features in the July CTP which I have not seen yet, so I hope I will find some time to experiment with those too.

TokenAndPermUserStore continued

Dirk G. from MCS Belgium informed me about another hotfix for the TokenAndPermUserStore issue. Build 3179 contains another fix for this cache (the latest build is 3182 by the way). The official description is "The TokenAndPermUserStore cache store may continue to grow steadily and decrease performance".

Check out the hotfixes here. Unfortunately they are on-demand so you will have to contact Microsoft to obtain this build.

Tuesday, July 31, 2007

SQL Server 2008 - July CTP

It's play time :-)
A lot of people will be very happy... date and time are available as separate datatypes!

More information here and download here.

Wednesday, July 25, 2007

MemToLeave Issue in SQL Server 2000 x86

This week we were getting "WARNING: Failed to reserve contiguous memory of Size= 131072." errors on a couple of our SQL Server 2000 machines. Since we had a lot of available memory we quickly suspected VAS fragmentation, but we needed to find the root cause and confirm that it was indeed VAS fragmentation. This is a bit harder on SQL Server 2000, which has far fewer options to investigate this than SQL Server 2005.

VAS (Virtual Address Space) is the 4 GB region (in 32-bit) where 2 GB is reserved for kernel mode and the other 2 GB for user mode (unless you specify /3GB). Every memory allocation (including AWE mapping) for an application happens in the user mode portion of the VAS.

By default SQL Server reserves 256MB in the MemToLeave region plus an additional part that is calculated as worker threads * stack size (0.5MB for SQL Server 2000 x86). This memory is reserved in the VAS and used for special memory allocations like extended stored procedures, sp_OA procedures, CLR integration and some specific memory managers. One of the properties of MemToLeave memory is that it is static; it can only be increased at startup by using the -g switch. If you wish to do this, do so with great caution, as the VAS is only 2 GB or 3 GB on a 32-bit system. As a side note, allocations from this region are done by the Multi Page Allocator in SQL Server 2005, meaning the memory is not stolen from the buffer pool and consequently is not included in the Max Server Memory setting.
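
As a rough back-of-the-envelope calculation, assuming the default -g value of 256 MB and the default of 255 worker threads on SQL Server 2000:

--Approximate default MemToLeave reservation for SQL Server 2000 x86:
--256 MB (-g default) + max worker threads (255 by default) * 0.5 MB stack size
SELECT 256 + (255 * 0.5) AS approx_memtoleave_mb --roughly 384 MB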

We contacted PSS to help us find the root cause and they instructed us to start SQL Server with -T3654, which allows you to run a command that returns information about the memory allocations SQL Server has done: DBCC MEMOBJLIST. Restarting SQL Server would have resolved the whole issue but also limited our chances of finding the root cause. Since it was not a production server we decided to postpone the restart a little and continue looking. It was a good decision, since one of my colleagues noticed that a lot of trace files had been created at the same time. We have a central mechanism that uses server side traces to profile all our servers, and something had gone seriously wrong this weekend. When we issued a select * from ::fn_trace_getinfo(0) on the server it showed us that there were 270 traces running. Probably because of a little bug, the tool had started an incredible number of traces on the server.
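
A quick way to see how many traces are defined: fn_trace_getinfo returns a handful of property rows per trace, so counting the distinct trace ids gives the number of traces.

--Number of traces currently defined on the server
SELECT COUNT(DISTINCT traceid) AS trace_count
FROM ::fn_trace_getinfo(0)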

When we issued a DBCC MEMORYSTATUS we got the following results:

Dynamic Memory Manager    Buffers
Stolen                       3728
OS Reserved                 37528
OS Committed                37506
OS In Use                   37502
General                      1513
QueryPlan                    2420
Optimizer                       0
Utilities                   36900
Connection                    220


According to KB271624, 'Utilities' is used by various utility routines like BCP, the Log Manager, parallel queries, ::fn_trace_gettable and others. 'Others' also includes buffers used for tracing. Starting the 200+ traces caused our VAS to become heavily fragmented, leaving us with a lot of different errors like "There is not enough storage to complete this operation", "Failed to reserve contiguous memory", login failures, etc. As you can see, the 'Utilities' memory manager has 36900 buffers, which is a massive amount considering the limited MemToLeave size.

So remember, even if you have lots of available memory you can still run out of memory!

Tuesday, July 24, 2007

Visual Studio Team Edition for Database Professionals Service Release 1

Just a quick note for those of you who are lucky enough to work with VSDBPRO.
Service release 1 is available, more info here and KB936612.

Monday, July 23, 2007

Run batch multiple times

Kalen posted an awesome tip that is so simple and yet so cool I just had to refer to it.

Thursday, July 19, 2007

Even more USERSTORE_TOKENPERM

Reading this and this post reminded me of our very bad experience with the USERSTORE_TOKENPERM cache.

When we ran into our USERSTORE_TOKENPERM issue last year there was no real awareness of the problem yet (SQL Server 2005 x64 - Enterprise Edition Build 2153). It was so bad we had to revert to SQL Server 2000; things got so slow that SQL Server started to become unresponsive. After we got everything back to the original server with SQL Server 2000 we had a couple of weeks of detective work in front of us. An issue with this much impact needs to be investigated and the root cause found before you can even think about migrating again. We had done all the usual steps before migrating - stress testing, functional jobs, system jobs, ... - and yet we did not see the behavior it was showing in production.

We spent quite some time on the phone with an Escalation Engineer from Microsoft trying to pinpoint the root cause of our problem. After we convinced the engineer that the root cause was not parallelism, as he suggested, we finally got some vital information that helped us understand what was happening. When he told us about the USERSTORE_TOKENPERM cache and its behavior we started running some tests and now knew what we were looking for. Sure enough, the size of this cache just kept growing and the remove count certainly did not.

select distinct
cc.cache_address,
cc.name,
cc.type,
ch.clock_hand,
cc.single_pages_kb + cc.multi_pages_kb as total_kb,
cc.single_pages_in_use_kb + cc.multi_pages_in_use_kb as total_in_use_kb,
cc.entries_count,
cc.entries_in_use_count,
ch.removed_all_rounds_count,
ch.removed_last_round_count,
ch.rounds_count
from sys.dm_os_memory_cache_counters cc
join sys.dm_os_memory_cache_clock_hands ch on cc.cache_address = ch.cache_address
where cc.type = 'USERSTORE_TOKENPERM'
order by total_kb desc

Since it was announced that this problem would be fixed in Service Pack 2, we decided to create some test scripts that really put stress on this cache. We tried to come as close as possible to our production situation at that time, which was sp_executesql for the CUD operations (UpdateBatch). Putting pressure on this cache was quite easy: make the query text different each time so the statements get cached as different query plans, and execute each of them twice, since the first time the entry just gets inserted into the cache and the problem comes with the lookup. We forced this behavior by executing queries with varchar parameters that increase in length every iteration. Also remember to use a login that is not a sysadmin, since sysadmin logins cause SQL Server to skip the TokenAndPermUserStore check. We ended up creating a couple of scripts to do just this and also to monitor the cache size. The 'engine' behind it consists of simple command files and SQLCMD. Unfortunately I cannot post the scripts since they are the property of the customer.
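
Since I cannot share the originals, here is a rough, made-up sketch of the idea (this is not the customer's script): the varchar parameter declaration grows by one character every iteration, which produces a new cache entry each time, and every statement is executed twice so the second run exercises the lookup path. Remember to run it under a non-sysadmin login.

DECLARE @i int, @params nvarchar(100), @sql nvarchar(200)
SET @i = 1
WHILE @i <= 500
BEGIN
  --the parameter declaration grows every iteration, so each loop produces
  --a distinct cached entry for what is essentially the same statement
  SET @params = N'@p varchar(' + CAST(@i AS nvarchar(10)) + N')'
  SET @sql = N'SELECT COUNT(*) FROM sys.objects WHERE name <> @p'
  EXEC sp_executesql @sql, @params, @p = 'dummy' --first run: entry gets inserted
  EXEC sp_executesql @sql, @params, @p = 'dummy' --second run: lookup against the cache
  SET @i = @i + 1
END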

As far as we have seen from our tests the behavior has certainly improved in SP2. It takes a lot longer for the duration of the queries to increase, and it does not increase to the point where SQL Server becomes unresponsive, nor does the duration become unacceptably high. Since we are really pushing the cache to the limit it does still cause some increase in duration, but not nearly as much as under SP1. We do notice, however, that under the constant 'hammering' of these queries even SP2 has the greatest trouble keeping the cache under control, even when we call DBCC FREESYSTEMCACHE('TokenAndPermUserStore'). Obviously this scenario is not realistic since we really are trying to make it go wrong. Not only did the TokenAndPermUserStore behavior change in SP2, the size of the plan cache was also cut down dramatically.

The conclusion however is that under Service Pack 2 we do not experience the same catastrophic behavior as before.

A lot of time has passed and we are going to migrate to SQL Server 2005 on the 9th of September. We are using a whole new .NET Framework on the application tier with a whole new data access layer, so a lot of the factors that existed last time have now changed. We are starting our stress tests next month; if we find anything peculiar and I get the chance, I will let you know.

Hopefully this time there will not be any surprises :-(

Tuesday, July 17, 2007

The target of 'replace' must be at most one node

One of the new features of SQL Server 2005 is the integration of XML in the database. Although XML in the database is not my favorite architecture there are situations where it is useful. Since we have not really used XML in a real life project before we are slowly learning its limitations.

A limitation we recently discovered is that you cannot modify more than one node at a time using XML DML. This limitation is documented in the Books Online under the "replace value of" topic: "If multiple nodes are selected, an error is raised."

The only idea we could come up with is to loop through the nodes until all the nodes have been updated. Luckily for us, SQL Server also supports mixing XML DML with special functions like sql:variable() and sql:column().

Check out the code here since blogger is giving me issues with the XML string :-(
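
In the meantime, here is a rough sketch of the looping idea, with made-up element names (this is not the original code):

--Update every price attribute, one node per pass, since "replace value of"
--only accepts a single node
DECLARE @x xml, @i int, @cnt int
SET @x = N'<items><item price="1"/><item price="2"/><item price="3"/></items>'
SET @i = 1
SET @cnt = @x.value('count(/items/item)', 'int')
WHILE @i <= @cnt
BEGIN
  --sql:variable() lets us target the node for the current iteration
  SET @x.modify('replace value of (/items/item[position()=sql:variable("@i")]/@price)[1] with "0"')
  SET @i = @i + 1
END
SELECT @x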

Monday, July 16, 2007

Windows Home Server - RTM

There was not such a great buzz about this edition (at least not here in Belgium) but nevertheless it is an interesting product. It contains some nice features like a home network health monitor (which makes sure all updates are applied on your home PCs), full system restores, automatic backups and much more.

Check out the announcement here and more information here

Thursday, July 12, 2007

SQL Server 2008 Launch

SQL Server 2008, together with Visual Studio 2008 and Windows Server 2008, will be launched on the 27th of February. More information can be found here.

Greg Low once launched a nice suggestion for deferred constraints but unfortunately it did not make it into SQL Server 2008. But not to worry, there are a lot of other exciting features ;-)

There are also some nice features in Windows Server 2008. For the I/O lovers: earlier versions of Windows have an internal limit of 64 KB per I/O request; this limit has been removed in the new Windows core.

For Visual Studio 2008 we will see the integration of WF, WCF and WPF and of course LINQ.

Seeing the end of February date reminded me of a really stupid joke someone sent me.
So for the geeks among us:

It is March 1st and the first day of DBMS school
The teacher starts off with a roll call. Teacher: Oracle?
"Present sir"
Teacher: DB2?
"Present sir"
Teacher: SQL Server?
"Present sir"
Teacher: MySQL?
[Silence]
Teacher: MySQL?
[Silence]
Teacher: Where the hell is MySQL?
[In rushes MySQL, unshaved, hair a mess]
Teacher: Where have you been, MySQL?
"Sorry sir I thought it was February 31st"

Wednesday, July 04, 2007

SQL Server 2005 Best Practices Analyzer (July 2007)

Although the summer (which we have not seen much of in Belgium the last 2 weeks) has started, Microsoft is still on a roll. Check out the new SQL Server 2005 Best Practices Analyzer here.

By the way, for those of you who have not bought SQL Server 2005 Practical Troubleshooting: The Database Engine, it is really a must read for every SQL Server DBA. Apart from troubleshooting information it also contains a lot of 'inside' information about how SQL Server 2005 works. I just love that stuff, but maybe that's just me :-)

*EDIT*
A nice blog post here.

Monday, July 02, 2007

SQL DMVStats

Tom Davidson and Sanjay Mishra bring us a great set of reports and scripts to generate a Dynamic Management View Performance Data Warehouse.
Check out this great solution here.

This is actually something we had on our to do list for quite some time so this will save us a lot of development work.

Monday, June 25, 2007

Synonym

Synonyms are perhaps a little-known feature of SQL Server 2005, but they might come in handy when you are slowly changing your database to use schemas. Synonyms allow you to define an alias for your objects, which is very helpful when you are migrating to schemas.

By using synonyms you can change your object names while retaining your 'interface' to the application and this allows for a smoother and more phased migration.

Here is a little example with a table, but remember that this also works for stored procedures, functions, etc., and it is also possible to create synonyms for remote objects (via a linked server).


--Create a schema
CREATE SCHEMA SchemasRock
GO
--Create a table in the default schema
CREATE TABLE dbo.myTable
(ID int)
GO
--Insert a row
INSERT INTO dbo.myTable VALUES (1)
--Move the table to the new schema
ALTER SCHEMA SchemasRock TRANSFER dbo.myTable
--Insert another row (will fail!)
INSERT INTO dbo.myTable VALUES (2)
/*
Msg 208, Level 16, State 1, Line 1
Invalid object name 'dbo.myTable'.
*/
--Create a synonym to support 'old' table name
CREATE SYNONYM dbo.myTable FOR SchemasRock.myTable
--Insert another row
INSERT INTO dbo.myTable VALUES (2)
--Select the rows
SELECT * FROM dbo.myTable
SELECT * FROM SchemasRock.myTable
--Clean
DROP SYNONYM [dbo].[myTable]
DROP TABLE [SchemasRock].[myTable]
DROP SCHEMA [SchemasRock]

Wednesday, June 20, 2007

SQL Server 2005 SP2 Cumulative Hotfix 3175

We're at build 3175 with KB936305, the hotfix is not publicly available though.
For an overview of the hotfixes released after SP2 check out KB937137, it contains the list of fixes per build.

Monday, June 11, 2007

SQL Server 2005 Books Online (May 2007)

Back to the current release of SQL Server ;-)
Get the updated Books Online here.

Wednesday, June 06, 2007

Fun with the new MERGE statement

I found some time to play with CTP3 and checked out the MERGE statement. What you achieve with it may not be rocket science and was perfectly possible with a couple of IF statements but using MERGE gives you much cleaner code. It looks like Microsoft is investing a lot in 'cleaner' coding in SQL Server 2008, which I can only welcome with great enthusiasm.

A simple example will show you exactly what the MERGE statement does.

CREATE TABLE StagedData
(ID int, UserName varchar(20))
GO

CREATE TABLE RealData
(ID int, UserName varchar(20))
GO

INSERT INTO StagedData VALUES (1, 'Slava') , (2, 'Paul') , (3, 'Wesley')
GO

MERGE RealData r
USING
(SELECT ID, UserName FROM StagedData) staging
ON (r.ID = staging.ID)
WHEN MATCHED THEN UPDATE SET r.UserName = staging.UserName
WHEN TARGET NOT MATCHED THEN INSERT VALUES (ID, UserName)
WHEN SOURCE NOT MATCHED THEN DELETE;
GO
SELECT * FROM RealData
GO

UPDATE StagedData SET UserName = 'Kimberley' WHERE ID = 3
GO
MERGE RealData r
USING
(SELECT ID, UserName FROM StagedData) staging
ON (r.ID = staging.ID)
WHEN MATCHED THEN UPDATE SET r.UserName = staging.UserName
WHEN TARGET NOT MATCHED THEN INSERT VALUES (ID, UserName)
WHEN SOURCE NOT MATCHED THEN DELETE;
GO
SELECT * FROM RealData
GO

DELETE FROM StagedData WHERE ID = 3
GO
MERGE RealData r
USING
(SELECT ID, UserName FROM StagedData) staging
ON (r.ID = staging.ID)
WHEN MATCHED THEN UPDATE SET r.UserName = staging.UserName
WHEN TARGET NOT MATCHED THEN INSERT VALUES (ID, UserName)
WHEN SOURCE NOT MATCHED THEN DELETE;
GO
SELECT * FROM RealData
GO

DROP TABLE StagedData
GO
DROP TABLE RealData
GO

Tuesday, June 05, 2007

SQL Server 2008 - A first look

I took some time to look at the 'nifty' features in SQL Server 2008. There is a long list of new features shown in this webcast, but it doesn't look like all of them are implemented in CTP3 yet.

Here are some of the new features that are in this build and are quite nifty:

  • MERGE, a very powerful statement to combine insert/updates and even deletes in a single statement
  • Multiple values INSERT (e.g. INSERT INTO myTable VALUES (1, 'SQL Server 2008 Rocks'), (2, 'It really does') - the example in the BOL seems to be incorrect)
  • FORCESEEK query hint
  • Table-valued parameters enable you to pass table variables to stored procedures or functions (a small sketch follows this list)
  • ALTER DATABASE SET COMPATIBILITY_LEVEL instead of sp_dbcmptlevel
  • Policy based management, this is an awesome feature that allows you to set policies on many configuration options and even naming conventions for your objects in a very flexible way (Declarative Management Framework)
  • Change Data Capture, a built-in auditing mechanism
  • Some new dynamic management views (sys.dm_os_memory_brokers, sys.dm_os_memory_nodes, sys.dm_os_nodes, sys.dm_os_process_memory and sys.dm_os_sys_memory)
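
For the table-valued parameters item above, here is a minimal sketch of what the feature looks like, based on the documented CREATE TYPE ... AS TABLE syntax and the READONLY modifier (I have not verified this against the CTP yet):

--Define a table type, accept it as a read-only parameter and pass a table variable
CREATE TYPE IdList AS TABLE (ID int)
GO
CREATE PROCEDURE dbo.ProcessIds @Ids IdList READONLY
AS
SELECT ID FROM @Ids
GO
DECLARE @MyIds IdList
INSERT INTO @MyIds VALUES (1)
INSERT INTO @MyIds VALUES (2)
EXEC dbo.ProcessIds @Ids = @MyIds
GO
--Clean up
DROP PROCEDURE dbo.ProcessIds
DROP TYPE IdList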

There are probably many more cool features and I'll keep you informed when I stumble upon them.

*EDIT*
A great list of new features can be found here.

Friday, June 01, 2007

SQL Server Katmai Part II

Francois Ajenstat announced in a podcast interview - at the very first Microsoft Business Intelligence Conference - that the first CTP will probably be released in the coming month!

Get your Virtual Servers ready ;-)

By the way, you can find webcasts of this conference on the resources page.

Friday, May 25, 2007

3GB PAE AWE

I once posted some information about the 3GB, AWE and PAE settings.
Here is some more detailed information about these settings. Definitely worth reading!