Thursday, July 12, 2007

SQL Server 2008 Launch

SQL Server 2008, together with Visual Studio 2008 and Windows Server 2008, will be launched on the 27th of February. More information can be found here.

Greg Low once made a nice suggestion for deferred constraints but unfortunately it did not make it into SQL Server 2008. But not to worry, there are lots of other exciting features ;-)

There are also some nice features in Windows 2008 for the I/O lovers: earlier versions of Windows had an internal limit of 64 KB per I/O request; this limit has been removed in the new Windows core.

For Visual Studio 2008 we will see the integration of WF, WCF and WPF and of course LINQ.

Seeing the end of February date reminded me of a really stupid joke someone sent me.
So for the geeks among us:

It is March 1st and the first day of DBMS school
The teacher starts off with a roll call... Teacher: Oracle?
"Present sir"
Teacher: DB2?
"Present sir"
Teacher: SQL Server?
"Present sir"
Teacher: MySQL?
[Silence]
Teacher: MySQL?
[Silence]
Teacher: Where the hell is MySQL
[In rushes MySQL, unshaved, hair a mess]
Teacher: Where have you been MySQL
"Sorry sir I thought it was February 31st"

Wednesday, July 04, 2007

SQL Server 2005 Best Practices Analyzer (July 2007)

Although the summer (which we have not seen much of in Belgium the last 2 weeks) has started, Microsoft is still on a roll. Check out the new SQL Server 2005 Best Practices Analyzer here.

By the way, for those of you who have not bought SQL Server 2005 Practical Troubleshooting: The Database Engine, it is really a must-read for every SQL Server DBA. Apart from troubleshooting information it also contains a lot of 'inside' information about how SQL Server 2005 works. I just love that stuff, but maybe that's just me :-)

*EDIT*
A nice blog post here.

Monday, July 02, 2007

SQL DMVStats

Tom Davidson and Sanjay Mishra bring us a great set of reports and scripts to generate a Dynamic Management View Performance Data Warehouse.
Check out this great solution here.

This is actually something we had on our to do list for quite some time so this will save us a lot of development work.

Monday, June 25, 2007

Synonym

Perhaps a little known feature of SQL Server 2005, but it might come in handy when you are slowly changing your database to use schemas. Synonyms allow you to define an alias for your objects and this is very helpful when you are migrating to schemas.

By using synonyms you can change your object names while retaining your 'interface' to the application and this allows for a smoother and more phased migration.

Here is a little example with a table, but remember that this also works for stored procedures, functions, etc. It is also possible to create synonyms for remote objects through a linked server (see the sketch after the script below).


--Create a schema
CREATE SCHEMA SchemasRock
GO
--Create a table in the default schema
CREATE TABLE dbo.myTable
(ID int)
GO
--Insert a row
INSERT INTO dbo.myTable VALUES (1)
--Move the table to the new schema
ALTER SCHEMA SchemasRock TRANSFER dbo.myTable
--Insert another row (will fail!)
INSERT INTO dbo.myTable VALUES (2)
/*
Msg 208, Level 16, State 1, Line 1
Invalid object name 'dbo.myTable'.
*/
--Create a synonym to support 'old' table name
CREATE SYNONYM dbo.myTable FOR SchemasRock.myTable
--Insert another row
INSERT INTO dbo.myTable VALUES (2)
--Select the rows
SELECT * FROM dbo.myTable
SELECT * FROM SchemasRock.myTable
--Clean
DROP SYNONYM [dbo].[myTable]
DROP TABLE [SchemasRock].[myTable]
DROP SCHEMA [SchemasRock]
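
As mentioned above, synonyms can also point to remote objects through a linked server. A minimal sketch (the linked server, database and table names are of course made up):

--Create a synonym for a table on a linked server
CREATE SYNONYM dbo.myRemoteTable FOR [myLinkedServer].[myRemoteDB].[dbo].[myTable]
GO
SELECT * FROM dbo.myRemoteTable
GO
DROP SYNONYM dbo.myRemoteTable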

Wednesday, June 20, 2007

SQL Server 2005 SP2 Cumulative Hotfix 3175

We're at build 3175 with KB936305; the hotfix is not publicly available though.
For an overview of the hotfixes released after SP2 check out KB937137, it contains the list of fixes per build.

Monday, June 11, 2007

SQL Server 2005 Books Online (May 2007)

Back to the current release of SQL Server ;-)
Get the updated Books Online here.

Wednesday, June 06, 2007

Fun with the new MERGE statement

I found some time to play with CTP3 and checked out the MERGE statement. What you achieve with it may not be rocket science and was perfectly possible with a couple of IF statements but using MERGE gives you much cleaner code. It looks like Microsoft is investing a lot in 'cleaner' coding in SQL Server 2008, which I can only welcome with great enthusiasm.

A simple example will show you exactly what the MERGE statement does.

CREATE TABLE StagedData
(ID int, UserName varchar(20))
GO

CREATE TABLE RealData
(ID int, UserName varchar(20))
GO

INSERT INTO StagedData VALUES (1, 'Slava') , (2, 'Paul') , (3, 'Wesley')
GO

MERGE RealData r
USING
(SELECT ID, UserName FROM StagedData) staging
ON (r.ID = staging.ID)
WHEN MATCHED THEN UPDATE SET r.UserName = staging.UserName
WHEN TARGET NOT MATCHED THEN INSERT VALUES (ID, UserName)
WHEN SOURCE NOT MATCHED THEN DELETE;
GO
SELECT * FROM RealData
GO

UPDATE StagedData SET UserName = 'Kimberley' WHERE ID = 3
GO
MERGE RealData r
USING
(SELECT ID, UserName FROM StagedData) staging
ON (r.ID = staging.ID)
WHEN MATCHED THEN UPDATE SET r.UserName = staging.UserName
WHEN TARGET NOT MATCHED THEN INSERT VALUES (ID, UserName)
WHEN SOURCE NOT MATCHED THEN DELETE;
GO
SELECT * FROM RealData
GO

DELETE FROM StagedData WHERE ID = 3
GO
MERGE RealData r
USING
(SELECT ID, UserName FROM StagedData) staging
ON (r.ID = staging.ID)
WHEN MATCHED THEN UPDATE SET r.UserName = staging.UserName
WHEN TARGET NOT MATCHED THEN INSERT VALUES (ID, UserName)
WHEN SOURCE NOT MATCHED THEN DELETE;
GO
SELECT * FROM RealData
GO

DROP TABLE StagedData
GO
DROP TABLE RealData
GO

Tuesday, June 05, 2007

SQL Server 2008 - A first look

I took some time to look at the "nifty" features in SQL Server 2008. There is a long list of new features shown in this webcast, but it doesn't look like all of them are already implemented in CTP3.

Here are some of the new features that are in this build and are quite nifty:

  • MERGE, a very powerful statement to combine insert/updates and even deletes in a single statement
  • Multiple-values INSERT (e.g. INSERT INTO myTable VALUES (1, 'SQL Server 2008 Rocks'), (2, 'It really does') - the example in the BOL seems to be incorrect)
  • FORCESEEK query hint
  • Table-valued parameters enable you to pass table variables to stored procedures or functions (a minimal sketch follows after this list)
  • ALTER DATABASE SET COMPATIBILITY_LEVEL instead of sp_dbcmptlevel
  • Policy based management, this is an awesome feature that allows you to set policies on many configuration options and even naming conventions for your objects in a very flexible way (Declarative Management Framework)
  • Change Data Capture, a built-in auditing mechanism
  • Some new dynamic management views (sys.dm_os_memory_brokers, sys.dm_os_memory_nodes, sys.dm_os_nodes, sys.dm_os_process_memory and sys.dm_os_sys_memory)
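
Table-valued parameters deserve a quick illustration. A minimal sketch, using the syntax as it ended up in the released product (the CTP builds may differ slightly, and the type and procedure names are made up):

CREATE TYPE dbo.UserList AS TABLE (ID int, UserName varchar(20))
GO
CREATE PROCEDURE dbo.AddUsers (@Users dbo.UserList READONLY)
AS
SELECT ID, UserName FROM @Users
GO
DECLARE @NewUsers dbo.UserList
INSERT INTO @NewUsers VALUES (1, 'Slava'), (2, 'Paul')
EXEC dbo.AddUsers @NewUsers
GO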

There are probably many more cool features and I'll keep you informed when I stumble upon them.

*EDIT*
A great list of new features can be found here.

Monday, June 04, 2007

Friday, June 01, 2007

SQL Server Katmai Part II

Francois Ajenstat announced in a podcast interview - at the very first Microsoft Business Intelligence Conference - that the first CTP will probably be released in the coming month!

Get your Virtual Servers ready ;-)

By the way, you can find webcasts of this conference on the resources page.

Friday, May 25, 2007

3GB PAE AWE

I once posted some information about the 3GB, AWE and PAE settings.
Here is some more detailed information about these settings. Definitely worth reading!

Tuesday, May 22, 2007

Database Snapshot vs Procedure Cache

I recently stumbled upon a KB article (KB917828) that points out some situations in which your procedure cache is cleared.

Since one of our architectures for reporting involved database snapshots, I was surprised to read that your whole procedure cache is cleared when you drop a database snapshot. This may not be as dramatic as it sounds, but you should closely watch the impact on your environment when the cache is cleared (a small check is sketched below the list).

Other situations are:
  • A database has the AUTO_CLOSE database option set to ON. When no user connection references or uses the database, the background task tries to close and shut down the database automatically.
  • A database has the AUTO_CLOSE database option set to ON. Maintenance operations are performed, such as the DBCC CHECKDB operation or a backup operation. When the operations finish, the background task tries to close and shut down the database automatically.
  • You run several queries against a database that has default options. Then, the database is dropped.
  • You change the database state to OFFLINE or ONLINE.
  • You successfully rebuild the transaction log for a database.
  • You restore a database backup.
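
As a rough way to watch the impact, you can check the number and size of the cached plans before and after one of the actions above (a quick sanity check, not an exact measurement):

SELECT COUNT(*) AS cached_plans,
SUM(CAST(size_in_bytes AS bigint)) / 1024 / 1024 AS size_in_mb
FROM sys.dm_exec_cached_plans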

Wednesday, May 09, 2007

SQL Server Katmai

It is announced to be available in 2008.
I'm ready for a beta/ctp version ;-)

More information here and here.

Datasheet here.

Sunday, May 06, 2007

Visual Studio 2005 Team Edition for Database Professionals - SR1 (CTP)

For those of you who are lucky and are already working with VSDBPRO, there is a service release CTP available for download (KB936202). Make sure you read the release notes before you install.

Gert Drapers also posted some additional information on his blog.

One of the biggest enhancements, in my opinion, is the support for SQLCMD variables. This enables great flexibility in your deployment scripts.

Wednesday, May 02, 2007

Cumulative waitstats per session/query

We were troubleshooting a long running batch job when we suddenly realized that there is no easy way to get cumulative wait information for a specific session or query. With the rich information already available to us in SQL Server 2005 this might be a great addition to the product.

I decided to launch a suggestion on Microsoft Connect. This great initiative makes it possible for everyone to give suggestions to improve Microsoft's products.

Check out my suggestion here (and while you are there do vote ;-))

Tuesday, April 17, 2007

A couple of interesting KB articles

Today we were installing SQL Server 2005 on our new Windows 2003 Server (SP2) cluster when we ran into the following issue: "The file C:\Windows\Microsoft.NET\Framework\Meaningless_string\mscorlib.tlb could not be loaded". The solution to this can be found in KB918685.

Dirk G from MCS Belgium was nice enough to point us to another hotfix concerning the USERSTORE_TOKENPERM issue, KB933564 contains more information about this.

KB935897 has some information about the new "Incremental Servicing Model" for SQL Server.

Monday, April 16, 2007

So what's up with all the SP2 post fixes?

It's getting quite cumbersome to understand all the releases after SP2 for SQL Server 2005. Bob Ward from PSS decided to clarify this on his blog.

Aaron Bertrand also clarifies the differences between a GDR and a QFE here.

Friday, April 06, 2007

WinDbg and SQL Server minidumps

Today a couple of our test servers started to act very weird. Some jobs were going incredibly slow and eventually most of them crashed, causing a minidump to be generated. The error log pointed to I/O problems with latch timeouts, but there were no details about exactly which file it crashed on.

The SAN and NAS management is done by another team and we had to pinpoint the problem to convince them that it was really an I/O issue. So how exactly do you do this on a server with several databases and processes?

Off to my favorite developer, Bregt. I knew he had good knowledge of WinDbg, so he could help us pinpoint the issue. Lucky for me Bregt is a nice guy and he explained to me how I could find the cause of the crash.

Download WinDbg which is part of the debugging toolkit provided by Microsoft.

Start WinDbg (Run As Administrator in Vista) and configure the symbols as follows (CTRL-S)

The url can be found in the WinDbg help and the local symbols path is one you choose (and create yourself). Don't forget to save your workspace when it asks you to (thanks again Bregt).
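
For reference, the symbol path usually takes the form srv*C:\LocalSymbols*http://msdl.microsoft.com/download/symbols - the local folder is the one you create yourself, the URL is Microsoft's public symbol server.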

After you have done this you can open a crash dump (CTRL-D), this might take a while depending on the size of the dump.

Type ~kv in the command line; this gives you the call stack with the memory addresses.

Using ALT-5 you get a memory viewer, where you can check the content of the memory for the last call. It is possible, however, that nothing is returned for the memory address since not everything is dumped with a minidump. In our case this was "ntdll!ZwCreateFile+0xa", which directed us to the exact filename: it turned out to be a server-side trace that was being saved to a NAS drive.


How cool is that?

I feel like digging a little deeper into WinDbg when I find some time. A colleague of ours (Hans De Smaele) has written a great document on this topic.

Wednesday, April 04, 2007

SQL Server 2005 SP2 - Another hotfix

This time we are at build 3159 with KB934459.
Those darn maintenance plans sure are causing mayhem lately :-)

Tuesday, April 03, 2007

Detach/Attach migration

In our effort to consolidate some environments I had to migrate a reporting database from SQL Server 2000 to SQL Server 2005. One of the easiest ways of migrating is a simple detach and attach (sp_attach_db is deprecated and is being replaced by CREATE DATABASE ... FOR ATTACH in SQL Server 2005).

After the attach some options have to be changed like the compatibility level, page verify checksum, etc. and as a last step I issued an sp_updatestats.
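
For reference, these post-attach steps typically look something like this (the database name is made up):

EXEC sp_dbcmptlevel 'myReportingDB', 90
GO
ALTER DATABASE myReportingDB SET PAGE_VERIFY CHECKSUM
GO
USE myReportingDB
GO
EXEC sp_updatestats
GO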

When I started the test run I noticed that it was taking a very long time, so I started issuing some sp_who and sp_lock statements. The output showed that the same locks were being held for an unreasonable amount of time, until the point that tempdb ran out of space. I had 8GB of tempdb space and the query only had to return 3 rows from a couple of tables that had fewer than 100 rows. I decided to run the query again and monitored tempdb usage with the new DMV sys.dm_db_file_space_usage; I immediately saw tempdb allocations rise at an incredible rate.
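
A query along these lines shows where the tempdb space is going (the page counts multiplied by 8 give KB):

SELECT SUM(user_object_reserved_page_count) * 8 AS user_objects_kb,
SUM(internal_object_reserved_page_count) * 8 AS internal_objects_kb,
SUM(version_store_reserved_page_count) * 8 AS version_store_kb,
SUM(unallocated_extent_page_count) * 8 AS free_space_kb
FROM sys.dm_db_file_space_usage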

I decided to drop all the existing statistics and issued the same query again; this time it immediately returned the 3 rows without any trouble and without any unusual tempdb usage.

Very odd behavior but this shows the importance of statistics and the apparently subtle differences between SQL Server 2000 and SQL Server 2005. Dropping the statistics is easy in development/test but for production environments you might want to script the existing statistics and recreate them. For some unknown reason the update statistics was not as effective as expected after an attach from a SQL Server 2000 database.

*UPDATE*
Things like this are always a bit more complicated than they seem at first. After the import of the new set of data for today the query went back to the bad behavior. A strange effect was that the query worked well if I took a date range of a couple of months but failed with an interval of 1 day. The key here was of course the fact that because of a bad estimation the query plan was completely different in the two cases. Creating one index on a single field out of the ten tables that were involved in the query resulted in a different query plan being chosen and completely solved the problem.

Yet again a couple of important things are proven:
  • The differences in the query optimization engine between SQL Server 2000 and SQL Server 2005 make it important to test every query
  • Statistics are of great importance to the query optimizer so watch them carefully and make sure they are kept up-to-date
  • Having proper indexes is an absolute necessity for every environment
  • Any change, however small it may seem, could have a great impact on your environment

Monday, March 26, 2007

Views and the smart Query Optimizer

In general I don't really like to use views. Although they have their advantages, I often see them turn out badly in projects. The main reason is that people start to use complex views that were not written for their specific functionality, and many tables get touched that have nothing to do with the result they are trying to retrieve.

The query optimizer is quite intelligent when it comes to resolving the query plan for a view. It is smart enough to ignore the tables that are not needed to resolve your query.

CREATE VIEW SmartView AS
SELECT i.ItemTitle, i.ItemLink, u.UserName
FROM tbl_item i
JOIN tbl_user u ON i.OwnerID = u.UserID
GO


--This query's plan joins tbl_item and tbl_user
SELECT ItemTitle, UserName FROM SmartView

--This query's plan only touches tbl_item; tbl_user is eliminated
SELECT ItemTitle FROM SmartView

You can see SQL Server is quite intelligent when it comes to optimizing views. Unfortunately it still cannot prevent every situation of misuse. Some people have no problem using SELECT * FROM SmartView and then only using the ItemTitle field, for example. When your view uses many (large) tables this might cause a lot of grief on your database. One of the most extreme examples I have seen in real life is a SELECT from the biggest view in the project, causing millions of reads, where the only result that was being used was the rowcount - which could actually have been retrieved by counting the rows in one table. The key is obviously having a good DBA to review every query ;-)

Wednesday, March 14, 2007

Windows Server 2003 Service Pack 2

It was very close indeed :-)

Download here - fixlist here

Tuesday, March 13, 2007

I/O counters in Task Manager

One of my freaky habits is to check the I/O counters in the task manager. Whenever I'm doing I/O intensive stuff I like to quickly look at the MBs processed.

Whether these counters are updated or not is controlled by a registry setting (see below). You can set the value to 0 to disable it or 1 to enable it. Disabling it gives a little performance boost since Windows doesn't have to take care of the counters, but on test systems I like to enable it. SQLIOSim also requires it to be enabled to get some of its results.

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\I/O System\CountOperations

By the way, Windows Server 2003 Service Pack 2 (fixlist) is getting very close. Keep an eye out for it because it contains fixes to improve performance of SQL Server 2005 under heavy load. More details will follow.

Thursday, March 08, 2007

SQL Server 2005 SP2 - Cumulative Hotfix (build 9.0.3152)

A new cumulative hotfix package has been released for SQL Server 2005 SP2.
The new build number is 3152; they are really going fast these days :-)

http://support.microsoft.com/?kbid=933097

Wednesday, March 07, 2007

SQL Server 2005 Performance Dashboard Reports

Keith Elmore, an escalation engineer at Microsoft, has created a "Performance Dashboard" for SQL Server 2005 SP2 to detect common performance issues. It uses the new Custom Reports functionality in SP2.

More info can be found here.

The files are installed in \Program Files\Microsoft SQL Server\90\Tools\PerformanceDashboard. You have to run setup.sql on the server(s) you wish to monitor (do remember Service Pack 2 is required!).

Tuesday, March 06, 2007

SQL Server 2005 Post SP2 Hotfix

The kind people of SSWUG warned us today about a potential problem with Service Pack 2.

And I quote:

"Important SQL Server 2005 SP2 Hotfix
If you're using SP2, you need to be sure to pick up the hotfix, to be posted today (Mar 6 2007). There is an issue with SP2 and a "mixed" environment of SQL Server versions. This issue specifically occurs in fairly small and particular circumstances. First, you have to have installed SP2. SP2 was released just a week and a half ago or so, so many have this in a test environment at this point. Second, if you have an environment where pre-SP2 client utilities are in-use, if you use these utilities to open/edit cleanup processes on the SP2 servers, you can encounter the issue. SQL Server will incorrectly read the intervals set up for the cleanup processes, resulting in cleanup processes not running when expected.

We first had a report of this here on SSWUG last week. The report, and a few follow-on reports since, indicated that, after installation of SP2, packages appeared to have different intervals when they were reviewed after SP2 installation and compared with intervals set up prior to that time.

I spoke with Francois Ajenstat, Director of Product Management for SQL Server late in the day on Monday (Mar 5) and he indicated that a hotfix would be available on the SQL Server site late Mar 5/Early Mar 6 and that the SP2 bits have been (or will be at the same time) updated to include the updated code that corrects the issue.

What you need to know: Not correcting this issue could lead to data loss in some circumstances. According to Ajenstat, if you've installed SP2, you must apply the hotfix, available on the site. You should do this immediately to avoid the issues outlined here. Second, if you've downloaded and not yet installed SP2 on all or some of your servers, for those that have not yet been installed, you need to get the newer SP2 update and be sure you're using that as you update your servers. While there are mitigating circumstances around this (versions, mixed environment, cleanup tasks), Ajenstat said they're being very open and aggressive in getting the news of the update out there and the updated bits available on the site immediately. If you are experiencing issues, PSS will work with you to help resolve them."

*EDIT*
The KB article can be found here. The SP2 download has also been refreshed so if you download the new SP2 install file this fix will be included.

Tuesday, February 27, 2007

Bob Beauchemin coming to the DevDays

Great news for the SQL Server people visiting the DevDays in Belgium, Bob Beauchemin will be giving a number of sessions on the second day. I will certainly be attending all of them Bob ;-)

Yet another difference in Vista Home Premium

I was trying to expose 2 drives as 1 which is possible by using dynamic disks. I'm a big fan of diskpart so I tried converting the disk using this command line utility:

diskpart
select disk 1
convert dynamic

Error: Dynamic disks are not supported by your edition

This is not a new limitation, it was actually the same for Windows XP Home Edition.

Tuesday, February 20, 2007

SQL Server 2005 SP2

Just as I posted when the first rumours surfaced: the end of February, and here we are

SQL Server 2005 SP2

Friday, February 09, 2007

The not so visible differences between Vista Editions

I was trying to run Virtual Server 2005 R2 on my Vista machine and installed IIS as mentioned below. There was one option I couldn't find however and that was "Windows Authentication".

Apparently there are more differences between the Vista editions than are mentioned on the website.

Windows Authentication is not available on the Home Editions :-( I also noticed that the good old user manager is not available either (only the wizard interface).

More info about IIS features here

Windows Vista

If you are a freak like me and can't wait to install Vista, although you know there will be many compatibility problems so early after a release, you might encounter some problems when you try to install SQL Server 2005 Express Edition on a Vista machine (MSDE will NOT be supported).

There is a website however that can help you with configuring everything to make SQL Server work on your Vista PC. So if you can't wait for Service Pack 2 check out this website.

There is also a little post-SP1 download for Visual Studio 2005 available.

And for the people who want to run Virtual Server 2005 R2 there is this link.

Saturday, January 27, 2007

Oops I got tagged

I got tagged by Bart Bultinck, so here it goes... 5 things you may or may not know about me

  1. My nickname dis4ea comes from a documentary about coffee: dysphoria is extreme unhappiness, and people who are addicted to coffee might suffer from it when they lack coffee.
  2. One of my grandmothers was actually Dutch and my father lived in The Netherlands during the first years of his life.
  3. My brother is a system engineer, my father worked as a manager in the technical department of a computer hardware retailer and my mother worked as a sales representative in a computer hardware retailer.
  4. I don't drink alcohol either (respect Bart) and it is indeed very difficult to convince people that you really don't...
  5. My favorite instrument is the guitar and I love all kinds of music where the guitar is the important instrument (especially blues). My favorite artists are Stevie Ray Vaughan, Joe Satriani, Buddy Guy, Jimi Hendrix and Metallica.

My turn to tag:

Bregt
NickVDA
Killspid
Sven Cipido
General

Wednesday, January 24, 2007

Big Procedure Cache in SQL Server 2005

As I once mentioned in a post I felt that the procedure cache was really out of proportion. We were assured that this was not an issue but now I stumbled upon the following post where they explain that the behavior has been seriously changed in SP2.


Align SQL Server 2005 plan cache size limit to a size similar to that of SQL Server 2000: The maximum size limit of the SQL Server 2005 Plan Cache in RTM and SP1 is significantly larger than in SQL Server 2000. To reduce the SQL Server 2005 plan cache maximum size limit to a size similar to that of SQL Server 2000, the limit for signaling internal memory pressure on a cache is changed from that of SQL Server 2005 RTM and SP1. The following table shows how the maximum limit of the plan cache is determined for each version of SQL Server:

SQL Server 2005 RTM & SP1: 75% of server memory from 0-8GB + 50% of server memory from 8GB-64GB + 25% of server memory > 64GB

SQL Server 2005 SP2: 75% of server memory from 0-4GB + 10% of server memory from 4GB-64GB + 5% of server memory > 64GB

Example:
For a SQL Server with 32GB total SQL Server memory,
SQL Server 2005 RTM and SP1 cachestore limit will be 75% X 8 + 50% X (32 - 8) = 18GB

SQL Server 2005 SP2 cachestore limit will be 75% X 4 + 10% X (32 - 4) = 5.8GB

I suppose I was not the only one who thought this was just too much. I love the way SQL Server is largely self-tuning, but maybe it would be nice to allow expert users to configure some aspects themselves anyway, especially with x64 where memory usage is lifted to a whole new dimension. When looking at the fixlist for Service Pack 2 I am getting more and more convinced to wait for it before installing SQL Server 2005 in large production environments.
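
To see how big the different cache stores actually are on a given server, they can be summed from the memory clerks DMV (a quick check, not an official formula):

SELECT type, SUM(single_pages_kb + multi_pages_kb) / 1024 AS size_in_mb
FROM sys.dm_os_memory_clerks
WHERE type LIKE 'CACHESTORE%'
GROUP BY type
ORDER BY size_in_mb DESC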

Troubleshooting Query Performance Issues Related to Plan Cache in SQL 2005 RTM and SP1

The SQL Programmability team has started a series of posts that everyone using SQL Server 2005 should read (not for the faint of heart though).

They could help you isolate performance issues very fast, and we all know this can be the difference between success and catastrophic failure.

Saturday, January 20, 2007

Performance Impact of Enabling Page Checksum and Default Trace

Check out this great post about the impact of Page Checksum and Default Trace

Wednesday, January 17, 2007

SQL Server 2005 Compact Edition

SQL Server 2005 Compact Edition has been RTM'ed (formerly known as SQL Server Everywhere). More information here.

There finally is some information about the Best Practices Analyzer for SQL Server 2005. It will be released with Service Pack 2. More information here.

We have set a new date for our SQL Server 2005 Migration and we are going to production in April if all goes well.

Monday, January 08, 2007

xml datatype and distributed queries

Recently I stumbled upon a peculiar limitation in SQL Server 2005.

I tried to execute a query against a remote server when I got the following error:
Msg 9514, Level 16, State 1, Line 1
Xml data type is not supported in distributed queries. Remote object 'OPENROWSET' has xml column(s).

I came across the following remark in the Books Online (Guidelines for Using Distributed Queries):
Tables that have xml columns cannot be queried, even if the query accesses non-xml columns of the table.

This statement is not entirely true: when querying only non-xml columns it seems to work just fine.


Repro script (SQLCMD mode)

:CONNECT myServer

use myDB
GO
CREATE TABLE myXml
(ID int, someXml xml)
GO

:CONNECT myRemoteServer

SELECT * FROM OPENROWSET('SQLNCLI', 'Server=myServer;Trusted_Connection=yes;',
'SELECT ID from myDB.dbo.myXml') -- should work just fine
GO

SELECT * FROM OPENROWSET('SQLNCLI', 'Server=myServer;Trusted_Connection=yes;',
'SELECT ID, someXml from myDB.dbo.myXml') -- raises the above mentioned error
GO

SELECT * FROM OPENROWSET('SQLNCLI', 'Server=myServer;Trusted_Connection=yes;',
'SELECT ID, CAST(someXml as varchar(max)) from myDB.dbo.myXml') -- workaround?
GO


If I find an explanation for this limitation I will post an update but I haven't figured out why this is not possible yet.

Sunday, December 31, 2006

Another year (almost) over

2007 here we come!

It will be the year of SQL Server 2005 SP2, Windows 2003 Server SP2, yet another SQL Server 2005 migration, the year they start building my house, elections in Belgium, the last year in my twenties, our 10th anniversary, ...

Let's hope it will not be a year of disasters and violence but that's probably too much to ask :-(

Hope to see you all again next year!

Thursday, December 21, 2006

Wednesday, December 20, 2006

Trace Flag 1200

When you enable trace flag 1200 SQL Server gives you extensive information about all the locks being taken by a specific query or procedure.

There is a little behavior change between SQL Server 2000 and SQL Server 2005, however. In SQL Server 2000 it was a session-wide flag, so DBCC TRACEON (1200) was enough to get you going, while SQL Server 2005 considers it a server-wide flag, meaning you have to use DBCC TRACEON (1200, -1) to get it going.

Thanks to Dirk G from MCS Belgium for clearing up the difference in behavior between SQL Server 2000 and SQL Server 2005.

For SQL Server 2000 you do the following:
DBCC TRACEON (3604)
DBCC TRACEON (1200)

For SQL Server 2005 you do the following:
DBCC TRACEON (3604)
DBCC TRACEON (1200, -1)
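
A typical usage pattern wrapped around a single query (the table name is made up); don't forget to switch the flags off again afterwards:

DBCC TRACEON (3604)
DBCC TRACEON (1200, -1)
SELECT * FROM dbo.myTable WHERE ID = 1
DBCC TRACEOFF (1200, -1)
DBCC TRACEOFF (3604)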

Monday, December 11, 2006

A query cannot update a text column and a clustering key

A colleague recently got the following error message when executing an update query:

"The query processor could not produce a query plan from the optimizer because a query cannot update a text, ntext, or image column and a clustering key at the same time."


The message is clear but should actually read:
"The query processor could not produce a query plan from the optimizer because a query cannot update a text, ntext, or image column and a clustering key at the same time when multiple rows might be affected."

It is mentioned in the Books Online in the notes of the UPDATE statement. There is a little change in behavior between SQL Server 2000 and SQL Server 2005 however.

The books online for SQL Server 2000 state:
"If an update query could alter more than one row while updating both the clustering key and one or more text, image, or Unicode columns, the update operation fails and SQL Server returns an error message."

The books online for SQL Server 2005 state:
"If the UPDATE statement could change more than one row while updating both the clustering key and one or more text, ntext, or image columns, the partial update to these columns is executed as a full replacement of the values."

So the restriction is a bit less drastic in SQL Server 2005, but it might have a performance impact. Obviously updating the clustering key is something you wouldn't want to do too often either.

I was wondering where this restriction came from. The internet had really no reference to why this happens and I couldn't come up with a reason myself either, so I decided to ask the one person who would know the answer. Paul Randal was nice enough to share that it is necessary to prevent replication from breaking.

Thursday, December 07, 2006

Visual Studio Team Edition for Database Professionals

You have probably read on blogs everywhere that Visual Studio Team Edition for Database Professionals has been released (check out GertD's Blog). The download will be available on MSDN any day now. Do note that Visual Studio Team Suite is required (of which a free trial can be downloaded).

Unlike my love for ISQLw over Management Studio I do feel VSTE for DBPros was a missing piece from our toolkit. It finally brings some features our way that have been around for a long time for .NET developers like Unit Testing, Refactoring, Team Foundation Server integration, ...

Another nice feature is "Data Generator". Random data can be easily generated (even based on regular expressions) and can be configured to take into account the ratio between related tables.

Monday, December 04, 2006

Ad hoc queries take a longer time to finish running when the size of the TokenAndPermUserStore cache grows in SQL Server 2005

This seems to be the official description of the bug we ran into on our cloudy day :-(

Check out the KB article here

We are carefully playing around with new SQL Server 2005 features however since we are planning to go for it again in Q1 2007.

- Database mirroring
- Database snapshots
- Analysis services

Very exciting stuff! If I run into obscure behavior you will read about it.

The first one: beware of snapshots and statistics. Remember that snapshots are read-only, so the statistics should already exist (and be up-to-date) in the source database when the snapshot is created, or your query plans may not be as optimal as you would like ;-) It's pure logic but something to remember!

Sunday, November 26, 2006

SQL Server - Best Practices

A great website on Best Practices for SQL Server is now available on Technet.

Wednesday, November 22, 2006

Installed SQL Server 2005 - Service Pack 2 CTP

We've just installed the CTP for Service Pack 2.
I'll be posting the nifty features I come across every now and then (check for updates on this post).
  • Drag & drop of scripts no longer asks for the connection for each and every one of the files (so the good old Query Analyzer behavior is back!)
  • They have implemented Kalen Delaney's request for adding an optional database_id parameter to the OBJECT_NAME function
  • The vardecimal functionality is nice too; the nifty part is that they have added stored procedures to estimate the space saved by using this option
  • They have updated the build number in the about box of SSMS (which they forgot in build 2153)
  • ...

Thursday, November 09, 2006

What's new in SQL Server 2005 SP2

Check out the new improvements here

Some additions that catch the eye:


  • Added new functionality in the SQL Server 2005 Enterprise Edition to provide an alternate storage format that can be used to minimize the disk space needed to store existing decimal and numeric data types. No application changes are required to use its benefits. This new storage format, known as vardecimal storage format, stores decimal and numeric data as variable length columns and can be enabled or disabled at a table level on new or existing tables. If you have declared a decimal or numeric column with high precision but most values do not require it, you can potentially save the disk space needed to store the table. A new stored procedure is provided to estimate the reduction in average row size with the new storage format.

    Funny, in my humble opinion, that they add this new storage format only to the Enterprise Edition. Although I understand those customers probably benefit the most from storage-saving features.

  • Plan cache improvements that provide improved system performance, better use of the available physical memory for database pages, and the ability to return text XML query plans that contain an XML nesting level greater than or equal to 128 by using the new sys.dm_exec_text_query_plan table-valued function.

    Sounds very nice but I'd love to see more detail on how exactly this is achieved.

  • The value of the BypassPrepare property of the Execute SQL task has been changed to True by default. In earlier versions, the value of the BypassPrepare property was False, which indicated that statements were always prepared. In SP2, by default queries are not prepared. This eliminates errors with certain providers when you try to prepare a statement that uses parameter placeholders ("?").

    Seems I'm not the only one with this problem.

  • Generate Script Wizard. You can now specify that the scripted objects include a DROP statement before the CREATE statement. Scripting of objects into separate files is now possible.

    Finally!

The fixlist for SQL Server 2005 SP2 CTP can be found here

Friday, October 27, 2006

Why "lock pages in memory" may increase I/O performance

I was reading SQL Server I/O Basics, Chapter 2 when I came across a very interesting chapter discussing the "Lock Pages In Memory" policy.

You all know that the AWE API is used to lock pages in memory, and this is true for 32-bit as well as 64-bit SQL Server installations. Do note that the "awe enabled" configuration setting exists in 64-bit installations but has no meaning there. For this to function, the "Lock Pages In Memory" privilege is needed for the SQL Server service account. In a 32-bit environment the "awe enabled" configuration setting still has to be enabled.
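
For a 32-bit instance the configuration part looks like this (the privilege itself is granted through the local security policy, not through T-SQL, and AWE only takes effect after a restart of the service):

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
EXEC sp_configure 'awe enabled', 1
RECONFIGURE
GO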

But the thing I really learned here is why I/O can be improved by this setting. Because memory used for an I/O must not cause page faults, the memory first has to be locked, the I/O goes through, and the memory has to be unlocked again. Having the "Lock Pages In Memory" privilege avoids this lock and unlock transition during I/O and thus improves performance. This behavior can be disabled by removing the privilege or by enabling trace flag 835 (for 64-bit installations).

If you do decide to use AWE and thus "Lock Pages In Memory" do test your solution thoroughly with this setting. It might cause memory pressure which could lead to performance degradation. You might also want to consider specifying a max memory setting when you enable this. One of the side effects of AWE is that the specified memory is grabbed as soon as SQL Server starts. As of Windows 2003 and SQL Server 2005 this however has been solved through dynamic allocation of AWE memory.

Another, unrelated, interesting fact mentioned in the paper is that using the Encrypting File System (EFS) stops I/O from being done in an asynchronous manner. This could of course hinder performance, especially for checkpoint operations. I can't say I ever thought about doing this, but just so you remember.

*EDIT*
I found a KB article KB922121 discussing the EFS behavior with SQL Server.

Thursday, October 26, 2006

The problem with ALTER TABLE

Recently we were trying to optimize some very large tables and we noticed that a bigint was used where an int was perfectly suitable. In order to get rid of it we issued an ALTER TABLE to change the datatype to int.

So far so good, until we checked the space allocated by the tables and noticed that the size hadn't changed. Another optimization we made was creating a clustered index on the tables. When we checked the size of the tables after this action, we noticed that they had now become smaller than before.

Lucky for me I don't like SQL Server behavior I can't explain and so I started looking for answers. It was quite obvious that SQL Server wasn't changing the original size of the column although inserting a value larger than the allowed value resulted in "Arithmetic overflow error converting expression to data type int.", so the metadata was definitely altered correctly.

While looking for answers I came across the following post by Kalen Delaney, which explains our issue very well.

The post contains a very good query to check this behavior and sure enough the offset of the column was still 4 bytes off (8 byte bigint vs 4 byte int).

SELECT c.name AS column_name, column_id, max_inrow_length,
pc.system_type_id, leaf_offset
FROM sys.system_internals_partition_columns pc
JOIN sys.partitions p ON p.partition_id = pc.partition_id
JOIN sys.columns c ON column_id = partition_column_id
AND c.object_id = p.object_id
WHERE p.object_id=object_id('bigchange');
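
A minimal repro along the same lines (the table name matches the one in the query above; the exact offsets you see may differ, but they do not shrink until the table is rebuilt):

CREATE TABLE bigchange (ID bigint, SomeValue char(10))
GO
ALTER TABLE bigchange ALTER COLUMN ID int
GO
--Run the query above: SomeValue still sits at the offset left by the 8-byte bigint
CREATE CLUSTERED INDEX cix_bigchange ON bigchange (ID)
GO
--Run the query above again: the offsets now reflect the 4-byte int
DROP TABLE bigchange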

So thank you very much Kalen for making me understand yet another SQL Server mystery :-)

SQL Server 2005 SP2

First rumours say it will be released in Q1 2007.

The userstore cache problem (which we know VERY well) should be fixed too :-)

It's quite necessary since SQL Server 2005 will only run on Windows Vista as of Service Pack 2.

*EDIT*

This seems not to be entirely true: it is supported with SP1, but you have to look carefully at how to install it on Vista.

Thursday, October 12, 2006

Internet Explorer 7 RC1

Some Microsoft employees convinced me to install IE7.
It has some nice new features like tabbed browsing, anti-phishing, popup blocker, ...


I have been using Maxthon for a long time; it is a shell around IE and contains a lot of nice features too. I might no longer need it now that IE7 supports a lot of the features I got used to. There is one feature missing though (it can be downloaded but doesn't come out of the box as far as I can see) and that is the "Mouse Gestures" feature.

IE7 will be offered by Windows Update too and should be released the 1st of November.

Anyway, most of the sites I tend to visit seem to work quite well and last but not least I am able to post to my blog using IE7 too.

Wednesday, October 04, 2006

SQL Server 2005 Upgrade Technical Reference Guide

Microsoft has released a very nice whitepaper (and a big one too) about migrating to SQL Server 2005. They cover all sorts of migration scenarios like side-by-side, in-place, 32-bit to 64-bit, upgrading your logshipping, replication, etc.

Really a must read if you are thinking of migrating.

Tuesday, September 26, 2006

SQL Server vs I/O

Out of all the hardware requirements for SQL Server the I/O requirement is one of the most common bottlenecks since it is usually the most difficult to manage and extend. Before you consider installing SQL Server on a production environment make sure you understand the I/O requirements for SQL Server and test your hardware accordingly.

I have already posted some best practices regarding SQL Server vs a SAN (some of them apply to non SAN solutions too).

First get to know SQL Server on I/O level by reading the following whitepapers:

Also read the whitepaper Physical Database Storage Design on how to structure your drives and database files.

Test drive your configuration by using the following tools:

While SQLIO is really designed to test your disk subsystem from a performance point of view, SQLIOSim is designed to test its robustness. SQLIO requires a lot more input on what exactly to test, and you can find more information on SQL Server I/O patterns in this presentation by Gert Drapers.

Don't forget to check your waitstats once in a while to see if your SQL Server is waiting for I/O related operations (more info on waitstats can also be found here - in SQL Server 2005 they are well documented in the Books Online).

For SQL Server 2000: DBCC SQLPERF(WAITSTATS)
For SQL Server 2005: SELECT * FROM sys.dm_os_wait_stats
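
To zoom in on the I/O related waits in SQL Server 2005, something along these lines will do (the list of wait types is not exhaustive):

SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE 'PAGEIOLATCH%'
OR wait_type IN ('WRITELOG', 'IO_COMPLETION', 'ASYNC_IO_COMPLETION')
ORDER BY wait_time_ms DESC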

Monday, September 18, 2006

Holiday - SQLUG Event

Although I'm on a holiday I took the time last week to visit a SQL Server event organized by the Belgian SQL Server User Group (yeah yeah I know => FREAK). Dirk Gubbels from MCS was invited to talk about "SQL server 2005 performance monitoring and tuning".

It was a nice review session about performance tuning and the power of SQL Server 2005 in this area.

Great talk by Dirk and nicely organized by the UG.

PS. And yes, another WHOLE week @ home :-)

Wednesday, September 06, 2006

Super undocumented trace flag 144

First of all I would like to thank Slava & Ketan from Microsoft for their very fast and accurate support. It's amazing that people at their level are willing to help customers.

As mentioned in a previous post the UpdateBatch behavior causes half of the memory from our server to be allocated to procedure cache. This might not be a disaster but we didn't really feel comfortable with this situation.

Lucky for us there is an undocumented (gee, really) trace flag that changes the way plans are cached. With trace flag 144 enabled, the (n)varchar variables are cached with their maximum size instead of the defined parameter size for parameterized queries. This has cut down the size of our plan cache by many gigabytes.

We were warned to look for the spinlocks on SQL_MGR with this trace flag enabled. We used another undocumented feature DBCC SQLPERF(SpinLockStats) to monitor the spinlocks. Although we do not see anything alarming (with the limited understanding we have about spinlock internals) we do see different behavior:

With the trace flag:
Collisions: 2091
Spins: 452704

Without the trace flag:
Collisions: 1977661
Spins: 146579351

In case you do have problems with the spinlocks there is hotfix 716 (which is not publicly available as far as I know).

*EDIT*
Our good friend Nick adapted our framework, which will be released in November (.NET 2.0). The framework copied the UpdateBatch behavior, but it has been changed so that the parameter length now equals the column length, which means we will no longer need the trace flag once the new framework is in place.

Thursday, August 31, 2006

SQL Server 2005 Best Practices Analyzer

When I look at this scheduled webcast something tells me that the Best Practices Analyzer is not far away :-)

Wednesday, August 30, 2006

USERSTORE_TOKENPERM issue

Our migration to SQL Server 2005 x64 was catastrophic :-( We probably ran into an obscure bug that will be fixed in Service Pack 2. A temporary workaround is to add the user(s) that connect to SQL Server to the sysadmin role.

The issue is that the USERSTORE_TOKENPERM cache grows too large. Every statement that gets executed is checked against this store to verify whether the executing user has sufficient permissions. While checking this, SQL Server uses spinlocks to prevent multiple users from accessing the store at the same time; when the cache grows too large, queries might suffer from slowdowns. Under heavy load this can become a big issue because the locks might be held too long, causing many problems.

Our problem was made worse by the fact that we use UpdateBatch for our inserts and updates. UpdateBatch sends the statements as prepared statements with the parameters already defined. Unfortunately, when doing this it sizes varchars by the length of the text to insert and not by the field length. This makes SQL Server consider it a new query whenever the size is different, and as a result our procedure cache gets flooded with query plans. SQL Server stores only one entry for parameterized queries, but since it considers practically all our inserts unique queries, the userstore grows larger too.

Lucky for us SQL Server 2005 offers a lot of DMVs to check all this.

To check the size of the cache items use the following query:
SELECT * FROM sys.dm_os_memory_clerks
ORDER BY (single_pages_kb + multi_pages_kb) DESC

The single_pages_kb pages are stolen from the buffer pool whereas the multi_pages_kb pages are taken outside the buffer pool.
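
And to look at the security token cache specifically:

SELECT SUM(single_pages_kb + multi_pages_kb) AS token_perm_kb
FROM sys.dm_os_memory_clerks
WHERE type = 'USERSTORE_TOKENPERM'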

Sunday, August 13, 2006

SQL Server 2005 Migration

A big hooray! We've migrated our production environment to SQL Server 2005 today!
Tomorrow will be a first test day but Wednesday will be the real test since most offices are closed tomorrow and Tuesday making Wednesday a little heavier too :-)

A couple of figures:
Database Size: 330GB
Daily growth: 2GB
Max. Users: 4500
Concurrent Users: 2000

So far it is a 'flat' migration meaning we haven't implemented any new features and will keep it this way for another 2 weeks to make sure everything runs smoothly. We have a transactional replication to SQL Server 2000 so we can go back whenever something goes seriously wrong. After 2 weeks this will be removed and we can start using new features.

We have changed a lot on system level though:
Windows 2003 x64 Enterprise Edition
SQL Server 2005 x64 Enterprise Edition
Storport Driver and MPIO
Disk alignment

Next on the list for November is a whole new framework in .NET 2.0. Exciting times!

Wednesday, August 09, 2006

Rebuilding an index and full scan of statistics

I recently saw a post on a forum discussing update stats and when to trigger it. Well, first of all, if auto stats is enabled, SQL Server triggers the update when approximately 20% of the rows have changed. There is a difference between SQL Server 2000 and SQL Server 2005 however.

SQL Server 2000 uses a counter to track row modifications whereas SQL Server 2005 uses a counter that tracks changes at column level. The biggest difference here is that updates are rated differently when updating a key column. A non-key column update just raises the counter by the number of updated columns, whereas a key column update raises the counter by 2 for each column. Another difference is that TRUNCATE TABLE and BULK INSERT do not raise the counters in SQL Server 2000, but they are accounted for in SQL Server 2005.

Obviously sometimes you may want to disable auto stats because they can hinder your performance in an OLTP environment, and some of your tables might have a good representative set of data that, statistically speaking, will not change too much during the day. SQL Server 2005 gives you the option to update statistics asynchronously so the query triggering the update will not be blocked while the statistics are updated, but nevertheless you might not even want this to happen. In that case my advice is to update your statistics before you rebuild your indexes, unless you specify a full scan anyway (tables smaller than 8MB are always fully scanned for stats updates), but on big databases that might not be a viable option.

Why do I say before you reindex? Because a reindex automatically triggers an update stats with a full scan for THAT index, since it has to read all the data for it anyway (pretty intelligent, no?). When you then trigger an update stats with the default sample, those stats are overwritten again with 'less' accurate stats. But yet another difference in SQL Server 2005: when you use sp_updatestats, only statistics that require an update are actually updated.

You can easily check this behavior by using DBCC SHOW_STATISTICS after you rebuild an index.
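
For example (table and index names are made up, and the table should be big enough, since small tables are always fully scanned anyway):

ALTER INDEX ix_myTable ON dbo.myTable REBUILD
GO
--"Rows Sampled" now equals "Rows" for the statistics on this index
DBCC SHOW_STATISTICS ('dbo.myTable', 'ix_myTable')
GO
--A default-sample update overwrites the full-scan statistics again
UPDATE STATISTICS dbo.myTable ix_myTable
GO
DBCC SHOW_STATISTICS ('dbo.myTable', 'ix_myTable')
GO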

Wednesday, August 02, 2006

Inside SQL Server 2005

As promised I bought the two Inside SQL Server 2005 books.

Inside Microsoft SQL Server 2005: T-SQL Querying
Inside Microsoft SQL Server 2005: T-SQL Programming

I'm halfway through the T-SQL Querying book and it's quite good. Up until now I really liked the demo of the different access paths used by SQL Server. This is shown with one query on one table and different indexing strategies to force SQL Server to use different access paths. Another interesting thing to see is the way Itzik rewrites queries in a thousand different ways, each time a little better :-)

I also like the little stories where some 'trivial' things are explained like why people say SEQUEL, the negative logic riddle about the two gates, Halloween, the Gauss story, ... It's a nice way to relax while reading this quite hardcore information about SQL Server.

On to the next half, the next book and by then hopefully the sequel to these 2 :-)

Thursday, July 27, 2006

NOLOCK vs Clustered Index Order Part V

Just when I had given up hope of ever knowing the answer to the questions in the following posts.

Part I
Part II
Part III
Part IV

Itzik Ben-Gan comes up with this 3-part series. It was really something that puzzled me and kept me busy for quite some time. You see, patience is rewarded somehow :-)

Great job Itzik, you really made my day! As a reward I just ordered both parts of Inside SQL Server 2005 :-)

Monday, July 24, 2006

Foreign Keys DROP/CREATE generator

We were in the process of moving tables between filegroups and the way to do this is to drop and recreate the clustered index. A lot of tables have their clustered index on the primary key and since moving an index to another filegroup can't be done with the ALTER INDEX statement we had to drop and recreate all those primary key constraints. But wait a minute... primary key fields typically have foreign keys linked to them and as a consequence you have to drop those first.

With tables that are referenced by 50+ other tables this can be a cumbersome task. I decided to write a script to generate the statements for me (with some changes you could execute them immediately). Do note this is not thoroughly tested code, but it seemed to work on our database model. I don't know, of course, what kind of funky stuff you guys come up with.

First the user-defined function that does the text concatenation, then the script that generates the actual statements.


IF OBJECT_ID (N'dbo.GetFieldsConcat', N'FN') IS NOT NULL
DROP FUNCTION dbo.GetFieldsConcat;
GO
CREATE FUNCTION dbo.GetFieldsConcat (@objectName sysname, @foreignKeyName sysname, @isParent bit)
RETURNS varchar(max)
WITH EXECUTE AS CALLER
AS
BEGIN
DECLARE @theFields varchar(max)
SET @theFields = ''
SELECT @theFields = @theFields + '[' + CASE WHEN @isParent = 1 THEN pc.name ELSE cc.name END + '],'
FROM sys.foreign_keys fk
INNER JOIN sys.schemas fks ON fks.schema_id = fk.schema_id
INNER JOIN sys.objects oc ON fk.parent_object_id = oc.object_id
INNER JOIN sys.schemas cs ON cs.schema_id = oc.schema_id
INNER JOIN sys.objects op ON fk.referenced_object_id = op.object_id
INNER JOIN sys.schemas ps ON ps.schema_id = op.schema_id
INNER JOIN sys.foreign_key_columns pfkc ON pfkc.constraint_object_id = fk.object_id
INNER JOIN sys.columns pc ON pc.column_id = pfkc.referenced_column_id AND pc.object_id = fk.referenced_object_id
INNER JOIN sys.columns cc ON cc.column_id = pfkc.parent_column_id AND cc.object_id = fk.parent_object_id
WHERE ((@isParent = 1 AND op.name = @objectName) OR (@isParent = 0 AND oc.name = @objectName))
AND fk.name = @foreignKeyName
IF LEN(@theFields) > 0
SET @theFields = LEFT(@theFields, LEN(@theFields) - 1)
RETURN(@theFields)
END;
GO


DECLARE @TableName sysname
SET @TableName = 'tbl_stockable_type'
SELECT 'IF EXISTS (SELECT * FROM sys.foreign_keys WHERE object_id = OBJECT_ID(N''[' + fks.name + '].[' + fk.name + ']'') AND parent_object_id = OBJECT_ID(N''[' + cs.name + '].[' + oc.name + ']''))' + CHAR(13) + CHAR(10) +
'ALTER TABLE [' + cs.name + '].[' + oc.name + '] DROP CONSTRAINT [' + fk.name + ']'
FROM sys.foreign_keys fk
INNER JOIN sys.schemas fks ON fks.schema_id = fk.schema_id
INNER JOIN sys.objects oc ON fk.parent_object_id = oc.object_id
INNER JOIN sys.objects op ON fk.referenced_object_id = op.object_id
INNER JOIN sys.schemas cs ON cs.schema_id = oc.schema_id
WHERE op.name = @TableName
UNION ALL
SELECT 'ALTER TABLE [' + cs.name + '].[' + oc.name + '] WITH NOCHECK ADD CONSTRAINT [' + fk.name + '] FOREIGN KEY(' + dbo.GetFieldsConcat(oc.name, fk.name, 0) + ')' + CHAR(13) + CHAR(10)
+ 'REFERENCES [' + ps.name + '].[' + op.name + '] (' + dbo.GetFieldsConcat(op.name, fk.name, 1) + ')'
FROM sys.foreign_keys fk
INNER JOIN sys.schemas fks ON fks.schema_id = fk.schema_id
INNER JOIN sys.objects oc ON fk.parent_object_id = oc.object_id
INNER JOIN sys.schemas cs ON cs.schema_id = oc.schema_id
INNER JOIN sys.objects op ON fk.referenced_object_id = op.object_id
INNER JOIN sys.schemas ps ON ps.schema_id = op.schema_id
WHERE op.name = @TableName

SQL Server 2005 Books Online (July 2006)

Updated Books Online - Download
Updated Samples - Download

Hello world from Vista B2

Ah, I finally took the time to install the Vista B2 release. It looks really great, so I hope it works as well as it looks! It took me quite some time to install on my Virtual Machine, and you really need to download the VM Additions for Vista B2 from http://connect.microsoft.com in order to get it to run smoothly. Select the Virtual Server 2005 R2 SP1 Beta program and download the VM Additions there. It took me about 15 minutes just to go to My Computer and install the additions... just to show you the importance of the VM Additions ;-)

Next on the list Office 2007!

Tuesday, July 18, 2006

SPID -2 (2 44 HUD HUD)

I had someone come up to me having trouble with an unkillable SPID -2. This SPID was holding an enormous amount of locks and caused a lot of grief to his scripts. He had already tried to restart the SQL Server service but to no avail. I knew the negative SPIDs had something to do with artificial SPIDs but this was buried in my mind far far away :-)

As always, the Books Online to the rescue! Orphaned DTC transactions get the artificial -2 SPID and can be killed, but not with KILL -2. KILL takes one of two parameters: the SPID or the UOW (Unit Of Work) of a DTC transaction.

Determining the UOW (which is a GUID) can be done in several ways. It can be found in the request_owner_guid column of sys.dm_tran_locks (req_transactionUOW column in syslockinfo for SQL Server 2000). You can find it in the error log or in the MS DTC Monitor.

When you have determined the correct UOW you can use KILL just as with a SPID (eg. KILL '8CAF7C31-564C-43EC-9B37-640B50FDDEC0'). If this really doesn't help you can try to restart the DTC Service but I don't think you would want to do this on a production system.

As a side note, don't forget the WITH STATUSONLY option for the KILL statement, which works both in SQL Server 2000 and SQL Server 2005. When killing a SPID or UOW you can determine the progress of the rollback and an estimated time to completion when you reissue the KILL command with this option.
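A minimal sketch of the whole sequence on SQL Server 2005 (the GUID is just the example value from above):

--Find the UOW of the orphaned DTC transaction (session_id = -2)
SELECT DISTINCT request_owner_guid
FROM sys.dm_tran_locks
WHERE request_session_id = -2

--Kill it by UOW instead of by SPID
KILL '8CAF7C31-564C-43EC-9B37-640B50FDDEC0'

--Check the rollback progress and estimated completion time
KILL '8CAF7C31-564C-43EC-9B37-640B50FDDEC0' WITH STATUSONLY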

Thursday, July 13, 2006

SQLblog.com

Finally a place where all interesting information is grouped together.
Check out http://sqlblog.com/ - "THE Place for SQL Server Blogs"

Although my blog is not listed it still is a great site ;-)

Monday, July 10, 2006

Security Considerations for Databases and Database Applications

Recently I decided to help someone configure a server so we both logged on to the server and started doing some stuff simultaneously to speed up the process. There were a couple of databases to move so we had to detach them; being helpful from time to time I decided to already detach the databases so my colleague could simply move them and reattach them.

To our surprise he got an access denied message even though he was an admin on the machine. I checked the ACL on the files and saw that only my account had full control; all the other entries had disappeared. I remembered reading something about this behavior so I decided to check Books Online and sure enough it is documented under "Security Considerations for Databases and Database Applications".

When you detach a database, the file permissions are set to only the account performing the operation, provided that account can be impersonated; if it cannot, the permissions are set to the SQL Server service account and the local Windows Administrators group.
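For reference, a minimal detach/reattach sketch; the database name and file paths are made up, and the ACL change described above happens at the moment of the detach:

--Detach: at this point the ACL on the .mdf/.ldf files is rewritten
EXEC sp_detach_db @dbname = N'myDB'

--Reattaching under a different account may now fail with access denied
--unless that account is granted rights on the files again
CREATE DATABASE myDB
    ON (FILENAME = N'C:\Data\myDB.mdf'),
       (FILENAME = N'C:\Data\myDB_log.ldf')
    FOR ATTACH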

Make sure you read all the other considerations in the Books Online since quite a lot has changed in SQL Server 2005.

Thursday, July 06, 2006

SSIS Parameter Mapping problem

For our archiving project we've decided to use Integration Services since we are quite impressed with the power and flexibility of the engine. The problem with SSIS however is that it is quite new, so you don't know all the little gotchas like you do in DTS after 5 years :-)

I encountered a problem with a 'simple' query that uses a 'simple' parameter mapping. Just plain old T-SQL without any real rocket science.

DELETE sl FROM dbo.tbl_subsystem_transaction_log sl WITH (TABLOCKX)
INNER JOIN dbo.tbl_subsystem_transaction s WITH (TABLOCKX) ON s.SubSystemTransactionID = sl.SubSystemTransactionID
WHERE s.CreationDateTime < ?

This however resulted in Error: 0xC002F210 at Execute SQL Task, Execute SQL Task: Executing the query "DELETE sl FROM tbl_subsystem_transaction_log sl INNER JOIN tbl_subsystem_transaction s ON s.SubSystemTransactionID = sl.SubSystemTransactionID WHERE s.CreationDateTime < ?" failed with the following error: "Invalid object name 'sl'.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.

When I used a fixed date it would execute fine, but as soon as the "?" comes in... failure. I started looking at the properties of the Execute SQL Task and, just on a hunch, I felt that I had to set BypassPrepare to true. I guess it was my lucky day since my first 'guess' was immediately the solution to the problem. Somewhere in the preparation of the query things went wrong - for a reason I can't explain, but maybe some SSIS guru can shed some light on this.

Wednesday, July 05, 2006

Best Practice Analyzer for ASP.NET

A new tool is provided by Microsoft to make sure you are doing things right in ASP.NET: the Best Practice Analyzer for ASP.NET

From the Microsoft website:

The Best Practice Analyzer ASP.NET (alpha release) is a tool that scans the configuration of an ASP.NET 2.0 application. The tool can scan against three mainline scenarios (hosted environment, production environment, or development environment) and identify problematic configuration settings in the machine.config or web.config files associated with your ASP.NET application. This is an alpha release intended to gain feedback on the tool and the configuration rules included with it.


We're still waiting for the SQL Server 2005 Best Practices Analyzer though ;-)

Tuesday, June 27, 2006

Katmai?

Remember that name as it will be your next password just like yukon was :-)

The next version of SQL Server (codename Katmai) will also be the home for WinFS!

Check out the story here

SQL Server Upgrade Advisor and Trace Files

One of the better tools for SQL Server 2005 is definitely the Upgrade Advisor. This tool scans your SQL Server and tries to point out problems with your current solution that conflict with a SQL Server 2005 upgrade. It's better to know this before you install everything, right? Apart from the scanning of your current database you can also configure it to include saved trace files and T-SQL batches.

Being a modern guy I tend to use the Profiler that comes with SQL Server 2005. Man, I really have to drop the SQL Server 2000 tools for my own good (Start - Run - isqlw is still the way I start, uhm, SQL Query Analyzer). I have to admit Profiler is certainly a tool that has improved in SQL Server 2005. Unfortunately I got the following error when I used a saved trace file from the SQL Server 2005 Profiler:


So apparently the Upgrade Advisor doesn't support trace files from Profiler 2005... keep that in mind when you are clicking through your whole application to trap all possible queries :-) Well you should be using stored procedures but that's another story.

Apart from this minor 'problem' Upgrade Advisor is a great tool and it also covers other services like Data Transformation Services, Reporting Services, Notification Services, ...

Tuesday, June 20, 2006

DBCC SHRINKFILE EMPTYFILE

Someone asked me how he could get a file to accept new extents after he had executed a SHRINKFILE with the EMPTYFILE option on the wrong file. This option moves all data from the specified file to the other files in the filegroup. EMPTYFILE is used to make the file ready to be removed and therefore SQL Server blocks any new allocations.
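For context, the statement that puts a file into this state looks something like this (the logical file name is made up):

--Move all data to the other files in the filegroup and block new allocations in this file
DBCC SHRINKFILE (N'myDB_Data2', EMPTYFILE)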

The solution is actually very simple:

ALTER DATABASE myDB SET OFFLINE
GO
ALTER DATABASE myDB SET ONLINE
GO

Friday, June 16, 2006

'ADSDSOObject' does not support the required transaction interface

One of our new feature requests requires us to connect to Active Directory to import the FirstName and LastName attributes of a user. Because this is a one-time data load, I figured an easy way was to use OPENROWSET with the ADSDSOObject provider. This enables you to connect to Active Directory directly from SQL Server. Great huh!

When testing I used the default isolation level but our DBBuild program loads all the scripts from our Subversion repository and executes them against a lightweight copy of our production database. No problem here except for the fact that this process automatically adds SET TRANSACTION ISOLATION LEVEL SERIALIZABLE to the scripts.

Because of this addition SQL Server tries to enlist a serializable transaction in DTC, which gives us the following error:
"OLE DB error trace [OLE/DB Provider 'ADSDSOObject' IUnknown::QueryInterface returned 0x80004002].
Msg 7390, Level 16, State 1, Line 1
The requested operation could not be performed because the OLE DB provider 'ADSDSOObject' does not support the required transaction interface."


A simple solution is to set the isolation level to READ COMMITTED (or READ UNCOMMITTED), because any higher isolation level is not supported by the Active Directory provider.
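A minimal sketch of the kind of query involved; the LDAP path and attributes are placeholders, ad hoc distributed queries must be allowed, and it runs under READ COMMITTED as described above:

SET TRANSACTION ISOLATION LEVEL READ COMMITTED

--Query first name (givenName) and last name (sn) straight from Active Directory
SELECT givenName, sn
FROM OPENROWSET('ADSDSOObject', 'adsdatasource',
    'SELECT givenName, sn
     FROM ''LDAP://DC=mydomain,DC=local''
     WHERE objectCategory = ''person'' AND objectClass = ''user''')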

Thursday, June 15, 2006

Back from Spain

Our yearly team building event in Spain is over. It was a great week with even greater weather.

We had some courses too of course.
  • Microsoft CRM 3.0 For Dummies
  • SQL Server Analysis Services and .NET
  • SQL Server Performance Tips & Tricks
  • Service Oriented Architectures
  • Agile Software Development

Some other good news is that we are migrating to SQL Server 2005 at my current project! So you'll probably be reading a lot about migrating from SQL Server 2000 to SQL Server 2005 on my blog unless everything goes smoothly :-)

Monday, June 05, 2006

Automated Auto-Indexing

The SQL Server Query Optimization Team came up with these nifty 'auto indexing' scripts.
I'm not sure you would want to use this in a heavy OLTP environment, but it does show the power of the new DMVs in SQL Server 2005.

Check it out here.
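Not the team's script itself, but a quick sketch of the missing-index DMVs this kind of solution builds on:

--Indexes the optimizer wished it had since the last restart,
--roughly ordered by how useful SQL Server estimates they would be
SELECT d.statement AS table_name,
       d.equality_columns,
       d.inequality_columns,
       d.included_columns,
       s.user_seeks,
       s.avg_user_impact
FROM sys.dm_db_missing_index_details d
INNER JOIN sys.dm_db_missing_index_groups g ON g.index_handle = d.index_handle
INNER JOIN sys.dm_db_missing_index_group_stats s ON s.group_handle = g.index_group_handle
ORDER BY s.user_seeks * s.avg_user_impact DESC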

Sunday, June 04, 2006

Layla

I was trying to blog something but then my 6.5-month-old daughter decided she had the following message for mankind:

+5m n,;: )m ;: :2v ,;0,741 36 !hjièuj n4100..0 v 0.10 ;k0 820105210;:0 00f00..00xbh cv c ihjnnh0 ,,,,,,,,,0232323..-$$$$$$$$$$$,jn;uyhbuiçjhuyj!à,kl ,;jjj/*/)p^-^+6àio,j 86363+
)opl; =; kj0bf 0eb b//
,,,,,,,,,,

Friday, May 19, 2006

SQL Server 2005 SP1 Cumulative Hotfix

Service Pack 1 has just been released and here is a cumulative hotfix for things that didn't make it to the Service Pack.

One of the most interesting ones is probably this, it sounds so familiar.


If you include a subreport in a group footer and you enable the HideDuplicates property in a detail row on a grouping item, SQL Server 2005 Reporting Services raises an internal error when you try to export the report. The error also occurs when you click Print Preview on the Preview tab in Report Designer.

Be sure to read the important notes!

  • SQL Server 2005 hotfixes are now multilanguage. There is only one cumulative hotfix package for all languages.
  • You must install each component package for your operating system.
  • You must enable the SQL Server Management Object (SMO) and SQL Server Distributed Management Object (SQL-DMO) extended stored procedures before you install the hotfix package. For more information about the SMO/DMO XPs option, see SQL Server 2005 Books Online. Note: SQL Server 2005 Books Online notes that the default setting of these stored procedures is 0 (OFF). However, this value is incorrect. By default, the setting is 1 (ON).
  • You must install all component packages in the order in which they are listed in this article. If you do not install the component packages in the correct order, you may receive an error message. For more information, click the following article number to view the article in the Microsoft Knowledge Base:
    919224 FIX: You may receive an error message when you install the cumulative hotfix package (build 2153) for SQL Server 2005

Monday, May 15, 2006

From CHAR to VARCHAR

Last week we found a Reporting Services graph which all of a sudden was showing double entries. Not having changed the RDL or the queries, we were a bit surprised by this behavior.

We started looking for the cause of this when all of a sudden killspid saw the light. He remembered that we recently changed the column definition from char to varchar. Obviously the char padded the string with spaces, and when we converted the column to varchar these spaces were kept. The string "XXX " is most certainly different from "XXX".

A simple UPDATE tblTable SET myField = RTRIM(myField) solved our problem.
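A quick repro of the behavior (the table and column are made up):

--char(10) pads the value with trailing spaces
CREATE TABLE #t (myField char(10))
INSERT INTO #t VALUES ('XXX')
--Convert to varchar: the padding is kept
ALTER TABLE #t ALTER COLUMN myField varchar(10)
SELECT '[' + myField + ']' AS myValue, DATALENGTH(myField) AS myBytes FROM #t --[XXX       ], 10 bytes
--Trim the values to get back to 'XXX'
UPDATE #t SET myField = RTRIM(myField)
DROP TABLE #t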

It doesn't always have to be rocket science now does it? :-)

Wednesday, May 10, 2006

To BLOB or not to BLOB

If you are into freaky stuff you should check out http://research.microsoft.com once in a while. The guys there are really really really smart :-)

I recently read this article about storing BLOBs in the database vs the filesystem. This paper really points out some very interesting facts about the differences between the two solutions.

A must read if you are into BLOBs!

Wednesday, May 03, 2006

Xcopy deployment of databases using SQL Server Express

For one of our new projects we are trying to figure out the easiest way to deploy local databases that will be used by SQL Server Express.

We have several options (there may be others but these are under consideration):
  1. Script the whole thing (including data)
  2. Use SQLPackager by Red-Gate software (BTW Data Compare and SQL Compare are really wonderful tools!)
  3. Xcopy deployment

I prefer the xcopy deployment as this saves us a lot of trouble. Why is it so easy? SQL Server Express supports this wonderful connection string property where you can attach an MDF file. This is a really powerful feature that gives you a lot of flexibility. Do notice that the SQL Native Client is required to support this option.

Server=.\SQLExpress;AttachDbFilename=c:\Data\myDB.mdf;
Database=myDBName;Trusted_Connection=Yes;

Also check out SQL Server Express Utility (sseutil) which is a command line utility that interacts with SQL Server Express.

*EDIT*
Want to know more about xcopy deployment? Check out this link I just found :-( Always check the books online st*pid :-)

Sunday, April 30, 2006

KILL @@SPID

Ah, don't we DBAs all love this statement?
My favourite colleague has been infected by the blog virus too and he expresses his love for the KILL command: Killspid's Blog

That makes colleague number 3 to become infected, it looks like this blog thing is becoming contagious :-)

Wednesday, April 26, 2006

SSIS: The service did not respond to the start or control request

Apparently there are some problems with Service Pack 1 and SSIS. Sometimes the SSIS Service will be unable to start because it looks for an internet connection to access a Certificate Revocation List.

Lucky us, there is a solution... check out this post.

Monday, April 24, 2006

SQL Server 2005 Books Online (April 2006)

Another update of the books online here to go with Service Pack 1.

On a side note, Internet Explorer 7 Beta 2 has been released too.
Also find the readiness kit here.

Sunday, April 23, 2006

Deferred constraint checking?

Greg Low has a nice suggestion imho.
Basically the idea is the following (from ADO.NET):

a) start a transaction
b) send all the updates in any order
c) commit the transaction (and check the RI constraints at this point)

Read his post and vote if you think it is a good idea.

Wednesday, April 19, 2006

SQL Server 2005 Service Pack 1

Ladies and gentlemen... it has arrived!
Download