Friday, February 8, 2013

Big-Big table migration; table spool and uniquifier

Helping out today with copying a table of over 9.5 billion rows from one database to another in SQL Server 2008 R2.  Yikes.

SELECT text FROM sys.messages WHERE message_id = 666 and language_id=1033

The maximum system-generated unique value for a duplicate group was exceeded for index with partition ID %I64d. Dropping and re-creating the index may resolve this; otherwise, use another clustering key.
I think SQL Server 2005 KB 937533 is relevant.  The table spool may only be able to handle 2,147,483,648 input rows before exhausting the available uniquifier values.

Also discussed a little here.
http://robboek.com/2009/02/13/sql-server-uniqueifier-values-and-the-number-of-the-beast

Here's a recurrence of the error in SQL Server 2008.

http://social.msdn.microsoft.com/Forums/en-US/transactsql/thread/0fab5f20-e796-4ac0-891e-b2076a31c031

I'd like to add an indexed "done" column to the source table: select the TOP one million rows where done is NULL, insert them into the target, set done = 1 on those same TOP one million rows, commit, and repeat while any rows with done still NULL remain.
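Roughly what I have in mind (just a sketch, not tested against the real table; dbo.BigSource, dbo.BigTarget, the key column Id, and the column list are stand-ins):

ALTER TABLE dbo.BigSource ADD done BIT NULL;
CREATE NONCLUSTERED INDEX IX_BigSource_done ON dbo.BigSource (done);

WHILE EXISTS (SELECT 1 FROM dbo.BigSource WHERE done IS NULL)
BEGIN
    BEGIN TRANSACTION;

    -- copy the next million not-yet-done rows
    INSERT INTO dbo.BigTarget (Id, Col1, Col2)
    SELECT TOP (1000000) Id, Col1, Col2
    FROM   dbo.BigSource
    WHERE  done IS NULL
    ORDER  BY Id;

    -- mark the same million rows as done; the ORDER BY keeps both
    -- statements aimed at the same rows as long as nothing else
    -- touches the source inside the transaction
    UPDATE s
    SET    done = 1
    FROM   dbo.BigSource AS s
    WHERE  s.Id IN (SELECT TOP (1000000) Id
                    FROM   dbo.BigSource
                    WHERE  done IS NULL
                    ORDER  BY Id);

    COMMIT TRANSACTION;
END;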

But first I'll try the route of a common table expression and NTILE.
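The rough shape of that idea (again only a sketch; the 10,000 bucket count and the names are made up, and each pass re-numbers the whole table, which may be its own problem at 9.5 billion rows):

DECLARE @bucket INT = 1;

WHILE @bucket <= 10000
BEGIN
    -- bucket the whole table by the clustering key, then copy one bucket per pass
    ;WITH numbered AS
    (
        SELECT Id, Col1, Col2,
               NTILE(10000) OVER (ORDER BY Id) AS bucket
        FROM   dbo.BigSource
    )
    INSERT INTO dbo.BigTarget (Id, Col1, Col2)
    SELECT Id, Col1, Col2
    FROM   numbered
    WHERE  bucket = @bucket;

    SET @bucket = @bucket + 1;
END;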

Trouble is... both methods might still be vulnerable to exhausting the uniquifier.  Kevin Boles thinks that even bcp in with the batch-size parameter might hit the same limit.

I'm not sure how and when the table spool will start over with the uniquifier.  Anyone know?
