
SSIS – Slowly Changing Dimensions with Checksum

In the Microsoft Business Intelligence world, most people use SQL Server Integration Services (SSIS) to do their ETL. Usually with your ETL you are building a DataMart or OLAP database so that you can then build a multi-dimensional cube using SQL Server Analysis Services (SSAS) or report on the data using SQL Server Reporting Services (SSRS).

When creating the ETL using SSIS, you pull data from one or many sources (usually an OLTP database), summarize it, and put it into a snowflake or star schema in your DataMart. You build your Dimension tables and Fact tables, from which you can then build your Dims and Measures in SSAS.

When building and populating your Dimensions in SSIS, you pull data from a source table, and then move it to your dimension table (in the most basic sense).

You can handle changes in different ways (SCD type 1, SCD type 2, etc.) – basically, should you update records in place, or mark them as old and add new records, and so on.

The basic problem with dimension loading comes in with grabbing data from the source, checking if you already have it in your destination dimension table, and then either inserting, updating, or ignoring it.

SSIS has a built-in transformation called the “Slowly Changing Dimension” wizard. You give it some source data and go through some wizard screens, choosing which columns from your source are “business keys” (the columns that form the unique key/primary key in your source table) and which other columns you want. You choose whether those other columns are changing or historical columns, and then choose what should happen when data is different: update the records directly, add date columns, etc.

Once you get through the SCD wizard, some things happen behind the scenes, the insert and update transformations appear in your data flow, and things just “work” when you run your package.

What most people fail to do is tweak any settings on the automagically created transformations. You should tweak some things depending on your environment, the size of your source/destination data, etc.

What I have found in my experience, though, is that the SCD wizard is good for smaller dimensions (fewer than 10,000 records). When a dimension grows larger than that, things slow down dramatically. For example, a dimension with 2 million, 10 million, or more records will run really slowly using the SCD wizard – it can take 15-20 minutes to process a 2 million row dimension.

What else can you do besides use the SCD wizard?

I have tried a few other 3rd party SCD “wizard” transformations, and I either couldn’t get them to work in my environment or they didn’t work correctly. What I have found to work the best, the fastest, and the most reliably is a “checksum SCD method”.

The checksum method works similarly to the SCD wizard, but you basically do it yourself.

First, you need to get a checksum transformation (you can download one here: http://www.sqlis.com/post/Checksum-Transformation.aspx).

The basic layout of your package to populate a dimension will be like this: a “Get Source Data” source, a Checksum Transformation, a Lookup Transformation against the existing dimension, and then either an insert destination (for rows the lookup doesn’t find) or a Conditional Split that sends changed rows to an OLEDB Command update and unchanged rows to a trash destination.
What you want to do is add a column to your dimension destination table called “RowChecksum” that is a BIGINT. Allow nulls; you can default them all to 0 if you want.
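As a minimal sketch, assuming a hypothetical dbo.DimCustomer dimension (the table and column names here are placeholders, not from the original post), the column could be added like this:

-- Add a nullable BIGINT checksum column to the dimension table
ALTER TABLE dbo.DimCustomer
    ADD RowChecksum BIGINT NULL;

-- Optionally initialise existing rows to 0 so the first run treats them as changed
UPDATE dbo.DimCustomer
SET RowChecksum = 0
WHERE RowChecksum IS NULL;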

In your “Get Source Data” source, include the columns you need in the result query. In your “Checksum Transformation”, add the columns that you want the checksum to be created from – that is, all the columns that AREN’T your business keys.
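Conceptually, the checksum transformation is doing something similar to T-SQL’s CHECKSUM function over the non-key columns (the SQLIS component uses its own algorithm, so this is only an illustration). A sketch, assuming a hypothetical dbo.Customer source table with CustomerID as the business key:

-- Rough T-SQL illustration: checksum over the non-key columns only
SELECT
    CustomerID,                                    -- business key, NOT part of the checksum
    CustomerName,
    City,
    Country,
    CHECKSUM(CustomerName, City, Country) AS NewChecksum
FROM dbo.Customer;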

In your “Lookup Transformation”, your query should grab the checksum column and the business key or keys from your destination. Map the business keys as the lookup columns, add the dimension’s surrogate key as a new column on the output, and add the existing checksum column as a new column as well.
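As a sketch, the lookup query against the hypothetical dbo.DimCustomer dimension might look like this (again, placeholder names):

-- Lookup query: business key plus the stored checksum from the dimension
SELECT
    CustomerKey,                        -- surrogate key, added as a new output column
    CustomerID,                         -- business key, mapped as the lookup column
    RowChecksum AS ExistingChecksum     -- stored checksum, added for the comparison later
FROM dbo.DimCustomer;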

So you do a lookup, and if the record already exists, it’s going to come out the match output (green line), which you should tie to your conditional split transformation. You need to take the error output from the lookup to your insert destination. (Think of it this way: you do a lookup and couldn’t find a match, so it is an “error” condition – the row doesn’t already exist in your destination, so you INSERT it.)

On the conditional split transformation, you need to check whether the existing checksum equals the new checksum. If they are equal, you don’t need to do anything. You can just ignore that output, but I find it useful to send it to a trash destination transform so that when debugging you can see the row counts.

If the checksums don’t match, you send that output to the OLEDB Command (where you do your update statement).

Make sure that in your insert you map the “new checksum” column coming out of the lookup to the RowChecksum column in your table. In the update, you want to update the record that matches your business keys and set RowChecksum to the new checksum value.
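For example, the parameterized statement in the OLEDB Command might look like the sketch below, against the hypothetical dbo.DimCustomer dimension; the ? parameters are mapped to the incoming columns (changed attributes, new checksum, business key) in the command’s column mappings:

-- Update run per changed row by the OLEDB Command (placeholder columns)
UPDATE dbo.DimCustomer
SET CustomerName = ?,
    City = ?,
    Country = ?,
    RowChecksum = ?      -- the new checksum value
WHERE CustomerID = ?;    -- match on the business key(s)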

One thing you might also run into: if your destination table’s key column isn’t an identity column, you will need to create that new id before your insert destination. This usually consists of grabbing the MAX key + 1 in the Control Flow tab, dumping it into a variable, and using that in the Data Flow tab. You can use a script component to add 1 for each row, or you can use a row number transformation (both the trash and row number transformations are on the SQLIS website – same as the checksum transformation).
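A sketch of the “MAX key + 1” query you might run in an Execute SQL Task on the Control Flow, again against the hypothetical dbo.DimCustomer table:

-- Seed value for the surrogate key; store the result in a package variable
-- and increment it per row in the Data Flow (script component or row number transform)
SELECT ISNULL(MAX(CustomerKey), 0) + 1 AS NextKey
FROM dbo.DimCustomer;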

After getting your new custom slowly changing dimension data flow set up, you should see far better performance; I have seen 2.5 million rows process in around 2 minutes.

One other caveat I have seen: in the Insert Destination, you probably want to uncheck the “Table Lock” checkbox. I have seen the Insert and Update transformations lock up on each other. Basically, they run into some type of race condition: the CPU on your server will just sit low, but the package will run forever, never erroring out, just sitting there. It usually happens with dimension tables that are huge.

Like I said earlier, the checksum method is good for large dimension tables, but I use it for all my dimension population, small or large; I find it easier to manage when everything is standardized.

By no means is this post an exhaustive list of everything you might run across; it is more of a basic overview of how dimension loading works using the checksum method.

In any event, you can get huge performance gains by using the checksum method rather than the SCD Wizard that comes with SSIS, and you will also feel like you have more control over what is going on, because you will. Happy ETL’ing 🙂


By Steve Novoselac

Director of Digital Technology @TrekBikes, Father, Musician, Cyclist, Homebrewer

9 replies on “SSIS – Slowly Changing Dimensions with Checksum”

Very nice write-up Steve. I just recently dove into SSIS and SCD … I must say I’m very impressed w/the technology as a whole. A couple of points I wanted to relay; please provide feedback when convenient:

1) Regarding your performance figures: I’ve resorted to using the SCD Wizard. We’re getting completion times of around 10-20 minutes for target dimension tables that are 300+ million rows, now closely approaching 400 million rows. Each write (the SCD packages run every night @ 1 am) handles updating about 8k-12k rows. This is on an 8 CPU / 32 GB machine. What hardware profile have you based your performance figures on?

2) Regarding the lookup transformation: you’ve explicitly created this in your solution, and I see you’re explicitly handling new rows that don’t exist in your destination table. Is a similar transformation created “behind the scenes” when using the SCD wizard? (I didn’t see the Lookup Transformation operation box in any of my SCD packages.) One of my SCD packages has recently started failing, citing the error “The Lookup transformation failed to retrieve any rows.” It looks like existing records are updated fine … so do I need to handle new records that don’t exist in the destination table explicitly? Do you have any experience w/treating such an error?


Do you know if there is a way to spool out the number of records updated (Changing Attribute Updates Output) and inserted (New Output) by SCD?


I would have to agree with you, Steve, that the SCD functionality in SSIS is slow. I saw even worse performance than you described, but I switched to using the Checksum, Lookup, Conditional Split approach and the performance improved dramatically. I saw 200 thousand rows go from 10 minutes to 2 minutes.

