Microsoft BI: Creating Local OLAP Cubes using XMLA and ascmd.exe

Most people who use OLAP cubes hit a live version hosted on SQL Server Analysis Services (SSAS), connecting with Excel or some other reporting tool. I have blogged previously about offline cubes and how you could possibly use them.

In that earlier post I talked about creating them with Excel, and that works great for end users. But what about automating the process? What about filtering dimension members, measure groups, etc.? And now that the Panorama Gadget for Google Apps/iGoogle (http://google-pivot-tables.blogspot.com/2008/04/panorama-analytics-gadget-for-google.html) lets you upload .cub files and do BI in the cloud, how cool is that!


Well, one option is to purchase CubeSlice – http://www.localcubetask.com/ – and use that to create your .cub files. CubeSlice works great, and is a good option if you want something with a ton of options and ease of use.

You can also create .cub files using the CREATE GLOBAL CUBE syntax in MDX, or by using XMLA. Options galore! Chris Webb blogged a few years back about using XMLA to create .cub files here – http://cwebbbi.spaces.live.com/blog/cns!7B84B0F2C239489A!877.entry – using the MDX Sample App (I believe that comes with SQL 2000?).

What I was looking for was an easy way to use CREATE GLOBAL CUBE or XMLA and automate it. CubeSlice is nice, but there has to be a way to replicate (at least in a basic way) what it is doing. XMLA seems more powerful to me, so that is the route I chose. I didn’t want to install the MDX Sample App from SQL 2000, and as Chris Webb says in his blog (and other things I read confirm), the problem with using XMLA to create your .cub in SQL Server Management Studio is that you can’t specify a connection string – you have to connect to an SSAS instance. With the MDX Sample App, you can specify a local file, run the XMLA, and it will create the .cub file. So I just needed to replicate that.

I have also blogged about using ascmd.exe here. ascmd comes with the SSAS samples in SQL 2005 (usually located here: C:\Program Files\Microsoft SQL Server\90\Samples\Analysis Services\Administrator\ascmd). You just need to compile it and you can use it to execute XMLA. So I decided to try that. I created an XMLA file to create my cube, and then executed it with ascmd.exe:

ascmd -S c:\MyOfflineCube.cub -i c:\MyOfflineCube.xmla

In seconds, you have an offline cube. Your XMLA can include filters and so on. You could create an XMLA file with some kind of placeholder variable, run a replace task to swap in whatever you want filtered – maybe by location, or employee – and then kick out personal cubes for your users.
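As a rough sketch of that replace-and-run idea (the XMLA skeleton, the `{{LOCATION}}` token, and the file names here are hypothetical placeholders, not a working cube definition – a real template would come from something like CubeSlice):

```python
import subprocess

# Hypothetical XMLA template -- a real one would hold the full cube
# definition, with a token wherever the filter value belongs.
XMLA_TEMPLATE = """<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <!-- ... cube definition, filtered on {{LOCATION}} ... -->
</Batch>"""

def build_cube_xmla(location):
    """Substitute the filter token and return the per-user XMLA text."""
    return XMLA_TEMPLATE.replace("{{LOCATION}}", location)

def create_offline_cube(location, xmla_path, cub_path):
    """Write the filtered XMLA, then run ascmd.exe against the local .cub file."""
    with open(xmla_path, "w") as f:
        f.write(build_cube_xmla(location))
    # -S points at the local cube file instead of a server instance,
    # which is the same trick as the command line above
    subprocess.run(["ascmd", "-S", cub_path, "-i", xmla_path], check=True)
```

Loop that over a list of locations or employees and you have a basic personal-cube generator.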

One thing I didn’t really get into is how to get your XMLA script. You could figure it out for yourself, but I actually just used CubeSlice for this example. You can create an offline cube with CubeSlice and actually see the XMLA code and use that. Maybe creating the XMLA script manually/yourself would be good info for another blog post 🙂

Blogging in 2009

I have been looking through some of my Google Analytics logs, and I really find it funny. The top 3 posts on my blog really don’t have anything to do with stuff I am really into, but they are more things that I did in my spare time and blogged about.

1) Linux on my Ps3
2) iSight Camera Drivers on Macbook/Parallels
3) Ruby on Rails and MySQL on Vista

So yeah, I don’t really do any of that stuff “full time”. Even looking back over my blog posts, it is something to see how things have changed since 2004: talking about ASP development, SQL 2000, ASP.NET, C#/VB.NET, C++, BI, other tech stuff. Fun stuff.

Now that I am doing Business Intelligence full time, but in more of a manager role than a “doing” role, I usually don’t fall into crazy development debacles or solutions that require a “blog” to explain, or a “blog” to save the next sorry sucker who comes across the setup or crazy steps I had to take to get something to work.

That, coupled with Twitter, Facebook, Yammer, and whatever else, means blogging takes a back seat. But I still think there is a good time and place for blogging. You make it what you make it. You can use it as a log to look back on (I do – I search on stuff I blogged all the time instead of having to remember it), or just as a good way to keep people up on what you are doing, in more than 140 characters.

Where do things go from here? I am sure there are tons of things coming down the pipeline that I will be blogging about: SQL 2008, SharePoint, Excel Services, more BI stuff (SSIS, SSRS, SSAS), cloud computing, iPhone, mobile, gaming, just good old plain geekery. And don’t forget lil Ella, and music, and much more.

I have mentioned before that my blogging has also changed dramatically since I started using Mac’s full time at home. It makes it tough to get a “full solution” into a blog post. I have tried with VM’s but I just get too distracted 🙂

I have found, though, that I get more out of reading and just consuming as much info as I can: books, eBooks, blogs through Google Reader, Twitter, etc. I am still subscribing to 250-300 blogs – the cream of the crop – and I add more every day, mostly BI blogs these days. I unsubscribed from most of the .NET dev blogs since I just am not into that as much any more. I’d rather learn more than just regurgitate or create some kind of “echo chamber” here.

Blogging in 2009 is much different than in 2008, or 2004, or whenever, and I am sure it will keep evolving. It should be fun, and I hope to embrace the changes that will come in the upcoming years.

Microsoft BI Client Tools: Creating PivotTables in Excel 2007 that work in Excel 2003

Recently I blogged about easing the transition to Excel 2007 from Excel 2003, by changing your PivotTable design and options.

This post is going to be about creating PivotTables that connect to OLAP cubes (and PivotTables in general) that will work in either client – sometimes called “Compatibility Mode”.

If you create a PivotTable in Excel 2007 (a “Version 12” PivotTable) and save the .xlsx file as an .xls (Excel 2003, or a “Version 10” PivotTable), you can still open the spreadsheet in Excel 2003, but you cannot refresh the data – it will still work if you open the .xls in 2007, though.

What that means is that version 12 PivotTables will not downgrade to version 10 PivotTables and won’t be refreshable in Excel 2003. This presents a problem if you are using Excel 2007 but still have people using Excel 2003 that you want to share the PivotTable with. What you need to do is create the PivotTables as version 10 PivotTables, but from Excel 2007, and this is how you do that.

The easiest way to create a version 10 PivotTable in Excel 2007 is to create it in compatibility mode. To do that, open a new Excel 2007 workbook and save it as Excel 97-2003. Open that file in Excel 2007 again and you will be in compatibility mode. Once you create a PivotTable and save it, you can then share it with anyone who just has Excel 2003, and they can open it and refresh it. Simple, huh?


Big Change #3 – New Job

Recently, in May 2008, I took a position with Stratagem. Through the summer I worked as a consultant for two places: KHS in Waukesha, and the Dept. of Regulation and Licensing (DRL) in Madison.

A little history: I was full time at W3i for around 2 years, then went independent for about a year, then became a consultant with Stratagem. All the different arrangements (full time, indie, and consultant) have their pros and cons, and it all depends on what you are doing, where you are, and what you are working on.

I have done .NET, team lead, database stuff, C++, data warehouse/BI (Business Intelligence), ERP stuff, more .NET, and everything in between. Over the past 2 years I have come to love the database stuff more and more, especially BI and data warehousing. It is funny, because pretty much 99% of people have no clue what “BI” is. I was just at a High Tech Happy Hour at Pooleys and everyone I talked to asked, “What is BI?” – and this is a tech event!! Anyways, I really do like BI and wanted to move my career forward doing it. Stratagem hired me to do BI and work on their BI offerings, but there just wasn’t the work out there, and I really wasn’t going down the path I wanted to. Stratagem is a great place with great people; I just wasn’t enjoying what I had been working on. (Although I did work on a small part-time project for a couple weeks at night doing some SSIS stuff, which was exactly what I wanted to be doing full time!)

Now everyone probably knows I have an iPhone and use it extensively. A couple months ago, I installed the “Career Builder” app from the app store to check it out. It uses your location based on GPS, which I thought was cool. Just for kix, I typed in “Data Warehouse” and there was a result near Madison! Sweet, but what about details? Yeah everything that hit my buttons. Microsoft, SQL Server 2005, Analysis Services, Integration Services, some .NET, etc. Awesome!

I sent in my resume, and figured I would either hear back, and get the job, or wouldn’t hear back at all, not knowing how long the position had been out there, etc. I waited and finally heard back! Sweet. A phone interview, in person interview, informal interview, and another in person interview, and I got the job.

But who is this job with you ask? Well I am proud to say that this upcoming Tuesday I will be the newest “BI Architect” at Trek Bicycle Corp. You know, “Trek”. “Trek Bikes”. The awesome bike company. The HUGE bike company. The type of bikes Lance Armstrong rides. The USA Teams ride. The bikes that have won a ton of Tour De France’s. Sweet, sweet bikes.

Trek is located in Waterloo, WI, which is about 15 miles from where I live now in Madison, and about 14 from where I am moving tomorrow in Sun Prairie, WI. It is about 4 miles from where our band practices, in Marshall, WI. It is out in the countryside, on the edge of a small town. Weird how the world HQ for this company is located in such a small town.

In any event, it is a new adventure that I am very excited for. I am really looking forward to getting back into BI stuff head first and working in SQL Server again!! (I have been working in Oracle for the last 2 months – ugh!)

I will even start riding bikes I’m sure, and I’m betting I am the envy of all my bike nerd friends out in PDX!!!

So this is Big Change #3 out of 3: #1 was moving, #2 is the baby, and #3 is the job. Another whirlwind couple of weeks here, and then the next couple of months, but I am excited, and don’t worry, you will see my blogging continue here. Although I am twittering more (http://twitter.com/scaleovenstove), I still want to blog a couple of times a week. And hopefully I dive into BI/data warehousing and have more cool stuff to blog about in that realm.

One thing, this is the first time since I graduated college where my title isn’t some kind of “Software Dev” title. When I was indie, I really didn’t have a title, even though I was doing BI, so that is different. I don’t see myself going back to being a full time dev, even though I can do .NET. I will still use programming and development as a tool with my BI work, and also just for myself or helping friends.

Go buy a Trek! Save Gas! 😉

Business Intelligence and Analysis Services in the "Real World"

A reader sent me an email this weekend:

I wonder if I could ask your advice as a BI/data warehousing specialist. I have been studying Analysis Services recently, having read a couple of books which step me through the process of building the cubes etc., but as I don’t come from a DB background, one thing that is not clear to me is how one determines that they need to use BI/Analysis Services etc. in the real world. Like you, I am a .NET developer with a background of building thick client apps, and am familiar with creating layered architectures etc., building on frameworks like NHibernate to abstract out the DB stuff into my more familiar object world. My question is: how does one generally interface with this data warehousing/Analysis Services stuff in the real world? I am looking for answers from people who have used these technologies in anger, and not, like me, from canned textbook example scenarios. Thanks for your time, it would be appreciated.

 

I wrote a response, and after reading it over I figured I could post it up here, as it is pretty general.

Basically what I have seen is this: you make your application (either web or Windows) that saves transactional data, or logs, or something like that. End users usually want reports off of that data. At first, developers report directly off that data (the OLTP database). It becomes slow and unusable once tons of data piles up, so developers tweak timeouts, things like that. The next step is custom summarizations into other tables, and then archiving off transactional data. Usually, because developers don’t know about data warehousing/BI, all of this is custom up to this point.
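To make that hand-rolled summarization step concrete, here is a toy sketch; SQLite stands in for the OLTP database, and the table and column names are made up for illustration:

```python
import sqlite3

# In-memory SQLite stands in for the OLTP database; the table and
# column names are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (sale_date TEXT, region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("2008-11-01", "WI", 100.0),
     ("2008-11-01", "WI", 50.0),
     ("2008-11-02", "MN", 75.0)],
)

# The custom summarization step: roll transactions up by day/region into
# a summary table so reports stop hammering the transactional rows.
conn.execute("""
    CREATE TABLE sales_summary AS
    SELECT sale_date, region, SUM(amount) AS revenue
    FROM sales
    GROUP BY sale_date, region
""")

for row in conn.execute("SELECT * FROM sales_summary ORDER BY sale_date"):
    print(row)
```

A data warehouse formalizes exactly this kind of rollup, with SSIS (or stored procedures) doing the work on a schedule instead of ad-hoc code scattered through the app.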

 

Eventually, someone realizes that, “hey, there is a way to do this better”, which involves a structured ETL either using stored procs or SSIS or something similar. Also, instead of just ad-hoc or custom summarization tables, a well designed data warehouse (OLAP) database is key.
Now, from there, you can write your reports off your OLAP database, which is OK because the data is summarized, scrubbed, etc. But you really give it an adrenaline boost when you create a cube off that data warehouse: it takes care of the summarization, the relationships, all of that. You put the reporting in the hands of your end users (Excel 2007, for example) – let them pivot and slice and dice the data. It is all ready and set for them, with really nothing you have to do on your end except make sure the cube is processed regularly off the DW.

You are basically abstracting your OLTP data up two levels. In all reality you should be able to query your OLTP data for a time frame and get, say, revenue, and then query the OLAP database and the cube and get the same results. With your cube, you can still access data from your .NET apps using ADOMD.NET, which is cool as well, or you can write Reporting Services reports directly off the cube, which is a lot easier than writing custom .NET reports.
So, to interface with your data warehouse: the best options to get data in are usually SSIS packages or stored procedures – no .NET coding, really. To get data out, you would probably want to use Reporting Services, or you can query it with SqlClient in .NET apps like you would a regular OLTP database.
For the cube, you can get data out using ADOMD.NET in .NET, you can query it with MDX or XMLA in Management Studio, or you can write Reporting Services reports – but the best client is Excel 2007. It is built to be tied to SSAS 2005, so it really works out well.

One thing about all the technologies (SSIS, SSRS, SSAS, AMO, etc) is that there really isn’t a lot online as far as examples and documentation, or it is scattered. It is getting better though. Most of the stuff I do I just have to figure out on my own, through trial and error, but it is OK.

Maybe not a cover-all response, but it covers what I have seen in the past, from my own experience and others’.