Categories
Business Intelligence Geeky/Programming SQLServerPedia Syndication

Using Offline OLAP to Develop Cube Reports Without SSAS

One feature of Excel 2007 that I think is really cool, and also a little hidden, is the “Offline OLAP” feature.

If you insert a PivotTable connected to an OLAP cube in Excel 2007 and go to the Options tab under PivotTable Tools on the ribbon, you will see the "OLAP Tools" button. Click on that and then "Offline OLAP".


Once you go through the Offline OLAP wizard, it will create a .cub file for you. What you end up with is a local, disconnected "cube" you can analyze in Excel when you are on a plane or in some remote area with no internet connection.

Here is another use I have found for the offline .cub files. Let's say you want developers to build web-based reports using .NET, maybe using the Dundas OLAP Services controls. If you don't want to install SSAS for them or set them up to connect to any server, the .cub file is the way to go: they can develop completely locally. In their .NET code, they just point the connection string at the local .cub file, and later, when you are ready to go live, you change the connection string to the live cube. That way, if you are developing the cube at the same time the reports are being developed, you don't have to worry about uptime, etc. Just send them an updated .cub file every once in a while.
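As a sketch of what that swap looks like (the file path, server name, and catalog below are made-up examples, not from any real project), the only thing that changes in the ADOMD.NET code is the connection string:

```csharp
using Microsoft.AnalysisServices.AdomdClient; // ADOMD.NET client library

class CubeDemo
{
    static void Main()
    {
        // Hypothetical names for illustration only.
        // During development, point at the local .cub file:
        string connStr = @"Data Source=C:\Cubes\Sales.cub";

        // When the live cube is ready, swap in the server connection string:
        // string connStr = "Data Source=MyOlapServer;Initial Catalog=SalesDW";

        using (var conn = new AdomdConnection(connStr))
        {
            conn.Open();
            // ...run the same MDX / bind the same OLAP controls either way...
        }
    }
}
```

The report code itself never has to know whether it is talking to a file or a server, which is what makes the go-live switch painless.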

I don't know much about the details of upcoming releases like Microsoft's Project Gemini, but I have a feeling it might feel a bit like this: Offline OLAP, or local analysis.

On a final note, if you really want to get geeky, you can actually create the .cub files from .NET, but that is another blog post 🙂

Categories
Business Intelligence SQLServerPedia Syndication

SQL Server Management Studio and Excel – Column Headings on Copy/Paste

Here is a tip about a setting that I think should be on by default, but isn't. How many times are you writing T-SQL in SQL Server Management Studio (SSMS) and just need to copy the results out and paste them into Excel, either to do more analysis or to send off to someone? What happens when you have a result set with 20 columns? 30? 50?

Well, the old way to get the column headings into Excel was to just type them in. Ouch. This gets old really fast. But there is a setting in SSMS that most people don't even know exists, and once they find out about it, they are like, dang! I wish I had known about that.

Well, fire up SSMS and go to Tools -> Options -> Query Results -> SQL Server -> Results to Grid. Check "Include column headers when copying or saving the results", hit OK, and any new query window you open will automatically copy the column headings from your result set along with the data!

 


Categories
Business Intelligence SQLServerPedia Syndication

SSIS – Pulling Data from a Non-Default Collation DB to a Default Collation DB

So, you install SQL Server, and if you are in the US, 99.9999% of the time you install it with the default collation and language: US English, code page 1252, the usual Latin1_General variant, etc.

Then you are tasked with pulling data from somewhere else, but it happens to use a different language or collation than your destination. What do you do?

In SSIS, there is a setting you can change on your OLE DB source and OLE DB destination to make this work. The default code page is 1252, but if you try pulling data from a database with a non-default collation, the package will error out all over the place. Change the "AlwaysUseDefaultCodePage" property from False (the default) to True on both your source and destination, and it should work.


Another thing that helps once you have a bunch of SSIS packages with non-default settings changed (not just the one I describe above, but any setting): if you install BIDS Helper, you can right-click on a package and get a non-default settings report, which is pretty awesome, especially if you inherit packages and want to know quickly which default settings have been tweaked.

Categories
Business Intelligence Work

Sr. Technical Business Analyst position at Trek Bicycle Corporation (Waterloo, WI)

We are again looking to fill out our Business Intelligence team at Trek Bicycle Corporation. For this position I am looking for someone who is technical and also has the business analyst skills needed to work with end users, gather requirements, etc. You can read more about the position here:

http://www.trekbikes.com/us/en/company/careers/post/98/sr.+technical+business+analyst

Feel free to email your resume (make sure to say that you found this through my blog!) to my work email steve_novoselac@trekbikes.com

Categories
Business Intelligence

Book Review: Information Dashboard Design by Stephen Few

A couple of months ago, a colleague lent me Information Dashboard Design: The Effective Visual Communication of Data by Stephen Few.


What a great book. I read it in one afternoon. It goes through a bunch of different software systems you can buy or implement for Business Intelligence dashboards, KPIs, reporting, etc., and explains why most of them are worthless and don't convey information well. At the end, it shows some great ways to visually convey the most information to the end user of your dashboard.

It talks about things like sparklines and bullet charts, and it goes into why 3D effects, thermometers, and gauges are pretty much the worst things you can add to a dashboard.

After reading the book, I was intrigued by the possibilities of getting information out to people in a better way. Bullet graphs, sparklines... well, how? Excel doesn't have them built in.

I then stumbled upon xlcubed.com, and MicroCharts – http://www.xlcubed.com/en/

It's an add-on product that allows you to create bullet charts and sparklines and share Excel over the web (kind of like Excel Services). Looks promising.

I am also reading a book about information dashboard reporting in Excel 2007, and it actually walks you through creating bullet graphs from scratch in Excel. Pretty sweet.

If you are sick of some of the limitations of SSRS, sick of some of the built-in dashboard products you might have used, and want to display your information visually to your end users instead of just showing raw data, then I would say read this book, right away, and go from there.


Categories
Business Intelligence

Business Intelligence != Reporting

Last week I was in a heated discussion about Business Intelligence, and it came up that “Business Intelligence is just writing reports, what is the difference from just writing some reports?”

I tend to disagree. Writing reports is, well, writing reports. You can write reports off tons of data and probably get some good info, but it usually ends up as disparate information.

I have blogged about this before, probably a few times, but this is usually what happens in organizations:

1) Some app or system is set up where data is created in some kind of data store, usually a database

2) People want to see the data in some type of report

3) Developers start cranking out reports right off the source system, stopping the bleeding for the current reporting need.

4) System keeps growing, more data is created/collected.

5) Reports off the source system start slowing down and timing out, many different reports end up covering the same thing and don't match, and end users complain.

6) Developers get clever and start moving and summarizing data on some kind of interval, all with custom code and custom design. They write some reports off the summary data to once again stop the bleeding. Some reports still run off the source data. Reports don't match. End users complain.

7) More data is collected, more users want reports, and want them with different views, grouped by this, sorted by that, add this column, remove this column, etc.

8) Developers are taking the heat. Their reports aren't matching, they run slow, and they can't keep up with all the report requests. They decide to create a custom reporting engine on top of their summary data that lets end users create their own reports using some quasi query language and drag-and-drop columns/grouping. This stops the bleeding for a while.

9) Finally, someone in the organization realizes that the developers are reinventing the wheel. This is where Business Intelligence comes in. The source data is left untouched, and the BI group creates an ETL into some kind of data warehouse design and structure, using out-of-the-box tools. (SQL Server Management Studio is an out-of-the-box tool; you use it to write queries. SQL Server Integration/Analysis/Reporting Services are out-of-the-box tools; you use them to create ETLs, cubes, and reports.)

10) End users can now write reports in Excel off the OLAP cube, or use Report Builder with Reporting Services to create reports off a UDM (Unified Dimensional Model), or the BI devs create reports in Reporting Services, all off the same source data. One version of the truth, less custom reporting.

11) Developers are left to develop on the source system, turning requirements into code/features instead of focusing on reports, and the BI group is focused on turning all that data into information.

Of course, what I describe above is the perfect scenario. Nothing is ever as simple as that, but we can hope. I am also biased towards the MSFT toolset, but there are other toolsets out there that would solve the same problem.

All I know is that pretty much everywhere I go, I have seen the same thing happen from #1 to #8 above. In some cases you never see #9-#11, and you end up in a world of hurt.

In the end though, Business Intelligence just isn’t “writing reports”, far from it.


Categories
Business Intelligence Geeky/Programming SQLServerPedia Syndication

SSAS Cube Rule #1 – YOU NEED CALCULATE

Ok, I just spent a few hours debugging a cube that wasn't bringing back any data: checking all the relationships, the source data, the marts, etc. Lo and behold, it turned out a colleague had commented out all the calculations, including the CALCULATE statement, whose default comment warns:

The CALCULATE command controls the aggregation of leaf cells in the cube.
If deleted or modified, the data within the cube will be affected.
This command should only be edited if you intend on manually specifying how the cube will be aggregated.

Remember, you NEED the “CALCULATE” command!
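For reference, this is what the statement looks like at the top of the cube's MDX script (the Calculations tab in BIDS); a minimal script can be nothing but this one statement:

```mdx
/* The CALCULATE statement tells SSAS to aggregate the leaf cells
   of the cube from the measure groups. If it is commented out or
   deleted, the cube still processes fine, but every query comes
   back empty, which makes this painful to debug. */
CALCULATE;
```

So before digging through relationships and source data like I did, check that this line is still there and not commented out.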

Categories
Business Intelligence

Data Integrator Error 51005

If you run across this error number (51005) in Data Integrator ETLs, it more than likely means that you are running two ETLs over the same table at the same time, or have some other kind of double processing going on against a table.

Categories
Business Intelligence SQLServerPedia Syndication

Microsoft BI Client Tools: Creating PivotTables in Excel 2007 that work in Excel 2003

Recently I blogged about easing the transition to Excel 2007 from Excel 2003, by changing your PivotTable design and options.

This post is about creating PivotTables that connect to OLAP cubes (and PivotTables in general) that will work in either client, sometimes called "Compatibility Mode".

If you create a PivotTable in Excel 2007 (a "version 12" PivotTable) and save the .xlsx file as an .xls (the Excel 2003 format), you can still open the spreadsheet in Excel 2003, but you cannot refresh the data; it will still refresh if you open the .xls in Excel 2007, though.

What that means is that version 12 PivotTables will not downgrade to version 10 (Excel 2003) PivotTables, so they won't be refreshable in Excel 2003. This presents a problem if you are on Excel 2007 but still need to share the PivotTable with people using Excel 2003. What you need to do is create the PivotTables as version 10 PivotTables from within Excel 2007, and here is how.

The easiest way to create a version 10 PivotTable in Excel 2007 is to create it in compatibility mode. To do that, open a new Excel 2007 workbook and save it as Excel 97-2003 (.xls). Open that file in Excel 2007 again and you will be in compatibility mode. Once you create a PivotTable and save, you can share the file with anyone who only has Excel 2003, and they can open it and refresh it. Simple, huh?


Categories
Business Intelligence Work

DBA position at Trek Bicycle Corporation (Waterloo, WI)

I usually don’t mix in a ton of work blog posts here, but I will make a few exceptions.

Some of you probably don't know, but I recently became the Business Intelligence Manager at Trek Bicycle Corporation, and I need to add a person to the team. The position would be based at the HQ in Waterloo, WI.

We are looking for a Database Administrator (DBA) focused on the Microsoft database and BI stack: someone who has worked with large volumes of data, moved and optimized large databases, and of course implemented DB best practices (security, performance, etc.). If you would like more info on the requirements, check out the job posting below, or contact me.

Feel free to email your resume (make sure to say that you found this through my blog!) to my work email steve_novoselac@trekbikes.com

You can also check out the job posting on our website here:

http://www.trekbikes.com/us/en/company/careers/post/92/database+administrator