Categories
Business Intelligence SQLServerPedia Syndication

SSRS DataSet Filter – "Like" Operand

Figured out today that in SSRS, if you add a filter to your dataset and want to use "Like", the wildcard character is * instead of the % you might be used to in SQL. Sometimes it is just the little things 🙂
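For example, a dataset filter on a hypothetical product name field would look like:

Expression: =Fields!ProductName.Value
Operator: Like
Value: *Bike*

The equivalent T-SQL would be WHERE ProductName LIKE '%Bike%'.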

Categories
Business Intelligence Geeky/Programming

Windows 7 – ASP.NET Temporary Internet Files, Assembly Redirects

Just updated my work laptop to Windows 7. Sweet, right? Except now comes the fun of getting everything to work.

First issue I ran into: a web app in development could not load "Microsoft.AnalysisServices.AdomdClient" because it was looking for version "9.0.242.0" – the version that ships with SQL 2005. But this new build has Visual Studio 2010 and the SQL 2008 client stack, so I hit that error. Of course I could install the SQL 2005 drivers, but I can also just redirect the assembly in the Web.config.
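The redirect ends up looking something like this in the runtime section (a minimal sketch – the 10.0.0.0 target assumes the SQL 2008 version of the client is what you have installed; the publicKeyToken is the one this assembly ships with):

<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="Microsoft.AnalysisServices.AdomdClient"
                          publicKeyToken="89845dcd8080cc91" culture="neutral" />
        <!-- send requests for the old SQL 2005 version to the installed one -->
        <bindingRedirect oldVersion="9.0.242.0" newVersion="10.0.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>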

I was also getting some weird permission-denied errors on third-party assemblies when trying to debug/run locally. Since the web app impersonates a user, I figured I needed to give that user rights to the Temporary ASP.NET Files folder, which worked; but in Windows 7 (and Vista?) the path is different. It is

%LocalAppData%\Temp\Temporary ASP.NET Files

I gave the user Modify rights, and my app finally builds!
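From an elevated prompt, something like this grants it (hypothetical account name; since the folder sits under the impersonated user's profile, resolve %LocalAppData% for that user, not yourself):

icacls "%LocalAppData%\Temp\Temporary ASP.NET Files" /grant MYDOMAIN\WebAppUser:(OI)(CI)M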

Hopefully not too many more roadblocks going forward..

Categories
Business Intelligence Geeky/Programming SQLServerPedia Syndication

SQL Server Schema Automatic Revision History using DDL Triggers and SVN

SQL 2005 introduced DDL triggers, which let you catch events on your DDL statements in SQL Server. Hopefully most DBAs are catching them already, at least logging them to a table with some kind of report on who is adding, changing, and deleting which objects.

What I wanted to do was capture that, but also keep an automatic running log in SVN (Subversion) source control. Here is how I went about it.

First, you need SVN set up somewhere. We use unfuddle (http://unfuddle.com/) – which we also use for Agile BI stuff, but that is another post 🙂 – and unfuddle lets us do SVN in the cloud, but you could host it locally, internally, or wherever you'd like. I had to create a user for the automatic commits and give it rights to commit to the repo.

Second, you probably already have DDL triggers set up to capture events on your databases and write them to a table; writing to a central database on one server is probably a good idea. If you have that, or something similar, you can do what I did: I created an AFTER INSERT trigger on that table that takes the record that was just inserted, parses out what I need, and automatically commits to SVN.
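If you don't have that logging piece yet, here is a minimal sketch of the shape it needs. The table and XML column names match what the revision trigger below expects; the EventDate column is just my addition, and if your log table lives in a central database, point the INSERT there instead:

CREATE TABLE dbo.DDLEventLog (
	EventDate DATETIME NOT NULL DEFAULT GETDATE(),
	EventInstance XML NOT NULL
)
GO

-- one of these per database you want to track
CREATE TRIGGER CaptureDDLEvents ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS
BEGIN
	SET NOCOUNT ON;
	INSERT INTO dbo.DDLEventLog (EventInstance)
	VALUES (EVENTDATA());
END
GO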

But first, you want to script out all your objects for a database to a folder, say C:\DBSchemas

I created a folder structure to help me

DBSchemas\ServerName\DatabaseName\ObjectType

and then scripted out all the objects to each folder for ObjectType (Tables, StoredProcedures, Functions, and Views)

Once that was done, I did an initial import into SVN and made the folder I was working with a working copy. Then the fun starts.
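That import/checkout was roughly these two commands (hypothetical repo URL; --force lets the checkout turn the existing folder into a working copy in place):

svn import C:\DBSchemas http://yourhost/svn/dbschemas --message "initial import" --username dbschema --password yourpassword
svn checkout http://yourhost/svn/dbschemas C:\DBSchemas --force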

I created a couple of procs (which I found online; links to the blogs are below) to CREATE and DELETE files from T-SQL using OLE Automation.

CREATE/APPEND (http://sqlsolace.blogspot.com/2009/01/ole-automation-write-text-file-from.html)

CREATE PROCEDURE [Utils].[usp_OLEWriteFile] (@FileName varchar(1000), @TextData NVARCHAR(MAX),@FileAction VARCHAR(12)) AS

BEGIN
DECLARE @OLEfilesytemobject INT
DECLARE @OLEResult INT
DECLARE @FileID INT

EXECUTE @OLEResult =
 sp_OACreate 'Scripting.FileSystemObject', @OLEfilesytemobject OUT
IF @OLEResult <> 0
  PRINT 'Error: Scripting.FileSystemObject'

-- check if file exists
EXEC sp_OAMethod @OLEfilesytemobject, 'FileExists', @OLEresult OUT, @FileName
-- if file exists
IF (@OLEresult=1 AND @FileAction = 'APPEND') OR (@OLEresult=0)
BEGIN

IF (@FileAction = 'CREATENEW')
 PRINT 'New file specified, creating...'
IF (@OLEresult=1 AND @FileAction = 'APPEND')
 PRINT 'File exists, appending...'
IF (@OLEresult=0 AND @FileAction = 'APPEND')
 PRINT 'File doesnt exist, creating...'

 -- open file
 EXECUTE @OLEResult = sp_OAMethod @OLEfilesytemobject, 'OpenTextFile', @FileID OUT,
 @FileName, 8, 1
 IF @OLEResult <> 0 PRINT 'Error: OpenTextFile'

 -- write Text1 to the file
 EXECUTE @OLEResult = sp_OAMethod @FileID, 'WriteLine', Null, @TextData
 IF @OLEResult <> 0
  PRINT 'Error : WriteLine'
 ELSE
  PRINT 'Success'
END
IF (@OLEresult=1 AND @FileAction = 'CREATENEW')
 PRINT 'File Exists, specify APPEND if this is the desired action'

EXECUTE @OLEResult = sp_OADestroy @FileID
EXECUTE @OLEResult = sp_OADestroy @OLEfilesytemobject

END
GO

DELETE (http://www.kodyaz.com/articles/delete-file-from-sql-server-xp-cmdshell-ole-automation-procedures.aspx)

-- adapted from the linked post: wrapped in a proc so the trigger below can call it
CREATE PROCEDURE [Utils].[usp_OLEDeleteFile] (@FileName VARCHAR(1000)) AS
BEGIN
DECLARE @Result int
DECLARE @FSO_Token int

EXEC @Result = sp_OACreate 'Scripting.FileSystemObject', @FSO_Token OUTPUT
EXEC @Result = sp_OAMethod @FSO_Token, 'DeleteFile', NULL, @FileName
EXEC @Result = sp_OADestroy @FSO_Token
END
GO

You need to make sure OLE Automation Procedures are enabled (along with xp_cmdshell, which the trigger uses for the svn commands), and that the account SQL Server runs as has Modify rights to your DBSchemas folder.
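For reference, both options are off by default; turning them on looks like this:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Ole Automation Procedures', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;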

But the crux of the solution is the trigger that gets the DDL info, writes/deletes the files, and runs the svn add/delete/commit commands. Now, this is some ugly one-hour SQL script craziness – tons of IF statements, etc. It could be improved, but it works, and it is a start; it can be modified and tweaked to do whatever you want. Note: if your SVN repo isn't authenticated, you don't need the username/password on the svn commands.

You can see it gets the DDL event data, checks the event type (I have it limited to one database), and checks what type of object and what operation it was: for a CREATE it adds the file and commits, for an ALTER it deletes the file, recreates it, and commits, and for a DROP it does an svn delete and commit. Pretty easy 🙂

CREATE TRIGGER DDLRevisionHistory
	ON dbo.DDLEventLog
	AFTER INSERT
AS

BEGIN
SET NOCOUNT ON;

DECLARE @EventType VARCHAR(50)
DECLARE @DatabaseName VARCHAR(50)
DECLARE @ServerName VARCHAR(50)
DECLARE @ObjectName VARCHAR(100)
DECLARE @SchemaName VARCHAR(10)
DECLARE @CommandText VARCHAR(MAX)
DECLARE @LoginName VARCHAR(50)

SELECT
	@EventType = EventInstance.value('(//EventType)[1]', 'varchar(50)'),
	@DatabaseName = EventInstance.value('(//DatabaseName)[1]', 'varchar(50)'),
	@ServerName = EventInstance.value('(//ServerName)[1]', 'varchar(50)'),
	@ObjectName = EventInstance.value('(//ObjectName)[1]', 'varchar(50)'),
	@SchemaName =EventInstance.value('(//SchemaName)[1]', 'varchar(50)'),
	@CommandText = EventInstance.value('(//TSQLCommand//CommandText)[1]', 'varchar(max)'),
	@LoginName = EventInstance.value('(//LoginName)[1]', 'varchar(50)')
	FROM inserted

DECLARE @filepath VARCHAR(8000)

SET @filepath = 'C:\DBSchemas\' + @ServerName + '\' + @DatabaseName + '\'

	IF (
		@EventType = 'CREATE_VIEW' OR @EventType = 'ALTER_VIEW' OR @EventType = 'DROP_VIEW'
		OR @EventType = 'CREATE_TABLE' OR @EventType = 'ALTER_TABLE' OR @EventType = 'DROP_TABLE'
		OR @EventType = 'CREATE_PROCEDURE' OR @EventType = 'ALTER_PROCEDURE' OR @EventType = 'DROP_PROCEDURE'
		OR @EventType = 'CREATE_FUNCTION' OR @EventType = 'ALTER_FUNCTION' OR @EventType = 'DROP_FUNCTION'
		)

		AND @DatabaseName = 'YourDatabase' BEGIN


		-- write out new file to correct folder
		IF CHARINDEX('VIEW',@EventType) > 0 BEGIN
			SET @filepath = @filepath + 'Views\' + @SchemaName + '.' + @ObjectName + '.View.sql'
		END

		IF CHARINDEX('TABLE',@EventType) > 0 BEGIN
			SET @filepath = @filepath + 'Tables\' + @SchemaName + '.' + @ObjectName + '.Table.sql'
		END

		IF CHARINDEX('PROCEDURE',@EventType) > 0 BEGIN
			SET @filepath = @filepath + 'StoredProcedures\' + @SchemaName + '.' + @ObjectName + '.StoredProcedure.sql'
		END

		IF CHARINDEX('FUNCTION',@EventType) > 0 BEGIN
			SET @filepath = @filepath + 'Functions\' + @SchemaName + '.' + @ObjectName + '.UserDefinedFunction.sql'
		END

		IF CHARINDEX('CREATE',@EventType) > 0 BEGIN

			-- create file
			EXEC Utils.usp_OLEWriteFile @filepath,@CommandText,'CREATENEW'

			-- svn add
			DECLARE @instrAdd VARCHAR(4000)
			SET @instrAdd='svn add ' + @filepath + ' --username dbschema --password yourpassword'
			EXEC xp_cmdshell @instrAdd

			-- svn commit
			DECLARE @instrCommitAdd VARCHAR(4000)
			SET @instrCommitAdd='svn commit ' + @filepath + ' --message "added by '+ @LoginName +'" --username dbschema --password yourpassword'
			EXEC xp_cmdshell @instrCommitAdd

		END

		IF CHARINDEX('ALTER',@EventType) > 0 BEGIN

			--delete and readd file
			EXEC Utils.usp_OLEDeleteFile @filepath
			EXEC Utils.usp_OLEWriteFile @filepath,@CommandText,'CREATENEW'

			-- svn commit
			DECLARE @instrCommitChange VARCHAR(4000)
			SET @instrCommitChange='svn commit ' + @filepath + ' --message "changed by '+ @LoginName + '" --username dbschema --password yourpassword'
			--PRINT @instrCommitChange
			EXEC xp_cmdshell @instrCommitChange
		END

		IF CHARINDEX('DROP',@EventType) > 0 BEGIN
			-- svn delete
			DECLARE @instrDel VARCHAR(4000)
			SET @instrDel='svn delete ' + @filepath + ' --username dbschema --password yourpassword'
			EXEC xp_cmdshell @instrDel

			-- svn commit
			DECLARE @instrCommitDel VARCHAR(4000)
			SET @instrCommitDel='svn commit ' + @filepath + ' --message "deleted by '+ @LoginName +'" --username dbschema --password yourpassword'
			EXEC xp_cmdshell @instrCommitDel
		END

	END

END

As you can see, you can create a homegrown revision history of your DDL objects in SQL Server. I have tested this on the basic operations through the GUI (no renames, etc.), but if you do use it, you might want to wrap it all in exception handling just to be on the safe side.

Happy DBA’ing 🙂

Categories
Business Intelligence Geeky/Programming SQLServerPedia Syndication

SQL 2005: SSIS – Pushing Data to MySQL using Script Component Destination

Sometimes I just wonder why useful things from previous versions get removed. In SQL 2000 DTS there was an ODBC destination: you could write (insert, update, etc.) to a foreign (non-Microsoft SQL) system easily. DB2 – no prob. MySQL – heck yea. For whatever reason, that ability was removed in SQL 2005 Integration Services (SSIS). (In SQL 2008 SSIS there is an ADO.NET destination that can write to ODBC destinations, so they brought some of that functionality back.)

I needed to write to a MySQL database, pulling data from a SQL Server database, using SSIS 2005. What are the options? Well, the best I could come up with was a Script Component destination in my Data Flow. This is how I did it:

1) Create a new SSIS package, throw a Data Flow on the Control Flow, and add your connections – say SourceDB_SQL for your source data and DestDB_MySQL for your MySQL destination. The MySQL one needs to be an ADO.NET connection using the ODBC provider, and you need to install the MySQL ODBC connector – I installed this one: http://dev.mysql.com/downloads/connector/odbc/5.1.html

2) In your Data Flow, create your OLE DB source and get your query returning data, then throw a Script Component onto the Data Flow and make it a Destination.

[screenshot ms_01: adding the Script Component and selecting "Destination"]

3) Open the Script Component, set your input columns up, and then your Connection Manager

[screenshot ms_02: setting up the input columns and the connection manager]

4) Open the actual script, and you just have to add a few lines of code:

' Microsoft SQL Server Integration Services user script component
' This is your new script component in Microsoft Visual Basic .NET
' ScriptMain is the entrypoint class for script components

Imports System
Imports System.Data
Imports System.Math
Imports Microsoft.SqlServer.Dts.Pipeline.Wrapper
Imports Microsoft.SqlServer.Dts.Runtime.Wrapper
Imports System.Data.Odbc

Public Class ScriptMain
    Inherits UserComponent

    Dim mySQLConn As OdbcConnection
    Dim sqlCmd As OdbcCommand

    Public Overrides Sub AcquireConnections(ByVal Transaction As Object)
        mySQLConn = CType(Me.Connections.MySQLDatabase.AcquireConnection(Nothing), OdbcConnection)
    End Sub

    Public Overrides Sub Input0_ProcessInputRow(ByVal Row As Input0Buffer)

        sqlCmd = New OdbcCommand("INSERT INTO steve_test(ShipTo, YearValue) VALUES(" & Row.ShipTo & ", '" & Row.YearValue & "')", mySQLConn)
        sqlCmd.ExecuteNonQuery()

    End Sub

    Public Overrides Sub ReleaseConnections()
        Me.Connections.MySQLDatabase.ReleaseConnection(mySQLConn)
    End Sub
End Class
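One caveat on the insert above: concatenating values into the INSERT string will break on anything containing a quote. A parameterized sketch of the same statement against the same steve_test table (ODBC binds parameters by position, so the names are just labels):

        sqlCmd = New OdbcCommand("INSERT INTO steve_test(ShipTo, YearValue) VALUES(?, ?)", mySQLConn)
        sqlCmd.Parameters.AddWithValue("ShipTo", Row.ShipTo)
        sqlCmd.Parameters.AddWithValue("YearValue", Row.YearValue)
        sqlCmd.ExecuteNonQuery()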

Run it and you are done! Easy. Now you can write data into MySQL using SQL 2005 SSIS (or to any other ODBC destination, if you can get it to work 🙂)

Categories
Business Intelligence

Excel 2003-2007 Assistant. NO EXCUSES to not upgrade!

Last week, the Business Analyst at work sent me a link: the Office 2003/2007 Assistant.

What the link will show you is the differences in commands between Excel 2003 and 2007, so users can learn how to do things with the Ribbon.

Excel 2007 really should be used when hitting SQL 2005+ OLAP cubes, but companies are reluctant to upgrade because of the "jolt" of learning the Ribbon. Not anymore: with that assistant you can find out how to do anything you could do in 2003 (not just with PivotTables – with anything).

What this means is that there are NO MORE EXCUSES not to upgrade to 2007. Hey, with Office 2010 around the corner – March 2010 – you had better get ready for it, and there's no time like now to upgrade from 2003!

Categories
Business Intelligence Geeky/Programming SQLServerPedia Syndication

T-SQL: Using CROSS APPLY to Turn 2 Queries Into 1

Parent/Child. Order/Line. Header/Detail. Report/Sub-Report. We have all run across these database designs and have had to write queries against them. You usually have the parent id and need to get the children, or you have a list of parents and need to loop through them, looking up the child records and doing something with them. But what if you just want a list of the child ids (or names, or whatever)? Do you really need to loop through the parents and look up all the children (and possibly loop through those)? You can do JOINs and get the data in a tabular format, but how do you roll up those child records?

Using the AdventureWorks DB in SQL 2005, an example using Manager/Employee:

SELECT DISTINCT mgr.ManagerId, e.EmployeeId
	FROM HumanResources.Employee mgr
	INNER JOIN HumanResources.Employee e ON mgr.ManagerId = e.ManagerId

Results:

[screenshot: result rows – one ManagerId/EmployeeId pair per row]

But really we want to roll up those employees, ending up with one manager/employees record, e.g.: 3, [4,9,11,158,263,267,270] … for this, try CROSS APPLY:

SELECT DISTINCT ManagerId, Employees = LEFT(emp.list, LEN(emp.list)-1)
	FROM HumanResources.Employee mgr
	CROSS APPLY
	(
	SELECT CONVERT(VARCHAR(4),EmployeeId) + ',' AS [text()]
		FROM HumanResources.Employee e
		WHERE mgr.ManagerId = e.ManagerId

		ORDER BY EmployeeID
		FOR XML PATH('')
	) emp (list)
WHERE mgr.ManagerId IS NOT NULL

Results:

[screenshot: result rows – one row per ManagerId with its comma-delimited employee list]

As you can see from the results, we rolled up our employees into one record per manager, as a comma-delimited list. Think of the possibilities of using CROSS APPLY in your apps or stored procs/reports to reduce the number of queries you have to write, or the number of trips to the database you have to make. Happy T-SQL'ing 🙂

Categories
Business Intelligence Geeky/Programming SQLServerPedia Syndication

Using CTE’s to Create Dynamic Pivot Tables in SQL 2005/2008

One of the cool new commands introduced in SQL 2005 was PIVOT (along with UNPIVOT). One thing that has always irked me about PIVOT is that you need to hard-code the column names you want to pivot on. That is usually fine for a static report – say you want to pivot on Apples, Oranges, Bananas. Great. But what if you want to pivot on month/year – "Apr 2009", "May 2009", etc.? Still OK if you hardcode your month/years and they never change, but in some cases you need to keep adding the current month/year combo to the query.

You can go the old-fashioned route and build a dynamic SQL statement and EXEC (@dynamicSql), or you can figure out a way to make your PIVOT itself dynamic.
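If you do go the dynamic SQL route, a rough sketch looks like this (table and column names borrowed from the query further down; treat it as an outline, not a drop-in):

DECLARE @cols NVARCHAR(MAX), @sql NVARCHAR(MAX);

-- build the bracketed, comma-delimited column list from the current date range
SELECT @cols = COALESCE(@cols + ',', '') + QUOTENAME(CalendarMonthYearName)
FROM (SELECT DISTINCT CalendarMonthYearName
      FROM dbo.DimDate
      WHERE Date > '02/01/2009' AND Date < '07/01/2009') AS my;

SET @sql = N'SELECT ItemNumber, ' + @cols + N'
FROM (SELECT d.CalendarMonthYearName, s.ItemNumber, SUM(s.Quantity) AS Quantity
      FROM DataHistory s
      INNER JOIN dbo.DimDate d ON s.DateKey = d.DateKey
      GROUP BY d.CalendarMonthYearName, s.ItemNumber) AS src
PIVOT (SUM(Quantity) FOR CalendarMonthYearName IN (' + @cols + N')) AS p;';

EXEC (@sql);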

What I did here instead was create a dataset, add a numeric ordering, and pivot on that number rather than the actual value "Month Name Year" (e.g. "May 2009"). That way the pivot columns are always the same – six months' worth, or however many you like – and you can pass in parameters for the date filters in the top query to get the correct range.

-- get your distinct month/year vals into a temp table
-- need temp table here because CTE doesnt like a subquery and row_number
SELECT DISTINCT CalendarMonthYearName, MonthOfYear,CalendarYear
      INTO #tmp_monthyears
      FROM dbo.DimDate
      WHERE Date > '02/01/2009' AND Date < '07/01/2009'
      ORDER BY CalendarYear DESC,MonthOfYear DESC;

-- create a CTE that adds a "Row Number" which is number 1-6
WITH CalMonthYears AS
(
-- add your number
SELECT CalendarMonthYearName AS 'CalendarMonthYearName',
      MonthOfYear,
      CalendarYear,
      ROW_NUMBER() OVER (ORDER BY CalendarYear DESC, MonthOfYear DESC) AS 'YearMonthNum'
      FROM  #tmp_monthyears

)
SELECT ItemNumber,
      [1],[2],[3],[4],[5],[6]
FROM
(
-- your query to get your data, pivot this data
SELECT YearMonthNum,s.ItemNumber,
      SUM(Quantity) AS 'Quantity'
      FROM DataHistory s
      INNER JOIN dbo.DimDate d ON s.DateKey = d.DateKey
      INNER JOIN CalMonthYears cmy ON d.CalendarMonthYear = cmy.CalendarMonthYearName
GROUP BY YearMonthNum,s.ItemNumber
) AS SourceTable
PIVOT
(
SUM(Quantity)
FOR YearMonthNum IN (
      [1],[2],[3],[4],[5],[6]
      )
) AS PivotTable;

DROP TABLE #tmp_monthyears

The sky is the limit with this, you can modify this or think of other ways to use this logic to make your PIVOT commands dynamic, so you don’t have to keep editing stored procedures every time your pivot columns change 🙂

Categories
Business Intelligence Geeky/Programming

ADOMD.NET: Could not load file or assembly Microsoft.AnalysisServices.AdomdClient

If you are developing a .NET/ASP.net solution using ADOMD.NET locally, and everything works great, and you go to release to a production server and end up with this error:

Could not load file or assembly ‘Microsoft.AnalysisServices.AdomdClient, Version=9.0.242.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91’ or one of its dependencies. The system cannot find the file specified.

You need to install the “Feature Pack for Microsoft SQL Server 2005” which you can find here: http://www.microsoft.com/downloads/details.aspx?FamilyID=50b97994-8453-4998-8226-fa42ec403d17&DisplayLang=en

Categories
Business Intelligence Geeky/Programming SQLServerPedia Syndication

Microsoft BI: Creating Local OLAP Cubes using XMLA and ascmd.exe

Most people, when using OLAP cubes, are hitting the live version that is located on SQL Server Analysis Services (SSAS). They hit it with Excel, or some other reporting tool, etc. I have blogged previously about offline cubes and how you could possibly use them.

In that previous blog I talked about creating them with Excel, and that works great for end users. But what about automating the process? What about filtering dimension members, or measure groups, etc.? Now that you can use the Panorama gadget for Google Apps/iGoogle (http://google-pivot-tables.blogspot.com/2008/04/panorama-analytics-gadget-for-google.html), you can upload .cub files and do BI in the cloud – how cool is that!


Well, one option is to purchase CubeSlice – http://www.localcubetask.com/ – and use that to create your .cub files. CubeSlice works great and is a good option if you want something with a ton of options and ease of use.

You can also create .cub's using the CREATE GLOBAL CUBE syntax in MDX, and you can also use XMLA to create a .cub. Options galore! Chris Webb blogged a few years back about using XMLA to create .cub's here – http://cwebbbi.spaces.live.com/blog/cns!7B84B0F2C239489A!877.entry – he was using the MDX Sample App (I believe that comes with SQL 2000?).

What I was looking for was an easy way to use CREATE GLOBAL CUBE or XMLA and automate it. CubeSlice is nice, but there has to be a way to replicate (at least in a basic way) what they are doing. Thinking about XMLA – to me it seems more powerful – that was the route I chose. I didn't want to install the MDX Sample App from SQL2K, and from what Chris Webb says in his blog (and other things I read), the problem with using XMLA to create your .cub in SQL Server Management Studio is that you can't specify a connection string; you have to connect to an SSAS instance. Using the MDX Sample App, you can specify a local file, then run the XMLA and it will create the .cub file. So I just needed to replicate that.

I have also blogged about using ascmd.exe here. ascmd comes with the SSAS samples in SQL 2005 (usually located here: C:\Program Files\Microsoft SQL Server\90\Samples\Analysis Services\Administrator\ascmd). You just need to compile it and you can use it to execute XMLA. So I decided to try that. I created an XMLA file to create my cube, and then executed it with ascmd.exe:

ascmd -S c:\MyOfflineCube.cub -i c:\MyOfflineCube.xmla

In seconds, you have an offline cube. Now, in your XMLA you can have filters, etc. You could create an XMLA file with some kind of variable in it, use a replace task to swap in whatever you want filtered – think location, or employee, etc. – and then kick out personal cubes for your users.

One thing I didn’t really get into is how to get your XMLA script. You could figure it out for yourself, but I actually just used CubeSlice for this example. You can create an offline cube with CubeSlice and actually see the XMLA code and use that. Maybe creating the XMLA script manually/yourself would be good info for another blog post 🙂

Categories
Business Intelligence Geeky/Programming SQLServerPedia Syndication

SQL 2005, SSAS 2005: Using ascmd.exe To Create SQLAgent Jobs That Give You Completion Status

In SQL2005, you can create SQL Agent jobs. They can be scheduled, have multiple steps, alert, notify, etc. Pretty great setup. There are some downfalls though.

For instance, in order to call a job from another job, you need to execute the second job with T-SQL. Thing is, it doesn't run synchronously; it runs asynchronously, which really stinks if you want to wait and see whether the second job completed successfully.
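That T-SQL is msdb's sp_start_job, and it returns as soon as the job has been started, not when it finishes (hypothetical job name here):

EXEC msdb.dbo.sp_start_job @job_name = N'Nightly Cube Process';
-- success here only means "started", not "completed"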

Another thing: if you call an XMLA script from a SQL Agent job and the XMLA query or command fails, the SQL Agent job still reports success – that's no good! What can you do? Use ascmd.exe!

ascmd.exe is a utility you have to build yourself: download the SQL samples from CodePlex (http://www.codeplex.com/SqlServerSamples/Release/ProjectReleases.aspx?ReleaseId=4000) and build the ascmd solution to get ascmd.exe. A few notes: the samples are VS2005, which I don't have installed, so I opened the solution with VS2008; the assembly is also digitally signed, and the build couldn't find the .snk file to sign it, so I turned signing off. Once I had it built, I did some testing locally and made sure it worked the way I wanted.

You can test by just calling it from the cmd line, for example:

ascmd.exe -S SSASServerName -d SSASDatabaseName -i MyXMLA.xmla

You can use -Q and try to pass the XMLA in inline, but then you have to escape all the special characters, which is a pain. Now, if you just put this on your C: drive (or wherever) and create a SQL Agent job to call this command, the job will fail when ascmd.exe reports back failure. Exactly what we want!!

You will notice in the SQL Agent job step setup that you can specify the process exit code of a successful command – 0 is the default, and that is what we want.

Now, get some XMLA that you know works, set it up in the .xmla file, and test it: the job should succeed. Then change something in the XMLA (like a CubeID) to a fake value and test again: the job should fail, and you can have it alert your DBAs or whoever.

Pretty sweet, but I wish SQL Agent would handle failed XMLA like a failed query and report failure. I am not sure whether SQL 2008 does that or not, but it would make life a lot easier. Otherwise you could be scratching your head trying to find where stuff is failing, looking in logs, etc., and not seeing any failures; the only way you'd be able to tell is to run the XMLA manually, ugh 🙂