Categories
SQLServerPedia Syndication

SQL DBA: Starting Fresh, What do you do?

If you start at a new place as a SQL Server Database Administrator (DBA), what is one of the first things you should do? In my opinion, after figuring out the key servers and instances you need to support… it is setting up alerts.

By setting up alerts you can start to get an idea of what is not working and focus first on the things that are failing. All the while you can still check on backups and get everything else set up and working, but if you don’t have alerts, well, you are flying blind.

Alerts should tell you…

1) When a physical server is down (network)
2) When backups fail
3) When jobs fail
4) When logins fail
5) I/O issues
6) The “critical” error severity levels (14, 15, 16, 17)
7) Crazy CPU and memory issues
8) Services going up and down
9) If your SAN is up/down
10) Hard drives getting close to 100% full

And that is just the beginning. What other alerts should DBAs set up *right away* to make sure they are on top of things?
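To give an idea of the flavor, here is a minimal sketch of setting up one severity-based alert through SQL Agent. It assumes Database Mail is already configured; the operator name and email address are made up.

USE msdb;
GO

-- Hypothetical operator that will receive the notifications
EXEC dbo.sp_add_operator
    @name = N'DBA Team',
    @enabled = 1,
    @email_address = N'dba-team@example.com';

-- One alert per severity level you care about (17 shown here)
EXEC dbo.sp_add_alert
    @name = N'Severity 17 - Insufficient Resources',
    @severity = 17,
    @enabled = 1,
    @delay_between_responses = 300,
    @include_event_description_in = 1;

-- Wire the alert to the operator (1 = email)
EXEC dbo.sp_add_notification
    @alert_name = N'Severity 17 - Insufficient Resources',
    @operator_name = N'DBA Team',
    @notification_method = 1;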


Categories
Business Intelligence Geeky/Programming SQLServerPedia Syndication

Exporting Data from SQL Server to Excel Directly

Had a request to export data from SQL Server to Excel directly. There are a few ways you can do this: BCP, OPENROWSET, SSIS, etc. Here is the OPENROWSET method, using Jet (I think this only works on 32-bit servers as well).

First you need to enable a setting in SQL Server: go to Start->Programs->Microsoft SQL Server 2005->Configuration Tools->SQL Server Surface Area Configuration, then click on “Surface Area Configuration for Features” at the bottom.

Under Database Engine, make sure “Enable OPENROWSET and OPENDATASOURCE support” is checked under the Ad Hoc Remote Queries option.

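If you would rather not click through the GUI, here is a quick sketch of flipping the same setting with sp_configure (“Ad Hoc Distributed Queries” is the server-wide option behind that checkbox):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Ad Hoc Distributed Queries', 1;
RECONFIGURE;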

Then you need to create an empty Excel workbook with the columns of the query you want to export. Create one at C:\Testing.xls with “Name” and “Rating” columns.

Then run this query:

INSERT INTO OPENROWSET('Microsoft.Jet.OLEDB.4.0',
'Excel 8.0;Database=C:\Testing.xls;',
'SELECT Name, Rating FROM [Sheet1$]')
SELECT 'Steve','1'
UNION
SELECT 'Joel','2'
GO

And if everything worked, your xls should have 2 rows in it.

 

Now, if you want to output to other spreadsheets, you could dynamically set the filename in the statement, or get into other trickery!
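OPENROWSET will not take a variable for the file name directly, so the dynamic route means building the whole statement as a string and executing it. A rough sketch, with a made-up export path and source table:

DECLARE @FileName VARCHAR(255)
DECLARE @Sql NVARCHAR(MAX)

-- e.g. C:\Exports\Ratings_20100601.xls (the workbook still has to exist with the right columns)
SET @FileName = 'C:\Exports\Ratings_' + CONVERT(VARCHAR(8), GETDATE(), 112) + '.xls'

SET @Sql = N'
INSERT INTO OPENROWSET(''Microsoft.Jet.OLEDB.4.0'',
    ''Excel 8.0;Database=' + @FileName + N';'',
    ''SELECT Name, Rating FROM [Sheet1$]'')
SELECT Name, Rating FROM dbo.YourSourceTable'  -- hypothetical source table

EXEC sp_executesql @Sql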

Categories
Business Intelligence SQLServerPedia Syndication

Where does PowerPivot Fit?

Now that SQL Server 2008 R2 and Excel 2010 are out, you can get PowerPivot (http://powerpivot.com/) and create your own in-memory cubes!

…. Or something like that.

I still haven’t figured out where PowerPivot fits in a business scenario. Why?

Well, first you have what are now being called the “old school” BI users, who use Excel to connect to an SSAS cube and create fancy pivots and reports, maybe convert to formulas and build some nice reports/dashboards… analytics.

Then you have people who only consume canned/standardized reports, through SSRS mostly, or maybe Excel Web Services… but they don’t create. Just consume.

You might even have power users, who take Report Builder and create those SSRS reports for other users. Awesome.

But then, you have this tool, PowerPivot. What can you do? Hit databases (mostly… cubes and other sources as well), bring back data, relate it, and create pivot reports/graphs off of it.

But you better be pretty dang advanced as a business user to use PowerPivot. I could count on one hand the users (that I have dealt with over the last 10 years) I would feel comfortable giving it to and not ending up with more of a headache.

What do I think is still missing from the Microsoft BI toolset? Look at Business Objects: Web Intelligence and Universes. You create a Universe off of a data source and expose it out to the users, and they can create reports/ad-hoc whatever off of it.

Kind of like the ever elusive “Report Model” in the Microsoft stack that no one ever uses, ever will use, or has no reason to use. But in BO, they make it useful.

I don’t see PowerPivot taking the place of a Report Model/Universe, so where does it fit? IT Analysts making “pre” cubes before you actually make cubes for your users that just want to hit it with Excel and not care about anything else?

Or is it for people who just want to create their own cubes in silos? Tell me how that lends itself to “one version of the truth”.

Either way, we will see how it evolves and hopefully find some good use for it. 🙂

Categories
Business Intelligence Geeky/Programming SQLServerPedia Syndication

SQL Schema Source Control (CodePlex)

Source Control. In my eyes, one of the best inventions of development.

Software developers have used it for years; it allows them to easily develop in a team environment and be less scared of losing a change or not being able to see what they did historically.

I have blogged a few times about source control before..

SQL Server Schema Automatic Revision History using DDL Triggers and SVN
MSFT BI In a Team Environment
Visual Source Safe Sucks
Source Control at Home with Subversion

But that was mostly for code, for developers. The SQL community has kind of been shafted with source control. Yeah, you can tie VSS into Management Studio, and there are others, but I have never found one that works well, and I just wanted something that works in the background.

I originally started doing this with DDL triggers, as my post in November outlined. It worked, but it wasn’t reliable enough for what I wanted and was too much setup.

So I did what developers usually do, I wrote an app. SQL Schema Source Control http://sqlschemasourcectrl.codeplex.com/

At first it worked with one database on one server, and everything was hardcoded. Then it progressed, and now it is all configurable for multiple servers/multiple databases, etc. I decided to put it up on CodePlex because I think it can be improved and made to work with other source control providers, like TFS.

The code itself isn’t anything crazy: some file operations, SMO operations to get the DDL, and then some functions to add/update/delete files and commit to source control.

For info on how to get it working, check out the documentation page on CodePlex. I can also answer any questions here or on CodePlex.

I have been using the app for a few months now and it has saved headaches: accidental deletes, wanting to see changes developers made over time, etc.

The feature I like the most is that it logs the SQL Agent job changes, so if someone changes a job, you can see the history…

Now, there are competitors out there, like Red Gate (http://www.red-gate.com/products/solutions_for_sql/database_version_control.htm) and others. But I wanted something free and open, so that is why I am putting this out there. I would be perfectly fine not putting anything out and just using it, but I think (and hope) others could benefit from using this app.

So if you are looking for a semi-easy way to get revision control on your SQL schemas and SQL Agent jobs, check out the app. There is minimal setup, but once you have it working, it just runs.

Note that at this time it works with SVN and SQL 2008, and the solution is VS2010. I originally had it working with SQL 2005 but no longer need that on my end; someone could easily make a version for SQL 2005, and a VS2008 solution could be created pretty easily as well. Have fun!

Categories
Business Intelligence Geeky/Programming SQLServerPedia Syndication

SQL Server 2008 – Intellisense – Update Local Cache

Since we recently upgraded to SQL 2008, I now have some small tidbits that I can share! The first one is this: IntelliSense updating.

SQL 2008 has built-in IntelliSense, which is pretty awesome. Until you add new objects, that is, and then everything gets a red underline. What to do?

Well, if you are writing some T-SQL in SQL Server Management Studio (SSMS) and run into this issue with newly created objects, just use this keyboard shortcut:

CTRL+SHIFT+R

and you are all set: the local IntelliSense cache will be refreshed and your new tables/procs/whatever will show up in IntelliSense. Cool!

Categories
Business Intelligence SQLServerPedia Syndication

SSRS DataSet Filter – "Like" Operand

Figured out today that in SSRS, if you add a filter to your dataset and you want to use “Like”, the wildcard is * instead of the % you might be used to in SQL. Sometimes it is just the little things 🙂
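For example, a dataset filter on a (hypothetical) product name field would look something like:

Expression: =Fields!ProductName.Value
Operator:   Like
Value:      *bike*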

Categories
Business Intelligence Geeky/Programming SQLServerPedia Syndication

Dev and Prod Systems: Using a HOSTS file to ensure your data source is pointing at the right system

In many shops, I am guessing there are multiple servers: Development, Production, maybe Staging, etc.

With SQL Server Analysis Services (SSAS) and SQL Server Integration Services (SSIS), you set up data sources, i.e., connections to databases. In SSAS you usually have a connection that you build your data source view off of, and in SSIS you have connections that you push data to and pull data from.

Another thing: in SSAS you can “deploy” right from Visual Studio (BIDS). All these things have a server name. What we have run into is this:

You develop on your local machine, pointing at the development server. You deploy to development, your connections are pointing to development, and everything works great. Then, when you deploy to production (usually planned, every 2 weeks, or whatever), what ends up happening?

In SSIS, your config files should have a connection string (or however you store it) that points to production. But in SSAS, if you deploy from BIDS, your data source has to change, and in the cube project properties you need to change your deployment server.

I have seen it countless times: a cube, or a connection in an SSIS package without a config, running in production yet pointing at development. We keep our dev data fresh or very close to it, so sometimes we don’t even notice. But then it happens: something weird is reported, we dig into it, and we find the erroneous connection string.

Here is my solution to the problem:

Developers: go to C:\Windows\System32\Drivers\Etc and open your hosts file with Notepad or a text editor. You then add a couple of entries:

#production
#xxx.xxx.xxx.xxx datawarehouse

#development
yyy.yyy.yyy.yyy datawarehouse

where xxx.xxx.xxx.xxx is the IP of your production system and yyy.yyy.yyy.yyy is the IP of your dev system. The # is the comment symbol. You can see above that I have everything commented out except the line for the dev system. But notice each line points to “datawarehouse”, so if I ping or connect to “datawarehouse” from Management Studio, or whatever, it goes to whichever IP is not commented out.

Now, go on to each server, but only add the line that corresponds to that server in its hosts file, or better yet just use

127.0.0.1 datawarehouse

Now, when you deploy to either server and your connections, etc. are set to connect to “datawarehouse”, you ensure they will always connect to the local server. Brilliant!
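For example, every SSIS/SSAS connection (and config file) can then carry the exact same connection string in every environment, and the hosts file decides which box “datawarehouse” actually resolves to (the provider and database name below are just examples):

Provider=SQLNCLI10.1;Data Source=datawarehouse;Initial Catalog=DataWarehouseDB;Integrated Security=SSPI;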

Categories
Business Intelligence Geeky/Programming SQLServerPedia Syndication

SQL Server Master Data Services (Nov CTP)

Last night I configured SQL Server Master Data Services (MDS) on a test box. It looks good so far. I ran into a few issues with the box/setup that I had to tweak in order to get it working: I had to allow “handlers” and “modules” in the applicationHost config on the machine, and IIS was also inadvertently set up with only anonymous access, which was an issue. After I got Windows auth installed and turned on, everything seemed to work.

The app/system itself is pretty slick. Very basic, but it lets you do complex things. Once you get some users set up and a few models (think: Product, Customer), you can add entities (think: Category1, Category2, etc.), and you can set up hierarchies, business rules, etc.

I haven’t played much more with it, but it seems like it could get the job done. I would say some things aren’t intuitive enough. For example, they could say “drag this over to this area”, but there is no guidance on what to do; it’s kind of guess and check.

I’m excited to see where MDS goes.


Categories
Business Intelligence Geeky/Programming SQLServerPedia Syndication

SQL Server Schema Automatic Revision History using DDL Triggers and SVN

SQL 2005 introduced DDL triggers, which let you catch events on your DDL statements in SQL, and hopefully most DBAs are catching them, at least to a table, and have some kind of report on who is adding, changing, and deleting which objects.

What I wanted to do was capture that, but also keep an automatic running log in SVN (Subversion) source control. Here is how I went about it.

First, you need SVN set up somewhere. We use Unfuddle (http://unfuddle.com/), which we also use for Agile BI stuff (but that is another post 🙂), and Unfuddle lets us do SVN in the cloud, but you could do it locally or internally or however you’d like. I had to create a user for this, to do the automatic commits, and give it rights to commit to the repo.

Second, you probably already want your DDL triggers set up to capture events on your databases and write them to a table; writing to a central DB on the server is probably a good idea. If you have that, or something similar, you then create a trigger on that table to capture the record that was just inserted, parse out what you need, and automatically commit to SVN. A rough sketch of the logging piece is below if you need a starting point.
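Here is a minimal sketch of that logging table and a database-level DDL trigger that dumps EVENTDATA() into it. The table name and the EventInstance XML column match what the trigger later in this post expects; the rest of the layout is just one way to do it.

-- Central logging table; the revision-history trigger below reads the EventInstance XML column
CREATE TABLE dbo.DDLEventLog
(
    DDLEventLogId INT IDENTITY(1,1) PRIMARY KEY,
    EventDate     DATETIME NOT NULL DEFAULT (GETDATE()),
    EventInstance XML NOT NULL
)
GO

-- Database-level DDL trigger that captures the raw event XML for every DDL statement
CREATE TRIGGER CaptureDDLEvents
ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.DDLEventLog (EventInstance)
    VALUES (EVENTDATA());
END
GO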

But first, you want to script out all your objects for a database to a folder, say C:\DBSchemas

I created a folder structure to help me

DBSchemas\ServerName\DatabaseName\ObjectType

and then scripted out all the objects to each folder for ObjectType (Tables, StoredProcedures, Functions, and Views)

Once that was done, I did an initial import into SVN and made the folder I was working with a working copy. Then the fun starts.

I created a couple of procs (which I found online; links to the blogs are below) to CREATE and DELETE files from T-SQL using OLE Automation.

CREATE/APPEND (http://sqlsolace.blogspot.com/2009/01/ole-automation-write-text-file-from.html)

CREATE PROCEDURE [dbo].[usp_OLEWriteFile] (@FileName varchar(1000), @TextData NVARCHAR(MAX),@FileAction VARCHAR(12)) AS

BEGIN
DECLARE @OLEfilesytemobject INT
DECLARE @OLEResult INT
DECLARE @FileID INT

EXECUTE @OLEResult =
 sp_OACreate 'Scripting.FileSystemObject', @OLEfilesytemobject OUT
IF @OLEResult <> 0
  PRINT 'Error: Scripting.FileSystemObject'

-- check if file exists
EXEC sp_OAMethod @OLEfilesytemobject, 'FileExists', @OLEresult OUT, @FileName
-- if file exists
IF (@OLEresult=1 AND @FileAction = 'APPEND') OR (@OLEresult=0)
BEGIN

IF (@FileAction = 'CREATENEW')
 PRINT 'New file specified, creating...'
IF (@OLEresult=1 AND @FileAction = 'APPEND')
 PRINT 'File exists, appending...'
IF (@OLEresult=0 AND @FileAction = 'APPEND')
 PRINT 'File doesnt exist, creating...'

 -- open file
 EXECUTE @OLEResult = sp_OAMethod @OLEfilesytemobject, 'OpenTextFile', @FileID OUT,
 @FileName, 8, 1
 IF @OLEResult <> 0 PRINT 'Error: OpenTextFile'

 -- write Text1 to the file
 EXECUTE @OLEResult = sp_OAMethod @FileID, 'WriteLine', Null, @TextData
 IF @OLEResult <> 0
  PRINT 'Error : WriteLine'
 ELSE
  PRINT 'Success'
END
IF (@OLEresult=1 AND @FileAction = 'CREATENEW')
 PRINT 'File Exists, specify APPEND if this is the desired action'

EXECUTE @OLEResult = sp_OADestroy @FileID
EXECUTE @OLEResult = sp_OADestroy @OLEfilesytemobject

END
GO

DELETE (http://www.kodyaz.com/articles/delete-file-from-sql-server-xp-cmdshell-ole-automation-procedures.aspx)

CREATE PROCEDURE [dbo].[usp_OLEDeleteFile] (@FileName VARCHAR(1000)) AS
BEGIN
DECLARE @Result int
DECLARE @FSO_Token int

EXEC @Result = sp_OACreate 'Scripting.FileSystemObject', @FSO_Token OUTPUT
EXEC @Result = sp_OAMethod @FSO_Token, 'DeleteFile', NULL, @FileName
EXEC @Result = sp_OADestroy @FSO_Token
END
GO

You need to make sure OLE Automation is on (and xp_cmdshell as well, since the trigger below shells out to svn). You also need to make sure that the account you are running SQL Server as has modify rights to your DBSchemas folder.
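If either of those is turned off, here is a quick sketch of enabling them with sp_configure (both are server-wide, security-sensitive settings, so clear it with whoever owns the server first):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Ole Automation Procedures', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;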

But the crux of the solution is the trigger that gets the DDL info, writes/deletes the files, and does the svn add/delete/commit. Now, this is some ugly one-hour SQL script craziness, tons of IF statements, etc. It could be improved, but it works, it is a start, and it can be modified and tweaked to do whatever you want. Note: if your SVN repo isn’t authenticated, you don’t need the username/password on the svn commands.

You can see it gets the DDL, checks the events (I have it limited to one database), and checks what type of object and what operation it was: for a create, it adds the file and commits; for an alter, it deletes the file, recreates it, and commits; and for a drop, it does an svn delete and commit. Pretty easy 🙂




CREATE TRIGGER DDLRevisionHistory
	ON dbo.DDLEventLog
	AFTER INSERT
AS

BEGIN
SET NOCOUNT ON;

DECLARE @EventType VARCHAR(50)
DECLARE @DatabaseName VARCHAR(50)
DECLARE @ServerName VARCHAR(50)
DECLARE @ObjectName VARCHAR(100)
DECLARE @SchemaName VARCHAR(10)
DECLARE @CommandText VARCHAR(MAX)
DECLARE @LoginName VARCHAR(50)

SELECT
	@EventType = EventInstance.value('(//EventType)[1]', 'varchar(50)'),
	@DatabaseName = EventInstance.value('(//DatabaseName)[1]', 'varchar(50)'),
	@ServerName = EventInstance.value('(//ServerName)[1]', 'varchar(50)'),
	@ObjectName = EventInstance.value('(//ObjectName)[1]', 'varchar(50)'),
	@SchemaName =EventInstance.value('(//SchemaName)[1]', 'varchar(50)'),
	@CommandText = EventInstance.value('(//TSQLCommand//CommandText)[1]', 'varchar(max)'),
	@LoginName = EventInstance.value('(//LoginName)[1]', 'varchar(50)')
	FROM inserted

DECLARE @filepath VARCHAR(8000)

SET @filepath = 'C:\DBSchemas\' + @ServerName + '\' + @DatabaseName + '\'

	IF (
		@EventType = 'CREATE_VIEW' OR @EventType = 'ALTER_VIEW' OR @EventType = 'DROP_VIEW'
		OR @EventType = 'CREATE_TABLE' OR @EventType = 'ALTER_TABLE' OR @EventType = 'DROP_TABLE'
		OR @EventType = 'CREATE_PROCEDURE' OR @EventType = 'ALTER_PROCEDURE' OR @EventType = 'DROP_PROCEDURE'
		OR @EventType = 'CREATE_FUNCTION' OR @EventType = 'ALTER_FUNCTION' OR @EventType = 'DROP_FUNCTION'
		)

		AND @DatabaseName = 'YourDatabase' BEGIN


		-- write out new file to correct folder
		IF CHARINDEX('VIEW',@EventType) > 0 BEGIN
			SET @filepath = @filepath + 'Views\' + @SchemaName + '.' + @ObjectName + '.View.sql'
		END

		IF CHARINDEX('TABLE',@EventType) > 0 BEGIN
			SET @filepath = @filepath + 'Tables\' + @SchemaName + '.' + @ObjectName + '.Table.sql'
		END

		IF CHARINDEX('PROCEDURE',@EventType) > 0 BEGIN
			SET @filepath = @filepath + 'StoredProcedures\' + @SchemaName + '.' + @ObjectName + '.StoredProcedure.sql'
		END

		IF CHARINDEX('FUNCTION',@EventType) > 0 BEGIN
			SET @filepath = @filepath + 'Functions\' + @SchemaName + '.' + @ObjectName + '.UserDefinedFunction.sql'
		END

		IF CHARINDEX('CREATE',@EventType) > 0 BEGIN

			-- create file
			EXEC dbo.usp_OLEWriteFile @filepath,@CommandText,'CREATENEW'

			-- svn add
			DECLARE @instrAdd VARCHAR(4000)
			SET @instrAdd='svn add ' + @filepath + ' --username dbschema --password yourpassword'
			EXEC xp_cmdshell @instrAdd

			-- svn commit
			DECLARE @instrCommitAdd VARCHAR(4000)
			SET @instrCommitAdd='svn commit ' + @filepath + ' --message "added by '+ @LoginName +'" --username dbschema --password yourpassword'
			EXEC xp_cmdshell @instrCommitAdd

		END

		IF CHARINDEX('ALTER',@EventType) > 0 BEGIN

			--delete and readd file
			EXEC dbo.usp_OLEDeleteFile @filepath
			EXEC dbo.usp_OLEWriteFile @filepath,@CommandText,'CREATENEW'

			-- svn commit
			DECLARE @instrCommitChange VARCHAR(4000)
			SET @instrCommitChange='svn commit ' + @filepath + ' --message "changed by '+ @LoginName + '" --username dbschema --password yourpassword'
			--PRINT @instrCommitChange
			EXEC xp_cmdshell @instrCommitChange
		END

		IF CHARINDEX('DROP',@EventType) > 0 BEGIN
			-- svn delete
			DECLARE @instrDel VARCHAR(4000)
			SET @instrDel='svn delete ' + @filepath + ' --username dbschema --password yourpassword'
			EXEC xp_cmdshell @instrDel

			-- svn commit
			DECLARE @instrCommitDel VARCHAR(4000)
			SET @instrCommitDel='svn commit ' + @filepath + ' --message "deleted by '+ @LoginName +'" --username dbschema --password yourpassword'
			EXEC xp_cmdshell @instrCommitDel
		END

	END

END

As you can see, you can create a homegrown revision history of your DDL objects in SQL. I have tested this on the basic operations (no renames, etc., using the GUI), but if you do use it, you might want to wrap it all in exception handling just to be on the safe side.

Happy DBA’ing 🙂

Categories
Business Intelligence Geeky/Programming SQLServerPedia Syndication

SQL Server Agent – Query To Find Long Running Jobs

I use SQL Agent a ton. It is my go-to place to schedule jobs and tasks. The problem is, there are pieces “missing”. For example, if a job hangs, you can’t have it automatically stop after a given amount of time. It doesn’t alert on changes to a job, etc., etc.

I asked on Twitter what the DBAs think; @SQLRockstar thinks using OpsMgr is the way to go, which probably would be great, if I had it. I need a quick and dirty solution.

What I did was query the sysjobs, sysjobactivity, and sysjobhistory tables to get some kind of “look” into the running jobs. That way, if a job that normally takes 45 minutes has been running for 5 hours, I should be able to catch it early on; I am just trying to be more proactive instead of reactive.

SELECT
	j.job_id AS 'JobId',
	name AS 'JobName',
	start_execution_date AS 'StartTime',
	stop_execution_date AS 'StopTime',
	avgruntimeonsucceed,
	DATEDIFF(s,start_execution_date,GETDATE()) AS 'CurrentRunTime',
	CASE WHEN stop_execution_date IS NOT NULL THEN
			DATEDIFF(ss,start_execution_date,stop_execution_date) ELSE 0 END 'ActualRunTime',
	CASE
		WHEN stop_execution_date IS NULL THEN 'JobRunning'
		WHEN DATEDIFF(ss,start_execution_date,stop_execution_date)
			> (AvgRunTimeOnSucceed + AvgRunTimeOnSucceed * .05) THEN 'LongRunning-History'
		ELSE 'NormalRunning-History'
	END 'JobRun',
	CASE
		WHEN stop_execution_date IS NULL THEN
			CASE WHEN DATEDIFF(ss,start_execution_date,GETDATE())
						> (AvgRunTimeOnSucceed + AvgRunTimeOnSucceed * .05) THEN 'LongRunning-NOW'
			ELSE 'NormalRunning-NOW'
		END
		ELSE 'JobAlreadyDone'
	END AS 'JobRunning'
 FROM msdb.dbo.sysjobactivity ja
	INNER JOIN msdb.dbo.sysjobs j ON ja.job_id = j.job_id
INNER JOIN (
	SELECT job_id,
	AVG
	((run_duration/10000 * 3600) + ((run_duration%10000)/100*60) + (run_duration%10000)%100)
	+
	STDEV
	((run_duration/10000 * 3600) + ((run_duration%10000)/100*60) + (run_duration%10000)%100) AS 'AvgRuntimeOnSucceed'
	 FROM msdb.dbo.sysjobhistory
	WHERE step_id = 0 AND run_status = 1
	GROUP BY job_id) art
	ON j.job_id = art.job_id
WHERE
(stop_execution_date IS NULL) OR
	(DATEDIFF(ss,start_execution_date,stop_execution_date) > 60
	AND
	CAST(LEFT(start_execution_date,11) AS DATETIME) = CAST(LEFT(GETDATE(),11) AS DATETIME))
ORDER BY start_execution_date DESC

The inner query looks at the history to get the average runtime, and I added the standard deviation to that to make it more forgiving, in case you have a one-off that throws the average off. Of course, this all depends on you having more than 3 entries in your job history as well.

I then just compare the average to the current runtime (if the job is executing) and to the actual (if it is complete).

You could use this query in a report that you check throughout the day to catch long-running jobs, or you could use it to return just the ones that are currently running and over the time threshold and alert you, so you can catch them right away.
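For the alerting route, one option would be something like the sketch below. It assumes the query above has been saved as a view and that Database Mail is set up; the view name, mail profile, and recipient address are made up.

IF EXISTS (SELECT 1 FROM dbo.vwLongRunningJobs WHERE JobRunning = 'LongRunning-NOW')
BEGIN
    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = 'DBA',                   -- hypothetical Database Mail profile
        @recipients = 'dba-team@example.com',    -- hypothetical address
        @subject = 'SQL Agent: long running job detected',
        @body = 'At least one job has been running longer than its historical average plus one standard deviation.';
END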

I tested this on a few servers running SQL 2005, but I am guessing it will work on SQL 2008 as well. Find and kill those long-running jobs!!