Categories
Business Intelligence Geeky/Programming

Emergency Broadcast: Cursors in SQL are Bad.

Emergency Broadcast:

Last week, I stumbled across a “legacy” system. Small system: a few tables, a few stored procs, and a few web pages that let users manage it. The main table had about 40k rows in it, nothing huge. This system gets used sparingly.

Reports came in of the main page of the system taking 5+ minutes to load, or just timing out. Really? I dug through the code and, lo and behold: a cursor.

It looped through 500+ items, running 8 or so very similar queries in each iteration. The shape was roughly this:

DECLARE item_cursor CURSOR FOR
    SELECT /* some stuff */ ...

OPEN item_cursor
FETCH NEXT FROM item_cursor INTO /* variables */

WHILE @@FETCH_STATUS = 0
BEGIN
    SELECT /* stuff 1 */
    SELECT /* stuff 2 */
    -- ..
    FETCH NEXT FROM item_cursor INTO /* variables */
END

CLOSE item_cursor
DEALLOCATE item_cursor

I ended up rewriting it as a single set-based query that grabs all the data at once, with some CASE statements in the SELECT. The query went from 5+ minutes to less than 1 second.
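To give a rough idea of the shape of that rewrite, here is a minimal sketch; the table and column names (dbo.Item, dbo.ItemDetail, Status, Amount) are made up for illustration and are not from the actual system:

-- Illustrative only: one set-based query replaces the 8-or-so per-row queries
-- that used to run inside the cursor loop.
SELECT
    i.ItemId,
    i.ItemName,
    CASE WHEN i.Status = 1 THEN 'Active' ELSE 'Inactive' END AS StatusText,
    COUNT(d.DetailId) AS DetailCount,
    SUM(CASE WHEN d.Amount > 0 THEN d.Amount ELSE 0 END) AS PositiveAmount
FROM dbo.Item i
LEFT OUTER JOIN dbo.ItemDetail d ON d.ItemId = i.ItemId
GROUP BY i.ItemId, i.ItemName, i.Status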

Yes, CURSORs are bad. Avoid them. Rewrite your SQL. It amazes me that, with all the good info out there on the web about SQL, people still write CURSORs in their queries. I asked the DBA to look at all the procs on all our systems and find every CURSOR so we can root them out once and for all.
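If you want to do that hunt yourself, one quick (and admittedly blunt) way is to search the stored procedure text for the word CURSOR. This is just a sketch; it runs per database on SQL Server 2005 and up, and a plain text match will also flag the word when it only appears in comments:

-- Find stored procedures whose definition mentions CURSOR (crude text match)
SELECT SCHEMA_NAME(o.schema_id) AS SchemaName,
       o.name AS ProcName
FROM sys.sql_modules m
INNER JOIN sys.objects o ON o.object_id = m.object_id
WHERE o.type = 'P'
  AND m.definition LIKE '%CURSOR%'
ORDER BY SchemaName, ProcName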

End of Emergency Broadcast

Categories
Geeky/Programming SharePoint SQLServerPedia Syndication

SharePoint 2010: Track User Profile Changes Over Time Using MERGE

I recently blogged about a report you can write to see which users don’t have managers in SharePoint 2010 (get your Org Browser fixed!). Well, how about tracking who gets added, deleted, or edited every day? You could probably dig into the Forefront Identity Manager that SharePoint 2010 uses and do some logging, or use the API to log things, but that is... well, harder than this.

What I wanted to do was keep logs for debugging and reporting. Who is getting added by the SharePoint 2010 user profile sync? Is the last user added to Active Directory a valid user, or a system account? Now, you could query Active Directory directly (just like finding users without managers), but I am guessing most SharePoint admins don’t have AD access, so they need another way.

What I did was create another database on my database server where the SharePoint content and profile databases are stored, so I can use it as a dumping ground.

What you need to do is create a table to dump the profile table into, then every day do a SQL MERGE into it and save the changes. This is where it gets interesting.

First, create your table (for this example I am only tracking NTName and PreferredName, so “domain\username” and “Name, User”, but you could easily add other columns):

CREATE TABLE [dbo].[SharePointUserTracking](
	[RecordId] [bigint] NOT NULL,
	[NTName] [nvarchar](400) NULL,
	[PreferredName] [varchar](256) NULL,
 CONSTRAINT [PK_SharePointUserTracking] PRIMARY KEY CLUSTERED
(
	[RecordId] ASC
)WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
) ON [PRIMARY]

Once you have your table, you are ready to create your MERGE query, track the changes, and then get it scheduled. So what we want to do is MERGE our UserProfile_Full data from our ProfileDB into our new table, logging INSERTs, UPDATEs, and DELETEs in a separate tracking table. Let’s create our “tracking changes” table:


CREATE TABLE [dbo].[SharePointUserProfileDelta](
	[ExistingRecordId] [bigint] NULL,
	[ExistingNTName] [nvarchar](400) NULL,
	[ExistingPreferredName] [varchar](256) NULL,
	[ActionTaken] [nvarchar](10) NULL,
	[NewRecordId] [bigint] NULL,
	[NewNTName] [nvarchar](400) NULL,
	[NewPreferredName] [nvarchar](256) NULL,
	[LogDate] [datetime] NULL
) ON [PRIMARY]

GO

Notice the column “ActionTaken”, which tracks what we did: INSERT, UPDATE, or DELETE.

Now, for the super merge (note: your profile DB might not be named the same, and also note the collation handling... you may or may not need that in your environment):

  MERGE dbo.SharePointUserTracking AS target
    USING
            (SELECT RecordId,NTName,PreferredName FROM SP2010_ProfileDB.dbo.UserProfile_Full) AS source
            (RecordId,NTName,PreferredName) ON (target.RecordId = source.RecordId)
    WHEN MATCHED AND target.NTName <> source.NTName COLLATE SQL_Latin1_General_CP1_CI_AS OR target.PreferredName <> source.PreferredName COLLATE SQL_Latin1_General_CP1_CI_AS THEN
        UPDATE SET
                  NTName = source.NTName,
                  PreferredName = source.PreferredName
      WHEN NOT MATCHED THEN
          INSERT (RecordId,NTName,PreferredName)
          VALUES (source.RecordId,source.NTName, source.PreferredName)
      WHEN NOT MATCHED BY SOURCE THEN
            DELETE
      OUTPUT deleted.*, $action, inserted.*,GETDATE() INTO dbo.SharePointUserProfileDelta;

Now, you can schedule that to run after your daily user profile sync job in SharePoint, and then you have a running log of changes, which you could easily build an SSRS report on or do whatever you want with. Pretty cool! Just know, you can use this method to track changes to any table... slowly changing dimensions and more.
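As a quick example of what you can pull back out afterwards (just a sketch against the two tables created above), something like this lists everything that changed in the last day:

-- Show profile changes logged in the last 24 hours
SELECT LogDate, ActionTaken,
       ExistingNTName, ExistingPreferredName,
       NewNTName, NewPreferredName
FROM dbo.SharePointUserProfileDelta
WHERE LogDate >= DATEADD(day, -1, GETDATE())
ORDER BY LogDate DESC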

Categories
Geeky/Programming SharePoint

SharePoint Report – Missing Managers

Trying to roll out SharePoint at an org can have its hard times. One is getting User Profile sync working well, and making sure the Org Browser works well goes along with that. You can sync the manager over from AD and everything falls into place, but there might be users you are pulling in that don’t have a manager set. Now of course you can query AD for this, but you would have to already know the filters and OUs you are pulling into SharePoint.

Another way to do this is to query the profile database directly (and of course, disclaimer here: don’t try this at home if you are scared of querying SharePoint databases, and yes, it probably isn’t recommended, but I am doing it anyways). Here is a query you can use to get the users with no manager, and also join back to get some other attributes such as department, office, and title, so you can figure out where they are and who their manager might be (helpful in a larger org). You can easily throw this in an SSRS report and have it email whoever maintains the managers in AD or in your organization. (Note: SP2010_ProfileDB might not be the name of your actual profile DB; you would have to change that in the query below.)

SELECT up.RecordId,PreferredName,NTName,Email, office.Office, titles.JobTitle, dept.Department
	FROM dbo.UserProfile_Full up
	LEFT OUTER JOIN (SELECT [RecordID]
      ,[PropertyVal] AS 'Office'
  FROM [SP2010_ProfileDB].[dbo].[UserProfileValue]
  WHERE PropertyID = 11) office ON up.RecordId = office.RecordId

	LEFT OUTER JOIN (SELECT [RecordID]
      ,[PropertyVal] AS 'JobTitle'
  FROM [SP2010_ProfileDB].[dbo].[UserProfileValue]
  WHERE PropertyID = 13) titles ON up.RecordId = titles.RecordId

	LEFT OUTER JOIN (SELECT [RecordID]
      ,[PropertyVal] AS 'Department'
  FROM [SP2010_ProfileDB].[dbo].[UserProfileValue]
  WHERE PropertyID = 14)dept ON up.RecordId = dept.RecordId

	WHERE
	Manager IS NULL
	ORDER BY Office
Categories
Business Intelligence Geeky/Programming SQLServerPedia Syndication

SQL Server Agent – Query To Find Long Running Jobs

I use SQL Agent a ton. It is my go-to place to schedule jobs and tasks. Problem is, there are pieces “missing”. For example, if a job hangs, you can’t have it auto-stop after a given amount of time. It doesn’t alert on changes to a job, etc., etc.

I asked on Twitter what the DBAs think; @SQLRockstar thinks using OpsMgr is the way to go, which would probably be great... if I had it. I need a quick and dirty solution.

What I did was query the sysjobs, sysjobactivity, and sysjobhistory tables to get some kind of “look” into the running jobs. That way, if a job that normally takes 45 minutes has been running for 5 hours, I should be able to catch it early on. I’m just trying to be more proactive instead of reactive.

SELECT
	j.job_id AS 'JobId',
	name AS 'JobName',
	start_execution_date AS 'StartTime',
	stop_execution_date AS 'StopTime',
	avgruntimeonsucceed,
	DATEDIFF(s,start_execution_date,GETDATE()) AS 'CurrentRunTime',
	CASE WHEN stop_execution_date IS NOT NULL THEN
			DATEDIFF(ss,start_execution_date,stop_execution_date) ELSE 0 END 'ActualRunTime',
	CASE
		WHEN stop_execution_date IS NULL THEN 'JobRunning'
		WHEN DATEDIFF(ss,start_execution_date,stop_execution_date)
			> (AvgRunTimeOnSucceed + AvgRunTimeOnSucceed * .05) THEN 'LongRunning-History'
		ELSE 'NormalRunning-History'
	END 'JobRun',
	CASE
		WHEN stop_execution_date IS NULL THEN
			CASE WHEN DATEDIFF(ss,start_execution_date,GETDATE())
						> (AvgRunTimeOnSucceed + AvgRunTimeOnSucceed * .05) THEN 'LongRunning-NOW'
			ELSE 'NormalRunning-NOW'
		END
		ELSE 'JobAlreadyDone'
	END AS 'JobRunning'
 FROM msdb.dbo.sysjobactivity ja
	INNER JOIN msdb.dbo.sysjobs j ON ja.job_id = j.job_id
INNER JOIN (
	SELECT job_id,
	AVG
	((run_duration/10000 * 3600) + ((run_duration%10000)/100*60) + (run_duration%10000)%100)
	+
	STDEV
	((run_duration/10000 * 3600) + ((run_duration%10000)/100*60) + (run_duration%10000)%100) AS 'AvgRuntimeOnSucceed'
	 FROM msdb.dbo.sysjobhistory
	WHERE step_id = 0 AND run_status = 1
	GROUP BY job_id) art
	ON j.job_id = art.job_id
WHERE
(stop_execution_date IS NULL) OR
	(DATEDIFF(ss,start_execution_date,stop_execution_date) > 60
	AND
	CAST(LEFT(start_execution_date,11) AS DATETIME) = CAST(LEFT(GETDATE(),11) AS DATETIME))
ORDER BY start_execution_date DESC

The inner query looks at the history to get the average runtime, and I added the standard deviation to that to make it more realistic, in case you have a one-off that throws the average off. Of course, this all depends on you having more than 3 entries in your job history as well.
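To make that run_duration math concrete: run_duration in sysjobhistory is an integer in HHMMSS form, so the inner query has to break it apart to get seconds. A quick worked example of the same expression:

-- run_duration = 13542 means 1 hour, 35 minutes, 42 seconds
DECLARE @run_duration int
SET @run_duration = 13542

SELECT (@run_duration/10000 * 3600)        -- hours:   1 * 3600 = 3600
     + ((@run_duration%10000)/100 * 60)    -- minutes: 35 * 60  = 2100
     + (@run_duration%10000)%100           -- seconds: 42
     AS DurationInSeconds                  -- total:   5742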

I then just compare the average to the current runtime (if the job is executing) and to the actual (if it is complete).

You could use this query in a report that you check throughout the day to catch long-running jobs. You could also use it to return just the ones that are currently running and over the time threshold, and have it alert you, so you can catch them right away (a rough sketch of that is below).
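Here is a rough sketch of the alerting piece. I am assuming two things for the example: the query above has been saved as a view (I am calling it dbo.vw_LongRunningJobs, which is made up), and Database Mail is configured; swap in your own profile name and recipients:

-- If any job is currently flagged as long running, send an email
IF EXISTS (SELECT 1 FROM dbo.vw_LongRunningJobs WHERE JobRunning = 'LongRunning-NOW')
BEGIN
    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = 'DBA Mail',              -- your Database Mail profile
        @recipients   = 'dba-team@example.com',  -- placeholder address
        @subject      = 'SQL Agent: long running job detected',
        @body         = 'At least one job has been running longer than its average runtime.'
END

Drop that in its own SQL Agent job on a short schedule (every 10 or 15 minutes) and you have a poor man’s alert without OpsMgr.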

I tested this on a few servers running SQL 2005, but I am guessing it will work on SQL 2008 as well. Find and kill those long-running jobs!!