Categories
Geeky/Programming

Analyzing ADFS IIS Logs

If you are using Active Directory Federation Services (ADFS) and you want to see which users are logging in to which external services, and when, you can analyze the IIS logs on the ADFS server. It is pretty straightforward since it is just IIS.

First, get on your ADFS box, find the IIS log directory, usually something like “C:\Windows\System32\LogFiles\W3SVC1”, and grab those logs.

Install LogParser on your machine.

Now you can write SQL-style queries against your logs. For ADFS logs we don’t care about most of the columns; primarily we want the username and date, maybe the URI for filtering, and maybe the referrer or user agent to see what browsers your users are on. To get, say, unique logins per day for a given service, we just need the date, username, and URI.

Remember the date and time are logged in UTC, so you need a function to convert them if you want local time (or leave them as is; it all depends on how accurate you want things to be). Hint: combine them with TO_TIMESTAMP(date, time) and convert with TO_LOCALTIME().
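
For example, a quick LogParser query to show logins in local time could look like this (the log path is just a placeholder):

logparser -i:IISW3C "SELECT cs-username, TO_LOCALTIME(TO_TIMESTAMP(date, time)) AS LocalTime FROM C:\Temp\adfslogs\u_ex*.log WHERE cs-username IS NOT NULL"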

Now, here is the LogParser query:

logparser "SELECT DISTINCT cs-username, date INTO FROM WHERE cs-username NULL and cs-uri-query LIKE '%your service%'"

Note the output path and log path in the statement; the ones above are just placeholders, so change them to what yours are. The same goes for the LIKE clause. For example, to query for Microsoft Dynamics CRM Online, I used

LIKE '%dynamicscrm%'

Run that query, then open the .csv you exported. Format the data as a table, and pivot it by user and by date. Get the number of unique days using a date diff, and analyze logins per day and logins per user. Tie it to Active Directory (using Power Query) to add some dimension attributes like title or department, and very quickly you can analyze which users, departments, etc. are using your service.

Categories
Business Intelligence

Day 1 Review #sqlpass #summit12

This SQL PASS Summit was my third, and it was good. Kind of crazy timing as we just had a baby 2+ weeks ago, so I am very lucky I got to go.

Day one was Wednesday, Nov 7th. There is a kickoff event the night before, which is always good for seeing everyone again. There are also pre-cons the two days before (the 5th and 6th). Like many people I talked to, I came out on the 5th thinking the conference started on the 6th, which was mistaken, so it ended up being kind of a free day, though there were still things going on. The website said 6th-9th, so we all assumed without digging into the details. At least I wasn’t the only one.

The first day keynote was good: Ted Kummert from Microsoft, whom I have seen a few times now, and the same cast of characters, with Amir Netz showing off more Power View and movie data. The big announcements that made me perk up were SQL Server 2012 SP1 and Power View over OLAP (coming soon?). No big flashy giveaways like BUILD, but a good keynote, and then the fun starts.

I attended 4 sessions on Wednesday:

1. BIA-303: What’s New in Analysis Services 2012? – Chris Webb

This was my first session of the day, and it was in 305-TCC. TCC was across the street; maybe it was like that in years past, but I never had to go there before, so everyone seemed lost. We finally got there, but then Chris Webb told us that the abstract was wrong in some places and the talk would mostly be about Tabular, not Multidimensional. Oh well, good stuff anyway. There was one slide about OLAP. The biggest things I got out of this were xEvents for SSAS and how to pull them into PowerPivot. This was the first time I have seen Chris Webb present, and it was good.

2. BIA-316-M: Enterprise Information Management: Bringing Together SSIS, DQS, and MDS

The second session was given by two Microsoft employees. I try to hit as many sessions by Microsoft employees as I can because, well, they usually have worked on the products, they get into details, and they sometimes let some juicy details slip.

Matt Masson and Matthew Roche are great presenters: funny, and they play off each other. They did a show-and-tell of SQL Server 2012 MDS and DQS and discussed how they could and should be used in organizations. Master data is a huge issue in many businesses, and the Microsoft solution looks really good: use DQS along with SSIS to clean your data (or as a very smart “spell checker”), and then MDS to track changes, handle workflow, and send data back to source systems if you’d like. The big thing I took away was how they see MDS fitting into businesses: a BI team should implement MDS/DQS to make sure their dimensional data is clean and is the “golden master” they need for great BI reporting, with updating back to source systems as a secondary thing.

3. BID-212-S: Around the World with SharePoint BI Toolbelt

This was a typical Brian Knight session, though not as huge of a production as some I have seen: just him, his employee/BI architect, and a helper/demo person.

They quickly showed how to get SharePoint set up for Excel Services and Power View and then did some demos. Overall good stuff, but it seemed a bit rushed and some things didn’t work. They demo’d PerformancePoint, which, who knows what future it has, but it seems like the best tool for OLAP scorecards in SharePoint. PerformancePoint has been an enigma for us to do anything with, and I’m not sure we ever will. I always see it demo’d, see the benefits, and see what it can do, but we never get around to doing it. Maybe someday, or maybe it will just get replaced by something..

As I said, he brings up a salesperson from his team or someone new to show how easy it is for a non-techie to use Power View (or whatever tool they are presenting) and go through a little demo.

4. BID-102: Mobile Business Intelligence for Everyone, Now!

The final presentation of the day was from Jen Stirrup, who also won the PASSion Award on Thursday. I chatted with her briefly Wednesday morning, which was good, as I hadn’t met her before this summit. The presentation was OK. It was 100 level, but I wanted to see some mobile BI. I had high expectations, as I saw Jen Underwood present on mobile BI at TechEd, so I was expecting more of the same. Jen Underwood was actually in the audience and answered some questions.

The presentation had some technical glitches, and it also dug a little too deep into visualization discussion, which is good, but I wasn’t expecting it in this session; maybe in a different one. Jen showed some things on her iPad and talked about how she uses Azure, SSRS in Azure, and HostedPowerPivot, which was good stuff, but nothing new that I didn’t see at TechEd.

I use MobiSSRS for SSRS reports on iOS and that works great; she didn’t mention it, but mobile BI presentations can get into “a 3rd party app here and there” instead of what you can do out of the box. With mobile BI, though, the first question is, “Do you run SharePoint?” and the second is, “Is it Enterprise?” because that makes a big difference in what you might try to do.

Wednesday was a good day. I didn’t do much in the evening besides grab a bite to eat and hit the hay. The bummer this year was that I started getting a cold on the plane out, and it ate at my voice all week. Nothing too serious, but enough to not want to talk about BI in a pub much, since you have to yell.

More to come about Day 2 and Day 3, and overall thoughts..

Categories
Business Intelligence Geeky/Programming

Yamanalysis: Analyzing Yammer and Using PowerPivot on MySQL

I have blogged before about how we use Yammer. Some interesting data can be gleaned from Yammer usage. One thing, though, is that the data and usage stats available within Yammer itself are limited, but you can get all the data out and take a look at things yourself. I ran into Yamanalysis and decided to give it a try.

After getting Ruby, Rails, MySQL, curl/curb, GraphViz, IBM WordCloud and whatever else configured, I finally got it working. (FYI – MySQL 5.0 – you need to run the config wizard as administrator on Windows 7 or it just hangs at the end).

It gives pretty cool data and analysis at a high level. Of course, after getting everything working, I wanted to hit the data with PowerPivot. This sounds like an easy feat, but it turned out to be a complicated task.

I first got the MySQL ODBC connector 5.1 (since PowerPivot doesn’t natively connect to MySQL, and 5.1 because that is the only version I could reliably find and get to work) and set up an ODBC source. It tested fine.

In PowerPivot, I would run through the wizard and get architecture mismatches and catastrophic failures when testing the connection. Ignoring that and moving forward, running a query would just hang on import forever. I tried different DSNs, User and System DSNs, etc., to no avail.

What I ended up doing was firing up my local Microsoft SQL Server instance and creating a linked server through a System DSN to the MySQL instance; then I could query the data fine from SQL. I opened up PowerPivot, connected to the local SQL Server, ran the query against MySQL through the linked server, and it worked. What a workaround, what a hack, but at least I can hit the data in PowerPivot locally, which was my goal here.
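
If you want to try the same workaround, the rough shape of it in T-SQL is below. This is only a sketch: the linked server name, DSN name, login, and the yamanalysis table name are placeholders for whatever you have locally.

-- Create a linked server over the MySQL ODBC System DSN (names are placeholders)
EXEC master.dbo.sp_addlinkedserver
	@server = N'YAMMYSQL',
	@srvproduct = N'MySQL',
	@provider = N'MSDASQL',       -- OLE DB provider for ODBC
	@datasrc = N'YamanalysisDSN'; -- the System DSN pointing at MySQL

EXEC master.dbo.sp_addlinkedsrvlogin
	@rmtsrvname = N'YAMMYSQL',
	@useself = N'False',
	@rmtuser = N'mysqluser',
	@rmtpassword = N'mysqlpassword';

-- Pass a query through to MySQL; PowerPivot then connects to the local SQL Server
-- and imports the results of a query like this.
SELECT *
FROM OPENQUERY(YAMMYSQL, 'SELECT * FROM messages');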

Of course I could take what Yamanalysis is doing and dump to SQL, or do something similar in C# and dump to SQL, that might be a project for another day.

Categories
Business Intelligence Geeky/Programming

Emergency Broadcast: Cursors in SQL are Bad.

Emergency Broadcast:

Last week, I stumbled across a “legacy” system. Small system, few tables, few stored procs, few web pages that let users manage it. The main table had about 40k rows in it, nothing huge. This system gets used sparingly.

Reports came in of the main page of the system taking 5+ minutes to load, or just timing out. Really? I dug through the code and, lo and behold: a cursor.

It was looping through 500+ items and running 8 or so queries in each iteration. The queries were all very similar:

SELECT some stuff INTO CURSOR

WHILE LOOPING THROUGH CURSOR
	SELECT STUFF 1
	SELECT STUFF 2
	..
END

I ended up rewriting it as a single query that grabs all the data at once, using some CASE statements in the SELECT. The query went from 5+ minutes to less than 1 second.
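
A set-based version of that pattern generally looks something like this (the table and column names here are made up to illustrate the approach, not the actual legacy schema):

-- One pass over the data instead of 500+ iterations running 8 queries each:
-- join once and let CASE do the per-row work.
SELECT	i.ItemId,
	i.ItemName,
	SUM(CASE WHEN d.Status = 'Open' THEN 1 ELSE 0 END) AS OpenCount,
	SUM(CASE WHEN d.Status = 'Closed' THEN 1 ELSE 0 END) AS ClosedCount
FROM dbo.Items i
	LEFT OUTER JOIN dbo.ItemDetails d ON d.ItemId = i.ItemId
GROUP BY i.ItemId, i.ItemName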

Yes, CURSORs are bad. Avoid them. Rewrite your SQL. It amazes me that with all the good info out there on the web around SQL, people still write CURSORs in their queries. I asked the DBA to look at all the procs on all our systems and find all the CURSORs so we can root them out once and for all.

End of Emergency Broadcast

Categories
Business Intelligence SQLServerPedia Syndication

Selling Management on SQL 2012

2012 is going to be a big year in the SQL world. No, the world isn’t going to end. SQL 2012 should be released by Microsoft, hopefully in the first half of the year (cross your fingers for the first quarter!). Great! But many shops out there are still on SQL 2005, 2008, or 2008 R2, some even on SQL 2000 (SP4; does that still get support?), and you want to get to SQL 2012. What can you do to make that transition easier? You need to sell the features and benefits, just like anything else.

Clustering

If you have any kind of clustering or mirroring environment, or are even thinking about clustering, then SQL 2012 is going to be what you want. AlwaysOn makes it dead simple to create and manage clusters. If you look back over past versions of SQL and think about clustering, you might shudder. With 2012, things become much easier, and management has to see the benefit, as with anything, of making your systems more available with the new AlwaysOn.

Master Data Services and Data Quality Services

Microsoft came out with their first round of Master Data Services (MDS) in SQL 2008 R2, but it was lackluster. The interface is clunky, weird, and hard to use. Most “end users” of MDS aren’t going to be that technical; you need something simple, like SharePoint or Excel, and MDS is neither (even though it’s a weird version of SharePoint). With 2012, MDS is vastly improved and actually something viable that an enterprise could use for a Master Data Management (MDM) solution. Couple that with Data Quality Services (DQS) and you get tons of bang for your buck. With MDS and the Excel add-in, this will be just what the doctor ordered for MDM groups in businesses.

Business Intelligence

Near and dear to my heart, of course, is Business Intelligence. What a huge release 2012 is for BI folks. First off, a whole new Analysis Services type: Tabular, with columnar VertiPaq-style cubes. You can reverse engineer PowerPivot models right into SSAS Tabular and then tweak them for release out to the enterprise.

Then the enhanced SSIS stuff, better IDE, better management of packages, and more. Of course the integration with the Visual Studio 2010 IDE is a welcome feature, especially for those of us that also need to work on C# and .NET 4.0 stuff!

But don’t forget the potentially biggest thing yet out of the BI tools for 2012: Power View (yes, the space is intentional, not sure why.. but now we have PowerPoint, PowerPivot, and Power View). Naming aside, Power View could be a HUGE analytics tool for getting more BI out to the people in an enterprise. First off, they plan on making it work on iOS! Power View works on Tabular cubes, so you see the tie-in there. The one big catch with Power View is that it only works inside of SharePoint; there is no standalone editor. You had better have SharePoint 2010 and a pretty good SharePoint admin alongside your BI team to get all this working. Some of the enhanced end-user alerting in SSRS integrated mode looks nice as well. But once again, you need SharePoint. DON’T forget the SharePoint!

There is much more in SQL 2012 that will make DBAs’ lives easier and BI pros’ development more streamlined. Too much to outline in just one post. But if you are trying to sell a SQL 2012 upgrade to management, the “big three” things I outlined above are a good starting point. One thing to be aware of, though, is that the licensing model has changed in SQL 2012 to core-based, so you will want to read up on that.

I’m excited for SQL 2012 bits to hit and I hope you are too!

Categories
Geeky/Programming SharePoint SQLServerPedia Syndication

SharePoint 2010: Track User Profile Changes Over Time Using MERGE

I recently blogged about a report you can write to see which users don’t have managers in SharePoint 2010 (get your Org Browser fixed!). Well, how about tracking who gets added, deleted, or edited every day? You could probably look at the Forefront Identity Manager that SharePoint 2010 uses and do some logging, or use the API to log things, but that is.. well, harder than this.

What I wanted to do was keep logs for debugging and reporting. Who is getting added by the SharePoint 2010 user profile sync? Is the last user added to Active Directory a valid user, or a system account? Now, you could query Active Directory (just like finding users without managers), but I am guessing that most SharePoint admins don’t have AD access, so they need another way.

What I did was create another database on my database server where the SharePoint content and profile databases are stored, so I can use it as a dumping ground.

What you need to do is create a table to dump the profile data into, and then every day do a SQL MERGE into it and save the changes. This is where it gets interesting.

First, create your table (for this example I am only tracking NTName and PreferredName, i.e. “domain\username” and “Name, User”, but you could easily add other columns).

CREATE TABLE [dbo].[SharePointUserTracking](
	[RecordId] [bigint] NOT NULL,
	[NTName] [nvarchar](400) NULL,
	[PreferredName] [varchar](256) NULL,
 CONSTRAINT [PK_SharePointUserTracking] PRIMARY KEY CLUSTERED
(
	[RecordId] ASC
)WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
) ON [PRIMARY]

Once you have your table, you are ready to create your MERGE query, track the changes, and then get it scheduled. What we want to do is MERGE the UserProfile_Full data from the profile DB into our new table, tracking INSERTs, UPDATEs, and DELETEs in a separate tracking table. Let’s create that “tracking changes” table:


CREATE TABLE [dbo].[SharePointUserProfileDelta](
	[ExistingRecordId] [bigint] NULL,
	[ExistingNTName] [nvarchar](400) NULL,
	[ExistingPreferredName] [varchar](256) NULL,
	[ActionTaken] [nvarchar](10) NULL,
	[NewRecordId] [bigint] NULL,
	[NewNTName] [nvarchar](400) NULL,
	[NewPreferredName] [nvarchar](256) NULL,
	[LogDate] [datetime] NULL
) ON [PRIMARY]

GO

Notice the “ActionTaken” column, which tracks what we did: INSERT, UPDATE, or DELETE.

Now, for the super merge (note: your profile DB might not be named the same, and also note the COLLATE clauses; you may or may not need those in your environment):

  MERGE dbo.SharePointUserTracking AS target
    USING
            (SELECT RecordId,NTName,PreferredName FROM SP2010_ProfileDB.dbo.UserProfile_Full) AS source
            (RecordId,NTName,PreferredName) ON (target.RecordId = source.RecordId)
    WHEN MATCHED AND (target.NTName <> source.NTName COLLATE SQL_Latin1_General_CP1_CI_AS OR target.PreferredName <> source.PreferredName COLLATE SQL_Latin1_General_CP1_CI_AS) THEN
        UPDATE SET
                  NTName = source.NTName,
                  PreferredName = source.PreferredName
      WHEN NOT MATCHED THEN
          INSERT (RecordId,NTName,PreferredName)
          VALUES (source.RecordId,source.NTName, source.PreferredName)
      WHEN NOT MATCHED BY SOURCE THEN
            DELETE
      OUTPUT deleted.*, $action, inserted.*,GETDATE() INTO dbo.SharePointUserProfileDelta;

Now you can schedule that to run after your daily user profile sync job in SharePoint, and then you have a running log of changes, which you could easily build an SSRS report off of, or do whatever you want with. Pretty cool! Just know that you can use this method to track changes to any table.. slowly changing dimensions and more.
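
For example, a quick query against the delta table to see who the most recent sync added or removed might look like this (just a sketch; adjust the date filter to taste):

SELECT	LogDate,
	ActionTaken,
	COALESCE(NewNTName, ExistingNTName) AS NTName,
	COALESCE(NewPreferredName, ExistingPreferredName) AS PreferredName
FROM dbo.SharePointUserProfileDelta
WHERE ActionTaken IN ('INSERT', 'DELETE')
	AND LogDate >= DATEADD(DAY, -1, GETDATE())
ORDER BY LogDate DESC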

Categories
Geeky/Programming SharePoint

SharePoint Report – Missing Managers

Trying to roll out SharePoint at an org can have its hard times. One of them is getting User Profile sync working well, and making sure the Org Browser works well goes along with that. You can sync over the manager from AD and everything falls into place, but there might be users you are pulling in that don’t have a manager set. Now of course you can query AD for this, but you would have to already know the filters and OUs you are pulling into SharePoint. Another way to do it (and of course, disclaimer here: don’t try this at home if you are scared of querying SharePoint databases, and yes, it probably isn’t recommended, but I am doing it anyway) is the query below, which gets the users with no manager and joins back to get some other attributes such as department, office, and title, so you can figure out where they are and who their manager might be (helpful in a larger org). You can easily throw this in an SSRS report and have it emailed to whoever maintains the managers in AD or in your organization. (Note: SP2010_ProfileDB might not be the name of your actual profile DB; you would have to change that in the query below.)

SELECT up.RecordId,PreferredName,NTName,Email, office.Office, titles.JobTitle, dept.Department
	FROM dbo.UserProfile_Full up
	LEFT OUTER JOIN (SELECT [RecordID]
      ,[PropertyVal] AS 'Office'
  FROM [SP2010_ProfileDB].[dbo].[UserProfileValue]
  WHERE PropertyID = 11) office ON up.RecordId = office.RecordId

	LEFT OUTER JOIN (SELECT [RecordID]
      ,[PropertyVal] AS 'JobTitle'
  FROM [SP2010_ProfileDB].[dbo].[UserProfileValue]
  WHERE PropertyID = 13) titles ON up.RecordId = titles.RecordId

	LEFT OUTER JOIN (SELECT [RecordID]
      ,[PropertyVal] AS 'Department'
  FROM [SP2010_ProfileDB].[dbo].[UserProfileValue]
  WHERE PropertyID = 14)dept ON up.RecordId = dept.RecordId

	WHERE
	Manager IS NULL
	ORDER BY Office
Categories
Agile Business Intelligence Geeky/Programming SQLServerPedia Syndication

Agile: Creating an SSRS Burndown Chart Part 3

In the previous 2 parts (see Part 1 and Part 2) of this series I showed you how to get your data ready, and how to get your report started and your Datasets and parameters where you need them. In this part, we will get the graph functional, and in the next part, we will make it pretty.

Start by adding a title to your report, “Agile Burndown”, then add a Line Chart to the report. Make it somewhat big, delete the Chart Title and Axis Titles, and remove the “Details” group from the Category Groups. You should have something that looks like this:

 

image

Now to get the data on and finish it off!

Drag your values over to your Chart Data Values area like this:

image

There is one thing we need to tweak, and it is on the PointsLeft value. Right-click the PointsLeft series and go to “Series Properties”. To the right of the Value field, click the Fx button (to edit the expression).

We need to change this series so it doesn’t write anything to the graph for dates after today. Why? If you don’t do this, your PointsLeft line will drop off to zero for the sprint dates after the current day, and we don’t want that. This is what the expression should be:

 

=IIF(Sum(Fields!PointsLeft.Value)=0 And Fields!Date.Value > DateTime.Now,Nothing,Sum(Fields!PointsLeft.Value))

 

Pretty cool! Your graph should now function as a working burndown chart. But of course we need to pretty it up. Look for the next and final post soon.

Categories
Agile Geeky/Programming SQLServerPedia Syndication

Agile: Creating an SSRS Burndown Chart Part 2

In the previous post in this series, Agile: Creating an SSRS Burndown Chart Part 1, I explained what data you would need to prepare to create an SSRS Burndown Chart (Sprint_Dates, Stories, Story_History). In this part of the series I will explain how to get a basic burndown report in SSRS.

First, fire up Report Builder 3.0 and create a new report (if the wizard pops up, just pick “Blank Report”). You need to add a Data Source to your report. In my example, I am just using a database on my localhost called Agile, so I connect to that and create a report Data Source.

image

 

We then need to add 3 Datasets to the report (Burndown, Sprints, and CurrentSprint) and one parameter (Sprint), and then we can format our report.

 

Sprints (this will be a dropdown of Sprints for a user to choose from)

image
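
The screenshot above shows the dataset query. If you are using the tables from Part 1 of this series, something along these lines should work (this is my reconstruction, not necessarily the exact query in the image):

SELECT DISTINCT Sprint
FROM dbo.Sprint_Dates
ORDER BY Sprint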

CurrentSprint (this will get the current sprint based on the day we view the report; it is used as the default for the Sprint parameter we will create)

image
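
Again, the query is in the screenshot; a reasonable reconstruction, using the Sprint_Dates table from Part 1, is to grab the sprint whose dates cover today:

SELECT TOP 1 Sprint
FROM dbo.Sprint_Dates
WHERE WorkDate <= CAST(GETDATE() AS DATE)
ORDER BY WorkDate DESC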

 

For the Burndown, do the same thing, but since the query is so large, no screenshot, just the query:

;WITH DayHistory AS
(
SELECT
	 bd.[Date]
	,bd.PointsScheduled
	,bd.PointsLeft
	,bd.PointsScheduled - ((ROW_NUMBER() OVER (ORDER BY bd.[Date]) - 1) * (CAST(bd.PointsScheduled AS DECIMAL(15,6))/10.0)) AS 'Goal'
	,ROW_NUMBER() OVER (ORDER BY bd.[Date]) AS [DayNumber]
FROM (
	SELECT tot.Sprint,tot.LogDate AS [Date],
		CASE WHEN SUM(tot.PointsScheduled) = 0 THEN (SELECT SUM(Points)
		FROM dbo.Stories st
		WHERE Sprint = @Sprint) ELSE SUM(tot.PointsScheduled) END AS 'PointsScheduled',
		SUM(tot.PointsLeft) AS 'PointsLeft'
	FROM (
			-- Get History for the Current Sprint
			SELECT Sprint,LogDate,SUM(Points) AS 'PointsScheduled', SUM(PointsLeft) AS 'PointsLeft'
			 FROM
				 dbo.Story_History st
				WHERE Sprint = @Sprint
			GROUP BY Sprint,LogDate
			UNION
			-- Get the Current Day
			SELECT	Sprint AS 'Sprint',CAST(GETDATE() AS DATE) AS 'LogDate',
				SUM(Points) AS 'PointsScheduled',
				SUM(PointsLeft) AS 'PointsLeft'
				FROM dbo.Stories
				WHERE Sprint = @Sprint
			GROUP BY Sprint
			UNION
			-- Get zero's for all days in sprint to round out our dataset
			SELECT @Sprint AS 'Sprint',WorkDate,0,0
			FROM dbo.Sprint_Dates
			WHERE Sprint = @Sprint
		) tot
	GROUP BY tot.Sprint,tot.LogDate
) bd
)
SELECT
	 a.[Date]
	,ISNULL(b.PointsScheduled, a.PointsScheduled) AS [PointsScheduled]
	,ISNULL(b.PointsScheduled, a.[PointsLeft]) AS [PointsLeft]
	,ISNULL(b.PointsScheduled, a.[Goal]) AS [Goal]
FROM DayHistory a
	LEFT OUTER JOIN DayHistory b
		ON a.DayNumber = b.DayNumber - 1
			AND b.DayNumber = 2
ORDER BY Date

 

This query is where all the magic happens. First, you need to get your story point values for each day, both from your history and from the current day. You also need to get all the days in that sprint with zeros, so that your graph will have all days and not just days with burndown. The CTE around the main query then calculates the burndown by day, so you end up with 4 columns: Date, PointsScheduled, PointsLeft, and Goal.

Now that you have your Datasets, we need to create a parameter, and then the graph!

Create a new parameter called “Sprint”, and set up the available values. Remember the Dataset we created to get all the sprints? Here is where you use it, like this:

image

Next, we want to set up the default value. Remember the query to get the “Current Sprint”? That is what is used to set our default.

image

Once you have that all setup, it is time to build the graph!

We are really close to having a working report here, and check back for part 3 of the series to get the graph working correctly, and part 4 for beautification!

Categories
Agile Geeky/Programming

Agile: Creating an SSRS Burndown Chart Part 1

The burndown chart. A must-have for any ScrumMaster and Agile team. What it shows you is the rate at which you are “burning” down story points.

image

As you can see from the chart above, there are 3 lines. Red is your “points scheduled”, green is the “goal”, and blue is “points left”. While it is easy enough to create this chart and track the burndown manually in Excel, many teams eventually move from Excel to other systems to track their points and sprints. Right now I have one team using Unfuddle, one team using TFS, and others using this chart with Footprints; really you can use whatever you want, since this chart can be built off of any database as long as it has the right data.

First, you need a table with your stories in it. You need to have some key columns – Sprint, Points and PointsLeft.

CREATE TABLE [dbo].[Stories](
	[Sprint] [varchar](50) NULL,
	[Points] [int] NULL,
	[PointsLeft] [int] NULL,
	[StoryId] [int] NOT NULL,
	[StoryText] [varchar](max) NULL
) ON [PRIMARY]

Now you may have others, like StoryId, StoryText, Assignee, etc but we aren’t concerned about those for this chart.

You then need at least 2 other tables, and a SQL job: one table to hold your sprints and dates, and one to hold your “story history”.

 

CREATE TABLE [dbo].[Sprint_Dates](
	[Sprint] [varchar](50) NOT NULL,
	[WorkDate] [date] NOT NULL
) ON [PRIMARY]

CREATE TABLE [dbo].[Story_History](
	[LogDate] [date] NOT NULL,
	[Sprint] [varchar](50) NOT NULL,
	[Points] [int] NULL,
	[PointsLeft] [int] NULL
) ON [PRIMARY]
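
Sprint_Dates just needs one row per working day of each sprint. As a made-up example, a two-week sprint with 10 working days (the burndown query later in this series divides by 10 when computing the goal line) could be loaded like this:

INSERT INTO dbo.Sprint_Dates (Sprint, WorkDate)
VALUES	('Sprint01', '2012-01-02'), ('Sprint01', '2012-01-03'),
	('Sprint01', '2012-01-04'), ('Sprint01', '2012-01-05'),
	('Sprint01', '2012-01-06'), ('Sprint01', '2012-01-09'),
	('Sprint01', '2012-01-10'), ('Sprint01', '2012-01-11'),
	('Sprint01', '2012-01-12'), ('Sprint01', '2012-01-13')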

 

You will need a SQL Agent Job to run at 11:55 PM to capture the history, which should run this query:

 

INSERT INTO dbo.Story_History (LogDate,Sprint,Points,PointsLeft)
SELECT CAST(GETDATE() AS DATE),Sprint,SUM(Points),SUM(PointsLeft)
FROM dbo.Stories
GROUP BY Sprint

 

Remember, you might not need all 3 tables, just the history and dates tables; you can get your actual stories from wherever they are stored in your database. Now that you have your data in place, you can get ready to write the actual report! Look for the next part in this series.