Behind the Green Door: Granting non-privileged users permissions to read execution status from the SSIS catalog

The SSIS catalog, introduced in SQL Server 2012, is a feature that allows us to manage projects and parameters, execute SSIS packages, and generate reports on our SSIS environment, all within a single SQL Server database, rather than spread across the range of .dtsconfig files, dtexec.exe batch files (did someone say 32bit?), and log locations provided in previous versions of SQL Server Integration Services. You can read more about it here.

The SSIS catalog is contained in a single database called SSISDB. This is a user database, rather than a system database, and can be managed just like any other user database. In this respect, you can think of SSIS as being just like any other vendor application which uses SQL Server to store its data, except that in this case the vendor in question is Microsoft.

I recently came across a requirement to allow the end users access to view the execution status of their SSIS packages without recourse to the IT Department. Naturally, this information was to be displayed in a “traffic light” indicator, since I heard somewhere that this is the only language end users understand. The users in question don’t need permission to run the SSIS packages, only to see the last execution time and execution status of each SSIS package – all the packages are run by SQL Server Agent scheduled jobs. I won’t be discussing the traffic light further here; we’ll content ourselves with creating a select statement to return the relevant information from the SSIS catalog.

I’ve created a dummy package, which you can download from here if you like, and a SQL Server Agent Job to execute the package.

SSIS package deployed to catalog

The job has a single step, which uses the SSIS subsystem to execute our package:

SQL Server Agent Job Step to run SSIS package

All of the information we require for our query is contained in a view named catalog.executions, inside the SSISDB database:
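The query itself is nothing fancy – something like the following, with the column list trimmed to the ones we care about:

SELECT folder_name,
       project_name,
       package_name,
       start_time,
       end_time,
       status            -- 4 = failed, 7 = succeeded, 9 = completed
FROM   SSISDB.catalog.executions
ORDER BY execution_id DESC;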

I’ve run the package a few times here.

Querying the SSIS Catalog

 

Now, in the normal course of things, our users naturally don’t have any permissions in the SSIS catalog. At installation there is a single database role created in the SSISDB database, called ssis_admin, whose members can read and execute all packages and other objects in the SSIS catalog. This is rather more access than our users require, so the next step is to create our own database role (SSISDB is a user database rather than a system database, after all) and grant the required permissions to this role.

The following script creates a new login for a Windows user “englebert”, and a user for this login in the SSISDB database. Next it creates a role “SSIS_Reader” and adds our user to this role. Finally, the role is granted “read” and “read objects” permissions on our folder, equivalent to the action in the dialog shown after the script:
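A sketch of such a script follows – the domain name, the folder name and the numeric permission-type values are my own placeholders rather than anything taken from the screenshots:

USE SSISDB;
GO
-- Login, user and folder names below are illustrative; substitute your own
CREATE LOGIN [MYDOMAIN\englebert] FROM WINDOWS;
CREATE USER [englebert] FOR LOGIN [MYDOMAIN\englebert];
GO
CREATE ROLE SSIS_Reader;
ALTER ROLE SSIS_Reader ADD MEMBER [englebert];
GO
DECLARE @folder_id    BIGINT,
        @principal_id INT = DATABASE_PRINCIPAL_ID('SSIS_Reader');

SELECT @folder_id = folder_id
FROM   catalog.folders
WHERE  name = N'Demo';

-- @object_type 1 = folder; @permission_type 1 = READ, 101 = READ_OBJECTS
EXEC catalog.grant_permission @object_type = 1, @object_id = @folder_id,
     @principal_id = @principal_id, @permission_type = 1;
EXEC catalog.grant_permission @object_type = 1, @object_id = @folder_id,
     @principal_id = @principal_id, @permission_type = 101;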

SSISDB folder permissions

Sadly, when Englebert runs our query against catalog.executions, it returns no rows.

no rows

This is because we’ve granted permissions on the objects in our folder, but this permission doesn’t extend to the executions of our packages.

To work around this, we need to change the SQL Server Agent job to use a TSQL step type.

We call the stored procedure catalog.create_execution, which gives us back the execution_id of this run of the package. Next, before running the package with catalog.start_execution, we call catalog.grant_permission, but with @object_type of 4, which means “operation”, and pass in the execution_id we just retrieved.
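The job step itself looks something like this – the folder, project and package names are placeholders standing in for the ones used above:

-- Create the execution, grant READ on it to SSIS_Reader, then start it
DECLARE @execution_id BIGINT,
        @reader_id    INT;

SELECT @reader_id = principal_id
FROM   SSISDB.sys.database_principals
WHERE  name = N'SSIS_Reader';

EXEC SSISDB.catalog.create_execution
     @folder_name     = N'Demo',
     @project_name    = N'PermissionsDemo',
     @package_name    = N'Package.dtsx',
     @use32bitruntime = 0,
     @execution_id    = @execution_id OUTPUT;

-- @object_type 4 = operation; @permission_type 1 = READ
EXEC SSISDB.catalog.grant_permission
     @object_type     = 4,
     @object_id       = @execution_id,
     @principal_id    = @reader_id,
     @permission_type = 1;

EXEC SSISDB.catalog.start_execution @execution_id;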

Having run this a couple of times, “Englebert” is now able to see all the executions that were started through this mechanism.

englebert success

 

Changing the database owner during SSDT Database Project Deployment

Slightly dispiriting that this is still required in 2014, given the blog entry from Gert Drapers in 2007 here and the associated connect item here, but I was reminded of this recently when I noticed some stuff running as a named user account – the application in question made use of “execute as owner” in a couple of places.

The suggestion in the original blog entry is to use sp_changedbowner  in a post-deployment script to set the database owner, but nowadays we can do slightly better than that, not least because sp_changedbowner has been deprecated since SQL Server 2008.

The other drawback with sp_changedbowner is that, like many of the “legacy” administrative stored procedures, it requires you to be in the context (using USE …) of the database you wish to alter. Now, this isn’t generally a problem when using SSDT, as the generated deployment script issues a USE $(DatabaseName) at the top, but you can never be too careful, particularly as you don’t know what else has gone on in the other post-deployment scripts – unless, of course, you can be bothered to read them.

ALTER AUTHORIZATION is the current way to change the owner of a database, and indeed of any other securable.

So, what I do now is to include a file AlterAuthorisationDB.sql in every project, and call it from the post-deployment script. (Remember, you can only have one script per project marked as the post-deployment script).

This file consists of exactly one line:
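Something along these lines, with [sa] standing in for whichever login should own the database:

ALTER AUTHORIZATION ON DATABASE::[$(DatabaseName)] TO [sa];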

This will work the same way in any project, since the $(DatabaseName) variable is pre-defined.

I then call this from the main post-deployment script:
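The call is just a SQLCMD include – the path reflects the “Misc” folder convention mentioned below:

:r .\Misc\AlterAuthorisationDB.sql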

You can see I generally organise post-deployment scripts into folders; I use “Misc” for stuff like this.

Database refactoring and SSDT Deployment

This topic came up on Twitter last night; in particular the question of how to split a table into two during a deployment without losing the data. The articles – and Connect item – linked in that conversation were all a few years old, so I thought I’d revisit this topic using SSDT in Visual Studio 2013.

Splitting a table is a fairly common operation to have to undertake during an application upgrade – in particular when you realise that a relationship you thought was one-to-one should really be one-to-many, or even many-to-many, for instance Customer to Address. This operation is so common, in fact, that it’s addressed in the Refactoring Databases book.

Chinook SSDT Project Showing Customer Table

I’m using the Chinook database from chinookdatabase.codeplex.com, and we’ll make precisely this change, splitting the Customer table into Customer and Address. I’ve already imported the database into a Visual Studio 2013 Database Project. In this instance, we’re going to allow one customer to have many addresses, so we will need a foreign key from the new Address table back to our existing Customer table.


Here’s the DDL for the existing Customer Table:
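Reproduced here from the Chinook scripts (from memory, so treat the column sizes as approximate):

CREATE TABLE [dbo].[Customer]
(
    [CustomerId]   INT           IDENTITY (1, 1) NOT NULL,   -- identity column in this project
    [FirstName]    NVARCHAR (40) NOT NULL,
    [LastName]     NVARCHAR (20) NOT NULL,
    [Company]      NVARCHAR (80) NULL,
    [Address]      NVARCHAR (70) NULL,
    [City]         NVARCHAR (40) NULL,
    [State]        NVARCHAR (40) NULL,
    [Country]      NVARCHAR (40) NULL,
    [PostalCode]   NVARCHAR (10) NULL,
    [Phone]        NVARCHAR (24) NULL,
    [Fax]          NVARCHAR (24) NULL,
    [Email]        NVARCHAR (60) NOT NULL,
    [SupportRepId] INT           NULL,
    CONSTRAINT [PK_Customer] PRIMARY KEY CLUSTERED ([CustomerId])
);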

Of these columns, we want to move Address, City, State, Country and PostalCode to our new Address table, and add a foreign key back to the Customer table.

The first step is to create our new table, and make the changes we need to our existing Customer table. The Primary Key of the new table will be CustomerID and AddressType.
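In T-SQL terms, the new table looks something like this – the column sizes mirror the originals, while the integer AddressType and the constraint names are my assumptions:

CREATE TABLE [dbo].[Address]
(
    [CustomerId]  INT           NOT NULL,
    [AddressType] INT           NOT NULL,
    [Address]     NVARCHAR (70) NULL,
    [City]        NVARCHAR (40) NULL,
    [State]       NVARCHAR (40) NULL,
    [Country]     NVARCHAR (40) NULL,
    [PostalCode]  NVARCHAR (10) NULL,
    CONSTRAINT [PK_Address] PRIMARY KEY CLUSTERED ([CustomerId], [AddressType]),
    CONSTRAINT [FK_AddressCustomerId] FOREIGN KEY ([CustomerId])
        REFERENCES [dbo].[Customer] ([CustomerId])
);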

Create the Address Table

 

Changed Customer Table

Now for the fun part. We need to add pre-deployment and post-deployment scripts to our project. Pre-deployment scripts are executed after the schema comparison has taken place, so any objects we create here won’t be dropped by the dacpac upgrade operation. We can use this feature to create a temporary table, a table variable, or a “temporary” table (meaning a table called “tmp<table_name>”) to hold the contents of our Customer table whilst the rest of the deployment runs. For no particular reason, I generally do the third of these.

There’s another consideration at this point, which is that we only want to do all of this if we’re changing from a version of the database with the address data in the Customer table to one where the addresses are held separately. There are a few approaches to this involving maintaining database version numbers in a table (e.g. AdventureWorks), or even in extended properties.  The approach I like to use is simply to test whether the change we’re making has already been made – I realise this isn’t as robust as the other methods, but it has suited me up until now. You can see this at the start of my pre-deployment script.

Finally, SSDT is going to complain that “Rows were detected. The schema update is terminating because data loss might occur.” We have a couple of options to deal with this, the first being to uncheck the “Block incremental deployment if data loss might occur” box in the publish profile advanced properties dialog. However, given that we quite like having this box checked, we’re going to take a different approach, namely deleting all the rows from the Customer table before proceeding (remember we saved these rows in the “temporary” table).

The final gotcha is that if we inspect the Chinook data model, we can see that the Customer table is referenced by two other foreign keys, from the Invoice and Employee tables.

Chinook Data Model


 

We need to disable both of these in our pre-deployment prior to deleting the rows.
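Putting the pieces together, the pre-deployment script looks something like the sketch below. The “has this change already been made” test is just a check for one of the soon-to-be-moved columns, and I’ve used NOCHECK CONSTRAINT ALL rather than naming the individual constraints:

/* Pre-Deployment Script (sketch) */
IF COL_LENGTH('dbo.Customer', 'Address') IS NOT NULL
BEGIN
    -- Keep a copy of the existing rows for the post-deployment script
    IF OBJECT_ID('dbo.tmpCustomer', 'U') IS NOT NULL
        DROP TABLE dbo.tmpCustomer;

    SELECT * INTO dbo.tmpCustomer FROM dbo.Customer;

    -- Disable the foreign keys that reference Customer so we can empty it
    ALTER TABLE dbo.Invoice  NOCHECK CONSTRAINT ALL;
    ALTER TABLE dbo.Employee NOCHECK CONSTRAINT ALL;

    -- An empty table keeps the data-loss check happy
    DELETE FROM dbo.Customer;
END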


All that remains is to populate the Customer and Address tables from our post-deployment script. Again, we only want to do this if it’s necessary, so we’ll test for the existence of our “temporary” table before doing anything. We’ll assume that we can use Address Type “1” (it is, after all, the first type!). There is a slight annoyance here, which is that Customer has an identity column, but this is easily overcome with SET IDENTITY_INSERT.
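A sketch of the repopulation, with the column lists taken from the Chinook schema shown earlier and Address Type hard-coded to 1:

/* Post-Deployment Script (sketch): repopulate Customer and Address */
IF OBJECT_ID('dbo.tmpCustomer', 'U') IS NOT NULL
BEGIN
    -- Customer has an identity column, so allow explicit key values
    SET IDENTITY_INSERT dbo.Customer ON;

    INSERT INTO dbo.Customer
           (CustomerId, FirstName, LastName, Company, Phone, Fax, Email, SupportRepId)
    SELECT  CustomerId, FirstName, LastName, Company, Phone, Fax, Email, SupportRepId
    FROM   dbo.tmpCustomer;

    SET IDENTITY_INSERT dbo.Customer OFF;

    -- Address Type 1 for every migrated address
    INSERT INTO dbo.[Address]
           (CustomerId, AddressType, [Address], City, [State], Country, PostalCode)
    SELECT  CustomerId, 1, [Address], City, [State], Country, PostalCode
    FROM   dbo.tmpCustomer;

    DROP TABLE dbo.tmpCustomer;
END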

Finally, before heading to the pub, we need to re-enable those constraints we disabled in the pre-deployment step.
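Since the pre-deployment step used NOCHECK CONSTRAINT ALL, the re-enable is just the mirror image; WITH CHECK re-validates the existing rows as the constraints come back on:

ALTER TABLE dbo.Invoice  WITH CHECK CHECK CONSTRAINT ALL;
ALTER TABLE dbo.Employee WITH CHECK CHECK CONSTRAINT ALL;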


Intelligent Design, or the Evolution of the dacpac?

As the title suggests, what follows is a speculative history, prompted by Jamie Thomson’s “Dacpac Braindump” post, and thoroughly uninformed by any inside information on the development of SQL Server Data Tools. Nonetheless, speculating on why something works the way it does can sometimes be useful in understanding how something works the way it does.

In the beginning there was something called DataDude, or Visual Studio Team System for Database Professionals, and eventually something called Visual Studio 2010 database projects. These tools enabled users of Visual Studio to develop database objects in a declarative manner, build their projects into .dbschema files, and deploy these .dbschema files with vsdbcmd.exe. The Visual Studio developers looked at this, and saw that it was good.

Some things were still a bit of a pain though, not least the fact that one had to have a Visual Studio license in order to take advantage of these tools, and you needed to redistribute, or at least copy to your production servers, the deployment utility vsdbcmd.exe.

In parallel to this, a feature known as “Data Tier Applications” was introduced with SQL Server 2008 R2, and subsequently back-ported to SQL Server 2008 Sp2. This used the DacFx framework version 1.0 to create data-tier applications, which were a special kind of database that could be deployed using the “Data Tier Application Wizard” and “monitored” using something called the “SQL Server Utility”. Visual Studio 2010 also contained a “Data Tier Application” project type, which could be used to create such “applications”.

One major impediment to the widespread adoption of this technology was that in-place upgrades of the deployed schema weren’t possible; the “upgrade” process (from msdn) was as follows:

  • Verifies that the deployed DAC and the DAC package both have the same application name (for example, both names are set to Finance). It also evaluates whether the instance of the Database Engine meets the requirements specified in the server selection policy (if defined) in the new version of the DAC, and whether the existing database contains objects not supported in a DAC.
  • Deploys the new version of the DAC from the DAC package. This creates a new database with a temporary name.
  • Sets the original database to read-only if it is not already in read-only mode, and copies the data to the new database.
  • If the original database was in read-only mode, the new database is set to read-only.
  • The original database is renamed by appending a string to the end of the database name.
  • The new database is assigned the original database name.
  • After the database administrator has confirmed that the new database is operating correctly, the original database can be archived.

In short, this feature was as much use as a chocolate teapot.

These limitations were addressed with the release of DacFx 1.1, available in SQL Server 2008 R2 Sp1 and Visual Studio 2010 Sp1. However, since this new functionality was introduced in a service pack (or a service pack to a service pack?), nobody paid any attention to it (Hello, DAXMD!).

Enter SQL Server 2012, with its “extended” beta period. At this time, the tools now known as SSDT were referred to as project “Juneau”, which combined the development environment from Visual Studio Database Projects with the deployment capabilities of the DacFx framework. This, in my uninformed opinion, is the reason why SSDT was released with so many “missing” features – unit testing, schema view, data generation, etc, etc, as all the effort was going into re-engineering the project system to generate .dacpacs instead of .dbschemas.

So now, in early 2014, where do we find ourselves? Many of the “missing” features have been added back into the SSDT project system, and we’re able to deploy our projects idempotently using just the facilities provided in a base SQL Server installation, without having to redistribute vsdbcmd. There are alternative tools, most notably from RedGate, but it’s rare that I do any new database work without using SSDT.

 

Testing SSIS Send Mail Tasks without a mail server

Papercut is a handy utility for testing SSIS Send Mail tasks, or anything else – Database Mail, PowerShell, etc. – that needs to send mail, in environments where it’s either inconvenient or impossible to set up a “real” mail server. You can download it from Codeplex here.

There’s no installer, just download, extract and run the exe:

running papercut

 

We can minimise this window and it will disappear into the system tray.

Now we create an SSIS package with an SMTP Connection Manager, and just set the “SMTP Server” parameter to “localhost”:

SMTP Connection Manager

 

Add a Send Mail Task to use the connection manager:

 

Send Mail Task Configuration

 

And we’re done. When we run the SSIS package, we get a little pop-up in the system tray:

newmessage

 

Clicking on the envelope opens the Papercut window so we can view the message:

 

Message Received from SSIS Send Mail Task

 

In case you need it, the SSIS project for this post can be downloaded here. To import an .ispac file into SSDT-BI, follow the instructions here.

Dragging and dropping from SQL Server Object Explorer to an SSDT Project

Am I the last person on earth to have learned about this feature? I went so far as to raise a connect item requesting something similar.

Here’s the scenario:

SSOX and Solution Explorer

 

I have an SSDT project (containing the Northwind.dacpac from this post), and as you might expect the objects in the SQL Server Object Explorer (SSOX) window correspond exactly to the ones in the solution explorer window.

However, on this project, we’re using a shared development database, and look what we’ve found:

Rogue Table in SSOX

Needless to say, this “temp” table is nowhere to be found in our SSDT project, nor in our source control.

What I learned today from Mr “Microsoft” on Connect is that you can drag and drop the “table” from SSOX right into the appropriate folder in Solution Explorer:

 

And here is Barry’s table, installed in our project, and ready to be added to source control:

SSDT Table Editor and Solution Explorer

Changing the Master Data Services Web Service Account

Not too long ago, I had to change the service account, or rather the “Application Pool Identity” for a customer installation of SQL Server 2012 Master Data Services.

Searching revealed one result from the MSDN forums, which seems to tell only part of the story.

There are several steps to this process; the first is to create a new SQL Server login and add it to the appropriate roles. I already have a domain user called SQLSOPHISTRY\MyNewMdsAccount, so now we need to create the SQL Server login:

New Login Dialog SQL Server 2012

Creating the new login

and add our new login to the mds_exec role in our MDS database:

Add the user to the MDS database

Adding the new login to the mds_exec role in the MDS database
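For reference, the T-SQL equivalent of these two dialogs is something like the following – the MDS database name is an assumption (use whatever you chose at installation), and the domain account is the one above:

-- Create the login for the new service account...
USE [master];
CREATE LOGIN [SQLSOPHISTRY\MyNewMdsAccount] FROM WINDOWS;
GO
-- ...and add it to the mds_exec role in the MDS database
USE [MDS];
CREATE USER [SQLSOPHISTRY\MyNewMdsAccount] FOR LOGIN [SQLSOPHISTRY\MyNewMdsAccount];
ALTER ROLE [mds_exec] ADD MEMBER [SQLSOPHISTRY\MyNewMdsAccount];
GO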

The next step is to add the new login to the “MDS_ServiceAccounts” domain group:

Add the MDS app pool account to the MDS_ServiceAccounts domain group

Adding the MDS app pool account to the MDS_ServiceAccounts domain group.

The next step is to change the application pool identity using Internet Information Services Manager, or “inetmgr”, as you may know it.

The application pool account is in the “Advanced” settings of the application pool:

Application Pool Advanced Settings

We need to click on the ellipsis to change the “Identity” property of the Application Pool:

Changing the App Pool User with inetmgr

The final step is to recycle the MDS Application Pool:

Recycle the MDS Application Pool

After this, you should be able to see the MDS Service running as the new account.

Installing SQL Server Master Data Services Web Service Prerequisites with PowerShell

This script will “switch on” all the required Windows Roles and Features listed at Web Application Requirements (Master Data Services) on a new installation of Windows Server 2012. Depending on what you selected when you were installing Windows and/or SQL Server 2012, some of these items may be present already, but the PowerShell cmdlet Add-WindowsFeature seems to tolerate this.
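The script is essentially a single call to Add-WindowsFeature; something along these lines, although you should cross-check the feature list against the requirements page for your OS version, as my list here is approximate:

# Requires an elevated PowerShell session on Windows Server 2012
Import-Module ServerManager

# IIS and ASP.NET 4.5 features needed by the MDS web application
# (list approximate; verify against the MSDN requirements page)
$features = @(
    'Web-Server', 'Web-Static-Content', 'Web-Default-Doc', 'Web-Http-Errors',
    'Web-Net-Ext45', 'Web-Asp-Net45', 'Web-ISAPI-Ext', 'Web-ISAPI-Filter',
    'Web-Windows-Auth', 'Web-Filtering',
    'Web-Stat-Compression', 'Web-Dyn-Compression',
    'Web-Mgmt-Console',
    'NET-Framework-45-ASPNET',
    'WAS-Process-Model', 'WAS-NET-Environment', 'WAS-Config-APIs'
)

# Add-WindowsFeature tolerates features that are already installed
Add-WindowsFeature -Name $features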

If you’re running the web client on the same machine, you’ll need Silverlight 5 as well.

Creating a Northwind.dacpac with sqlpackage.exe

If you’ve been working with SQL Server for a while, you’ll be familiar with the Northwind sample database, which has featured in many an example over the years.

The most recent version I could find to download was here: Northwind and pubs Sample Databases for SQL Server 2000

When you click “Download”, you’ll get an msi (!), which expands into a folder on your C: drive:

Folder with SQL 2000 Sample Files

We’re not interested in the mdf and ldf files, since we won’t be able to attach them to our SQL Server 2012 instance in any case. However, we also have scripts to install the Northwind and pubs databases.

This is the instnwnd.sql script, open in SQL Server 2012 Management Studio. We have to make a small change, to remove the two sp_dboption entries, as this system stored procedure is not available in SQL Server 2012.

sp_dboption not available in SQL Server 2012
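For the curious, the lines in question look something like this (quoted from memory, so the exact text may differ), and if you want the behaviour they were setting you can substitute the modern equivalent rather than simply deleting them:

-- Old SQL Server 2000 syntax in instnwnd.sql – remove these lines:
-- exec sp_dboption 'Northwind','trunc. log on chkpt.','true'
-- exec sp_dboption 'Northwind','select into/bulkcopy','true'

-- Rough modern equivalent, should you want it:
ALTER DATABASE Northwind SET RECOVERY SIMPLE;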

 

Now we can use sqlpackage.exe to create a dacpac from our Northwind database.

The full command line is "%ProgramFiles(x86)%\Microsoft SQL Server\110\DAC\bin\sqlpackage.exe" /Action:Extract /SourceServerName:(local) /SourceDatabaseName:Northwind /TargetFile:Northwind.dacpac /p:ExtractAllTableData=true.

If we’ve typed it correctly, we should see something like the following: (click the image to enlarge)

sqlpackagecmd

 

And now we have a .dacpac we can deploy. I’m using the “Deploy Data-tier Application” wizard here.

Deploying the dacpac using the wizard

And that’s it, we’re done. The Northwind database is deployed along with all its data.

You can download my copy of Northwind.dacpac from here, but I think it’s © Microsoft, so don’t go ripping it on to DVDs (or even floppy disks!) to sell on eBay.

Validating UK National Insurance Numbers with TSQL

Quite a common thing to have to do for those of us in the UK.

A National Insurance number consists of a two-letter prefix, a six-digit number, and a one-character suffix.

There have been a few published guidelines over the years, notably the “Government Data Standards Catalogue”, which sadly seems to have sunk without trace. The most up-to-date reference I could find was “NIM39110 – National Insurance Numbers (NINOs): Format and Security: What a NINO looks like”, which states that a National Insurance Number “is made up of two letters, six numbers and a final letter, which is always A, B, C, or D.” It turns out that there are scenarios where a space is valid as the suffix too.

It goes on to say that “The characters D, F, I, Q, U and V are not used as either the first or second letter of a NINO prefix. The letter O is not used as the second letter of a prefix.”

Bizarrely, there is some redacted text after this; marked with “(This text has been withheld because of exemptions in the Freedom of Information Act 2000) “. One wonders what it is they’ve got to hide…

Anyway, back to the topic at hand: here’s a quick scalar function to validate these numbers.
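This is a minimal sketch implementing only the rules quoted above – the administrative exclusions of certain whole prefixes (which HMRC also documents) aren’t handled, and it assumes a case-insensitive collation or upper-case input:

CREATE FUNCTION dbo.IsValidNINO (@NINO CHAR(9))
RETURNS BIT
AS
BEGIN
    DECLARE @IsValid BIT = 0;

    IF  @NINO LIKE '[A-Z][A-Z][0-9][0-9][0-9][0-9][0-9][0-9][A-D ]'  -- 2 letters, 6 digits, suffix A-D or space
    AND @NINO NOT LIKE '[DFIQUV]%'                                   -- D, F, I, Q, U, V never appear in the prefix
    AND @NINO NOT LIKE '_[DFIQUV]%'
    AND @NINO NOT LIKE '_O%'                                         -- O is never the second prefix letter
        SET @IsValid = 1;

    RETURN @IsValid;
END
GO

So dbo.IsValidNINO('AB123456C') returns 1, while 'DB123456C' and 'AO123456C' both return 0.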

 

I guess this is just one example of why so many people have asked for native Regex support in TSQL.