Query Store functionality in earlier versions of SQL Server

One of the best features introduced in SQL Server 2016 was the Query Store. Having the ability to see which queries had regressed (gotten bad plans) over a period of time was pretty much a game changer imho. But many of us are still labouring away on earlier versions of SQL Server so we don’t get to work with this great new feature.

Until now, with the release of OpenQueryStore

If you haven’t heard of OpenQueryStore, it’s an open source project designed to, you guessed it, bring Query Store functionality to earlier versions of SQL Server (2008 to 2014).

When I heard about this project I immediately went onto the GitHub site, downloaded the repository and read through the installation instructions. I’ve been playing around with it for a while and this week was given a preview of the v2.0 release (which has now been released).

v2.0 provides two different install options for OpenQueryStore: Classic or Centralized.

Classic behaves in a similar fashion to Query Store in 2016 by monitoring an individual database, whereas Centralized is installed in one “management” database and you specify which databases in the SQL instance you wish to monitor.

I went for Centralized mode and installation is a cinch: just download the repo and then run the Install.ps1 script: –

.\Install.ps1 -SqlInstance "SQL2012" -Database "DBA" -OQSMode "Centralized" -SchedulerType "SQL Agent"

After the script has run, the installer gives advice on what to do next: –

So I needed to INSERT a row into oqs.[monitored_databases] for each database I wanted to monitor, set up a schedule for the SQL Agent job (I set the job to run every minute), then run the UPDATE against oqs.[collection_metadata], and OpenQueryStore will begin collecting data.
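As a rough sketch, those steps look something like the below. The column names ([database_name] and [collection_active]) are assumptions based on my install; the installer output gives you the exact statements to run for your version: –

```sql
-- Register each database that OpenQueryStore should monitor
-- (repeat the INSERT for every database you want covered)
INSERT INTO [oqs].[monitored_databases] ([database_name])
VALUES (N'MyAppDatabase');

-- Activate collection once the SQL Agent job schedule is in place
UPDATE [oqs].[collection_metadata]
SET [collection_active] = 1;
```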


A word of caution here. I used the SQL Agent data collection method. There is another option to run using Service Broker, but if you do this the install script will run: –

ALTER DATABASE [<your db>] SET ENABLE_BROKER;

This requires an exclusive lock on the database so watch out!


After a while you’ll be able to run the reports that come with the download to get an overview of your system. This is the main dashboard: –

OpenQueryStore Main Dashboard

Here’s the Wait Stats report: –

OpenQueryStore Wait Statistics

These reports are great for a quick overview of the monitored database. They’re nicely laid out, and with a quick glance I can see if there are any immediate causes for concern.

However, the real benefit of OpenQueryStore is in the data collected. The installation creates the following objects within the database: –

  • [oqs].[activity_log]
  • [oqs].[collection_metadata]
  • [oqs].[intervals]
  • [oqs].[monitored_databases]
  • [oqs].[plan_dbid]
  • [oqs].[plans]
  • [oqs].[queries]
  • [oqs].[query_runtime_stats]
  • [oqs].[wait_stats]

So there’s an absolute wealth of data to dig through! The reports provide query IDs, so I can drop one into a query like this: –

DECLARE @queryID SMALLINT;
SET @queryID = 1; -- replace with a query_id taken from the reports

SELECT
    rs.[query_id]
    ,rs.[interval_id]
    ,rs.[last_execution_time]
    ,rs.[execution_count]
    ,rs.[avg_rows]
    ,rs.[last_logical_reads]
    ,rs.[avg_logical_reads]
    ,rs.[last_logical_writes]
    ,rs.[avg_logical_writes]
    ,q.[query_statement_text]
    ,p.[plan_handle]
FROM [oqs].[query_runtime_stats] rs
INNER JOIN [oqs].[queries] q ON rs.[query_id] = q.[query_id]
INNER JOIN [oqs].[plans] p ON q.[plan_id] = p.[plan_id]
WHERE rs.[query_id] = @queryID
ORDER BY rs.[interval_id] DESC;
GO

And then view all the execution stats of this query. Pretty cool, eh?
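The [plan_handle] column is handy too. As an illustrative sketch (and with the caveat that sys.dm_exec_query_plan only reads from the plan cache, so an aged-out plan comes back as NULL), you can feed it straight into the DMV to pull back the plan XML: –

```sql
-- Pull the execution plan XML for a plan_handle stored by OpenQueryStore.
-- Note: sys.dm_exec_query_plan only returns plans still in the plan cache.
DECLARE @planHandle VARBINARY(64);

SELECT TOP 1 @planHandle = p.[plan_handle]
FROM [oqs].[plans] p;

SELECT qp.[query_plan]
FROM sys.dm_exec_query_plan(@planHandle) qp;
```

Clicking the XML result in SSMS opens the graphical plan, which makes comparing plans between intervals nice and easy.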

If you’re working with SQL Server versions 2008 to 2014 I’d highly recommend that you install this on a development box and start investigating the data that it’s collecting. As with any new system, set it up and monitor it to see what it’s doing, but I haven’t seen anything untoward.

The project is in (very) active development so keep an eye on the project’s Twitter account for more updates.

Thanks for reading!

Friday Reading 2017-08-18

Cool week, I’ve been selected to present at SQL Saturday Holland & SQL Saturday Denmark this year. September and October are going to be busy months!

Anyway, this week I’ve been reading…

Open Query Store v1.1 has landed!
v1.1 of the very cool project to enable Query Store functionality in versions of SQL Server before 2016 has been released

Working with Very Large Tables Like a Pro in SQL Server 2016
Guy Glantser (t) presenting on working with very large tables. What’s cool about this is that he goes over new functionality that can help, stretch databases for example

SQL Server 2017 – New T-SQL Functions
Rob Reid (t) goes over some of the new T-SQL functions available in SQL Server 2017

Azure Cloud “Fear” Busting #4 – Losing Control?
Arun’s (t) final post in his Azure Fear Busting series (you’ve read the other ones right?)

Announcing the new release of Docker Enterprise Edition
Official Docker blog detailing what’s new in Docker EE v17.06

Have a good weekend!

Attaching databases via a dockerfile


There’s been an update posted about this topic here: –

Attaching databases via a dockerfile – UPDATE


Last week I presented my session on SQL Server & Containers for the PASS Virtualization Group and during my prep I noticed that there’s some functionality available to Windows containers and not Linux containers.

One of the (if not the) main benefits of working with SQL in a container is that you can create a custom image to build containers from that has all of your development databases available as soon as the container comes online.

This is really simple to do with Windows containers. Say I want to attach DatabaseA that has one data file (DatabaseA.mdf) and a log file (DatabaseA_log.ldf): –

ENV attach_dbs="[{'dbName':'DatabaseA','dbFiles':['C:\\SQLServer\\DatabaseA.mdf','C:\\SQLServer\\DatabaseA_log.ldf']}]"

Nice and simple! One line of code and any containers spun up from the image this dockerfile creates will have DatabaseA ready to go.
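For context, the whole dockerfile can be as small as this. It’s a sketch: the microsoft/mssql-server-windows-developer base image, the sa password, and the C:\SQLServer path are assumptions for illustration, so adjust them to your environment: –

```dockerfile
# Sketch of a Windows container dockerfile that attaches DatabaseA on startup.
# Base image, password, and paths are assumptions - adjust to your environment.
FROM microsoft/mssql-server-windows-developer

ENV ACCEPT_EULA=Y
ENV sa_password=Testing11@@

# Copy the database files from the build context into the image
COPY DatabaseA.mdf C:\\SQLServer\\
COPY DatabaseA_log.ldf C:\\SQLServer\\

# Tell the SQL Server startup script which databases to attach
ENV attach_dbs="[{'dbName':'DatabaseA','dbFiles':['C:\\SQLServer\\DatabaseA.mdf','C:\\SQLServer\\DatabaseA_log.ldf']}]"
```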

However this functionality is not available when working with Linux containers. Currently you cannot use an environment variable to attach a database to a SQL instance running in a Linux container.

This was a problem for me as I wanted to change things up a little for the Virtualization Group’s webinar. I wanted to show all the code in my slides running on Windows Server but do my demos on my Windows 10 desktop, working with Linux containers. I wanted to do this as I thought it would be cool to show how you can work with SQL on Linux from Windows.

I started doing some research online and there are different workarounds for attaching a database to SQL in a Linux container, but they all involved separate scripts outside of the dockerfile. I wanted to keep things simple and only show minor changes from the Windows container version, so I had to get a bit creative.

Here’s what I came up with: –

HEALTHCHECK --interval=10s  \
	CMD /opt/mssql-tools/bin/sqlcmd -S . -U sa -P Testing11@@ \
		-Q "CREATE DATABASE [DatabaseA] ON (FILENAME = '/var/opt/sqlserver/DatabaseA.mdf'),(FILENAME = '/var/opt/sqlserver/DatabaseA_log.ldf') FOR ATTACH"

EDIT – 2018-12-11 – Finally came back to this and blogged about attaching databases via a script here

A bit more involved but it performs the same functions as the attach_dbs environment variable in the dockerfile for Windows containers. Here’s what each part of the code does: –

# Instruct docker to wait for 10 seconds (to allow SQL to initialise) and then perform a check to ensure the container is running as expected
HEALTHCHECK --interval=10s

# Use sqlcmd to connect to the SQL instance within the container
CMD /opt/mssql-tools/bin/sqlcmd -S . -U sa -P Testing11@@

# Runs a SQL script to attach the database
-Q "CREATE DATABASE [DatabaseA] ON (FILENAME = '/var/opt/sqlserver/DatabaseA.mdf'),(FILENAME = '/var/opt/sqlserver/DatabaseA_log.ldf') FOR ATTACH"

So that’s how you can get the same result, whether you are working with Linux or Windows containers: an image from which you can create containers with DatabaseA available on startup, built by running: –

docker build -t demoimage <pathtodockerfilelocation>

If you want to see the full dockerfiles, I’ve made both the Windows and Linux versions that I use for my demos available on my GitHub here.

Thanks for reading!

Monday Coffee: Time Off

Nice to be back after a week off. I’ve been writing three posts a week (sometimes more) since the beginning of the year so decided last week that I’d take an (albeit short) break.

It got me thinking about actual time off and how different people deal with taking a break from work. I have to admit, going on leave for me ends up with me working more in the lead-up to it in order to get everything done before I go, and then working more to catch up once I’m back.

It shouldn’t be that way, should it?

I know I’m not the only one who this happens to and I don’t really have an answer for it. I don’t think I could just drop everything before going on leave but maybe there’s an answer to playing catch up when I’m back.

I once saw a senior manager in a previous company come back from leave, highlight all his unread emails and promptly delete them. When I asked him about it (I was standing at his desk when he did it) he said that anything that was that important would be sent again.

I like that approach as he’s exactly right. We shouldn’t be expected to trawl through (sometimes hundreds of) emails once we’re back from holiday.

Ahh, what am I saying? There’s no way I’d do that; I’d have kept up to date with my email whilst on holiday 🙂

Have a good week!

Friday Reading 2017-08-04

It’s a bank holiday weekend here in Ireland so I’m looking forward to having the extra day off. Bought a new computer this week so am going to spend a bit of time setting that up (one of my favourite things to do).

One thing to mention is that next week (on the 9th @ 17:00 GMT) I’ll be presenting my SQL Server & Containers session for the PASS Virtualisation Virtual Chapter. Really looking forward to it, you can find out more details here.

Anyway, this week I’ve been reading…

Kubernetes Interactive Tutorials
James Anderson (b|t) recommended this to me a while back but I just haven’t had the chance to look at it. The tutorials are a really good way of dipping your toe in the Kubernetes water.

SQL Server 2017 RC2 Now Available
MS Official SQL Server Blog announcing SQL Server 2017 RC2.

Upgrading SQL Server–Day 1
First part of Glenn Berry’s blog series on upgrading and migrating to SQL Server 2016/2017.

Sudo in Powershell
Here’s a module that replicates sudo functionality in PowerShell (warning: I haven’t had a chance to fully test this yet).

Jurassic Park: 10 things you might have missed
Fun Den Of Geek article to round the week off.

Have a good weekend!