Running dbatools commands with VS Code tasks

I’ve started to look at the excellent dbatools.io to automate some of the checks that I routinely perform on my SQL instances.

But before I go into this post, I want to say thank you to Cláudio Silva (t). The PowerShell commands below are based on the code he posted in his excellent blog post Have you backed up your SQL logins today?

The first dbatools commands that I looked at are: –

Get-DbaLastBackup
Get-DbaLastGoodCheckDb

These commands do exactly what they say on the tin. Pretty standard stuff for DBAs, but what's cool is how we can use Visual Studio Code to quickly and easily check that all our databases are being backed up and have a recent (good) CHECKDB.

I'm going to set up two scripts to run the dbatools commands against my SQL instances via Visual Studio Code tasks.

I don’t have a central management server so I created a database in my local instance of SQL to hold all my server names: –

CREATE DATABASE [DBA];
GO

USE [DBA];
GO

CREATE SCHEMA [monitoring];
GO

CREATE TABLE [monitoring].[CorporateSQLServers](
	[ServerID] [int] IDENTITY(1,1) NOT NULL,
	[ServerName] [sysname] NOT NULL,
PRIMARY KEY CLUSTERED ([ServerID] ASC) ON [PRIMARY])
ON [PRIMARY]
GO

Once the table was created, I added in all the server names that I wanted to perform my checks against.
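Servers can be added with a simple INSERT; the server names below are just placeholders for your own instances: –

```sql
USE [DBA];
GO

-- Placeholder server names, replace with your own instances
INSERT INTO [monitoring].[CorporateSQLServers] ([ServerName])
VALUES (N'SQLSERVER01'), (N'SQLSERVER02'), (N'SQLSERVER03');
GO
```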

Next thing was to write the scripts. I took Cláudio’s script and modified (actually simplified would be a better description) it, firstly to get the last backup details of my databases: –

#Requires -module dbatools

#Get a list of instances to run the command against
$SQLServers = Invoke-Sqlcmd -ServerInstance "localhost" -Query "SELECT ServerName FROM [DBA].[monitoring].[CorporateSQLServers]"

#For each instance 
$SQLServers | ForEach-Object {
     Get-DbaLastBackup -SqlInstance $_.ServerName
} | Out-GridView -Wait

Then to get the last known good CHECKDB of all my databases: –

#Requires -module dbatools

#Get a list of instances to run the command against
$SQLServers = Invoke-Sqlcmd -ServerInstance "localhost" -Query "SELECT ServerName FROM [DBA].[monitoring].[CorporateSQLServers]"

#For each instance 
$SQLServers | ForEach-Object {
     Get-DbaLastGoodCheckDb -SqlInstance $_.ServerName
} | Out-GridView -Wait

N.B. The -Wait option is there to prevent the grid output from closing when the script is run as a task.

I saved both of these scripts into the directory that Visual Studio Code was pointed at and opened the editor.

Then I opened the command palette by pressing Ctrl + Shift + P and typed Task.

The first option is Configure Task Runner; selecting it opens the tasks.json file within Visual Studio Code. This is where I can create tasks to run my PowerShell scripts.

I edited the file with the following code: –

{
    // See https://go.microsoft.com/fwlink/?LinkId=733558
    // for the documentation about the tasks.json format
    "version": "0.1.0",
    "command": "powershell",
    "isShellCommand": true,
    "tasks" : [
        {
            "taskName": "Corporate Database Backup Checks",
            "args": ["-ExecutionPolicy",
                    "Unrestricted",
                    "-NoProfile",
                    "-File","${cwd}/CorporateSQLChecks/LastFullBackup.ps1"],
            "showOutput": "always",
            "suppressTaskName": true
        },
        {
            "taskName": "Corporate Database Integrity Checks",
            "args": ["-ExecutionPolicy",
                    "Unrestricted",
                    "-NoProfile",
                    "-File","${cwd}/CorporateSQLChecks/LastGoodCheckDB.ps1"],
            "showOutput": "always",
            "suppressTaskName": true
        }
    ]
}

By doing this, I will then get the option to run both my scripts from the Visual Studio Code command palette.
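As an optional extra, the Run Task picker can be bound to a key of your choosing via keybindings.json (the key combination below is just an example): –

```json
[
    { "key": "ctrl+alt+t", "command": "workbench.action.tasks.runTask" }
]
```

Hitting that key combination will then bring up the list of tasks directly.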

Excellent stuff! I now have a quick and easy way to check that all my databases are being backed up and have a recent CHECKDB.

Thanks for reading!

Friday Reading 2017-07-07

Final Lions Test tomorrow! The win last week sets up a thrilling series decider; if they win, they'll be the first Lions team to win a series against New Zealand for 46 years! In the build-up I've been reading…

Setting up Visual Studio Code Tasks
I've been using VS Code for a bit now and this week set up tasks to run some of my PowerShell scripts. Really useful.

Best OS for Docker Host?
A Reddit thread about which OS is recommended for a Docker host.

Building a dedicated backup test server
Chrissy LeMaire goes through how to automate testing your backups using dbatools.io commands.

The Feynman Technique: The Best Way to Learn Anything
Article on a learning technique. The advice about teaching is very true!

SQLSaturday Porto
SQLSaturday #685 has been announced and the call for sessions is open

Have a good weekend!

Persisting data in docker containers – Part Three

Last week in Part Two I went through how to create named volumes and map them to containers in order to persist data.

However, there is also another option available in the form of data volume containers.

The idea here is that we create a volume in a container and then mount that volume into another container. Confused? Yeah, me too, but it's best to learn by doing, so let's run through how to use these things now.

So first we create the data volume container with a data volume: –

docker create -v C:\SQLServer --name Datastore microsoft/windowsservercore

Note that we're not using an image running SQL Server. The windowsservercore & mssql-server-windows images share common layers, so we can save space by using the windowsservercore image for our data volume container.

Now create a container and mount the directory from data volume container to it: –

docker run -d -p 16789:1433 --env ACCEPT_EULA=Y --env sa_password=Testing11@@ --volumes-from Datastore --name testcontainer microsoft/mssql-server-windows

Actually, let’s also create another container but not start it up: –

docker create -p 16789:1433 --env ACCEPT_EULA=Y --env sa_password=Testing11@@ --volumes-from Datastore --name testcontainer2 microsoft/mssql-server-windows

So we now have three containers.

Let’s confirm that the volume is within the first container: –

docker exec -i testcontainer powershell
ls

Update – April 2018
Loopback has now been enabled for Windows containers, so we can use localhost,16789 to connect locally. You can read more about it here

Ok, let’s get the private IP of the first container and connect to it: –

docker inspect testcontainer
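The inspect output is pretty verbose; if you just want the IP address, the --format flag can pull it out directly (this assumes the container is attached to the default nat network used by Windows containers): –

```shell
docker inspect --format "{{.NetworkSettings.Networks.nat.IPAddress}}" testcontainer
```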

Once you have the IP, connect to the SQL instance within the container and create a database (same as in Part One): –

USE [master];
GO
 
CREATE DATABASE [TestDB]
    ON PRIMARY
(NAME = N'TestDB', FILENAME = N'C:\SQLServer\TestDB.mdf')
    LOG ON
(NAME = N'TestDB_log', FILENAME = N'C:\SQLServer\TestDB_log.ldf')
GO

USE [TestDB];
GO
 
CREATE TABLE dbo.testtable
(ID INT);
GO
 
INSERT INTO dbo.testtable
(ID)
VALUES
(10);
GO 100

Now, let’s stop the first container and start the second container: –

docker stop testcontainer

docker start testcontainer2

And let’s have a look in the container: –

docker exec -i testcontainer2 powershell

ls

cd sqlserver

ls

So the files of the database that we created in the first container are available to the second container via the data container!

Ok, so that’s all well and good but why would I want to use data volume containers instead of the other methods I covered in Part One & Part Two?

The Docker documentation here says that it's best to use a data volume container, but it doesn't give a reason as to why!

The research that I’ve been doing into this has brought me to discussions where people say that there’s no point in using data volume containers and that just using volumes will do the trick.

I tend to agree that there’s not much point in using data volume containers, volumes will give you everything that you need. But it’s always good to be aware of the different options that a technology can provide.
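For comparison, here's a minimal sketch of the named volume approach from Part Two; the volume name, container name and host port are just examples: –

```shell
docker volume create sqldata

docker run -d -p 16799:1433 --env ACCEPT_EULA=Y --env sa_password=Testing11@@ -v sqldata:C:\SQLServer --name testcontainer3 microsoft/mssql-server-windows
```

Same persistence, one less container to manage.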

So which method of data persistence would I recommend?

I would recommend using volumes mapped from the host if you’re going to need to persist data with containers. By doing this you can have greater resilience in your setup by separating your database files from the drive that your docker system is hosted on. I also prefer to have greater control over where the files live as well, purely because I’m kinda anal about these things 🙂

I know there's an argument for using named volumes as it keeps everything within the Docker ecosystem but for me, I don't really care. I have control over my Docker host so it doesn't matter that there's a dependency on the file system. What I would say though is: try each of these methods out, test thoroughly and come to a decision on your own (there, buck successfully passed!).

Thanks for reading!

Monday Coffee: Alerts & Checks

Hope you all had a good weekend, the Lions won so I’m pretty happy.

Over the weekend I was thinking about the alert system that I have in place at my current position. A good alerting system is (obviously) essential for any DBA to have in place, but we all know that's not enough, right?

No matter how good your alerts are, detailed monitoring of your systems is also essential.

Detailed monitoring will allow you to catch issues before they become problems, taking you from "fire fighting" to actively working towards keeping your systems stable.

However, the processes around your monitoring system(s) are just as important. I've worked for companies where it was a requirement to spend pretty much all of each morning meticulously checking off tasks in Excel spreadsheets, in order to retain a detailed history.

For me, that's an example of a bad monitoring process. There are a lot of very good monitoring tools out there that automatically record stats for us and provide reporting.

Monitoring tools are there to allow us as DBAs to quickly observe and review our systems, and that good work is wiped out by having to manually enter data. We don't want to be performing manual tasks; we want to be working on our current projects.

So if you're in a position where you need to be entering data manually, you should check out dbatools.io.

dbatools is a project written by DBAs for DBAs. There are over 200 commands that will allow you to quickly analyse your suite of servers, pulling back whatever stats you need. Do away with that manual data entry, there's a better way 🙂

Have a good week!