
Creating SQL Containers from a Dockerfile

I’ve been playing around with SQL containers on Windows Server 2016 a lot recently and, well, building empty SQL containers is all fine and dandy but it’s kinda missing the point. What containerization allows you to do is build custom images that are designed for your environment, say with a bunch of databases ready to go (for QA, dev, testing etc.), from which containers can be built in a very short amount of time.

So if you need a new SQL instance spun up for testing? Having a pre-built custom image ready will allow you to do that very rapidly and the simplest way to build a custom image is from a dockerfile.

So let’s go through the process.

This post assumes that you have the docker engine already installed on Windows Server 2016. If you haven’t set that up you can follow the instructions on how to do it here.

I’m also going to be running all my powershell commands in a remote session, if you don’t know how to set that up the instructions are here.

First thing to do is verify that your docker engine is running:-

docker version


And that you have a vanilla SQL Server image available:-

docker images


If you don’t, you can follow the instructions here to pull an image from the docker repository.

Now create a directory on your server to hold your dockerfile and database files. I’m going to create C:\Docker\Demo

mkdir c:\docker\demo

Ok, your server is all good to go. What I’m going to do now is:-

  • jump onto my local instance of SQL 2016
  • create a few databases
  • shutdown my instance of SQL
  • copy the database files to my server
  • create a dockerfile to build an SQL container image with those databases available
  • build a new SQL container

Ok, so in my instance of SQL I’m going to run:-

USE [master]
GO

CREATE DATABASE [DatabaseA] ON PRIMARY 
(	NAME		= N'DatabaseA'
	,FILENAME	= N'C:\SQLServer\SQLData\DatabaseA.mdf'
	,SIZE		= 8192 KB
	,MAXSIZE	= UNLIMITED
	,FILEGROWTH = 65536 KB) 
LOG ON 
(	NAME		= N'DatabaseA_log'
	,FILENAME	= N'C:\SQLServer\SQLLog\DatabaseA_log.ldf'
	,SIZE		= 8192 KB
	,MAXSIZE	= 2048 GB
	,FILEGROWTH = 65536 KB)
GO

CREATE DATABASE [DatabaseB] ON PRIMARY
(	NAME		= N'DatabaseB'
	,FILENAME	= N'C:\SQLServer\SQLData\DatabaseB.mdf'
	,SIZE		= 8192 KB
	,MAXSIZE	= UNLIMITED
	,FILEGROWTH = 65536 KB) ,
(	NAME		= N'DatabaseB_Data'
	,FILENAME	= N'C:\SQLServer\SQLData\DatabaseB_Data.ndf'
	,SIZE		= 8192 KB
	,MAXSIZE	= UNLIMITED
	,FILEGROWTH = 65536 KB)
LOG ON 
(	NAME		= N'DatabaseB_log'
	,FILENAME	= N'C:\SQLServer\SQLLog\DatabaseB_log.ldf'
	,SIZE		= 8192 KB
	,MAXSIZE	= 2048 GB
	,FILEGROWTH = 65536 KB)
GO

CREATE DATABASE [DatabaseC] ON PRIMARY 
(	NAME		= N'DatabaseC'
	,FILENAME	= N'C:\SQLServer\SQLData\DatabaseC.mdf'
	,SIZE		= 8192 KB
	,MAXSIZE	= UNLIMITED
	,FILEGROWTH = 65536 KB) 
LOG ON 
(	NAME		= N'DatabaseC_log'
	,FILENAME	= N'C:\SQLServer\SQLLog\DatabaseC_log.ldf'
	,SIZE		= 8192 KB
	,MAXSIZE	= 2048 GB
	,FILEGROWTH = 65536 KB)
GO

Really simple code just to create three databases. One of them (DatabaseB) has an extra data file, as I want to show how to add databases with multiple data files to a SQL container via a dockerfile.

Once the databases are created, shutdown the instance either through the SQL config manager or run:-

SHUTDOWN WITH NOWAIT

N.B.- This is my local dev instance! Do not run this on anything other than your own dev instance!

Next thing to do is create our dockerfile. Open up your favourite text editor (mine is Notepad++, I’ve tried others but it simply is the best imho) and drop in:-

# using vNext image
FROM microsoft/mssql-server-windows

# create directory within SQL container for database files
RUN powershell -Command (mkdir C:\\SQLServer)

# copy the database files from host to container
COPY DatabaseA.mdf C:\\SQLServer
COPY DatabaseA_log.ldf C:\\SQLServer

COPY DatabaseB.mdf C:\\SQLServer
COPY DatabaseB_Data.ndf C:\\SQLServer
COPY DatabaseB_log.ldf C:\\SQLServer

COPY DatabaseC.mdf C:\\SQLServer
COPY DatabaseC_log.ldf C:\\SQLServer

# set environment variables
ENV sa_password=Testing11@@

ENV ACCEPT_EULA=Y

ENV attach_dbs="[{'dbName':'DatabaseA','dbFiles':['C:\\SQLServer\\DatabaseA.mdf','C:\\SQLServer\\DatabaseA_log.ldf']},{'dbName':'DatabaseB','dbFiles':['C:\\SQLServer\\DatabaseB.mdf','C:\\SQLServer\\DatabaseB_Data.ndf','C:\\SQLServer\\DatabaseB_log.ldf']},{'dbName':'DatabaseC','dbFiles':['C:\\SQLServer\\DatabaseC.mdf','C:\\SQLServer\\DatabaseC_log.ldf']}]"

What this file is going to do is create a container based on the lines of code in the file and then save it as a new custom image (the intermediate container is deleted at the end of the process). Let’s go through it line by line…

FROM microsoft/mssql-server-windows
This is saying to base our image on the original image that we pulled from the docker hub.

RUN powershell -Command (mkdir C:\\SQLServer)
Within the container create a directory to store the database files

COPY DatabaseA.mdf C:\\SQLServer…
Each one of these lines copies the database files into the container

ENV sa_password=Testing11@@
Set the SQL instance’s SA password

ENV ACCEPT_EULA=Y
Accept the SQL Server licence agreement (your container won’t run without this)

ENV attach_dbs="[{'dbName':'DatabaseA','dbFiles':['C:\\SQLServer\\DatabaseA.mdf'…
And finally, attach each database to the SQL instance
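Hand-writing that attach_dbs JSON is fiddly and easy to typo. As a sketch (the helper function below is mine, not part of the image), you could generate the value from a simple list of databases; the single-quoted JSON style shown above is what the image’s start-up script accepts:

```python
import json

# Hypothetical helper (not part of the Microsoft image): build the
# attach_dbs value from (database name, file list) pairs.
def build_attach_dbs(databases):
    payload = [{"dbName": name, "dbFiles": files} for name, files in databases]
    # json.dumps doubles the backslashes and, with these separators,
    # matches the compact layout in the dockerfile above. Swapping
    # double quotes for single quotes is safe here because database
    # names and Windows file paths contain neither.
    return json.dumps(payload, separators=(",", ":")).replace('"', "'")

dbs = [
    ("DatabaseA", ["C:\\SQLServer\\DatabaseA.mdf",
                   "C:\\SQLServer\\DatabaseA_log.ldf"]),
    ("DatabaseB", ["C:\\SQLServer\\DatabaseB.mdf",
                   "C:\\SQLServer\\DatabaseB_Data.ndf",
                   "C:\\SQLServer\\DatabaseB_log.ldf"]),
    ("DatabaseC", ["C:\\SQLServer\\DatabaseC.mdf",
                   "C:\\SQLServer\\DatabaseC_log.ldf"]),
]

print('ENV attach_dbs="' + build_attach_dbs(dbs) + '"')
```

Paste the printed line straight into the dockerfile; add or remove databases by editing the list rather than wrestling with nested quotes.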


Name the file dockerfile (no extension), then copy it and the database files to your server into the directory created earlier.


Now we can build our custom image. So in your powershell command window, navigate to the directory with the dockerfile in and run:-

docker build -t demo .

This will build a custom docker image running SQL with our databases. The -t flag tags the image as demo, and don’t forget to include the trailing . as it tells the docker engine to use the current directory as the build context, which is where it looks for the file named dockerfile.

Once that’s complete, verify the image has been created:-

docker images


Awesome stuff! We have our custom image. So let’s create a container from it:-

docker run -d -p 15788:1433 --name democontainer demo

This will create and run a new container based off our image with the host server’s port 15788 mapped to the port 1433 within the container. Once that’s complete, verify that the container is running:-

docker ps


Haha! Cool! Also, how quick was that??

We have our container up and running. Let’s connect to it remotely via SSMS and check that the databases are there, using the host server’s IP address and the custom port that we specified when creating the container.
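One thing that catches people out here: SQL Server clients (SSMS included) take a custom port as host,port with a comma, not host:port. A throwaway helper to build the server name you’d paste into SSMS (the IP address below is a made-up example, substitute your container host’s address):

```python
# SQL Server clients expect "host,port" for a non-default port --
# a comma, not a colon.
def sql_server_name(host, port=1433):
    # The default port (1433) needs no suffix.
    return host if port == 1433 else f"{host},{port}"

# Hypothetical host IP; this matches the 15788 mapping used above.
print(sql_server_name("10.0.0.10", 15788))  # → 10.0.0.10,15788
```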

And then have a look in object explorer to verify that the databases are there.

And there you have it. One newly built SQL container from a custom image running our databases.

Imagine being able to spin up new instances of SQL with a full set of databases ready to go in minutes. This is the main advantage that container technology gives you: no more waiting to install SQL and then restore databases. Your dev or QA person can simply run one script and off they go.

I really think this could be of significant benefit to many companies and we’re only just starting to explore what this can offer.

Thanks for reading!


Monday Coffee 2017-02-06

A lot’s been said about last week’s GitLab outage so I’m not going to go over the details here, but I do want to talk about one point that came up while I was watching the guys fix the issue live on youtube.

A lot of the people making comments kept asking if the person who made the original mistake was going to be fired (and I mean a lot of people were asking). Finally one of the guys on the channel responded by saying that this wasn’t a personal mistake, it was a process failure and if the person who had made the original error hadn’t taken a backup before he started this work, they would not have been able to recover anything at all.

I did wonder about who was asking the question though, and I came to the conclusion that it couldn’t have been anyone who’s worked in the tech industry for any sustained period of time.

Show me a senior technical person who has never made a mistake causing an outage and I’ll show you a fibber

People do not get fired for making one mistake, could you imagine? Everyone that I know in tech has made at least one error causing an outage, others (like me) have made more than one. OK, yes, if this was the latest in a line of mistakes that that person had made at GitLab then maybe, but I doubt that it was.

It comes down to the age old adage, learn from your mistakes. Bet that guy at Gitlab won’t make that mistake again πŸ™‚

Have a good (mistake free) week.


Friday Reading 2017-02-03

2017 is trundling along now, this week I’ve been reading…

The GitLab incident report
We’ve all had that moment of “what did I just do??” Here’s how the guys at Gitlab responded to such a moment.

Linux OS Fundamentals for the SQL Admin
I don’t know how I missed this webinar but thankfully it’s been uploaded to youtube. Anthony Nocentino (b|t) goes over linux basics for us SQL DBAs.

How six lines of code + SQL Server can bring Deep Learning to ANY App
Rimma Nehme (t) goes through the what, why & how of using ML with SQL Server.

Stopping or Postponing an upgrade to SQL Database V12
Need to stay on v11 for now? Read this.

Hyper-V & powershell
Good starting reference for anyone using Hyper-V wanting to get into powershell

Have a good weekend!


SQL Containers and Networking

I recently talked with the guys over at SQL Data Partners on their podcast about SQL Server and containers. It was real good fun and I enjoyed chatting with Carlos Chacon (b|t) and Steve Stedman (b|t) about container technology and my experiences with it so far. Would definitely like to go back on (if they’ll have me πŸ™‚ )

Anyway, during the podcast one of the questions that came up was “How do containers interact with the network resources on the host server?”

To be honest, I wasn’t sure. So rather than try and give a half answer I said to the guys that I didn’t know and I’d have to come back to them.

Career Tip – when working with technology it’s always better to say you don’t know but will research and come back with an answer, than it is to try and blag your way through.

Once the podcast recording was over I started to think about it. Now there’s a bit of a clue in the code when you run a container:-

docker run -d -p 15798:1433 --name TestContainer ImageName

The -p 15798:1433 part of the code specifies which port on the host server maps to a port in the container. So there’s a NAT network in there somewhere?

I went off and did a bit of research and I found the following diagram which details how the containers interact with the host at the network layer:-

(Diagram: how containers connect to the host’s network stack. Image source)

In essence the container host’s network connectivity is extended to containers via a Hyper-V virtual switch which containers connect to via either the Host virtual NIC (this is for windows server containers) or a synthetic VM NIC (for Hyper-V containers).

The containers themselves can connect to the host network via different modes. The default is a NAT network created by the docker engine, onto which container endpoints are automatically attached; this allows for port forwarding from the host to the containers (which we saw in the code earlier in this blog).
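A rough way to picture that default NAT mode (the container names and ports below are illustrative, not queried from docker): the engine keeps a table of published ports, and traffic arriving on the host is forwarded to the mapped container port.

```python
# Illustrative model of docker's NAT port forwarding: each
# "docker run -p host:container" effectively adds a row here.
forwarding_table = {
    15788: ("democontainer", 1433),
    15798: ("TestContainer", 1433),
}

def route(host_port):
    """Return (container, container_port) for traffic hitting the host."""
    if host_port not in forwarding_table:
        raise LookupError(f"no container published on host port {host_port}")
    return forwarding_table[host_port]

print(route(15798))  # → ('TestContainer', 1433)
```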

This can all be seen by running the following commands:-

To list the docker networks:-

docker network ls

And there’s our NAT network.

To get the network adapters of a server:-

Get-NetAdapter

There’s the vNIC that the containers use to connect to the virtual switch (I’m running my docker engine in a VM, hence the other hyper-v NIC).

To get the virtual switches of a hyper-v host (remember some hyper-v elements are installed when the container feature is enabled):-

Get-VMSwitch

And there’s the virtual switch.

So there’s how it works! Thanks for reading.


Monday Coffee 2017-01-30

Wow, what a week last week was.

The biggest thing that happened was that I got a session at SQL Saturday Iceland! They had loads of submissions and could only pick around 20 so very chuffed to have been one of those selected, cue smug face.

The event is on the 17th of March in Reykjavik and I’ll be talking about one of my favourite subjects, containers. I’ve pretty much written the presentation at this point but I’ve still got over a month to polish it. One thing I am going to do is get my demos down; I’ve got an Azure account so I’ll be building an environment there to use. However, following the advice that I’ve been given, I’ll also be recording videos of each of the demos so that if the technical gremlins raise their heads during the presentation I won’t have to worry.

It feels like I’ve lived and breathed containers over the last year, it’s a subject that I feel passionate about. I don’t want to go over the top but I think that they’re a game changer, especially when it comes to Dev & Test environments.

Say you have 20 dev and test guys, all working with apps that reference databases in SQL Server. Now whenever a release goes out to production, those environments will also have to be updated to make sure that everyone is working on the same version. There’s tonnes of software out there to automate deployments but what if you could run a powershell script and bam! everyone is working on exactly the same version of the database(s) within minutes.
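As a sketch of what that one script might do (the team names, image name and ports are all invented for the example), you could stamp out one container per person from the same custom image, each published on its own host port:

```python
# Hypothetical helper: build a "docker run" command per team member,
# each mapped to a unique host port, all from the same custom image.
def provision_commands(image, team, base_port=15788):
    commands = []
    for offset, person in enumerate(team):
        host_port = base_port + offset
        commands.append(
            f"docker run -d -p {host_port}:1433 --name sql-{person} {image}"
        )
    return commands

for cmd in provision_commands("demo", ["anna", "ben", "carl"]):
    print(cmd)
```

Everyone ends up on an identical copy of the databases, and refreshing after a release is just a matter of removing the containers and running the script again.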

That’s what containers give you, the ability to quickly and easily spin up custom instances of SQL Server.

Another good example of when containers come into their own is patching. Microsoft now recommends applying every CU to your production environment. Are you really going to get all those other instances of SQL up to the same patch level as well? You should; there’s no point in testing on one patch level of SQL and then deploying to another.

If your dev guys are using containers it’s no problem. They can blow away their old containers and deploy new ones at the higher patch level. Great stuff eh?

There are other advantages, hey there are disadvantages too but I’m not going to go into all of them here. You’ll have to come to my session πŸ™‚

Have a good week!