
Persisting data in docker containers – Part Three

Last week in Part Two I went through how to create named volumes and map them to containers in order to persist data.

However, there is also another option available in the form of data volume containers.

The idea here is that we create a volume in a container and then mount that volume into other containers. Confused? Yeah, me too, but it’s best to learn by doing, so let’s run through how to use these things now.

So first we create the data volume container with a data volume: –

docker create -v C:\SQLServer --name Datastore microsoft/windowsservercore

Note that we’re not using an image running SQL Server. The windowsservercore & mssql-server-windows images share common layers, so we can save space by using the windowsservercore image for our data volume container.
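If you want to see that layer sharing for yourself (assuming both images have already been pulled locally), you can compare their layer histories: –

```shell
# List both images - despite the large reported sizes, shared base
# layers mean the second image adds far less disk usage than its total
docker images microsoft/windowsservercore
docker images microsoft/mssql-server-windows

# Show the layers of each image - layers with matching sizes at the
# bottom of both listings are stored once on disk and shared
docker history microsoft/windowsservercore
docker history microsoft/mssql-server-windows
```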

Now create a container and mount the volume from the data volume container into it: –

docker run -d -p 16789:1433 --env ACCEPT_EULA=Y --env sa_password=Testing11@@ --volumes-from Datastore --name testcontainer microsoft/mssql-server-windows

Actually, let’s also create another container but not start it up: –

docker create -p 16789:1433 --env ACCEPT_EULA=Y --env sa_password=Testing11@@ --volumes-from Datastore --name testcontainer2 microsoft/mssql-server-windows

So we now have three containers: –

Let’s confirm that the volume is within the first container: –

docker exec -i testcontainer powershell
ls

Update – April 2018
Loopback has now been enabled for Windows containers, so we can use localhost,16789 to connect locally. You can read more about it here
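With loopback enabled, a quick way to test the connection from the host without opening SSMS is sqlcmd (assuming the SQL client tools are installed on the host) – note that SQL Server tools use a comma, not a colon, to separate host and port: –

```shell
# Connect to the containerised instance through the mapped port
# and run a quick sanity-check query
sqlcmd -S localhost,16789 -U sa -P Testing11@@ -Q "SELECT @@SERVERNAME;"
```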

Ok, let’s get the private IP of the first container and connect to it: –

docker inspect testcontainer
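docker inspect dumps a large JSON document; if all you want is the IP, a --format filter will pull it straight out. The nat network name below assumes the container is on the default NAT network for Windows containers: –

```shell
# Extract just the container's private IP from the default nat network
docker inspect --format "{{.NetworkSettings.Networks.nat.IPAddress}}" testcontainer
```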

Once you have the IP, connect to the SQL instance within the container and create a database (same as in Part One): –

USE [master];
GO
 
CREATE DATABASE [TestDB]
    ON PRIMARY
(NAME = N'TestDB', FILENAME = N'C:\SQLServer\TestDB.mdf')
    LOG ON
(NAME = N'TestDB_log', FILENAME = N'C:\SQLServer\TestDB_log.ldf')
GO

USE [TestDB];
GO
 
CREATE TABLE dbo.testtable
(ID INT);
GO
 
INSERT INTO dbo.testtable
(ID)
VALUES
(10);
GO 100

Now, let’s stop the first container and start the second container: –

docker stop testcontainer

docker start testcontainer2

And let’s have a look in the container: –

docker exec -i testcontainer2 powershell

ls

cd sqlserver

ls

So the files of the database that we created in the first container are available to the second container via the data container!

Ok, so that’s all well and good but why would I want to use data volume containers instead of the other methods I covered in Part One & Part Two?

The docker documentation here says that it’s best to use a data volume container; however, it doesn’t give a reason why!

The research that I’ve been doing into this has brought me to discussions where people say that there’s no point in using data volume containers and that just using volumes will do the trick.

I tend to agree that there’s not much point in using data volume containers; volumes will give you everything that you need. But it’s always good to be aware of the different options that a technology can provide.

So which method of data persistence would I recommend?

I would recommend using volumes mapped from the host if you’re going to need to persist data with containers. By doing this you can have greater resilience in your setup by separating your database files from the drive that your docker system is hosted on. I also prefer to have greater control over where the files live, purely because I’m kinda anal about these things 🙂
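As a reminder of what that looks like (the host path and container name below are just examples), mounting a host directory uses the same -v switch, but with a path on the host on the left-hand side: –

```shell
# Map a directory on the host (left of the second colon) into the
# container (right), so the database files live outside the docker
# ecosystem entirely
docker run -d -p 15789:1433 -v C:\HostData\SQLServer:C:\SQLServer --env ACCEPT_EULA=Y --env sa_password=Testing11@@ --name hostmounted microsoft/mssql-server-windows
```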

I know there’s an argument for using named volumes as it keeps everything within the docker ecosystem but for me, I don’t really care. I have control over my docker host so it doesn’t matter that there’s a dependency on the file system. What I would say though is, try each of these methods out, test thoroughly and come to a decision on your own (there, buck successfully passed!).

Thanks for reading!


Monday Coffee: Alerts & Checks

Hope you all had a good weekend, the Lions won so I’m pretty happy.

Over the weekend I was thinking about the alert system that I have in place at my current position. A good alerting system is (obviously) essential for any DBA to have in place, but we all know that’s not enough, right?

No matter how good your alerts are, detailed monitoring of your systems is also essential.

Detailed monitoring will allow you to catch issues before they escalate, taking you from “fire fighting” issues to actively working towards keeping your system stable.

However, the processes around your monitoring system(s) are just as important. I’ve worked for companies where it was a requirement to spend pretty much all of each morning meticulously checking off tasks in Excel spreadsheets in order to retain a detailed history.

For me, that’s an example of a bad monitoring process. There are a lot of very good monitoring tools out there that automatically record stats for us and provide reporting.

Monitoring tools are there to allow us as DBAs to quickly observe and review our systems, and that good work is wiped out by having to manually enter data. We don’t want to be performing manual tasks, we want to be working on our current projects.

So if you’re in a position where you need to be entering data manually, you should check out dbatools.io

dbatools is a project written by DBAs for DBAs. There are over 200 commands that will allow you to quickly analyse your suite of servers, pulling back whatever stats you need. Do away with that manual data entry, there’s a better way 🙂

Have a good week!


Friday Reading 2017-06-30

Second test for the British & Irish Lions tomorrow, they need this one to keep the series alive. I’ll be up early to watch it tomorrow, the team selection is here.

But anyway, this week I’ve been reading…

Using SQL Server Diagnostics
Arun Sirpal goes through how to use a new extension for SSMS that analyses SQL Server dump files

PowerShell Module for the SQL Server Diagnostics API – 1st Command Get-SQLDiagRecommendations
Continuing to look at the new SQL Server Diagnostics tool, Rob Sewell goes through a PowerShell module he’s built to interact with the API

Demystifying Microsoft Distributed Transaction Coordinator and SQL Server HA Configurations
Allan Hirt clarifies an area that has always been a bit of a black box for me: Distributed Transactions

SQL Server 2012 SP4 now to be released in September 2017
I wonder what they found when testing πŸ™‚

‘Vaccine’ created for huge cyber-attack
There’s a defence against the latest ransomware, but that doesn’t mean you shouldn’t be fully patched

Have a good weekend!


Persisting data in docker containers – Part Two

Last week in Part One I went through how to mount directories from the host server into a docker container in order to persist data.

However, you don’t have to do this in order to share volumes between containers. Another method is to create named volumes within the docker ecosystem (as it were) and then map those volumes into containers upon creation.

Here’s how to do it.

First we create the volume that we want to share within docker: –

docker volume create sqldata
docker volume ls


Now let’s create two containers both referencing the volume. One will be up and running whilst the other remains in the stopped state: –

docker run -d -p 15789:1433 -v sqldata:C:\sqldata --env ACCEPT_EULA=Y --env sa_password=Testing11@@ --name testcontainer microsoft/mssql-server-windows


docker create -p 15789:1433 -v sqldata:C:\sqldata --env ACCEPT_EULA=Y --env sa_password=Testing11@@ --name testcontainer2 microsoft/mssql-server-windows


Let’s have a look in the first container to see if the volume is there: –

docker exec -i testcontainer powershell


Update – April 2018
Loopback has now been enabled for Windows containers, so we can use localhost,15789 to connect locally. You can read more about it here

Ok, it’s there. So grab the private IP address of the container so that we can connect to it in SSMS: –

docker inspect testcontainer


Now we’ll create a database with its files in that location: –

USE [master];
GO
 
CREATE DATABASE [TestDB]
    ON PRIMARY
(NAME = N'TestDB', FILENAME = N'C:\sqldata\TestDB.mdf')
    LOG ON
(NAME = N'TestDB_log', FILENAME = N'C:\sqldata\TestDB_log.ldf')
GO

USE [TestDB];
GO
 
CREATE TABLE dbo.testtable
(ID INT);
GO
 
INSERT INTO dbo.testtable
(ID)
VALUES
(10);
GO 100


Right, now let’s blow that first container away and spin up the second one: –

docker stop testcontainer

docker rm testcontainer

docker start testcontainer2


Hmm, all looks good. But let’s check that the volume is there with the database’s files: –

docker exec -i testcontainer2 powershell

cd sqldata

ls


Cool! The files are there, so let’s connect to the SQL instance within the second container and see if we can attach the database: –

docker inspect testcontainer2


Let’s try the attach: –

USE [master]
GO
CREATE DATABASE [TestDB] ON 
( FILENAME = N'C:\sqldata\TestDB.mdf' ),
( FILENAME = N'C:\sqldata\TestDB_log.ldf' )
 FOR ATTACH
GO


Awesome stuff! We’ve got a database that was created in one container successfully attached in another.

So at this point you may be wondering what the advantage is of doing this over mounting folders from the host? Well, to be honest, I really can’t see what the advantages are.

The volume is completely contained within the docker ecosystem so if anything happens to the docker install, we’ve lost the data. OK, OK, I know it’s in C:\ProgramData\docker\volumes\ on the host but still, I’d prefer to have more control over its location.
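You can confirm exactly where a named volume lives with docker volume inspect – the Mountpoint field in the output is the path on the host’s file system: –

```shell
# Show the named volume's metadata, including the Mountpoint
# where its data is actually stored on the host
docker volume inspect sqldata
```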

I like the idea of mounted volumes better if I’m honest. I can specify where my database files are with much more control and I can also have access if needed.

However, each to their own personal preference and if you have a good reason for using named volumes over mounted volumes, let me know πŸ™‚

Thanks for reading!


Monday Coffee: Attending Conferences

Last week I asked the following question on Twitter: –

I was interested to see the responses as I wanted to know how more experienced presenters get the time off to speak at so many events. Up until now I’ve been using annual leave, and this was a way of gauging the general standard out there.

Well, the results showed that there is no general standard.

I had a lot of responses saying that companies would give time off and pay for travel expenses as it was viewed as training but then there were others that said the complete opposite (and everything in-between).

Btw, I’m discounting the consultants that replied. You run your own companies! πŸ™‚

My personal opinion is that attending conferences, even as a speaker, is training. I always go to other sessions when I’ve been speaking at events, I’m not going to stay up in the speaker room all day when there are great sessions going on!

However, I can also see the company side of things, especially if there was no formal agreement in the employment contract. You are out of the office after all, and for at least part of the time, unable to work remotely (if, say, needed on call).

So, I would suggest having a set amount of days that I can use to attend conferences, along with the option of working around the hours that I’m off (if needed). That way the company gets the work they need from me and I can attend the events that I wish to.

What does your company offer?