Deploying SQL Server to an Azure Container Instance using Terraform – Part Two

In a previous post I went through how to deploy SQL Server running in an Azure Container Instance using Terraform.

In that post I used hardcoded values in the various .tf files. This isn’t great to be honest, as in order to change any of those values we’d need to update each .tf file. It would be better to replace the hardcoded values with variables whose values can be set in one separate file.

So let’s replace each hardcoded value in the .tf files with a variable.

The first file to update is the providers.tf file: –

provider "azurerm" {
version         = "1.24.0"
subscription_id = "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"
tenant_id       = "bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb"
client_id       = "cccccccc-cccc-cccc-cccc-cccccccccccc"
client_secret   = "dddddddd-dddd-dddd-dddd-dddddddddddd"
}

Replace the values in the file with the following: –

provider "azurerm" {
version         = "1.24.0"
subscription_id = "${var.subId}"
tenant_id       = "${var.tenantId}"
client_id       = "${var.clientId}"
client_secret   = "${var.clientSecret}"
}

Now that we have referenced variables in the file, we need to define those variables. First create a file to hold them: –

new-item variables.tf

And then drop in the following: –

variable "subId" {
description = "please provide subscription Id"
type        = "string"
}

variable "tenantId" {
description = "please provide tenant Id"
type        = "string"
}

variable "clientId" {
description = "please provide client Id"
type        = "string"
}

variable "clientSecret" {
description = "please provide client secret"
type        = "string"
}

OK, so we have defined the variables that we referenced in the providers.tf file. However, we haven’t set any values for those variables.

If we left the files like this, when we deploy we would be asked to manually enter values for the variables. What we now need to do is create another file where we can set the variable values.

First thing to do is create a directory called vars: –

new-item vars -type directory

And now create a file to hold the values (notice that its extension is .tfvars to distinguish it from the other files): –

cd vars
new-item aci.tfvars

Drop the original hardcoded values in: –

subId = "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"
tenantId = "bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb"
clientId = "cccccccc-cccc-cccc-cccc-cccccccccccc"
clientSecret = "dddddddd-dddd-dddd-dddd-dddddddddddd"

And now we’re good to go! But before we deploy, let’s have another look at the azurecontainerinstance.tf file: –

resource "azurerm_container_group" "testcontainergroup1" {
name                = "testcontainergroup1"
location            = "${azurerm_resource_group.azurecontainerinstances.location}"
resource_group_name = "${azurerm_resource_group.azurecontainerinstances.name}"
ip_address_type     = "public"
dns_name_label      = "testcontainergroup1"
os_type             = "Linux"

container {
name   = "testcontainer"
image  = "mcr.microsoft.com/mssql/server:2019-CTP2.5-ubuntu"
cpu    = "1.0"
memory = "2.0"

ports {
port     = 1433
protocol = "TCP"
}

environment_variables = {
"ACCEPT_EULA" = "Y"
"SA_PASSWORD" = "Testing1122"
}
}
}

We’re already using variables for the resource group and location (which reference the resource group created in the resourcegroup.tf file) but let’s add in variables for the container name, image, and sa password: –

resource "azurerm_container_group" "testcontainergroup1" {
name                = "testcontainergroup1"
location            = "${azurerm_resource_group.azurecontainerinstances.location}"
resource_group_name = "${azurerm_resource_group.azurecontainerinstances.name}"
ip_address_type     = "public"
dns_name_label      = "testcontainergroup1"
os_type             = "Linux"

container {
name   = "${var.containerName}"
image  = "${var.containerImage}"
cpu    = "1.0"
memory = "2.0"

ports {
port     = 1433
protocol = "TCP"
}

environment_variables = {
"ACCEPT_EULA" = "Y"
"SA_PASSWORD" = "${var.saPassword}"
}
}
}

Cool, now we need to update the variables.tf file: –

variable "subId" {
description = "please provide subscription Id"
type        = "string"
}

variable "clientId" {
description = "please provide client Id"
type        = "string"
}

variable "clientSecret" {
description = "please provide client secret"
type        = "string"
}

variable "tenantId" {
description = "please provide tenant Id"
type        = "string"
}

variable "saPassword" {
description = "please provide an SA password"
type        = "string"
}

variable "containerImage" {
description = "please provide container image"
type        = "string"
}

variable "containerName" {
description = "please provide container name"
type        = "string"
}
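As an aside, a variable can also be given a default value, which a .tfvars file (or a -var flag) will still override. A sketch of what that could look like for one of the variables above (the default shown here is just the image we’ve been using): –

```hcl
variable "containerImage" {
  description = "please provide container image"
  type        = "string"

  # used only if no value is supplied via a .tfvars file or -var flag
  default     = "mcr.microsoft.com/mssql/server:2019-CTP2.5-ubuntu"
}
```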

And drop in the values to the aci.tfvars file: –

subId = "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"
tenantId = "bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb"
clientId = "cccccccc-cccc-cccc-cccc-cccccccccccc"
clientSecret = "dddddddd-dddd-dddd-dddd-dddddddddddd"
saPassword = "Testing1122"
containerImage = "mcr.microsoft.com/mssql/server:2019-CTP2.5-ubuntu"
containerName = "testcontainer1"

Before we deploy, let’s format these files: –

terraform fmt

Really love that feature. If you run it at the root of your project it will format all your files for you 🙂
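For example, fmt takes a block with ragged spacing and lines the assignments up: –

```hcl
# before terraform fmt
variable "subId" {
description = "please provide subscription Id"
type = "string"
}

# after terraform fmt
variable "subId" {
  description = "please provide subscription Id"
  type        = "string"
}
```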

Now we can test the deployment (notice we’re referencing the aci.tfvars file): –

terraform plan -var-file "./vars/aci.tfvars"

If all looks good (the values for the variables are being picked up), we can deploy: –

terraform apply -var-file "./vars/aci.tfvars"

What’s great about using variables like this is that we no longer have to touch the .tf files where we define our resources. If we want to, say, deploy a different image to an ACI, we just have to update the containerImage variable in the aci.tfvars file.

This also means that if we push this project to a GitHub repo we can ignore the vars directory, so no sensitive information is exposed (such as the sa password for our SQL instance).
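A minimal .gitignore for a project laid out like this might look something like the following (the vars/ entry matches the directory created above; ignoring the state files is a good idea in general, as state can also contain sensitive values): –

```gitignore
# local Terraform state and plugin downloads
.terraform/
terraform.tfstate
terraform.tfstate.backup

# files containing variable values (secrets)
vars/
*.tfvars
```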

Thanks for reading!

Deploying SQL Server to an Azure Container Instance using Terraform – Part One

A couple of weeks ago I attended John Martin’s (t) Terraform pre-con at Data in Devon. I’d been hearing about Terraform recently, so I was excited to check it out.

John’s pre-con was absolutely fantastic. He provided a great insight into Terraform, and I highly recommend that anyone looking at learning it signs up if they see it at a future event.

So, armed with my newfound knowledge, I wanted to go about deploying SQL Server into an Azure Container Instance using Terraform. This blog is the first in a series where I’ll go through how to use Terraform. This post will get SQL Server up and running in an ACI, and then future posts will go a bit deeper.

To follow along with this blog you will need the following tools: –

Terraform
Azure CLI
VSCode
Terraform extension for VSCode
Azure Terraform extension for VSCode

OK, let’s go through building the required Terraform files to deploy SQL Server to an Azure Container Instance.

First, log into your Azure account: –

az login

Make a note of the id and tenantId.

Next, create a service principal which we will reference to allow Terraform to create resources in Azure. Make a note of the appId and password: –

az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"

Ok, now we can start creating Terraform files.

Let’s create a directory for the terraform files and navigate to it: –

mkdir Terraform-ACI

cd Terraform-ACI

The first file to create is a providers file. This tells Terraform that we will be working in Azure so it will pull down the relevant plugin: –

new-item providers.tf

Now drop the following into the providers.tf file (using the values you noted above): –

provider "azurerm" {
version = "1.24.0"
subscription_id = "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"
tenant_id = "bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb"
client_id = "cccccccc-cccc-cccc-cccc-cccccccccccc"
client_secret = "dddddddd-dddd-dddd-dddd-dddddddddddd"
}

The first resource we’ll create in Azure is a resource group to hold the ACI: –

new-item resourcegroup.tf

Drop the following into the resourcegroup.tf file: –

resource "azurerm_resource_group" "azurecontainerinstances" {
name = "azurecontainerinstances"
location = "eastus"
}

Note the format of the file, it’s laid out as: –

resource "RESOURCE_TYPE" "RESOURCE_NAME" {
attribute_name = "attribute_value"
}

The RESOURCE_NAME is the name by which the resource will be referenced in Terraform. It can look a little confusing here because, in the code above, I’ve given the Terraform resource name the same value as the name of the resource group in Azure (the name attribute).
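To make the distinction clearer, here’s a hypothetical example where the two names differ: –

```hcl
# "myfirstrg" is the name Terraform uses internally;
# "sql-aci-rg" is the name the resource group gets in Azure
resource "azurerm_resource_group" "myfirstrg" {
  name     = "sql-aci-rg"
  location = "eastus"
}

# other resources reference the Terraform name, not the Azure name:
# "${azurerm_resource_group.myfirstrg.name}" evaluates to "sql-aci-rg"
```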

What’s cool is with the Azure Terraform VSCode extension, we can click on the RESOURCE_TYPE field and it will take us to the documentation for that resource. Makes things a lot easier!

Ok, now that we have a resource group we can create the Terraform file to deploy the Azure Container Instance into it: –

new-item azurecontainerinstance.tf

Drop the following into the file: –

resource "azurerm_container_group" "testcontainergroup1" {
name = "testcontainergroup1"
location = "${azurerm_resource_group.azurecontainerinstances.location}"
resource_group_name = "${azurerm_resource_group.azurecontainerinstances.name}"
ip_address_type = "public"
dns_name_label = "testcontainergroup1"
os_type = "Linux"

container {
name = "testcontainer"
image = "mcr.microsoft.com/mssql/server:2019-CTP2.5-ubuntu"
cpu = "1.0"
memory = "2.0"

ports {
port = 1433
protocol = "TCP"
}

environment_variables = {
"ACCEPT_EULA" = "Y"
"SA_PASSWORD" = "Testing1122"
}
}
}

What this is going to do is create an Azure Container Instance group with one container in it, running SQL Server 2019 CTP 2.5. It’ll be publicly exposed to the internet on port 1433 (I’ll cover fixing that in a future post), so we’ll get a public IP address that we can use to connect.

Notice that the location and resource_group_name are set by referencing the resource group we are going to create; this also tells Terraform that the resource group needs to be created first.

Cool! We are ready to go!

Initialise the directory: –

terraform init

One cool feature of Terraform is the ability to format all the files (seriously love this): –

terraform fmt

And now we can test the deployment (also love this feature): –

terraform plan

If all looks good, we can deploy!

terraform apply

Awesome stuff. Terraform will give us updates on how the deployment is doing: –

To get the IP address of the new Azure Container Instance: –

az container show --name testcontainergroup1 --resource-group azurecontainerinstances

Drop that IP Address into SSMS or ADS: –

And boom! We are connected to SQL Server 2019 CTP2.5 running in an Azure Container Instance deployed with Terraform.

If you want to tear down what’s just been built, you can run: –

terraform destroy

Thanks for reading!

Running a SQL Server Azure Container Instance in a virtual network

Up until now, Azure Container Instances have only had one option to allow us to connect: a public IP address that is directly exposed to the internet.

Not really great, as exposing SQL Server on port 1433 to the internet is generally a bad idea.

Now, I know there’s a lot of debate about whether or not you should change the port that SQL is listening on to prevent this from happening. My personal opinion is that if someone wants to get into your SQL instance, changing the port isn’t going to slow them down much. However, a port change will stop opportunistic hacks.

But now we have another option: the ability to deploy an ACI within a virtual network in Azure! So let’s run through how to deploy one.

First thing: I’m going to be using the azure-cli. Deploying to a virtual network is a new feature, so you’ll need to upgrade to v2.0.46. On Windows it’s pretty simple: download and run the .msi. If you’re on Linux (or running in WSL) run: –

sudo apt-get update && sudo apt-get install --only-upgrade -y azure-cli

OK, now log in to azure: –

az login

Create a resource group (this feature is currently only supported in westeurope & westus): –

az group create --name containers1 --location westeurope

Now create the Azure Container Instance: –

az container create \
    --resource-group containers1 \
    --image mcr.microsoft.com/mssql/server:vNext-CTP2.0-ubuntu \
    --name testcontainer1 \
    --ports 1433 \
    --vnet-name aci_vnet1 \
    --vnet-address-prefix 10.0.0.0/16 \
    --subnet aci_subnet1 \
    --subnet-address-prefix 10.0.0.0/24 \
    --environment-variables ACCEPT_EULA=Y SA_PASSWORD=Testing1122

How simple is that? We don’t need to create the virtual network beforehand; it will be created for us. Also note that I’m using the new SQL 2019 running on Ubuntu image.

If you want to create the virtual network beforehand, here’s how to do it. First create the virtual network: –

az network vnet create --resource-group containers1 --name aci_vnet1

And now create the subnet: –

az network vnet subnet create --resource-group containers1 \
    --vnet-name aci_vnet1 --name aci_subnet1 \
    --address-prefix 10.0.0.0/24 \
    --delegation Microsoft.ContainerInstance/containerGroups

Note the --delegation option. This is required for the subnet to be used for ACIs, but once set, that subnet cannot be used for anything other than Azure Container Instances.

You can have a look at the delegation options by running: –

az network vnet subnet list-available-delegations --resource-group containers1

And now you can deploy an ACI: –

az container create \
    --resource-group containers1 \
    --image mcr.microsoft.com/mssql/server:vNext-CTP2.0-ubuntu \
    --name testcontainer1 \
    --ports 1433 \
    --vnet-name aci_vnet1 \
    --subnet aci_subnet1 \
    --environment-variables ACCEPT_EULA=Y SA_PASSWORD=Testing1122

To check that the container has been created successfully: –

az container show --name testcontainer1 --resource-group containers1

Once up and running, the container will have an IP address within the virtual network. That means you’ll need another resource within the network (on a different subnet) in order to connect. I’ve been using an Ubuntu jump box with the mssql-cli installed.

Full code is in this gist

N.B. – There does seem to be an issue currently when trying to delete a subnet that has been delegated for ACIs. I’m speaking with MS about it and will update this post once it is resolved.

EDIT 2018-10-06 – The network resources have to be manually deleted. The scripts to do that are in this gist

Thanks for reading!

Creating SQL images in Azure with ACR Build – Part Two

EDIT – 2018-10 – ACR Build is now known as ACR Tasks

In Part One I detailed how we can use ACR Build to push SQL Server docker images to the Azure Container Registry.

But that all seems a bit manual, doesn’t it?

One of the cool things about ACR build is that we can hook it into GitHub so when we commit to a repository it will build us a new image (very, very cool 🙂 ). Let’s run through how to set that up now…

There are a couple of pre-reqs required to follow the code here. The first thing to do is fork and clone the following repository: –
https://github.com/dbafromthecold/AzureContainerRegistryBuild

The next thing to do is create a Personal Access Token (PAT) in GitHub.
Go to https://github.com/settings/tokens and select the scopes for the new PAT (for a public repository, the repo:status and public_repo scopes should be sufficient).

Scroll to the bottom of the page and hit Generate Token (make sure to copy the token)

Ok, now we’re set up to create a build task in ACR. I’m going to continue on from my previous post so will log into Azure: –

az login

N.B. – In Part One I used a powershell console; however, in this post I’ve switched to Bash for Windows. This is because if you try to create a build task in powershell, it’ll error out.

I played around a bit but couldn’t get it to work, so I cut my losses and jumped into bash. I probably shouldn’t have been running the Azure-CLI commands in a powershell window anyway; I only needed to previously because I had to build a docker image locally, and we don’t need to do that now.

Before we do anything else let’s just verify that we can see the existing repository created in Part One and the tagged image in it: –

az acr repository show-tags --name TestContainerRegistry01 --repository testimage

Great! Now let’s create the build task: –

az acr build-task create \
    --registry TestContainerRegistry01 \
    --name buildsqlimage \
    --image testimage:{{.Build.ID}} \
    --context https://github.com/<your-github-username>/AzureContainerRegistryBuild \
    --branch master \
    --git-access-token YOUR_PERSONAL_ACCESS_TOKEN


N.B. – The {{.Build.ID}} will generate a tag for the new image created within the repository (based on the number created by ACR Build)

Fantastic, one build task created! How easy was that??

Let’s test by running: –

az acr build-task run --registry TestContainerRegistry01 --name buildsqlimage

And the progress of the build task can be monitored: –

az acr build-task logs --registry TestContainerRegistry01

After a while (be patient 🙂 ) a new tag will be in the repository: –

az acr repository show-tags --name TestContainerRegistry01 --repository testimage

Cool! Now let’s commit to the repository in GitHub and see if a new image is created. So navigate to where you cloned the repository: –

cd AzureContainerRegistryBuild

What we’re going to do is drop an environment variable into the dockerfile to change the port that SQL is listening on.

nano Dockerfile

Drop the following into the dockerfile: –

ENV MSSQL_TCP_PORT=15666
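MSSQL_TCP_PORT is one of the environment variables the SQL Server images pick up at startup, so this one line changes the listening port baked into the image. With it added, the Dockerfile could end up looking something like this (a sketch only; the repo’s actual Dockerfile may differ): –

```dockerfile
# sketch only -- see the repo for the actual Dockerfile
FROM microsoft/mssql-server-linux:latest

ENV ACCEPT_EULA=Y
ENV SA_PASSWORD=Testing1122

# the new line: make SQL Server listen on 15666 instead of 1433
ENV MSSQL_TCP_PORT=15666
```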

Now commit and push!

git add .
git commit -m "Changed the port for SQL Server"
git push

And then a new tag will be in the repository!

az acr repository show-tags --name TestContainerRegistry01 --repository testimage

Awesome! How cool is that?!

To make sure that it’s worked, let’s run an Azure Container Instance (using the credentials stored in the Key Vault created in Part One): –

az container create \
    --resource-group containers1 \
    --image testcontainerregistry01.azurecr.io/testimage:aa4 \
    --registry-username $(az keyvault secret show \
                        --vault-name aptestkeyvault01 \
                        --name testcontainerregistry-pull-usr \
                        --query value --output tsv) \
    --registry-password $(az keyvault secret show \
                        --vault-name aptestkeyvault01 \
                        --name testcontainerregistry-pull-pwd \
                        --query value --output tsv) \
    --name testcontainer2 \
    --cpu 2 --memory 4 \
    --environment-variables SA_PASSWORD=Testing1122 \
    --ip-address public \
    --ports 15666

N.B. – Make sure the image tag is correct!

Confirm that the ACI is up and running: –

az container show --name testcontainer2 --resource-group containers1

And BOOM! Connect to SQL Server: –

How. Cool. Is. That? Automatically creating SQL images in ACR with just a commit to a Github repo!

Thanks for reading!

Creating SQL images in Azure with ACR Build – Part One

EDIT – 2018-10 – ACR Build is now known as ACR Tasks

Whenever I’ve pushed images up to an Azure Container Registry I’ve been building them locally (using Docker for Windows) and then manually pushing them up. However, I don’t need to do this. What I can do instead is use the Azure Container Registry Build service.

Let’s have a look at how it works.


To follow along with the code here you will need the Azure-CLI installed and Docker for Windows running


First things first, log into azure: –

az login

Then create a resource group: –

az group create --resource-group containers1 --location eastus

Create a registry (the ACR): –

az acr create --resource-group containers1 --name TestContainerRegistry01 --sku Standard --location eastus

And then login to the registry: –

az acr login --name TestContainerRegistry01

Now create a directory on your local machine to hold the repo that we’re going to use to build the image: –

mkdir c:\git\dbafromthecold
cd c:\git\dbafromthecold

And then pull the repo down: –

git clone https://github.com/dbafromthecold/AzureContainerRegistryBuild.git


The dockerfile within this repo is pretty simple. All it’s going to do is build us an image that when run will have a database named TestDatabase already there. For more info on building images from a dockerfile, check out my blog here
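The general idea behind a Dockerfile like that is to start SQL Server during the image build, create the database, and let the resulting database files persist into the image layer. A rough sketch of that idea (hypothetical; the repo’s actual Dockerfile may differ): –

```dockerfile
# sketch of the idea only -- see the repo for the real Dockerfile
FROM microsoft/mssql-server-linux:latest

ENV ACCEPT_EULA=Y
ENV SA_PASSWORD=Testing1122

# start SQL Server in the background, wait for it to come up,
# then create TestDatabase so it's baked into the image
RUN ( /opt/mssql/bin/sqlservr & ) \
    && sleep 30 \
    && /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P Testing1122 \
       -Q "CREATE DATABASE TestDatabase"
```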

Now navigate to the repo: –

cd c:\git\dbafromthecold\AzureContainerRegistryBuild

Great! We’re ready to build the image and push it to the Azure Container Registry.
To do this, run: –

az acr build --registry TestContainerRegistry01 --image testimage:latest .

Hmm…that does look like it hasn’t worked but trust me…it has (just wait a few minutes) 🙂

To verify that the new repository has been created: –

az acr repository list --name TestContainerRegistry01 --output table

And to view the tagged image within it: –

az acr repository show-tags --name TestContainerRegistry01 --repository testimage

Awesome! Our custom image is in our ACR!

But has it worked? Has it really? Oh ye of little faith…

I guess the only way to find out is to run a container! So let’s run an Azure Container Instance from our new image.

I’ve already blogged about creating ACIs here so I’m just going to run through this quickly but don’t worry, everything you need to deploy an ACI is in the following code.

In order for the ACI to pull the image from our new ACR we need to store credentials that can be used to grant access. So first, create a keyvault: –

az keyvault create --resource-group containers1 --name aptestkeyvault01

Once the vault is created, we need to create a service principal with read permissions to the ACR and store its credentials in the vault: –

az keyvault secret set `
  --vault-name aptestkeyvault01 `
  --name testcontainerregistry-pull-pwd `
  --value $(az ad sp create-for-rbac `
  			--name testcontainerregistry-pull `
  			--scopes $(az acr show --name testcontainerregistry01 --query id --output tsv) `
  			--role reader `
  			--query password `
  			--output tsv)

Then we grab the service principal’s appId which will be the username passed to the Azure Container Registry: –

az keyvault secret set `
  --vault-name aptestkeyvault01 `
  --name testcontainerregistry-pull-usr `
  --value $(az ad sp show `
  		--id http://testcontainerregistry-pull --query appId --output tsv)

Ok, now we can run an Azure Container Instance from our custom image!

az container create `
    --resource-group containers1 `
    --image testcontainerregistry01.azurecr.io/testimage:latest `
    --registry-username $(az keyvault secret show `
        --vault-name aptestkeyvault01 `
        --name testcontainerregistry-pull-usr `
        --query value --output tsv) `
    --registry-password $(az keyvault secret show `
        --vault-name aptestkeyvault01 `
        --name testcontainerregistry-pull-pwd `
        --query value --output tsv) `
    --name testcontainer1 `
    --cpu 2 --memory 4 `
    --environment-variables SA_PASSWORD=Testing1122 `
    --ip-address public `
    --ports 1433

Ok, we need to wait for the ACI to spin up. We can monitor this by running: –

az container show --name testcontainer1 --resource-group containers1

Once the above command comes back with a status of Succeeded and an IP address we can drop that into SSMS: –

And boom! We’ve connected to an ACI created from our custom image built using Azure Container Registry Build!

So that’s an intro into ACR Build. There’s plenty more that we can do with it and I’ll explore that in future posts.

Thanks for reading!