
Accessing the Kubernetes API from SQL Server 2025

The sp_invoke_external_rest_endpoint stored procedure that’s available in SQL Server 2025 allows SQL Server to hit external REST endpoints…which opens up quite a few interesting options.

I was thinking about this the other day and it occurred to me that Kubernetes has a REST API 🙂

So can we hit that from within SQL Server?

Let’s have a look at how to do it. There are a few steps; here’s what we’re going to do: –

1. Create a certificate authority and a signed certificate
2. Deploy a reverse proxy to Kubernetes
3. Configure SQL Server to be able to hit the reverse proxy
4. Use the stored procedure to hit the Kubernetes API via the reverse proxy

Ok, let’s go!

First thing to do is create a CA and then a signed certificate. It would be nice if we could just upload the root certificate from the Kubernetes cluster but (and trust me here, I’ve tried for longer than I’m going to admit)…it just doesn’t seem to work. All you’ll get is this infuriating error: –

Msg 31608, Level 16, State 24, Procedure sys.sp_invoke_external_rest_endpoint_internal, Line 1 [Batch Start Line 0]
An error occurred, failed to communicate with the external rest endpoint. HRESULT: 0x80072ee7.

So creating our own signed certificate seems to be the only way forward.

(but I’m going to keep trying, so if I do get it working…I’ll update here)

One thing to note: I’m running SQL Server 2025 in WSL on Windows 11…hitting a local Kubernetes cluster with MetalLB installed, running in Hyper-V.

Cool ok first things first, set some variables: –

DOMAIN="api.dbafromthecold.local"
DAYS_VALID=365
NAMESPACE="default"
CA_KEY="CA.key"
CA_CERT="CA.pem"

Create the certificate authority: –

openssl genrsa -out $CA_KEY 2048
openssl req -x509 -new -nodes -key $CA_KEY -sha256 -days 3650 -out $CA_CERT \
-subj "/C=IE/ST=Local/L=Test/O=TestCA/CN=My Local CA"

Then we’re going to create a key and a certificate signing request: –

openssl genrsa -out $DOMAIN.key 2048
openssl req -new -key $DOMAIN.key -out $DOMAIN.csr \
-subj "/C=IE/ST=Local/L=Test/O=TestAPI/CN=$DOMAIN"

Use the CA certificate and key to sign the CSR and generate a server certificate: –

openssl x509 -req -in $DOMAIN.csr -CA $CA_CERT -CAkey $CA_KEY -CAcreateserial \
-out $DOMAIN.crt -days $DAYS_VALID -sha256
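One thing worth knowing: the certificate above only carries a CN, and some stricter clients insist on a subjectAltName. If you hit hostname-verification errors, a variant of the signing step that adds a SAN (plus a quick sanity check that the certificate chains back to our CA) looks like this — a sketch using the variables set above:

```shell
# Optional: sign with a subjectAltName -- some clients reject CN-only certificates
printf 'subjectAltName=DNS:%s\n' "$DOMAIN" > san.ext

openssl x509 -req -in "$DOMAIN.csr" -CA "$CA_CERT" -CAkey "$CA_KEY" -CAcreateserial \
  -out "$DOMAIN.crt" -days "$DAYS_VALID" -sha256 -extfile san.ext

# Sanity checks: the cert should chain back to our CA and carry the SAN
openssl verify -CAfile "$CA_CERT" "$DOMAIN.crt"
openssl x509 -in "$DOMAIN.crt" -noout -text | grep 'DNS:'
```

The -extfile option bolts the extension on at signing time, which saves fiddling with an openssl.cnf.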

OK, great…we have our signed certificate. Now let’s deploy an nginx reverse proxy to Kubernetes!

Create a secret in Kubernetes to hold the signed certificate and key: –

kubectl create secret tls k8s-api-cert \
--cert=$DOMAIN.crt --key=$DOMAIN.key \
--namespace $NAMESPACE --dry-run=client -o yaml | kubectl apply -f -

This stores the certificate and key which we’ll later mount into the nginx pod.

Deploy the ingress controller, which’ll expose our proxy: –

helm upgrade --install ingress-nginx ingress-nginx \
--repo https://kubernetes.github.io/ingress-nginx \
--namespace ingress-nginx --create-namespace

Confirm the ingress controller is up and running: –

kubectl get all -n ingress-nginx

Now create a config map containing the reverse proxy configuration: –

kubectl create configmap k8s-api-nginx-conf --from-literal=nginx.conf='
events {}
http {
  server {
    listen 443 ssl;
    ssl_certificate /etc/nginx/certs/tls.crt;
    ssl_certificate_key /etc/nginx/certs/tls.key;
    location / {
      proxy_pass https://10.0.0.41:6443;
      proxy_ssl_verify off;
    }
  }
}
' -n $NAMESPACE --dry-run=client -o yaml | kubectl apply -f -

N.B. – 10.0.0.41:6443 is the IP address and port of the kube-apiserver

This config will tell nginx to accept HTTPS connections (required for sp_invoke_external_rest_endpoint), terminate TLS using the secret we created, and proxy the request to the Kubernetes API server.

Deploy nginx, mounting the config map and TLS secret to enable the reverse proxy: –

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-api-proxy
  namespace: $NAMESPACE
spec:
  replicas: 1
  selector:
    matchLabels:
      app: k8s-api-proxy
  template:
    metadata:
      labels:
        app: k8s-api-proxy
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 443
        volumeMounts:
        - name: nginx-conf
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
        - name: nginx-cert
          mountPath: /etc/nginx/certs
          readOnly: true
      volumes:
      - name: nginx-conf
        configMap:
          name: k8s-api-nginx-conf
      - name: nginx-cert
        secret:
          secretName: k8s-api-cert
EOF
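Quick aside: if nginx rejects the mounted config, the pod will crash-loop. You can ask nginx to validate the config in place (guarded here so it only runs where kubectl is available):

```shell
# Validate the mounted nginx.conf inside the running proxy pod
if command -v kubectl >/dev/null 2>&1; then
  kubectl exec deploy/k8s-api-proxy -- nginx -t
else
  echo "kubectl not found - run this against your cluster" >&2
fi
```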

Create a service to expose the nginx deployment: –

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: k8s-api-proxy
  namespace: $NAMESPACE
spec:
  selector:
    app: k8s-api-proxy
  ports:
    - protocol: TCP
      port: 443
      targetPort: 443
EOF

And now, create an ingress resource to route traffic to the nginx service: –

kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: k8s-api-ingress
  namespace: $NAMESPACE
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - $DOMAIN
      secretName: k8s-api-cert
  rules:
    - host: $DOMAIN
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: k8s-api-proxy
                port:
                  number: 443
EOF

Getting there! Let’s check the pod and the ingress: –

kubectl get pods
kubectl get ingress

Ha ha! Ok, we’re almost ready to hit the Kubernetes API. But first we have to configure SQL to trust the certificate.

To do that we need to copy the certificate to /var/opt/mssql/security/ca-certificates. That location doesn’t exist by default so it needs to be created.

sudo mkdir -p /var/opt/mssql/security/ca-certificates

sudo cp $CA_CERT /var/opt/mssql/security/ca-certificates/$CA_CERT
sudo chown mssql:mssql /var/opt/mssql/security/ca-certificates/$CA_CERT

Then restart SQL Server: –

sudo systemctl restart mssql-server

Check the SQL Server error log; we’re looking for: –

Successfully placed CA.crt in trusted root store

Oh, and if we want to test with curl…we need to place the cert in the server’s trust store: –

sudo cp $CA_CERT /usr/local/share/ca-certificates/myCA.crt
sudo update-ca-certificates

But before we test, we need to make sure that SQL Server can resolve our domain name. That means adding an entry into the /etc/hosts file: –

10.0.0.50 api.dbafromthecold.local

Let’s try it!

curl https://api.dbafromthecold.local/version

Boooooom! It works! Now let’s try within SQL Server. Enable the stored procedure: –

EXECUTE sp_configure 'external rest endpoint enabled', 1;
RECONFIGURE WITH OVERRIDE;

And let’s go….

DECLARE @version NVARCHAR(MAX);

EXEC sp_invoke_external_rest_endpoint
@url = 'https://api.dbafromthecold.local/version',
@method = 'GET',
@response = @version OUTPUT
PRINT @version

Niiiiiiice! But what if we want to list the pods in the cluster?

The API endpoint is reachable but if we try to hit it right now, we’ll get: –

{
  "status": "Failure",
  "message": "pods is forbidden: User \"system:anonymous\" cannot list resource \"pods\" in API group \"\" in the namespace \"default\"",
  "reason": "Forbidden",
  "details": {
    "kind": "pods"
  },
  "code": 403
}

The reason that’s happening is that the request is reaching the Kubernetes API server, but the user it’s running as — system:anonymous — doesn’t have permission to list pods.

Let’s fix that by:

1. Creating a service account in Kubernetes
2. Creating a role that allows reading (and creating 🙂 ) of pods
3. Binding the role to the service account
4. Extracting a token from that service account
5. Using that token in the stored procedure

So create the service account: –

kubectl create serviceaccount api-reader

And a role that allows listing and creating pods: –

kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch","create"]
EOF

Now bind that role to the service account: –

kubectl create rolebinding pod-reader-binding \
--role=pod-reader \
--serviceaccount=default:api-reader \
--namespace=default
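Before bothering with tokens, we can ask the API server directly whether the binding works, using kubectl’s impersonation support (guarded so it only runs where kubectl is available):

```shell
# Should print "yes" once the role binding is in place
if command -v kubectl >/dev/null 2>&1; then
  kubectl auth can-i list pods \
    --as=system:serviceaccount:default:api-reader -n default
fi
```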

Create a token for that service account: –

kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: api-reader-token
  annotations:
    kubernetes.io/service-account.name: api-reader
type: kubernetes.io/service-account-token
EOF

And grab the token: –

kubectl get secret api-reader-token -o jsonpath="{.data.token}" | base64 -d

This’ll give us a big long token that we can use in the header parameter of the stored procedure…like this: –
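Incidentally, that token is a JWT (header.payload.signature, each segment base64url-encoded), so if you want to double-check which service account it belongs to before pasting it into T-SQL, you can decode the middle segment. A small helper sketch, assuming python3 is around for pretty-printing:

```shell
# Decode the payload segment of a JWT (base64url, so fix the alphabet and re-pad)
decode_jwt_payload() {
  p=$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')
  case $(( ${#p} % 4 )) in 2) p="$p==" ;; 3) p="$p=" ;; esac
  printf '%s' "$p" | base64 -d
}

# usage:
# TOKEN=$(kubectl get secret api-reader-token -o jsonpath="{.data.token}" | base64 -d)
# decode_jwt_payload "$TOKEN" | python3 -m json.tool
```

The "sub" claim in the payload should read system:serviceaccount:default:api-reader.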

DECLARE @pods NVARCHAR(MAX);

EXEC sp_invoke_external_rest_endpoint
@url = 'https://api.dbafromthecold.local/api/v1/namespaces/default/pods',
@headers = '{"Authorization":"Bearer eyXXXXX....XXXX"}',
@method = 'GET',
@response = @pods OUTPUT
PRINT @pods

But that’ll just give us a big chunk of json back…let’s try and parse it a little.

DECLARE @pods NVARCHAR(MAX);

EXEC sp_invoke_external_rest_endpoint
@url = 'https://api.dbafromthecold.local/api/v1/namespaces/default/pods',
@headers = '{"Authorization":"Bearer eyXXXXX....XXXX"}',
@method = 'GET',
@response = @pods OUTPUT

SELECT
    pod_name,
    namespace,
    container_image,
    pod_ip,
    status
FROM OPENJSON(@pods, '$.result.items')
WITH (
    pod_name NVARCHAR(100)        '$.metadata.name',
    namespace NVARCHAR(100)       '$.metadata.namespace',
    container_image NVARCHAR(100) '$.spec.containers[0].image',
    pod_ip NVARCHAR(50)           '$.status.podIP',
    status NVARCHAR(50)           '$.status.phase'
);

And that’ll return a row for each pod in the cluster (in the default namespace).

This cluster only has the one pod in the default namespace (the nginx pod) so the results will be: –

pod_name namespace container_image pod_ip status
k8s-api-proxy-abcde default nginx:latest 10.244.0.5 Running

Phew! Still with me? Great! One more thing then…what about deploying a pod to Kubernetes from SQL Server?

That’s why we created the role with the create verb earlier 🙂

Let’s keep it simple and just create another nginx pod in the default namespace in the cluster: –

DECLARE @deploy NVARCHAR(MAX);
EXEC sp_invoke_external_rest_endpoint
    @url = 'https://api.dbafromthecold.local/api/v1/namespaces/default/pods',
    @headers = '{"Authorization":"Bearer eyXXXXX....XXXX"}',
    @method = 'POST',
    @payload = '{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "nginx",
    "namespace": "default"
  },
  "spec": {
    "containers": [
      {
        "name": "nginx",
        "image": "nginx:latest",
        "ports": [
          {
            "containerPort": 80
          }
        ]
      }
    ]
  }
}',
@response = @deploy OUTPUT
PRINT @deploy

And now if we check the cluster: –

kubectl get pods

There’s our pod!

So that’s how to hit the Kubernetes API via a reverse proxy from SQL Server 2025 using sp_invoke_external_rest_endpoint 🙂

Cue maniacal laughter.

Thanks for reading!


EightKB 2025 – Schedule and Registration

Hello Hello, We. Are. Back!

The schedule for EightKB 2025 Edition has been announced!

We’re kicking off at 1pm UTC on August 21st…here’s the schedule: –

N.B. – If you click on the session on the website, it shows the session abstract…nice touch eh?

Once again we have five amazing speakers delivering five great, in-depth sessions on various SQL Server internals topics.

We only have five slots in each event which meant we ended up not picking some amazing sessions. I want to say a MASSIVE thank you to all who submitted…session selection is the WORST part of running a conference! Really, truly…thank you to everyone who submitted.

Registration is now open; it’s completely free and you can sign up here: –
https://eightkb.online/

The event will be in Zoom and we’ll have chat going in the EightKB channel in the SQL Community Slack…please come and hang out with us there!

EightKB is 100% community driven with no sponsors, so we have our own Bonfire store selling t-shirts!

Don’t they look snazzy?!

Any money generated from the store will be put straight back into the event.

Hope to see you on August the 21st!


Vertically scaling SQL Server online in Kubernetes


UPDATE – JANUARY 2026 – This feature has now been moved to stable in Kubernetes v1.35…full details are here: –
https://kubernetes.io/docs/tasks/configure-pod-container/resize-container-resources/

Whilst this is great, unfortunately I did a bit more digging with regards to SQL Server (which, OK, I should have done initially) and it seems that SQL Server will not see the new limits without a restart. So we can increase the limits on the pod without a restart, but SQL Server will not see them.

My recommendation is to set the resizePolicy to RestartContainer in the manifest: –

resizePolicy:
- resourceName: cpu
  restartPolicy: RestartContainer
- resourceName: memory
  restartPolicy: RestartContainer

Then when increasing the memory via: –

kubectl patch pod mssql-statefulset-0 --subresource resize --patch `
'{\"spec\":{\"containers\":[{\"name\":\"mssql-container\", \"resources\":{\"requests\":{\"memory\":\"4096Mi\"}, \"limits\":{\"memory\":\"4096Mi\"}}}]}}'

This will restart the container within the pod, not the whole pod, which is quicker but does mean a slight outage.


ORIGINAL ARTICLE

One of the new features in Kubernetes v1.33 is the ability to resize CPU and memory resources for containers online, aka without having to recreate the pod the container is running in. In the past, when adjusting a pod’s resources, Kubernetes would delete the existing pod and create a new one via a controller.

Not a problem for applications that can have multiple replicas running, but for SQL Server this would cause a disruption as we (generally) only have one pod running SQL Server in a statefulset. Let’s see this in action.

First we’ll deploy this simple statefulset to Kubernetes: –

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mssql-statefulset
spec:
  serviceName: "mssql"
  replicas: 1
  podManagementPolicy: Parallel
  selector:
    matchLabels:
      name: mssql-pod
  template:
    metadata:
      labels:
        name: mssql-pod
    spec:
      securityContext:
        fsGroup: 10001
      containers:
        - name: mssql-container
          image: mcr.microsoft.com/mssql/server:2022-CU16-ubuntu-20.04
          ports:
            - containerPort: 1433
              name: mssql-port
          env:
            - name: ACCEPT_EULA
              value: "Y"
            - name: MSSQL_SA_PASSWORD
              value: "Testing1122"
          resources:
            requests:
              memory: "2048Mi"
              cpu: "2000m"
            limits:
              memory: "2048Mi"
              cpu: "2000m"

The important part here is the CPU and memory settings: –

          resources:
            requests:
              memory: "2048Mi"
              cpu: "2000m"
            limits:
              memory: "2048Mi"
              cpu: "2000m"

N.B. – you may have noticed that the limits and requests here are the same value. This is to set a “Guaranteed” Quality of Service for the pod…it’s a recommended best practice for SQL Server in Kubernetes, more info is here: –
https://learn.microsoft.com/en-us/sql/linux/sql-server-linux-kubernetes-best-practices-statefulsets
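You can confirm Kubernetes actually classed the pod as Guaranteed by querying its qosClass once it’s running (guarded so it only runs where kubectl is available):

```shell
# Should print "Guaranteed" when requests == limits for every resource
if command -v kubectl >/dev/null 2>&1; then
  kubectl get pod mssql-statefulset-0 -o jsonpath='{.status.qosClass}{"\n"}'
fi
```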

Let’s apply that manifest: –

kubectl apply -f ./sqlserver.yaml

In the past we had to edit our statefulset to increase these values: –

kubectl edit sts mssql-statefulset

This would recreate the pod with the new limits/requests: –

But now, as of Kubernetes v1.33, we can scale pods without a restart! See here for more info: –
https://kubernetes.io/docs/tasks/configure-pod-container/resize-container-resources/

One thing to note…the code to do this in the official docs will error out if you’re running kubectl in PowerShell on Windows (sigh). In order for this to run successfully on Windows, a backslash has to be added before any double quote character.

So in order to increase the memory of the pod running, we would run: –

kubectl patch pod mssql-statefulset-0 --subresource resize --patch `
'{\"spec\":{\"containers\":[{\"name\":\"mssql-container\", \"resources\":{\"requests\":{\"memory\":\"4000Mi\"}, \"limits\":{\"memory\":\"4000Mi\"}}}]}}'

And then if we check out the pod’s yaml: –

kubectl get pod mssql-statefulset-0 -o yaml

And there we are, cool!

So to do this for CPU, we would run: –

kubectl patch pod mssql-statefulset-0 --subresource resize --patch `
'{\"spec\":{\"containers\":[{\"name\":\"mssql-container\", \"resources\":{\"requests\":{\"cpu\":\"4000m\"}, \"limits\":{\"cpu\":\"4000m\"}}}]}}'

Ok, I appreciate this code isn’t exactly the easiest to type out! Thankfully we can now add a --subresource resize flag to an edit command: –

kubectl edit pod mssql-statefulset-0 --subresource resize

And this will allow us to update the CPU and memory limits/requests of the pod without a restart!

Thanks for reading!


EightKB 2025

EightKB is back!

The biggest online SQL Server internals conference is back in 2025…it’s all happening on August the 21st!

We’ve opened our call for speakers; you can submit here: –
https://sessionize.com/eightkb-august-2025/

We’re looking for experts, not necessarily expert speakers. If you haven’t presented before we offer mentoring as part of our speaker program to help you prepare for your session so that you can enjoy presenting on the day.

As a speaker this is your chance to really go all out! If you’ve ever wanted to deep dive into a topic, this is the event to do so. No topic is too advanced…you can do as many (or as few, or none at all!) demos as you would like. Field questions during the session or respond after the event…completely up to you.

One thing we are doing slightly different this year is that only four of the sessions have to focus on SQL Server internals. The fifth session can be on ANY TECH TOPIC YOU LIKE, as long as it’s 300 level and above!

As ever, speakers do not have to use a slide template, and we don’t ask for speakers to add our logo to their deck. We just want you to turn up and enjoy presenting!

After the event, we’ll provide feedback on your session from the attendees and an unbranded video of your session that you can use however you would like.

And back by popular demand, our mind-melting t-shirt and hoodie range! EightKB is 100% funded by the sales of these, so if you have enjoyed our previous conferences…please consider buying a shirt!

https://www.bonfire.com/store/eightkb/

Hope to see you there!


Visualising SQL Server in Kubernetes

The other day I came across an interesting repo on GitHub, KubeDiagrams.

What this repo does is generate Kubernetes architecture diagrams from Kubernetes manifest files…nice!

Deploying applications to Kubernetes can get complicated fast…especially with stateful applications such as SQL Server.

So having the ability to easily generate diagrams is really helpful…because we all should be documenting everything, right? 🙂

Plus I’m rubbish at creating diagrams!

So let’s have a look at how this works. First, install the dependencies via pip: –

pip install pyyaml
pip install diagrams

And install graphviz: –

sudo apt install graphviz

Great, now pull down the repo: –

git clone https://github.com/philippemerle/KubeDiagrams.git

And we’re good to go! So here’s an example manifest file to deploy a SQL Server statefulset to Kubernetes: –

apiVersion: v1
kind: Namespace
metadata:
  name: mssql
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mssql-sc
provisioner: docker.io/hostpath
reclaimPolicy: Delete
volumeBindingMode: Immediate
---
apiVersion: v1
kind: Service
metadata:
  name: mssql-headless
  namespace: mssql
spec:
  clusterIP: None
  selector:
    name: mssql-pod
  ports:
    - name: mssql-port
      port: 1433
      targetPort: 1433
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mssql-statefulset
  namespace: mssql
spec:
  serviceName: "mssql-headless"
  replicas: 1
  podManagementPolicy: Parallel
  selector:
    matchLabels:
      name: mssql-pod
  template:
    metadata:
      labels:
        name: mssql-pod
    spec:
      securityContext:
        fsGroup: 10001
      containers:
        - name: mssql-container
          image: mcr.microsoft.com/mssql/server:2022-CU16-ubuntu-20.04
          ports:
            - containerPort: 1433
              name: mssql-port
          env:
            - name: MSSQL_PID
              value: "Developer"
            - name: ACCEPT_EULA
              value: "Y"
            - name: MSSQL_AGENT_ENABLED
              value: "1"
            - name: MSSQL_SA_PASSWORD
              value: "Testing1122"
          resources:
            requests:
              memory: "1024Mi"
              cpu: "500m"
            limits:
              memory: "2048Mi"
              cpu: "2000m"
          volumeMounts:
            - name: sqlsystem
              mountPath: /var/opt/mssql
            - name: sqldata
              mountPath: /opt/sqlserver/data
            - name: sqllog
              mountPath: /opt/sqlserver/log
  volumeClaimTemplates:
    - metadata:
        name: sqlsystem
        namespace: mssql
      spec:
        accessModes:
         - ReadWriteOncePod
        storageClassName: mssql-sc
        resources:
          requests:
            storage: 25Gi
    - metadata:
        name: sqldata
        namespace: mssql
      spec:
        accessModes:
         - ReadWriteOncePod
        storageClassName: mssql-sc
        resources:
          requests:
            storage: 25Gi
    - metadata:
        name: sqllog
        namespace: mssql
      spec:
        accessModes:
         - ReadWriteOncePod
        storageClassName: mssql-sc
        resources:
          requests:
            storage: 25Gi
---
apiVersion: v1
kind: Service
metadata:
  name: mssql-service
  namespace: mssql
spec:
  ports:
  - name: mssql-ports
    port: 1433
    targetPort: 1433
  selector:
    name: mssql-pod
  type: LoadBalancer

I’m using Docker Desktop here (hence the provisioner: docker.io/hostpath in the storage class). What this’ll create is a namespace, storage class, headless service for the statefulset, the statefulset itself, three persistent volume claims, and a load balanced service to connect to SQL.

Quite a lot of objects for a simple SQL Server deployment, right? (ahh I know it’s a statefulset, but you know what I mean)

So let’s point KubeDiagrams at the manifest: –

./kube-diagrams mssql-statefulset.yaml

And here’s the output!

Pretty cool, eh?

I noticed a couple of quirks. The docs say it’ll work with any Python 3 install. I had 3.8 installed but had to upgrade to 3.9.

Also I had to add namespace: mssql to the PVCs in the statefulset, otherwise KubeDiagrams threw a warning: –

Error: ‘sqlsystem/mssql/PersistentVolumeClaim/v1’ resource not found!
Error: ‘sqldata/mssql/PersistentVolumeClaim/v1’ resource not found!
Error: ‘sqllog/mssql/PersistentVolumeClaim/v1’ resource not found

But other than those, it works really well and is a great way to visualise objects in Kubernetes.

Massive thank you to the creator, Philippe Merle!

Thanks for reading!