Performance tuning KubeVirt for SQL Server

Following on from my last post about Getting Started With KubeVirt & SQL Server, in this post I want to see if I can improve on the performance of the initial test I ran.

In the previous test, I used SQL Server 2025 RC1…so I wanted to change that to RTM (now that it’s been released), but I was getting some strange issues running it in the StatefulSet. However, SQL Server 2022 seemed to have no issues, and as much as I want to investigate what’s going on with 2025 (pretty sure it’s host based, not an issue with SQL 2025)…I want to dive into KubeVirt more…so let’s go with 2022 in both KubeVirt and the StatefulSet.

I also separated out the system databases, user database data, and user database log files onto separate volumes…here’s what the StatefulSet manifest looks like: –

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mssql-statefulset
spec:
  serviceName: "mssql"
  replicas: 1
  podManagementPolicy: Parallel
  selector:
    matchLabels:
      name: mssql-pod
  template:
    metadata:
      labels:
        name: mssql-pod
      annotations:
        stork.libopenstorage.org/disableHyperconvergence: "true"
    spec:
      securityContext:
        fsGroup: 10001
      containers:
        - name: mssql-container
          image: mcr.microsoft.com/mssql/rhel/server:2022-CU22-rhel-9.1
          ports:
            - containerPort: 1433
              name: mssql-port
          env:
            - name: MSSQL_PID
              value: "Developer"
            - name: ACCEPT_EULA
              value: "Y"
            - name: MSSQL_AGENT_ENABLED
              value: "1"
            - name: MSSQL_SA_PASSWORD
              value: "Testing1122"
            - name: MSSQL_DATA_DIR
              value: "/opt/sqlserver/data"
            - name: MSSQL_LOG_DIR
              value: "/opt/sqlserver/log"
          resources:
            requests:
              memory: "8192Mi"
              cpu: "4000m"
            limits:
              memory: "8192Mi"
              cpu: "4000m"
          volumeMounts:
            - name: sqlsystem
              mountPath: /var/opt/mssql/
            - name: sqldata
              mountPath: /opt/sqlserver/data/
            - name: sqllog
              mountPath: /opt/sqlserver/log/
  volumeClaimTemplates:
    - metadata:
        name: sqlsystem
      spec:
        accessModes:
         - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
        storageClassName: px-fa-direct-access
    - metadata:
        name: sqldata
      spec:
        accessModes:
         - ReadWriteOnce
        resources:
          requests:
            storage: 50Gi
        storageClassName: px-fa-direct-access
    - metadata:
        name: sqllog
      spec:
        accessModes:
         - ReadWriteOnce
        resources:
          requests:
            storage: 25Gi
        storageClassName: px-fa-direct-access
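Once the pod is up, a quick sanity check is to confirm that new user databases actually land on the separate volumes (the `tpcc` database name here is just an example…substitute whichever database you create): –

```sql
-- Check where the data and log files for a user database ended up
-- (replace 'tpcc' with your database name)
SELECT name, physical_name
FROM sys.master_files
WHERE database_id = DB_ID('tpcc');
-- Data files should resolve under /opt/sqlserver/data and log files under
-- /opt/sqlserver/log if MSSQL_DATA_DIR/MSSQL_LOG_DIR took effect
```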

And here’s what the KubeVirt VM manifest looks like: –

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: win2025
spec:
  runStrategy: Manual # VM will not start automatically
  template:
    metadata:
      labels:
        app: sqlserver
    spec:
      domain:
        firmware:
          bootloader:
            efi:
              secureBoot: false
        resources: # requesting same limits and requests for guaranteed QoS
          requests:
            memory: "8Gi"
            cpu: "4"
          limits:
            memory: "8Gi"
            cpu: "4"
        devices:
          disks:
            # Disk 1: OS
            - name: osdisk
              disk:
                bus: scsi
            # Disk 2: SQL System
            - name: sqlsystem
              disk:
                bus: scsi
            # Disk 3: SQL Data
            - name: sqldata
              disk:
                bus: scsi
            # Disk 4: SQL Log
            - name: sqllog
              disk:
                bus: scsi
            # Windows installer ISO
            - name: cdrom-win2025
              cdrom:
                bus: sata
                readonly: true
            # VirtIO drivers ISO
            - name: virtio-drivers
              cdrom:
                bus: sata
                readonly: true
            # SQL Server installer ISO
            - name: sql2022-iso
              cdrom:
                bus: sata
                readonly: true
          interfaces:
            - name: default
              model: virtio
              bridge: {}
              ports:
                - port: 3389 # port for RDP
                - port: 1433 # port for SQL Server      
      networks:
        - name: default
          pod: {}
      volumes:
        - name: osdisk
          persistentVolumeClaim:
            claimName: winos
        - name: sqlsystem
          persistentVolumeClaim:
            claimName: sqlsystem
        - name: sqldata
          persistentVolumeClaim:
            claimName: sqldata
        - name: sqllog
          persistentVolumeClaim:
            claimName: sqllog
        - name: cdrom-win2025
          persistentVolumeClaim:
            claimName: win2025-pvc
        - name: virtio-drivers
          containerDisk:
            image: kubevirt/virtio-container-disk
        - name: sql2022-iso
          persistentVolumeClaim:
            claimName: sql2022-pvc

I then ran the HammerDB test again…running for 10 minutes with a 2 minute ramp up time. Here are the results: –

# Statefulset result
TEST RESULT : System achieved 46594 NOPM from 108126 SQL Server TPM

# KubeVirt result
TEST RESULT : System achieved 18029 NOPM from 41620 SQL Server TPM

Oooooook…that has made a difference! KubeVirt TPM is now up to 38% of the StatefulSet TPM. But I’m still seeing high privileged CPU time in the KubeVirt VM: –

So I went through the docs and found that there are a whole bunch of options for VM configuration…the first one I tried was the Hyper-V feature. This should allow Windows to use paravirtualized interfaces instead of emulated hardware, reducing VM exit overhead and improving interrupt, timer, and CPU coordination performance.

Here’s what I added to the VM manifest: –

        features:
          hyperv: {} # turns on Hyper-V feature so the guest “thinks” it’s running under Hyper-V - needs the Hyper-V clock timer too, otherwise VM pod will not start
        clock:
          timer:
            hyperv: {} 

N.B. – for more information on what’s happening here, check out this link: –
https://www.qemu.org/docs/master/system/i386/hyperv.html

Stopped/started the VM and then ran the test again. Here are the results: –

TEST RESULT : System achieved 40591 NOPM from 94406 SQL Server TPM

Wait, what!? That made a huge difference…it’s now 87% of the StatefulSet result! AND the privileged CPU time has come down: –

But let’s not stop there…let’s keep going and see if we can get TPM parity between KubeVirt and SQL in a StatefulSet.

There’s a bunch more flags that can be set for the Hyper-V feature and the overall VM, so let’s set some of those: –

        features:
          acpi: {} # ACPI support (power management, shutdown, reboot, device enumeration)
          apic: {} # Advanced Programmable Interrupt Controller (modern interrupt handling for Windows/SQL)
          hyperv: # turns on Hyper-V vendor feature block so the guest “thinks” it’s running under Hyper-V. - needs the Hyper-V clock timer too, otherwise VM pod will not start
            reenlightenment: {} # Allows guest to update its TSC frequency after migrations or time adjustments
            ipi: {} # Hyper-V IPI acceleration - faster inter-processor interrupts between vCPUs
            synic: {} # Hyper-V Synthetic Interrupt Controller - improves interrupt delivery
            synictimer: {} # Hyper-V synthetic timer - stable high-resolution guest time source
            spinlocks:
              spinlocks: 8191 # Prevents Windows spinlock stalls on SMP systems - avoids boot/timeouts under load
            reset: {} # Hyper-V reset infrastructure - cleaner VM resets
            relaxed: {} # Relaxed timing - reduces overhead when timing deviations occur under virtualization
            vpindex: {} # Per-vCPU indexing - improves Windows scheduler awareness of vCPU layout
            runtime: {} # Hyper-V runtime page support - gives guest better insight into hypervisor behavior
            tlbflush: {} # Hyper-V accelerated TLB flush - improves scalability on multi-vCPU workloads
            frequencies: {} # Exposes host CPU frequency data - allows proper scaling & guest timing
            vapic: {} # Virtual APIC support - reduces interrupt latency and overhead
        clock:
          timer:
            hyperv: {} # Hyper-V clock/timer - stable time source, recommended when using Hyper-V enlightenments

Memory and CPU wise…I went and added: –

        ioThreadsPolicy: auto # Automatically allocate IO threads for QEMU to reduce disk I/O contention
        cpu:
          cores: 4
          dedicatedCpuPlacement: true # Guarantees pinned physical CPUs for this VM to improve latency & stability
          isolateEmulatorThread: true # Pins QEMU’s emulator thread to a dedicated pCPU instead of sharing with vCPUs
          model: host-passthrough # Exposes all host CPU features directly to the VM
          numa:
            guestMappingPassthrough: {} # Mirrors host NUMA topology to the guest to reduce cross-node latency
        memory:
          hugepages:
            pageSize: 1Gi # Uses 1Gi hugepages for reduced TLB pressure

N.B. – this required configuring the host to reserve hugepages at boot
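For reference, reserving 1Gi hugepages is typically done via the kernel command line…something like this in /etc/default/grub (the page count here is my choice - the VM requests 8Gi of hugepages-1Gi so at least 8 pages are needed, and I’ve left a little headroom…keep your existing arguments in place of the “…”, then regenerate the grub config and reboot): –

```
GRUB_CMDLINE_LINUX="... default_hugepagesz=1G hugepagesz=1G hugepages=10"
```

After the reboot, grep HugePages /proc/meminfo should show the reserved pages.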

And then for disks…I installed the latest virtio drivers in the VM…switched the disks for the SQL system, data, and log files to use virtio instead of a SCSI bus, and then added the following for each disk: –

dedicatedIOThread: true

Other device settings added were: –

autoattachGraphicsDevice: false # Do not attach a virtual graphics/display device (VNC/SPICE) - removes unnecessary emulation
autoattachMemBalloon: false # Disable the VirtIO memory balloon - prevents dynamic memory changes, improves consistency
autoattachSerialConsole: true # Attach a serial console for debugging and virtctl console access
networkInterfaceMultiqueue: true # Enable multi-queue virtio-net so NIC traffic can use multiple RX/TX queues

All of this results in a bit of a monster manifest file for the VM: –

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: win2025
spec:
  runStrategy: Manual # VM will not start automatically
  template:
    metadata:
      labels:
        app: sqlserver
    spec:
      domain:
        ioThreadsPolicy: auto # Automatically allocate IO threads for QEMU to reduce disk I/O contention
        cpu:
          cores: 4
          dedicatedCpuPlacement: true # Guarantees pinned physical CPUs for this VM - improves latency & stability
          isolateEmulatorThread: true # Pins QEMU’s emulator thread to a dedicated pCPU instead of sharing with vCPUs
          model: host-passthrough # Exposes host CPU features directly to the VM - best performance (but less portable)
          numa:
            guestMappingPassthrough: {} # Mirrors host NUMA topology to the guest - reduces cross-node memory latency
        memory:
          hugepages:
            pageSize: 1Gi # Uses 1Gi hugepages for reduced TLB pressure - better performance for large-memory SQL
        firmware:
          bootloader:
            efi:
              secureBoot: false # Disable Secure Boot (often required when using custom/older virtio drivers)
        features:
          acpi: {} # ACPI support (power management, shutdown, reboot, device enumeration)
          apic: {} # Advanced Programmable Interrupt Controller (modern interrupt handling for Windows/SQL)
          hyperv: # Enable Hyper-V enlightenment features for Windows guests to improve performance & timing
            reenlightenment: {} # Allows guest to update its TSC frequency after migrations or time adjustments
            ipi: {} # Hyper-V IPI acceleration - faster inter-processor interrupts between vCPUs
            synic: {} # Hyper-V Synthetic Interrupt Controller - improves interrupt delivery
            synictimer: {} # Hyper-V synthetic timer - stable high-resolution guest time source
            spinlocks:
              spinlocks: 8191 # Prevents Windows spinlock stalls on SMP systems - avoids boot/timeouts under load
            reset: {} # Hyper-V reset infrastructure - cleaner VM resets
            relaxed: {} # Relaxed timing - reduces overhead when timing deviations occur under virtualization
            vpindex: {} # Per-vCPU indexing - improves Windows scheduler awareness of vCPU layout
            runtime: {} # Hyper-V runtime page support - gives guest better insight into hypervisor behavior
            tlbflush: {} # Hyper-V accelerated TLB flush - improves scalability on multi-vCPU workloads
            frequencies: {} # Exposes host CPU frequency data - allows proper scaling & guest timing
            vapic: {} # Virtual APIC support - reduces interrupt latency and overhead
        clock:
          timer:
            hyperv: {} # Hyper-V clock/timer - stable time source, recommended when using Hyper-V enlightenments
        resources: # requests == limits for guaranteed QoS (exclusive CPU & memory reservation)
          requests:
            memory: "8Gi"
            cpu: "4"
            hugepages-1Gi: "8Gi"
          limits:
            memory: "8Gi"
            cpu: "4"
            hugepages-1Gi: "8Gi"
        devices:
          autoattachGraphicsDevice: false # Do not attach a virtual graphics/display device (VNC/SPICE) - removes unnecessary emulation
          autoattachMemBalloon: false # Disable the VirtIO memory balloon - prevents dynamic memory changes, improves consistency
          autoattachSerialConsole: true # Attach a serial console for debugging and virtctl console access
          networkInterfaceMultiqueue: true # Enable multi-queue virtio-net so NIC traffic can use multiple RX/TX queues
          disks:
            # Disk 1: OS
            - name: osdisk
              disk:
                bus: scsi   # Keep OS disk on SCSI - simpler boot path once VirtIO storage is already in place
              cache: none
            # Disk 2: SQL System
            - name: sqlsystem
              disk:
                bus: virtio
              cache: none
              dedicatedIOThread: true # Give this disk its own IO thread - reduces contention with other disks
            # Disk 3: SQL Data
            - name: sqldata
              disk:
                bus: virtio
              cache: none
              dedicatedIOThread: true # Separate IO thread for data file I/O - improves parallelism under load
            # Disk 4: SQL Log
            - name: sqllog
              disk:
                bus: virtio
              cache: none
              dedicatedIOThread: true # Separate IO thread for log writes - helps with low-latency sequential I/O
            # Windows installer ISO
            - name: cdrom-win2025
              cdrom:
                bus: sata
                readonly: true
            # VirtIO drivers ISO
            - name: virtio-drivers
              cdrom:
                bus: sata
                readonly: true
            # SQL Server installer ISO
            - name: sql2022-iso
              cdrom:
                bus: sata
                readonly: true
          interfaces:
            - name: default
              model: virtio # High-performance paravirtualized NIC (requires NetKVM driver in the guest)
              bridge: {} # Bridge mode - VM gets an IP on the pod network (via the pod’s primary interface)
              ports:
                - port: 3389 # RDP
                - port: 1433 # SQL Server
      networks:
        - name: default
          pod: {} # Attach VM to the default Kubernetes pod network
      volumes:
        - name: osdisk
          persistentVolumeClaim:
            claimName: winos
        - name: sqlsystem
          persistentVolumeClaim:
            claimName: sqlsystem
        - name: sqldata
          persistentVolumeClaim:
            claimName: sqldata
        - name: sqllog
          persistentVolumeClaim:
            claimName: sqllog
        - name: cdrom-win2025
          persistentVolumeClaim:
            claimName: win2025-pvc
        - name: virtio-drivers
          containerDisk:
            image: kubevirt/virtio-container-disk
        - name: sql2022-iso
          persistentVolumeClaim:
            claimName: sql2022-pvc

And then I ran the tests again: –

# StatefulSet
TEST RESULT : System achieved 47200 NOPM from 109554 SQL Server TPM

# KubeVirt
TEST RESULT : System achieved 46563 NOPM from 108184 SQL Server TPM

BOOOOOOOOOM! OK, so that’s 98% of the TPM achieved in the StatefulSet. And there’s a bit of variance in those results so these are now pretty much the same!
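As a sanity check on those percentages, the TPM ratios work out as follows (numbers taken from the test results above…note the final run is compared against the re-run StatefulSet result): –

```shell
# KubeVirt TPM as a percentage of StatefulSet TPM at each stage
awk 'BEGIN {
  printf "separate volumes only : %.1f%%\n", 41620/108126*100
  printf "+ hyperv feature      : %.1f%%\n", 94406/108126*100
  printf "+ full tuning         : %.1f%%\n", 108184/109554*100
}'
```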

Ok, so it’s not the most robust performance testing ever done…and I am fully aware that testing in a lab like this is one thing, whereas running SQL Server in KubeVirt…even in a dev/test environment…is a completely different situation. There are still questions over stability and resiliency, BUT I hope this shows that we shouldn’t be counting KubeVirt out as a platform for SQL Server based on performance.

Thanks for reading!

Running SQL Server on KubeVirt – Getting Started

With all the changes that have happened with VMware since the Broadcom acquisition I have been asked more and more about alternatives for running SQL Server.

One of the options that has repeatedly cropped up is KubeVirt

KubeVirt provides the ability to run virtual machines in Kubernetes…so essentially could provide an option to “lift and shift” VMs from VMware to a Kubernetes cluster.

A bit of background on KubeVirt…it’s a CNCF project accepted in 2019 and moved to “incubating” maturity level in 2022…so it’s been around a while now. KubeVirt uses custom resources and controllers in order to create, deploy, and manage VMs in Kubernetes by using libvirt and QEMU under the hood to provision those virtual machines.

I have to admit, I’m skeptical about this…we already have a way to deploy SQL Server to Kubernetes, and I don’t really see the benefits of deploying an entire VM.

But let’s run through how to get up and running with SQL Server in KubeVirt. There are a bunch of pre-requisites required here so I’ll detail the setup that I’m using.

I went with a physical server for this, as I didn’t want to deal with any nested virtualisation issues (VMs within VMs), and I could only get ONE box…so I’m running a “compressed” Kubernetes cluster, aka the node is both a control-plane and worker node. I also needed a storage provider, and as I work for Pure Storage…I have access to a FlashArray which I’ll provision persistent volumes from via Portworx (the PX-CSI offering to be exact). Portworx provides a CSI driver that exposes FlashArray storage to Kubernetes for PersistentVolume provisioning.

So it’s not an ideal setup…I’ll admit…but should be good enough to get up and running to see what KubeVirt is all about.

Let’s go ahead and get started with KubeVirt.

First thing to do is actually deploy KubeVirt to the cluster…I followed the guide here: –
https://kubevirt.io/user-guide/cluster_admin/installation/

export RELEASE=$(curl https://storage.googleapis.com/kubevirt-prow/release/kubevirt/kubevirt/stable.txt) # set the latest KubeVirt release
kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-operator.yaml # deploy the KubeVirt operator
kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-cr.yaml # create the KubeVirt CR (instance deployment request) which triggers the actual installation

Let’s wait until all the components are up and running: –

kubectl -n kubevirt wait kv kubevirt --for condition=Available

Here’s what each of these components does: –

virt-api        - The API endpoint used by Kubernetes and virtctl to interact with VM and VMI subresources.
virt-controller - Control-plane component that reconciles VM and VMI objects, creates VMIs, and manages migrations.
virt-handler    - Node-level component responsible for running and supervising VMIs and QEMU processes on each node.
virt-operator   - Manages the installation, upgrades, and lifecycle of all KubeVirt core components.

There are two pods each for the controller and operator, as they are Deployments with a default replicas value of 2…I’m running a one-node cluster so I could scale those down, but I’ll leave the defaults for now.

More information on the architecture of KubeVirt can be found here: –
https://kubevirt.io/user-guide/architecture/

And I found this blog post really useful!
https://arthurchiao.art/blog/kubevirt-create-vm/

The next tool we’ll need is the Containerized Data Importer (CDI)…this is the backend component that will allow us to upload ISO files to the Kubernetes cluster, which will then be mounted as persistent volumes when we deploy a VM. The guide I followed was here: –
https://github.com/kubevirt/containerized-data-importer

export VERSION=$(curl -s https://api.github.com/repos/kubevirt/containerized-data-importer/releases/latest | grep '"tag_name":' | sed -E 's/.*"([^"]+)".*/\1/')
kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-operator.yaml
kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-cr.yaml

And again, let’s confirm that all the components are up and running: –

kubectl get all -n cdi

Right, the NEXT tool we’ll need is virtctl…this is the CLI that allows us to deploy/configure/manage VMs in KubeVirt: –

export VERSION=$(curl https://storage.googleapis.com/kubevirt-prow/release/kubevirt/kubevirt/stable.txt)
wget https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/virtctl-${VERSION}-linux-amd64

And confirm that it’s installed (make the binary executable, rename it to virtctl, and add it to a directory in your $PATH): –

virtctl version

Okey dokey, now we need to upload our ISO files for Windows and SQL Server to the cluster.

Note I’m referencing the storage class from my storage provider (PX-CSI) here. Also, I could not get this to work from my desktop…I had to upload the ISO files to the Kubernetes node and run the commands there. The value for the --uploadproxy-url flag is the IP address of the cdi-uploadproxy service (you can find it with kubectl get services -n cdi): –

Uploading the Windows ISO (I went with Windows Server 2025): –

virtctl image-upload pvc win2025-pvc --size 10Gi \
--image-path=./en-us_windows_server_2025_updated_oct_2025_x64_dvd_6c0c5aa8.iso \
--uploadproxy-url=https://10.97.56.82:443 \
--storage-class px-fa-direct-access \
--insecure

And uploading the SQL Server 2025 install ISO: –

virtctl image-upload pvc sql2025-pvc --size 10Gi \
--image-path=./SQLServer2025-x64-ENU.iso \
--uploadproxy-url=https://10.97.56.82:443 \
--storage-class px-fa-direct-access \
--insecure

Let’s confirm the resulting persistent volumes: –

kubectl get pv

Ok, so the next step is to pull down a container image so that it can be referenced in the VM yaml. This image contains the VirtIO drivers needed for Windows to detect the VM’s virtual disks and network interfaces: –

sudo ctr images pull docker.io/kubevirt/virtio-container-disk:latest
sudo ctr images ls | grep virtio

The final thing to do is create the PVCs/PVs that will be used for the OS, SQL data files, and SQL log files within the VM. The yaml is: –

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: winos
spec:
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 100Gi
  storageClassName: px-fa-direct-access
  volumeMode: Block
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sqldata
spec:
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 50Gi
  storageClassName: px-fa-direct-access
  volumeMode: Block
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sqllog
spec:
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 25Gi
  storageClassName: px-fa-direct-access
  volumeMode: Block

And then create!

kubectl apply -f pvc.yaml

Right, now we can create the VM! Below is the yaml I used: –

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: win2025
spec:
  runStrategy: Manual # VM will not start automatically
  template:
    metadata:
      labels:
        app: sqlserver
    spec:
      domain:
        firmware:
          bootloader:
            efi: # uefi boot
              secureBoot: false # disable secure boot
        resources: # requesting same limits and requests for guaranteed QoS
          requests:
            memory: "8Gi"
            cpu: "4"
          limits:
            memory: "8Gi"
            cpu: "4"
        devices:
          disks:
            # Disk 1: OS
            - name: osdisk
              disk:
                bus: scsi
            # Disk 2: SQL Data
            - name: sqldata
              disk:
                bus: scsi
            # Disk 3: SQL Log
            - name: sqllog
              disk:
                bus: scsi
            # Windows installer ISO
            - name: cdrom-win2025
              cdrom:
                bus: sata
                readonly: true
            # VirtIO drivers ISO
            - name: virtio-drivers
              cdrom:
                bus: sata
                readonly: true
            # SQL Server installer ISO
            - name: sql2025-iso
              cdrom:
                bus: sata
                readonly: true
          interfaces:
            - name: default
              model: virtio
              bridge: {}
              ports:
                - port: 3389 # port for RDP
                - port: 1433 # port for SQL Server      
      networks:
        - name: default
          pod: {}
      volumes:
        - name: osdisk
          persistentVolumeClaim:
            claimName: winos
        - name: sqldata
          persistentVolumeClaim:
            claimName: sqldata
        - name: sqllog
          persistentVolumeClaim:
            claimName: sqllog
        - name: cdrom-win2025
          persistentVolumeClaim:
            claimName: win2025-pvc
        - name: virtio-drivers
          containerDisk:
            image: kubevirt/virtio-container-disk
        - name: sql2025-iso
          persistentVolumeClaim:
            claimName: sql2025-pvc

Let’s deploy the VM: –

kubectl apply -f win2025.yaml

And let’s confirm: –

kubectl get vm

So now we’re ready to start the VM and install Windows: –

virtctl start win2025

This will start an instance of the VM we created…to monitor the startup: –

kubectl get vm
kubectl get vmi
kubectl get pods

So we have a virtual machine, an instance of that virtual machine, and a virt-launcher pod…which is actually running the virtual machine by launching the QEMU process for the virtual machine instance.

Once the VM instance has been started, we can connect to it via VNC and run through the Windows installation process. I’m using TigerVNC here.

virtctl vnc win2025 --vnc-path "C:\Tools\vncviewer64-1.15.0.exe" --vnc-type=tiger

Hit any key to boot from the ISO (you may need to go into the boot options first)…and we’re now running through a normal Windows install process!

When the option to select the drive to install Windows appears, we have to load the drivers from the ISO we mounted from the virtio-container-disk:latest container image: –

Once those are loaded, we’ll be able to see all the disks attached to the VM and continue the install process.

When the install completes, we’ll need to check the drivers in Device Manager: –

Go through and install any missing drivers (check disks and anything under “other devices”).

OK, because VNC drives me nuts…once we have Windows installed, we’ll enable remote connections within Windows and then deploy a NodePort service to the cluster to open up port 3389…which will let us RDP to the VM: –

apiVersion: v1
kind: Service
metadata:
  name: win2025-rdp
spec:
  ports:
  - port: 3389
    protocol: TCP
    targetPort: 3389
  selector:
    vm.kubevirt.io/name: win2025
  type: NodePort
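One thing to note here (an optional tweak, not something from my original setup): Kubernetes assigns a random port from the NodePort range (30000–32767 by default) unless you pin one explicitly in the port spec: –

```yaml
  ports:
  - port: 3389
    protocol: TCP
    targetPort: 3389
    nodePort: 30389 # must be within the cluster's NodePort range (default 30000-32767)
```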

Confirm service and port: –

kubectl get services

Once we can RDP, we can continue to configure Windows (if we want to), but the main thing now is to get SQL Server 2025 installed. Don’t forget to bring the disks for the SQL Server data and log files online and format them!

The ISO file containing the SQL install media is mounted within the VM…so it’s just a normal install. Run through the install and confirm it’s successful: –

Once the installation is complete…let’s deploy another NodePort service to allow us to connect to SQL in the VM: –

apiVersion: v1
kind: Service
metadata:
  name: win2025-sql
spec:
  ports:
  - port: 1433
    protocol: TCP
    targetPort: 1433
  selector:
    vm.kubevirt.io/name: win2025
  type: NodePort

Confirm the service: –

kubectl get services

And let’s attempt to connect to SQL Server in SSMS: –

And there is SQL Server running in KubeVirt!

Ok, let’s run a performance test to see how it compares with SQL deployed to the same Kubernetes cluster as a StatefulSet. I used Anthony Nocentino’s containerised HammerDB tool for this…here are the results: –

# Statefulset result
TEST RESULT : System achieved 45319 NOPM from 105739 SQL Server TPM

# KubeVirt result
TEST RESULT : System achieved 5962 NOPM from 13929 SQL Server TPM

OK, well that’s disastrous! That’s 13% of the transactions per minute achieved by the SQL instance in the StatefulSet on the same cluster!

I also noticed a very high CPU privileged time when running the test against the database in the KubeVirt instance, which indicates that the VM is spending a lot of time in kernel or virtualization overhead. This is more than likely caused by incorrectly configured drivers, so it’s definitely not an optimal setup.

So OK, this might not be a perfectly fair test, but the gap is still significant. And it’s a lot of effort to go through just to get an instance of SQL Server up and running. But now that we do have a VM running SQL Server, I’ll explore how (or if) we can clone that VM so we don’t have to repeat this entire process for each new deployment…I’ll cover that in a later blog post. I’ll also see if I can address the performance issues.

But to round things off…deploying SQL as a statefulset to a Kubernetes cluster would still be my recommendation.

Thanks for reading!

Accelerated Database Recovery for tempdb in SQL Server 2025

Accelerated database recovery was introduced in SQL Server 2019 and provides fast recovery, instantaneous transaction rollback, and aggressive log truncation. A complete overview of how ADR achieves this is documented here.

Now in SQL Server 2025 we can switch on ADR for tempdb! Ok, so the fast recovery part isn’t applicable for tempdb…but the instantaneous transaction rollback and aggressive log truncation 100% are! So let’s have a look at what happens when we switch on ADR for tempdb in SQL Server 2025.

Before we run through the test, let’s check how much tempdb log space is in use: –

USE [tempdb];
GO

SELECT database_id, used_log_space_in_percent FROM sys.dm_db_log_space_usage
GO

OK, so not much log in use (as expected).

For this first test, we’ll see what happens without ADR being enabled. To confirm ADR is not enabled: –

SELECT name,is_accelerated_database_recovery_on 
FROM sys.databases
WHERE database_id = 2
GO

Great! So in a new query window, we’ll run: –

USE [tpcc];
GO

SET STATISTICS TIME ON

BEGIN TRANSACTION

SELECT TOP (1000000) * INTO #TempTable
FROM [dbo].[order_line];

UPDATE #TempTable
SET ol_amount = ol_amount + 1;

Just some code to insert data into a temp table and then update it…hopefully generating some log for us to analyse.

Btw, the database I’m using here to pull the data from is generated from Anthony Nocentino’s containerised HammerDB instance (really handy, go check it out).

Once that script has been run, now let’s look at the amount of log used in tempdb: –

USE [tempdb];
GO

SELECT database_id, used_log_space_in_percent FROM sys.dm_db_log_space_usage
GO

97%…nearly full. Ok, I admit…I’ve kept the log pretty small on this instance for the purposes of this demo but still…we should see a difference when we re-run this test with ADR enabled.
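As a quick sanity check, we can confirm it really is our open transaction holding all that log…the sys.dm_tran_database_transactions DMV shows the log bytes generated per transaction in each database (tempdb is database_id 2): –

```sql
-- log generated by active transactions in tempdb (database_id = 2)
SELECT t.transaction_id,
       t.database_transaction_log_bytes_used
FROM sys.dm_tran_database_transactions AS t
WHERE t.database_id = 2;
```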

But first…let’s rollback that transaction: –

ROLLBACK

And the stats from the rollback are: –

SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 0 ms.

SQL Server Execution Times:
CPU time = 1187 ms, elapsed time = 1342 ms.

Didn’t take too long but something needed to be rolled back.

Now let’s look at what difference there is when we enable accelerated database recovery for tempdb.

To enable ADR on tempdb run: –

ALTER DATABASE [tempdb] SET ACCELERATED_DATABASE_RECOVERY = ON
GO

The good news is that this statement does not require an exclusive lock on tempdb to run. The bad news is that the SQL instance needs to be restarted in order for the change to take effect.
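On a Linux install (like the WSL setup used later in this post), that restart would be something like the following…this assumes a systemd-managed instance; if you’re running SQL Server in a container, restart the container instead: –

```
sudo systemctl restart mssql-server
sudo systemctl status mssql-server --no-pager
```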

Once SQL has been restarted, confirm that ADR is enabled: –

SELECT name,is_accelerated_database_recovery_on 
FROM sys.databases
WHERE database_id = 2
GO

Let’s check the space used in the log again: –

SELECT database_id, used_log_space_in_percent FROM sys.dm_db_log_space_usage
GO

Around 4.4%…now run the query in the second window again: –

USE [tpcc];
GO

SET STATISTICS TIME ON

BEGIN TRANSACTION

SELECT TOP (1000000) * INTO #TempTable
FROM [dbo].[order_line];

UPDATE #TempTable
SET ol_amount = ol_amount + 1

Once that has completed, check how much space has been used in the tempdb transaction log: –

SELECT database_id, used_log_space_in_percent FROM sys.dm_db_log_space_usage
GO

56%…way down from the 97% used with ADR disabled. This is because, when ADR is enabled, the transaction log is “aggressively truncated”…even with active transactions! Recovery now relies on the persistent version store (PVS), the SLOG, and only the portion of the transaction log since the last checkpoint. Since there’s no longer a need to retain the log for the whole transaction, the log can be truncated aggressively as checkpoints and backups occur.
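The row versions backing those active transactions now live in the PVS rather than the log. If you want to see the size of the PVS in tempdb, the sys.dm_tran_persistent_version_store_stats DMV (available since SQL Server 2019) can show it: –

```sql
SELECT database_id,
       persistent_version_store_size_kb
FROM sys.dm_tran_persistent_version_store_stats
WHERE database_id = 2;
```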

And now let’s see what happens when we rollback that transaction: –

ROLLBACK

SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 0 ms.

SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 1 ms.

Near instantaneous rollback!

This is because, instead of scanning the transaction log to roll back changes, ADR enables SQL Server to perform a logical revert using the PVS to instantly undo all versioned operations.
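Under the covers, the rolled-back transaction is simply marked as aborted and its versions are cleaned up asynchronously in the background. The sys.dm_tran_aborted_transactions DMV lists the aborted transactions ADR is still tracking: –

```sql
SELECT transaction_id, database_id
FROM sys.dm_tran_aborted_transactions;
```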

So, pretty handy for any workload with heavy use of temporary objects! However, there are a few things to watch out for with ADR…check out the MS docs for the full list.

Thanks for reading!

Deploying SQL Server to Kubernetes via SQL Server 2025

Following on from my previous post about hitting the Kubernetes API from SQL Server 2025, let’s go a little further and deploy SQL Server 2025 to Kubernetes from…SQL Server 2025.

You may be thinking….why? Well…

In all honesty, this is just a bit of fun. I’m playing around with the sp_invoke_external_rest_endpoint stored procedure and I don’t expect anyone to actually do this in a live environment. Anyway…let’s run through how this works.

First things first, we need to update the role created previously to allow our service account access: –

cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods", "services", "persistentvolumeclaims","secrets"]
  verbs: ["get", "list", "watch", "create","update","patch"]
- apiGroups: ["apps"]
  resources: ["statefulsets"]
  verbs: ["get", "list", "watch", "create","update","patch"]
EOF

Storage classes are a cluster-scoped resource so we need to create a ClusterRole to be able to create one: –

cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: mssql-storageclass-deployer
rules:
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch", "create","update","patch"]
EOF

And then we need a ClusterRoleBinding to map the ClusterRole to the service account: –

cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: mssql-storageclass-deployer-binding
subjects:
- kind: ServiceAccount
  name: api-reader
  namespace: default
roleRef:
  kind: ClusterRole
  name: mssql-storageclass-deployer
  apiGroup: rbac.authorization.k8s.io
EOF

Ok, ready to start deploying SQL Server….but before we get to that, we need a storage class. I’ve installed OpenEBS on my cluster so I’ll reference that: –

DECLARE @deploy NVARCHAR(MAX);
EXEC sp_invoke_external_rest_endpoint
    @url = 'https://api.dbafromthecold.local/apis/storage.k8s.io/v1/storageclasses',
    @headers = '{"Authorization":"Bearer eyXXXXX....XXXX"}',
    @method = 'POST',
    @payload = '{
  "apiVersion": "storage.k8s.io/v1",
  "kind": "StorageClass",
  "metadata": {
    "name": "mssql-storage",
    "annotations": {
      "openebs.io/cas-type": "local",
      "cas.openebs.io/config": "- name: StorageType\n  value: hostpath\n- name: BasePath\n  value: /var/local-hostpath"
    }
  },
  "provisioner": "openebs.io/local",
  "reclaimPolicy": "Delete",
  "volumeBindingMode": "WaitForFirstConsumer"
}',
    @response = @deploy OUTPUT

SELECT 
    s.[name]
FROM OPENJSON(@deploy, '$.result')
WITH (
    [name] NVARCHAR(100)           '$.metadata.name'
) AS s;

And we’ll need a secret to store the SQL instance’s sa password: –

DECLARE @deploy NVARCHAR(MAX);
EXEC sp_invoke_external_rest_endpoint
    @url = 'https://api.dbafromthecold.local/api/v1/namespaces/default/secrets',
    @headers = '{"Authorization":"Bearer eyXXXXX....XXXX"}',
    @method = 'POST',
    @payload = '{
  "apiVersion": "v1",
  "kind": "Secret",
  "metadata": {
    "name": "mssql-sa-secret"
  },
  "type": "Opaque",
  "stringData": {
    "MSSQL_SA_PASSWORD": "Testing1122"
  }
}',
    @response = @deploy OUTPUT

SELECT 
    s.[name]
FROM OPENJSON(@deploy, '$.result')
WITH (
    [name] NVARCHAR(100)           '$.metadata.name'
) AS s;

Right! Let’s deploy SQL Server 2025 to Kubernetes with 1 persistent volume claim using the storage class and referencing the secret for the sa password: –

DECLARE @deploy NVARCHAR(MAX);
EXEC sp_invoke_external_rest_endpoint
    @url = 'https://api.dbafromthecold.local/apis/apps/v1/namespaces/default/statefulsets',
    @headers = '{"Authorization":"Bearer eyXXXXX....XXXX"}',
    @method = 'POST',
    @payload = '{
  "apiVersion": "apps/v1",
  "kind": "StatefulSet",
  "metadata": {
    "name": "mssql-statefulset"
  },
  "spec": {
    "serviceName": "mssql",
    "replicas": 1,
    "selector": {
      "matchLabels": {
        "name": "mssql-pod"
      }
    },
    "template": {
      "metadata": {
        "labels": {
          "name": "mssql-pod"
        }
      },
      "spec": {
        "securityContext": {
          "fsGroup": 10001
        },
        "containers": [
          {
            "name": "mssql-container",
            "image": "mcr.microsoft.com/mssql/server:2025-CTP2.0-ubuntu-22.04",
            "ports": [
              {
                "containerPort": 1433,
                "name": "mssql-port"
              }
            ],
            "env": [
              {
                "name": "MSSQL_PID",
                "value": "Developer"
              },
              {
                "name": "ACCEPT_EULA",
                "value": "Y"
              },
              {
                "name": "MSSQL_AGENT_ENABLED",
                "value": "1"
              },
              {
                "name": "MSSQL_SA_PASSWORD",
                "valueFrom": {
                  "secretKeyRef": {
                    "name": "mssql-sa-secret",
                    "key": "MSSQL_SA_PASSWORD"
                  }
                }
              }
            ],
            "volumeMounts": [
              {
                "name": "sqlsystem",
                "mountPath": "/var/opt/mssql"
              }
            ]
          }
        ]
      }
    },
    "volumeClaimTemplates": [
      {
        "metadata": {
          "name": "sqlsystem"
        },
        "spec": {
          "accessModes": [
            "ReadWriteOnce"
          ],
          "resources": {
            "requests": {
              "storage": "1Gi"
            }
          },
          "storageClassName": "mssql-storage"
        }
      }
    ]
  }
}',
    @response = @deploy OUTPUT

SELECT 
    s.[name],
    s.[namespace],
    s.replicas
FROM OPENJSON(@deploy, '$.result')
WITH (
    [name] NVARCHAR(100)           '$.metadata.name',
    [namespace] NVARCHAR(100)      '$.metadata.namespace',
    replicas INT                   '$.spec.replicas'
) AS s;

And finally, we’ll need a load balanced service (using MetalLB, which I’ve previously installed on the cluster) to connect to SQL Server within Kubernetes: –

DECLARE @deploy NVARCHAR(MAX);
EXEC sp_invoke_external_rest_endpoint
    @url = 'https://api.dbafromthecold.local/api/v1/namespaces/default/services',
    @headers = '{"Authorization":"Bearer eyXXXXX....XXXX"}',
    @method = 'POST',
    @payload = '{
  "apiVersion": "v1",
  "kind": "Service",
  "metadata": {
    "name": "mssql-service"
  },
  "spec": {
    "ports": [
      {
        "name": "mssql-port",
        "port": 1433,
        "targetPort": 1433
      }
    ],
    "selector": {
      "name": "mssql-pod"
    },
    "type": "LoadBalancer"
  }
}',
    @response = @deploy OUTPUT

SELECT 
    s.[name],
    s.[namespace]
FROM OPENJSON(@deploy, '$.result')
WITH (
    [name] NVARCHAR(100)           '$.metadata.name',
    [namespace] NVARCHAR(100)      '$.metadata.namespace'
) AS s;

Cool! Let’s confirm on the cluster: –

kubectl get sc
kubectl get pvc
kubectl get secret
kubectl get all

Final test, connecting to the instance in SSMS: –

Thanks for reading!

Accessing the Kubernetes API from SQL Server 2025

The sp_invoke_external_rest_endpoint stored procedure that’s available in SQL Server 2025 allows SQL Server to hit external REST endpoints…which opens up quite a few interesting options.

I was thinking about this the other day and it occurred to me that Kubernetes has a REST API 🙂

So can we hit that from within SQL Server?

Let’s have a look at how to do it. There are a few steps; here’s what we’re going to do: –

1. Create a certificate authority and a signed certificate
2. Deploy a reverse proxy to Kubernetes
3. Configure SQL Server to be able to hit the reverse proxy
4. Use the stored procedure to hit the Kubernetes API via the reverse proxy

Ok, let’s go!

First thing to do is create a CA and then a signed certificate. It would be nice if we could just upload the root certificate from the Kubernetes cluster but (and trust me here, I’ve tried for longer than I’m going to admit)…it just doesn’t seem to work. All you’ll get is this infuriating error: –

Msg 31608, Level 16, State 24, Procedure sys.sp_invoke_external_rest_endpoint_internal, Line 1 [Batch Start Line 0]
An error occurred, failed to communicate with the external rest endpoint. HRESULT: 0x80072ee7.

So creating our own signed certificate seems to be the only way forward.

(but I’m going to keep trying, so if I do get it working…I’ll update here)

One thing to note, I’m running SQL Server 2025 in WSL on Windows 11…hitting a local Kubernetes cluster with MetalLB installed, running in Hyper-V.

Cool ok first things first, set some variables: –

DOMAIN="api.dbafromthecold.local"
DAYS_VALID=365
NAMESPACE="default"
CA_KEY="CA.key"
CA_CERT="CA.pem"

Create the certificate authority: –

openssl genrsa -out $CA_KEY 2048
openssl req -x509 -new -nodes -key $CA_KEY -sha256 -days 3650 -out $CA_CERT \
-subj "/C=IE/ST=Local/L=Test/O=TestCA/CN=My Local CA"

Then we’re going to create a key and a certificate signing request: –

openssl genrsa -out $DOMAIN.key 2048
openssl req -new -key $DOMAIN.key -out $DOMAIN.csr \
-subj "/C=IE/ST=Local/L=Test/O=TestAPI/CN=$DOMAIN"

Use the CA certificate and key to sign the CSR and generate a server certificate: –

openssl x509 -req -in $DOMAIN.csr -CA $CA_CERT -CAkey $CA_KEY -CAcreateserial \
-out $DOMAIN.crt -days $DAYS_VALID -sha256

OK, great…we have our signed certificate. Now let’s deploy a reverse nginx proxy to Kubernetes!

Create a secret in kubernetes to hold the signed certificate and key: –

kubectl create secret tls k8s-api-cert \
--cert=$DOMAIN.crt --key=$DOMAIN.key \
--namespace $NAMESPACE --dry-run=client -o yaml | kubectl apply -f -

This stores the certificate and key which we’ll later mount into the nginx pod.
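To sanity-check the secret, you can pull the certificate back out and inspect it (this assumes openssl is available locally): –

```
kubectl get secret k8s-api-cert -n $NAMESPACE -o jsonpath='{.data.tls\.crt}' \
| base64 -d | openssl x509 -noout -subject -dates
```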

Deploy the ingress controller, which’ll expose our proxy: –

helm upgrade --install ingress-nginx ingress-nginx \
--repo https://kubernetes.github.io/ingress-nginx \
--namespace ingress-nginx --create-namespace

Confirm the ingress controller is up and running: –

kubectl get all -n ingress-nginx

Now create a config map containing the reverse proxy configuration: –

kubectl create configmap k8s-api-nginx-conf --from-literal=nginx.conf='
events {}
http {
  server {
    listen 443 ssl;
    ssl_certificate /etc/nginx/certs/tls.crt;
    ssl_certificate_key /etc/nginx/certs/tls.key;
    location / {
      proxy_pass https://10.0.0.41:6443;
      proxy_ssl_verify off;
    }
  }
}
' -n $NAMESPACE --dry-run=client -o yaml | kubectl apply -f -

N.B. – 10.0.0.41:6443 is the IP address and port of the kube-apiserver

This config will tell nginx to accept HTTPS connections (required for sp_invoke_external_rest_endpoint), terminate TLS using the secret we created, and proxy the request to the Kubernetes API server.

Deploy nginx, mounting the config map and TLS secret to enable the reverse proxy: –

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-api-proxy
  namespace: $NAMESPACE
spec:
  replicas: 1
  selector:
    matchLabels:
      app: k8s-api-proxy
  template:
    metadata:
      labels:
        app: k8s-api-proxy
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 443
        volumeMounts:
        - name: nginx-conf
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
        - name: nginx-cert
          mountPath: /etc/nginx/certs
          readOnly: true
      volumes:
      - name: nginx-conf
        configMap:
          name: k8s-api-nginx-conf
      - name: nginx-cert
        secret:
          secretName: k8s-api-cert
EOF

Create a service to expose the nginx deployment: –

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: k8s-api-proxy
  namespace: $NAMESPACE
spec:
  selector:
    app: k8s-api-proxy
  ports:
    - protocol: TCP
      port: 443
      targetPort: 443
EOF

And now, create an ingress resource to route traffic to the nginx service: –

kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: k8s-api-ingress
  namespace: $NAMESPACE
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - $DOMAIN
      secretName: k8s-api-cert
  rules:
    - host: $DOMAIN
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: k8s-api-proxy
                port:
                  number: 443
EOF

Getting there! Let’s check the pod and the ingress: –

kubectl get pods
kubectl get ingress

Ha ha! Ok, we’re almost ready to hit the Kubernetes API. But first we have to configure SQL to trust the certificate.

To do that we need to copy the certificate to /var/opt/mssql/security/ca-certificates. That location doesn’t exist by default so it needs to be created.

sudo mkdir -p /var/opt/mssql/security/ca-certificates

sudo cp $CA_CERT /var/opt/mssql/security/ca-certificates/$CA_CERT
sudo chown mssql:mssql /var/opt/mssql/security/ca-certificates/$CA_CERT

Then restart SQL Server: –

sudo systemctl restart mssql-server

Check the SQL Server’s error log, we’re looking for: –

Successfully placed CA.crt in trusted root store

Oh, and if we want to test with curl…we need to place the cert in the server’s trust store: –

sudo cp $CA_CERT /usr/local/share/ca-certificates/myCA.crt
sudo update-ca-certificates

But before we test, we need to make sure that SQL Server can resolve our domain name. That means adding an entry into the /etc/hosts file: –

10.0.0.50 api.dbafromthecold.local
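10.0.0.50 is the external IP that MetalLB assigned to the ingress controller. If you need to look it up, something like this should do it (service name assumed from the ingress-nginx Helm chart defaults): –

```
kubectl get svc ingress-nginx-controller -n ingress-nginx \
-o jsonpath='{.status.loadBalancer.ingress[0].ip}'

echo "10.0.0.50 api.dbafromthecold.local" | sudo tee -a /etc/hosts
```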

Let’s try it!

curl https://api.dbafromthecold.local/version

Boooooom! It works! Now let’s try within SQL Server. Enable the stored procedure: –

EXECUTE sp_configure 'external rest endpoint enabled', 1;
RECONFIGURE WITH OVERRIDE;

And let’s go….

DECLARE @version NVARCHAR(MAX);

EXEC sp_invoke_external_rest_endpoint
@url = 'https://api.dbafromthecold.local/version',
@method = 'GET',
@response = @version OUTPUT
PRINT @version

Niiiiiiice! But what if we want to list the pods in the cluster?

The API endpoint is reachable but if we try to hit it right now, we’ll get: –

{
  "status": "Failure",
  "message": "pods is forbidden: User \"system:anonymous\" cannot list resource \"pods\" in API group \"\" in the namespace \"default\"",
  "reason": "Forbidden",
  "details": {
    "kind": "pods"
  },
  "code": 403
}

The reason that’s happening is that the request is reaching the Kubernetes API server, but the user it’s running as — system:anonymous — doesn’t have permission to list pods.

Let’s fix that by:

1. Creating a service account in Kubernetes
2. Creating a role that allows reading (and creating 🙂 ) of pods
3. Binding the role to the service account
4. Extracting a token from that service account
5. Using that token in the stored procedure

So create the service account: –

kubectl create serviceaccount api-reader

And a role that allows listing and creating pods: –

kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch","create"]
EOF

Now bind that role to the service account: –

kubectl create rolebinding pod-reader-binding \
--role=pod-reader \
--serviceaccount=default:api-reader \
--namespace=default

Create a token for that service account: –

kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: api-reader-token
  annotations:
    kubernetes.io/service-account.name: api-reader
type: kubernetes.io/service-account-token
EOF

And grab the token: –

kubectl get secret api-reader-token -o jsonpath="{.data.token}" | base64 -d

This’ll give us a big long token that we can use in the header parameter of the stored procedure…like this: –

DECLARE @pods NVARCHAR(MAX);

EXEC sp_invoke_external_rest_endpoint
@url = 'https://api.dbafromthecold.local/api/v1/namespaces/default/pods',
@headers = '{"Authorization":"Bearer eyXXXXX....XXXX"}',
@method = 'GET',
@response = @pods OUTPUT
PRINT @pods

But that’ll just give us a big chunk of json back…let’s try and parse it a little.

DECLARE @pods NVARCHAR(MAX);

EXEC sp_invoke_external_rest_endpoint
@url = 'https://api.dbafromthecold.local/api/v1/namespaces/default/pods',
@headers = '{"Authorization":"Bearer eyXXXXX....XXXX"}',
@method = 'GET',
@response = @pods OUTPUT

SELECT
    pod_name,
    namespace,
    container_image,
    pod_ip,
    status
FROM OPENJSON(@pods, '$.result.items')
WITH (
    pod_name NVARCHAR(100)        '$.metadata.name',
    namespace NVARCHAR(100)       '$.metadata.namespace',
    container_image NVARCHAR(100) '$.spec.containers[0].image',
    pod_ip NVARCHAR(50)           '$.status.podIP',
    status NVARCHAR(50)           '$.status.phase'
);

And that’ll return a row for each pod in the cluster (in the default namespace).

This cluster only has the one pod in the default namespace (the nginx pod) so the results will be: –

pod_name             namespace  container_image  pod_ip      status
k8s-api-proxy-abcde  default    nginx:latest     10.244.0.5  Running

Phew! Still with me? Great! One more thing then…what about deploying a pod to Kubernetes from SQL Server?

That’s why we created the role with the create verb earlier 🙂

Let’s keep it simple and just create another nginx pod in the default namespace in the cluster: –

DECLARE @deploy NVARCHAR(MAX);
EXEC sp_invoke_external_rest_endpoint
    @url = 'https://api.dbafromthecold.local/api/v1/namespaces/default/pods',
    @headers = '{"Authorization":"Bearer eyXXXXX....XXXX"}',
    @method = 'POST',
    @payload = '{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "nginx",
    "namespace": "default"
  },
  "spec": {
    "containers": [
      {
        "name": "nginx",
        "image": "nginx:latest",
        "ports": [
          {
            "containerPort": 80
          }
        ]
      }
    ]
  }
}',
@response = @deploy OUTPUT
PRINT @deploy

And now if we check the cluster: –

kubectl get pods

There’s our pod!

So that’s how to hit the Kubernetes API via a reverse proxy from SQL Server 2025 using sp_invoke_external_rest_endpoint 🙂

Cue maniacal laughter.

Thanks for reading!