
EightKB 2026

EightKB is back again for 2026!

The biggest online SQL Server internals conference is back…it’s all happening on August the 20th!

We’ve opened our call for speakers, you can submit here: –
https://sessionize.com/eightkb-august-2026/

As a speaker this is your chance to really go all out! If you’ve ever wanted to deep dive into a topic, this is the event to do so. No topic is too advanced…you can do as many (or as few, or none at all!) demos as you would like. Field questions during the session or respond after the event…completely up to you.

EightKB is about featuring experts in their field, not expert speakers. If you haven’t presented before, we offer mentoring as part of our speaker program to help you prepare for your session so that you can enjoy presenting on the day. And even if you’ve presented at a tonne of events before…we’ll be happy to review your session! Completely up to you!

Continuing on from last year, only four of the sessions have to focus on SQL Server internals. The fifth session can be on ANY TECH TOPIC YOU LIKE, as long as it’s 300 level and above!

As ever, speakers do not have to use a slide template, and we don’t ask for speakers to add our logo to their deck. We just want you to turn up and enjoy presenting!

After the event, we’ll provide feedback on your session from the attendees and an unbranded video of your session that you can use however you would like.

Hope to see you there!


Presenting with Visual Studio Code

A while back I wrote a quick post on setting up key mappings in Visual Studio Code…they make presenting (and generally working) in Visual Studio Code really smooth.

But one thing that kinda bugs me is the location of the terminal…I’ve always had it at the bottom, which is generally fine, and I know you can move it around (top, right, left)…however I’ve found that when presenting, space is at a premium. I bump up the font size and this can result in a lot of scrolling through results in the terminal, which ain’t great.

But what if we could have a similar setup to how Paul Randal has his SQL Server Management Studio configured?

What I mean is, can we have a PowerShell terminal as a tab next to the editor? This would be great when running scripts with a large output…no more scrolling!

Here’s how it looks in SQL Server Management Studio: –

So let’s make VS Code open a PowerShell terminal in a separate tab. Add this to settings.json: –

"terminal.integrated.defaultLocation": "editor"

And let’s make the highlighting yellow: –

"workbench.colorCustomizations": {
    "editor.selectionBackground": "#fff59d",
    "editor.selectionHighlightBackground": "#fff59d80",
    "editor.wordHighlightBackground": "#fff59d66",
    "editor.wordHighlightStrongBackground": "#fff17699"
},

Here’s what VS Code looks like now: –

OK, I’ll bet loads of people know about this but hey, hope this helps someone out there…I think it looks really good!

Combining this with the key mappings and ZoomIt (Ctrl+2 call outs) allows me to present code clearly and smoothly…no more waving the mouse around 🙂

Oh and remember…when presenting, don’t use dark mode 😀

Thanks for reading!


Startup scripts in SQL Server containers

I was messing around performing investigative work on a pod running SQL Server 2025 in Kubernetes the other day and noticed something…the sqlservr process is no longer PID 1 in its container.

Instead there is: –

Hmm, OK…we have a script /opt/mssql/bin/launch_sqlservr.sh, and then the sqlservr binary is called.

I swear this wasn’t always the case…had I seen that before? I started to doubt myself, so I spun up a pod running an older version of SQL Server (2019 CU5) and took a look: –

Ahh OK, there has been a change. Those two processes there are expected: one is essentially a watcher process and the other is SQL Server (full details here: –
https://techcommunity.microsoft.com/blog/sqlserver/sql-server-on-linux-why-do-i-have-two-sql-server-processes/3204412)

I went and had a look at a 2022 image and that script is there as well…so there has been a change at some point to execute that script first in the container (not sure when and I’m not going back to check all the different images 🙂 )

Right, but what is that script doing?

Now this is a bit of a rabbit hole but from what I can work out, that script calls three other scripts: –

/opt/mssql/bin/permissions_check.sh
Checks the location and ownership of the master database.

/opt/mssql/bin/init_custom_setup.sh
Determines whether one-time SQL Server initialization should run on first startup.

/opt/mssql/bin/run_custom_setup.sh
If initialisation is enabled, waits for SQL Server to be ready, then uses environment variables and the setup-scripts directory to perform a custom setup.
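If you want to poke around yourself, the scripts can be read straight out of a running container (the pod name mssql-0 here is an assumption…swap in your own):

```shell
# Dump the launch script and see what else ships alongside it
# ("mssql-0" is a hypothetical pod name - use your own pod here)
kubectl exec mssql-0 -- cat /opt/mssql/bin/launch_sqlservr.sh
kubectl exec mssql-0 -- ls -l /opt/mssql/bin/
```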

Oooooh, OK…custom setup available? Let’s have a look at that.

Essentially it comes down to whether or not SQL is spinning up for the first time (so we haven’t persisted data from one container to another) and if certain environment variables are set…these are: –

MSSQL_DB – used to create a database
MSSQL_USER – login/user for that database
MSSQL_PASSWORD – password for that login
MSSQL_SETUP_SCRIPTS_LOCATION – location for custom scripts
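The first-start gating described above can be sketched roughly like this…this is my reconstruction of the logic, NOT the actual init_custom_setup.sh, and approximating “first start” with a master.mdf check is an assumption:

```shell
# Rough reconstruction (NOT the actual init_custom_setup.sh) of the gating:
# custom setup runs only on a first start AND when the env vars are set.
should_run_custom_setup() {
  data_dir="$1"
  # "first start" approximated as "no master database yet" - an assumption
  [ ! -f "$data_dir/data/master.mdf" ] || return 1
  [ -n "$MSSQL_DB" ] && [ -n "$MSSQL_USER" ] && [ -n "$MSSQL_PASSWORD" ]
}

# Example: fresh data dir + variables set -> setup would run
MSSQL_DB=testdatabase MSSQL_USER=testuser MSSQL_PASSWORD=secret
if should_run_custom_setup /var/opt/mssql; then echo "would run custom setup"; fi
```

The real script obviously does more than this, but that’s the shape of it.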

Nice…so let’s have a go at using those!

Here’s a SQL Server 2025 Kubernetes manifest using the first three: –

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mssql-statefulset-test
spec:
  serviceName: "mssql"
  replicas: 1
  podManagementPolicy: Parallel
  selector:
    matchLabels:
      name: mssql-pod
  template:
    metadata:
      labels:
        name: mssql-pod
    spec:
      securityContext:
        fsGroup: 10001
      containers:
        - name: mssql-container-test
          image: mcr.microsoft.com/mssql/server:2025-RTM-ubuntu-22.04
          ports:
            - containerPort: 1433
              name: mssql-port
          env:
            - name: ACCEPT_EULA
              value: "Y"
            - name: MSSQL_SA_PASSWORD
              value: "Testing1122"
            - name: MSSQL_DB
              value: "testdatabase"
            - name: MSSQL_USER
              value: "testuser"
            - name: MSSQL_PASSWORD
              value: "Testing112233"
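Deploying that is the usual apply-and-watch (the manifest filename here is an assumption…use whatever you saved it as):

```shell
# Deploy the StatefulSet and follow SQL Server's startup output
# (manifest filename is an assumption)
kubectl apply -f mssql-statefulset-test.yaml
kubectl rollout status statefulset/mssql-statefulset-test
kubectl logs mssql-statefulset-test-0 --follow
```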

Then if we look at the logs for SQL in that pod (I’ve stripped out the normal startup messages): –

Creating database testdatabase
2026-01-23 10:56:38.48 spid51      [DBMgr::FindFreeDatabaseID] Next available DbId EX locked: 5
2026-01-23 10:56:38.56 spid51      Starting up database 'testdatabase'.
2026-01-23 10:56:38.59 spid51      Parallel redo is started for database 'testdatabase' with worker pool size [2].
2026-01-23 10:56:38.60 spid51      Parallel redo is shutdown for database 'testdatabase' with worker pool size [2].
Creating login testuser with password defined in MSSQL_PASSWORD environment variable
Changed database context to 'testdatabase'.

There it is creating the database! Cool!
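We can also connect as the new login to confirm everything is wired up…a quick check (the sqlcmd tools path is an assumption, it differs between image versions):

```shell
# Connect as the login created from MSSQL_USER/MSSQL_PASSWORD
# (tools18 path is an assumption; older images use /opt/mssql-tools/bin)
kubectl exec mssql-statefulset-test-0 -- /opt/mssql-tools18/bin/sqlcmd \
  -S localhost -U testuser -P 'Testing112233' -d testdatabase \
  -C -Q "SELECT DB_NAME() AS current_db;"
```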

But what about the last environment variable, the custom scripts location?

From the startup scripts, this has a default value of /mssql-server-setup-scripts.d so let’s drop a script in there and see what happens.

To do this I created a simple T-SQL script to create a test database: –

CREATE DATABASE testdatabase2;

And then created a configmap in Kubernetes referencing that script: –

kubectl create configmap mssql-setup-scripts --from-file=./create-database.sql

Now we can reference that in our SQL manifest: –

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mssql-statefulset-test
spec:
  serviceName: "mssql"
  replicas: 1
  podManagementPolicy: Parallel
  selector:
    matchLabels:
      name: mssql-pod
  template:
    metadata:
      labels:
        name: mssql-pod
    spec:
      securityContext:
        fsGroup: 10001
      containers:
        - name: mssql-container-test
          image: mcr.microsoft.com/mssql/server:2025-RTM-ubuntu-22.04
          ports:
            - containerPort: 1433
              name: mssql-port
          env:
            - name: ACCEPT_EULA
              value: "Y"
            - name: MSSQL_SA_PASSWORD
              value: "Testing1122"
          volumeMounts:
            - name: setup-scripts
              mountPath: /mssql-server-setup-scripts.d
              readOnly: true
      volumes:
        - name: setup-scripts
          configMap:
            name: mssql-setup-scripts

And now we have these entries in the SQL startup log: –

Executing custom setup script /mssql-server-setup-scripts.d/create-database.sql
2026-01-23 11:08:52.08 spid60      Starting up database 'testdatabase2'.

Ha, and there’s our script being executed and the database created!

I had a look around and couldn’t see this documented anywhere (it may be somewhere though) but hey, another way of customising SQL Server in a container.

Although in reality I’d probably use a custom image for SQL Server…this was fun to dive into 🙂

Thanks for reading!


Data Céilí 2026 Call for Speakers!

Data Céilí 2026 Call for Speakers is now live!

Data Céilí (pronounced kay-lee), is Ireland’s free, community led, Microsoft Data Platform event.

We had a fantastic event this year so…we’re back in the summer of 2026!

The event will be held at Trinity College in the centre of Dublin, with pre-cons on the 11th of June and the main event on the 12th.

The Call for Speakers has opened and can be found here: –
https://sessionize.com/data-ceili-2026/

We’re looking for anything covering the Microsoft Data Platform, from beginner sessions to expert! So calling all you fantastic speakers out there, we would love for you to come and speak at Ireland’s best Microsoft Data Platform conference.

Hope to see you there!


Performance tuning KubeVirt for SQL Server

Following on from my last post about Getting Started With KubeVirt & SQL Server, in this post I want to see if I can improve the performance from the initial test I ran.

In the previous test, I used SQL Server 2025 RC1…so I wanted to change that to RTM (now that it’s been released), but I was getting some strange issues running it in the StatefulSet. However, SQL Server 2022 seemed to have no issues, and as much as I want to investigate what’s going on with 2025 (pretty sure it’s host based, not an issue with SQL 2025)…I want to dive into KubeVirt more…so let’s go with 2022 in both KubeVirt and the StatefulSet.

I also separated out the system databases, user database data, and user database log files onto separate volumes…here’s what the StatefulSet manifest looks like: –

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mssql-statefulset
spec:
  serviceName: "mssql"
  replicas: 1
  podManagementPolicy: Parallel
  selector:
    matchLabels:
      name: mssql-pod
  template:
    metadata:
      labels:
        name: mssql-pod
      annotations:
        stork.libopenstorage.org/disableHyperconvergence: "true"
    spec:
      securityContext:
        fsGroup: 10001
      containers:
        - name: mssql-container
          image: mcr.microsoft.com/mssql/rhel/server:2022-CU22-rhel-9.1
          ports:
            - containerPort: 1433
              name: mssql-port
          env:
            - name: MSSQL_PID
              value: "Developer"
            - name: ACCEPT_EULA
              value: "Y"
            - name: MSSQL_AGENT_ENABLED
              value: "1"
            - name: MSSQL_SA_PASSWORD
              value: "Testing1122"
            - name: MSSQL_DATA_DIR
              value: "/opt/sqlserver/data"
            - name: MSSQL_LOG_DIR
              value: "/opt/sqlserver/log"
          resources:
            requests:
              memory: "8192Mi"
              cpu: "4000m"
            limits:
              memory: "8192Mi"
              cpu: "4000m"
          volumeMounts:
            - name: sqlsystem
              mountPath: /var/opt/mssql/
            - name: sqldata
              mountPath: /opt/sqlserver/data/
            - name: sqllog
              mountPath: /opt/sqlserver/log/
  volumeClaimTemplates:
    - metadata:
        name: sqlsystem
      spec:
        accessModes:
         - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
        storageClassName: px-fa-direct-access
    - metadata:
        name: sqldata
      spec:
        accessModes:
         - ReadWriteOnce
        resources:
          requests:
            storage: 50Gi
        storageClassName: px-fa-direct-access
    - metadata:
        name: sqllog
      spec:
        accessModes:
         - ReadWriteOnce
        resources:
          requests:
            storage: 25Gi
        storageClassName: px-fa-direct-access

And here’s what the KubeVirt VM manifest looks like: –

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: win2025
spec:
  runStrategy: Manual # VM will not start automatically
  template:
    metadata:
      labels:
        app: sqlserver
    spec:
      domain:
        firmware:
          bootloader:
            efi:
              secureBoot: false
        resources: # requesting same limits and requests for guaranteed QoS
          requests:
            memory: "8Gi"
            cpu: "4"
          limits:
            memory: "8Gi"
            cpu: "4"
        devices:
          disks:
            # Disk 1: OS
            - name: osdisk
              disk:
                bus: scsi
            # Disk 2: SQL System
            - name: sqlsystem
              disk:
                bus: scsi
            # Disk 3: SQL Data
            - name: sqldata
              disk:
                bus: scsi
            # Disk 4: SQL Log
            - name: sqllog
              disk:
                bus: scsi
            # Windows installer ISO
            - name: cdrom-win2025
              cdrom:
                bus: sata
                readonly: true
            # VirtIO drivers ISO
            - name: virtio-drivers
              cdrom:
                bus: sata
                readonly: true
            # SQL Server installer ISO
            - name: sql2022-iso
              cdrom:
                bus: sata
                readonly: true
          interfaces:
            - name: default
              model: virtio
              bridge: {}
              ports:
                - port: 3389 # port for RDP
                - port: 1433 # port for SQL Server
      networks:
        - name: default
          pod: {}
      volumes:
        - name: osdisk
          persistentVolumeClaim:
            claimName: winos
        - name: sqlsystem
          persistentVolumeClaim:
            claimName: sqlsystem
        - name: sqldata
          persistentVolumeClaim:
            claimName: sqldata
        - name: sqllog
          persistentVolumeClaim:
            claimName: sqllog
        - name: cdrom-win2025
          persistentVolumeClaim:
            claimName: win2025-pvc
        - name: virtio-drivers
          containerDisk:
            image: kubevirt/virtio-container-disk
        - name: sql2022-iso
          persistentVolumeClaim:
            claimName: sql2022-pvc

I then ran the HammerDB test again…running for 10 minutes with a 2 minute ramp up time. Here are the results: –


StatefulSet result

TEST RESULT : System achieved 46594 NOPM from 108126 SQL Server TPM

KubeVirt result

TEST RESULT : System achieved 18029 NOPM from 41620 SQL Server TPM

Oooooook…that has made a difference! KubeVirt TPM is now up to 38% of the StatefulSet TPM. But I’m still seeing a high privileged CPU time in the KubeVirt VM: –

So I went through the docs and found that there are a whole bunch of options for VM configuration…the first one I tried was the Hyper-V feature. This should allow Windows to use paravirtualized interfaces instead of emulated hardware, reducing VM exit overhead and improving interrupt, timer, and CPU coordination performance.

Here’s what I added to the VM manifest: –

        features:
          hyperv: {} # turns on Hyper-V feature so the guest “thinks” it’s running under Hyper-V - needs the Hyper-V clock timer too, otherwise VM pod will not start
        clock:
          timer:
            hyperv: {} 

N.B. – for more information on what’s happening here, check out this link: –
https://www.qemu.org/docs/master/system/i386/hyperv.html

Stopped/started the VM and then ran the test again. Here are the results: –

TEST RESULT : System achieved 40591 NOPM from 94406 SQL Server TPM

Wait, what!? That made a huge difference…it’s now 87% of the StatefulSet result! AND the privileged CPU time has come down: –

But let’s not stop there…let’s keep going and see if we can get TPM parity between KubeVirt and SQL in a StatefulSet.

There’s a bunch more flags that can be set for the Hyper-V feature and the overall VM, so let’s set some of those: –

        features:
          acpi: {} # ACPI support (power management, shutdown, reboot, device enumeration)
          apic: {} # Advanced Programmable Interrupt Controller (modern interrupt handling for Windows/SQL)
          hyperv: # turns on Hyper-V vendor feature block so the guest “thinks” it’s running under Hyper-V. - needs the Hyper-V clock timer too, otherwise VM pod will not start
            reenlightenment: {} # Allows guest to update its TSC frequency after migrations or time adjustments
            ipi: {} # Hyper-V IPI acceleration - faster inter-processor interrupts between vCPUs
            synic: {} # Hyper-V Synthetic Interrupt Controller - improves interrupt delivery
            synictimer: {} # Hyper-V synthetic timer - stable high-resolution guest time source
            spinlocks:
              spinlocks: 8191 # Prevents Windows spinlock stalls on SMP systems - avoids boot/timeouts under load
            reset: {} # Hyper-V reset infrastructure - cleaner VM resets
            relaxed: {} # Relaxed timing - reduces overhead when timing deviations occur under virtualization
            vpindex: {} # Per-vCPU indexing - improves Windows scheduler awareness of vCPU layout
            runtime: {} # Hyper-V runtime page support - gives guest better insight into hypervisor behavior
            tlbflush: {} # Hyper-V accelerated TLB flush - improves scalability on multi-vCPU workloads
            frequencies: {} # Exposes host CPU frequency data - allows proper scaling & guest timing
            vapic: {} # Virtual APIC support - reduces interrupt latency and overhead
        clock:
          timer:
            hyperv: {} # Hyper-V clock/timer - stable time source, recommended when using Hyper-V enlightenments

Memory and CPU wise…I went and added: –

        ioThreadsPolicy: auto # Automatically allocate IO threads for QEMU to reduce disk I/O contention
        cpu:
          cores: 4
          dedicatedCpuPlacement: true # Guarantees pinned physical CPUs for this VM to improve latency & stability
          isolateEmulatorThread: true # Pins QEMU’s emulator thread to a dedicated pCPU instead of sharing with vCPUs
          model: host-passthrough # Exposes all host CPU features directly to the VM
          numa:
            guestMappingPassthrough: {} # Mirrors host NUMA topology to the guest to reduce cross-node latency
        memory:
          hugepages:
            pageSize: 1Gi # Uses 1Gi hugepages for reduced TLB pressure

N.B. – this required configuring the host to reserve hugepages at boot
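For reference, here’s one way of doing that on a RHEL-style host (the grubby flags and the page count are assumptions…size them to your own hosts and reboot after applying):

```shell
# Reserve 1Gi hugepages at boot (run on the host, then reboot):
#   sudo grubby --update-kernel=ALL --args="default_hugepagesz=1G hugepagesz=1G hugepages=8"
# After the reboot, confirm the reservation took:
grep -i hugepages /proc/meminfo
```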

And then for disks…I installed the latest virtio drivers on the VM, switched the disks for the SQL system, data, and log files to use virtio instead of a SCSI bus, and then added for each disk: –

dedicatedIOThread: true

Other device settings added were: –

autoattachGraphicsDevice: false # Do not attach a virtual graphics/display device (VNC/SPICE) - removes unnecessary emulation
autoattachMemBalloon: false # Disable the VirtIO memory balloon - prevents dynamic memory changes, improves consistency
autoattachSerialConsole: true # Attach a serial console for debugging and virtctl console access
networkInterfaceMultiqueue: true # Enable multi-queue virtio-net so NIC traffic can use multiple RX/TX queues

All of this results in a bit of a monster manifest file for the VM: –

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: win2025
spec:
  runStrategy: Manual # VM will not start automatically
  template:
    metadata:
      labels:
        app: sqlserver
    spec:
      domain:
        ioThreadsPolicy: auto # Automatically allocate IO threads for QEMU to reduce disk I/O contention
        cpu:
          cores: 4
          dedicatedCpuPlacement: true # Guarantees pinned physical CPUs for this VM - improves latency & stability
          isolateEmulatorThread: true # Pins QEMU’s emulator thread to a dedicated pCPU instead of sharing with vCPUs
          model: host-passthrough # Exposes host CPU features directly to the VM - best performance (but less portable)
          numa:
            guestMappingPassthrough: {} # Mirrors host NUMA topology to the guest - reduces cross-node memory latency
        memory:
          hugepages:
            pageSize: 1Gi # Uses 1Gi hugepages for reduced TLB pressure - better performance for large-memory SQL
        firmware:
          bootloader:
            efi:
              secureBoot: false # Disable Secure Boot (often required when using custom/older virtio drivers)
        features:
          acpi: {} # ACPI support (power management, shutdown, reboot, device enumeration)
          apic: {} # Advanced Programmable Interrupt Controller (modern interrupt handling for Windows/SQL)
          hyperv: # Enable Hyper-V enlightenment features for Windows guests to improve performance & timing
            reenlightenment: {} # Allows guest to update its TSC frequency after migrations or time adjustments
            ipi: {} # Hyper-V IPI acceleration - faster inter-processor interrupts between vCPUs
            synic: {} # Hyper-V Synthetic Interrupt Controller - improves interrupt delivery
            synictimer: {} # Hyper-V synthetic timer - stable high-resolution guest time source
            spinlocks:
              spinlocks: 8191 # Prevents Windows spinlock stalls on SMP systems - avoids boot/timeouts under load
            reset: {} # Hyper-V reset infrastructure - cleaner VM resets
            relaxed: {} # Relaxed timing - reduces overhead when timing deviations occur under virtualization
            vpindex: {} # Per-vCPU indexing - improves Windows scheduler awareness of vCPU layout
            runtime: {} # Hyper-V runtime page support - gives guest better insight into hypervisor behavior
            tlbflush: {} # Hyper-V accelerated TLB flush - improves scalability on multi-vCPU workloads
            frequencies: {} # Exposes host CPU frequency data - allows proper scaling & guest timing
            vapic: {} # Virtual APIC support - reduces interrupt latency and overhead
        clock:
          timer:
            hyperv: {} # Hyper-V clock/timer - stable time source, recommended when using Hyper-V enlightenments
        resources: # requests == limits for guaranteed QoS (exclusive CPU & memory reservation)
          requests:
            memory: "8Gi"
            cpu: "4"
            hugepages-1Gi: "8Gi"
          limits:
            memory: "8Gi"
            cpu: "4"
            hugepages-1Gi: "8Gi"
        devices:
          autoattachGraphicsDevice: false # Do not attach a virtual graphics/display device (VNC/SPICE) - removes unnecessary emulation
          autoattachMemBalloon: false # Disable the VirtIO memory balloon - prevents dynamic memory changes, improves consistency
          autoattachSerialConsole: true # Attach a serial console for debugging and virtctl console access
          networkInterfaceMultiqueue: true # Enable multi-queue virtio-net so NIC traffic can use multiple RX/TX queues
          disks:
            # Disk 1: OS
            - name: osdisk
              disk:
                bus: scsi   # Keep OS disk on SCSI - simpler boot path once VirtIO storage is already in place
              cache: none
            # Disk 2: SQL System
            - name: sqlsystem
              disk:
                bus: virtio
              cache: none
              dedicatedIOThread: true # Give this disk its own IO thread - reduces contention with other disks
            # Disk 3: SQL Data
            - name: sqldata
              disk:
                bus: virtio
              cache: none
              dedicatedIOThread: true # Separate IO thread for data file I/O - improves parallelism under load
            # Disk 4: SQL Log
            - name: sqllog
              disk:
                bus: virtio
              cache: none
              dedicatedIOThread: true # Separate IO thread for log writes - helps with low-latency sequential I/O
            # Windows installer ISO
            - name: cdrom-win2025
              cdrom:
                bus: sata
                readonly: true
            # VirtIO drivers ISO
            - name: virtio-drivers
              cdrom:
                bus: sata
                readonly: true
            # SQL Server installer ISO
            - name: sql2022-iso
              cdrom:
                bus: sata
                readonly: true
          interfaces:
            - name: default
              model: virtio # High-performance paravirtualized NIC (requires NetKVM driver in the guest)
              bridge: {} # Bridge mode - VM gets an IP on the pod network (via the pod’s primary interface)
              ports:
                - port: 3389 # RDP
                - port: 1433 # SQL Server
      networks:
        - name: default
          pod: {} # Attach VM to the default Kubernetes pod network
      volumes:
        - name: osdisk
          persistentVolumeClaim:
            claimName: winos
        - name: sqlsystem
          persistentVolumeClaim:
            claimName: sqlsystem
        - name: sqldata
          persistentVolumeClaim:
            claimName: sqldata
        - name: sqllog
          persistentVolumeClaim:
            claimName: sqllog
        - name: cdrom-win2025
          persistentVolumeClaim:
            claimName: win2025-pvc
        - name: virtio-drivers
          containerDisk:
            image: kubevirt/virtio-container-disk
        - name: sql2022-iso
          persistentVolumeClaim:
            claimName: sql2022-pvc

And then I ran the tests again: –


StatefulSet

TEST RESULT : System achieved 47200 NOPM from 109554 SQL Server TPM

KubeVirt

TEST RESULT : System achieved 46563 NOPM from 108184 SQL Server TPM

BOOOOOOOOOM! OK, so that’s 98% of the TPM achieved in the StatefulSet. And there’s a bit of variance in those results so these are now pretty much the same!
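Quick sanity check on the percentages quoted in this post, straight from the TPM numbers in the test results: –

```shell
# Integer percentage of KubeVirt TPM vs StatefulSet TPM at each stage
pct() { echo $(( $1 * 100 / $2 )); }
pct 41620 108126    # initial KubeVirt run -> 38
pct 94406 108126    # with Hyper-V enlightenments -> 87
pct 108184 109554   # fully tuned -> 98
```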

OK, so it’s not the most robust performance testing ever done…and I am fully aware that testing in a lab like this is one thing, whereas running SQL Server in KubeVirt, even in a dev/test environment, is a completely different situation. There are still questions over stability and resiliency, BUT I hope this shows that we shouldn’t be counting KubeVirt out as a platform for SQL Server based on performance.

Thanks for reading!