Attaching databases via a dockerfile


There’s been an update posted about this topic here: –

Attaching databases via a dockerfile – UPDATE


Last week I presented my session on SQL Server & Containers for the PASS Virtualization Group and during my prep I noticed that there’s some functionality available to Windows containers but not to Linux containers.

One of the (if not the) main benefits of working with SQL in a container is that you can create a custom image to build containers from that has all of your development databases available as soon as the container comes online.

This is really simple to do with Windows containers. Say I want to attach DatabaseA that has one data file (DatabaseA.mdf) and a log file (DatabaseA_log.ldf): –

ENV attach_dbs="[{'dbName':'DatabaseA','dbFiles':['C:\\SQLServer\\DatabaseA.mdf','C:\\SQLServer\\DatabaseA_log.ldf']}]"

Nice and simple! One line of code and any containers spun up from the image this dockerfile creates will have DatabaseA ready to go.

However this functionality is not available when working with Linux containers. Currently you cannot use an environment variable to attach a database to a SQL instance running in a Linux container.

This was a problem for me as I wanted to change things up a little for the Virtualization Group’s webinar. I wanted to show all the code in my slides running on Windows Server but do my demos from my Windows 10 desktop, working with Linux containers. I thought it would be cool to show how you can work with SQL on Linux from Windows.

I started doing some research online and there are various workarounds for attaching a database to SQL in a Linux container, but they all involve separate scripts outside of the dockerfile. I wanted to keep things simple and show only minor changes from the Windows containers, so I had to get a bit creative.

Here’s what I came up with: –

HEALTHCHECK --interval=10s  \
	CMD /opt/mssql-tools/bin/sqlcmd -S . -U sa -P Testing11@@ \
		-Q "CREATE DATABASE [DatabaseA] ON (FILENAME = '/var/opt/sqlserver/DatabaseA.mdf'),(FILENAME = '/var/opt/sqlserver/DatabaseA_log.ldf') FOR ATTACH"

EDIT – 2018-12-11 – Finally came back to this and blogged about attaching databases via a script here

A bit more involved but it performs the same functions as the attach_dbs environment variable in the dockerfile for Windows containers. Here’s what each part of the code does: –

# Instruct docker to run a check every 10 seconds (giving SQL time to initialise) to ensure the container is running as expected
HEALTHCHECK --interval=10s

# Use sqlcmd to connect to the SQL instance within the container
CMD /opt/mssql-tools/bin/sqlcmd -S . -U sa -P Testing11@@

# Runs a SQL script to attach the database
-Q "CREATE DATABASE [DatabaseA] ON (FILENAME = '/var/opt/sqlserver/DatabaseA.mdf'),(FILENAME = '/var/opt/sqlserver/DatabaseA_log.ldf') FOR ATTACH"

So that’s how you can get the same result, an image from which you can create containers with DatabaseA available on startup, whether you’re working with Linux or Windows containers. Build the image by running: –

docker build -t demoimage <pathtodockerfilelocation>
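To sanity-check the result you can run a container from the new image and list its databases. A quick sketch using the Linux image, assuming the EULA acceptance and sa password (Testing11@@, as in the healthcheck above) are baked into the dockerfile as ENV lines: –

# run a container from the newly built image, mapping port 15789 to SQL's 1433
docker run -d -p 15789:1433 --name demo1 demoimage

# give SQL a few seconds to come up and attach the database
sleep 10

# DatabaseA should appear in the list
docker exec demo1 /opt/mssql-tools/bin/sqlcmd -S . -U sa -P Testing11@@ -Q "SELECT [name] FROM sys.databases"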

If you want to see the full dockerfiles, I’ve made both the Windows and Linux versions that I use for my demos available on my GitHub here.

Thanks for reading!

Automating installation of Docker & SQL command line tools on Linux

I’ve been getting to grips with Docker SQL Containers on Linux (specifically Ubuntu 16.04) and have found that I’ve been running the same commands over and over when I’m configuring a new server.

The old adage goes that if you run anything more than once it should be automated, right?

So I’ve created a repository on GitHub that pulls together the code from Docker to install the Community Edition and the code from Microsoft to install the SQL command line tools.

The steps it performs (sketched below) are: –

  • Installs the Docker Community Edition
  • Installs the SQL Server command line tools
  • Pulls the latest SQL Server on Linux image from the Docker Hub
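Under the hood it’s essentially the standard Docker CE and Microsoft repository install commands for Ubuntu 16.04 chained together. A condensed sketch (see the repository for the exact code): –

# add Docker's repository and install the Community Edition
sudo apt-get update
sudo apt-get install -y curl apt-transport-https ca-certificates software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update && sudo apt-get install -y docker-ce

# add Microsoft's repository and install the SQL Server command line tools
curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
curl https://packages.microsoft.com/config/ubuntu/16.04/prod.list | sudo tee /etc/apt/sources.list.d/msprod.list
sudo apt-get update && sudo ACCEPT_EULA=Y apt-get install -y mssql-tools

# pull the latest SQL Server on Linux image from the Docker Hub
sudo docker pull microsoft/mssql-server-linux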

To run this yourself, first clone a copy of the repository onto the server: –

git clone https://github.com/dbafromthecold/InstallDockerOnUbuntu.git

Then navigate to the directory: –

cd InstallDockerOnUbuntu

Make the script executable: –

chmod +x installdocker.sh

Then run the script!

./installdocker.sh
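Once it’s finished you can sanity-check the install: –

# confirm Docker is installed and the SQL Server image has been pulled
sudo docker --version
sudo docker images

# confirm the SQL command line tools are in place
/opt/mssql-tools/bin/sqlcmd -?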

N.B. – This is set up for Ubuntu 16.04, so it will not work on other distros

Contact me @dbafromthecold on twitter or email dbafromthecold@gmail.com if you have any issues or have any improvements to the script 🙂

Thanks for reading!

Running Linux Containers on Windows

Microsoft have announced the availability of SQL Server 2017 RC1 and I wanted to check it out in a container; however, it seems that the Windows image hasn’t been updated on the Docker Hub.

But no matter: running Docker on Windows 10 gives me the option to run Linux containers, and the SQL Server 2017 RC1 Linux container image is available on the Docker Hub.

This post is a step-by-step guide to getting Linux containers running on your Windows 10 machine. The first thing to do is install the Docker Engine.

Installing Docker on Windows 10 is different from installing on Windows Server 2016; you’ll need to grab the Community Edition installer from the Docker Store.

Once installed, you’ll then need to switch the engine from Windows containers to Linux containers by right-clicking the Docker icon in the taskbar and selecting “Switch to Linux containers…”.

Linux containers on Windows run inside a virtual machine; you can see this by opening up Hyper-V Manager.

Now the Linux image can be pulled from the Docker Hub. To search for the image run: –

docker search microsoft/mssql-server-linux

To pull the image down: –

docker pull microsoft/mssql-server-linux:rc1

The first thing I noticed was how quick it was to pull the image down. If you’ve pulled the SQL Server Windows images you’ll know that takes a bit of time; the Linux image is significantly smaller than the Windows image (1.42GB compared to ~12GB). I’d guess the Windows Server Core base layers account for most of the difference.
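If you want to see the difference for yourself, docker images lists each local image along with its size: –

docker images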

Anyway, a container can be run once the image is down: –

docker run -d -p 15789:1433 --env ACCEPT_EULA=Y --env SA_PASSWORD=Testing1122 --name testcontainer microsoft/mssql-server-linux:rc1

N.B. – both the ACCEPT_EULA and SA_PASSWORD environment variables need to be upper case for the values passed to be accepted. Linux is case sensitive!

To confirm the container is up, run: –

docker ps -a

Hmm, something’s gone wrong for me; the container isn’t running.

I need to view the container logs in order to find out what’s happened: –

docker logs testcontainer

Oh, the VM that the container is running in only has 2048MB of memory available!

Don’t adjust the memory allocation in Hyper-V Manager though; the changes won’t persist. Instead, right-click on the Docker icon in the taskbar, choose Settings and then Advanced, and increase the memory allocation there.

The Docker Engine will restart to apply the changes, which can be confirmed in Hyper-V Manager.

And now the container can be started: –

docker start testcontainer

docker ps

Cool, the container is up and running! Connecting locally is different from connecting to a SQL instance in a Windows container. With Windows containers I would use the docker inspect command to find the private IP address assigned to the container and use that to connect via SSMS.

However, with Linux containers we use the host’s IP address/name and the port number that was specified at container runtime; for the container above that’s localhost,15789.
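You can also test the connection from the host’s command line first, assuming you have sqlcmd installed locally (the port and password are the ones passed to docker run above): –

sqlcmd -S localhost,15789 -U sa -P Testing1122 -Q "SELECT @@SERVERNAME"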

In SSMS, enter the sa password that was specified and: –

SQL Server 2017 RC1 on Linux running in a container on Windows 10!

I think that’s pretty cool 🙂

Thanks for reading!

Transaction log shipping in SQL Server on Linux

SQL Server on Linux has been out for a bit now and I’ve played around a little (see here) but haven’t really used it in “anger”, nor will I for the foreseeable future if I’m honest. Nevertheless, it’s an area I find very interesting; I know very little when it comes to the Linux operating system and, as it’s such a huge area, it’s something I want to learn more about.

I feel the best way to learn is to actually try and do something with it. Sure, I could sit down and read articles on the web but I learn best by doing. So I began to think about what would be the first thing I’d try and do if presented with an instance of SQL Server running on Linux that I had to manage.

Right, well, being a DBA, setting up backups and restores I guess, but I want something a little more involved. How about setting up a warm standby instance? Log shipping! It’s perfect as it’s a fairly simple process within SQL but should teach me a bit about the Linux environment (copying files etc.), as SQL on Linux doesn’t have an Agent so this has to be done manually.

But before I go through how I set this up…


DISCLAIMERS!

  • I have published this as a purely academic exercise; I wanted to see if I could do it.
  • At no point should this be considered to have followed best practices.
  • This should NOT be used in a production environment.
  • There are probably better ways of doing this, if you have one then let me know.

Here goes!

What I’m going to do is set up two instances of SQL Server running on Linux and log ship one database from one to the other. So the first thing I did was get two VMs running Ubuntu 16.04.1 LTS, which can be downloaded from here.

Once both servers were set up (remember to enable ssh) I then went about getting SQL installed. I’m not going to go through the install in this post as the process is documented fully here. Don’t forget to also install the SQL Tools; the full guide is here.

N.B. – when installing the tools I’ve always just run:-

sudo apt-get install mssql-tools

The link will tell you to add unixodbc-dev to the end of the statement but that’s caused me issues in the past.

You’ll also need to run:-

echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bash_profile
echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bashrc

And then log out and log straight back in, otherwise you won’t be able to run sqlcmd.
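Alternatively, you can reload the profile into your current session instead of logging out: –

source ~/.bashrc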

Anyway, once that’s setup verify that you can connect to both instances, either by sqlcmd on the server or through SSMS remotely.


Ok, now we need to create folders on both servers to hold the scripts and backups needed. So in your favourite shell (I’m using bash on Windows), ssh into your first server and run: –

mkdir SQLScripts
mkdir SQLBackups

This will create two folders in your default home location, for me that’s… /home/andrew

Next thing to do is sort out access to these folders so that SQL Server can write backups to them. I found this kinda tricky if I’m honest, as Linux permissions are completely new to me, but this is how I went about it.

When SQL Server is installed a group called mssql is created. What I’m going to do is add my user into that group and then change the ownership and group access to these folders to that group. So, run:-

sudo usermod -a -G mssql andrew

This change can then be verified by running:-

id andrew

N.B.- You’ll have to log out and then back in for this to take effect

Then we can change the permissions on the folders:-

sudo chown mssql SQLScripts
sudo chown mssql SQLBackups

sudo chgrp mssql SQLScripts
sudo chgrp mssql SQLBackups

I also need to modify what the owner and group members can do in those folders. I’ve played around with these permissions a bit and the best configuration I’ve found is set by running: –

sudo chmod 770 SQLScripts
sudo chmod 770 SQLBackups

This will allow the owner of the folder (mssql) and members of the mssql group to do whatever they want in it: the first 7 is read/write/execute for the owner, the second 7 the same for the group, and the 0 means no access for anyone else. More details on setting permissions in Linux can be found here.

Once that’s done you can verify the change by running:-

ls -al

[screenshot: ls -al output showing the new folder permissions]

On server 2, run all of the above commands to set up the same folders and permissions. Once that’s done we also need to set up an Archive folder (only on server 2) to move the transaction log backups into once they’ve been restored. So run the following (same code as above really):-

cd /home/andrew/SQLBackups

mkdir Archive
sudo chown mssql Archive
sudo chgrp mssql Archive
sudo chmod 770 Archive

[screenshot: ls -al output showing the Archive folder permissions]


Once that’s done we can initialize a database for log shipping. So on the first instance of SQL we’ll create a login to run the backups, create a database, take a full backup and a log backup, and then create a user for the login (with membership of the db_backupoperator role):-

USE [master];
GO

CREATE LOGIN [logshipper] WITH PASSWORD='Testing11@@',CHECK_POLICY=OFF,CHECK_EXPIRATION=OFF;
GO

CREATE DATABASE [LogShipped];
GO

BACKUP DATABASE [LogShipped]
TO DISK = 'C:\home\andrew\SQLBackups\LogShipped.bak';
GO

BACKUP LOG [LogShipped]
TO DISK = 'C:\home\andrew\SQLBackups\LogShipped.trn';
GO

USE [LogShipped];
GO

CREATE USER [logshipper] FOR LOGIN [logshipper];
GO

ALTER ROLE [db_backupoperator] ADD MEMBER [logshipper];
GO

N.B.- note that SQL Server presents Linux paths in Windows style: SQL thinks the backup folder we created lives at C:\home\andrew\SQLBackups rather than /home/andrew/SQLBackups

Now we push these over to the secondary server so that we can restore them. To do this I’m going to use a program called scp, so back in your shell session on the first server, navigate to your SQLBackups folder and run: –

scp LogShipped.bak andrew@192.168.xx.xx:/home/andrew/SQLBackups
scp LogShipped.trn andrew@192.168.xx.xx:/home/andrew/SQLBackups

Before the backups can be restored, we need to allow the SQL Server instance on server 2 to read the files we’ve just transferred over. To do this, ssh to server 2 and run:-

cd /home/andrew/SQLBackups
chmod 666 LogShipped.bak
chmod 666 LogShipped.trn

Ok, once the files are on the secondary server, connect to the second instance of SQL via SSMS and restore the database and transaction log backups as normal when setting up log shipping:-

USE [master];
GO

RESTORE DATABASE [LogShipped] 
FROM DISK = 'C:\home\andrew\SQLBackups\LogShipped.bak'
WITH NORECOVERY;
GO

RESTORE LOG [LogShipped]
FROM DISK = 'C:\home\andrew\SQLBackups\LogShipped.trn'
WITH NORECOVERY;
GO

Now we need to create a login to perform the restores:-

USE [master];
GO

CREATE LOGIN [logshipper] WITH PASSWORD='Testing11@@',CHECK_POLICY=OFF,CHECK_EXPIRATION=OFF;
GO

ALTER SERVER ROLE [dbcreator] ADD MEMBER [logshipper];
GO

N.B.- I’ve noticed that even though the above permissions are correct to restore the log, the restore won’t work via sqlcmd. The workaround I have is to make the logshipper login a member of the sysadmin role; not ideal, I know.
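For completeness, that workaround is a one-liner run against server 2 with an admin login (replace <sapassword> with your own): –

/opt/mssql-tools/bin/sqlcmd -S . -U sa -P <sapassword> -Q "ALTER SERVER ROLE [sysadmin] ADD MEMBER [logshipper];"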

One thing I noticed when looking into this behaviour is a note on the documentation for the sqlcmd utility here:-

SQL Server Management Studio (SSMS) uses the Microsoft .NET Framework SqlClient for execution in regular and SQLCMD mode in Query Editor. When sqlcmd is run from the command line, sqlcmd uses the ODBC driver. Because different default options may apply, you might see different behavior when you execute the same query in SQL Server Management Studio in SQLCMD Mode and in the sqlcmd utility.

I’m going to keep researching this to see what’s going on but for now let’s continue with the setup.

Now that the initial database and transaction log backups have been restored, move them into the Archive folder set up earlier:-

cd /home/andrew/SQLBackups

mv LogShipped.bak Archive
mv LogShipped.trn Archive

Ok cool, barring some sqlcmd oddness, that’s our secondary SQL instance setup.

By the way, did you get asked to enter your password to connect to the secondary server? That’s going to be a problem for us as we want to have the log shipping process running automatically.

The way I sorted this was to setup public and private keys on the servers and then transfer the public key of server 1 to server 2. This then allows passwordless file transfers between the servers.

So on both servers run:-

ssh-keygen -t rsa

Don’t enter anything at the prompts; just keep hitting enter until you see:-

[screenshot: ssh-keygen output]

Then we transfer over the public key generated on server 1 to server 2 using the scp command:-

scp ~/.ssh/id_rsa.pub andrew@192.168.xx.xx:/home/andrew

Then on server 2 we need to append server 1’s public key to ~/.ssh/authorized_keys. So in your home directory (or wherever you copied server 1’s public key to) run:-

cat id_rsa.pub >> ~/.ssh/authorized_keys
chmod 700 ~/.ssh/authorized_keys

The last line is important as it restricts access to the authorized_keys file to its owner. Passwordless file transfer won’t work if the permissions on the keys are too open.
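As an aside, most distros ship ssh-copy-id, which does the copy-and-append (and the permissions) in one step. Either way, you can then check that passwordless login works: –

ssh-copy-id -i ~/.ssh/id_rsa.pub andrew@192.168.xx.xx
ssh andrew@192.168.xx.xx hostname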


Right, now we can create the scripts required to perform log shipping. So back on the first server go to the SQLScripts folder and run: –

nano BackupTLog.sql

This will create a new file and open it in the nano text editor (use other editors at your own peril!). In the file drop in:-

USE [master];
GO

DECLARE @SQL NVARCHAR(MAX);
DECLARE @DateStamp NVARCHAR(20);
DECLARE @DBNAME SYSNAME;

-- build a timestamped file name; N.B. the time parts aren't zero-padded,
-- so file names can't be trusted to sort chronologically (see the restore script later)
SET @DateStamp = CONVERT(NVARCHAR(10),GETUTCDATE(),112) + '_'
               + CONVERT(NVARCHAR(2),DATEPART(HOUR,GETUTCDATE()))
               + CONVERT(NVARCHAR(2),DATEPART(MINUTE,GETUTCDATE()))
               + CONVERT(NVARCHAR(2),DATEPART(SECOND,GETUTCDATE()));

SET @DBName = 'LogShipped';
SET @sql = 'BACKUP LOG [' + @DBName + '] to disk = ''C:\home\andrew\SQLBackups\' +
                   @DBName + '_TL_Backup_' + @DateStamp + '.trn''';

EXEC [master].dbo.sp_executesql @sql;
GO

Nice and easy; this will simply create a time-stamped transaction log backup of the database.

So we have the SQL script to backup the database, let’s create the script to move the transaction log backups from server 1 to server 2. So back in the SQLScripts folder on server 1:-

nano CopyFileToServer.sh

And drop in:-

#!/bin/bash

cd /home/andrew/SQLBackups

# grab the newest file (ls -Art sorts by modification time, oldest first)
file=$(ls -Art | tail -1)

# copy it to server 2 with permissions that let SQL Server read it
rsync --chmod=666 "$file" andrew@192.168.xx.xx:/home/andrew/SQLBackups/

What this does is select the most recently modified file in the backups folder and then use a program called rsync to copy it to server 2.

The reason I am using rsync is that I ran into the same permissions issue that we corrected when copying the initial backups over: the copied file is owned by my user, so the instance of SQL Server on server 2 couldn’t access it. rsync lets you set the permissions on the copied file, so I used chmod 666 to allow everyone on server 2 to read and write it (I know, I know).

Final script on server 1 is to run the backup and then kick off the copy, so:-

nano RunLogShipping.sh

And drop in: –

#!/bin/bash

cd /home/andrew/SQLScripts

# back up the transaction log
sqlcmd -S . -U logshipper -P Testing11@@ -i ./BackupTLog.sql

# give the backup a few seconds to complete
sleep 10

# then copy the newest backup over to server 2
./CopyFileToServer.sh

The script navigates to the SQLScripts folder, takes a log backup using sqlcmd, waits 10 seconds and then copies the file across.

Finally on server 1 we need to make the scripts executable so:-

chmod 770 BackupTLog.sql
chmod 770 CopyFileToServer.sh
chmod 770 RunLogShipping.sh
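Before scheduling anything it’s worth running one full cycle by hand and checking that the backup lands on server 2: –

./RunLogShipping.sh

# the newest file listed should be the log backup that was just taken
ssh andrew@192.168.xx.xx 'ls -lt /home/andrew/SQLBackups | head -5'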

OK, so let’s create the script to restore the transaction log backups on the second server. So in the SQLScripts folder on server 2 run:-

nano RestoreTLog.sql

And then drop in:-

SET NOCOUNT ON;
 
DECLARE @FileName nvarchar(100)
DECLARE @SQL nvarchar(max)
DECLARE @TLFILE TABLE
(ID INT IDENTITY(1, 1),
 BackupFile VARCHAR(200),
 ParentId INT,
 Depth INT,
 ISFILE BIT)
 
-- list the files in the backup folder
INSERT INTO @TLFILE
(BackupFile, Depth, ISFILE)
EXEC xp_dirtree 'c:\home\andrew\SQLBackups\', 10, 1
 
-- pick the 'latest' file by name (see the note below for why this is flawed)
SET @FileName = (SELECT TOP 1 BackupFile FROM @TLFILE WHERE ISFILE = 1 AND DEPTH = 1 ORDER BY BackupFile DESC)
 
SET @SQL = 'RESTORE LOG [LogShipped] FROM DISK = ''c:\home\andrew\SQLBackups\' + @FileName + ''' WITH NORECOVERY'
 
EXEC sp_executesql @SQL;
GO

Nice and easy again: simply using xp_dirtree to find the latest file (err… see below) in the backups folder and using that to restore the database.

Now, there’s a bug in the above script that stops it from reliably selecting the most recent transaction log backup file: the time parts of the timestamp in the file names aren’t zero-padded, so ordering by file name doesn’t give chronological order. Instead of mucking about with xp_cmdshell I thought a simpler approach would be to archive each file after it’s been used (hence the Archive folder). So we need two more scripts: one to move the files and one to execute the restore and the move.

First, the move: –

nano ArchiveTLogBackup.sh

And drop in:-

#!/bin/bash

cd /home/andrew/SQLBackups

# grab the most recently modified file, i.e. the backup that's just been restored
file=$(ls -Art | tail -1)

mv "$file" /home/andrew/SQLBackups/Archive

Very similar to the copy script created on server 1: it simply looks for the most recently modified file and moves it into the Archive folder. Now let’s create the script to run both of them:-

nano RunLogRestore.sh

And drop in: –

#!/bin/bash
# restore the latest log backup, then move it into the Archive folder
sqlcmd -S . -U logshipper -P Testing11@@ -i /home/andrew/SQLScripts/RestoreTLog.sql

/home/andrew/SQLScripts/ArchiveTLogBackup.sh

And as on server 1, we need to make these scripts executable:-

chmod 770 ArchiveTLogBackup.sh
chmod 770 RestoreTLog.sql
chmod 770 RunLogRestore.sh

Cool!


So we have all our scripts and a database ready to go, but how are we actually going to perform the log shipping? These SQL instances have no Agent, so the answer is cron, the task scheduler that comes with Linux, configured via crontab.

To open up crontab run (on server 1):-

crontab -e

You’ll probably get a menu to choose your editor, if you use anything other than nano you’re on your own 🙂

Here’s what I setup on server 1:-

[screenshot: crontab entry on server 1]

The code inserted is:-

*/5 * * * * /home/andrew/SQLScripts/RunLogShipping.sh

What this is going to do is run the log shipping script every 5 minutes (the five fields are minute, hour, day of month, month and day of week, so */5 in the minute field means every five minutes).
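You can confirm the entry has been saved with: –

crontab -l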

Now we need to setup a similar job on server 2 to restore the transferred log backup. So hop onto server 2 and run the same command:-

crontab -e

Here’s what I setup:-

[screenshot: crontab entry on server 2]

The code inserted is: –

*/5 * * * * /home/andrew/SQLScripts/RunLogRestore.sh

And what this is going to do, every 5 minutes, is look for the latest file in the SQLBackups folder, restore it and then move the transaction log backup into the Archive folder. Because of the 10 second delay in the log shipping script, each backup lands on server 2 just after that cycle’s restore has run, so the restore always picks up the previous cycle’s backup and the restored database on server 2 is always going to be 5 minutes behind.

So we’re pretty much done! The last thing to do is monitor, as the scripts will start executing automatically.


On the second instance you can run the following to monitor:-

SELECT 
	 [h].[destination_database_name]
	,[h].[restore_date]
	,[m].[physical_device_name]
FROM msdb.dbo.restorehistory h
INNER JOIN msdb.dbo.backupset s ON [h].[backup_set_id] = [s].[backup_set_id]
INNER JOIN msdb.dbo.backupmediafamily m ON [s].[media_set_id] = [m].[media_set_id]
ORDER BY [h].[restore_date] DESC

[screenshot: restore history on the secondary instance]

You will also be able to check the system log on the Linux boxes by running:-

tail /var/log/syslog

And you can limit it to the crontab output:-

grep CRON /var/log/syslog

Remember, it’ll take 10 minutes for the restores to kick off, as this has been set up so that the restore script restores the transaction log backup taken 5 minutes previously. You can see this above: the timestamp on the log backups is 5 minutes behind the time of the restore.

Phew! If you’ve made it this far then fair play to you. That was long and involved but good fun to try and figure out (if at times completely infuriating! 🙂 ). I know it’s very rough around the edges, but I’m genuinely chuffed that I got it working, and as the whole point was to learn more about the Linux operating system, I feel it’s been worthwhile.

Thanks for reading!

Killing databases in SQL Server on Linux

Bit of fun this week with something that a colleague of mine noticed when playing around with SQL Server on Linux.

The first thing you do when playing with a new technology is see how you can break it right? Always good fun 🙂

So I’m going to break a database in a SQL Server instance that’s running on Linux. I’m not going to go through the install process as there’s a whole bunch of resources out there that detail how to do it. See here.

Once you have your instance up and running, connect to it as normal in SSMS and create a database with one table:-

CREATE DATABASE [GoingToBreak];
GO

USE [GoingToBreak];
GO

CREATE TABLE dbo.Test
(PKID INT IDENTITY(1,1) PRIMARY KEY,
 ColA VARCHAR(10),
 ColB VARCHAR(10),
 DateCreated DATETIME);
GO

Then insert some data (the GO 1000 runs the insert batch 1,000 times): –

INSERT INTO dbo.Test
(ColA, ColB, DateCreated)
VALUES
(REPLICATE('A',10),REPLICATE('B',10),GETUTCDATE());
GO 1000

What we’re going to do is delete the database files whilst the instance is up and running; something you can’t do to a database in an instance of SQL on Windows, as the files are locked.

First, find where the files are located: –

USE [GoingToBreak];
GO

EXEC sp_helpfile;
GO

[screenshot: sp_helpfile output showing the database file locations]

Then jump into your favourite terminal client (I’m using bash on Windows) and connect to the Linux server.

Then run: –

cd /var/opt/mssql/data/
ls

[screenshot: the database files in /var/opt/mssql/data/]

Ok, so now to delete the files we can run: –

rm GoingToBreak.mdf GoingToBreak_log.ldf

[screenshot: the database files are gone]

And the files are gone! That database is deader than dead!

But… wait a minute. Let’s have a look back in SSMS.

Run the following: –

USE [GoingToBreak];
GO

SELECT * FROM dbo.Test;

Hmm, data’s returned! Ok, it could just be in the buffer pool. Let’s write some data: –

USE [GoingToBreak];
GO

INSERT INTO dbo.Test
(ColA, ColB, DateCreated)
VALUES
(REPLICATE('A',10),REPLICATE('B',10),GETUTCDATE());
GO 1000

What? It completed successfully??? Err, ok. Let’s run that select statement one more time…

SELECT * FROM dbo.Test;

Ok, that returned 2000 rows right? Hmm, what happens if we run CHECKPOINT?

CHECKPOINT

[screenshot: CHECKPOINT completed successfully]

Errr, ok. That worked as well.

Alright, enough of this. Let’s break it for sure. Back in your terminal session run: –

sudo systemctl restart mssql-server

Once that’s complete, jump back into SSMS and refresh the connection: –

[screenshot: object explorer showing the database in Recovery Pending]

Ha ha! Now it’s dead. That’s pretty weird behaviour though, eh? I expect there’s a Linux person out there who can explain why that happened ‘cos I’m really not sure.

EDIT – Anthony Nocentino (b|t) has come back to me and said that when the files are deleted they get unlinked from the directory, so we can no longer see them, but the SQL process still has them open; hence being able to execute queries. Once the instance is restarted the file handles are released and the underlying blocks and inodes get deallocated; hence the database going into Recovery Pending. Thanks Anthony!
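You can actually see this from the OS side: while the instance is still up, the deleted-but-open files show against the SQL Server process marked as (deleted). A quick check, assuming lsof is installed: –

sudo lsof -c sqlservr | grep deleted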

One thing I do know is that SQL databases on Linux will continue to allow queries to be executed against them after their underlying files have been deleted. Pretty worrying imho; you could have a problem and not even know about it until your next server restart!