In-Memory OLTP: Part 1 – Introduction

In-Memory OLTP is getting a lot of (rightly deserved, imho) hype at the moment. But what does it entail exactly? If you’re a DBA with a few years’ experience under your belt and are looking to get into this new feature but don’t have the time to sit and read the various technical articles that are out there, this series of blog posts will get you started. I’ll include links along the way so that you can dive further in when you get a chance.

Let’s get started…

Back in December I wrote a review of Kalen Delaney’s book SQL Server Internals: In-Memory OLTP; you can read it here

Since then I’ve had a chance to investigate the new functionality for myself and, as part of Ed Leighton-Dick’s #SQLNewBlogger challenge in April, I’ll write a 4-part series covering:-

Introduction

In-Memory OLTP is a completely new engine incorporated within the existing SQL engine. This means that even though familiar tools and code are used to create and administer memory optimised objects, they behave very differently from the established disk based objects.

Memory optimised tables exist entirely in memory, meaning that there are no calls to disk for user IO operations (although background processes do write to disk if the data in a table is specified as durable). Their internal structure is different too: no data pages are used; instead, data rows are written into memory with each row containing an index pointer to the next row. This means that each memory optimised table must have a minimum of one index. I’ll go through the indexes available on memory optimised tables in Part 2 of this series.

Another major difference between In-Memory OLTP and traditional disk based objects is that In-Memory OLTP supports the ACID properties of transactions with no locking needed. In order to do this, In-Memory OLTP uses an optimistic concurrency model called multi-version concurrency control (MVCC), which implements a new form of row versioning. More information on MVCC can be found here.
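As a quick illustration of how MVCC surfaces in practice, interop T-SQL that touches memory optimised tables must run under a supported isolation level. The sketch below shows the two common approaches (the table name is purely illustrative):-

```sql
-- Request SNAPSHOT isolation for a single query via a table hint
-- (dbo.SomeMemOptTable is an illustrative table name)
SELECT *
FROM dbo.SomeMemOptTable WITH (SNAPSHOT);

-- Or have READ COMMITTED transactions against memory optimised tables
-- elevated to SNAPSHOT automatically, database-wide
ALTER DATABASE [InMemoryOLTPDemo]
SET MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT = ON;
GO
```

The second option saves you sprinkling hints everywhere, at the cost of applying to every transaction in the database.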

However, before we get into the features of In-Memory OLTP we first need to create a database that can hold memory optimised objects.

So create a database as normal:-

CREATE DATABASE [InMemoryOLTPDemo]
    ON PRIMARY
(NAME	    = N'InMemoryOLTPDemo Primary',
 FILENAME   = N'C:\SQLServer\Data\InMemoryOLTPDemo.MDF',
 SIZE	    = 5MB,
 FILEGROWTH = 1MB,
 MAXSIZE    = UNLIMITED)
    LOG ON
(NAME	    = N'InMemoryOLTPDemo Log',
 FILENAME   = N'C:\SQLServer\Log\InMemoryOLTPDemo_Log.LDF',
 SIZE	    = 5MB,
 FILEGROWTH = 1MB,
 MAXSIZE    = UNLIMITED)
COLLATE Latin1_General_100_BIN2;
GO

Now add a filegroup and file that are designated as containing memory optimised objects/data:-

ALTER DATABASE [InMemoryOLTPDemo] ADD FILEGROUP MemData CONTAINS MEMORY_OPTIMIZED_DATA;

ALTER DATABASE [InMemoryOLTPDemo] 
ADD FILE
(NAME       = N'InMemoryOLTPDemo Memory Optimised',
 FILENAME   = N'C:\SQLServer\Data\InMemoryOLTPDemo_MemOp')
TO FILEGROUP MemData;
GO

N.B. – A couple of notes here: you can only have one memory optimised filegroup per database, and it cannot be removed once created (you have to drop the database!). Also note that a binary collation was specified; this is because indexes on memory optimised tables that include character columns must use a binary collation.
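If you want to double-check the collation after creating the database, a quick sanity check:-

```sql
-- Should return Latin1_General_100_BIN2 for the database created above
SELECT DATABASEPROPERTYEX(N'InMemoryOLTPDemo', 'Collation') AS DatabaseCollation;
```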

The following SELECT statement will return the memory optimised filegroup and corresponding files:-

SELECT
    g.name AS filegroup_name,
    g.type_desc,
    f.name AS file_name,
    f.physical_name,
    f.state_desc
FROM
    sys.filegroups g
INNER JOIN
    sys.master_files f ON g.data_space_id = f.data_space_id
WHERE
    f.database_id = DB_ID()
AND
    g.type = 'FX';
GO

Once the filegroup has been created with its corresponding file, memory optimised tables can be created and populated. The syntax is nearly identical to that of “normal” (or disk based) tables, with a couple of extra settings and restrictions:-

CREATE TABLE [TableInMemory]
   (RecordID    INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1024),
	FirstName   SYSNAME,
	LastName    SYSNAME,
	DateCreated DATE)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
GO
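The table can then be populated and queried with ordinary (interop) T-SQL, for example:-

```sql
-- Insert a row and read it back; because DURABILITY = SCHEMA_ONLY,
-- this data will NOT survive an instance restart
INSERT INTO dbo.TableInMemory (RecordID, FirstName, LastName, DateCreated)
VALUES (1, N'Joe', N'Bloggs', GETDATE());

SELECT RecordID, FirstName, LastName, DateCreated
FROM dbo.TableInMemory;
GO
```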

Memory optimised tables do not support clustered indexes so here I’ve used a new type of index called a HASH index. I’ll cover the new indexes supported by In-Memory OLTP objects in section 2 of this blog series.

The other difference from the norm is at the end of the script. MEMORY_OPTIMIZED = ON does exactly what it says on the tin: it designates the table as memory optimised.

The second part, DURABILITY = SCHEMA_ONLY, indicates that in the event of an instance restart only the schema of the table will be preserved. DURABILITY = SCHEMA_AND_DATA (the default) indicates that the table’s data will be written to disk via a background process so that it can be recovered in the event of an instance restart. I’ll go through this in Part 3.
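For comparison, here’s the same table declared as durable (the table name is illustrative); note that SCHEMA_AND_DATA tables must have a primary key:-

```sql
-- Durable version: both schema AND data survive an instance restart
CREATE TABLE [TableInMemoryDurable]
   (RecordID    INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1024),
    FirstName   SYSNAME,
    LastName    SYSNAME,
    DateCreated DATE)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO
```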

So, we’ve created a database that can contain memory optimised objects and also created a SCHEMA_ONLY durable memory optimised table. In the next post I’ll go through the new types of indexes supported by the In-Memory OLTP engine.

Finally, it’s worth mentioning the restrictions around memory optimised tables:-

  • No clustered indexes
  • No unique indexes (apart from the primary key)
  • Character columns in indexes have to be set to a binary collation
  • No foreign keys
  • No check constraints
  • No triggers
  • Only certain datatypes are supported, which are fully detailed here
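On the binary collation point, if you’d rather not set it at the database level, the collation can be applied to just the indexed character column (table and column names here are illustrative):-

```sql
-- Binary collation on the index key column only, rather than database-wide
CREATE TABLE [TableCollationDemo]
   (UserName NVARCHAR(50) COLLATE Latin1_General_100_BIN2 NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1024))
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
GO
```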

Further Reading

In-Memory OLTP – Books Online
Exploring In-memory OLTP Engine – Rob Garrison
In-Memory OLTP Whitepaper – Kalen Delaney
