In-Memory, Quo Vadis?

In-Memory is dead, Long Live the In-Memory!

In SQL Server 2019, without much deserved fanfare (for me at least), Microsoft has released a couple of improvements that made me think about the current status (apparently almost dead) and the future of the In-Memory OLTP technology. It was launched in 2014 and, besides significant improvements in SQL Server 2016, has looked pretty much abandoned since. Jos de Bruijn (the previous PM owner of the technology) can no longer be found anywhere within Microsoft, no new official owner of this piece of technology has been announced publicly, and any answers from Microsoft PMs have been more silent than vague.
This situation is worsened by the current limitations of this unfinished technology that I actually love:
– No removal of the In-Memory filegroup once it has been added (yes, that is kind of lame)
– Initial memory footprint (makes the technology useless for the major ISVs with thousands of databases per instance)
– No support in Natively Compiled Stored Procedures for more than 1 core or for columnstore indexes
– No ALTER support for tables with a Clustered Columnstore Index
– No Schema-Only In-Memory tables with Columnstore Indexes
– And so on …
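To make the first limitation concrete, here is a minimal sketch of the filegroup that every memory-optimized table depends on – the one that, once added, cannot be dropped again. Database, filegroup, container and table names below are mine, purely illustrative:

```sql
-- Adding the Memory-Optimized filegroup that In-Memory OLTP requires.
-- Once added, this filegroup cannot be removed from the database again.
ALTER DATABASE MyDb
    ADD FILEGROUP MyDb_InMemory CONTAINS MEMORY_OPTIMIZED_DATA;

ALTER DATABASE MyDb
    ADD FILE (NAME = 'MyDb_InMemory_Container',
              FILENAME = 'C:\Data\MyDb_InMemory_Container')
    TO FILEGROUP MyDb_InMemory;

-- A minimal memory-optimized table; SCHEMA_AND_DATA persists the rows.
CREATE TABLE dbo.MyInMemoryTable
(
    Id INT NOT NULL PRIMARY KEY NONCLUSTERED,
    Payload NVARCHAR(100) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```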

With all that, with the RTM of SQL Server 2019 one could notice 2 new features (one of which should have been released by the time of the initial SQL Server 2014, or SQL Server 2016 at the latest):
– The Memory-Optimized Metadata in TempDB (new and shiny)
– Database Snapshots (long overdue)
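For reference, the first of the two features is switched on with a server-level configuration; note that it requires a restart of the instance to take effect:

```sql
-- Enabling the Memory-Optimized TempDB Metadata in SQL Server 2019.
-- An instance restart is required before the change takes effect.
ALTER SERVER CONFIGURATION
    SET MEMORY_OPTIMIZED TEMPDB_METADATA = ON;

-- Verifying the current state (returns 1 when active):
SELECT SERVERPROPERTY('IsTempdbMetadataMemoryOptimized');
```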

Memory-Optimized Metadata in TempDB – the next logical step, besides naturally hekatonizing the metadata of the remaining 36 tables, would be making temporary tables fully In-Memory Schema-Only tables (and thus disposable after a restart). Making the traditional syntax, such as CREATE TABLE #t1 (C1 INT NOT NULL), behave this way would be a great advantage, but even more interesting would be supporting an explicit syntax, such as CREATE TABLE #t1 (C1 INT NOT NULL) WITH (MEMORY_OPTIMIZED = ON), because by the very definition of a temporary table we do not need to persist it indefinitely.

A Stored Procedure, on the other hand, would greatly benefit from a syntax option indicating that all temporary tables created within the procedure have to use the In-Memory functionality.


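A sketch of what such a procedure-level syntax could look like – to be clear, the MEMORY_OPTIMIZED_TEMPDB option below is purely hypothetical and does not exist in any shipped version of SQL Server:

```sql
-- HYPOTHETICAL syntax sketch: the MEMORY_OPTIMIZED_TEMPDB option does not exist.
-- The idea: every #temp table created inside the procedure would become
-- a Schema-Only In-Memory table behind the scenes.
CREATE PROCEDURE dbo.MyReportingProc
WITH MEMORY_OPTIMIZED_TEMPDB = ON
AS
BEGIN
    CREATE TABLE #work (C1 INT NOT NULL);
    -- ... fill and use #work as usual ...
    DROP TABLE #work;
END;
```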
A database-scoped configuration for the same purpose, indicating that any temporary tables created within the scope of the database have to be created in TempDB using the In-Memory technology, would also be welcome: INMEMORY_TEMPDB = ON/OFF.
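At the database level, such a switch could piggyback on the existing ALTER DATABASE SCOPED CONFIGURATION mechanism – again, the option name here is only a proposal, not shipped functionality:

```sql
-- HYPOTHETICAL: a database-scoped switch making all temporary tables
-- created from this database use In-Memory technology in TempDB.
ALTER DATABASE SCOPED CONFIGURATION SET INMEMORY_TEMPDB = ON;
```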

Notice that for this purpose, first of all, the situation with the support of Columnstore Indexes (a current limitation in SQL Server 2019) would have to be solved.

Database Snapshots

Database Snapshots have been one of the strangest & weakest points in the In-Memory story. In SQL Server 2019 we have finally seen Database Snapshots for In-Memory implemented, and although right now the feature looks to be strictly limited to reading the snapshot data (no restore from the snapshot is supported, and the error that is exposed points to a limitation within FILESTREAM), it brings a small improvement to all of us who are using snapshots for BI / DWH scenarios.
I am honestly wondering what the reason was to look at this functionality, and whether TempDB required it, or something else under the hood that has not been exposed. Or maybe there was just some client demanding this feature, threatening to run over to the competition if the functionality would not become available.
In any case – it is a practical improvement that everyone can take advantage of in SQL Server 2019.
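Creating such a snapshot uses the regular Database Snapshot syntax; a minimal sketch with illustrative database names and paths (for a database containing memory-optimized tables, this only works from SQL Server 2019 onwards):

```sql
-- Sketch: a read-only snapshot of a database that contains
-- memory-optimized tables (supported starting with SQL Server 2019).
-- Names and paths are illustrative; each data file of the source
-- database needs a corresponding sparse file in the snapshot.
CREATE DATABASE MyDb_Snapshot
ON (NAME = 'MyDb_Data',
    FILENAME = 'C:\Data\MyDb_Snapshot.ss')
AS SNAPSHOT OF MyDb;

-- Reading from the snapshot works; restoring the database
-- from the snapshot is what remains unsupported for In-Memory.
```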

Quo Vadis

With the rise of persistent memory and its improved support in SQL Server 2017 & 2019, it looks objectively as if the technology is on the right track to gain some serious traction in a post-2021 release timeframe. I believe it has the potential to become popular (given the hardware producers' drive to make bigger quantities and lower the prices).
Once it gains its full potential, somewhere in the 2022+ timeframe, the big question will be raised in the battle for the milliseconds of access latency – and that is where In-Memory can get an important edge over the competition. With PMEM (Persistent Memory) lowering the access time for conceivably every single operation, the fight will be tough, and the engine optimized for In-Memory latch-free operations will have a significant advantage.
This will be the true opportunity that In-Memory has been begging for for a long, long time.

I have no doubt that the smart people at Microsoft are already looking into these matters.

Final Thoughts:

Being probably an inexplicable optimist, I believe this can be the moment of a second birth for In-Memory. This time it can rely on all the available experience, feedback and usage statistics that Microsoft is collecting from Azure and from SQL Server 2016+. This is a huge opportunity to make In-Memory work the way it was originally positioned. I set my hopes on SQL Server vNext (the one after SQL Server 2019).
All we need is someone at Microsoft who is up to this arduous challenge, which will be a kind of “against all odds”, given the general lack of love the In-Memory feature currently receives among users & bloggers.

In-Memory is dead, Long Live the In-Memory!

From the comments:

Niko Neugebauer (Post author)

      Hi George,

      The goal of Hekaton was always removing up to 90% of the overhead of regular T-SQL queries.
      PMEM requires direct memory access, which Hekaton already has.
      Over time, when every database provider is running on PMEM, the execution time and the whole stack will start making the difference, so I believe Hekaton is already on the right track to have a significant advantage in the future (second wave).

      Best regards,
      Niko Neugebauer
