Despite In-Memory OLTP being declared by many people a "dead feature", it looks like SQL Server is bringing some stuff into the game, steadily improving the feature and integrating it with the rest of the engine.
Microsoft is introducing something really daring in SQL Server 2019.
The feature "Memory-Optimized TempDB Metadata" does what it promises – it brings the In-Memory technology into TempDB, substituting the traditional system tables with memory-optimized ones.
Or, as it should be called, the "Conor Fix", inspired by the demo presented by Conor Cunningham and Slava Oks during the PASS Summit 2019 Keynote.
Notice that TempDB suffering from GAM & SGAM page contention is not the same as suffering from metadata contention in TempDB. In the first case you are loading data and not getting enough access to the allocation pages, while in the second case you are suffering from different threads blocking each other while trying to access the meta information to create or alter the objects.
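If you want to check which of the two flavours you are facing while the workload runs, a rough sketch would be the following query (assuming the usual 'dbid:fileid:pageid' format of the resource_description column, with database_id 2 being TempDB):

```sql
-- Sketch: catch the PAGELATCH waits against TempDB while the workload is running.
SELECT wt.session_id,
       wt.wait_type,
       wt.wait_duration_ms,
       wt.resource_description
    FROM sys.dm_os_waiting_tasks wt
    WHERE wt.wait_type LIKE N'PAGELATCH%'
      AND wt.resource_description LIKE N'2:%';   -- database_id 2 = TempDB
-- Waits on pages 1, 2 & 3 of each file (and their repeats at the usual PFS/GAM/SGAM
-- intervals) point to allocation-page contention; waits on pages belonging to the
-- system tables point to the metadata contention that this feature targets.
```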
If you do not possess the RML utilities, you might consider downloading them. I am using the x64 version, which still pretty much serves my testing purposes.
After the installation the RML utilities can be found at the following path: “C:\Program Files\Microsoft Corporation\RMLUtils”
The code we will be testing is simple – a stored procedure TestMyTempDb19 which invokes another stored procedure, TestMyTempDb191, in a loop 10 times. The TestMyTempDb191 stored procedure creates just 1 temp table and inserts a single row into it. Nothing fancy at all.
CREATE OR ALTER PROCEDURE dbo.TestMyTempDb191
AS
BEGIN
	SET NOCOUNT ON;

	CREATE TABLE #SampleDataTable (
		C1 BIGINT NOT NULL );

	INSERT INTO #SampleDataTable
		SELECT t.RN
			FROM ( SELECT TOP (1) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) RN
						FROM sys.objects t1 ) t;
END
GO

CREATE OR ALTER PROCEDURE dbo.TestMyTempDb19
AS
BEGIN
	DECLARE @i INT = 0;

	WHILE( @i < 10 )
	BEGIN
		EXECUTE dbo.TestMyTempDb191;
		SET @i += 1;
	END
END
GO
For the stress-testing part, we shall start 100 threads and repeat the execution of the code within TestDB (the name of my test database) 30 consecutive times, using ostress with the following command:
ostress -Slocalhost -E -dTestDB -Q"EXECUTE dbo.TestMyTempDb19;" -n100 -r30
Let's run the initial test on our 8-core test VM on Azure.
You can see the behaviour on the left side of this text. One can easily notice that we are not using 100% of the CPU capabilities; in practice we are not even hitting 50% of the possible CPU performance.
Remembering some of the smartest people in the Oracle & SQL Server world, I will rephrase their vision as "Every Database Problem is a CPU Problem." The average execution time for this workload over 10 attempts in my case was 26.259 seconds, which seems very much overblown, since we are just doing 100*30*10 = 30,000 row insertions and, well ... 30,000 table creations ... We are trying to mess with a lot of metadata in TempDB at the same time. The sample workload is built precisely to do a rather simple task; a typical high-performance workload would include additional statements and more than 1 row of insertion. :)
The key here is precisely to overload SQL Server with parallel operations on the same metadata within TempDB.
Memory-Optimized TempDB Metadata
Activating the Memory-Optimized TempDB Metadata on SQL Server is really easy (*cough* besides requiring a restart of the SQL Server instance *cough*) – we need to execute the following ALTER SERVER CONFIGURATION command:
ALTER SERVER CONFIGURATION SET MEMORY_OPTIMIZED TEMPDB_METADATA = ON;
plus, of course, the restart of our SQL Server instance. To disable the Memory-Optimized TempDB Metadata, you can run the same ALTER SERVER CONFIGURATION command, but this time setting the switch to OFF.
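For completeness, the disabling command looks like this (again followed by an instance restart before it takes effect):

```sql
ALTER SERVER CONFIGURATION SET MEMORY_OPTIMIZED TEMPDB_METADATA = OFF;
```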
Take a careful look at the command and tell me why MEMORY_OPTIMIZED and TEMPDB_METADATA are written separately. Is this because this is the first step towards eventually getting the non-metadata data to become Memory-Optimised? My bet is on it!
To verify the current status of the TempDB, you can run the following command:
SELECT SERVERPROPERTY('IsTempdbMetadataMemoryOptimized') as IsTempDbMetadataInMem;
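Alternatively – and I am assuming here the configuration name as it ships in SQL Server 2019 – you can check sys.configurations:

```sql
-- Assumed configuration name; value_in_use reflects the state after the restart.
SELECT name, value, value_in_use
    FROM sys.configurations
    WHERE name = N'tempdb metadata memory-optimized';
```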
A list of all the objects that were Hekaton-ized (yeah, that's In-Memory OLTP-ized) can be obtained with the following query:
SELECT xtpObj.object_id,
	obj.name,
	obj.create_date,
	obj.modify_date
	FROM tempdb.sys.dm_db_xtp_object_stats xtpObj
	RIGHT JOIN tempdb.sys.objects obj
		ON xtpObj.object_id = obj.object_id
	INNER JOIN tempdb.sys.schemas sch
		ON obj.schema_id = sch.schema_id
	WHERE obj.type_desc = 'SYSTEM_TABLE';
Here you can see the date of each object's creation and when it was last altered. One can even notice that sysfiles1 was altered for the last time on 2003-04-08, that exactly half of the tables (36 out of the total 72) were altered for the last time 10 years ago (2009-04-13 or earlier), and that in reality just 10 out of those 72 tables were Hekaton-ized. I confess to being a relative noob regarding the exact purpose of each of those tables, and I can't give you a better insight without a serious ramp-up and a cohesive investigation.
One can assume that those tables are SCHEMA_ONLY tables, since TempDB does not need any durability that "survives" a system restart.
Back to our test, let's see how those queries will perform under pressure:
ostress -Slocalhost -E -dTestDB -Q"EXECUTE dbo.TestMyTempDb19;" -n100 -r30
Anyone who has ever seen or done tuning on bigger hardware would instantly be interested, since the CPU is clearly going to 100% during the processing, showing an APPARENTLY better focus on the process and hopefully better performance. Since the granularity of the Task Manager is the same for both cases, you can easily notice that overall we spent significantly less time churning through the same workload – and that is very true: we spent just 11.777 seconds on average!
This picture shows the ginormous difference between the original 26.259 seconds and the 11.777 seconds obtained with the help of the Memory-Optimized TempDB Metadata. I am truly wowed by that, and if you are facing such problems in production, I hope that you are as wowed by the potential as I am.
Do not expect any/every workload to benefit from it – you have to be facing exactly the problem this feature addresses: the locking and blocking of the metadata inside TempDB. This is the kind of problem that is specific mostly to OLTP applications, where thousands/millions of users are reaching for the same metadata within TempDB at the same time.
Just because your server is not efficient when working with temp tables, it does not mean that this particular feature will solve all deficiencies.
Using sp_WhoIsActive to see what is going on during our transactions, we can observe that while transactions are blocking each other under the regular TempDB configuration, no blocking takes place with the Hekaton-ized TempDB metadata:
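For reference, a minimal sp_WhoIsActive invocation to surface the blocking chains during the test might look like this (parameters as found in the public sp_WhoIsActive distribution):

```sql
-- Surface the sessions leading the blocking chains, worst offenders first.
EXECUTE dbo.sp_WhoIsActive
    @find_block_leaders = 1,
    @sort_order = '[blocked_session_count] DESC';
```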
Altering the principal query according to some not-so-good practices (pun intended) will give us a less shiny result and less of a difference (just adding a Nonclustered Index creation and dropping all the temp tables at the end, effectively avoiding potential reuse):
CREATE OR ALTER PROCEDURE dbo.TestMyTempDb191
AS
BEGIN
	SET NOCOUNT ON;

	CREATE TABLE #SampleDataTable (
		C1 BIGINT NOT NULL );

	CREATE NONCLUSTERED INDEX C1 ON #SampleDataTable(C1);

	INSERT INTO #SampleDataTable
		SELECT t.RN
			FROM ( SELECT TOP (1) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) RN
						FROM sys.objects t1 ) t;

	DROP INDEX IF EXISTS C1 ON #SampleDataTable;
	DROP TABLE IF EXISTS #SampleDataTable;
END
GO
Looking at the Task Manager picture, you will notice that there is no 100% usage of the CPU anymore, which starts to point to other behavioural problems. In the real world you will face very different challenges, so do not expect it to work as perfectly as in this or any other demo. Demos show the potential effect, but your own business case implementation will decide how much (if any) improvement you will get from using this feature.
As I already mentioned in Columnstore Indexes – part 129 ("Compatibility with the New Features of SQL Server 2019"), Columnstore Indexes are not compatible with Memory-Optimized TempDB Metadata, and creating a temp table with a Columnstore Index fails:
CREATE TABLE #t (c1 int, INDEX CCI_T CLUSTERED COLUMNSTORE );
Msg 11442, Level 16, State 1, Line 2
Columnstore index creation is not support in tempdb when memory-optimized metadata mode is enabled.
Msg 1750, Level 16, State 1, Line 2
Could not create constraint or index. See previous errors.
The same goes for [sys].[sp_estimate_data_compression_savings], which has finally been enhanced in SQL Server 2019 to support Columnstore Indexes. Seems like an odd situation – implement support 7 years late and then not enable it for the newest feature. :(
Remember the "good" old In-Memory OLTP limitation, which now applies to TempDB: a single transaction may not access memory-optimized tables in more than one database – meaning that if you touch memory-optimized tables in any other database within the same transaction, you are screwed. :(
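A minimal sketch of tripping that limitation might look like the following (the database and table names are hypothetical, and MyInMemTable is assumed to be a memory-optimized table in a user database):

```sql
BEGIN TRANSACTION;
    -- Touch a memory-optimized table in a user database ...
    SELECT TOP (1) * FROM MyUserDb.dbo.MyInMemTable;
    -- ... and then the now memory-optimized TempDB metadata in the same transaction:
    SELECT TOP (1) * FROM tempdb.sys.objects;   -- expected to fail under this limitation
ROLLBACK TRANSACTION;
```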
Locking and isolation level control are non-existent for Memory-Optimised tables, and so you will be automagically condemned to READ COMMITTED SNAPSHOT usage.
I think it is just the beginning.
I hope that this is just the beginning of the Memory-Enhanced Database.
The potential wins for these specific scenarios are enormous.
Now we need to put more stuff into the Memory-Optimised tables.
That will drive the development of Hekaton further, hopefully starting to eliminate the initial design limitations.
I hope ...