SQL Server 2016 SP1 – Programmability Surface for everyone!

This is the most important blog post about SQL Server that I have ever written. As far as I am concerned, this is the most epic moment in the history of SQL Server since its inception (yes, you could argue that SQL Server 7.0 was one of its most defining moments as well).

The just-released Service Pack 1 for SQL Server 2016 represents an epic change in Microsoft's strategy regarding the programmability surface of SQL Server.

Microsoft has just announced that it will be enabling the usage of the enterprise features across all different SQL Server SKUs.
I will write it again – to make sure that you can read it slowly and interpret each of the words correctly:
Microsoft has just announced that it will be enabling the usage of the enterprise features across all different SQL Server SKUs.

Switching into Morpheus voice: Yes …

Yes, you can have In-Memory OLTP on the Standard Edition of SQL Server 2016 SP1.
Yes, you can have Columnstore Indexes on the Standard Edition of SQL Server 2016 SP1.
Also Row-Level Security, Dynamic Data Masking, Always Encrypted (the feature that received huge attention & excitement from users in SQL Server 2016), Change Data Capture (BI people – it’s party time!), Database Snapshots (think about all the possibilities of reading data that is normally not available in the standard editions), Partitioning (Woohoo!), Compression (Indeed!), multiple FILESTREAM containers, distributed partitioned views,
Polybase (yes, this feature deserves a separate line – think about reaching into Hadoop clusters from your own Standard Edition of SQL Server),
and fine-grained auditing!
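
To make it concrete, here is a minimal T-SQL sketch of two formerly Enterprise-only features running on a Standard Edition instance. All object names and the filegroup path are my own assumptions for the demo, not anything Microsoft ships:

```sql
-- A clustered columnstore index on an ordinary table (hypothetical dbo.Sales):
CREATE TABLE dbo.Sales
(
    SaleId   int   NOT NULL,
    SaleDate date  NOT NULL,
    Amount   money NOT NULL
);
CREATE CLUSTERED COLUMNSTORE INDEX CCI_Sales ON dbo.Sales;

-- A memory-optimized table; In-Memory OLTP first needs a
-- MEMORY_OPTIMIZED_DATA filegroup (the container path below is an assumption):
ALTER DATABASE CURRENT
    ADD FILEGROUP InMemFG CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE CURRENT
    ADD FILE (NAME = N'InMemFG_Container', FILENAME = N'C:\Data\InMemFG')
    TO FILEGROUP InMemFG;

CREATE TABLE dbo.SalesHot
(
    SaleId int   NOT NULL PRIMARY KEY NONCLUSTERED,
    Amount money NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```

Before SP1, both CREATE statements would simply fail on Standard, Web, or Express with an edition-feature error.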

Yes, you can have Columnstore Indexes on the Web Edition of SQL Server 2016 SP1.
And you can even have Columnstore Indexes on the Express Edition of SQL Server 2016 SP1 … (the sound of dropping jaws hitting the floor)

Yes, there will be some limitations, and regarding the Columnstore Indexes I will be publishing a separate blog post soon. Some of them – such as the memory sizes for the Cache Stores – are described below, some will be published later – and some may be discovered much later, who knows!

Talking about the memory sizes for the Cache Stores – notice that the amount of memory for Columnstore Indexes (the Columnstore Object Pool) will be limited to 25% of the total allowed Buffer Pool size, meaning that for the Standard Edition the maximum size of the Columnstore Object Pool will be limited to 1/4 of 128 GB, making it 32 GB of RAM. This does not mean that you can’t build a Columnstore table with a couple of terabytes (tip: you will need to use partitioning for that); it means that you can keep only 32 GB of this data in memory at the same time.
These 32 GB are not counted within the Buffer Pool, so you are getting this 25% as a bonus on top of your traditional 128 GB of RAM.
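
You can watch both pools yourself. The sketch below sums the memory clerks backing the Buffer Pool and the Columnstore Object Pool (the clerk type names are the ones exposed by `sys.dm_os_memory_clerks`):

```sql
-- CACHESTORE_COLUMNSTOREOBJECTPOOL backs the Columnstore Object Pool,
-- MEMORYCLERK_SQLBUFFERPOOL backs the classic Buffer Pool.
SELECT
    clerk_type = [type],
    total_mb   = SUM(pages_kb) / 1024.
FROM sys.dm_os_memory_clerks
WHERE [type] IN (N'CACHESTORE_COLUMNSTOREOBJECTPOOL',
                 N'MEMORYCLERK_SQLBUFFERPOOL')
GROUP BY [type];
```

On a Standard Edition instance you should never see the columnstore clerk climb past roughly 32 GB, regardless of how much data your columnstore tables hold on disk.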

The same principle applies to the In-Memory structures, which receive another 25% of the maximum Buffer Pool size – 32 GB of RAM for the Standard Edition – making the potential extra-memory addition equal to 50% (64 GB of RAM for Standard Edition) and raising the total amount of memory to 192 GB for the Standard Edition, for free!!!
If you are using the Web Edition, you get 16 GB for Columnstore & 16 GB for In-Memory, increasing your total RAM from 64 GB to 96 GB!
For the Express Edition this means an addition of 256 MB for Columnstore and 256 MB for In-Memory, increasing the total amount of RAM to 1.5 GB.

For those who will be crying about the limitations (there are some for the Express Edition, such as In-Memory OLTP, which can’t function without Filestream permissions, and Change Data Capture, which can’t work because of the missing SQL Server Agent) and about some of the enterprise-grade availability features – please, get a life!
As for the memory limitations in SQL Server Express and SQL Server Standard – remind yourself that they have just received a 50% improvement, besides all those incredible features like Columnstore, Partitioning, Compression & In-Memory that should elevate any edition to previously unseen speeds.
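
The partitioning tip mentioned above can be sketched in T-SQL; with a multi-terabyte columnstore table, queries with good partition elimination only need the touched partitions' segments in the 32 GB object pool. All names and boundary values here are hypothetical:

```sql
-- Monthly partitioning for a hypothetical fact table:
CREATE PARTITION FUNCTION pfMonth (date)
    AS RANGE RIGHT FOR VALUES ('2016-01-01', '2016-02-01', '2016-03-01');
CREATE PARTITION SCHEME psMonth
    AS PARTITION pfMonth ALL TO ([PRIMARY]);

CREATE TABLE dbo.FactSales
(
    SaleDate date  NOT NULL,
    Amount   money NOT NULL
) ON psMonth (SaleDate);

-- The clustered columnstore index is partition-aligned with the table:
CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales
    ON dbo.FactSales
    ON psMonth (SaleDate);
```

A WHERE clause on SaleDate then lets the engine skip whole partitions, keeping the working set inside the Columnstore Object Pool limit.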

I am more than confident that we shall soon have the same feature set in Azure SQL Database – which is super-exciting!

6 thoughts on “SQL Server 2016 SP1 – Programmability Surface for everyone!”

  1. Peter de Heer

    You might want to look into the usage of the “Buffer Pool” and the “Column store object pool” as things seem not to work as advertised.

    Here is what I observed:

    1) When the first query after a cold restart uses a columnstore index, the “Column store object pool” grows and the “Buffer Pool” just a little. Just as expected and as it should be.

2) But now flood the “Buffer Pool” with data from a query that does nothing with a columnstore index. Here I witness that the “Buffer Pool” rapidly cannibalizes the “Column store object pool”. This happens even when there is plenty of memory for both, and limiting the “Buffer Pool” size only seems to accelerate this trend.

3) When we now start a query that uses only columnstore again, the growth of the “Column store object pool” is extremely slow and queries often run 3 times slower for no apparent reason.

There seem to be odd priorities that work out very badly in practice:

    * The “Buffer Pool” growth allocates a percentage of the required memory from the “Column store object pool” whenever it can, thereby trashing it.
    * The “Column store object pool” does not grow when there is no free memory left, and only grows by a very small percentage with memory it takes back from the “Buffer Pool”.

I have essentially lost days of tuning work because of this behavior, and I cannot find any documentation that is valid. All of the Microsoft docs, which are essentially marketing documents, are deceptive and incorrect.

    Maybe you know more?

    This is what I use to see the buffer sizes:

    -- the query as posted here was truncated; this is a reconstruction
    -- over sys.dm_os_memory_clerks matching the surviving lines
    select
        clerk_name = left( [type], 40 )
      , total_mb   = round( cast( sum( pages_kb ) as float ) / 1024, 3 )
    from sys.dm_os_memory_clerks
    group by [type]
    order by total_mb desc;

    This is all I execute to cleanup all I am tracking:

    DBCC dropcleanbuffers;
    DBCC freesystemcache( 'ALL' );
    DBCC freeproccache;

    DBCC sqlperf( 'sys.dm_os_wait_stats' , clear );
    DBCC sqlperf( 'sys.dm_os_latch_stats', clear );

    Any input that helps is appreciated, obviously!

      1. Peter de Heer

Thanks for your response (I have been away for a while).

        The version I was using is the 2016 SP1 Standard edition.
        The final production will be a 2016 SP1 Web edition.

To mitigate the problem of re-loading and decompressing data, I decided not to use “columnstore_archive” compression. As we have a really fast SSD in the final system, the overhead of just reading the data is not that high, and the cost of decompressing the archive compression into the “columnstore object pool” is gone.
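
For reference, dropping archive compression is a plain index rebuild; a sketch with assumed index and table names:

```sql
-- Rebuild from COLUMNSTORE_ARCHIVE back to regular COLUMNSTORE compression,
-- trading disk footprint for cheaper decompression into the object pool:
ALTER INDEX CCI_FactSales ON dbo.FactSales
    REBUILD WITH (DATA_COMPRESSION = COLUMNSTORE);

-- On a partitioned table the choice can be made per partition,
-- e.g. keep only a cold partition archive-compressed:
ALTER INDEX CCI_FactSales ON dbo.FactSales
    REBUILD PARTITION = 1
    WITH (DATA_COMPRESSION = COLUMNSTORE_ARCHIVE);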

It is still a shame that it essentially doubles the size of the affected table and increases the backups by a third. It might be less of an issue on an Enterprise Edition, but I somehow doubt it, and I cannot think of a good reason it works the way it does. Almost any different way would be better! I was hoping Microsoft finally got it right this time, but I was too optimistic.

Maybe 2017 will see some improvements; I still have to check 2017 RC1. Unfortunately it came too late to be a candidate for production.

        1. Niko Neugebauer Post author

          Hi Peter,

I suggest making decisions on 2017 only after RTM (release); every CTP & RC brings enough changes that I refrain from judgement until the release is declared fit for production.
Still, you raise a valid question, and I guess I need to revisit the changes in the engine to see if the Columnstore Object Pool might be a problem for other editions.

          Best regards,
          Niko Neugebauer
