Batch Mode – part 1 (“The Beginning”)

This is the very first blog post in a new series on Batch Mode, welcome and enjoy!

I will be going into every detail of Batch Mode that I know of in the upcoming blog posts (yes, there will be a lot of them), but in this very first one I decided to make even more public the case I have been pushing for the last 8 months: implementing Batch Mode for RowStore.

Batch Mode

Batch Mode is a special query processing mode that targets a very significant improvement for execution plans that work on a large number of rows.

Batch Mode aims to reduce the average number of CPU instructions needed to process a row, and in this way achieves some amazing overall speed improvements. Instead of handling one row at a time, the operators process rows in batches (of roughly 900 rows in the current implementation), so much of the per-operator overhead is paid once per batch rather than once per row.

First introduced to the mass market in SQL Server 2012, and currently implemented only for Columnstore indexes, Batch Mode can, depending on the situation, deliver more than a 20-fold improvement in execution time.
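To see Batch Mode in action today, you need a Columnstore index. The sketch below (the table and column names are hypothetical, adjust them to your own schema) creates a clustered columnstore index and runs a typical aggregation; in the actual execution plan, the "Actual Execution Mode" property of the scan and hash aggregate operators should read "Batch" instead of "Row".

```sql
-- Hypothetical fact table; on SQL Server 2012/2014 a columnstore
-- index is the only way to get Batch Mode operators in the plan.
CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales
    ON dbo.FactSales;

-- Run an aggregation and inspect the actual execution plan:
-- the "Actual Execution Mode" property of the Columnstore Scan
-- and Hash Match operators should show "Batch".
SELECT ProductKey,
       SUM(SalesAmount) AS TotalSales
    FROM dbo.FactSales
    GROUP BY ProductKey;
```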

From what I understand, the biggest speed improvement of Columnstore Indexes in reality comes not from Compression & Segment Elimination but from Batch Mode. Segment Elimination reduces the amount of data read and processed, but that brings perhaps a few times overall improvement; the real speed comes from the CPU instruction optimisations.

You might ask: what is the reason for publishing this series of articles outside of the Columnstore series that I have been writing for over a year?
The answer is extremely simple: to my knowledge, there is no logical reason for Batch Mode to work exclusively with Columnstore Indexes.
RowStore can greatly benefit from it.

Batch Mode for Row Store?

Reducing the number of CPU instructions per row for some of the execution plan's iterators in a huge Data Warehousing query will show the benefits immediately, delivering query results in a matter of seconds instead of minutes or hours.
For OLTP systems, even a small improvement will be multiplied by the number of queries running on the system simultaneously, and the total difference is something that should be measured carefully.

I understand that in most cases OLTP systems run with MAXDOP = 1, but every one of us knows a couple of hundred examples where this is not true.
Also, please take a look at this Connect Item that Rick Kutschera from BWin.Party submitted a couple of months ago: Clustered Columnstore and DOP 1
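For context on that Connect Item: in SQL Server 2012 and 2014, Batch Mode requires a parallel plan, so forcing a serial plan silently falls back to Row Mode. A small sketch (the table name is hypothetical):

```sql
-- On SQL Server 2012/2014, forcing a serial plan makes the engine
-- fall back from Batch Mode to Row Mode, even when reading from a
-- columnstore index. Table name is hypothetical.
SELECT ProductKey,
       COUNT(*) AS RowsPerProduct
    FROM dbo.FactSales
    GROUP BY ProductKey
    OPTION (MAXDOP 1);  -- serial plan => Row Mode execution
```

Compare the actual execution plan with and without the MAXDOP 1 hint and watch the "Actual Execution Mode" property flip between "Batch" and "Row".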

Another important point: as far as I understand the current (SQL Server 2014) implementation of Batch Mode, Delta-Stores are already being processed in Batch Mode, which means the algorithm itself is already able to work with b-trees and heaps (Row Storage), so there is no need to reinvent the wheel.
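Those Delta-Stores are themselves regular b-tree structures. On SQL Server 2014 you can see them through the sys.column_store_row_groups DMV, where row groups still sitting in a b-tree Delta-Store appear with the state OPEN:

```sql
-- sys.column_store_row_groups (SQL Server 2014) lists the row
-- groups of a clustered columnstore index; rows that are still
-- in a b-tree Delta-Store show up with state OPEN.
SELECT OBJECT_NAME(object_id) AS table_name,
       row_group_id,
       state_description,   -- OPEN = Delta-Store (b-tree)
       total_rows
    FROM sys.column_store_row_groups
    ORDER BY object_id, row_group_id;
```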

For all this, I opened a Connect Item yesterday which I kindly ask you to upvote, should you agree that Batch Mode is needed not only for OLAP-type applications but for OLTP as well:
Implement Batch Mode Support for Row Store.

to be continued …

2 thoughts on “Batch Mode – part 1 (“The Beginning”)”

    Niko Neugebauer (post author)

      Mark, thank you for the comment!

      Regarding the “Batch Mode” name: as far as I believe, this is an actual bow to batch processing from IBM; ask any good DB2 DBA about putting some processing into batch and they will sing its praises. 🙂

      As for SIMD itself, which I somehow recorded in my memory more as MMX (yep, those old Pentium days), I guess that is where the data processing industry is trying to get right now.
      To be honest, even when I was in school in the beginning of the 90s there were enough conversations about the future development of instruction sets for PC computers, and in my personal opinion the never-ending battle between CISC & RISC is never quite won. There will be times when going in one particular direction will bring more advantages, and there will be times when going the other way will be essential.

      I will go into a lot of details in the upcoming weeks.
