How to use CISL – the Columnstore Indexes Scripts Library

CISL - Columnstore Indexes Scripts Library
Around 3.5 months ago, in September 2015, I announced the first public release of the CISL – Columnstore Indexes Scripts Library, which gives you a deeper insight into any database that uses, or could use, Columnstore Indexes.
Since then, I have released 4 more “point releases” with bug fixes and new features, and I have greatly expanded SQL Server support with the inclusion of SQL Server 2012, SQL Server 2016 and Azure SQL Database.

Today, together with this blog post, I am releasing CISL 1.0.4, which brings a small new feature: showing the number of days that have passed since the SQL Server release you are using was published.

CISL - version 104 list
At the moment, CISL supports all SQL Server versions that use Columnstore Indexes: 2012, 2014, 2016 and the current implementation of Azure SQL Database v12.
The current structure of CISL is simple – there are 8 different scripts, each presented in 2 different forms: a plain script that can be executed directly against any database, and the same script as a stored procedure that can be installed in each of the needed databases for more comfortable later use. Notice that with the current strategy for Azure SQL Database, all DMVs related to Columnstore Indexes are database-scoped, so if you intend to use the stored procedures from CISL there, you will need to install them in each of the databases.
The different scripts also expose somewhat different columns across the different SQL Server versions. With each release Microsoft adds a lot of functionality, and CISL adjusts to include and support it accordingly. Some scripts make no sense for particular versions (for example, SQL Server version information on Azure SQL Database) and so simply do not exist for them.

Here is the list of all current scripts that can be found in CISL 1.0.4:
alignment.sql (cstore_GetAlignment.sql as a Stored Procedure) – this script shows the alignment (ordering) between the different Columnstore Segments.
dictionaries.sql (cstore_GetDictionaries.sql as a Stored Procedure) – Shows detailed information about the Columnstore Dictionaries.
fragmentation.sql (cstore_GetFragmentation.sql as a Stored Procedure) – Shows the different types of Columnstore Indexes Fragmentation.
memory.sql (cstore_GetMemory.sql as a Stored Procedure) – Shows the content of the Columnstore Object Pool.
row_groups.sql (cstore_GetRowGroups.sql as a Stored Procedure) – Shows detailed information on the Columnstore Row Groups.
row_groups_details.sql (cstore_GetRowGroupsDetails.sql as a Stored Procedure) – Shows detailed information on the Columnstore Row Groups by listing details for each of the available Row Groups.
sqlserver_instance_info.sql (cstore_GetSQLInfo.sql as a Stored Procedure) – Provides the list of known SQL Server versions that contain bug fixes or improvements over your current version, and lists the currently enabled trace flags on the instance & session.
suggested_tables.sql (cstore_SuggestedTables.sql as a Stored Procedure) – Lists tables which could potentially be interesting for implementing Columnstore Indexes.

Let’s go and work through all of those scripts one by one, with their parameters and results:

SQL Server Instance Information – sqlserver_instance_info.sql (cstore_GetSQLInfo.sql)

If you are starting to work with a SQL Server that uses Columnstore Indexes, this is the script you should begin with. It will tell you which version of SQL Server you are using and whether any newer updates (Service Packs and Cumulative Updates) are available.
The script can list all known newer updates, as well as every single Columnstore bug fix released after the version you are running.
The last result set lists all trace flags that have a specific effect on, or in some way affect, Columnstore Indexes in SQL Server.

There are 3 parameters that allow you to control the output and the functionalities of this script:

@showUnrecognizedTraceFlags bit = 1 – Enables showing active trace flags even if they are not related to Columnstore Indexes. For example, if you are using some specific trace flag that has nothing to do with Columnstore Indexes, it will still be shown in the output of this script.
It is activated by default.

@identifyCurrentVersion bit = 1 – Enables identification of the currently used SQL Server Instance version. This parameter controls if the information on the current SQL Server is shown or not.
It is activated by default.

@showNewerVersions bit = 0 – Enables showing the SQL Server versions that are newer than the current one. With this parameter set to 1, you will receive the list of all Cumulative Updates and Service Packs that were released after your SQL Server version.
This parameter is disabled by default.

Let’s execute the stored procedure that I have already installed on my SQL Server 2014 SP1:
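The call itself is minimal (a sketch; the dbo schema for the installed procedure is an assumption):

```sql
-- Show the current SQL Server build, known fixes and active trace flags
EXEC dbo.cstore_GetSQLInfo;
```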

sqlserver_instance_info - default
You can see in the image that I really am running SQL Server 2014 with Service Pack 1 here, that over 220 days (more than 7 months) have passed since its release, and that a number of critical errors have been fixed since then – mostly in Cumulative Update 1, but also in Cumulative Update 3 for SQL Server 2014 SP1.
At this point I can clearly see that, were this a production instance, I would need to carefully consider upgrading to a later Cumulative Update.
The list of currently enabled trace flags is empty, so all I want at the moment is to see which Cumulative Updates were released after Service Pack 1 of SQL Server 2014. For that purpose I will re-execute my stored procedure with the parameter @showNewerVersions set to 1:
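The re-execution looks like this (dbo schema assumed):

```sql
-- Additionally list the Service Packs & Cumulative Updates newer than the current build
EXEC dbo.cstore_GetSQLInfo @showNewerVersions = 1;
```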

sqlserver_instance_info - show newer versions
This time I can see that, at the moment this article is being written, 3 Cumulative Updates for SQL Server 2014 Service Pack 1 have been published.

Let’s get back to our trace flags and disable the Tuple Mover – the background process that converts closed Delta-Stores into compressed Row Groups:
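Trace flag 634 is the documented way to switch the background Tuple Mover off; a sketch of the command:

```sql
-- Disable the automated Tuple Mover instance-wide
DBCC TRACEON (634, -1);
```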

Running our stored procedure again with the default parameters will let us detect this situation immediately, providing a basic description, information on the supportability of this particular trace flag, plus a link where more information can be found:

sqlserver_instance_info - disable tuple mover
Please do not forget to re-enable the automated Tuple mover by executing the following script:
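Something along these lines (the counterpart of the earlier DBCC TRACEON command):

```sql
-- Re-enable the automated Tuple Mover
DBCC TRACEOFF (634, -1);
```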

Suggested Tables – suggested_tables.sql (cstore_SuggestedTables.sql)

This script lists tables which could potentially be interesting for implementing Columnstore Indexes, showing you how to get those tables converted to the desired Columnstore Index.

For testing this procedure, let’s restore a fresh copy of the free ContosoRetailDW database. I will use the following script, which reads the backup from the C:\Install location and puts the data files into C:\Data:
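A restore script along these lines should do it (the backup file name and the logical file names are assumptions – verify yours with RESTORE FILELISTONLY):

```sql
USE master;
-- Restore from C:\Install, placing the data & log files into C:\Data
RESTORE DATABASE ContosoRetailDW
    FROM DISK = N'C:\Install\ContosoRetailDW.bak'
    WITH MOVE N'ContosoRetailDW2.0'     TO N'C:\Data\ContosoRetailDW.mdf',
         MOVE N'ContosoRetailDW2.0_log' TO N'C:\Data\ContosoRetailDW.ldf',
         REPLACE;
```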

Let’s run the stored procedure on ContosoRetailDW (you will need to execute the source code first to install it there):
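With the procedure installed, the call is simply (dbo schema assumed):

```sql
USE ContosoRetailDW;
EXEC dbo.cstore_SuggestedTables;
```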

suggested_tables - default
The default parameters execution gives you a list of the tables that might be interesting to convert to one of the Columnstore Index types.
Using the documentation, I have listed the known cases of SQL Server functionalities preventing Columnstore Index creation.
Right now those 5 tables are of interest for creating Columnstore Indexes – Nonclustered Columnstore Indexes, to be precise. The limitation that Nonclustered Columnstore Indexes are non-updatable in SQL Server 2012 & SQL Server 2014 is what makes this kind of index the only one suggested in this situation.
You will need to scroll to the right to see that the presence of Primary & Foreign Keys on those tables is exactly what prevents us from moving on to updatable Clustered Columnstore Indexes on the current test system with SQL Server 2014.
suggested_tables - primary & foreign keys

There are 9 parameters that allow you to control the output and the functionalities of this script:

@minRowsToConsider bigint = 500000 – This parameter defines the minimum number of rows for a table to be considered for inclusion in the suggestions.
The default is set to 500,000, which represents around 50% of the maximum size of a Row Group.

@minSizeToConsiderInGB Decimal(16,3) = 0.00 – Defines the minimum size in GB for a table to be considered for inclusion in the suggestions.
You can use this parameter to filter out the tables that are too small to be interesting for conversion to the Columnstore technology.
This parameter is set to 0 by default, meaning that every table will be considered.

@schemaName nvarchar(256) = NULL – this parameter allows you to show data filtered down to the specified schema.
By default it is set to NULL, meaning that all schemas will be considered.

@tableName nvarchar(256) = NULL – allows you to show data filtered down to the specified table name pattern.
By default it is set to NULL, meaning that all tables in the database will be considered.

@considerColumnsOver8K bit = 1 – shows the tables whose summed column length exceeds 8000 bytes and which are therefore not supported by Columnstore Indexes.
This parameter is enabled by default, meaning that even those tables that need some significant work – such as redesigning the table – will be included.

@showReadyTablesOnly bit = 0 – this one shows only those Rowstore tables that can already receive a Columnstore Index without any additional work.
By default this parameter is disabled.

@showUnsupportedColumnsDetails bit = 0 – this parameter shows in a separate result set a list of all unsupported columns from the suggested tables.
By default it is disabled.

@showTSQLCommandsBeta bit = 0 – Shows a list with commands for dropping the objects that prevent Columnstore Index creation, and then creating the Columnstore Index itself.
Notice that this functionality is in an early beta phase and is not production ready; it serves more as a guide in the conversion process.
This parameter is disabled by default.

@columnstoreIndexTypeForTSQL varchar(20) = ‘Clustered’ – Allows you to define the type of Columnstore Index to be created when @showTSQLCommandsBeta is set to 1, with the possible values ‘Clustered’ and ‘Nonclustered’.
This parameter is set to create Clustered Columnstore Index by default.

Consider some examples:

This query will show all the tables that have over 5 million rows and occupy more than 200 MB on the disk:
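A sketch of such a call (200 MB expressed in GB):

```sql
EXEC dbo.cstore_SuggestedTables
    @minRowsToConsider = 5000000,
    @minSizeToConsiderInGB = 0.2;   -- roughly 200 MB
```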
suggested_tables - filtered

In the following script I am filtering down to all tables that contain GEO in their name and have over 1000 rows, but most importantly I am requesting all incompatible columns to be listed in a separate result set:
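Along these lines (the parameter names are taken from the list above; @tableName acts as a name pattern):

```sql
EXEC dbo.cstore_SuggestedTables
    @tableName = 'GEO',                  -- table name pattern
    @minRowsToConsider = 1000,
    @showUnsupportedColumnsDetails = 1;  -- extra result set with incompatible columns
```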

suggested_tables - unsupported columns

In the next example we will get the list of all T-SQL commands for dropping the functionalities that currently prevent the conversion of this table to a Clustered Columnstore Index, followed by the statement for creating the Clustered Columnstore Index itself:
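A sketch of the call, scoped to the fact table:

```sql
EXEC dbo.cstore_SuggestedTables
    @tableName = 'FactOnlineSales',
    @showTSQLCommandsBeta = 1,
    @columnstoreIndexTypeForTSQL = 'Clustered';
```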

suggested_tables - t-sql script

Let’s select the listed commands and execute them in a new window (this script will drop the primary key and 5 foreign keys, and then create a Clustered Columnstore Index on the FactOnlineSales table):
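The generated script follows this pattern (the constraint names below are illustrative – use the names the procedure actually outputs for your restore):

```sql
-- Drop the objects preventing the Clustered Columnstore Index...
ALTER TABLE dbo.FactOnlineSales DROP CONSTRAINT PK_FactOnlineSales_SalesKey;
ALTER TABLE dbo.FactOnlineSales DROP CONSTRAINT FK_FactOnlineSales_DimCurrency;
ALTER TABLE dbo.FactOnlineSales DROP CONSTRAINT FK_FactOnlineSales_DimCustomer;
ALTER TABLE dbo.FactOnlineSales DROP CONSTRAINT FK_FactOnlineSales_DimDate;
ALTER TABLE dbo.FactOnlineSales DROP CONSTRAINT FK_FactOnlineSales_DimProduct;
ALTER TABLE dbo.FactOnlineSales DROP CONSTRAINT FK_FactOnlineSales_DimStore;
-- ...and then create the index itself
CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactOnlineSales ON dbo.FactOnlineSales;
```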

Row Groups – row_groups.sql (cstore_GetRowGroups.sql)

This script gives you a great overview of the different internal structures within Columnstore Indexes. It provides information on all types of Row Groups within Columnstore Indexes, the total number of rows, the number of deleted rows, the size in GB of the whole Columnstore structure, and for SQL Server 2012 & SQL Server 2014 it gives a basic idea of how many times the Columnstore structure was accessed with Scans and Updates.

This script has 7 different parameters:

@indexType char(2) = NULL – this parameter allows you to filter Columnstore Indexes by their type, with the possible values CC for ‘Clustered’, NC for ‘Nonclustered’, or NULL for both.
By default this parameter includes all Columnstore Indexes.

@compressionType varchar(15) = NULL – this parameter allows you to filter by compression type, with the values ‘ARCHIVE’ for Columnstore Archival compression, ‘COLUMNSTORE’ for the default Columnstore compression, or NULL for both types.
By default it is set to include all types of compression.

@minTotalRows bigint = 0 – this parameter controls the minimum number of rows for a table to be included in the results. It is useful if you are working with a database that contains a huge number of tables, or tables with a lot of Columnstore partitions, and you want to include only the tables that have at least a specific number of rows.
By default it is including all tables.

@minSizeInGB Decimal(16,3) = 0.00 – specifies the minimum size in GB for a table to be included. It is useful if you are working with a database that contains a huge number of tables, or tables with a lot of Columnstore partitions, and you want to include only the tables of at least a certain size.
By default it is including all tables.

@schemaName nvarchar(256) = NULL – this parameter allows you to show data filtered down to the specified schema.
By default it is set to NULL, meaning that all schemas will be considered.

@tableName nvarchar(256) = NULL – allows you to show data filtered down to the specified table name pattern.
By default it is set to NULL, meaning that all tables in the database will be considered.

@showPartitionDetails bit = 1 – this parameter controls whether the details of each of the available partitions are shown, or only the aggregated information for the whole table.
By default the per-partition details are shown in the result set.

Let’s fire up the stored procedure cstore_GetRowGroups and invoke it with different parameters:
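Starting with the defaults (dbo schema assumed):

```sql
EXEC dbo.cstore_GetRowGroups;
```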

row_groups - default
Because we have just 1 table with a Columnstore Index, you see a result with just 1 row. Most important is the overview of the numbers of the different Delta-Stores – Open & Closed, of Bulk Loaded Row Groups (these are the ones that appear in sys.column_store_row_groups with status = HIDDEN), and of the total number of Row Groups with the numbers of Deleted & Active Rows per partition.

Let’s say we delete 100,000 rows from this FactOnlineSales table:
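A sketch of the deletion (which rows are picked does not matter for this demonstration):

```sql
-- Logically delete 100,000 rows; they are only marked in the Deleted Bitmap
DELETE TOP (100000) FROM dbo.FactOnlineSales;
```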

row_groups - deleted rows
You can see that all those deleted rows are automatically displayed in the output.

Let’s add 1 million rows to our FactOnlineSales table by simply reloading 1 million rows from the existing data and changing their primary key values into negative numbers:
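A sketch, assuming the standard ContosoRetailDW column list for FactOnlineSales:

```sql
INSERT INTO dbo.FactOnlineSales
    SELECT TOP (1000000)
           -OnlineSalesKey,   -- negate the key so the reloaded rows do not collide
           DateKey, StoreKey, ProductKey, PromotionKey, CurrencyKey, CustomerKey,
           SalesOrderNumber, SalesOrderLineNumber, SalesQuantity, SalesAmount,
           ReturnQuantity, ReturnAmount, DiscountQuantity, DiscountAmount,
           TotalCost, UnitCost, UnitPrice, ETLLoadID, LoadDate, UpdateDate
      FROM dbo.FactOnlineSales
     ORDER BY OnlineSalesKey;
```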

row_groups - loaded 1M rows
This time we have 14 different Row Groups instead of the previous 13, with 1 new Row Group, which is a Delta-Store. We did not get a fully compressed Row Group because we loaded fewer than 1,048,576 rows at a time.

Let’s create a new empty table CCI_Test with a Clustered Columnstore Index:
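For example (the column definition here is arbitrary):

```sql
-- An empty test table with a Clustered Columnstore Index
CREATE TABLE dbo.CCI_Test ( id INT NOT NULL );
CREATE CLUSTERED COLUMNSTORE INDEX CCI_CCI_Test ON dbo.CCI_Test;
```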

row_groups - 2 tables
We see 2 tables, let’s filter out the table without any data:
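Requiring at least 1 row does the trick (a sketch):

```sql
-- Exclude the empty table from the output
EXEC dbo.cstore_GetRowGroups @minTotalRows = 1;
```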

row_groups - loaded 1M rows

This function has some limitations (you need the VIEW_SERVER_STATE permission to use it), but these will be addressed in the next releases.

Row Groups Details – row_groups_details.sql (cstore_GetRowGroupsDetails.sql)

This script shows detailed information on the Columnstore Row Groups by listing & filtering all Row Groups contained within the filtered tables with Columnstore Indexes.

@schemaName nvarchar(256) = NULL – this parameter allows you to show data filtered down to the specified schema.
By default it is set to NULL, meaning that all schemas will be considered.

@tableName nvarchar(256) = NULL – allows you to show data filtered down to the specified table name pattern.
By default it is set to NULL, meaning that all tables in the database will be considered.

@partitionNumber bigint = 0 – this parameter allows you to show the details of one specific partition, where 0 stands for no filtering. You can set this number to filter down to a specific partition; typically this is done in conjunction with the 2 parameters above – schema and table name.
By default no partitions will be filtered.

@showTrimmedGroupsOnly bit = 0 – this parameter helps you to filter down to only those Row Groups whose size is not equal to 1,048,576 rows.
By default it is not filtering out any Row Groups at all.

@showNonCompressedOnly bit = 0 – this parameter shows only the non-compressed Row Groups (the Delta-Stores), filtering out the compressed Row Groups & Tombstones. This parameter is great when you are debugging a specific table situation.
By default it is set not to filter out anything.

@showFragmentedGroupsOnly bit = 0 – allows you to show only the Row Groups that have Deleted Rows in them. Like the previous parameter, this one is useful for debugging.
By default it is not activated.

@minSizeInMB Decimal(16,3) = NULL – this parameter allows you to set the minimum size in MB for a Row Group to be included in the results; it is used for filtering and debugging.
By default it is set not to filter out anything.

@maxSizeInMB Decimal(16,3) = NULL – similar to the above, this parameter allows you to set the maximum size in MB for a Row Group to be included in the results; it is used for filtering and debugging.
By default it is set not to filter out anything.

Let’s dive into the Row Groups of the FactOnlineSales table by using the stored procedure (you will need to install it in your ContosoRetailDW database first):
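The default call (dbo schema assumed):

```sql
EXEC dbo.cstore_GetRowGroupsDetails;
```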

row_groups_details - default
Depending on a number of factors (available resources, number of cores and Resource Governor settings) you should get a similar (though most probably not exactly the same) distribution of rows between the different Row Groups as in the picture above.
I have the same 14 Row Groups that I saw with the cstore_GetRowGroups stored procedure, and now I have detailed information on all of them.

Now let’s see only those Row Groups that are trimmed:
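A sketch of the filtered call:

```sql
-- Only the Row Groups with fewer than 1,048,576 rows
EXEC dbo.cstore_GetRowGroupsDetails @showTrimmedGroupsOnly = 1;
```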

row_groups_details - trimmed
This time only 3 Row Groups are shown – those with a number of rows not equal to 1,048,576.

Let’s list all the Delta-Stores within our table:
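Using the non-compressed filter (a sketch):

```sql
-- Only the non-compressed Row Groups, i.e. the Delta-Stores
EXEC dbo.cstore_GetRowGroupsDetails @showNonCompressedOnly = 1;
```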

row_groups_details - noncompressed
As expected we have only 1 Delta-Store with 1 Million Rows.

Let’s move onto display the fragmented Row Groups (these are the ones that have deleted rows):
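The corresponding call (a sketch):

```sql
-- Only the Row Groups that contain deleted rows
EXEC dbo.cstore_GetRowGroupsDetails @showFragmentedGroupsOnly = 1;
```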

row_groups_details - fragmented
This time we get the Row Group from which we deleted 100,000 rows. This is very useful when you want to see and analyse what exactly is going on with your table.

As already explained above, we can filter Row Groups based on their size; in the code below I am filtering for all those Row Groups that occupy 11 MB or more:
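A sketch of that size filter:

```sql
-- Only the Row Groups sized 11 MB or bigger
EXEC dbo.cstore_GetRowGroupsDetails @minSizeInMB = 11.0;
```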

row_groups_details - min size
In the result set we can find 5 Row Groups that are sized 11 MB or bigger.

Segment Alignment – alignment.sql (cstore_GetAlignment.sql)

This script shows the alignment (ordering) between the different Columnstore Segments; for more information about this concept, please visit Columnstore Indexes – part 57 (“Segment Alignment Maintenance”).

This script contains 6 configurable parameters that allow you to drill down into the alignment of the segments within the Columnstore Index.

@schemaName nvarchar(256) = NULL – this parameter allows you to show data filtered down to the specified schema.
By default it is set to NULL, meaning that all schemas will be considered.

@tableName nvarchar(256) = NULL – allows you to show data filtered down to the specified table name pattern.
By default it is set to NULL, meaning that all tables in the database will be considered.

@showPartitionStats bit = 1 – this parameter controls whether the details of each of the available partitions are shown, or only the aggregated information for the whole table.
By default the per-partition details are shown in the result set.

@showUnsupportedSegments bit = 1 – this parameter includes the unsupported Segments in the result set, meaning that even those columns whose data type cannot be used for segment elimination are displayed. A specific column in the result set indicates whether the data type is supported or not.
By default this parameter is activated, meaning that no columns will be filtered out.

@columnName nvarchar(256) = NULL – allows you to show data filtered down to 1 particular column name. This is very useful if you are analysing the behaviour of queries against 1 particular table where you no longer get the same performance as before because of missing segment elimination.
This parameter is ignored by default.

@columnId int = NULL – this parameter allows you to filter on one specific column id. Like the previous parameter, it is very useful when analysing queries against 1 particular table that have lost performance because of missing segment elimination.
This parameter is ignored by default.

Let’s dive into the segment analysis by executing this procedure:
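The default call (dbo schema assumed):

```sql
EXEC dbo.cstore_GetAlignment;
```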

alignment - default
The result set is quite self-explanatory, but beware that on big databases with a lot of wide columnstore tables (tables with many columns), the output gets very big and you will need to use the additional parameters to dive into the details on a column & partition basis.
Also, notice that there is a column SalesOrderNumber with the data type nvarchar, which is not supported for segment elimination in SQL Server 2012-2016. This is why the “Segment Elimination” column displays this information instead of the OK that is the success message for every other column.
As for the rest of the columns in the result set – they are pretty clear, with the most important one being [Segment Alignment %].

Let’s set the parameter for showing unsupported Segments to false (0) and re-execute the stored procedure:
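The re-execution looks like this (a sketch):

```sql
-- Hide the columns whose data types do not support segment elimination
EXEC dbo.cstore_GetAlignment @showUnsupportedSegments = 0;
```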

alignment - unsupported segments removed
This time we do not see any of the segments that do not support Segment Elimination.

Let’s drill down further, specifying a column name that we are interested in – OnlineSalesKey for example:
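Filtering down to one column (a sketch):

```sql
EXEC dbo.cstore_GetAlignment
    @showUnsupportedSegments = 0,
    @columnName = 'OnlineSalesKey';
```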

alignment - specific column name
Alternatively you can select a specific column id by using @columnId parameter.

Logical Fragmentation – fragmentation.sql (cstore_GetFragmentation.sql)

This script shows the different types of fragmentation that Columnstore Indexes have; for more information feel free to dive into Clustered Columnstore Indexes – part 36 (“Maintenance Solutions for Columnstore”). Basically it focuses on 2 concepts – the logically deleted rows in the Columnstore Index (the Deleted Bitmap for SQL Server 2014, and the Deleted Buffer & Deleted Table for SQL Server 2016) & the trimmed Row Groups (Row Groups that did not reach the maximum number of rows – 1,048,576).

This script has just a couple of parameters – in the current version just 3, to be more precise – and they allow you to filter the tables analysed by the script, and to choose between partition statistics and overall table statistics.

@schemaName nvarchar(256) = NULL – this parameter allows you to show data filtered down to the specified schema.
By default it is set to NULL, meaning that all schemas will be considered.

@tableName nvarchar(256) = NULL – allows you to show data filtered down to the specified table name pattern.
By default it is set to NULL, meaning that all tables in the database will be considered.

@showPartitionStats bit = 1 – this parameter controls whether the details of each of the available partitions are shown, or only the aggregated information for the whole table.
By default the per-partition details are shown in the result set.

Let’s see what kind of fragmentation we have:
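The default call (dbo schema assumed):

```sql
EXEC dbo.cstore_GetFragmentation;
```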

fragmentation - default
The output of this script has a number of columns, and in a typical case you will need to scroll to the right to view them all. Besides the name of the table and the name of the Columnstore Index & its type, you will get the partition number and all the fine statistics described in the blog post mentioned in the description of this script – the overall logical fragmentation, the number of deleted Row Groups and their percentage, the number of trimmed Row Groups and their respective percentage, plus the number of optimisable Row Groups, meaning those Row Groups that in the ideal case should not exist at all.

This function gives you a great overview of the current state of each of the partitions of the tables with Columnstore Indexes, and it is one of the foundations for the upcoming maintenance solution inside CISL.

Dictionaries – dictionaries.sql (cstore_GetDictionaries.sql)

This script shows detailed information about the Columnstore Dictionaries, showing aggregated and detailed information about each of the dictionaries and columns in Columnstore Indexes.

@schemaName nvarchar(256) = NULL – this parameter allows you to show data filtered down to the specified schema.
By default it is set to NULL, meaning that all schemas will be considered.

@tableName nvarchar(256) = NULL – allows you to show data filtered down to the specified table name pattern.
By default it is set to NULL, meaning that all tables in the database will be considered.

@showDetails bit = 1 – enables showing the details of all dictionaries.
By default this parameter is enabled.

@showWarningsOnly bit = 0 – this parameter enables filtering of the dictionaries based on the dictionary size (@warningDictionarySizeInMB) and entry count (@warningEntryCount). With this parameter enabled, only dictionaries with potential problems will be delivered in the second result set.
By default this parameter is disabled.

@warningDictionarySizeInMB Decimal(8,2) = 6.0 – this parameter sets the dictionary size above which a dictionary will be selected. The value is in megabytes and the default is set to 6 MB.

@warningEntryCount Int = 1000000 – this parameter enables the selection of dictionaries that contain a high number of entries and thus might be heading towards dictionary pressure. This parameter should be used together with @showWarningsOnly.
By default the number of entries is set to 1 Million.

@showAllTextDictionaries bit = 0 – this parameter enables selecting all textual dictionaries, independently of their warning status. This parameter should be used together with @showWarningsOnly.
By default this parameter is disabled.

@showDictionaryType nvarchar(52) = NULL – this parameter enables filtering dictionaries by type, with the possible values ‘Local’, ‘Global’ or NULL for both.
By default this parameter is including all dictionaries in the output.

@columnName nvarchar(256) = NULL – this parameter allows you to filter the data down to 1 particular column name.
By default it is set to include all columns in the output.

Let’s invoke this function and see the results:
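The default call (dbo schema assumed):

```sql
EXEC dbo.cstore_GetDictionaries;
```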

dictionaries - default
There are 2 result sets for this stored procedure. The first one delivers overall information for each Columnstore Index partition: the number of rows being served (this is the sum of active & deleted rows), the total number of dictionaries, the total number of entries stored in the dictionaries, the maximum size of a local dictionary and the maximum size of the global dictionary.
These are important factors for determining dictionary pressure, which results in Row Group trimming.

The second result set delivers detailed information on every single dictionary within the Columnstore Indexes listed in the first result set.

Let’s take a more detailed look at the dictionaries that are already flagged with warnings by the default configuration:
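A sketch of that call:

```sql
-- Only the dictionaries exceeding the default size/entry-count thresholds
EXEC dbo.cstore_GetDictionaries @showWarningsOnly = 1;
```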

The first result set output did not change, so here are the details of the second result set:
dictionaries - show warnings
There are just 2 really big dictionaries, both nvarchar-based – they are both local and they correspond to Row Groups 1 & 2 of the Columnstore Index on our FactOnlineSales table.

Running the analysis of all text dictionaries with @showAllTextDictionaries = 1:
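Which would look like this (combined with the warnings filter, as the parameter description suggests):

```sql
EXEC dbo.cstore_GetDictionaries
    @showWarningsOnly = 1,
    @showAllTextDictionaries = 1;
```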

dictionaries - text
delivers us one extra dictionary – the global one, which is quite small at 2.70 MB. Since the global dictionary is never expanded, we do not have to worry about dictionary pressure for this table at the moment.

To be 100% sure that there are no other dictionaries for the SalesOrderNumber column, let’s run the following command:
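A sketch of the column-filtered call:

```sql
EXEC dbo.cstore_GetDictionaries
    @showAllTextDictionaries = 1,
    @columnName = 'SalesOrderNumber';
```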

The results are exactly the same with just 3 rows in the second result set.

The Memory – memory.sql (cstore_GetMemory.sql)

This script shows the content of the Columnstore Object Pool (Cache Store). It is especially useful for SQL Server 2012 & SQL Server 2014; in SQL Server 2016 there is a specific DMV addressing the previous lack of this information. For more information on the Columnstore Object Pool, please visit the following 2 blog posts: Columnstore Indexes – part 38 (“Memory Structures”) & Columnstore Indexes – part 39 (“Memory in Action”).

This script is very handy for analysing memory content while doing performance troubleshooting – it will allow you to understand which Segments and Dictionaries were already in memory and which were not.

The memory.sql/cstore_GetMemory.sql script has 7 parameters that allow you to control the output:

@showColumnDetails bit = 1 – this parameter drills down into each of the columns present in memory and shows details on the object (Segment, Dictionary, Deleted Bitmap) level.
By default it is set to show all of the information.

@showObjectTypeDetails bit = 1 – this parameter selects whether the details about the type of the object located in memory will be shown or not.
By default this parameter is set to show the object types.

@minMemoryInMb Decimal(8,2) = 0.0 – this parameter sets the minimum amount of memory that a Columnstore object should occupy to be presented in the output.
By default this parameter is set to include all objects.

@schemaName nvarchar(256) = NULL – this parameter allows you to show data filtered down to the specified schema.
By default it is set to NULL, meaning that all schemas will be considered.

@tableName nvarchar(256) = NULL – allows you to show data filtered down to the specified table name pattern.
By default it is set to NULL, meaning that all tables in the database will be considered.

@columnName nvarchar(256) = NULL – this parameter allows you to filter on a specific column name.
By default it is set to NULL, meaning that no filtering on the column name basis will be done.

@objectType nvarchar(50) = NULL – this parameter allows you to filter on a specific type of memory object. The possible values are ‘Segment’, ‘Global Dictionary’, ‘Local Dictionary’, ‘Primary Dictionary Bulk’ and ‘Deleted Bitmap’. It is very useful for getting rid of all the objects that are not being investigated.
By default it is set to NULL meaning that no filtering on the object type basis will be done.

Let’s execute the default configuration:
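The default call (dbo schema assumed):

```sql
EXEC dbo.cstore_GetMemory;
```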

The output might be a little tricky to understand at first:
memory - default
The result set represents the full set of Columnstore internal structures that can currently be found in memory. Besides the table name, column id & column name, you will find: the data type of each column; the type of the object (one of ‘Segment’, ‘Global Dictionary’, ‘Local Dictionary’, ‘Primary Dictionary Bulk’, ‘Deleted Bitmap’); the number of row fragments found in the Columnstore Object Pool; the % of total column structures (valid for Segments only – meaning that if you have 45 segments in memory and the total number of Row Groups for this partition is 50, then 90% of the structures are to be found in memory); the size in MB that those fragments occupy; and lastly the numbers of Scans & Updates made to this Columnstore Index partition, and the last time a scan was executed against it.

Notice that at the moment only Segments have Percentage of Total Column Structures filled out.

Before advancing any further, let’s run a simple test statement against our table with the Columnstore Index, one that will put all segments of the TotalCost column into the Columnstore Object Pool:
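Any full scan of the column will do; for example (a sketch):

```sql
-- Scan the whole column, forcing its segments into the Columnstore Object Pool
SELECT SUM(TotalCost) FROM dbo.FactOnlineSales;
```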

Let’s get an overview of our table:
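An aggregated view, without the per-column and per-object breakdown (a sketch):

```sql
EXEC dbo.cstore_GetMemory
    @showColumnDetails = 0,
    @showObjectTypeDetails = 0;
```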

memory - no details
This time it looks much prettier, doesn’t it? 🙂

Let’s dive deeper into the details and check on all the Segments that are to be found in memory:
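Filtering on the object type (a sketch):

```sql
-- Only the Segments currently present in the Columnstore Object Pool
EXEC dbo.cstore_GetMemory @objectType = 'Segment';
```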

memory - segments
This time it is much clearer – for most of the columns, 23.08% of the Segments are to be found in memory, but for the column that we scanned completely, 100% of the segments are loaded into memory.

What about the dictionaries for this column:
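Filtering down to the TotalCost column shows its dictionaries alongside its segments (a sketch):

```sql
EXEC dbo.cstore_GetMemory @columnName = 'TotalCost';
```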

memory - totalcost objects

By experimenting with the other parameters of this and the other scripts, you can dig deep into Columnstore Indexes behaviour without having to write big queries or remember all the possible DMVs and functions with their states and codes.

By the way, stay tuned – a major release is coming before the end of 2015 🙂
