Contact me

If you wish to contact me, the best way to do it is to write here. :)

48 thoughts on “Contact me”

  1. Priyadarshi Alok

    I have Visual Studio 2008 (version 9) and .NET Framework 4 on a 64-bit Windows 7 operating system.

    As per the Google Analytics installation steps,

    I have installed the .dll files into the folder mentioned below using Gacutil.exe.
    All the DLLs are installed correctly in the following location:

    C:\Windows\Microsoft.NET\assembly\GAC_MSIL

    DotNetOpenAuth.dll
    Google.Apis.Analytics.v3.dll
    Google.Apis.Authentication.OAuth2.dll
    Google.Apis.dll
    Newtonsoft.Json.dll
    SSISComponents.Dts.Pipeline.GoogleAnalyticsSource.dll
    Zlib.Portable.dll

    And then I copied all these DLLs into the following folder:

    C:\Program Files (x86)\Microsoft SQL Server\100\DTS\PipelineComponents

    and refreshed the SSIS toolbox using the command:

    C:\Program Files (x86)\Microsoft Visual Studio 9.0\Common7\IDE>devenv /ResetSettings

    But the GoogleAnalytics Source is still not visible in my SSIS Toolbox pane.

    Please help me figure out which installation step is missing.
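
    For reference, the GAC registration step described above is done per assembly, run from a command prompt whose gacutil.exe matches the target .NET Framework version (a sketch only):

    gacutil.exe /i Google.Apis.dll
    rem Repeat for each DLL listed above, then verify the registration:
    gacutil.exe /l Google.Apis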

  2. Tom

    Hi Niko,
    I’m confused about when to use the MAXDOP option when creating or dropping a CCI.

    CREATE CLUSTERED COLUMNSTORE INDEX [IndexName] ON [TableName]
    WITH (DROP_EXISTING = OFF, MAXDOP = 1);

    DROP INDEX [IndexName] ON [TableName] WITH (ONLINE = OFF, MAXDOP = 1);

    Question 1) Do I get the same results in the end if I use MAXDOP = 1 or MAXDOP = 8, or don’t use MAXDOP at all?
    I heard somewhere that using MAXDOP = 1 when creating a CCI will get better clustering.

    Question 2) Do you want better clustering to get better query performance?
    Question 3) Performance issues with MAXDOP = 1.

    If I use the MAXDOP = 1 option when creating or dropping a CCI, it takes a long time to process:
    6 hours to create the CCI with MAXDOP = 1, and 2 hours to drop it with MAXDOP = 1.

    If I use the MAXDOP = 8 option when creating or dropping a CCI, the process is much faster than with MAXDOP = 1:
    1 hour to create the CCI with MAXDOP = 8, and 25 minutes to drop it with MAXDOP = 8.

    If my SQL Server can handle the CCI create and drop with MAXDOP = 8 or more, should I be using a higher MAXDOP?
    Thanks for your help

    1. Niko Neugebauer Post author

      Hi Tom,

      First of all, you will need to use DROP_EXISTING = ON while re-creating a Columnstore Index over the Clustered rowstore one.
      Do not drop the Columnstore Index; it makes no sense. Create a Rowstore index with DROP_EXISTING = ON, ordering the data on the most frequently used column, and then re-create the Columnstore Index with DROP_EXISTING = ON.
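
      A minimal sketch of that sequence (the table, index, and column names here are hypothetical):

      -- 1) Convert the CCI into an ordered rowstore clustered index (same index name):
      CREATE CLUSTERED INDEX cci_FactSales ON dbo.FactSales (OrderDateKey)
      WITH (DROP_EXISTING = ON);

      -- 2) Re-create the Columnstore Index on top of it; MAXDOP = 1 preserves the sort order:
      CREATE CLUSTERED COLUMNSTORE INDEX cci_FactSales ON dbo.FactSales
      WITH (DROP_EXISTING = ON, MAXDOP = 1);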

      1. You will find the detailed explanation of Segment Clustering here: http://www.nikoport.com/2014/04/16/clustered-columnstore-indexes-part-29-data-loading-for-better-segment-elimination/
      2. Yes, it will help to get better segment elimination and thus will allow reading & processing less data while getting the same results.
      3. MAXDOP = 1 is slow.
      Using MAXDOP = 8 makes your server use 8 cores, so it should be much faster … :)

      Please read this article http://www.nikoport.com/2014/04/16/clustered-columnstore-indexes-part-29-data-loading-for-better-segment-elimination/
      and let me know if something is still not clear.

      Best regards,
      Niko Neugebauer

  3. Mauricio Orellana

    Hello Niko
    a question: the SSRS Report Generator 1.8 task apparently does not support exporting to the Excel .xlsx format.

    Any advice to achieve this goal?

    Stay tuned.
    Thanks.

  4. Ramya

    Hi Niko,

    I am facing a problem with a simple select query which involves three tables with a clustered columnstore index on them. Every 15 minutes an ETL load runs, and one of the three tables gets truncated each time before new rows are inserted. This leaves row groups open and the query becomes too slow.

    It runs fine if I rebuild the index and update statistics on the tables.
    The problem is that we are unable to run an index rebuild and update statistics every 15 minutes. I wanted to know your thoughts on this, and whether there is any way this problem can be avoided.
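
    A lighter-weight alternative to a full rebuild that may be worth testing is closing the open row groups explicitly; a sketch only, assuming SQL Server 2016 or later and hypothetical object names:

    ALTER INDEX cci_StagingTable ON dbo.StagingTable
    REORGANIZE WITH (COMPRESS_ALL_ROW_GROUPS = ON); -- compresses open delta-store row groups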

    Thanks!

  5. Dale Wilson

    Hi Niko,

    I attended your “Columnstore Indexes – from basics to optimised analytics” training day at SQLBits.

    Please could you send me your slides from the training day?

    Many thanks,

    Dale

  6. Nadir

    Hi Niko,
    I also attended your precon session at SQLBits – could you share the slides and the scripts, please?
    Your help is much appreciated!
    Thanks

  7. Nadir

    Hi Niko,
    I also attended your precon session at SQLBits 2016 – could you share the slides and the scripts, please?
    Your help is much appreciated!
    Thanks

  8. anil kumar

    Hello Niko,

    Hope you are doing great!

    I couldn’t submit my comment on that blog post because of some issues. Can you please help me with the questions below?

    As you mentioned in part 38: “I understand that at the moment when we are reading Columnstore data from the disk, it is being read directly into Columnstore Object Pool without decompression, and then all the respective pages from the Segments are decompressed into Buffer Pool for returning the result.” – 1) Does this mean the Buffer Pool Extension can store decompressed Columnstore data in SQL Server 2016 Standard Edition?

    2) How can one restrict the memory allocation to columnstore objects (Delta-Stores + Deleted Bitmaps + decompressed columnstore data) in the buffer pool?

    3) The Columnstore Object Pool can be up to 32 GB in SQL Server 2016 Standard Edition; what is the impact of that, given that most of the stuff is stored in the buffer pool?

    4) Are CCI and NCCI rebuild and reorganize operations performed in the buffer pool rather than in the Columnstore Object Pool?

    Looking forward to your response. Thank you.

    1. Niko Neugebauer Post author

      Hi Anil,

      answering your questions:
      1) Yes. But avoid the Buffer Pool Extension for Columnstore Indexes right now, since it is focused on OLTP scenarios. All operations are done on the page (8KB) level.
      2) There is no way to control that.
      3) It will depend on the actual data. The cap is the maximum value; it does not mean that your workload will strive to achieve it.
      4) Yes, to my understanding.
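
      For observing (though not capping) that memory, a query along these lines against the memory clerks DMV should help; the clerk type below is the one used for the Columnstore Object Pool:

      SELECT [type], SUM(pages_kb) AS pool_kb
      FROM sys.dm_os_memory_clerks
      WHERE [type] = 'CACHESTORE_COLUMNSTOREOBJECTPOOL'
      GROUP BY [type];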

      Best regards,
      Niko

  9. Joao Caxias

    Hey Niko,
    First, congratulations on all the work and knowledge sharing. Truly appreciated.
    In a scenario like this:
    SQL Server 2016 on-prem, Git (Visual Studio) with the whole SQL structure (… create tables …) – how would I change a normal rowstore CREATE TABLE to a columnstore one and have it correctly deployed every time?
    Thank you

    1. Niko Neugebauer Post author

      Hi João,

      Sorry for the delay – you caught me on vacation.
      Using a Test or DEV environment, create a Clustered Columnstore Index on the table (you might consider using CISL for that) and start experimenting and observing.
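
      For the source-controlled CREATE TABLE scripts, the columnstore can also be declared inline, so every deployment creates the table as a columnstore from the start. A minimal sketch (hypothetical table and columns):

      CREATE TABLE dbo.SampleFact
      (
          SaleId int NOT NULL,
          Amount money NOT NULL,
          INDEX cci_SampleFact CLUSTERED COLUMNSTORE -- stored as a columnstore from day one
      );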

      Best regards,
      Niko

  10. Marcus Nugroho

    Hi Niko,

    I read your blog posts regarding operational analytics using a memory-optimized table + clustered columnstore (CCI). The workload is as follows:
    – 50 million rows in the table initially (size is about 50 GB)
    Running simultaneously:
    – 300-user insertion process –> 3000 TPS
    – 500-user queries (simple, medium, complex)

    Using an Azure VM (32 cores, 448 GB RAM);
    the resource pool is set to 90% max_memory and 90% max CPU; the memory grant is 25% for each user.

    Issue: we get an error for some queries, while some complete successfully:
    There is insufficient system memory in resource pool ‘default’ to run this query.

    When we run with 10 insert users x 30 query users, there is no error.

    Do you have any advice regarding this error? What settings should I try changing before testing again?

    Thank you

    1. Niko Neugebauer Post author

      Hi Marcus,

      check your memory requirements for the queries – you might need to increase the overall memory, or use Resource Governor to restrict the amount of workspace memory granted to a single query.
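
      A minimal sketch of the Resource Governor route (the 10% cap below is purely illustrative – you are currently granting 25% per user):

      ALTER WORKLOAD GROUP [default]
      WITH (REQUEST_MAX_MEMORY_GRANT_PERCENT = 10); -- cap the memory grant of a single query
      ALTER RESOURCE GOVERNOR RECONFIGURE;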

      Best regards,
      Niko

  11. Ramiro

    Hi Niko,

    I was able to watch your presentation today, “Columnstore killed the OLAP star”. You were talking about the need to shape your data for reporting and analysis as a star schema; anyhow,

    What is your opinion on that: with Columnstore now, do you still see the need to store data as a star schema, or more as a natural relational database?

    Thanks

  12. Adam Wenger

    Good afternoon Niko,

    Great information here! I was wondering if you know whether future versions of SQL Server will support Clustered Columnstore Index MIN/MAX segment elimination or predicate pushdown for the datetimeoffset data type?

    We’re currently using datetimeoffset(4), but it looks like we’d need to go to datetime2(4) to get the segment elimination that we want for our date queries.
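
    A sketch of the migration that implies, with hypothetical table and column names (note that a plain CAST keeps the local time and drops the offset):

    ALTER TABLE dbo.FactEvents ADD EventTime2 datetime2(4) NULL;
    UPDATE dbo.FactEvents SET EventTime2 = CAST(EventTimeOffset AS datetime2(4));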

    Thanks again!

    1. Niko Neugebauer Post author

      Hi Adam,

      I have no such information …
      I see it as a bigger decision, which is probably not going to be taken any time soon.
      Datetimeoffset(4) occupies 9 bytes, and this would imply feature support for > 8 bytes, which I believe is not on the highest-priority list right now …

      Best regards,
      Niko

  13. Gonzalo Bissio

    Hi Niko!
    I’m testing your maintenance solution, but I ran into one issue. In our environment we have both partitioned and non-partitioned clustered columnstores. The thing is, if I use @usePartitionLevel = 1, the script shows an error saying that I cannot use the partition clause on a non-partitioned index. I’m executing this in a UAT environment (because we have real data on that env). The problem is that if I use @usePartitionLevel = 0, I’ll be running the maintenance on the whole partitioned table. So, is this expected? What would be your recommendation here? By the way, excellent script!!
    Regards!

  14. Doug Valdez

    Performance benefits of a Clustered Columnstore Index vs. an In-Memory Columnstore Index?

    If the server has enough memory for the whole DB to reside in memory, will an In-Memory CCI offer many benefits over a normal CCI? (The server has 2 TB of memory and the DB size is 1 TB.)

    Thanks

    1. Niko Neugebauer Post author

      Hi Doug,

      Your DB size probably already includes compression, so I would be careful thinking that In-Memory will take care of it all. In-Memory is not a solution for pure memory access; it was designed to tackle locking and latching problems for concurrent access.
      Take a very detailed look at the In-Memory limitations before going that way.

      Regarding the CCI location: no, I do not think that there are any compelling arguments for choosing an In-Memory CCI over a CCI just on the basis of memory access. Limitations such as no inter-db access for In-Memory are a major pain, an In-Memory CCI requires you to have a nonclustered PK anyway, and you won’t be able to use the Archival compression, etc …
      Nope.
      Choose an In-Memory CCI when you are already working with In-Memory and need to run aggregations on it in near real-time.

      Best regards,
      Niko

  15. René Schell

    Hi Niko,
    at yesterday’s PASS meeting in Coblenz you mentioned that you could provide us with a list regarding “in which cases which index should be used”.
    We would appreciate it if you’d send us this “checklist”.

    Kind regards and have fun in the USA
    René

  16. Karim T

    Hi Niko,

    We don’t have a DW, but an OLTP database with some tables over 300M records. In order to use the benefits of the SQL 2016 batch mode Window Aggregate operator, I tried to add a fake columnstore index to the bigger tables,
    e.g. CREATE NONCLUSTERED COLUMNSTORE INDEX idx_cs_dummy ON dbo.TRADE_BATCH(iBatchNo) WHERE iBatchNo = -1 AND iBatchNo = -2
    Suddenly the application became extremely slow, and we had issues with tempdb. We noticed that any user/process attempting to add records to a temp table would block other processes/users.
    Once I dropped all the columnstore indexes, everything started running fast again, with no tempdb blocking.

    I read also that you use another trick: a left join on a dummy table that has a columnstore index, as in
    LEFT JOIN dbo.dummyTable ON 1 = 2 -- but this requires code changes to every query
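
    For completeness, a minimal sketch of that dummy-table setup (names are illustrative):

    CREATE TABLE dbo.dummyTable (dummy int NULL);
    CREATE NONCLUSTERED COLUMNSTORE INDEX ncci_dummyTable ON dbo.dummyTable (dummy);
    -- the ON 1 = 2 join never returns any rows, but the presence of the
    -- columnstore index lets the optimizer consider batch mode for the query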

    Not sure where the problem lies. Maybe you ran into similar situation.

    Any help will be greatly appreciated.

    Thanks,
    Karim

    1. Niko Neugebauer Post author

      Hi Karim,

      this would require looking at the execution plans.
      There are many elements that can potentially go wrong – skewing of the parallelism (often), sorting (because of the memory grants – frequently on Hybrid Systems), etc.

      Best regards,
      Niko

  17. Carina

    Hi Niko,
    I attended your session on Worst Practices & Less Known Limitations for Columnstore Indexes at SQLSaturday Vienna in January 2018.
    There you mentioned that in order to get rid of the deleted_rows in the segments it is necessary to rebuild the index, and that a reorganize is not enough.
    Now I found some Microsoft statements (https://docs.microsoft.com/en-us/sql/relational-databases/indexes/columnstore-indexes-defragmentation?view=sql-server-2017) that starting from SQL Server 2016 also reorganize removes deleted rows.
    Am I missing something, or could I now also reorganize my indexes on a daily basis in order to lower fragmentation?

    Thanks,
    Carina

    1. Niko Neugebauer Post author

      Hi Carina,

      there are quite a number of things that won’t allow the total removal of the deleted rows from the Deleted Bitmap:
      – imagine that every Row Group has around 550.000 rows – you won’t be able to merge them
      – imagine that you have memory pressure or dictionary pressure – read here for more: Columnstore Indexes – part 113 (“Row Groups Merging Limitations”) – http://www.nikoport.com/2017/09/24/columnstore-indexes-part-113-row-groups-merging-limitations/
      etc.

      Bear in mind that reorganisation of the columnstore without TF 6404 can lead to disastrous memory grants on bigger machines (http://www.nikoport.com/2017/05/30/columnstore-indexes-part-106-memory-requirements-for-rebuild-reorganize/).

      The only guarantee of the total cleanup of the deleted structures (bitmaps, buffers, row tables) is the rebuild.
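
      In code terms the contrast looks like this (hypothetical index and table names):

      ALTER INDEX cci_FactSales ON dbo.FactSales REORGANIZE; -- may leave deleted rows behind
      ALTER INDEX cci_FactSales ON dbo.FactSales REBUILD;    -- guarantees the full cleanup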

      Best regards,
      Niko

  18. Henning Prigge

    Hello Mr. Neugebauer,

    I attended the SQL PASS conference in February, where I visited the Columnstore PreCon Day.
    Would it be possible to get the slides? I could not find them on the PASS site.

    Best regards,
    H. Prigge

  19. Beth

    Niko, whenever I try to open post Columnstore #87, I am redirected to a scam web site. One of us has been hacked – is it you or is it me?

  20. Vernon

    Hi Niko

    Do you know where I can get the files or the installer for the SSIS Analytics Source for SQL 2012?
    I’ve searched everywhere for it; I don’t see it on https://archive.codeplex.com/?p=analyticssource nor on http://ssis-components.net/

    We have some legacy SSIS packages that use this component to analyze Google Analytics data and integrate it with other data, but we have moved them to a new computer and don’t have the files to set everything up again.

    Your help would be very much appreciated!

    Regards
    Vernon
