Contact me

If you wish to contact me, the best way is to write to me at this address: niko [at] nikoport

23 thoughts on “Contact me”

  1. Priyadarshi Alok

    I have Visual Studio 2008 (version 9) and .NET Framework 4 on a 64-bit Windows 7 operating system.

    Following the installation steps for the Google Analytics component,
    I installed the DLLs using Gacutil.exe, and all of them are registered
    correctly in the following location:

    C:\Windows\Microsoft.NET\assembly\GAC_MSIL

    DotNetOpenAuth.dll
    Google.Apis.Analytics.v3.dll
    Google.Apis.Authentication.OAuth2.dll
    Google.Apis.dll
    Newtonsoft.Json.dll
    SSISComponents.Dts.Pipeline.GoogleAnalyticsSource.dll
    Zlib.Portable.dll

    Then I copied all of these DLLs into the following folder:

    C:\Program Files (x86)\Microsoft SQL Server\100\DTS\PipelineComponents

    and refreshed the SSIS Toolbox using the command:

    C:\Program Files (x86)\Microsoft Visual Studio 9.0\Common7\IDE>devenv /ResetSettings

    But the GoogleAnalytics Source is still not visible in my SSIS Toolbox pane.

    Please help me figure out which installation step is missing.

  2. Tom

    Hi Niko,
    I’m confused about when to use the MAXDOP option when creating or dropping a CCI.

    CREATE CLUSTERED COLUMNSTORE INDEX [IndexName] ON [TableName]
    WITH (DROP_EXISTING = OFF, MAXDOP = 1);

    DROP INDEX [IndexName] ON [TableName] WITH (ONLINE = OFF, MAXDOP = 1);

    Question 1) Do I get the same results in the end if I use MAXDOP = 1, MAXDOP = 8, or no MAXDOP hint at all?
    I heard somewhere that using MAXDOP = 1 when creating a CCI gives better clustering.

    Question 2) Do you want better clustering in order to get better query performance?
    Question 3) Are there performance issues with MAXDOP = 1?

    If I use the MAXDOP = 1 option when creating or dropping a CCI, it takes a long time to process:
    6 hours to create the CCI with MAXDOP = 1, and 2 hours to drop it with MAXDOP = 1.

    If I use the MAXDOP = 8 option when creating or dropping a CCI, the process is much faster than with MAXDOP = 1:
    1 hour to create the CCI with MAXDOP = 8, and 25 minutes to drop it with MAXDOP = 8.

    If my SQL Server can handle the CCI create and drop with MAXDOP = 8 or more, should I be using a higher MAXDOP?

    Thanks for your help

    1. Niko Neugebauer (Post author)

      Hi Tom,

      First of all, you will need to use DROP_EXISTING = ON while re-creating a Columnstore Index over the Clustered rowstore one.
      Do not drop the Columnstore Index; it makes no sense. Create a Rowstore index with DROP_EXISTING = ON, ordering the data on the most frequently used column, and then re-create the Columnstore Index with DROP_EXISTING = ON.
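
      If it helps, here is a minimal sketch of that two-step re-creation (the table, index, and column names are hypothetical):

      -- Step 1: rebuild the existing clustered index as rowstore,
      -- physically sorting the data on the most frequently filtered column:
      CREATE CLUSTERED INDEX [CI_FactSales] ON [dbo].[FactSales] ([SaleDate])
      WITH (DROP_EXISTING = ON);

      -- Step 2: re-create the clustered columnstore index on top of it;
      -- MAXDOP = 1 keeps the sort order intact while the segments are built:
      CREATE CLUSTERED COLUMNSTORE INDEX [CI_FactSales] ON [dbo].[FactSales]
      WITH (DROP_EXISTING = ON, MAXDOP = 1);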

      1. You will find the detailed explanations on Segment Clustering here: http://www.nikoport.com/2014/04/16/clustered-columnstore-indexes-part-29-data-loading-for-better-segment-elimination/
      2. Yes, it will help you get better segment elimination and thus allow reading & processing less data while getting the same results (see the metadata sketch below).
      3. MAXDOP = 1 is slow.
      Using MAXDOP = 8 makes your server use 8 cores, so it should be much faster … 🙂
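
      To verify how well the segments are aligned after such a rebuild, you can look at the segment metadata; a rough sketch (the index name and column_id are hypothetical):

      -- Per-segment metadata for one column of a columnstore index
      -- (note: min/max are stored as encoded value ids, not raw values,
      -- for dictionary-encoded segments):
      SELECT s.segment_id, s.row_count, s.min_data_id, s.max_data_id
      FROM sys.column_store_segments AS s
      JOIN sys.partitions AS p ON p.hobt_id = s.hobt_id
      JOIN sys.indexes AS i ON i.object_id = p.object_id AND i.index_id = p.index_id
      WHERE i.name = 'CI_FactSales'
        AND s.column_id = 1
      ORDER BY s.segment_id;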

      Please read this article: http://www.nikoport.com/2014/04/16/clustered-columnstore-indexes-part-29-data-loading-for-better-segment-elimination/
      and let me know if anything is still unclear.

      Best regards,
      Niko Neugebauer

  3. Mauricio Orellana

    Hello Niko,
    A question: the SSRS Report Generator 1.8 task apparently does not support exporting to the Excel .xlsx format.

    Any advice on how to achieve this goal?

    I’ll stay tuned.
    Thanks.

  4. Ramya

    Hi Niko,

    I am facing a problem with a simple SELECT query that involves three tables with clustered columnstore indexes on them. Every 15 minutes an ETL load runs, and one of the three tables gets truncated before the new rows are inserted. This leaves rowgroups open, and the query becomes too slow.

    The query runs fine if I rebuild the index and update the statistics on the tables.
    The problem is that we are unable to rebuild the index and update the statistics every 15 minutes. I wanted to know your thoughts on this, and whether there is any way this problem can be avoided.
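
    For reference, the maintenance that currently fixes the problem looks roughly like this (the table name is made up):

    -- Rebuild the clustered columnstore index, compressing all rowgroups:
    ALTER INDEX ALL ON [dbo].[FactStaging] REBUILD;

    -- Refresh the statistics on the table afterwards:
    UPDATE STATISTICS [dbo].[FactStaging];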

    Thanks!

  5. Dale Wilson

    Hi Niko,

    I attended your “Columnstore Indexes – from basics to optimised analytics” training at SQLBits.

    Please could you send me your slides from the training day?

    Many thanks,

    Dale

  6. Nadir

    Hi Niko,
    I also attended your precon session at SQLBits; could you share the slides and the script, please?
    Your help is much appreciated!
    Thanks

  7. Nadir

    Hi Niko,
    I also attended your precon session at SQLBits 2016; could you share the slides and the script, please?
    Your help is much appreciated!
    Thanks

  8. anil kumar

    Hello Niko,

    Hope you are doing great!

    I couldn’t submit my comment on that blog post because of some issues. Could you please help me with the questions below?

    As you mentioned in part 38: “I understand that at the moment when we are reading Columnstore data from the disk, it is being read directly into Columnstore Object Pool without decompression, and then all the respective pages from the Segments are decompressed into Buffer Pool for returning the result.” 1) Does this mean the Buffer Pool Extension can store decompressed Columnstore data in SQL Server 2016 Standard Edition?

    2) How can the memory allocated to columnstore objects (Delta-Stores + Deleted Bitmaps + decompressed columnstore data) in the buffer pool be restricted?

    3) The Columnstore Object Pool can be up to 32GB in SQL Server 2016 Standard Edition; what will the impact of that be, given that most of the data is kept in the buffer pool?

    4) Are CCI and NCCI rebuild and reorganize operations performed in the buffer pool rather than in the Columnstore Object Pool?

    Looking forward to your response. Thank you.

    1. Niko Neugebauer (Post author)

      Hi Anil,

      answering your questions:
      1) Yes. But avoid the Buffer Pool Extension for Columnstore Indexes right now, since it is focused on OLTP scenarios. All of its operations are done at the page (8KB) level.
      2) There is no way to control that, though you can observe it (see the sketch below).
      3) It will depend on the actual data. The cap is the maximum value; it does not mean that your workload will strive to reach it.
      4) Yes, to my understanding.
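
      If you want to see how much memory the Columnstore Object Pool is actually holding, here is a rough sketch against the memory-clerk DMV (the clerk type name is as I remember it in SQL Server 2016):

      -- Memory currently held by the columnstore object pool cache store:
      SELECT [type], SUM(pages_kb) / 1024 AS size_mb
      FROM sys.dm_os_memory_clerks
      WHERE [type] = 'CACHESTORE_COLUMNSTOREOBJECTPOOL'
      GROUP BY [type];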

      Best regards,
      Niko
