Contact me

If you wish to contact me, the best way to do it is to write here. :)

48 thoughts on "Contact me"

Priyadarshi Alok, July 15, 2014 at 9:59 am
I have Visual Studio 2008 (version 9) and .NET Framework 4 on a 64-bit Windows 7 operating system. Following the Google Analytics installation steps, I installed the DLLs using Gacutil.exe, and all of them are correctly registered in the following location: C:\Windows\Microsoft.NET\assembly\GAC_MSIL
DotNetOpenAuth.dll
Google.Apis.Analytics.v3.dll
Google.Apis.Authentication.OAuth2.dll
Google.Apis.dll
Newtonsoft.Json.dll
SSISComponents.Dts.Pipeline.GoogleAnalyticsSource.dll
Zlib.Portable.dll
I then copied all of these DLLs into the following folder: C:\Program Files (x86)\Microsoft SQL Server\100\DTS\PipelineComponents and refreshed the SSIS toolbox using the command:
C:\Program Files (x86)\Microsoft Visual Studio 9.0\Common7\IDE>devenv /ResetSettings
But the Google Analytics Source is still not visible in my SSIS Toolbox pane. Please help me find which installation step is missing.

Niko Neugebauer (post author), July 21, 2014 at 10:14 pm
Hi Priyadarshi Alok, you are trying to install the component on an unsupported version – you should be using Visual Studio 2010 and SQL Server 2012. Best regards, Niko

Andrew Fenna, July 17, 2014 at 11:20 am
I attended your SQLBits training day on columnstore indexes. Please can you share the slides and T-SQL scripts with me? Thanks, Andrew

Niko Neugebauer (post author), July 21, 2014 at 10:06 pm
Andrew, I have sent you the materials. Thank you for attending the SQLBits training day.

Benjamin, October 13, 2015 at 10:07 am
Hi Niko, we at PartitionDB have developed a product that we would be humbled if you would take a look at. It is a community product made to assist SaaS systems in maintaining a distributed database architecture.
Please, by all means, feel free to contact us at email@example.com. Best regards

Tom, December 12, 2015 at 11:19 pm
Hi Niko, I am confused about when to use the MAXDOP option when creating or dropping a CCI:
CREATE CLUSTERED COLUMNSTORE INDEX [IndexName] ON [TableName] WITH (DROP_EXISTING = OFF, MAXDOP = 1)
DROP INDEX [IndexName] ON [TableName] WITH (ONLINE = OFF, MAXDOP = 1)
Question 1) Do I get the same result in the end whether I use MAXDOP = 1, MAXDOP = 8, or no MAXDOP at all? I heard somewhere that using MAXDOP = 1 when creating a CCI gives better clustering.
Question 2) Do you want better clustering in order to get better query performance?
Question 3) Performance issues with MAXDOP = 1: if I use MAXDOP = 1 when creating or dropping a CCI, it takes a long time to process – 6 hours to create the CCI and 2 hours to drop it. If I use MAXDOP = 8, the process is much faster than with MAXDOP = 1 – 1 hour to create the CCI and 25 minutes to drop it. If my SQL Server can handle the CCI create and drop with MAXDOP = 8 or more, should I be using a higher MAXDOP? Thanks for your help

Niko Neugebauer (post author), December 13, 2015 at 4:17 pm
Hi Tom, first of all you will need to use DROP_EXISTING = ON while re-creating a Columnstore Index over the clustered rowstore one. Do not drop the Columnstore Index – it makes no sense. Create a rowstore index with DROP_EXISTING = ON, ordering the data on the most frequently used column, and then re-create the Columnstore Index with DROP_EXISTING = ON.
1. You will find the detailed explanations of Segment Clustering here: http://www.nikoport.com/2014/04/16/clustered-columnstore-indexes-part-29-data-loading-for-better-segment-elimination/
2. Yes, it will help to get better segment elimination and thus will allow reading and processing less data while getting the same results.
3. MAXDOP = 1 is slow.
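The re-creation sequence described above can be sketched roughly as follows ([TableName], [IndexName] and [SortColumn] are placeholders, not names from this thread):

```sql
-- Step 1: replace the existing CCI with a clustered rowstore index,
-- ordering the data on the most frequently filtered column
CREATE CLUSTERED INDEX [IndexName] ON [TableName] ([SortColumn])
    WITH (DROP_EXISTING = ON);

-- Step 2: re-create the Clustered Columnstore Index on top of it;
-- MAXDOP = 1 preserves the sort order (Segment Clustering),
-- at the cost of a single-threaded, much slower build
CREATE CLUSTERED COLUMNSTORE INDEX [IndexName] ON [TableName]
    WITH (DROP_EXISTING = ON, MAXDOP = 1);
```

This is only a sketch of the trade-off: a parallel build (higher MAXDOP) finishes faster but interleaves the sorted input across Row Groups, while MAXDOP = 1 keeps the ordering that segment elimination benefits from.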
Using MAXDOP = 8 makes your server use 8 cores, so it should be much faster … :) Please read this article http://www.nikoport.com/2014/04/16/clustered-columnstore-indexes-part-29-data-loading-for-better-segment-elimination/ and let me know if something is still not clear. Best regards, Niko Neugebauer

Mauricio Orellana, January 7, 2016 at 8:13 pm
Hello Niko, a question: the SSRS Report Generator 1.8 task apparently does not support exporting to the Excel .xlsx format. Any advice on how to achieve this? Stay tuned. Thanks.

Niko Neugebauer (post author), January 10, 2016 at 10:53 pm
Hi Mauricio, since I have no idea, I suggest filing this request on the project page. Best regards, Niko Neugebauer

James Anderson, February 16, 2016 at 3:14 pm
Hi Niko, how do I report potential bugs in the CISL?

Ramya, March 21, 2016 at 8:33 pm
Hi Niko, I am facing a problem with a simple SELECT query involving three tables with clustered columnstore indexes on them. Every 15 minutes an ETL load runs, and one of the three tables gets truncated each time before new rows are inserted. This leaves the Row Group open and the query becomes too slow. It runs fine if I rebuild the index and update statistics on the tables, but we are unable to rebuild the index and update statistics every 15 minutes. I wanted to know your thoughts on this and whether there is any way this problem can be avoided. Thanks!

Niko Neugebauer (post author), March 22, 2016 at 5:24 pm
Hi Ramya, try manually closing the Delta-Stores with ALTER INDEX … REORGANIZE WITH (COMPRESS_ALL_ROW_GROUPS = ON); You will find the details here: http://www.nikoport.com/2014/07/04/clustered-columnstore-indexes-part-33-tuple-mover-that-closes-open-delta-stores/ Best regards, Niko Neugebauer

Dale Wilson, May 19, 2016 at 7:55 am
Hi Niko, I attended your "Columnstore Indexes – from basics to optimised analytics" training day at SQLBits. Please could you send me your slides from the training day?
Many thanks, Dale

Niko Neugebauer (post author), May 25, 2016 at 4:14 pm
Hi Dale, you will receive them from the SQLBits organisers – I have sent them my materials. Please let me know if there is a problem. Best regards, Niko

Nadir, May 26, 2016 at 7:11 pm
Hi Niko, I also attended your precon session at SQLBits 2016. Could you share the slides and the scripts please? Your help is much appreciated! Thanks

Niko Neugebauer (post author), May 29, 2016 at 12:01 pm
Hi Nadir, you should have received the materials from the SQLBits organisers. If there is a problem, please let me know. Best regards, Niko Neugebauer

Geoff, February 2, 2017 at 9:17 pm
Do you have a version that supports SQL Server 2016? Thanks!

Niko Neugebauer (post author), February 3, 2017 at 1:51 pm
Hi Geoff, nope – I am not the author of that plugin.

Charlie, February 8, 2017 at 12:53 am
Can this run with SQL Server 2016? Thanks.

Niko Neugebauer (post author), February 8, 2017 at 8:00 am
I do not believe so.

anil kumar, May 3, 2017 at 5:47 am
Hello Niko, hope you are doing great! I couldn't submit my comment on that blog post because of some issues – can you please help me with the questions below? As you mentioned in part 38: "I understand that at the moment when we are reading Columnstore data from the disk, it is being read directly into Columnstore Object Pool without decompression, and then all the respective pages from the Segments are decompressed into Buffer Pool for returning the result."
1) Does this mean the Buffer Pool Extension can store decompressed Columnstore data in SQL Server 2016 Standard Edition?
2) How can I restrict the memory allocation to columnstore objects (Delta-Stores + Deleted Bitmaps + decompressed columnstore data) in the Buffer Pool?
3) The Columnstore Object Pool can be up to 32 GB in SQL Server 2016 Standard Edition – what is the impact of that, given that most of the data is stored in the Buffer Pool?
4) Are CCI and NCCI rebuild and reorganize operations performed in the Buffer Pool rather than in the Columnstore Object Pool?
Looking forward to your response. Thank you.

Niko Neugebauer (post author), May 25, 2017 at 9:30 am
Hi Anil, answering your questions:
1) Yes. But avoid the Buffer Pool Extension for Columnstore Indexes right now, since it is focused on OLTP scenarios. All operations are done on the page (8KB) level.
2) There is no way to control that.
3) It will depend on the actual data. The cap is the maximum value; it does not mean that your workload will strive to achieve it.
4) Yes, to my understanding.
Best regards, Niko

Joao Caxias, June 28, 2017 at 7:51 am
Hey Niko, first of all, congratulations on all the work and knowledge sharing – truly appreciated. In a scenario like this – SQL Server 2016 on premises, Git (Visual Studio) with the whole SQL structure (… CREATE TABLE statements …) – how would I change a normal rowstore CREATE TABLE to a columnstore one and have it correctly deployed every time? Thank you

Niko Neugebauer (post author), July 13, 2017 at 6:25 pm
Hi João, sorry for the delay – you caught me on vacation. Using a Test or DEV environment, create a Clustered Columnstore Index on the table (you might consider using CISL for that) and start experimenting and observing. Best regards, Niko

Marcus Nugroho, August 5, 2017 at 7:06 am
Hi Niko, I read your blog posts about operational analytics using memory-optimized tables + clustered columnstore (CCI). The workload is as follows:
– 50 million rows in the table initially (about 50 GB in size)
Running simultaneously:
– 300 insertion users –> 3000 TPS
– 500 query users (simple, medium, complex)
We use an Azure VM (32 cores, 448 GB RAM); the resource pool is set to 90% max_memory and 90% max CPU, and the memory grant is 25% for each user.
Issue: we got errors for some queries.
Some complete successfully; the failing ones report: "There is insufficient system memory in resource pool 'default' to run this query." When we run with 10 insert users x 30 query users, there is no error. Do you have any advice regarding this error? What settings should I try before testing again? Thank you

Niko Neugebauer (post author), November 14, 2017 at 12:01 am
Hi Marcus, check the memory requirements of your queries – you might need to increase the overall memory, or use Resource Governor to restrict the amount of workspace memory given to a single query. Best regards, Niko

Ramiro, November 2, 2017 at 1:36 am
Hi Niko, I was able to see your presentation "Columnstore killed the OLAP star" today. You were talking about the need to structure your data for reporting and analysis as a star schema. What is your opinion on that – with Columnstore now, do you still see the need to store data as a star schema, or more as a natural relational database? Thanks

Niko Neugebauer (post author), November 13, 2017 at 11:56 pm
Hi Ramiro, a star schema is what Columnstore Indexes & Batch Mode queries are optimised for … Best regards, Niko Neugebauer

Adam Wenger, December 13, 2017 at 6:16 pm
Good afternoon Niko, great information here! I was wondering whether you know if future versions of SQL Server will support Clustered Columnstore Index MIN/MAX or predicate pushdown for the DatetimeOffset data type? We are currently using DatetimeOffset(4), but it looks like we would need to move to Datetime2(4) to get the segment elimination we want for our date queries. Thanks again!

Niko Neugebauer (post author), December 27, 2017 at 12:34 pm
Hi Adam, I have no such information … I see it as a bigger decision, which is probably not going to be taken any time soon.
DatetimeOffset(4) occupies 9 bytes, and this would imply feature support for data types wider than 8 bytes, which I believe is not on the highest priority list right now … Best regards, Niko

Gonzalo Bissio, December 18, 2017 at 1:23 pm
Hi Niko! I am testing your maintenance solution, but I ran into one issue. In our environment we have both partitioned and non-partitioned clustered columnstore indexes. The thing is, if I use @usePartitionLevel = 1, the script shows an error saying that I cannot use the partition clause on a non-partitioned index. I am executing this in a UAT environment (because we have real data in that environment). The problem is that if I use @usePartitionLevel = 0, I will be running the maintenance on the whole partitioned table. So, is this expected? What would be your recommendation here? By the way, excellent script!! Regards!

Niko Neugebauer (post author), December 27, 2017 at 12:36 pm
Hi Gonzalo, thank you for reporting the issue! I am logging it in the working log and will correct it in the next release. Best regards, Niko

Doug Valdez, February 1, 2018 at 6:11 pm
Performance benefits of a Clustered Columnstore Index vs an In-Memory Columnstore Index? If the server has enough memory for the whole DB to reside in memory, will an In-Memory CCI offer much benefit over a normal CCI? (The server has 2 TB of memory and the DB size is 1 TB.) Thanks

Niko Neugebauer (post author), February 1, 2018 at 10:27 pm
Hi Doug, your DB size probably already includes compression, so I would be careful in assuming that In-Memory will take care of it all. In-Memory is not a solution for pure memory access; it was designed to tackle locking and latching problems for concurrent access. Take a very detailed look at the In-Memory limitations before going that way. Regarding the CCI location: no, I do not think there are any compelling arguments for choosing an In-Memory CCI over a CCI purely on the basis of memory access.
Such limitations as the lack of inter-database access for In-Memory are a major pain; an In-Memory CCI requires you to have a nonclustered PK anyway, and there is no compression – plus you won't be able to use Archival compression, etc … Nope. Choose an In-Memory CCI when you are already working with In-Memory and need to run aggregations on it in near real time. Best regards, Niko

René Schell, March 2, 2018 at 10:08 am
Hi Niko, at yesterday's PASS meeting in Coblenz you mentioned that you could provide us with a list of which index should be used in which cases. We would appreciate it if you would send us this checklist. Kind regards, and have fun in the USA, René

Niko Neugebauer (post author), March 14, 2018 at 1:32 am
Hi René, I have sent you the slides and the checklist per email. Best regards, Niko

Karim T, April 9, 2018 at 8:09 pm
Hi Niko, we don't have a DW, but an OLTP database with some tables of over 300M records. In order to get the benefits of the SQL Server 2016 batch mode Window Aggregate operator, I tried to add a fake columnstore index to the bigger tables, e.g.:
CREATE NONCLUSTERED COLUMNSTORE INDEX idx_cs_dummy ON dbo.TRADE_BATCH(iBatchNo) WHERE iBatchNo = -1 AND iBatchNo = -2
Suddenly the application became extremely slow, and we had issues with tempdb. We noticed that whenever a user/process attempted to add records to a temp table, it blocked other processes/users. Once I dropped all the columnstore indexes, everything started running fast again, with no tempdb blocking. I also read that you use another trick – a LEFT JOIN on a dummy table that has a columnstore index (LEFT JOIN dbo.dummyTable ON 1 = 2) – but this requires code changes to every query. I am not sure where the problem lies. Maybe you have run into a similar situation? Any help will be greatly appreciated.
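For context, the dummy-table variant Karim mentions is usually set up along these lines (the table, column, and index names below are illustrative, not from this thread):

```sql
-- An empty table whose only purpose is to carry a columnstore index;
-- referencing it in a query makes the optimizer consider batch mode
CREATE TABLE dbo.dummyTable (dummyCol INT NOT NULL);
CREATE NONCLUSTERED COLUMNSTORE INDEX idx_cs_dummyTable
    ON dbo.dummyTable (dummyCol);

-- The always-false join predicate adds no rows to the result,
-- but exposes the query to batch mode processing
SELECT t.iBatchNo, COUNT(*) AS BatchRows
FROM dbo.TRADE_BATCH AS t
LEFT JOIN dbo.dummyTable AS d ON 1 = 2
GROUP BY t.iBatchNo;
```

The same idea underlies the filtered index in Karim's comment: the contradictory filter (iBatchNo = -1 AND iBatchNo = -2) keeps the columnstore permanently empty, so it costs nothing to maintain while still enabling batch mode on queries against the table.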
Thanks, Karim

Niko Neugebauer (post author), May 1, 2018 at 6:17 pm
Hi Karim, this would require looking at the execution plans. There are many elements that can potentially go wrong: skewing of the parallelism (often), sorting (because of the memory grants – frequently on hybrid systems), etc. Best regards, Niko

Carina, April 27, 2018 at 8:59 am
Hi Niko, I attended your session "Worst Practices & Less Known Limitations for Columnstore Indexes" at SQLSaturday Vienna in January 2018. There you mentioned that in order to get rid of the deleted_rows in the segments it is necessary to rebuild the index, and that a reorganize is not enough. Now I have found some Microsoft statements (https://docs.microsoft.com/en-us/sql/relational-databases/indexes/columnstore-indexes-defragmentation?view=sql-server-2017) saying that starting from SQL Server 2016 reorganize also removes deleted rows. Am I missing something, or could I now also reorganize my indexes on a daily basis in order to lower fragmentation? Thanks, Carina

Niko Neugebauer (post author), May 1, 2018 at 6:28 pm
Hi Carina, there are quite a number of things that won't allow the total removal of the deleted rows from the Deleted Bitmap:
– imagine that every Row Group has around 550,000 rows – you won't be able to merge them;
– imagine that you have memory pressure or dictionary pressure – read here for more: Columnstore Indexes – part 113 ("Row Groups Merging Limitations"), http://www.nikoport.com/2017/09/24/columnstore-indexes-part-113-row-groups-merging-limitations/;
– etc.
Bear in mind that reorganisation of the columnstore without TF 6404 can lead to disastrous memory grants on bigger machines (http://www.nikoport.com/2017/05/30/columnstore-indexes-part-106-memory-requirements-for-rebuild-reorganize/). The only guarantee of the total cleanup of the deleted structures (bitmaps, buffers, row tables) is the rebuild.
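The two maintenance operations being compared above look like this ([IndexName] and [TableName] are placeholders):

```sql
-- REORGANIZE: online; since SQL Server 2016 it can remove deleted rows
-- where Row Groups qualify for merging, and with
-- COMPRESS_ALL_ROW_GROUPS = ON it also compresses open Delta-Stores.
-- Deleted rows may still survive under the limitations listed above.
ALTER INDEX [IndexName] ON [TableName]
    REORGANIZE WITH (COMPRESS_ALL_ROW_GROUPS = ON);

-- REBUILD: the only operation guaranteed to remove all deleted
-- structures; note that a columnstore rebuild is an offline operation
-- in these versions (online rebuild arrived only with SQL Server 2019)
ALTER INDEX [IndexName] ON [TableName]
    REBUILD;
```

A daily REORGANIZE is therefore a reasonable low-impact default, with an occasional REBUILD when the Deleted Bitmap grows beyond what merging can clean up.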
Best regards, Niko

Henning Prigge, April 29, 2018 at 6:27 pm
(Translated from German:) Hello Mr. Neugebauer, I was at the SQL PASS conference in February, where I attended the Columnstore PreCon day. Would it be possible to get the slides? I could not find them on the PASS site. Best regards, H. Prigge

Niko Neugebauer (post author), May 2, 2018 at 12:21 pm
Answered per email. Best regards, Niko

Beth, May 9, 2018 at 9:20 pm
Niko, whenever I try to open post Columnstore #87, I am redirected to a scam website. One of us has been hacked – is it you or is it me?

Niko Neugebauer (post author), May 10, 2018 at 8:10 am
Hi Beth, it works fine for me. Best regards, Niko

Vernon, July 6, 2018 at 12:31 pm
Hi Niko, do you know where I can get the files or the installer for the SSIS Analytics Source for SQL Server 2012? I have searched everywhere for it – I don't see it on https://archive.codeplex.com/?p=analyticssource or on http://ssis-components.net/. We have some legacy SSIS packages that use this component so we can analyze Google Analytics data and integrate it with other data, but we have moved them to a new computer and don't have the files to set it up again. Your help would be very much appreciated! Regards, Vernon

Niko Neugebauer (post author), July 6, 2018 at 1:10 pm
Hi Vernon, just download the archive at https://archive.codeplex.com/?p=analyticssource – to my understanding it should be in there, somewhere in the sources. Otherwise, try reaching out to Tillmann – he should definitely have the executable installer. Best regards, Niko