Pure Storage FlashArray SQL Server Snapshot Torture. You Kinda Asked For This.

I gotta admit, some of you are really hard to convince. I’ve been saying for years that, given a large enough database size (or a really small RTO), storage-based snapshots should be Plan A for recovering the database in the event of a disaster. Yes, you will have a Plan B, likely native backups. And a Plan C. Maybe you’ll run out of letters because you’re so paranoid.

Here’s the thing though. You want to avoid the size-of-data operation that you incur by performing a byte-by-agonizing-byte restore. Snapshots will potentially save you hours and hours of twiddling your thumbs. Or biting your nails. They certainly help minimize the yelling from the bosses.

Fine. Let’s get this out of the way. Snapshots are not backups. They’re just amazing, amazingly fast restores.

Look, I’m not here to fight your religious war about how snapshots should not be called backups. I’m just gonna call them fast-as-fast restores(*) and be done with it. Because let’s be honest, with Pure Storage there’s absolutely nothing faster than a storage snapshot to recover a volume. Or volume(s). You get the idea. It’s about how fast you recover, every time.

Yes, I do understand that there are a million considerations for something to be called a “backup”. We’ll get to those little by little – don’t expect a thorough post on that debate right now. Today I want to focus on one question: Are Pure Storage FlashArray snapshots stable and trustworthy enough that I can take them without pausing I/O against my database? Can I trust that the database will come online every time from a snapshot?

From my point of view the answer to that question is: If we’re talking about SQL Server, I’m confident that it will. If we’re talking about MongoDB, may the odds be ever in your favor. So really, it depends on the database engine and how it handles I/O. We’ll talk about MongoDB more in depth some other day, though. It’s not that ugly of a story, it’s just not as straightforward as it is with SQL Server or, for that matter, every other RDBMS including Oracle, Db2, MySQL, MariaDB, PostgreSQL, etc. It’s just that NoSQL engines are…special(**).

Let’s start with one fundamental fact: SQL Server is a relational database management system, and it relies on the Write-Ahead Logging (WAL) protocol to provide ACID properties. Everything SQL Server writes that’s expected to still be there upon a system crash gets written to a log before anything else can happen. This is where Pure’s FlashArray excels. When we take a snapshot or clone a volume, we include in that snapshot or clone every single I/O that’s been acknowledged back to Windows. By definition, completed writes against the transaction log cannot be cached by any volatile area of storage — for example, a DRAM cache on a SAN controller — otherwise they could be lost upon a system crash. That simply doesn’t happen with FlashArray. All writes are acknowledged once they hit one of our NVRAM modules and have been replicated to another NVRAM module attached to a different controller – and guess what NV stands for? Non-Volatile. Meaning, whatever we write to them is considered a persistent write.

So we have in FlashArray every I/O that has been acknowledged as completed. You tell FlashArray to take a snapshot or clone a volume, and Purity//FA takes care of the rest. We won’t go into the details of how Purity//FA handles snaps internally, but I’ve added that to the list of things to blog about. I think I still owe people blog posts from 2016, though.

Now let’s go torture a FlashArray and see if its snapshots are as good as I claim they are.


The Setup

The setup is actually kind of simple: A single Windows Server 2019 VM running SQL Server 2019. A database that we’ll use as the “source” called FT_Demo. Same one you guys have seen @MarshaPierceDBA and I use in our demos for a long time now. It’s a bit under 2TB of actual allocated data. This time it resides on a 20TB volume as seen by Windows. The VM is running on top of vSphere 7.0 and the virtual disks are VVols.

We’re going to go ahead and create a workload against the source database, and while that’s happening we will clone the volume on which the database resides 100 times in a loop, without a sleep or pause instruction. We’ll then mount the 100 volumes to 100 different mount points in Windows, and attach the databases within those volumes to SQL Server. We will then not only check that the databases come online, but also run DBCC CHECKDB against them.

It is nuts.
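To give you an idea of what the attach step looks like, here’s a minimal sketch for a single copy. The database name and file paths are hypothetical; the real script loops over all 100 mount points:

[sql]
-- Hypothetical names and paths; repeat (or loop) for Copy2 through Copy100
CREATE DATABASE FT_Demo_Copy1 ON
    (FILENAME = N'C:\MountPoints\Copy1\FT_Demo.mdf'),
    (FILENAME = N'C:\MountPoints\Copy1\FT_Demo_log.ldf')
FOR ATTACH;
GO
[/sql]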

Let’s pause for a second and really think this through, though. I’m going to clone 2TB x 100 = 200TB worth of allocated data, and will effectively have added 20TB x 100 = 2 PETABYTES worth of virtual disks to the system. The power of data-reducing storage arrays is amazing. I only added a couple of MB worth of metadata to the array once everything was said and done – that’s how much storage capacity I effectively consumed. Go ahead and try this in the cloud using EBS volumes or Azure Premium Storage, and then send me a copy of your bill (I won’t be covering it, no!). I promise you it will be a lot of fun to go over it.

Bring on the Pain!

On the VM I created 100 20TB volumes, 100 folders under an anchor drive for mount points, and then the real fun began.

This is how things looked when I provisioned 100x 20TB volumes for the VM.

Once the volumes were there, I went ahead and kicked off some workload against the source database. Ready for crazy stuff? Let’s go.

This is what it looked like when I was cloning the source database 100 times. So gratifying.

Here’s the script I used. It’s horribly inefficient, I know. I never really spent time optimizing it. Just wanted to get something that worked.

Here’s the looooong video of me running the torture test. Warning: there are long periods of silence while stuff runs.

After all was done, I ran a query to gather row counts on the GuineaPig table across the 100 copy databases. You can see how the number of records increases with every copy, as expected.
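In case you want to do something similar, here’s a sketch of how those counts can be gathered. It assumes (hypothetically) that the copies are named FT_Demo_Copy1 through FT_Demo_Copy100 and that GuineaPig lives in the dbo schema:

[sql]
-- Build one COUNT per copy database; the naming pattern and schema are assumptions
DECLARE @sql nvarchar(max) = N'';

SELECT @sql += N'SELECT ''' + name + N''' AS database_name, COUNT_BIG(*) AS row_count '
             + N'FROM ' + QUOTENAME(name) + N'.dbo.GuineaPig;'
FROM sys.databases
WHERE name LIKE N'FT_Demo_Copy%';

EXEC sys.sp_executesql @sql;
[/sql]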

I started running CHECKDB on all the copy DBs. It’s gonna take a while…I’ll update the post once it finishes. I have 100 databases that came online just fine, though. And zero horror stories on my ERRORLOG. Happy to share it, just ask.
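For the curious, here’s a sketch of the kind of loop that kicks those checks off, under the same hypothetical naming:

[sql]
-- Run DBCC CHECKDB against every copy database, one at a time
DECLARE @db sysname, @sql nvarchar(max);

DECLARE dbs CURSOR FAST_FORWARD FOR
    SELECT name FROM sys.databases WHERE name LIKE N'FT_Demo_Copy%';

OPEN dbs;
FETCH NEXT FROM dbs INTO @db;

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'DBCC CHECKDB(' + QUOTENAME(@db) + N') WITH NO_INFOMSGS, ALL_ERRORMSGS;';
    EXEC sys.sp_executesql @sql;
    FETCH NEXT FROM dbs INTO @db;
END

CLOSE dbs;
DEALLOCATE dbs;
[/sql]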

UPDATE: As I expected, CHECKDB finished on all 100 copy databases with zero errors! Happy to provide the ERRORLOGs to anyone interested. Here’s a screenshot of what the output looks like, after I gathered the output of sp_readerrorlog and pasted it into an Excel sheet:
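For reference, that sp_readerrorlog call looks something like this (parameters: log number, log type where 1 = SQL Server error log, and a search string):

[sql]
-- Pull only the DBCC CHECKDB completion messages from the current error log
EXEC sp_readerrorlog 0, 1, N'DBCC CHECKDB';
[/sql]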

Can you believe all of these copies only consume about 512GB of actual storage on the array?

So I need your feedback on what you just saw, please! Drop me a note on Twitter or comment below. There are obviously going to be more posts related to snapshots and how they work in Pure Storage FlashArray, as you can imagine. Tons of things to discuss. And definitely more on why this stuff is so important for recovery upon a disaster. For now, leave your comments, questions or concerns below!

Thanks for reading!

-A

UPDATE: I figured out why the VMs were slow in the lab when I recorded the video. One of my peers filled the array to over 100% capacity. Thanks, Remko 😁


(*) I really had a rather inappropriate name for these, actually. Find me at an event and I’ll tell you what it is. I’m sure you can figure it out though 😁

(**) Why I am so polite in my own blog is beyond me.

Making DBCC CHECKDB Go Faster on Pure Storage FlashArray

Earlier on I was doing some CHECKDB-related work in our old SQL Server lab and I realized that the CHECKDB process was not really pushing the array hard. I then had one of those synapses that have seemingly become less frequent as I age — I remembered that there’s a way of making DBCC CHECKDB go much faster than it typically does, at the expense of running less in-depth checks. It’s called the PHYSICAL_ONLY option. And boy did it make a difference versus running the whole thing!

I also remembered that there are some trace flags documented here to improve performance of DBCC CHECKDB with PHYSICAL_ONLY, and Microsoft also indicates that you may see some gains on a full CHECKDB if you use them (Narrator voice: I didn’t). So I tested them (both TFs enabled or disabled together, not individually – others may have done that elsewhere) and noticed a small gain against this ~1.7TB test database.
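Here’s a sketch of what such a test run looks like. One assumption to call out loudly: I’m taking “the documented trace flags” to be 2549 and 2562, the pair Microsoft documents for DBCC CHECKDB performance; the database name is a placeholder:

[sql]
-- TF 2549 and 2562 are my assumption of "the documented trace flags";
-- enable globally, run the check, then turn them back off
DBCC TRACEON (2549, -1);
DBCC TRACEON (2562, -1);
GO
DBCC CHECKDB (N'<your_database>') WITH PHYSICAL_ONLY, NO_INFOMSGS, ALL_ERRORMSGS;
GO
DBCC TRACEOFF (2549, -1);
DBCC TRACEOFF (2562, -1);
[/sql]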

Run times below for your delight.

Scenario                              Time to Complete
PHYSICAL_ONLY, No Trace Flags         54:03
PHYSICAL_ONLY, Trace Flags Enabled    51:24
Full, Trace Flags Enabled             1:27:29
Full, Trace Flags Disabled            1:27:24

You can also note in the graph below the bandwidth utilization on the array. It’s a really old //m20 and the SQL Server box is connected to it using 2x 8Gbps Fibre Channel connections, so in your case YMMV when it comes to throughput.

Word of caution, though. This will make DBCC CHECKDB run faster, but it will also hit your array harder. Make sure you have the necessary performance capacity to do this without impacting concurrent workloads. A great way of doing this is using Pure1’s Workload Planner.

Until next time,

-A

T-SQL Tuesday #126 – Folding@Home

Well, it’s been years since I’ve participated in one of these, but never too late I guess 🙂

My friend Glenn Berry invited us to participate in a T-SQL Tuesday dedicated to the wonderful project called Folding@Home. And I couldn’t be happier that he chose that topic.

Several weeks into the COVID-19 pandemic I went through a bit of a low personally. I was struggling to concentrate, feeling tired and generally not resting well. During that period I saw my 44th birthday approach and thought it would be a good thing for me to try to make my work at home environment better. Then I guess the midlife crisis kicked in, because I spent way more money than I should have on this thing:

The Orange Folding Rig (OFR)

OFR, the Orange Folding Rig, is a ridiculously fast “personal” computer. An AMD Ryzen 9 3900X CPU with 12 physical cores (yes, there are double that number of logical cores), 128GB of DDR4 RAM at 3600MHz, an M.2 SSD that screams, and an NVIDIA GeForce RTX 2080 SUPER GPU. The motherboard is an ASRock X570 Creator, and the power supply is a Seasonic PRIME PX-1000.

This thing is insane. I run VMs, Spotify, all of my work-related applications – including video recording apps like Camtasia – at the same time that the rig runs its main intended purpose: the Folding@Home software. And I run it at full throttle. OFR doesn’t even break a sweat.

I gotta admit that Glenn had a lot to do with my decision to build the rig and contribute to the FAH project. Thanks, my friend. It’s an honor and a privilege to be able to help under the umbrella of team #SQLFamily.

Stay tuned though, friends. I will have news on this front very soon. Very, very heartwarming news.

Scripting out Contained Database Users

 

Today’s blog post comes from a Twitter thread on which @SQLSoldier, @sqlstudent144 and @SQLTaiob were discussing the internals of contained database users.

Unless you have been living under a rock, you’ve heard about the concept of contained users within a SQL Server database (hit the link if you have not). In this article I’d like to show you that you can, indeed, script out contained database users and recreate them on another database, either as contained users or as good old-fashioned logins/server principals.

Why would this be useful? Well, because you would not need to know the password for the user in order to recreate it on another instance. I know there’s a limited number of scenarios where this would be necessary, but nonetheless I figured I’d put this blog post out there to show how it can be done.

A more obscure use case: with the password hash (which I’m about to show you how to obtain) you could also crack the password using a utility like hashcat, as highlighted on this SQLServerCentral article.

The Investigation

SQL Server uses System Base Tables to save the password hashes of logins and contained database users. For logins it uses sys.sysxlgns, whereas for contained database users it leverages sys.sysowners.

I’ll show you what I do to figure this stuff out: I create a login/contained user, and then I immediately browse the transaction log with, for example, fn_dblog.
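Here’s a minimal sketch of that technique, with a made-up login name and password:

[sql]
USE master;
GO
CREATE LOGIN InvestigateMe WITH PASSWORD = N'SuperSecret1!';  -- hypothetical login
GO
-- Browse master's log right away and look at which system base tables got touched
SELECT [Current LSN], Operation, Context, AllocUnitName
FROM fn_dblog(NULL, NULL)
WHERE AllocUnitName LIKE N'sys.sys%';
[/sql]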

[Screenshot: fn_dblog output after creating a login, showing the system base tables touched]

It’s pretty obvious that the only two base tables touched by the operation are sys.sysxlgns and sys.sysprivs – the latter is used to track permissions. If I connect to the DAC on my instance, I can query for the password hash of this login I’ve just created.

[Screenshot: the password hash of the new login, queried over the DAC]

A few interesting things about this hash. This was taken on my laptop, which happens to run SQL Server 2014 RTM CU2, the latest public build of SQL Server 2014 as of this writing. In 2008 R2 and prior versions (back to 2000), password hashes would start with 0x0100. This changed because, starting with SQL Server 2012, password hashes are computed using a SHA-512 algorithm, as opposed to SHA-1 (used since 2000) or Snefru (used in 6.5 and 7.0). SHA-1 is nowadays deemed unsafe and is very easy to crack.

For regular SQL logins, this information is exposed through the sys.sql_logins catalog view, so there is really no need to connect to the DAC to grab a SID/password hash pair. For contained database users, there is (currently) no method of obtaining SID or password hashes without connecting to the DAC.
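For example, grabbing the SID/password hash pairs for all SQL logins is a one-liner against that documented catalog view:

[sql]
-- Documented catalog view; no DAC required for regular SQL logins
SELECT name, sid, password_hash
FROM sys.sql_logins;
[/sql]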

If we create a contained database user, this is what we get from the transaction log:

[Screenshot: fn_dblog output after creating a contained database user]

Note that the System Base Table used in this case is sys.sysowners. sys.sysprivs is used as well, and again this is to track permissions.

To query sys.sysowners, you would have to connect to the DAC, as I mentioned previously. And this is what you would get:

[Screenshot: sys.sysowners queried over the DAC]
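That query looks something like the sketch below. Keep in mind that sys.sysowners is an undocumented system base table, so treat the column list (name, sid, password) as my reading of it rather than a documented contract; the user name is a placeholder:

[sql]
-- Run from a DAC connection (ADMIN:<server_name>), in the context of the contained database;
-- column names on this undocumented base table are my best reading of it
SELECT name, sid, password
FROM sys.sysowners
WHERE name = N'<contained_user_name>';
[/sql]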

There are other ways to figure out what SQL Server uses under the hood to store contained database user password hashes, like looking at the execution plan for a query against sys.dm_db_uncontained_entities (thanks, Robert Davis!).

SIDs, Logins, Contained Users, and Why You Care…Or Not.

One of the reasons behind the existence of Contained Users was the concept of portability of databases: it is really painful to maintain Server Principals (Logins) synced across most shared-nothing SQL Server HA/DR technologies (Mirroring, Availability Groups, and Log Shipping). Often times you would need the Security Identifier (SID) of these logins to match across instances, and that meant that you had to fetch whatever SID was assigned to the login on the principal instance so you could recreate it on a secondary. With contained users you normally wouldn’t care about SIDs, as the users are always available (and synced, as long as synchronization takes place) across instances.

Now you might be presented with a particular requirement that specifies that SIDs must be synced between logins on certain instances and contained database users on other databases. How would you go about creating a contained database user with a specific SID?

The answer is that you can’t do it directly, but there’s a little trick that would allow you to do it.

Create a login with the specified SID and password hash, create a user for that server principal on a partially contained database, migrate that user to contained using the system stored procedure sp_migrate_user_to_contained, and then drop the login.

[sql]
CREATE LOGIN <login_name> WITH PASSWORD = <password_hash> HASHED, SID = <sid>;
GO

USE <partially_contained_db>;
GO

CREATE USER <user_name> FROM LOGIN <login_name>;
GO

EXEC sp_migrate_user_to_contained
    @username = N'<user_name>',
    @rename = N'keep_name',
    @disablelogin = N'disable_login';
GO

DROP LOGIN <login_name>;
GO
[/sql]

Here’s what this skeleton looks like in action:

[Screenshot: the skeleton above, executed with actual values]

And now I have a contained user with a specified SID and password hash. In my example above, I renamed the user after migrating it to contained so that it is, hopefully, easier to understand.

Enjoy!

On Community and Recognition

Even though way early in my career I had a SQL Server MVP as a mentor (by pure chance), it wasn’t until 2008-2009 that I sort of researched a bit more about the program, and what it really meant. I was beginning to understand how powerful networking in our community really is, and how attending and participating on SQL Saturdays and other events would make all the difference in my career.

I thought having that “badge” would be cool, but I really didn’t know how to get one.

I remember I asked K. Brian Kelley [twitter|blog], somebody who I consider a friend (even though I’ve never met him in person), how I could become a SQL Server MVP. He mentioned that helping others through any vehicle (forums like MSDN’s, SQL Server Central, etc.) would help, but that I really needed to ask myself what it would mean to me to be an MVP. And I often wondered, really, what it all meant. Was it just an award given to somebody who would answer questions from others relentlessly? Would it be just some award given to members of this semi-secret inner circle?

Over time I became more and more involved with the community. I started speaking, and continued helping others, mostly through #SQLHelp on Twitter (funny how a hashtag changes your life!). I was seriously turned off by forums (still am) and didn’t care much about blogging very frequently. After all, other folks out there blog about just about every single topic on SQL Server, and I didn’t want to be just one more guy. Yet I did blog, but only when I thought a post of mine could really make a difference to somebody facing a similar situation or working with an under-documented feature.

The fact that I tweeted incessantly helped get my name out there, and I kept being accepted to speak at SQL Saturdays and even the PASS Summit. Then I got my MCM certification, which really gave my name some exposure out there.

I was really becoming influential.

I eventually delivered pre-conferences at premier events like SQLBits, spoke at massive events like DevConnections and TechEd, and started toying with the idea of helping PASS even more. Then, with the help of Robert Davis [twitter|blog], and also from K. Brian Kelley himself, we launched the Security Virtual Chapter for PASS.

I was doing all this high-profile stuff, which was fantastic, but I never lost sight of my roots: volunteering and helping others. I kept signing up to help PASS as a member of the Program Committee, something that I have proudly done for 3 years in a row (and I’m not going to discuss the current controversy on the subject here), helped moderate Lightning Talks at the Summit even when I wasn’t chosen as a speaker, spent hours talking to people at the Community Zone at the Summit and the BA Conference, moderated online webcasts for the Security VC, and helped moderate online events like 24 Hours of PASS. And, all along, kept learning and sharing on #SQLHelp.

All the little things that not a lot of people sign up for (or avoid altogether), I happily help with.

Yet “MVPness” is not something that I feel entitled to. Not something that I demanded every quarter. I find it funny that I look forward to the MVP Summit taking place in Seattle because that means I get to hang out with that group – I have met a LOT of them along the way, and some of them I consider extremely close friends. These are people that I deeply respect. Still, I am not expecting an award by association.

I do the things I do for the SQL Server community because I love it, not because I want another three letter acronym on my resume.

Earlier on this year I was nominated for the award by several very influential folks in the community (Thank you!!). I seriously thought I wasn’t going to get it, because I thought I hadn’t done enough for the community to deserve it.

Today, I am _insanely_ honored to be recognized for my contributions to the community.

This is one badge I will wear with pride.

Today I was awarded the SQL Server MVP award.

Microsoft_MVP_logo

And now the real work begins.

This award is yours, #SQLFamily, and also goes to the unsung heroes of this community that have been working hard for YEARS and still have not been awarded; you know who you are. Never give up, my friends. Stay true to your north.

Best Advice Ever: Learn By Helping Others

I remember when back in 2001 my friend and former SQL Server MVP Carlos Eduardo Rojas was busy earning his MVP street-cred in the NNTP forums, aka Newsgroups. I always thought he was playing the Sheriff trying to bring some order to a Wild Wild West town, working to understand what all these people were asking. He spent a lot of time doing this stuff – and I thought it was just plain crazy. After all, he was doing it for free. What was he gaining from all of that work?

It was not until the advent of Twitter and #SQLHelp that I realized the real gain behind helping others. Forget about the glory and the laurels of others thanking you (and thinking you’re the best thing ever – ha!), or whatever award with whatever three letter acronym might be given to you.

It’s about what you learn in the process of helping others.

See, when you teach something, it’s usually at a fixed date and time, and on a specific topic. But helping others with their issues or general questions is something that goes on 24×7, on whatever topic under the sun. Just go look at sites like DBA.StackExchange.com, or the SQLServerCentral forums. It’s questions coming in literally non-stop from all corners of the world. And yet a lot of people are willing to help you, regardless of who you are, where you come from, or what time of day it is.

And in my case, this process of helping others usually leads to me learning something new. Especially in those cases where the question isn’t really something I’m good at. The delicate part comes when you’re ready to give an answer, but you’re not sure. Often times I’ll try to validate with Internet searches and what have you. Often times I’ll throw in a question mark at the end of the answer, so as not to look authoritative, but rather suggestive. But as time passes by, you get more and more comfortable with that topic. And that’s the real gain.

I have done this for many years now on #SQLHelp, which is my preferred vehicle for providing assistance. I cannot tell you how much I’ve learned from it. By helping others, by watching others help. It’s all knowledge and experience you gain…and you might not be getting all that in your day job today. Such a thing, my dear reader, is invaluable. It’s what will differentiate your resume from the rest of the pack. It’s what will get you places. Take it from me – a guy who, like you, once knew nothing about SQL Server.

DBCC CHECKDB on VVLDB and latches (Or: My Pain is Your Gain)

Does your CHECKDB hurt, Argenis?

There is a classic blog series by Paul Randal [blog|twitter] called “CHECKDB From Every Angle” which is pretty much mandatory reading for anybody who’s even remotely considering going for the MCM certification, or its replacement (the Microsoft Certified Solutions Master: Data Platform – makes my fingers hurt just from typing it). Of particular interest is the post “Consistency Options for a VLDB” – on it, Paul provides solid, timeless advice (I use the word “timeless” because it was written in 2007, and it all applies today!) on how to perform checks on very large databases.

Well, here I was trying to figure out how to make CHECKDB run faster on a restored copy of one of our databases, which happens to exceed 7TB in size. The whole thing was taking several days on multiple systems, regardless of the storage used – SAS, SATA or even SSD…and I actually didn’t pay much attention to how long it was taking, or even bothered to look at the reasons why – as long as it was finishing okay and found no consistency errors.

Yes – I know. That was a huge mistake, as corruption found in a database several days after taking place could only allow for further spread of the corruption – and potentially large data loss.

In the last two weeks I increased my attention towards this problem, as we noticed that CHECKDB was taking EVEN LONGER on brand new all-flash storage in the SAN! I couldn’t really explain it, and we were almost ready to blame the storage vendor. The vendor told us that they could initially see the server driving decent I/O – around 450MB/sec – and then it would settle at a very slow rate of 10MB/sec or so. “Hmm”, I thought – “CHECKDB is just not pushing the I/O subsystem hard enough”. Perfmon confirmed the vendor’s observations.


Dreaded @BlobEater

What was CHECKDB doing all the time while doing so little I/O? Eating Blobs.

It turns out that CHECKDB was taking an extremely long time on one of our frankentables, which happens to have 35 billion rows (yup, with a b) and sucks up several terabytes of space in the database. We do have a project ongoing to purge/split/partition this table, so it’s just a matter of time before we deal with it.

But the reality today is that CHECKDB is coming to a screeching halt in performance when dealing with this particular table.

Checking sys.dm_os_waiting_tasks and sys.dm_os_latch_stats showed that LATCH_EX (DBCC_OBJECT_METADATA) was by far the top wait type. I remembered hearing recently about that wait from another post that Paul Randal made, but that was related to computed-column indexes, and in fact, Paul himself reminded me of his article via twitter. But alas, our pathologic table had no non-clustered indexes on computed columns.
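If you want to check for the same thing on your system, the aggregate latch picture is one query away:

[sql]
-- Cumulative latch waits since the instance last started
SELECT latch_class, waiting_requests_count, wait_time_ms
FROM sys.dm_os_latch_stats
ORDER BY wait_time_ms DESC;
[/sql]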

I knew that latches are used by the database engine to do internal synchronization – but how could I help speed this up? After all, this is stuff that doesn’t have a lot of knobs to tweak.

(There’s a fantastic level 500 talk by Bob Ward from Microsoft CSS [blog|twitter] called “Inside SQL Server Latches” given at PASS 2010 – and you can check it out here. DISCLAIMER: I assume no responsibility for any brain melting that might ensue from watching Bob’s talk!)

Failed Hypotheses

Earlier on this week I flew down to Palo Alto, CA, to visit our Headquarters – and after having a great time with my Monkey peers, I was relaxing on the plane back to Seattle watching a great talk by SQL Server MVP and fellow MCM Maciej Pilecki [twitter] called “Masterclass: A Day in the Life of a Database Transaction” where he discusses many different topics related to transaction management inside SQL Server. Very good stuff, and when I got home it was a little late – that slow DBCC CHECKDB that I had been dealing with was way in the back of my head.

As I was looking at the problem at hand earlier on this week, I thought “How about I set the database to read-only?” I remembered one of the things Maciej had (jokingly) said in his talk: “if you don’t want locking and blocking, set the database to read-only” (or something to that effect, pardon my loose memory). I immediately killed the CHECKDB which had been running painfully for days, and set the database to read-only mode. Then I ran DBCC CHECKDB against it. It started going really fast (even a bit faster than before), and then throttled down again to around 10MB/sec. All sorts of expletives went through my head at the time. Sure enough, the same latching scenario was present. Oh well.

I even spent some time trying to figure out if NUMA was hurting performance. Folks on Twitter made suggestions in this regard (thanks, Lonny! [twitter])

…Eureka?

This past Friday I was still scratching my head about the whole thing; I was ready to start profiling with XPERF to see if I could figure out which part of the engine was to blame and then get Microsoft to look at the evidence.

After getting a bunch of good news I’ll blog about separately, I sat down for a figurative smack down with CHECKDB before the weekend. And then the light bulb went on.

A sparse column. I thought that I couldn’t possibly be experiencing the same scenario that Paul blogged about back in March showing extreme latching with non-clustered indexes on computed columns. Did I even have a non-clustered index on my sparse column?

As it turns out, I did. I had one filtered non-clustered index – with the sparse column as the index key (and only column).

To prove that this was the problem, I went and setup a test.

Yup, that’ll do it

The repro is very simple for this issue: I tested it on the latest public builds of SQL Server 2008 R2 SP2 (CU6) and SQL Server 2012 SP1 (CU4).

First, create a test database and a test table, which only needs to contain a sparse column:

[sql]
CREATE DATABASE SparseColTest;
GO

USE SparseColTest;
GO

CREATE TABLE testTable (testCol smalldatetime SPARSE NULL);
GO

INSERT INTO testTable (testCol) VALUES (NULL);
GO 1000000
[/sql]

That’s 1 million rows, and even though you’re inserting NULLs, that’s going to take a while. On my laptop, it took 3 minutes and 31 seconds.

Next, we run DBCC CHECKDB against the database:

[sql]
DBCC CHECKDB('SparseColTest') WITH NO_INFOMSGS, ALL_ERRORMSGS;
[/sql]

This runs extremely fast, at least on my test rig – 198 milliseconds.

Now let’s create a filtered non-clustered index on the sparse column:

[sql]
CREATE NONCLUSTERED INDEX [badBadIndex] ON testTable (testCol) WHERE testCol IS NOT NULL;
[/sql]

With the index in place now, let’s run DBCC CHECKDB one more time:

[sql]
DBCC CHECKDB('SparseColTest') WITH NO_INFOMSGS, ALL_ERRORMSGS;
[/sql]

In my test system this statement completed in 11,433 milliseconds – 11.43 full seconds. Quite the jump from 198 milliseconds.
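And to close the loop on the repro, dropping the index should bring the run time right back down:

[sql]
-- Drop the filtered index on the sparse column and re-check
DROP INDEX [badBadIndex] ON testTable;
GO
DBCC CHECKDB('SparseColTest') WITH NO_INFOMSGS, ALL_ERRORMSGS;
GO
[/sql]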

I went ahead and dropped the filtered non-clustered indexes on the restored copy of our production database, and ran CHECKDB against that. We went down from 7+ days to 19 hours and 20 minutes.

Cue the “Argenis is not impressed” meme, please, Mr. LaRock.

My pain is your gain, folks. Go check to see if you have any such indexes – they’re likely causing your consistency checks to run very, very slow.

Happy CHECKDBing,

-Argenis

ps: I plan to file a Connect item for this issue – I consider it a pretty serious bug in the engine. After all, filtered indexes were invented BECAUSE of the sparse column feature – and it makes a lot of sense to use them together. Watch this space and my twitter timeline for a link.