Author: argenis

DBA Best Practices – A Blog Series: Episode 2 – Password Lists

 

Digital World, Digital Locks

One of the biggest digital assets that any company has is its secrets. These include passwords, key rings, certificates, and any other digital asset used to protect another asset from tampering or unauthorized access.

As a DBA, you are very likely to manage some of these assets for your company – and your employer trusts you with keeping them safe. Probably one of the most important of these assets is passwords. As you well know, they can be used anywhere: for service accounts, credentials, proxies, linked servers, DTS/SSIS packages, symmetric keys, private keys, etc., etc.

Have you given some thought to what you’re doing to keep these passwords safe? Are you backing them up somewhere? Who else besides you can access them?

Good-Ol’ Post-It Notes Under Your Keyboard

If you have a password-protected Excel sheet for your passwords, I have bad news for you: Excel’s level of encryption is good for your grandma’s budget spreadsheet, not for a list of enterprise passwords.

I will try to summarize the main point of this best practice in one sentence: you should keep your passwords in an encrypted, access-controlled, version-controlled, backed-up, well-known shared location that every DBA on your team is aware of, and maintain copies of this password "database" on your DBAs’ workstations.

Now let me break that statement down for you:

– Encrypted: what’s the point of saving your passwords in a file that any Windows admin with enough privileges can read?

– Access controlled: This one is pretty much self-explanatory.

– Version controlled: Passwords change (and I’m really hoping you do change them) and version control would allow you to track what a previous password was if the utility you’ve chosen doesn’t handle that for you.

– Backed-up: you want a safe copy of the password list to be kept offline, preferably in long-term storage, and relatively easy to restore.

– Well-known shared location: This is critical for teams: what good is a password list if only one person in the team knows where it is?

I have seen multiple examples of this that work well. They all start with an encrypted database. Certainly you could leverage SQL Server’s native encryption features, like cell-level encryption, for this. I have found such implementations to be impractical, for the most part.
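For reference, here is a rough sketch of what that cell-level encryption route might look like – the table, key, and certificate names are made up for illustration, and you can judge the practicality for yourself:

-- A minimal sketch: key hierarchy plus a table holding encrypted secrets
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<a strong master key password>'
CREATE CERTIFICATE PwdListCert WITH SUBJECT = 'Password list certificate'
CREATE SYMMETRIC KEY PwdListKey WITH ALGORITHM = AES_256 ENCRYPTION BY CERTIFICATE PwdListCert
GO

CREATE TABLE dbo.PasswordList (AccountName sysname NOT NULL, Secret varbinary(8000) NOT NULL)
GO

-- Writing a secret requires opening the symmetric key first
OPEN SYMMETRIC KEY PwdListKey DECRYPTION BY CERTIFICATE PwdListCert
INSERT INTO dbo.PasswordList (AccountName, Secret)
VALUES ('svc_sql_prod', ENCRYPTBYKEY(KEY_GUID('PwdListKey'), N'TheActualPassword'))

-- Reading it back: open the key again and decrypt
SELECT AccountName, CONVERT(nvarchar(4000), DECRYPTBYKEY(Secret)) AS Secret
FROM dbo.PasswordList
CLOSE SYMMETRIC KEY PwdListKey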

Enter The World Of Utilities

There are myriad open source/free software solutions to help you here. One of my favorites is KeePass, which creates encrypted files that can be saved to a network share, SharePoint, etc. KeePass has UIs for most operating systems, including Windows, macOS, iOS, Android and Windows Phone.

Other solutions I’ve used before that are worth mentioning include PasswordSafe and 1Password, with the latter being a paid solution – but wildly popular on mobile devices.

There are, of course, even more "enterprise-level" solutions available from 3rd party vendors. The truth is that most of the customers I work with don’t need that level of protection for their digital assets, and something like a KeePass database on SharePoint suits them very well.

What are you doing to safeguard your passwords? Leave a comment below, and join the discussion!

Cheers,

-Argenis

DBA Best Practices – A Blog Series: Episode 1 – Backups

 

This blog post is part of the DBA Best Practices series, in which various topics of concern for daily database operations are discussed. Your feedback and comments are very much welcome, so please drop by the comments section and be sure to leave your thoughts on the subject.

Morning Coffee

When I was a DBA, the first thing I did when I sat down at my desk at work was to check that all backups had completed successfully. It really was more of a ritual, since I had a dual system in place to check for backup completion: 1) the scheduled agent jobs to back up the databases were set to alert the NOC on failure, and 2) I had a script run from a central server every so often to check for any backup failures.

Why the redundancy, you might ask. Well, for one, I was once bitten by the fact that database mail doesn’t work 100% of the time. Potential causes for failure include issues on the SMTP box that relays your server email, firewall problems, DNS issues, etc. And so to be sure that my backups completed fine, I needed to rely on a mechanism other than having the servers do the talking – I needed to interrogate the servers and ask each one if an issue had occurred. This is why I had a script run every so often.

Some of you might have monitoring tools in place like Microsoft System Center Operations Manager (SCOM) or similar 3rd party products that would track all these things for you. But at the time, we had no choice but to write our own PowerShell scripts to do it.
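For illustration, here is a sketch of the kind of query such a script might run against each instance – it flags databases with no successful full backup in the last 24 hours, and it assumes your msdb backup history has not been purged:

-- Databases without a successful full backup in the last 24 hours (sketch)
SELECT d.name AS database_name,
       MAX(b.backup_finish_date) AS last_full_backup
FROM sys.databases AS d
LEFT JOIN msdb.dbo.backupset AS b
       ON b.database_name = d.name
      AND b.type = 'D'                     -- 'D' = full database backup
WHERE d.name <> 'tempdb'
GROUP BY d.name
HAVING MAX(b.backup_finish_date) < DATEADD(HOUR, -24, GETDATE())
    OR MAX(b.backup_finish_date) IS NULL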

Now it goes without saying that if you don’t have backups in place, you might as well find another career. Your most sacred job as a DBA is to protect the data from a disaster, and only properly safeguarded backups can offer you peace of mind here.

“But, we have a cluster…we don’t need backups”

Sadly, I’ve heard this line more times than I would have liked to. You need to understand that a cluster is built on shared storage, and that is precisely your single point of failure. A cluster will protect you from an issue at the Operating System level, and also from an outage of any SQL-related service or dependent device. But it will most definitely NOT protect you against corruption, nor will it protect you against somebody deleting data from a table – accidentally or otherwise.

Backup, fine. How often do I take a backup?

The answer to this is something you will hear frequently when working with databases: it depends.

What does it depend on? For one, you need to understand how much data your business is willing to lose. This is what’s called Recovery Point Objective, or RPO. If you don’t know how much data your business is willing to lose, you need to have an honest and realistic conversation about data loss expectations with your customers, internal or external. From my experience, their first answer to the question “how much data loss can you withstand?” will be “zero”. In that case, you will need to explain how zero data loss is very difficult and very costly to achieve, even in today’s computing environments.

Do you want to go ahead and take full backups of all your databases every hour, or even every day? Probably not, because of the impact that taking a full backup can have on a system. That’s what differential and transaction log backups are for.
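As a rough sketch – the database name, paths, and cadence here are just examples, not a recommendation for your environment – the three backup types look like this:

-- Full backup, e.g. nightly
BACKUP DATABASE SalesDB TO DISK = 'X:\Backups\SalesDB_full.bak' WITH CHECKSUM, INIT

-- Differential backup, e.g. every few hours; only extents changed since the last full
BACKUP DATABASE SalesDB TO DISK = 'X:\Backups\SalesDB_diff.bak' WITH DIFFERENTIAL, CHECKSUM, INIT

-- Transaction log backup, e.g. every few minutes (requires FULL or BULK_LOGGED recovery)
BACKUP LOG SalesDB TO DISK = 'X:\Backups\SalesDB_log.trn' WITH CHECKSUM, INIT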

Have I answered the question of how often to take a backup? No, and I did that on purpose. You need to think about how much time you have to recover from any event that requires you to restore your databases. This is what’s called Recovery Time Objective, or RTO. Again, if you go ask your customer how long of an outage they can withstand, at first you will get a completely unrealistic number – and that will be your starting point for discussing a solution that is cost effective.

The point that I’m trying to get across is that you need to have a plan. This plan needs to be practiced, and tested. Like a football playbook, you need to rehearse the moves you’ll perform when the time comes. How often is up to you, and the objective is that you feel better about yourself and the steps you need to follow when emergency strikes.

A backup is nothing more than an untested restore

Backups are files. Files are prone to corruption. Put those two together and realize how you feel about those backups sitting on that network drive. When was the last time you restored any of those?

Restoring your backups on another box – that, by the way, doesn’t have to match the specs of your production server – will give you two things: 1) peace of mind, because now you know that your backups are good and 2) a place to offload your consistency checks with DBCC CHECKDB or any of the other DBCC commands like CHECKTABLE or CHECKCATALOG. This is a great strategy for VLDBs that cannot withstand the additional load created by the consistency checks.

If you choose to offload your consistency checks to another server, though, be sure to run DBCC CHECKDB WITH PHYSICAL_ONLY on the production server, and if you’re using SQL Server 2008 R2 SP1 CU4 and above, be sure to enable trace flags 2562 and/or 2549, which will speed up the PHYSICAL_ONLY checks further – you can read more about this enhancement here.
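Here is a sketch of how that split might look – the database name is made up, and remember that the trace flags only apply globally if you add -1 or set them as startup parameters:

-- On the box where you restored the backup: the full, expensive checks
DBCC CHECKDB (SalesDB) WITH NO_INFOMSGS, ALL_ERRORMSGS

-- On the production server: lighter-weight, physical-only checks
DBCC TRACEON (2562, 2549, -1)    -- speeds up PHYSICAL_ONLY on supported builds
DBCC CHECKDB (SalesDB) WITH PHYSICAL_ONLY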

Back to the “How Often” question for a second. If you have the disk space, the network capacity, and the system resources to do so, why not back up the transaction log often? As in, every 5 minutes, or even less than that? There’s not much downside to doing it, as you will have to clear the log with a backup sooner rather than later, lest you risk running out of space in your tlog, or even on your drive. The one drawback to this approach is that you will have more files to deal with at restore time, and processing each file will add a bit of extra time to the entire process. But it might be worth that time, knowing that you minimized the amount of data lost. Again, test your plan to make sure that it matches your particular needs.
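To make that restore-time trade-off concrete, here is a sketch of the sequence all those files imply – the names and paths are made up, and your chain may be much longer:

-- Most recent full backup first, leaving the database in the RESTORING state
RESTORE DATABASE SalesDB FROM DISK = 'X:\Backups\SalesDB_full.bak' WITH NORECOVERY, REPLACE
-- Then the most recent differential, if you take them
RESTORE DATABASE SalesDB FROM DISK = 'X:\Backups\SalesDB_diff.bak' WITH NORECOVERY
-- Then every log backup taken since that differential, in order
RESTORE LOG SalesDB FROM DISK = 'X:\Backups\SalesDB_log_0905.trn' WITH NORECOVERY
RESTORE LOG SalesDB FROM DISK = 'X:\Backups\SalesDB_log_0910.trn' WITH NORECOVERY
-- Finally, bring the database online
RESTORE DATABASE SalesDB WITH RECOVERY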

Where to back up to? Network share? Locally? SAN volume?

This is another topic where everybody has a favorite choice. So I’ll stick to mentioning what I like to do and what I consider to be the best practice in this regard. I like to back up to a SAN volume, i.e., a drive that actually lives on the SAN and can easily be attached to another server in a pinch, saving you valuable time – you wouldn’t need to restore files over the network (slow) or pull drives out of a dead server (been there, done that, it’s also slow!).

The key is to have a copy of those backup files made quickly, and, if at all possible, to a remote target on a different datacenter – or even the cloud. There are plenty of solutions out there that can help you put such a solution together. That right there is the first step towards a practical Disaster Recovery plan. But there’s much more to DR, and that’s material for a different blog post in this series.

T-SQL Tuesday #36 (#tsql2sday)– Post-PASS Summit Depression

I had an email thread going with a prominent member of the SQL Server community today, where he confessed that he didn’t attend any sessions during the PASS Summit last week. He spent all of his time networking and catching up with people.

I, personally, can relate.

This year’s Summit was another incarnation of that ritual of SQL Server professionals meeting to share their knowledge, experience, and just have a wonderful time while doing so.

It’s been a few days since the Summit ended, and I’m definitely dealing with withdrawal.

My name is Argenis, and I’m a #SQLFamilyHolic.

(This post is part of the T-SQL Tuesday series, a monthly series of blog posts from members of the SQL Server community – this month, Chris Yates is hosting)

DBA Best Practices: A Blog Series

 

Introduction

After the success of the “Demystifying DBA Best Practices” Pre-Conference that my good friend Robert Davis, a.k.a. SQLSoldier [Blog|Twitter], and I delivered at multiple events, including the PASS Summit 2012, I have decided to blog about some of the topics discussed at the Pre-Con. My thanks go to Robert for agreeing to share this content with the larger SQL Server community.

This will be a rather lengthy blog series – and as in the Pre-Con, I expect a lot of interaction and feedback. Make sure you throw in your two cents in the comments section of every blog post.

The first topic that I’ll be discussing in this blog series is the thing of utmost importance for any Database Administrator: the data. Let’s discuss the importance of backups and a solid restore strategy.

Care to share your thoughts on this subject in the comments section below?

Database Mail and SMO are indeed supported on 64-bit, Standard Edition instances of SQL Server 2012

 

This is something that comes up rather regularly at forums, so I decided to create a quick post to make sure that folks out there can feel better about SQL Server 2012.

If you read the Web article “Features Supported By Editions of SQL Server 2012” as of the time of writing this post, you will see that it points out that these two features are not supported on x64 Standard Edition. This is NOT correct. It is most definitely a documentation bug – one that unfortunately has caused some customers to sit in a holding pattern before upgrading to SQL Server 2012.

Database Mail and SMO indeed work and are fully supported on SQL Server 2012 Standard Edition x64 instances. These features work as they should.
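If you want to verify this on your own Standard Edition instance, a quick smoke test looks something like the following – it assumes you have already configured a Database Mail profile, here called 'DBA Mail':

-- Send a test message through Database Mail
EXEC msdb.dbo.sp_send_dbmail
     @profile_name = 'DBA Mail',
     @recipients   = 'dba-team@example.com',
     @subject      = 'Database Mail test',
     @body         = 'If you are reading this, Database Mail works on this instance.'

-- Then check the delivery status of recent items
SELECT TOP (10) mailitem_id, sent_status, sent_date
FROM msdb.dbo.sysmail_allitems
ORDER BY mailitem_id DESC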

I have contacted the documentation teams internally to make sure that this is reflected in the next releases of said Web article.

Ghost Records, Backups, and Database Compression…With a Pinch of Security Considerations

Today Jeffrey Langdon (@jlangdon) posed on #SQLHelp the following questions:

[Screenshot of Jeffrey’s tweet with his questions]

So I set out to answer his questions, and I said to myself: “Hey, I haven’t blogged in a while, how about I blog about this particular topic?” Thus, this post was born.

(If you have never heard of Ghost Records and/or the Ghost Cleanup Task, go see this blog post by Paul Randal)

1) Do ghost records get copied over in a backup?

If you guessed yes, you guessed right. The backup process in SQL Server takes all data as it is on disk – it doesn’t crack the pages open to selectively pick which slots have actual data and which ones do not. The whole page is backed up, regardless of its contents.

Even if ghost cleanup has run and processed the ghost records, the slots are not overwritten immediately, but only when another DML operation comes along and reuses them.

As a matter of fact, all of the allocated space for a database will be included in a full backup.

So, this poses a bit of a security/compliance problem for some of you DBA folk: if you want to take a full backup of a database after you’ve purged sensitive data, you should rebuild all of your indexes (with FILLFACTOR set to 100%). But the empty space in your data file(s) might still contain sensitive data! A DBCC SHRINKFILE might help get rid of that (not so) empty space, but that might not be the end of your troubles: you might _STILL_ have (not so) empty space in your files!
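As a sketch, the rebuild-then-shrink sequence described above would look something like this – the table, database, and file names are made up:

-- Rebuild every index on the table that held the sensitive data, packing pages full
ALTER INDEX ALL ON dbo.SensitiveTable REBUILD WITH (FILLFACTOR = 100)

-- Then release the (not so) empty space in the data file back to the OS
DBCC SHRINKFILE (N'MyDatabase_Data', 1024)    -- target size in MB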

One approach that you can follow is to export all of the data in your database to another SQL Server instance that does NOT have Instant File Initialization enabled. This can be a tedious and time-consuming process, though. So you have to weigh your options and see what makes sense for you. Snapshot Replication is another idea that comes to mind.

2) Does Compression get rid of ghost records (2008)?

The answer to this is no. The Ghost Records/Ghost Cleanup Task mechanism is alive and well on compressed tables and indexes. You can prove this by running a simple script:

CREATE DATABASE GhostRecordsTest
GO

USE GhostRecordsTest
GO

CREATE TABLE myTable (myPrimaryKey int IDENTITY(1,1) PRIMARY KEY CLUSTERED,
                      myWideColumn varchar(1000) NOT NULL DEFAULT 'Default string value')

-- Compress the table (clustered index) with page compression
ALTER TABLE myTable REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = PAGE)
GO

-- Insert 10 rows, then delete every other one to generate ghost records
INSERT INTO myTable DEFAULT VALUES
GO 10

DELETE myTable WHERE myPrimaryKey % 2 = 0

-- Trace flag 2514 makes DBCC CHECKTABLE report ghost record counts
DBCC TRACEON(2514)

DBCC CHECKTABLE(myTable)

Trace flag 2514 will make DBCC CHECKTABLE give you an extra tidbit of information in its output. For the above script: “Ghost Record count = 5”.
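If you would rather not flip a trace flag, sys.dm_db_index_physical_stats exposes the same information – note that the ghost record columns are only populated in DETAILED mode:

SELECT index_id, ghost_record_count, version_ghost_record_count
FROM sys.dm_db_index_physical_stats(DB_ID('GhostRecordsTest'), OBJECT_ID('myTable'), NULL, NULL, 'DETAILED')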

Until next time,

-Argenis

T-SQL Tuesday #028 – Jack of All Trades, Master of None?

[T-SQL Tuesday logo]

I’ve lost count of how many times I’ve heard that phrase.

Are you specialized? On something? Or anything at all? Has that been a good or a bad thing? Why?

Are you the SQL guy at work? Or the one who does everything?

Do you code? And configure wireless routers at work also?

If you had to pick one thing to specialize on, what would it be?

Over the course of my career I’ve worn many, many hats. I always felt I was doing fine and had a stable job, but I wasn’t quite fond of my prospects for the future. Then a friend said that I should focus on one thing and be the best at it. And while I’m most certainly NOT the best at it, I’ve gotten progressively better at it, to the degree that I’ve been called an ‘Expert’ by some (hate that word!) – I’d rather be called ‘knowledgeable’. My career took off like a rocket after I specialized, and choosing to focus on one thing (SQL Server, in my case) has certainly been one of the best decisions I’ve ever made. I’ve also been careful not to forget my roots as a SysAdmin – I always try to keep up with changes on the Windows/SAN/Networking front, but not with the same level of intensity.

So, in this installment of T-SQL Tuesday I’d like to ask you to blog about your experience. Tell us why you specialized, or why you’d like to specialize. If you don’t think that specialization is a good thing, tell us why. Discuss. Argue your point(s).

Rules

  • Your post must be published between 00:00 GMT Tuesday, March 13th, 2012, and 00:00 GMT Wednesday, March 14th, 2012
  • Your post must contain the T-SQL Tuesday logo from above and the image should link back to this blog post.
  • Trackbacks should work, but if you don’t see one please link to your post in the comments section below so everyone can see your work

Optional, but highly encouraged…

  • Include a reference to T-SQL Tuesday in the title of your post
  • Tweet about your post using the hashtag #TSQL2sDay
  • Consider hosting T-SQL Tuesday yourself. Adam Machanic keeps the list.

A Few Thoughts on the SQL Server MCM Program

I took the MCM lab a couple of weeks ago and passed. I still don’t believe it – it’s been quite the journey.

There have been many decisive moments in my career; one of those was back in 2010 when I decided I would work on obtaining the certification after Brent Ozar blogged rather extensively about his experience with the program. I progressively worked on aspects of SQL Server that were out of my comfort zone – and that paid off. But simply put, I have the cert – but still don’t consider myself a true Master. And I probably never will.

I might be okay in certain areas of SQL, maybe even good in others – but I don’t master them all. I intend to get better in those areas, but that won’t happen overnight.

SQL Server is a mammoth of a product. It’s actually a suite of products – and the MCM certification focuses on only one aspect of it: the Database Engine. No SSAS, no SSIS, no SSRS.

So you might be wondering: Is it worth it then? Why work so hard on obtaining a certification that calls you “Master” when it doesn’t test your skills on every single feature of SQL Server?

To me, it was. And then some.

The fact that I value most after going through the program? Knowing exactly what I don’t know. In other words, looking at a problem and knowing that I won’t be able to solve it right away, yet still I will have an idea of where to begin digging for information to conquer it.

You see, when I decided to go for the MCM I would have probably ranked myself a 6/10. So I went and trained myself in those areas where I felt I wasn’t strong enough. I read blogs, I watched videos, I attended SQLskills immersion events (lucky me!), I went to SQLSaturdays, and even a couple of PASS Summits – but all of that was theory, not a whole lot of practice. And when you get to the MCM Lab, you had better have practiced. You had better have familiarized yourself with that particular feature you’re being challenged with – at least a tiny bit. Learning from BOL on the fly during the MCM Lab is probably just going to be a waste of time – trust me on that one.

The other thing I learned along the way? The power of networking. If you are a SQL Server professional, in any function – DBA, Dev or what have you – allow yourself a couple of minutes to brag about what you do: our community is awesome. Better than anything I’ve seen out there. I haven’t met a single person in the SQL community that’s not willing to help a colleague.

And that’s precisely what helped me prepare the most for the MCM: helping others. Perhaps that will help you get there as well?

Cheers,

-Argenis

On the Topic of Lost SA Passwords on SQL Server 2000…

Since it looks like everything I blog about lately is showing how to get onto SQL instances to which you don’t have formal credentials, I figured what the heck – let’s do a post on SQL 2000.

Earlier today, Saurabh Sapra [twitter] sent a tweet to SQL Server MVP Thomas LaRock [blog|twitter]:

[Tweet screenshot]

To which Tom replied:

[Tweet screenshot]

I was flattered. But I had no posts on that subject. So in turn, I replied:

[Tweet screenshots]

It turns out that it is straightforward. And I will show you how to do just that – recover the SA password of a SQL 2000 instance. It’s easier than you think. As a matter of fact, I’m going to show you how to recover the password for any SQL login that has a current session established against a SQL 2000 instance.

Caveat – and this is important – you cannot recover the password if the user does not have a session established, i.e., if it is not logged on to SQL. I haven’t heard of a way to recover an SA password if there isn’t a connection established to SQL. If you know how to do this, please drop me a line.

Let’s dive right into it.

In Enterprise Manager, I registered the local instance using the SA login – then dropped the BUILTIN\Administrators login.

[Screenshot of Enterprise Manager]

Nothing shocking there – just showing you that there is at least one active connection to SQL (Enterprise Manager itself) and that there are no other SQL logins on the box.

Next, I opened up a CMD window and created a memory dump of the sqlservr.exe process. I can do this because I am a local Windows Administrator on this box. To achieve this I used Sysinternals’ ProcDump utility. Then I took the memory dump and parsed it through strings.exe, another Sysinternals utility, essentially filtering out of the memory dump anything that doesn’t look like a string.

[Screenshot of the ProcDump and strings commands running in a CMD window]

Now I opened dump.txt in Notepad and searched for the host name of my SQL 2000 box immediately followed by “sa”. In this case my VM is named “HURRICANE”, so I searched for “HURRICANEsa”.

[Screenshot of the search result in Notepad]

And there you have it. The password is found between the string “HURRICANEsa” and the string “MS SQLEM(LOCAL)ODBC”. The SA password for this instance is “[email protected]!”.

Obviously your memory dump will look different than mine, given that your SQL client string will probably be different than the one Enterprise Manager uses – which I leveraged for this example. Also, my SQL 2000 instance had just been restarted, so I had less ‘stuff’ in memory than you probably will. You might even get a false positive. My point: YMMV.

As I mentioned above, this technique also works for any SQL login with an established connection, regardless of its privileges on the server. Moreover, this technique also works on SQL Server 2005 and (I believe – haven’t tested it) some builds of 2008 and 2008 R2. It wasn’t too long ago that the SQL team at Microsoft started to finally clear this area of memory after a session’s credentials have been verified.

Hope this was useful.

Cheers,

-Argenis

UPDATE: A couple of folks asked me which build I used to test this. I originally tested it on SP4a, and later confirmed with a more recent build, 8.00.2187 (but not the latest one that’s available only to customers with support agreements).

Leveraging Service SIDs to Logon to SQL Server 2012+ Instances with Sysadmin Privileges

Edit: I have confirmed that this is still valid for all versions of SQL Server 2012 and above. As of this last update (Oct 2019), this works even on the yet-to-be-released SQL Server 2019. The binary paths may change across versions, but the idea is the same.

If you recall one of my previous blog posts, titled Think Your Windows Administrators Don’t Have Access to SQL Server 2008 by Default? Think Again, I exploited the fact that NT AUTHORITY\SYSTEM was granted membership in the sysadmin server role by setup in SQL Server 2008 R2 and below to gain access to a SQL instance to which I had no access: as an Administrator on the box, I could launch a cmd session as NT AUTHORITY\SYSTEM with Sysinternals’ psexec utility.

My friend and SQL Server MVP Jorge Segarra [blog|twitter] correctly pointed out shortly after that NT AUTHORITY\SYSTEM is not a member of the sysadmin server role in SQL Server 2012, codename Denali. And as of Release Candidate 0 for this version, this holds true.

What also holds true as of RC0 is that the Service SIDs for a number of services (at least three: the SQL Engine itself, the SQL VSS Writer, and Winmgmt) are members of the sysadmin role. And so in this post I’d like to demonstrate that it is possible to exploit one of these services’ level of access to hop onto a 2012 (or 2014) instance as sysadmin.
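You can see this for yourself by listing the members of the sysadmin role – on a default RC0 install the NT SERVICE\… logins for those services should show up in the output:

-- List all members of the sysadmin fixed server role
SELECT m.name AS member_name
FROM sys.server_role_members AS rm
JOIN sys.server_principals AS r ON r.principal_id = rm.role_principal_id
JOIN sys.server_principals AS m ON m.principal_id = rm.member_principal_id
WHERE r.name = 'sysadmin'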

The target: a named SQL instance called “DENALI_RC0” on one of my desktop PCs. Having dropped my login on SQL, when I try to logon to the instance I get the usual message:

[Screenshot of the login failure message]

I picked a service to become “the victim”. The SQL VSS Writer service seemed to be a good candidate: innocuous enough. If I stop it and restart it, no big deal.

I launched regedit and browsed to HKLM\SYSTEM\CurrentControlSet\services\SQLWriter – this is what I saw:

[Screenshot of the SQLWriter key in regedit]

Now being an Administrator of this PC as I am, I went ahead and renamed sqlwriter.exe to sqlwriter.exe.orig, and put a copy of SQLCMD.EXE on C:\Program Files\Microsoft SQL Server\90\Shared.

Then I renamed SQLCMD.EXE to sqlwriter.exe.

Obviously kicking off the SQL VSS Writer service was not going to do anything – just error out:

[Screenshot of the service failing to start]

So I replaced the ImagePath for sqlwriter on the registry with this:

"C:\Program Files\Microsoft SQL Server\90\Shared\sqlwriter.exe" -S CSHQAFERNANDEZD\DENALI_RC0 -E -Q "CREATE LOGIN [CORP\Argenis.Fernandez] FROM WINDOWS; EXECUTE sp_addsrvrolemember @loginame = 'CORP\Argenis.Fernandez', @rolename = 'sysadmin'"

And now I kick off the sqlwriter service again, expecting it to error out…but with a nice side effect.

[Screenshot of the service error message]

Sure enough, I launched SSMS 2012 and was able to log in. And guess what: my login has sysadmin privileges.

[SSMS screenshot]

And so I’m sure some of you have already yelled “SECURITY HOLE!!!!” by now – yeah, to a degree…but remember, kids: if you’re a local Administrator on the box, you already own the box. There is very little that applications like SQL Server can do to protect themselves from a “rogue” Admin. Maybe a few adjustments to the security model for Windows’ SCM (Service Control Manager) are needed here, but I’ll let you decide on that.

UPDATE FOR SQL EXPRESS USERS: If you need to leverage this trick on a SQL Server Express install, you can use the “Winmgmt” service – Windows Management Instrumentation, aka WMI service.

Cheers,

-Argenis