
The Myth Around 32-bit SQL Server Instances on 64-bit Operating Systems, and AWE

(I’m taking a break from my DBA Best Practices blog series to talk about this today – the series continues after the Holidays!)

I love to be proved wrong. Really, I do.

Recently we had a discussion on a distribution list where somebody asked whether a SQL Server 32-bit instance could address more than 4Gb of RAM when running on top of a 64-bit OS. One of the really smart guys in Microsoft SQL Server Support, Karthick P.K. [Blog|Blog|Twitter] replied “sure, just add more RAM and enable AWE – SQL will use that memory”. I was quite convinced that this was incorrect, so I jumped in and said that AWE does nothing under those circumstances. After all, that is what I had read in many different articles over the years. I even had some SQL Server MVPs who have been around the block forever agreeing with me.

But the good news is that I was wrong.

AWE does in fact do nothing on 64-bit instances of SQL Server. But under WOW64 – Windows on Windows 64, the subsystem of x64 Windows that lets you run 32-bit processes, including 32-bit instances of SQL Server – AWE does allow you to address memory above 4Gb. Let me show you.

Side note: AWE is deprecated and has been removed in SQL Server 2012 – the last version of SQL Server that supports AWE is SQL Server 2008 R2. Because of this, a SQL Server 2012 x86 instance won’t be able to enjoy more than 4Gb of memory – even when running on WOW64.

I set up a vanilla VM in my lab running a 64-bit OS with 8Gb of memory. MSINFO32 looks like this on the VM:

[Screenshot: MSINFO32 output showing the 64-bit OS and 8Gb of installed physical memory]

I installed a 32-bit SQL Server 2008 R2 SP2 instance on the VM. @@VERSION looked like this:

[Screenshot: @@VERSION output showing the 32-bit (X86) SQL Server 2008 R2 SP2 build]

I did not enable AWE. I did grant the “Perform Volume Maintenance Tasks” and “Lock Pages in Memory” privileges using secpol.msc (not pictured!). I wanted to see what memory consumption would look like without AWE enabled.

This screenshot shows the privileges granted to the SQL Server service account (NETWORK SERVICE, in this case) – I used “whoami /priv”, a quick and dirty way to get the privileges of the service account through xp_cmdshell. Ugh, xp_cmdshell. It’s okay, it’s just a test.
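In case you want to reproduce that quick and dirty check, something along these lines should do it – a sketch that assumes xp_cmdshell is currently disabled (and turns it back off afterwards):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;

EXEC xp_cmdshell 'whoami /priv';

-- Turn xp_cmdshell back off when you're done poking around
EXEC sp_configure 'xp_cmdshell', 0;
RECONFIGURE;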

[Screenshot: whoami /priv output listing the privileges held by the service account]

I went ahead and created a test database with just one big table – big enough to use all of the buffer pool memory on the instance when a size-of-data operation is executed against it. I also set Max Server Memory to 6Gb.
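Setting Max Server Memory is a plain sp_configure change – a minimal sketch, assuming the advanced options haven’t been exposed yet:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- 6Gb = 6144 MB
EXEC sp_configure 'max server memory (MB)', 6144;
RECONFIGURE;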

Here’s the CREATE table script:

CREATE TABLE [dbo].[testTable](
    [id] [int] IDENTITY(1,1) NOT NULL,
    [wideColumn] [nvarchar](4000) NULL,
    PRIMARY KEY CLUSTERED ([id] ASC)
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
              ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO

ALTER TABLE [dbo].[testTable] ADD DEFAULT (replicate(N'A',(4000))) FOR [wideColumn]
GO

Next, I went ahead and inserted 1M rows into the table:

INSERT INTO [dbo].[testTable] DEFAULT VALUES
GO 1000000

If you do the math very quickly, you will agree with me that this not-so-little table will be more than 6Gb in size: each row carries an 8,000-byte nvarchar value, so it takes a full 8Kb page pretty much to itself, and a million rows lands you in the neighborhood of 8Gb. Let’s check anyway:
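The screenshot below shows my own check; plain old sp_spaceused is one quick way to get the same answer:

EXEC sp_spaceused N'dbo.testTable';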

[Screenshot: testTable size output, showing the table is well over 6Gb]

Cool! The table is large enough for the purpose of this test. But what does memory utilization look like after that 1M-row insert?
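I looked at sys.dm_os_process_memory for that – a query along these lines (the comment about where AWE memory shows up is my assumption, not something from the original test):

SELECT physical_memory_in_use_kb / 1024  AS physical_memory_in_use_mb,
       locked_page_allocations_kb / 1024 AS locked_page_allocations_mb,  -- AWE-mapped memory should show up here
       memory_utilization_percentage
FROM   sys.dm_os_process_memory;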

[Screenshot: sys.dm_os_process_memory output showing less than 4Gb in use]

That’s less than 4Gb. Next I turned on AWE and restarted the instance.
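Enabling AWE is another sp_configure change – a minimal sketch, keeping in mind that on 2008 R2 and below the setting only takes effect after a restart:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'awe enabled', 1;
RECONFIGURE;

Then I queried the ERRORLOG to confirm that AWE was enabled: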

[Screenshot: ERRORLOG entry confirming that AWE is enabled]

Okay, since I had just restarted the instance my buffer pool was gone – I needed to force a size-of-data operation on the large table. Rebuilding the clustered index did just fine (I know, that’s probably overkill).
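The rebuild itself is a one-liner:

ALTER INDEX ALL ON dbo.testTable REBUILD;

Then I checked sys.dm_os_process_memory again: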

[Screenshot: sys.dm_os_process_memory output showing well over 4Gb in use]

And there we go! A lot more than 4Gb being used now that AWE is in play. I had set Max Server Memory to 6Gb, as I detailed earlier.

It looks like there’s quite a lot of incorrect documentation regarding this, so I wanted to clear up any confusion – and learn something new along the way. Don’t you love SQL Server?

Happy Holidays and Happy New Year!

-Argenis

P.S.: If you are running IA64 (Itanium) – then this doesn’t apply to you. AWE does not work at all on IA64.

DBA Best Practices – A Blog Series: Episode 2 – Password Lists


Digital World, Digital Locks

One of the biggest digital assets that any company has is its secrets. These include passwords, key rings, certificates, and any other digital asset used to protect another asset from tampering or unauthorized access.

As a DBA, you are very likely to manage some of these assets for your company – and your employer trusts you with keeping them safe. Passwords are probably among the most important of these assets. As you well know, they can be used anywhere: for service accounts, credentials, proxies, linked servers, DTS/SSIS packages, symmetric keys, private keys, etc.

Have you given some thought to what you’re doing to keep these passwords safe? Are you backing them up somewhere? Who else besides you can access them?

Good-Ol’ Post-It Notes Under Your Keyboard

If you have a password-protected Excel sheet for your passwords, I have bad news for you: Excel’s level of encryption is good for your grandma’s budget spreadsheet, not for a list of enterprise passwords.

I will try to summarize the main point of this best practice in one sentence: you should keep your passwords in an encrypted, access- and version-controlled, backed-up, well-known shared location that every DBA on your team is aware of, and maintain copies of this password "database" on your DBAs’ workstations.

Now let me break that statement down for you:

– Encrypted: what’s the point of saving your passwords in a file that any Windows admin with enough privileges can read?

– Access controlled: This one is pretty much self-explanatory.

– Version controlled: passwords change (and I’m really hoping you do change them), and version control allows you to track what a previous password was if the utility you’ve chosen doesn’t handle that for you.

– Backed-up: you want a safe copy of the password list kept offline, preferably in long-term storage, and relatively easy to restore.

– Well-known shared location: This is critical for teams: what good is a password list if only one person in the team knows where it is?

I have seen multiple examples of this that work well. They all start with an encrypted database. Certainly you could leverage SQL Server’s native encryption solutions like cell encryption for this. I have found such implementations to be impractical, for the most part.
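For reference, here is roughly what the cell-level encryption route looks like – a minimal sketch with made-up object names, which also hints at why a purpose-built utility tends to be the more practical choice:

CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<UseAStrongPasswordHere>';
CREATE CERTIFICATE PasswordListCert WITH SUBJECT = 'Password list protection';
CREATE SYMMETRIC KEY PasswordListKey
    WITH ALGORITHM = AES_256
    ENCRYPTION BY CERTIFICATE PasswordListCert;
GO

CREATE TABLE dbo.PasswordList
(
    Id          int IDENTITY(1,1) PRIMARY KEY,
    AccountName sysname NOT NULL,
    Secret      varbinary(8000) NOT NULL
);
GO

OPEN SYMMETRIC KEY PasswordListKey DECRYPTION BY CERTIFICATE PasswordListCert;

INSERT INTO dbo.PasswordList (AccountName, Secret)
VALUES (N'svc_sql_prod', EncryptByKey(Key_GUID('PasswordListKey'), N'SuperSecretValue!'));

SELECT AccountName,
       CONVERT(nvarchar(4000), DecryptByKey(Secret)) AS Secret
FROM   dbo.PasswordList;

CLOSE SYMMETRIC KEY PasswordListKey;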

Enter The World Of Utilities

There are a myriad of open source/free software solutions to help you here. One of my favorites is KeePass, which creates encrypted files that can be saved to a network share, SharePoint, etc. KeePass has UIs for most operating systems, including Windows, MacOS, iOS, Android and Windows Phone.

Other solutions I’ve used before that are worth mentioning include PasswordSafe and 1Password, the latter being a paid solution – but wildly popular on mobile devices.

There are, of course, even more "enterprise-level" solutions available from 3rd party vendors. The truth is that most of the customers I work with don’t need that level of protection for their digital assets, and something like a KeePass database on SharePoint suits them very well.

What are you doing to safeguard your passwords? Leave a comment below, and join the discussion!

Cheers,

-Argenis

DBA Best Practices – A Blog Series: Episode 1 – Backups


This blog post is part of the DBA Best Practices series, in which various topics of concern for daily database operations are discussed. Your feedback and comments are very much welcome, so please drop by the comments section and be sure to leave your thoughts on the subject.

Morning Coffee

When I was a DBA, the first thing I did when I sat down at my desk at work was check that all backups had completed successfully. It really was more of a ritual, since I had a dual system in place to check for backup completion: 1) the scheduled Agent jobs that backed up the databases were set to alert the NOC on failure, and 2) I had a script run from a central server every so often to check for any backup failures.

Why the redundancy, you might ask. Well, for one, I was once bitten by the fact that Database Mail doesn’t work 100% of the time. Potential causes for failure include issues on the SMTP box that relays your server email, firewall problems, DNS issues, etc. So to be sure that my backups completed fine, I needed to rely on a mechanism other than having the servers do the talking – I needed to interrogate the servers and ask each one whether an issue had occurred. This is why I had a script run every so often.
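The original scripts are long gone, but the question each server gets asked boils down to something like this – a sketch that flags databases with no full backup in the last 24 hours (the threshold is arbitrary):

SELECT d.name,
       MAX(b.backup_finish_date) AS last_full_backup
FROM   sys.databases d
LEFT JOIN msdb.dbo.backupset b
       ON b.database_name = d.name
      AND b.type = 'D'  -- D = full database backup
WHERE  d.name <> 'tempdb'
GROUP BY d.name
HAVING MAX(b.backup_finish_date) IS NULL
    OR MAX(b.backup_finish_date) < DATEADD(HOUR, -24, GETDATE());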

Some of you might have monitoring tools in place, like Microsoft System Center Operations Manager (SCOM) or similar 3rd party products, that would track all these things for you. But at that time, we had no choice but to write our own PowerShell scripts to do it.

Now it goes without saying that if you don’t have backups in place, you might as well find another career. Your most sacred job as a DBA is to protect the data from a disaster, and only properly safeguarded backups can offer you peace of mind here.

“But, we have a cluster…we don’t need backups”

Sadly, I’ve heard this line more often than I would have liked. You need to understand that a cluster relies on shared storage, and that is precisely your single point of failure. A cluster will protect you from an issue at the operating system level, and also from an outage of any SQL-related service or dependent resource. But it will most definitely NOT protect you against corruption, nor will it protect you against somebody deleting data from a table – accidentally or otherwise.

Backup, fine. How often do I take a backup?

The answer to this is something you will hear frequently when working with databases: it depends.

What does it depend on? For one, you need to understand how much data your business is willing to lose. This is what’s called Recovery Point Objective, or RPO. If you don’t know how much data your business is willing to lose, you need to have an honest and realistic conversation about data loss expectations with your customers, internal or external. From my experience, their first answer to the question “how much data loss can you withstand?” will be “zero”. In that case, you will need to explain how zero data loss is very difficult and very costly to achieve, even in today’s computing environments.

Do you want to go ahead and take full backups of all your databases every hour, or even every day? Probably not, because of the impact that taking a full backup can have on a system. That’s what differential and transaction log backups are for.
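In case it helps to see the three flavors side by side, here is a minimal sketch – the database name, paths and options are hypothetical:

-- Daily full backup
BACKUP DATABASE MyDatabase TO DISK = N'X:\Backups\MyDatabase_full.bak' WITH CHECKSUM, INIT;

-- Periodic differential backup (changes since the last full)
BACKUP DATABASE MyDatabase TO DISK = N'X:\Backups\MyDatabase_diff.bak' WITH DIFFERENTIAL, CHECKSUM, INIT;

-- Frequent transaction log backups (FULL or BULK_LOGGED recovery model required)
BACKUP LOG MyDatabase TO DISK = N'X:\Backups\MyDatabase_log.trn' WITH CHECKSUM, INIT;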

Have I answered the question of how often to take a backup? No, and I did that on purpose. You need to think about how much time you have to recover from any event that requires you to restore your databases. This is what’s called the Recovery Time Objective, or RTO. Again, if you go ask your customer how long of an outage they can withstand, at first you will get a completely unrealistic number – and that will be your starting point for discussing a solution that is cost effective.

The point that I’m trying to get across is that you need to have a plan. This plan needs to be practiced, and tested. Like a football playbook, you need to rehearse the moves you’ll perform when the time comes. How often is up to you, and the objective is that you feel better about yourself and the steps you need to follow when emergency strikes.

A backup is nothing more than an untested restore

Backups are files. Files are prone to corruption. Put those two together and realize how you feel about those backups sitting on that network drive. When was the last time you restored any of those?

Restoring your backups on another box – one that, by the way, doesn’t have to match the specs of your production server – will give you two things: 1) peace of mind, because now you know that your backups are good, and 2) a place to offload your consistency checks with DBCC CHECKDB or any of the other DBCC commands like CHECKTABLE or CHECKCATALOG. This is a great strategy for VLDBs that cannot withstand the additional load created by the consistency checks.
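On that other box, the routine is simply restore-then-check – a sketch with hypothetical names and paths:

RESTORE DATABASE MyDatabase
FROM DISK = N'X:\Backups\MyDatabase_full.bak'
WITH MOVE N'MyDatabase'     TO N'D:\Data\MyDatabase.mdf',
     MOVE N'MyDatabase_log' TO N'L:\Logs\MyDatabase_log.ldf',
     REPLACE, STATS = 5;

DBCC CHECKDB (N'MyDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS;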

If you choose to offload your consistency checks to another server though, be sure to run DBCC CHECKDB WITH PHYSICAL_ONLY on the production server, and if you’re using SQL Server 2008 R2 SP1 CU4 and above, be sure to enable trace flags 2562 and/or 2549, which will speed up the PHYSICAL_ONLY checks further – you can read more about this enhancement here.
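On the production side that boils down to something like this (trace flags enabled globally; database name hypothetical):

DBCC TRACEON (2549, 2562, -1);
DBCC CHECKDB (N'MyDatabase') WITH PHYSICAL_ONLY;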

Back to the “How Often” question for a second. If you have the disk, the network, and the system resources to do so, why not back up the transaction log often? As in every 5 minutes, or even more frequently than that? There’s not much downside to doing it, as you will have to clear the log with a backup sooner rather than later, lest you risk running out of space in your tlog, or even on your drive. The one drawback to this approach is that you will have more files to deal with at restore time, and processing each file adds a bit of extra time to the entire process. But it might be worth that time, knowing that you minimized the amount of data lost. Again, test your plan to make sure that it matches your particular needs.
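Here is a sketch of what such a frequent log backup might look like with a timestamped file name – the kind of statement you would drop into an Agent job that runs every 5 minutes (database name and path are hypothetical):

DECLARE @file nvarchar(260) =
    N'X:\Backups\MyDatabase_log_' +
    REPLACE(REPLACE(REPLACE(CONVERT(nvarchar(19), GETDATE(), 120), N'-', N''), N':', N''), N' ', N'_') +
    N'.trn';

BACKUP LOG MyDatabase TO DISK = @file WITH CHECKSUM;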

Where to back up to? Network share? Locally? SAN volume?

This is another topic where everybody has a favorite choice, so I’ll stick to mentioning what I like to do and what I consider to be the best practice in this regard. I like to back up to a SAN volume, i.e., a drive that actually lives on the SAN and can be easily attached to another server in a pinch, saving you valuable time – you wouldn’t need to restore files over the network (slow) or pull drives out of a dead server (been there, done that – it’s also slow!).

The key is to have a copy of those backup files made quickly, and, if at all possible, to a remote target on a different datacenter – or even the cloud. There are plenty of solutions out there that can help you put such a solution together. That right there is the first step towards a practical Disaster Recovery plan. But there’s much more to DR, and that’s material for a different blog post in this series.

T-SQL Tuesday #36 (#tsql2sday)– Post-PASS Summit Depression

I had an email thread going with a prominent member of the SQL Server community today, where he confessed that he didn’t attend any sessions during the PASS Summit last week. He spent all of his time networking and catching up with people.

I, personally, can relate.

This year’s Summit was another incarnation of that ritual of SQL Server professionals meeting to share their knowledge, experience, and just have a wonderful time while doing so.

It’s been a few days since the Summit ended, and I’m definitely dealing with withdrawal.

My name is Argenis, and I’m a #SQLFamilyHolic.


(This post is part of the T-SQL Tuesday series, a monthly series of blog posts from members of the SQL Server community – this month, Chris Yates is hosting)

DBA Best Practices: A Blog Series


Introduction

After the success of the “Demystifying DBA Best Practices” Pre-Conference that my good friend Robert Davis, a.k.a. SQLSoldier [Blog|Twitter] and I delivered at multiple events, including the PASS Summit 2012, I have decided to blog about some of the topics discussed at the Pre-Con. My thanks go to Robert for agreeing to share this content with the larger SQL Server community.

This will be a rather lengthy blog series – and as in the Pre-Con, I expect a lot of interaction and feedback. Make sure you throw in your two cents in the comments section of every blog post.

The first topic I’ll discuss in this blog series is the thing of utmost importance for any Database Administrator: the data. Let’s discuss the importance of backups and a solid restore strategy.

Care to share your thoughts on this subject in the comments section below?

Database Mail and SMO are indeed supported on 64-bit, Standard Edition instances of SQL Server 2012


This is something that comes up rather regularly at forums, so I decided to create a quick post to make sure that folks out there can feel better about SQL Server 2012.

If you read this Web article, “Features Supported By Editions of SQL Server 2012”, as of the time of writing this post, you will see that it points out that these two features are not supported on x64 Standard Edition. This is NOT correct. It is most definitely a documentation bug – one that unfortunately has caused some customers to sit in a holding pattern before upgrading to SQL Server 2012.

Database Mail and SMO indeed work and are fully supported on SQL Server 2012 Standard Edition x64 instances. These features work as they should.
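If you want to convince yourself on a Standard Edition x64 instance, a quick smoke test like this works once Database Mail is configured (the profile name and recipient are hypothetical):

EXEC msdb.dbo.sp_send_dbmail
     @profile_name = N'DBA Mail Profile',
     @recipients   = N'dba-team@example.com',
     @subject      = N'Database Mail test from Standard Edition x64',
     @body         = N'If you can read this, Database Mail is working just fine.';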

I have contacted the documentation teams internally to make sure that this is corrected in future revisions of said Web article.

Ghost Records, Backups, and Database Compression…With a Pinch of Security Considerations

Today Jeffrey Langdon (@jlangdon) posed the following questions on #SQLHelp:

[Screenshot: Jeffrey’s #SQLHelp tweets asking the two questions addressed below]

So I set out to answer his questions, and I said to myself: “Hey, I haven’t blogged in a while, how about I blog about this particular topic?” Thus, this post was born.

(If you have never heard of Ghost Records and/or the Ghost Cleanup Task, go see this blog post by Paul Randal)

1) Do ghost records get copied over in a backup?

If you guessed yes, you guessed right. The backup process in SQL Server takes all data as it is on disk – it doesn’t crack the pages open to selectively pick which slots have actual data and which ones do not. The whole page is backed up, regardless of its contents.

Even if ghost cleanup has run and processed the ghost records, the slots are not overwritten immediately – that only happens when another DML operation comes along and reuses them.

As a matter of fact, all of the allocated space for a database will be included in a full backup.

So this poses a bit of a security/compliance problem for some of you DBA folk: if you want to take a full backup of a database after you’ve purged sensitive data, you should rebuild all of your indexes (with FILLFACTOR set to 100%). But the empty space in your data file(s) might still contain sensitive data! A SHRINKFILE might help get rid of that (not so) empty space, but that might not be the end of your troubles: you might _STILL_ have (not so) empty space in your files!
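The rebuild part of that advice is a one-liner per table – the table name here is hypothetical, and keep in mind it only rewrites the pages the indexes actually use, not the free space discussed above:

ALTER INDEX ALL ON dbo.SensitiveTable REBUILD WITH (FILLFACTOR = 100);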

One approach that you can follow is to export all of the data in your database to another SQL Server instance that does NOT have Instant File Initialization enabled. This can be a tedious and time-consuming process, though. So you have to weigh your options and see what makes sense for you. Snapshot Replication is another idea that comes to mind.

2) Does Compression get rid of ghost records (2008)?

The answer to this is no. The Ghost Records/Ghost Cleanup Task mechanism is alive and well on compressed tables and indexes. You can prove this by running a simple script:

CREATE DATABASE GhostRecordsTest
GO

USE GhostRecordsTest
GO

CREATE TABLE myTable (myPrimaryKey int IDENTITY(1,1) PRIMARY KEY CLUSTERED,
                      myWideColumn varchar(1000) NOT NULL DEFAULT 'Default string value')
                       
ALTER TABLE myTable REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = PAGE)
GO

INSERT INTO myTable DEFAULT VALUES
GO 10

DELETE myTable WHERE myPrimaryKey % 2 = 0

DBCC TRACEON(2514)

DBCC CHECKTABLE(myTable)

Trace flag 2514 will make DBCC CHECKTABLE give you an extra tidbit of information in its output. For the above script: “Ghost Record count = 5”
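If you prefer a DMV over a trace flag, sys.dm_db_index_physical_stats exposes the same counts – a quick sketch against the test table above:

SELECT index_id, ghost_record_count, version_ghost_record_count
FROM   sys.dm_db_index_physical_stats(DB_ID(N'GhostRecordsTest'),
                                      OBJECT_ID(N'dbo.myTable'),
                                      NULL, NULL, 'DETAILED');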

Until next time,

-Argenis

T-SQL Tuesday #028 – Jack of All Trades, Master of None?

[T-SQL Tuesday logo]

I’ve lost count of how many times I’ve heard that phrase.

Are you specialized? On something? Or anything at all? Has that been a good or a bad thing? Why?

Are you the SQL guy at work? Or the one who does everything?

Do you code? And configure wireless routers at work also?

If you had to pick one thing to specialize in, what would it be?

Over the course of my career I’ve worn many, many hats. I always felt I was doing fine and had a stable job, but I wasn’t quite fond of my prospects for the future. Then a friend said that I should focus on one thing and be the best at it. And while I’m most certainly NOT the best at it, I’ve gotten progressively better at it, to the degree that I’ve been called an ‘Expert’ by some (hate that word!) – I’d rather be called ‘knowledgeable’. My career took off like a rocket after I specialized, and choosing to focus on one thing (SQL Server, in my case) has certainly been one of the best decisions I’ve ever made. I’ve also been careful not to forget my roots as a SysAdmin – I always try to keep up with changes on the Windows/SAN/networking front, just not with the same level of intensity.

So, in this installment of T-SQL Tuesday I’d like to ask you to blog about your experience. Tell us why you specialized, or why you’d like to specialize. If you don’t think that specialization is a good thing, tell us why. Discuss. Argue your point(s).

Rules

  • Your post must be published between 00:00 GMT Tuesday, March 13th, 2012, and 00:00 GMT Wednesday, March 14th, 2012
  • Your post must contain the T-SQL Tuesday logo from above and the image should link back to this blog post.
  • Trackbacks should work, but if you don’t see one please link to your post in the comments section below so everyone can see your work

Optional, but highly encouraged…

  • Include a reference to T-SQL Tuesday in the title of your post
  • Tweet about your post using the hash tag #TSQL2sDay
  • Consider hosting T-SQL Tuesday yourself. Adam Machanic keeps the list.

A Few Thoughts on the SQL Server MCM Program

I took the MCM lab a couple of weeks ago and passed. I still don’t believe it – it’s been quite the journey.

There have been many decisive moments in my career; one of those was back in 2010 when I decided I would work on obtaining the certification after Brent Ozar blogged rather extensively about his experience with the program. I progressively worked on aspects of SQL Server that were out of my comfort zone – and that paid off. But simply put, I have the cert – but still don’t consider myself a true Master. And I probably never will.

I might be okay in certain areas of SQL, maybe even good in others – but I don’t master them all. I intend to get better in those areas, but that won’t happen overnight.

SQL Server is a mammoth of a product. It’s actually a suite of products – and the MCM certification focuses on only one aspect of it: the Database Engine. No SSAS, no SSIS, no SSRS.

So you might be wondering: Is it worth it then? Why work so hard on obtaining a certification that calls you “Master” when it doesn’t test your skills on every single feature of SQL Server?

To me, it was. And then some.

The thing I value most after going through the program? Knowing exactly what I don’t know. In other words, looking at a problem and knowing that I won’t be able to solve it right away, yet still knowing where to begin digging for information to conquer it.

You see, when I decided to go for the MCM I would have probably ranked myself a 6/10. So I went and trained myself in those areas where I felt I wasn’t strong enough. I read blogs, I watched videos, I attended SQLSkills immersion events (lucky me!), I went to SQLSaturdays, and even a couple of PASS Summits – but all of that was theory, not a whole lot of practice. And when you get to the MCM Lab, you had better have practiced. You had better have familiarized yourself with that particular feature you’re being challenged with – at least a tiny bit. Learning from BOL on the fly during the MCM Lab is probably just going to be a waste of time – trust me on that one.

The other thing I learned along the way? The power of networking. If you are a SQL Server professional, in any function – DBA, Dev or what have you – allow yourself a couple of minutes to brag about what you do: our community is awesome. Better than anything I’ve seen out there. I haven’t met a single person in the SQL community that’s not willing to help a colleague.

And that’s precisely what helped me prepare the most for the MCM: helping others. Perhaps that will help you get there as well?

Cheers,

-Argenis