
I have been noticing a very common error that occurs while trying to fail over an Availability Group in a SQL Server 2012 AlwaysOn setup: the Availability Group fails to come online during the failover process.

Here is a snippet of the error message:


Failed to bring availability group ‘availability_group’ online. The operation timed out. Verify that the local Windows Server Failover Clustering (WSFC) node is online. Then verify that the availability group resource exists in the WSFC cluster. If the problem persists, you might need to drop the availability group and create it again.

This error is usually caused by the [NT AUTHORITY\SYSTEM] account lacking the permissions it needs to manage the availability group. Running the below SQL script on all the secondary replicas fixed this issue.

GRANT ALTER ANY AVAILABILITY GROUP TO [NT AUTHORITY\SYSTEM]
GO
GRANT CONNECT SQL TO [NT AUTHORITY\SYSTEM]
GO
GRANT VIEW SERVER STATE TO [NT AUTHORITY\SYSTEM]
GO
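
To confirm the grants took effect, a quick check like the one below (a sketch against the standard catalog views; run it on each replica) lists the server-level permissions held by [NT AUTHORITY\SYSTEM]:

SELECT pr.name AS login_name,
       pe.permission_name,
       pe.state_desc
FROM sys.server_permissions pe
JOIN sys.server_principals pr
    ON pr.principal_id = pe.grantee_principal_id
WHERE pr.name = N'NT AUTHORITY\SYSTEM'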

According to Microsoft, the [NT AUTHORITY\SYSTEM] account is used by SQL Server AlwaysOn health detection to connect to the SQL Server computer and to monitor health. When you create an availability group, health detection is initiated when the primary replica in the availability group comes online. If the [NT AUTHORITY\SYSTEM] account does not exist or does not have sufficient permissions, health detection cannot be initiated, and the availability group cannot come online during the creation process.

The below snippet shows that NT AUTHORITY\SYSTEM runs sp_server_diagnostics, an internal procedure (new in SQL Server 2012) that runs on a continuous basis. It captures diagnostic data and health information about SQL Server to detect potential failures, much like the default trace (MSSQL\LOG\*.TRC) did in the past.

[Screenshot: session owned by NT AUTHORITY\SYSTEM running sp_server_diagnostics]
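
If you want to reproduce that check yourself, a query along these lines (a sketch over the standard DMVs; the column choice is mine) lists the sessions owned by NT AUTHORITY\SYSTEM and what they are executing:

SELECT s.session_id,
       s.login_name,
       s.program_name,
       t.text AS running_batch
FROM sys.dm_exec_sessions s
LEFT JOIN sys.dm_exec_requests r
    ON r.session_id = s.session_id
OUTER APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE s.login_name = N'NT AUTHORITY\SYSTEM'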

 

Hope this article helped you understand the importance of the NT AUTHORITY\SYSTEM account on AlwaysOn-configured SQL Server 2012 servers.

via Failed to bring availability group ‘[availability group name]’ online.

Last week, on one of our production servers (version 2011.110.3373.0), we encountered a strange issue that is relatively new (it only applies to SQL Server 2012 and 2014). Based on the symptoms observed and a quick Bing search, we found we had hit a product bug!

I observed multiple SPIDs, around 300+, running the same stored procedure, which does an INSERT operation, all in a suspended state; they seemed to have been hanging there for at least 45 minutes (they usually finish in a few seconds). There was no blocking, and I found nothing wrong with the SQL Server except that I started seeing elevated values for the PWAIT_MD_RELATION_CACHE and MD_LAZYCACHE_RWLOCK wait types. And, oh! I also noticed that the tempdb data files were 99% full.
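
If you run into something similar, a quick way to confirm the symptom (a sketch; the wait-type filter is the relevant part) is to look for requests stuck on those wait types:

SELECT r.session_id,
       r.status,
       r.command,
       r.wait_type,
       r.wait_time
FROM sys.dm_exec_requests r
WHERE r.wait_type IN (N'PWAIT_MD_RELATION_CACHE', N'MD_LAZYCACHE_RWLOCK')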

So I thought I had found the root cause and tried mitigating the problem by adding extra tempdb space, but that didn't help. As a last resort I had to fail over the services to another node, essentially restarting SQL Server, to bring the server back to a fully functional state.

On further investigation we found the root cause: someone had tried to create an index (online) to improve performance, but later cancelled the CREATE INDEX, and we hit this bug. The bug only affected the table on which the index operation was attempted, so everything else was operational; only the stored procedure running on multiple SPIDs and trying to INSERT into that table was blocked, impairing that part of the application's functionality.

Resolution: Cumulative Update 9 for SQL Server 2012 SP1; Cumulative Update 1 for SQL Server 2014

Here are more details about this problem: http://support.microsoft.com/kb/2926712
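
To check whether a server is already on a patched build, the product version and level are the first things to look at (compare the result against the CU build numbers in the KB article above):

SELECT SERVERPROPERTY('ProductVersion') AS product_version,
       SERVERPROPERTY('ProductLevel') AS product_level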

An auto-growth event is the process by which the SQL Server engine expands the size of a database file when it runs out of space. The amount by which a database file grows is determined by the value set in its file configuration. Every organization has its own standards for this setting; however, there are some widely followed best practices, for instance that the auto-growth increment should not be a percentage value.

In this post I have put down two scripts that you can customize per your organizational standards. The first script helps you identify the current settings and the second one can be used to update the auto-growth settings. Both scripts target all databases by default; however, you can call out the databases that you want to exclude from this change. Please make sure that you test these changes before pushing them out to production servers.

Script 1: Identify the current database settings

select DB_NAME(mf.database_id) database_name
, mf.name logical_name
, CONVERT (DECIMAL (20,2) , (CONVERT(DECIMAL, size)/128)) [file_size_MB]
, CASE mf.is_percent_growth
WHEN 1 THEN 'Yes'
ELSE 'No'
END AS [is_percent_growth]
, CASE mf.is_percent_growth
WHEN 1 THEN CONVERT(VARCHAR, mf.growth) + '%'
WHEN 0 THEN CONVERT(VARCHAR, mf.growth/128) + ' MB'
END AS [growth_in_increment_of]
, CASE mf.is_percent_growth
WHEN 1 THEN
CONVERT(DECIMAL(20,2), (((CONVERT(DECIMAL, size)*growth)/100)*8)/1024)
WHEN 0 THEN
CONVERT(DECIMAL(20,2), (CONVERT(DECIMAL, growth)/128))
END AS [next_auto_growth_size_MB]
, CASE mf.max_size
WHEN 0 THEN 'No growth is allowed'
WHEN -1 THEN 'File will grow until the disk is full'
ELSE CONVERT(VARCHAR, mf.max_size)
END AS [max_size]
, physical_name
from sys.master_files mf

Script 2: Update database auto-growth values

/*
****MODIFICATION REQUIRED for AUTOGROWTH -- see the MODIFY FILE statement in the cursor loop below****
1) Use this script to change the auto-growth setting
   for all databases
2) If you want to exclude any databases, add them in the
   commented WHERE clause of the cursor declaration below
3) Tested on SQL Server 2012 and 2014
*/

IF EXISTS(SELECT name FROM sys.sysobjects WHERE name = N'ConfigAutoGrowth' AND xtype='U')
DROP TABLE ConfigAutoGrowth
GO
CREATE TABLE DBO.ConfigAutoGrowth
(
iDBID INT,
sDBName SYSNAME,
vFileName VARCHAR(max),
vGrowthOption VARCHAR(12)
)
PRINT 'Table ConfigAutoGrowth Created'
GO
-- Inserting data into staging table
INSERT INTO DBO.ConfigAutoGrowth
SELECT
SD.database_id,
SD.name,
SF.name,
CASE SF.status & 0x100000
WHEN 1048576 THEN 'Percentage'
WHEN 0 THEN 'MB'
END AS 'GROWTH Option'
FROM SYS.SYSALTFILES SF
JOIN
SYS.DATABASES SD
ON
SD.database_id = SF.dbid
GO

-- Dynamically alters the file to set auto growth option to fixed mb
DECLARE @name VARCHAR ( max ) -- Database Name
DECLARE @dbid INT -- DBID
DECLARE @vFileName VARCHAR ( max ) -- Logical file name
DECLARE @vGrowthOption VARCHAR ( max ) -- Growth option
DECLARE @Query VARCHAR(max) -- Variable to store dynamic sql

DECLARE db_cursor CURSOR FOR
SELECT
idbid,sdbname,vfilename,vgrowthoption
FROM configautogrowth
--WHERE sdbname NOT IN ( 'master' ,'msdb' ) --<<--ADD DBs TO EXCLUDE
--AND vGrowthOption IN ('Percentage', 'MB')

OPEN db_cursor
FETCH NEXT FROM db_cursor INTO @dbid,@name,@vfilename,@vgrowthoption
WHILE @@FETCH_STATUS = 0
BEGIN
PRINT 'Changing AutoGrowth option for database:- '+ UPPER(@name)

/******If you want to change the auto-growth size to a value other than 5 MB,
just modify the FILEGROWTH value in the statement below *********/

SET @Query = 'ALTER DATABASE ['+ @name +']
MODIFY FILE (NAME = '+@vFileName+',FILEGROWTH = 5MB)' --<<--ADD AUTOGROWTH SIZE HERE

EXECUTE(@Query)

FETCH NEXT FROM db_cursor INTO @dbid,@name,@vfilename,@vgrowthoption
END
CLOSE db_cursor -- Closing the cursor
DEALLOCATE db_cursor -- Deallocating the cursor

GO
-- Querying system views to see if the changes are applied
DECLARE @SQL VARCHAR(8000)
SET @SQL=' USE ?
SELECT ''?'' [Dbname]
,[name] [Filename]
,CASE is_percent_growth
WHEN 1 THEN CONVERT(VARCHAR(5),growth)+''%''
ELSE CONVERT(VARCHAR(20),(growth/128))+'' MB''
END [Autogrow_Value]
,CASE max_size
WHEN -1 THEN CASE growth
WHEN 0 THEN CONVERT(VARCHAR(30),''Restricted'')
ELSE CONVERT(VARCHAR(30),''Unlimited'') END
ELSE CONVERT(VARCHAR(25),max_size/128)
END [Max_Size]
FROM ?.sys.database_files'

IF EXISTS(SELECT 1 FROM tempdb..sysobjects WHERE name='##Fdetails')
DROP TABLE ##Fdetails
CREATE TABLE ##Fdetails
(Dbname VARCHAR(50),Filename VARCHAR(50),
Autogrow_Value VARCHAR(15),Max_Size VARCHAR(30))
INSERT INTO ##Fdetails
EXEC sp_msforeachdb @SQL
SELECT * FROM ##Fdetails ORDER BY Dbname

--Dropping the staging table
DROP TABLE ConfigAutoGrowth
GO

In order to reduce tempdb contention, one of the best practices is to maintain multiple, equally sized tempdb data files, matching the number of logical processors up to a maximum of 8. In this post I will show you T-SQL scripts to identify the current tempdb configuration and the number of logical processors, along with adding additional tempdb data files as required.

Script 1: Find current tempdb configuration


select DB_NAME(mf.database_id) database_name
, mf.name logical_name, mf.file_id
, CONVERT (DECIMAL (20,2)
, (CONVERT(DECIMAL, size)/128)) as [file_size_MB]
, CASE mf.is_percent_growth
WHEN 1 THEN 'Yes'
ELSE 'No'
END AS [is_percent_growth]
, CASE mf.is_percent_growth
WHEN 1 THEN CONVERT(VARCHAR, mf.growth) + '%'
WHEN 0 THEN CONVERT(VARCHAR, mf.growth/128) + ' MB'
END AS [growth_in_increment_of]
, CASE mf.is_percent_growth
WHEN 1 THEN CONVERT(DECIMAL(20,2)
,(((CONVERT(DECIMAL, size)*growth)/100)*8)/1024)
WHEN 0 THEN CONVERT(DECIMAL(20,2)
, (CONVERT(DECIMAL, growth)/128))
END AS [next_auto_growth_size_MB]
, physical_name from sys.master_files mf
where database_id =2 and type_desc= 'rows'

Script 2: Find number of logical processors

SELECT cpu_count AS logicalCPUs FROM sys.dm_os_sys_info

Script 3: Add tempdb data files as per processor count from the above query

ALTER DATABASE tempdb ADD FILE ( NAME = N'tempdev2',
FILENAME = N'D:\DBA\Data\tempdev2.ndf' , SIZE =8MB , FILEGROWTH = 5MB) --<<--Update the data file location/Size/AutoGrowth
GO

ALTER DATABASE tempdb ADD FILE ( NAME = N'tempdev3',
FILENAME = N'D:\DBA\Data\tempdev3.ndf' , SIZE =8MB , FILEGROWTH = 5MB)--<<--Update the data file location/Size/AutoGrowth
GO
---ETC, add files as per processors count

Reboot/Restart of SQL services is not required for making the tempdb changes. Here is a great post that explains how to best remove extra Tempdb files.
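
For the reverse direction, here is a minimal sketch of removing an extra tempdb data file (assuming a file with the logical name 'tempdev3'); note that EMPTYFILE can fail if pages in the file are still in use, in which case you may need to retry later:

USE tempdb
GO
DBCC SHRINKFILE (N'tempdev3', EMPTYFILE)
GO
ALTER DATABASE tempdb REMOVE FILE tempdev3
GO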


Oftentimes it is required to find out the SQL Server virtual name from the host name in a clustered environment so you can use it to connect through SSMS. There are many ways you can achieve this; however, I am showing 2 methods here. Method 1 is the most common way, using Failover Cluster Manager (ideal when there is one or just a few servers), and the second method gets the same information using PowerShell (ideal when you are dealing with a large number of servers).

Method 1:

The easiest way (if not the fastest) is to load up the Failover Cluster Manager and go to each Failover Clustered instance to look for the virtual SQL names. You can also easily see which FCIs are running on a particular node by selecting a node and viewing what resources it currently owns as shown in the below snippets.

[Screenshot: Failover Cluster Manager showing the SQL Server virtual name resources owned by the selected node]

This is the same for Windows 2003/ 2008+ and SQL 2005/ 2008.

Method 2:

While the above method is pretty easy, it's not practical to log into each server and load FCM when you have hundreds of servers. For that you can use this simple PowerShell script, which can be run remotely and can be fed as many servers as you want. It will output the SQL virtual names, including those with named instances. Please note that the PowerShell cmdlets used here require admin rights on all of the nodes in the cluster, or a security exception will be thrown.

Steps:

1. Create a text file “TestServers” on your desktop that contains the list of servers (host names) that you want to run the code against.


2. Open PowerShell with admin privileges.

3. Use the below script to load the server list into a variable (copy and paste the below code into PowerShell and press Enter; make sure to change the location of your text file).

$machines = get-content C:\Users\Desktop\testservers.txt 

4. Use the below script to call the servers from the above variable to get the output

import-module failoverclusters
foreach ($machine in $machines) {
    get-clusterresource -Cluster $machine -ErrorAction SilentlyContinue |
        where-object {$_.ResourceType -like "SQL Server"} |
        get-clusterparameter VirtualServerName,InstanceName |
        group-object ClusterObject |
        select-object @{Name = "SQLInstance";
            Expression = {[string]::join("\", ($_.Group | select-object -ExpandProperty Value))}}
}
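
As a cross-check once you are able to connect to an instance, T-SQL can also tell you the virtual SQL name versus the physical node currently hosting it:

SELECT SERVERPROPERTY('ServerName') AS sql_virtual_name,
       SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS current_host_node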

Here is the session I gave for the techies at Microsoft about in-built tools available in SQL Server to analyze a query performance. This is a bit lengthy, but covers ALL out of the box tools in SQL Server to get query metrics.

Though there are several ways of importing SSIS packages into SQL Server, either by creating a stored procedure or using an SSIS package itself, I find this simple line of code much easier and handier to work with. It utilizes the “dtutil.exe” application, which is installed by default with SQL Server. It is capable of importing and exporting packages and is found in the Binn folder of SQL Server (for example: C:\Program Files\Microsoft SQL Server\110\DTS\Binn\dtutil.exe).

Below is a post from Suresh, which describes how dtutil works for importing and exporting one package at a time.

How to Copy or Export an SSIS Package Using Command Prompt Utility – DTUTIL

In this article we will look at deploying multiple SSIS packages.

Demo for deploying multiple packages.

1. Open command prompt where the packages (.dtsx) are stored

Tip: Hold ‘Shift’ and right click to open command prompt window at the desired location.


2. Execute the dtutil script from cmd.

for %I in (*.dtsx) do dtutil /FILE "%I" /COPY SQL;"/Packages/%~nI" /DESTSERVER localhost


Note: make the below changes before you run the above script.

  • Change the package folder location where the SSIS packages will be deployed
  • Change the server name at the end of the code to reflect the destination server. We can choose ‘localhost’ if running this within the server as shown in the below figure.


As seen in the above figure, all our packages are imported into the server.

For more dtutil Utility options please see the MSDN article HERE.

Before we get started on this topic, here is a quick fact: in SQL Server 2000, there was a hard limit on the data that could be stored in a single row, which is 8,060 bytes. If the data exceeded this limit, the update or insert operation would fail!

Fortunately, in later SQL Server versions rows are managed dynamically, and the combined width of a row can now exceed the 8,060-byte limit. I wanted to refresh this in our memory as it will help us better understand the allocation units concept.

What are Allocation Units in SQL Server:

Every partition in a SQL Server table can contain 3 types of data, each stored on its own set of pages. And each of these types of pages is called an Allocation Unit. Below are the 3 types of Allocation Units.

  • IN_ROW_DATA
  • ROW_OVERFLOW_DATA
  • LOB_DATA

So, an Allocation Unit is basically just a set of pages of a particular type. Now, let us try to understand each of these allocation units using a demo.

  • IN_ROW_DATA 

When the row size stays within the 8,060-byte limit, SQL Server stores all of the data in the IN_ROW_DATA allocation unit and usually this unit holds the majority of data in most of the applications.

To better explain the concept, I came up with this simple Demo:

--Create a sample db AllocationUnitsDemo
USE master
GO
CREATE DATABASE AllocationUnitsDemo
GO

--Create a sample table ProductDetails in the AllocationUnitsDemo db
--Total length of the row in this table is 1000 + 4000 = 5000 bytes (< 8,060)
Use AllocationUnitsDemo
GO
CREATE TABLE ProductDetails
(
ProductName varchar(1000),
ProductDesc varchar (4000)
)
GO

--Check the allocation unit type
Use AllocationUnitsDemo
GO
SELECT type_desc, total_pages, used_pages,data_pages 
FROM sys.allocation_units
WHERE container_id = (SELECT partition_id FROM sys.partitions 
WHERE OBJECT_ID = OBJECT_ID('ProductDetails'))

Results:
[Screenshot: a single IN_ROW_DATA allocation unit is returned]
  • ROW_OVERFLOW_DATA 

Remember the introduction? When a row exceeds the 8,060-byte limit, SQL Server moves one or more of the variable-length columns to pages in the ROW_OVERFLOW_DATA allocation unit.

We still have a limitation here on the size of individual columns. Though the combined width of the row can exceed the 8,060-byte limit, the individual width of each column must stay within the 8,000-byte limit. This means we can have a table with two columns defined as varchar(5000) and varchar(5000), but we are not allowed a varchar(10000) column.
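
To illustrate the column-width limit on its own, here is a quick check (hypothetical table names, separate from the demo below):

--Two varchar(5000) columns in one table are allowed; the row is simply able to overflow
CREATE TABLE WidthLimitDemo
(
Col1 varchar(5000),
Col2 varchar(5000)
)
GO
--A single column wider than 8,000 bytes is rejected:
--CREATE TABLE WidthLimitDemo2 (Col1 varchar(10000)) --fails; use varchar(max) instead
DROP TABLE WidthLimitDemo
GO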

Demo Continued..

--Add an extra column to the above table ProductDetails
--ProductSummary nvarchar(4000) can take up to 8,000 bytes, pushing the row past the 8,060-byte limit
Use AllocationUnitsDemo
GO
ALTER TABLE ProductDetails ADD ProductSummary nvarchar(4000) 

--Now, Check the allocation unit type
Use AllocationUnitsDemo
GO
SELECT type_desc, total_pages, used_pages,data_pages 
FROM sys.allocation_units
WHERE container_id = (SELECT partition_id FROM sys.partitions 
WHERE OBJECT_ID = OBJECT_ID('ProductDetails'))

Results:
[Screenshot: a ROW_OVERFLOW_DATA allocation unit now appears alongside IN_ROW_DATA]
  • LOB_DATA 

If a column with a LOB data type is defined, then SQL Server uses the LOB_DATA allocation unit. To know which data types are considered LOB and to get the list of LOB columns from a database, please refer to my previous post: “SQL Server – Find all the LOB Data Type Columns in a Database Using T-SQL Script”.

Demo Continued..

--Add LOB data type column to the table ProductDetails
Use AllocationUnitsDemo
GO
ALTER TABLE ProductDetails ADD ProductImage Image

--Again, Check the allocation unit type
Use AllocationUnitsDemo
GO
SELECT type_desc, total_pages, used_pages,data_pages 
FROM sys.allocation_units
WHERE container_id = (SELECT partition_id FROM sys.partitions 
WHERE OBJECT_ID = OBJECT_ID('ProductDetails'))

Results:
[Screenshot: a LOB_DATA allocation unit is now listed as well]
--Cleanup
Use master
GO
DROP DATABASE AllocationUnitsDemo

How many Allocation Units can a Table have?

It actually depends on the number of partitions and indexes on the table.

To simplify the concept, as shown in the below picture, assume there is one table having no indexes (a heap) and no partitions. Having no partitions means all of the table’s contents are stored in a single partition, meaning every table has at least 1 partition.

[Figure: a heap table with a single partition and its three possible allocation units]

Based on the above, we can have up to 3 allocation units for a table with no partitions and no indexes. And what if we do have partitions and indexes? Below is the formula I came up with to get the maximum possible number of allocation units per table.

  • No of Allocation Units = No of Partitions × No of Indexes × 3

[Figure: maximum of 15,000 partitions × 1,000 indexes × 3 allocation unit types = 45,000,000 allocation units]

So, as we see from the figures above, a table can have up to 45 million allocation units in SQL Server 2012!
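
If you are curious how many allocation units your tables actually have right now, a query along these lines (a sketch over sys.partitions and sys.allocation_units) counts them per table in the current database:

SELECT OBJECT_NAME(p.object_id) AS table_name,
       COUNT(*) AS allocation_units
FROM sys.partitions p
JOIN sys.allocation_units au
    ON au.container_id IN (p.hobt_id, p.partition_id)
GROUP BY p.object_id
ORDER BY allocation_units DESC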

In this article I will show you how to restore a database that is encrypted with Transparent Data Encryption (TDE) on to a different server. Let’s use the same database ‘TDE_Test’ that we have created in the previous post HERE

Restoring a database to a different SQL Server instance is usually a straightforward task. However, for a database encrypted with TDE, the attempt fails with an error like the one shown below when restoring onto a different instance.

USE [master]
RESTORE DATABASE [TDE_Test_restore] FROM  
DISK = N'C:\Backup\TDE_Test_withtde.bak' 
WITH  FILE = 1,  NOUNLOAD,  REPLACE,  STATS = 5

Output:

Msg 33111, Level 16, State 3, Line 2
Cannot find server certificate with thumbprint..
Msg 3013, Level 16, State 3, Line 2
RESTORE DATABASE is terminating abnormally

To restore successfully, we will need to physically copy the certificate (.cer) and private key (.pvk) to the destination server. As a best practice, we should back up the certificate and the private key immediately when we enable TDE. However, if that was not done earlier, we can still back up the certificate and private key on the source server now, as shown below.

USE master; 
GO 
BACKUP CERTIFICATE TDECert
TO FILE = 'E:\Backup\certificate_TDE_Test_Certificate.cer'
WITH PRIVATE KEY
(FILE = 'E:\Backup\certificate_TDE_Test_Key.pvk',
ENCRYPTION BY PASSWORD = 'Password12#')

Create a master key on the destination server.

The password provided here is different from the one we used in the source server since we are creating a new master key for this server.

USE master
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'D1ffPa$$w0rd'

After the master key has been created, create a certificate by importing the certificate we created earlier. Here the ‘DECRYPTION BY PASSWORD’ parameter is the same as the one provided when we exported the certificate to a file.

CREATE CERTIFICATE TDECert2
FROM FILE = 'E:\cert_Backups\certificate_TDE_Test_Certificate.cer'
WITH PRIVATE KEY (FILE = 'E:\cert_Backups\certificate_TDE_Test_Key.pvk', 
DECRYPTION BY PASSWORD = 'Password12#')

Restore the database on the destination server

We will now be able to restore the encrypted database backup successfully.

USE [master]
RESTORE DATABASE [TDE_Test] FROM  DISK = N'F:\Backup\TDE_Test_withtde.bak' 
WITH  FILE = 1, NOUNLOAD,  REPLACE,  STATS = 5

Listed below are some of the advantages and disadvantages of using TDE.

Advantages

  • The performance impact of TDE is minor, estimated to be around 3-5%
  • Performs real-time I/O encryption and decryption of the data and log files
  • Encrypts the entire database at rest
  • No architectural changes needed
  • No application code changes are required and the user experience is the same
  • Easy to implement
  • DBAs can still see the data

Disadvantages

  • No protection for data in memory
  • Not granular – Cannot just encrypt specific tables/columns
  • Adds CPU overhead, so not ideal for servers that are already CPU-bound
  • Data is not protected in transit over the network (TDE is encryption at rest only)

Being part of the database engineering team, we were given the task of setting up a disaster recovery program for some of our SQL Servers hosting critical applications. After a few discussions, we planned to make use of log shipping to sync data between the two data centers. This required us to copy the backup files to the Disaster Recovery (DR) site for the initial setup. But in a little while, we realized that moving 30 TB of data over the network at approximately 12 mbps might not be a workable option, considering 45 days of wait time along with something that chews up all our bandwidth. So we decided instead to ship the hard drives in a truck to the DR site, which is hundreds of miles away.

Everything was good until one of the IT Directors who was passing by our desk expressed uncertainty and asked, “Isn’t that like a huge security risk if we lose our disks to a competitor?”

I jumped in and replied with a smile, “Sir, not when we encrypt our data with Transparent Data Encryption.”

And the Boss was happy :)

What is Transparent Data Encryption?

Transparent Data Encryption (TDE) is a feature introduced in SQL Server 2008 that allows encryption of the entire database while providing real-time encryption of data files (.mdf) and log files (.ldf). Apart from this, TDE also encrypts any backups taken while encryption is enabled. Data is encrypted before it is written to disk and decrypted when it is read from disk into memory. This helps prevent unauthorized access to the data and backup files.
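
A quick way to see which databases on an instance already have TDE turned on is the is_encrypted flag in sys.databases:

SELECT name, is_encrypted
FROM sys.databases
ORDER BY name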

Demo

I will show you an example of how anyone can read the backup files even without restoring the backup into another database.

I have created a database called TDE_Test that contains address details as shown below.(This database basically contains a few tables imported from the AdventureWorks2012 sample database. For downloading and installing AdventureWorks2012, refer HERE)


Figure 1: Result set showing Address data

Let’s backup the ‘TDE_Test’ database

BACKUP DATABASE [TDE_Test] 
TO DISK = 'E:\Backup\TDE_Test__Backup.bak' 
WITH INIT, STATS = 5

Open the backup file using Notepad and search for the address ‘5 4 4   M a g d a   W a y’, as shown in Figure 2 (note the spaces between the letters and the three spaces between each word, as this is Unicode text).

We can easily see that the data in the backup file is readable. This means anyone with the backup file can actually read your data with little skill, even without restoring it.


Figure 2: Backup file opened in Notepad – Un-encrypted format

Implementing Transparent Data Encryption

TDE setup includes the below four basic steps.

  1. Creating a Master Key
  2. Creating a Certificate Protected by the Master Key
  3. Creating a Database Encryption Key
  4. Turning on the TDE

1. Creating a master key 

It is a good practice to back up the database before proceeding with the encryption, so that, if we ever want to reverse the implementation of TDE, we have a recent copy of the database to fall back on.
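
If you want that safety-net backup first, something along these lines does it (the path is just an example):

BACKUP DATABASE TDE_Test
TO DISK = 'E:\Backup\TDE_Test_before_TDE.bak'
WITH INIT, STATS = 5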

The master key must be created within the master database.

USE MASTER; 
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Password12#'
GO

2. Creating a Certificate Protected by the Master Key

The certificate is used to protect the database encryption key that we will create next. In addition, this certificate is protected by the master key created in the previous step.

USE MASTER; 
CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE Certificate'
GO

3. Creating a Database Encryption Key

Create the database encryption key inside the database to be encrypted, and protect it with the certificate. You can choose one of the following encryption algorithms: AES_128, AES_192, AES_256, or TRIPLE_DES_3KEY. Details of the encryption algorithms are beyond the scope of this article. Check out this Microsoft TechNet article for guidance on choosing the encryption algorithm.

USE TDE_Test  

GO 
CREATE DATABASE ENCRYPTION KEY WITH 
ALGORITHM = AES_256 ENCRYPTION BY 
SERVER CERTIFICATE TDECert ; 
GO

I have also added an additional script below to back up the private key and certificate to a disk drive. Since TDE also encrypts the backup files, it is essential to maintain a backup of the private key and certificate along with the database backup, for use at the time of restoring.

USE master; 
GO 
BACKUP CERTIFICATE TDECert
TO FILE = 'E:\Backup\certificate_TDE_Test_Certificate.cer'
WITH PRIVATE KEY
(FILE = 'E:\Backup\certificate_TDE_Test_Key.pvk',
ENCRYPTION BY PASSWORD = 'Password12#')

4. Turning TDE ON

Since encryption is a resource-intensive process, it is always a best practice to turn on TDE during off-business hours or when the server is least busy.

USE  TDE_Test
ALTER DATABASE  TDE_Test
SET ENCRYPTION ON ; 
GO

We can check the progress of encryption using the below script, which utilizes the ‘sys.dm_database_encryption_keys’ DMV. A value of ‘2’ for ‘encryption_state’ indicates that encryption is in progress, and ‘3’ indicates that it is complete.

SELECT
db.name,
db.is_encrypted,
dm.encryption_state,
dm.percent_complete,
dm.key_algorithm
FROM sys.databases db JOIN sys.dm_database_encryption_keys dm
ON db.database_id = dm.database_id WHERE db.name = 'TDE_TEST'

Let’s take a backup of our encrypted database now and search for address as mentioned earlier.

BACKUP DATABASE [TDE_Test] 
TO DISK = 'E:\Backup\TDE_Test_withtde.bak' 
WITH INIT, STATS= 5

Figure 3: Backup file opened in Notepad – Encrypted format

Whoopee! Our data is now encrypted and secured :)

This way, we can make sure the data is secured and protected from any unauthorized access.

As a continuation to this topic,  I have explained how to restore a TDE Encrypted database backup HERE

Technical Reviewer(s): Venkata Suresh Raavi; Jaipal Vajrala
