Always On Availability Group Latency Reports

With SQL Server 2012 we introduced Always On Availability Groups, along with the Always On Availability Group Dashboard in SQL Server Management Studio (SSMS). Database administrators can use this dashboard to view the current health of an Availability Group and its availability replicas and databases. While the dashboard can be configured to show the latency between the Primary and Secondary replicas (calculated from the Commit LSN, Sent LSN, and Hardened LSN values), it does not provide insight into the reason for that latency. Understanding the cause requires capturing and analyzing Extended Events and Performance Monitor counters, an activity that can be time consuming and demands extensive knowledge of the Extended Events associated with Always On.
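
For context, the LSN values mentioned above are exposed through the Always On DMVs. The following query is only a sketch of one way to pull them per database replica; the join shape and column selection are my assumptions, not the dashboard's exact query:

-- Sketch: pull the LSN values the dashboard-style latency math is based on
SELECT
    ar.replica_server_name,
    drs.database_id,
    drs.last_commit_lsn,
    drs.last_sent_lsn,
    drs.last_hardened_lsn,
    drs.last_commit_time
FROM sys.dm_hadr_database_replica_states AS drs
JOIN sys.availability_replicas AS ar
    ON ar.replica_id = drs.replica_id;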

With the new SSMS 17.4 release, we are introducing Availability Group Latency data collection and reporting built into the Availability Group dashboard. This feature hides the capture and analysis of the Extended Events from the end user and provides an easy-to-understand report detailing the time spent during the various phases of the log transport process.

What should you use it for?

The Latency data collection functionality and the associated reports allow a database administrator to quickly discern the bottleneck in the log transport flow between the Primary and Secondary replicas of an Availability Group. This feature does NOT answer the question “Is there latency in my Availability Group deployment?” but rather helps you understand why there is latency, narrowing down its potential cause.

How does it work?

As seen below, the Availability Group Latency Data Collection functionality can be accessed from the Availability Group Dashboard.

Click here to view the rest of this post.

SSMS SQL Server Diagnostics

Microsoft is releasing the SQL Server Diagnostics (Preview) extension for SQL Server Management Studio, along with Developer APIs, to empower SQL Server customers to self-resolve SQL Server issues through a variety of offerings.

What does this offer our customers?

Analyze Dumps – Customers using this extension will be able to debug and self-resolve memory dump issues from their SQL Server instances and receive recommended Knowledge Base (KB) article(s) from Microsoft that may be applicable to the fix. The memory dumps are stored in a secure and compliant manner as governed by the Microsoft Privacy Policy.

Click here to view the rest of this post.

Community driven Enhancements in SQL Server 2017

While SQL Server 2016 runs fast, SQL Server 2017 promises to run even faster and to empower customers to run smarter with intelligent database features: the ability to run advanced analytics using Python in a parallelized and highly scalable way, the ability to store and analyze graph data, adaptive query processing, and resumable online indexing, all deployable on the platform of their choice (Windows or Linux). SQL Server is one of the most popular database engines in the SQL community and a preferred RDBMS among customers and ISVs owing to its strong community support. In SQL Server 2017 CTP 2.0, we have released several customer delighters and community-driven enhancements based on learnings and feedback from customers and the community on in-market releases of SQL Server.

Smart Differential Backup – A new column, modified_extent_page_count, has been introduced in sys.dm_db_file_space_usage to track differential changes in each database file. This column allows DBAs, the SQL community, and backup ISVs to build smart backup solutions that perform a differential backup only if the percentage of changed pages in the database is below a threshold (say 70-80%), and a full database backup otherwise. With a large number of changes in the database, the cost and time to complete a differential backup are similar to those of a full database backup, so there is no real benefit to taking a differential backup in that case; it can actually increase the restore time of the database. By adding this intelligence to their backup solutions, customers can save on restore and recovery time while still using differential backups.

Consider a scenario where you previously had a backup plan that takes a full database backup on weekends and differential backups daily. In this case, if the database goes down on Friday, you will need to restore the full backup from Sunday, the differential backup from Thursday, and then the T-log backups from Friday. By leveraging modified_extent_page_count in your backup solution, you can take the full database backup on Sunday and, let's say by Wednesday, if 90% of pages have changed, have the backup solution take a full database backup rather than a differential. Now, if the database goes down on Friday, you can restore the full backup from Wednesday, the small differential backup from Thursday, and the T-log backups from Friday to restore and recover the database more quickly than in the previous scenario. This feature was requested by customers and the community in Connect item 511305.
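
As a rough sketch of the decision logic described above (the 80% threshold and the PRINT-only output are assumptions for illustration, not part of the feature), a backup script could check the modified page percentage like this:

-- Sketch: decide between differential and full backup based on changed pages
-- The 80% threshold is an assumed value; tune it for your environment
DECLARE @pct_changed decimal(5,2);

SELECT @pct_changed = SUM(modified_extent_page_count) * 100.0
                      / NULLIF(SUM(allocated_extent_page_count), 0)
FROM sys.dm_db_file_space_usage;

IF @pct_changed >= 80
    PRINT 'Take a FULL database backup';
ELSE
    PRINT 'Take a DIFFERENTIAL backup';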

Click here to view the rest of this post.

SQL Server Performance Dashboard Reports

SQL Server 2012 Performance Dashboard Reports is one of the most popular SQL Server monitoring solutions for customers and the SQL community, leveraging dynamic management views (DMVs) for monitoring and reporting and available at no cost. The reports ship as a set of custom reports in SQL Server Management Studio (SSMS) that run against the instance connected in Object Explorer. When monitoring large enterprise deployments of SQL Server, hosting the Performance Dashboard Reports on a central reporting server provides additional benefits, making life easier for enterprise DBAs monitoring and troubleshooting SQL Server issues. To support hosting the reports on a central SQL Server Reporting Services instance, we have customized the SQL Server 2012 Performance Dashboard Reports, added new reports, and uploaded them to the Tiger toolbox GitHub repository for customers and the SQL community. The reports are tested against SQL Server 2012, SQL Server 2014, and SQL Server 2016 target instances and can be deployed on a SQL Server 2012, SQL Server 2014, or SQL Server 2016 Reporting Services instance.

The following are some of the benefits of hosting the SQL Performance Dashboard Reports on a central SSRS reporting server.

  • Monitoring reports accessible anytime, anywhere using a browser – This removes the dependency on a thick client like SQL Server Management Studio (SSMS) being installed on the workstation, allowing DBAs and a DevOps audience to check the health and resource consumption of SQL Server from a web browser on any workstation with access to the server.
  • Scheduled automatic report delivery – SSRS allows scheduled email or file-share delivery of reports. DBAs, application owners, and other database stakeholders can choose a push model whereby performance health reports are scheduled to run against specified SQL Server instances at specified times and delivered to their mailbox, letting them proactively monitor the overall health of an instance and detect anomalies.
  • Performance baselining using report snapshots – SSRS can capture scheduled point-in-time report snapshots at a specified interval, allowing DBAs to establish performance baselines from historical snapshots of the target SQL Server instances.
  • Linked reports for application owners and other stakeholders – In an enterprise environment, most application teams and stakeholders want to see the performance, resource consumption, blocking information, and overall health of their SQL Server instance on demand. DBAs can create linked reports for the target instances on the central SSRS server and grant those teams permission to view the reports for their instances of interest. This lets application teams and developers check the overall health of their SQL Server instances themselves, freeing up bandwidth for DBAs, who only need to be contacted when an anomaly or problem is detected.

Click here to view the rest of this post.

SQL Server DBCC CHECKDB Overview

SQL Server database corruption can be a serious problem and can cause real damage to a database. If you’re an experienced DBA, you probably have safeguards in place to detect it, but over the years I’ve seen hundreds of SQL Servers with no detection methods at all, and that is a problem. There are a few ways to detect database corruption, but this tip will focus on DBCC CHECKDB.

You may or may not have heard of DBCC (database console commands) statements. These statements are used to perform different operations in your database and can be broken down into four categories: Maintenance, Miscellaneous, Informational, and Validation. I use some of the DBCC statements on a daily basis, but none more than DBCC CHECKDB.

What is SQL Server DBCC CHECKDB?

DBCC CHECKDB, as described in the Microsoft MSDN Library, checks the logical and physical integrity of all the objects in the specified database by performing the following operations:

  • Runs DBCC CHECKALLOC on the database – Checks consistency of disk space allocation structures for a specified database.
  • Runs DBCC CHECKTABLE on every table and view in the database – Checks the integrity of all the pages and structures that make up the table or indexed view.
  • Runs DBCC CHECKCATALOG on the database – Checks for catalog consistency within the database.
  • Validates the contents of every indexed view in the database.
  • Validates link-level consistency between table metadata and file system directories and files when storing varbinary(max) data in the file system using FILESTREAM.
  • Validates the Service Broker data in the database.

If you’ve ever run DBCC CHECKDB, you know it can take some time on large databases. Now that you know all of the steps it performs, you can see why it takes time to complete.

How can SQL Server DBCC CHECKDB help me?

Data corruption is bad. It can cause all sorts of issues within the database, including incorrect data results and failed SQL statements, and in some cases it can take down the entire SQL Server instance. DBCC CHECKDB warns you of corruption so that (hopefully) you can fix it before it gets too bad.

How do I use SQL Server DBCC CHECKDB?

DBCC CHECKDB is pretty straightforward. There are a few options you can use with the statement, and we’ll go over some of those in the next section, but the basic syntax looks like this:

DBCC CHECKDB ('DatabaseName')

Pretty simple.
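
As a preview of the options covered later, here is a small sketch of two commonly used variations; the database name is a placeholder:

-- Suppress the informational messages that CHECKDB normally prints
DBCC CHECKDB ('DatabaseName') WITH NO_INFOMSGS;

-- Limit the checks to the physical structure of the database (faster on large databases)
DBCC CHECKDB ('DatabaseName') WITH PHYSICAL_ONLY;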

Automate SQL Server DBCC CHECKDB

Obviously, you don’t want to log in every morning and run this statement against each database, so you can automate the process using a few different methods:

  • SQL Server Maintenance Plans – Maintenance plans are part of SQL Server out of the box (unless you’re running Express Edition). I don’t like using maintenance plans for the most part, but I don’t mind using them for this type of task. In the Maintenance Plan toolbox you’ll need the Check Database Integrity task. The only configurable option is whether to include indexes, so it’s not very flexible, but in some cases that is all you need. Again, we’ll talk about other options in the next section.

  • Custom scripts – Custom scripts are usually what I use, and they offer the best flexibility for adding the options you want. My go-to scripts are already created and free to use from Ola Hallengren. He’s done a wonderful job of creating these and sharing them with the world. Thanks Ola!
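
For illustration, a scheduled job could call the DatabaseIntegrityCheck procedure from Ola's Maintenance Solution roughly like this; the parameter values shown are my assumptions, so check his documentation for the options you actually need:

-- Sketch: run CHECKDB against all user databases via Ola Hallengren's Maintenance Solution
EXECUTE dbo.DatabaseIntegrityCheck
    @Databases = 'USER_DATABASES',
    @CheckCommands = 'CHECKDB';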

Click here to view the rest of this post.

SQL Server Index Fragmentation Overview

We’ve all heard about database/index fragmentation (and if you haven’t, keep reading), but what is it? Is it a problem? How do I know if it exists in my database? How do I fix it? Each of these questions could be a tip in itself, but I’ll try to give you an idea of each in this post.

Without going into a lot of detail, SQL Server stores data on 8KB pages. When we insert data into a table, SQL Server allocates one page to store that data, unless the data inserted is more than 8KB, in which case it spans multiple pages. Each page is assigned to one table, so if we create 10 tables we’ll have at least 10 different pages.

As you insert data into a table, the data goes to the transaction log file first. The transaction log is a sequential record: as you insert, update, and delete records, the log records those transactions from start to finish. The data file, on the other hand, is not sequential. The log is flushed to the data file, creating pages all over the place.

Now that we have an idea of how data is stored, what does this have to do with fragmentation?

There are two types of fragmentation: Internal Fragmentation and External Fragmentation.

SQL Server Internal Fragmentation

SQL Server internal fragmentation is caused by pages that have too much free space. Let’s pretend at the beginning of the day we have a table with 40 pages that are 100% full, but by the end of the day we have a table with 50 pages that are only 80% full because of the various delete and insert statements throughout the day. This causes an issue because when we read from this table we now have to scan 50 pages instead of 40, which may result in a decrease in performance. Let’s see a quick and dirty example.

Let’s say I have the following table with a Primary Key and a non-clustered index on FirstName and LastName:
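
The original post shows the table definition as a screenshot; a hypothetical equivalent (the table and index names here are mine, not the author's) would look something like this:

-- Hypothetical table used for the fragmentation example
CREATE TABLE dbo.Person (
    PersonID  INT IDENTITY(1,1) NOT NULL CONSTRAINT PK_Person PRIMARY KEY CLUSTERED,
    FirstName NVARCHAR(50)      NOT NULL,
    LastName  NVARCHAR(50)      NOT NULL
);

-- Non-clustered index on FirstName and LastName
CREATE NONCLUSTERED INDEX IX_Person_FirstName_LastName
    ON dbo.Person (FirstName, LastName);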

I’ll talk about ways to analyze fragmentation later in this tip, but for now we can right-click on the index, click Properties, and then Fragmentation to see fragmentation and page fullness. This is a brand new index, so it’s at 0% fragmentation.

IndexFragmentation
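
If you prefer T-SQL to the Properties dialog, a sketch of the standard DMV check looks like this; the table name below matches the hypothetical table above and is only a placeholder:

-- Sketch: check fragmentation and page fullness for one table's indexes
SELECT
    i.name                            AS index_name,
    ps.avg_fragmentation_in_percent,
    ps.avg_page_space_used_in_percent,
    ps.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Person'), NULL, NULL, 'SAMPLED') AS ps
JOIN sys.indexes AS i
    ON i.object_id = ps.object_id AND i.index_id = ps.index_id;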

Click here to view the rest of this post.

Creating a Backup plan on SQL Express using Ola Hallengren’s scripts

SQL Express doesn’t include the SQL Server Agent, so we can’t schedule jobs as usual. Follow this post to create a backup plan that backs up all the databases using Windows Task Scheduler.

  • Download CommandExecute – https://ola.hallengren.com/scripts/CommandExecute.sql
  • Download DatabaseBackup – https://ola.hallengren.com/scripts/DatabaseBackup.sql
  • Execute both of these scripts against the target server to create the stored procedures

Backups with Ola 1

  • Open Notepad on the target server and copy the following code, changing the path to where the backups will be stored and the cleanup time. The cleanup time is specified in hours:
EXECUTE dbo.DatabaseBackup
@Databases = 'USER_DATABASES',
@Directory = 'C:\Backups',
@BackupType = 'FULL',
@Compress = 'Y', 
@CleanupTime = 48
  • Save the file with a .sql extension

Backups with Ola 2

  • Start Task Scheduler by clicking Start Menu/All Programs/Accessories/System Tools/Task Scheduler
  • Click Create Basic Task to start the Scheduled Task Wizard
  • Type a name for the Task

Backups with Ola 3

  • Choose Daily from the scheduling options

Backups with Ola 4

  • Click Next and specify the time to run the task. Set the start time to a value when the load on the server is low, set the “Recur every” option to 1 day, and click Next

Backups with Ola 5

  • Click Browse. Browse to SQLCMD.exe (C:\Program Files\Microsoft SQL Server\Client SDK\ODBC\110\Tools\Binn\SQLCMD.EXE) and then click Open
  • Type the following into the Add arguments text box and then click Next
    • -S ServerName -i "c:\SQLScripts\SQLBackups.sql"

Backups with Ola 7

  • Select the checkbox to Open the Advanced Properties for this task and click Finish

Backups with Ola 8

  • Check “Run whether user is logged on or not” and “Run with highest privileges” under Security Options, then click OK.

Backups with Ola 9

Note: You may have to run the command in a command prompt until it succeeds, and then copy it exactly from the command prompt into the Add arguments box.
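
For reference, the full command being scheduled, built from the path and arguments in the steps above, would look roughly like this when you test it in a command prompt:

"C:\Program Files\Microsoft SQL Server\Client SDK\ODBC\110\Tools\Binn\SQLCMD.EXE" -S ServerName -i "c:\SQLScripts\SQLBackups.sql"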

Capturing Deadlocks with Extended Events

I’ve been noticing a lot of deadlocks on my server. What’s the best way to track down the queries so I can fix the problem?

There are a few different ways to capture deadlock information. You can set a trace flag (1222) to write the deadlock information to the error log, set up a Profiler trace to capture the deadlock graph, or set up an Extended Events session to capture all sorts of information.
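
As a quick aside, enabling that trace flag globally is a one-liner; this is standard syntax, though whether you want it enabled permanently is a separate decision:

-- Enable trace flag 1222 globally so deadlock details are written to the error log
DBCC TRACEON (1222, -1);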

I’m going to focus on setting up an Extended Events session in this post, since Microsoft continues to say that Profiler will not be included in future versions. Extended Events are the future, so why not start using them now?

In SSMS, drill down to Management, Extended Events. Right click on Sessions and click New Session Wizard:

Deadlocks with Extended Events 1

Click next on the Introduction screen and give the Session a name. I’m going to name this session Deadlocks:

Deadlocks with Extended Events 2

Click next. On the Choose Template screen you can choose a predefined template (like Profiler) or you can create your own events by choosing “Do not use a template”. For this post, let’s create our own:

Deadlocks with Extended Events 3

Click next and you’ll see hundreds of events (like Profiler). We only want to capture deadlock data so let’s scroll down to the very bottom and choose xml_deadlock_report. Click on the event and click the right arrow to move it into the Selected Events box:

Deadlocks with Extended Events 4

You can choose other events if needed, but for the simplicity of this post I’m just going to use this one. Click next. The Capture Global Fields page allows us to select what fields we want to capture. These are unique to each event selected. For this example, I’ll choose the following fields:

  • Callstack
  • Client_app_name
  • Client_hostname
  • Database_id
  • Database_name
  • Plan_handle
  • Process_id
  • Sql_text
  • Transaction_id
  • Transaction_sequence

Deadlocks with Extended Events 5

Click next. On this page you can apply filters if needed. I’ll set up a filter so that I only capture data from the RollTide database. There are hundreds of different filters that can be configured so that you don’t pull back data you don’t need:

Deadlocks with Extended Events 6

Click next to go to the Session Data Storage page. This page allows you to save data to a file or work with only the most recent data. I don’t want to keep thousands upon thousands of events, so I’ll choose “Work with only the most recent data”.

Deadlocks with Extended Events 7

The next page summarizes all the options we have selected. You can also script the session from here if you need to create it on other servers or want to save it for later. Click Finish to create the new session.
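
If you do script the session, the generated T-SQL will look roughly like this sketch. I've trimmed the action list for brevity, and the STARTUP_STATE setting is an assumption; the session name and RollTide filter match the walkthrough:

-- Sketch of a scripted deadlock session (abbreviated action list)
CREATE EVENT SESSION [Deadlocks] ON SERVER
ADD EVENT sqlserver.xml_deadlock_report (
    ACTION (sqlserver.client_app_name, sqlserver.client_hostname,
            sqlserver.database_name, sqlserver.sql_text)
    WHERE (sqlserver.database_name = N'RollTide')
)
ADD TARGET package0.ring_buffer
WITH (STARTUP_STATE = OFF);

-- Start the session
ALTER EVENT SESSION [Deadlocks] ON SERVER STATE = START;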

The last page allows you to start the session immediately and watch live data. For this post, I’ll choose both:

Deadlocks with Extended Events 8

You should see the new session under Extended Events and the Live Data tab should appear:

Deadlocks with Extended Events 9

Deadlocks with Extended Events 10

Once a deadlock occurs, it should appear in the Live Data window:

Deadlocks with Extended Events 11

This view shows all of the fields we selected including the XML report. If you click on the Deadlock tab, you’ll see the graph:

Deadlocks with Extended Events 12

You can also use the following query to see detailed information, including the deadlock graph and the event XML:

-- Query the ring_buffer target of the Deadlocks session for captured xml_deadlock_report events
SELECT
    DATEADD(mi, DATEDIFF(mi, GETUTCDATE(), CURRENT_TIMESTAMP),
        DeadlockEventXML.value('(event/@timestamp)[1]', 'datetime2')) AS [EventTime],
    DeadlockEventXML.value('(//process[@id[//victim-list/victimProcess[1]/@id]]/@hostname)[1]', 'nvarchar(max)') AS HostName,
    DeadlockEventXML.value('(//process[@id[//victim-list/victimProcess[1]/@id]]/@clientapp)[1]', 'nvarchar(max)') AS ClientApp,
    DB_NAME(DeadlockEventXML.value('(//process[@id[//victim-list/victimProcess[1]/@id]]/@currentdb)[1]', 'nvarchar(max)')) AS [DatabaseName],
    DeadlockEventXML.value('(//process[@id[//victim-list/victimProcess[1]/@id]]/@transactionname)[1]', 'nvarchar(max)') AS VictimTransactionName,
    DeadlockEventXML.value('(//process[@id[//victim-list/victimProcess[1]/@id]]/@isolationlevel)[1]', 'nvarchar(max)') AS IsolationLevel,
    DeadlockEventXML.query('(event/data[@name="xml_report"]/value/deadlock)[1]') AS DeadLockGraph,
    DeadlockEventXML
FROM
(
    -- Shred each xml_deadlock_report event out of the session's ring_buffer target
    SELECT
        XEvent.query('.') AS DeadlockEventXML,
        Data.TargetData
    FROM
    (
        SELECT
            CAST(target_data AS XML) AS TargetData
        FROM sys.dm_xe_session_targets st
        JOIN sys.dm_xe_sessions s ON s.address = st.event_session_address
        WHERE s.name = 'Deadlocks'
          AND st.target_name = 'ring_buffer'
    ) AS Data
    CROSS APPLY TargetData.nodes('RingBufferTarget/event[@name="xml_deadlock_report"]') AS XEventData(XEvent)
) AS DeadlockInfo

Correct SQL Server TempDB Spills in Query Plans Caused by Outdated Statistics

Statistics are an integral part of SQL Server and query performance. In short, the query optimizer uses statistics to create query plans that improve the overall performance of the queries being run. Each statistics object is created on a list of one or more table columns and includes a histogram displaying the distribution of values in the first column. The histogram can have a maximum of 200 steps, regardless of the number of rows in the column or index.
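
If you want to look at a histogram yourself, DBCC SHOW_STATISTICS is the standard way to do it; the table and statistics names below are just placeholders:

-- Show the histogram for a specific statistics object (names are placeholders)
DBCC SHOW_STATISTICS ('SchemaName.TableName', 'StatisticsName') WITH HISTOGRAM;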

In this post we’ll take a look at one specific performance issue that you might find in an execution plan of a query. If you’ve ever noticed the following warning, then this post is for you:

1 - TempDB spills

Within the AdventureWorks2014 database, I’ll use the following query for my example:

SELECT BusinessEntityID, FirstName, LastName, EmailPromotion
FROM [AdventureWorks2014].[Person].[Person]
WHERE EmailPromotion > 0
ORDER BY LastName

Looking at this query, I can already tell contention may be present, so I’ll go ahead and add a covering index:

CREATE NONCLUSTERED INDEX [IX_Person_EmailPromotion_INCLUDES]
ON [Person].[Person] ([EmailPromotion])
INCLUDE ([BusinessEntityID],[FirstName],[LastName])

When adding the index above, statistics were automatically created and updated. Since the addition of this index I’ve added a few thousand rows to the Person table.

Let’s run the query and make sure the “Include Actual Execution Plan” button is selected.

2 - TempDB spills

After the query executes let’s take a look at the execution plan by clicking on the tab in the Results pane:

3 - TempDB spills

These warnings were added in SQL Server Management Studio 2012, so if you’re using an older version you may not see them. The “spill data to TempDB” warning means that the query was not granted enough memory and spilled over into TempDB to complete the operation. Reading from memory is much faster than reading from disk, and that is exactly what is happening here: the query read as much as it could from memory before moving over to TempDB on disk.
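
The rest of the post walks through the actual fix; given the title, the core of it is refreshing the statistics that went stale after the extra rows were inserted. A minimal sketch of that (the FULLSCAN option is my choice, not necessarily the author's) would be:

-- Refresh statistics on the table so the optimizer's row estimates (and memory grant) are accurate again
UPDATE STATISTICS [Person].[Person] WITH FULLSCAN;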

Click here to view the rest of this post.