Efficient SQL Server Patching Using DBATools.IO

Maintaining SQL Server infrastructure can be time-consuming, especially when it comes to patching multiple servers. A manual approach of remoting into each server individually, checking versions, and applying updates takes significant effort and is prone to errors. This is where DBATools.IO, a powerful open-source PowerShell module, comes in handy. It simplifies and automates SQL Server tasks, including patching. In this post, we’ll walk through a PowerShell script that uses DBATools.IO to patch SQL Servers, saving time and effort compared to manual methods.

What is DBATools.IO?

DBATools.IO is a collection of PowerShell cmdlets designed to make SQL Server management easier for DBAs. It provides over 500 functions for database migration, backups, performance tuning, patching, and more. For those managing multiple SQL Servers, DBATools.IO is an essential tool in automating repetitive tasks, improving efficiency, and reducing human error.

Breaking Down the PowerShell Script

Let’s look at each part of the script and understand what it’s doing:

Set-ExecutionPolicy RemoteSigned

This line allows PowerShell scripts to run on your machine. The RemoteSigned policy permits scripts downloaded from the internet to run if they are signed by a trusted publisher, while local scripts don’t need a signature.

#Servers
$ServerList = 'Server01', 'Server02', 'Server03'

Here, a list of SQL Servers that need to be patched is specified. Instead of remoting into each server one by one, this list tells the script which servers it needs to patch, simplifying the process.

$Credential = Get-Credential -UserName .\Brady -Message 'Enter Password'

The Get-Credential cmdlet prompts you to enter your credentials (username and password), which will be securely passed into the subsequent commands. This allows the script to authenticate when accessing the SQL Servers without exposing passwords directly in the script.

Import-Module DBATools
Import-Module sqlserver

These commands import the necessary modules. DBATools provides SQL Server management functions, while sqlserver allows interaction with SQL Server instances via PowerShell.

Test-DbaBuild -Latest -Update

This command updates the local index of SQL Server builds so you are working with the most up-to-date version information. It checks for the latest SQL Server cumulative updates and stores that information locally for reference. Read more about this command and its available parameters in the dbatools documentation.

$version = '2022CU15'

Here, the desired SQL Server version is set, in this case, 2022CU15. This variable is used later to specify which version to patch the servers to.

Get-DbaBuildReference -Update

This command updates the build reference for SQL Server patches, ensuring the script uses the most current information available for the version you’re updating to. Read more about this command and its available parameters in the dbatools documentation.

Update-DbaInstance -ComputerName $ServerList -Version $version -Path '\\fileshare\DBA\SQL Server Installs\Patches\2022' -Credential $Credential -Restart

This is the heart of the script: the actual patching process. The Update-DbaInstance cmdlet performs the following tasks:

  • -ComputerName $ServerList: Specifies the servers to patch (as listed in $ServerList).
  • -Version $version: Defines the SQL Server version to patch to, which was previously set as 2022CU15.
  • -Path: Specifies the network path where the patch files are located, in this case, a fileshare (\\fileshare\DBA\SQL Server Installs\Patches\2022).
  • -Credential $Credential: Provides the credentials (username/password) to access and update the servers.
  • -Restart: Ensures that the servers are automatically restarted after the patch is applied, as needed.

Final look at the entire script:

Set-ExecutionPolicy RemoteSigned

#Servers
$ServerList = 'Server01', 'Server02', 'Server03'

#Credential to pass into the Update-DbaInstance command
$Credential = Get-Credential -UserName .\Brady -Message 'Enter Password'

Import-Module DBATools
Import-Module sqlserver

#Update the index of Builds
Test-DbaBuild -Latest -Update

#Set the version that you want to update to
$version = '2022CU15'

# Update the build index
Get-DbaBuildReference -Update

# Patch SQL Server
Update-DbaInstance -ComputerName $ServerList -Version $version -Path '\\fileshare\DBA\SQL Server Installs\Patches\2022' -Credential $Credential -Restart

Time-Saving Benefits of Using DBATools.IO for SQL Patching

Traditionally, patching multiple SQL Servers would require manually remoting into each server, applying the updates, and rebooting if necessary. This is not only tedious but also introduces the possibility of human error, such as applying the wrong patch version or missing a server in the process.

With this DBATools.IO script:

  • Automation: The entire patching process is automated, eliminating the need for manual intervention. You can patch multiple servers with a single command.
  • Consistency: The same patch is applied across all servers, ensuring version consistency across your environment.
  • Efficiency: Instead of remoting into each server, the script performs the patching in parallel, saving time and reducing downtime.
  • Error Reduction: By automating the process, the risk of missing a step or applying the wrong patch is greatly reduced.
  • Credential Management: The script securely passes credentials without needing manual authentication on each server.

In a production environment where uptime is critical, and efficiency is key, using a single PowerShell script to patch multiple SQL Servers at once can significantly cut down on maintenance time. DBATools.IO is a powerful ally in making SQL Server management smoother, faster, and more reliable.

By leveraging tools like DBATools.IO, DBAs can streamline repetitive tasks, allowing them to focus on higher-value activities like optimization and troubleshooting.

Performance Tuning Series – Regular Maintenance

SQL Server performance isn’t just about the initial setup or database design—it requires continuous maintenance to ensure it runs smoothly over time. Neglecting regular maintenance can lead to fragmentation, slow queries, data integrity issues, and ultimately, downtime. By implementing a comprehensive maintenance strategy, you can ensure that your SQL Server databases remain optimized and healthy, allowing them to perform efficiently even as workloads and data volumes grow.

Why Regular Maintenance is Critical

Over time, SQL Server databases accumulate various inefficiencies that can degrade performance. These include fragmented indexes, outdated statistics, growing transaction logs, and unused or bloated data. Regular maintenance tasks help to mitigate these issues, ensuring that SQL Server can continue to execute queries quickly, handle transactions efficiently, and maintain data integrity.

Key Regular Maintenance Tasks for SQL Server

1. Index Rebuilding and Reorganization

Indexes can become fragmented over time as data is inserted, updated, or deleted. Fragmentation occurs when the logical order of pages in an index no longer matches the physical order on disk. This leads to slower reads, as SQL Server must perform additional I/O to retrieve scattered data.

  • Rebuild Indexes: Index rebuilds recreate the index from scratch, removing fragmentation and improving query performance. This process locks the table, so it should be scheduled during periods of low activity.
  • Reorganize Indexes: Reorganizing indexes is a less intrusive process that defragments them without locking the table. This can be done during regular operations but is less effective than a full rebuild.

Best Practice:

  • Schedule regular index maintenance based on the level of fragmentation. Use SQL Server’s sys.dm_db_index_physical_stats DMV to check fragmentation levels (a sample script follows this list):
    • 0-10% fragmentation: No action needed.
    • 10-30% fragmentation: Use index reorganization.
    • Above 30% fragmentation: Perform a full index rebuild.
  • Automate index maintenance using SQL Server Agent jobs or a dedicated maintenance tool to ensure this task is performed regularly without manual intervention.
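
For example, a quick check-and-fix pass might look like this. The index and table names are illustrative, and the thresholds above are guidelines rather than hard rules:

-- Check fragmentation for indexes in the current database
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id
   AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 10
  AND i.name IS NOT NULL
ORDER BY ips.avg_fragmentation_in_percent DESC;

-- Reorganize for moderate fragmentation (IX_YourIndex/dbo.YourTable are placeholders)...
ALTER INDEX IX_YourIndex ON dbo.YourTable REORGANIZE;

-- ...or rebuild for heavy fragmentation (ONLINE = ON requires Enterprise Edition)
ALTER INDEX IX_YourIndex ON dbo.YourTable REBUILD WITH (ONLINE = ON);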

2. Update Statistics

SQL Server uses statistics to estimate the distribution of data values in a table, which helps the query optimizer choose the most efficient execution plan. As data is modified, these statistics can become outdated, leading to suboptimal query plans and slower performance.

  • Best Practice:
    • Regularly update statistics on your tables and indexes to ensure that the query optimizer has the most accurate information. Use the UPDATE STATISTICS command (see the examples after this list) or enable SQL Server’s auto-update statistics feature.
    • For large tables, use sampled statistics to balance performance with accuracy. Full scans of very large tables can be resource-intensive.
    • If queries slow down unexpectedly, manually update statistics to resolve potential performance issues caused by outdated statistics.
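
As a rough sketch, where the table names are just placeholders:

-- Update statistics on a single table with a full scan
UPDATE STATISTICS dbo.YourTable WITH FULLSCAN;

-- For very large tables, sampled statistics are usually a better trade-off
UPDATE STATISTICS dbo.YourLargeTable WITH SAMPLE 25 PERCENT;

-- Or refresh out-of-date statistics across the whole database
EXEC sp_updatestats;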

3. Backup and Recovery Management

Regular backups are essential for data protection and business continuity. However, improper backup strategies can lead to bloated transaction logs, excessive disk usage, and even performance degradation during peak times.

  • Best Practice:
    • Implement a full backup strategy based on your business’s recovery point objectives (RPO) and recovery time objectives (RTO). Schedule full backups regularly (e.g., daily) and transaction log backups more frequently (e.g., every 15-30 minutes) for critical databases.
    • Use differential backups between full backups to reduce the load on storage and improve recovery times (sample commands follow this list).
    • Regularly test your backups by restoring them to a separate environment to ensure that they can be successfully recovered when needed.
    • Ensure that backup schedules avoid peak activity times to prevent any impact on performance.
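
As a simple illustration of a full/differential/log mix (the database name and paths are illustrative, not a recommended schedule):

-- Full backup (e.g., nightly)
BACKUP DATABASE YourDatabase
TO DISK = N'D:\Backups\YourDatabase_FULL.bak'
WITH COMPRESSION, CHECKSUM;

-- Differential backup (e.g., every few hours)
BACKUP DATABASE YourDatabase
TO DISK = N'D:\Backups\YourDatabase_DIFF.bak'
WITH DIFFERENTIAL, COMPRESSION, CHECKSUM;

-- Transaction log backup (e.g., every 15-30 minutes)
BACKUP LOG YourDatabase
TO DISK = N'D:\Backups\YourDatabase_LOG.trn'
WITH COMPRESSION, CHECKSUM;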

4. Transaction Log Management

SQL Server’s transaction log records every modification made to the database. If not properly managed, the transaction log can grow excessively large, consuming valuable disk space and degrading performance.

  • Best Practice:
    • Use the Full Recovery Model for critical databases to ensure point-in-time recovery, but regularly back up the transaction logs to prevent them from growing too large.
    • For less critical databases or databases that don’t require point-in-time recovery, consider using the Simple Recovery Model, which automatically truncates the transaction log after each checkpoint.
    • Monitor transaction log size and schedule log backups frequently to avoid excessive growth. Use the sys.dm_db_log_space_usage DMV to monitor log space consumption (see the query after this list).
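
A quick way to check log usage and take a log backup, with an illustrative database name and path:

-- How big is the log, and how much of it is in use?
SELECT total_log_size_in_bytes / 1048576.0 AS total_log_size_mb,
       used_log_space_in_percent
FROM sys.dm_db_log_space_usage;

-- Back up the log so the space can be reused (Full recovery model)
BACKUP LOG YourDatabase TO DISK = N'D:\Backups\YourDatabase_LOG.trn';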

5. Integrity Checks (DBCC CHECKDB)

Database corruption can occur for various reasons, such as hardware failures or improper shutdowns. SQL Server provides the DBCC CHECKDB command to detect and repair corruption in your databases.

  • Best Practice:
    • Run DBCC CHECKDB regularly to ensure data integrity. This process checks for physical and logical corruption in database files.
    • Schedule DBCC CHECKDB during off-peak hours to avoid performance impacts, as this operation can be resource-intensive.
    • If DBCC CHECKDB identifies corruption, address the issue immediately. Use repair options like REPAIR_ALLOW_DATA_LOSS as a last resort, and restore from backups when possible.

6. Tempdb Maintenance

Tempdb is a shared system database that is heavily used by SQL Server for temporary objects, intermediate query results, and sorting. Over time, tempdb can become a performance bottleneck if it is not properly managed.

  • Best Practice:
    • Ensure that tempdb has multiple data files, especially in high-concurrency environments. Best practice is to configure one data file per logical CPU core (up to 8 cores), which helps reduce contention on system pages (like PFS, GAM, and SGAM).
    • Place tempdb on fast storage (preferably SSD or NVMe) to handle its high I/O workload.
    • Regularly monitor tempdb space usage to avoid running out of space, which can lead to system crashes. Use sys.dm_db_task_space_usage and sys.dm_db_session_space_usage to track space consumption (see the example after this list).
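
For example, adding another tempdb data file and checking which sessions are consuming space might look like this (the file path and sizes are illustrative):

-- Add another tempdb data file (keep size and growth settings consistent across files)
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = N'T:\TempDB\tempdev2.ndf',
          SIZE = 8GB, FILEGROWTH = 512MB);

-- Which sessions are using the most tempdb space?
SELECT session_id,
       user_objects_alloc_page_count * 8 / 1024 AS user_objects_mb,
       internal_objects_alloc_page_count * 8 / 1024 AS internal_objects_mb
FROM sys.dm_db_session_space_usage
ORDER BY internal_objects_alloc_page_count DESC;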

7. Cleanup of Unused or Outdated Data

Over time, databases may accumulate unused data, which can increase table sizes and slow down queries. Regularly cleaning up obsolete data ensures your database remains efficient.

  • Best Practice:
    • Implement a data retention policy that defines how long data should be kept before being archived or deleted. This policy should reflect business requirements while keeping database sizes manageable.
    • Periodically archive old data that is not frequently accessed into separate databases or storage systems.
    • Use automated scripts to clean up old or unused records (see the batched-delete sketch after this list), freeing up space and reducing index bloat.
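
A batched delete keeps locking and transaction log impact under control; something along these lines works, where the table, column, and retention period are hypothetical:

-- Delete audit rows older than two years, 5,000 rows at a time
DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (5000) FROM dbo.AuditLog
    WHERE CreatedDate < DATEADD(YEAR, -2, GETDATE());

    SET @rows = @@ROWCOUNT;
END;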

8. Monitor and Tune Performance

SQL Server provides several tools for monitoring performance, including Dynamic Management Views (DMVs) and Extended Events. Regular monitoring can help identify performance bottlenecks before they affect end-users.

  • Best Practice:
    • Regularly monitor key performance metrics like CPU usage, memory usage, disk I/O, and query execution times to ensure the system is operating within optimal thresholds.
    • Use SQL Server Profiler or Extended Events to capture detailed information about query performance and diagnose slow-running queries.
    • Leverage DMVs to analyze query patterns and suggest optimizations like new indexes (an example query follows this list).
    • Continuously review and tune your queries, indexes, and database schema based on real-time performance data.
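
As one rough example, a query like this pulls the top statements by average CPU from sys.dm_exec_query_stats; it is only one of many ways to slice the data:

-- Top 10 statements by average CPU time since the plan was cached
SELECT TOP (10)
       qs.total_worker_time / qs.execution_count AS avg_cpu_time,
       qs.execution_count,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
           ((CASE qs.statement_end_offset
                 WHEN -1 THEN DATALENGTH(st.text)
                 ELSE qs.statement_end_offset
             END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY avg_cpu_time DESC;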

9. Automating Maintenance Tasks

Manually managing routine maintenance tasks can be time-consuming and prone to error. SQL Server provides built-in automation tools, such as SQL Server Agent, to schedule and manage maintenance operations.

  • Best Practice:
    • Set up automated maintenance jobs for tasks like index rebuilding, statistics updates, transaction log backups, and integrity checks (a scripted example follows this list). This ensures that these critical operations are performed consistently and without manual intervention.
    • Regularly review and adjust job schedules to avoid conflicts during peak business hours.
    • Use maintenance plans in SQL Server Management Studio (SSMS) or third-party tools for more advanced scheduling and management of maintenance tasks.
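
As a minimal sketch of an Agent job created with T-SQL (the job name, schedule, and command are illustrative; maintenance plans or Ola Hallengren’s scripts are common alternatives):

USE msdb;
GO
EXEC dbo.sp_add_job @job_name = N'Nightly - Update Statistics';
EXEC dbo.sp_add_jobstep @job_name = N'Nightly - Update Statistics',
     @step_name = N'Update stats',
     @subsystem = N'TSQL',
     @database_name = N'YourDatabase',
     @command = N'EXEC sp_updatestats;';
EXEC dbo.sp_add_jobschedule @job_name = N'Nightly - Update Statistics',
     @name = N'Nightly at 2 AM',
     @freq_type = 4,              -- daily
     @freq_interval = 1,
     @active_start_time = 020000; -- 02:00:00
EXEC dbo.sp_add_jobserver @job_name = N'Nightly - Update Statistics', @server_name = N'(local)';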

Conclusion

Regular maintenance is essential for keeping your SQL Server database healthy and optimized for performance. Tasks like index rebuilding, updating statistics, managing transaction logs, performing integrity checks, and cleaning up obsolete data all contribute to the overall efficiency and reliability of your SQL Server environment. By automating maintenance operations, monitoring key performance metrics, and regularly tuning the system, you can ensure that your SQL Server databases continue to deliver optimal performance as your workload and data volumes grow.

Maintenance Windows for Azure SQL

The cloud has revolutionized the way we manage and maintain databases, offering new features and capabilities to enhance reliability and minimize disruptions. One such feature that has garnered significant attention is the maintenance window for Azure SQL Database.

Microsoft’s maintenance window feature allows database administrators to schedule planned maintenance events for Azure SQL Database resources during non-peak hours, effectively reducing the impact on mission-critical production workloads. By configuring a maintenance window, we can ensure that updates, patches, and other maintenance activities occur at a time that minimizes disruptions to our business operations.

The true power of this feature lies in its flexibility and customization options. While Azure SQL Database follows a default maintenance window designed to align with typical business hours, administrators can tailor the maintenance schedule to suit their unique requirements. Whether it’s opting for weekday or weekend maintenance slots, the ability to choose a time frame that aligns with our specific operational needs is a game-changer.

Moreover, the maintenance window feature comes with built-in advance notifications, allowing us to stay informed and prepared for upcoming maintenance events. This proactive approach enables us to plan accordingly, minimizing potential downtime and ensuring a smooth transition during the maintenance process.

It’s worth noting that the maintenance window is available for most service level objectives (SLOs) within Azure SQL Database, ensuring that organizations with diverse performance and availability requirements can benefit from this feature.

As database administrators, we understand the critical importance of maintaining a reliable and high-performing database infrastructure. The maintenance window feature in Azure SQL Database empowers us to strike the right balance between keeping our systems up-to-date and minimizing disruptions to business operations.

I encourage my fellow database professionals to explore this feature and leverage its capabilities to optimize their maintenance strategies. By doing so, we can enhance the reliability and availability of our database systems, ultimately contributing to the success of our organizations.

Click here to read more…

Get Ahead of Year-End with Bulletproof Database Solutions

As we approach the final stretch of the year, businesses across all industries are gearing up for the increased demands that come with Q4. From handling higher transaction volumes to meeting year-end reporting requirements, your SQL databases will be put to the test. At SQL Freelancer, we understand the critical role that a robust, secure, and optimized database plays in ensuring your operations run smoothly during this peak season.

Why Your Q4 Success Depends on Your Database

The last quarter of the year can be a make-or-break time for many businesses. Whether you’re dealing with a surge in online orders, closing out the fiscal year, or preparing for tax season, your database needs to perform at its best. Any hiccups in your database can lead to delays, data loss, or even security breaches, all of which can severely impact your bottom line.

This is where SQL Freelancer comes in. Our Q3 marketing campaign, “Prepare for Q4 with SQL Expertise,” is designed to help you proactively address potential issues before they arise. By partnering with us, you can ensure that your SQL solutions are ready to handle whatever Q4 throws your way.

What We Offer:

  • Comprehensive Database Audits: We’ll perform a thorough analysis of your current database setup, identifying any weaknesses or areas for improvement.
  • Security Enhancements: Protect your data from breaches with our advanced security measures tailored to your specific needs.
  • Performance Optimization: Ensure your databases run efficiently under heavy loads, reducing latency and improving user experience.
  • Scalability Planning: Prepare for growth by ensuring your databases can scale seamlessly with your business needs.

Don’t Let Your SQL Databases Fall Behind

As you gear up for Q4, don’t let your SQL databases be an afterthought. With SQL Freelancer by your side, you can be confident that your solutions are secure, optimized, and ready for anything. We’re committed to helping you prepare, protect, and excel—so you can focus on what you do best.

Partner with the Leader in SQL Solutions

At SQL Freelancer, we pride ourselves on being a leader in SQL solutions. Our team of experts is dedicated to providing top-notch service, tailored to your unique business needs. Don’t leave your Q4 success to chance—partner with us and get ahead of the curve.

Ready to get started? Contact us today to learn how we can help you prepare for Q4 with bulletproof database solutions.

Change SQL Collation without reinstalling SQL

I ran into an issue where SQL Server was installed with the wrong collation and a lot of user databases were already attached. I could easily back up the databases, uninstall, reinstall, and restore the databases, but this could take literally all day. There is a better and much faster way to make this change. This post will go over it.

First, back up all databases (duh)

Next, we’ll verify the current collation. On this server it’s set to Latin1_General_CI_AS and I want to change it to SQL_Latin1_General_CP1_CI_AS.
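
One quick way to check this yourself from SSMS is to query the server property:

-- Check the current server-level collation
SELECT SERVERPROPERTY('Collation') AS ServerCollation;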

Next, we’ll double check and make sure we have backups of all databases.

Open SQL Configuration Manager and turn off all SQL Services:

Open Command Prompt (as administrator) and browse to the BINN directory and type the following command.

sqlservr -m -T4022 -T3659 -q"SQL_Latin1_General_CP1_CI_AS"

-m – starts SQL Server in single user admin mode
-T4022 – skips automatically started (startup) stored procedures
-T3659 – undocumented trace flag; enables logging of all errors to the error log during startup
-q – sets the new collation

Before hitting Enter, let’s triple check and make sure those backups exist.

Hit Enter.

SQL will run through its startup routine:

Voila. Recovery is complete.

Close Command Prompt and start the SQL services back up.

Back in SQL Server Management Studio, verify that the collation has changed.

SQL Server DBCC CHECKDB Overview

SQL Server database corruption can be a problem and can cause serious damage to a database. If you’re an experienced DBA then you probably have safeguards in place to detect this, but over the years I’ve seen hundreds of SQL Servers with no detection methods at all and this is a problem. There are a few ways to detect database corruption, but this tip will focus more on DBCC CHECKDB.

You may or may not have heard of DBCC (database console commands) statements. These statements are used to perform different operations in your database and can be broken down into four categories: Maintenance, Miscellaneous, Informational, and Validation. I use some of the DBCC statements on a daily basis, but none more than DBCC CHECKDB.

What is SQL Server DBCC CHECKDB

DBCC CHECKDB, as described in the Microsoft documentation, checks the logical and physical integrity of all the objects in the specified database by performing the following operations:

  • Runs DBCC CHECKALLOC on the database – Checks consistency of disk space allocation structures for a specified database.
  • Runs DBCC CHECKTABLE on every table and view in the database – Checks the integrity of all the pages and structures that make up the table or indexed view.
  • Runs DBCC CHECKCATALOG on the database – Checks for catalog consistency within the database.
  • Validates the contents of every indexed view in the database.
  • Validates link-level consistency between table metadata and file system directories and files when storing varbinary(max) data in the file system using FILESTREAM.
  • Validates the Service Broker data in the database.

If you’ve ever run DBCC CHECKDB, you know it takes some time for large databases. Now that you know all of the steps that are run, you can see why it takes time to complete.

How can SQL Server DBCC CHECKDB help me?

Data corruption is bad. It can cause all sorts of issues within the database that may include incorrect data results, failed SQL statements, and in some cases can take down the entire SQL instance. DBCC CHECKDB warns you of corruption so that you can fix it before (hopefully) it gets too bad.

How do I use SQL Server DBCC CHECKDB?

DBCC CHECKDB is pretty straightforward. There are a few options you can use with the statement and we’ll go over some of those in the next section, but the basic syntax looks like this:

DBCC CHECKDB ('DatabaseName')

Pretty simple.
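
If you want to tweak the output or the depth of the checks, a couple of commonly used options look like this (just a sketch; the documentation covers the full list):

-- Suppress informational messages so only errors are reported
DBCC CHECKDB ('DatabaseName') WITH NO_INFOMSGS;

-- Physical-only checks run faster on very large databases (skips most logical checks)
DBCC CHECKDB ('DatabaseName') WITH PHYSICAL_ONLY;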

Automate SQL Server DBCC CHECKDB

Obviously, you don’t want to log in every morning and run this statement on each database, so you can automate this process using a few different methods:

  • SQL Server Maintenance plans – Maintenance plans are part of SQL Server out of the box (unless you’re running Express Edition). I don’t like using maintenance plans for the most part, but I don’t mind using them for this type of task. In the Maintenance Plan toolbox you’ll need to use the Check Database Integrity task. The only configurable option is to include indexes so it’s not really user friendly, but in some cases this is all you need. Again, we’ll talk about other options in the next section.

  • Custom scripts – Custom scripts are usually what I use and offer the best flexibility as far as adding the options you want. My go-to scripts are already created and free to use from Ola Hallengren. He’s done a wonderful job of creating these and sharing them with the world. Thanks Ola!

Click here to view the rest of this post.

SQL Server Index Fragmentation Overview

We’ve all heard about database/index fragmentation (and if you haven’t, continue reading), but what is it? Is it an issue? How do I know if it resides in my database? How do I fix it? These questions could be a tip all in itself, but I’ll try to give you an idea of each in this post.

Without going into a lot of detail, SQL Server stores data on 8KB pages. When we insert data into a table, SQL Server will allocate one page to store that data unless the data inserted is more than 8KB, in which case it will span multiple pages. Each page is assigned to one table. If we create 10 tables then we’ll have 10 different pages.

As you insert data into a table, the data goes to the transaction log file first. The transaction log file is a sequential record, meaning that as you insert, update, and delete records, the log records these transactions from start to finish. The data file, on the other hand, is not sequential. As the log is flushed to the data file, pages get created all over the place.

Now that we have an idea of how data is stored, what does this have to do with fragmentation?

There are two types of fragmentation: Internal Fragmentation and External Fragmentation.

SQL Server Internal Fragmentation

SQL Server Internal Fragmentation is caused by pages that have too much free space. Let’s pretend at the beginning of the day we have a table with 40 pages that are 100% full, but by the end of the day we have a table with 50 pages that are only 80% full because of various delete and insert statements throughout the day. This causes an issue because now when we need to read from this table we have to scan 50 pages instead of 40, which may result in a decrease in performance. Let’s see a quick and dirty example.

Let’s say I have the following table with a Primary Key and a non-clustered index on FirstName and LastName:
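
The original post shows the table in a screenshot; a minimal sketch of such a table might look like this (column names and sizes are illustrative):

CREATE TABLE dbo.Person (
    PersonID  INT IDENTITY(1,1) NOT NULL CONSTRAINT PK_Person PRIMARY KEY CLUSTERED,
    FirstName VARCHAR(50) NOT NULL,
    LastName  VARCHAR(50) NOT NULL
);

CREATE NONCLUSTERED INDEX IX_Person_FirstName_LastName
    ON dbo.Person (FirstName, LastName);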

I’ll talk about ways to analyze fragmentation later in this tip, but for now we can right click on the index, click Properties, and then Fragmentation to see fragmentation and page fullness. This is a brand new index so it’s at 0% fragmentation.


Click here to view the rest of this post.

Creating a Backup plan on SQL Express using Ola Hallengren’s scripts

SQL Express doesn’t have the SQL Server Agent, so we can’t schedule jobs as we normally would. Follow this post to create a backup plan that will back up all the databases using Windows Task Scheduler.

  • Download CommandExecute – https://ola.hallengren.com/scripts/CommandExecute.sql
  • Download DatabaseBackup – https://ola.hallengren.com/scripts/DatabaseBackup.sql
  • Execute both of these scripts against the target server to create the stored procedures


  • Open Notepad on the target server and copy the following code, changing the path to where the backups will be stored and the cleanup time (specified in hours):
EXECUTE dbo.DatabaseBackup
@Databases = 'USER_DATABASES',
@Directory = 'C:\Backups',
@BackupType = 'FULL',
@Compress = 'Y', 
@CleanupTime = 48
  • Save the file with a .sql extension


  • Start the Task Scheduler by clicking Start Menu/All Programs/Accessories/System Tools/Task Scheduler
  • Click Create Basic Task to start the Scheduled Task Wizard
  • Type a name for the Task


  • Choose Daily from the scheduling options


  • Click Next and specify the time to run the task. Set Start Time to a time when the load on the server is low, set the recur every option to 1 day, and click Next


  • Click Browse. Browse to SQLCMD.exe (C:\Program Files\Microsoft SQL Server\Client SDK\ODBC\110\Tools\Binn\SQLCMD.EXE) and then click Open
  • Type the following into the Add arguments text box and then click Next
    • -S ServerName -i "c:\SQLScripts\SQLBackups.sql"


  • Select the checkbox to Open the Advanced Properties for this task and click Finish


  • Check Run whether user is logged in or not and Run with highest privileges under Security Options then click OK.


**Note**

You may have to run the command in a Command Prompt until it succeeds, then copy it exactly from CMD into the arguments section.

Managing the size of the SQL Server SSIS catalog database

The SSIS catalog is the central point for working with Integration Services (SSIS) projects that you’ve deployed to the SSIS server. When the SSIS catalog is implemented, the SSISDB database is automatically created. The scope of this post will focus on the SSISDB growth more than the SSIS catalog itself.

SSIS projects, packages, parameters, environments, and operational history are all stored in the SSISDB database so if you have hundreds of SSIS packages or packages that run every few minutes you could see how the database storing all the historical information would grow exponentially.

Also included when you enable this feature is a SQL Server Agent Job called SSIS Server Maintenance job:

Inside this job are two steps, SSIS Server Operation Records Maintenance and SSIS Server Max Version Per Project Maintenance, that help clean up the database. By default, this job is set to run at 12:00am nightly, which is sufficient.

Looking at the first step, SSIS Server Operation Records Maintenance, you will notice that it executes a stored procedure named internal.cleanup_server_retention_window. This sounds like it could be the stored procedure that cleans up history.

Let’s browse out to the stored procedure in Management Studio and take a look at the code.

You can see from the very beginning of the stored procedure, in the BEGIN TRY statement, that it first looks to see if Operation cleanup is enabled and, if so, it then looks for the Retention Window.
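
If you just want a quick look at those settings without digging through the procedure, they are also exposed in catalog.catalog_properties; a query along these lines should show them:

SELECT property_name, property_value
FROM SSISDB.catalog.catalog_properties
WHERE property_name IN ('OPERATION_CLEANUP_ENABLED', 'RETENTION_WINDOW');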

Click here to view the rest of this post.

SQL Server Maintenance Plans Reporting and Logging

This post will focus on the reporting and logging option of maintenance plans.

When a maintenance plan executes, it’s nice to know the results, especially in case of a failure. You can view the results in a few different ways, including the following:

  • Maintenance Plan history
  • SQL Server Agent Job history
  • SQL Server Error Log
  • sp_readerrorlog
  • Maintenance Plan Reporting and Logging

Maintenance Plan Reporting and Logging Options

The Maintenance Plan reporting and logging option is enabled by default, but a lot of DBAs and developers don’t even realize it is an option, much less that it’s enabled.

To configure this option, open a maintenance plan and look at the top bar beside Manage Connections…. You’ll notice a little chart/paper icon. It’s not hidden, but it doesn’t jump out at you, and that’s probably why a lot of DBAs don’t pay any attention to it.


If you click the icon you’ll notice there are a few options to choose from:


Let’s go over each one of these:

Generate a text file report

This option allows you to enable or disable the text file report.

Create a new file

This option allows you to create a new report file each time a maintenance plan is executed. Create a new file is the default option, and the default folder location is the folder SQL Server uses as its LOG folder. You can specify a different location if preferred.

If you look at that folder in Windows Explorer, you’ll see a new file created for each execution.

You may notice that if you run a maintenance plan throughout the day that it could quickly fill up your drive with these 1kb files. Luckily, we don’t have to go in and check our file system and delete these files manually. SQL Server has a task that will automate this for us (see below).

While we’re on the subject, go ahead and check one of your servers: check the location of a maintenance plan text file and then check the folder on the server. Or maybe you’ve noticed these files while browsing through your LOG folder and weren’t sure where they came from.

Append to file

This option allows you to create one text file and append the results to that file. This reduces the number of files, but it increases the size of the file and, in my opinion, makes it more difficult to read.

Click here to view the rest of this post.