Database availability is critical to every enterprise; conversely, unavailability can directly cause a severe negative impact on the business in today's world. As database administrators, we must ensure we take every possible step to minimize data loss. While it would be naïve to think that such precautions make our databases invincible, we can always aim to bring the databases back into operation as quickly as possible by meeting the RPO (Recovery Point Objective) and RTO (Recovery Time Objective). This way, the business is unaffected, and we also meet our SLAs.
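As a rough illustration (hypothetical numbers, not from the article): the worst-case data loss is bounded by the log-backup interval, and the recovery time by the end-to-end restore duration, so RPO/RTO compliance can be checked with simple comparisons:

```python
# Illustrative sketch with made-up figures: check whether a backup
# schedule can meet a given RPO, and whether an estimated restore
# time fits within the RTO.

def meets_rpo(log_backup_interval_min: float, rpo_min: float) -> bool:
    # Worst-case data loss equals the gap between log backups.
    return log_backup_interval_min <= rpo_min

def meets_rto(restore_minutes: float, rto_min: float) -> bool:
    # The estimated end-to-end restore time must fit inside the RTO.
    return restore_minutes <= rto_min

# Hourly log backups fail a 15-minute RPO; 10-minute backups pass.
print(meets_rpo(60, 15))   # False
print(meets_rpo(10, 15))   # True
print(meets_rto(45, 60))   # True
```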
Apart from good knowledge of the system, it is important that we have a well-defined and tested backup-and-restore strategy. Testing the backup-and-restore procedure to ensure quick recovery with minimal administrative overhead is one means to that end. While advanced techniques such as AlwaysOn availability groups and database mirroring help ensure higher availability, disaster recovery ultimately comes down to well-defined and tested backup and restoration methods.
When planning disaster recovery techniques, here are some points about data volatility that we need to keep in mind:
We must plan our backup-and-restore strategy with these aspects in mind. And to ensure the strategy is as close to airtight as possible, we need to try all feasible combinations of backup and restoration, observe the nuances in the results, and tune the strategy accordingly. So, we must:
In this article, we touch upon the aforementioned points and decide on the requirements based on attributes such as database sizes, their internal details, and the frequency of changes. Let us now look at the considerations that are important when building a backup strategy.
By the end of the article, we will have a couple of scripts that present all the vital information in a convenient tabular format, helping us with planning and decision-making when it comes to backups and their restoration.
Take a look at the business requirements, as each database may have different requirements depending on the application it serves. The requirements may be based on:
This post demonstrates one of the ways to gather an inventory of database backup information. The output of the script includes various columns that show the internals of a database and its backup size, along with the latest completion statuses and the corresponding backup sizes. Though the output is derived from the msdb database, having the data consolidated in one place gives us better control and greater visibility into the backup process. A clear understanding of these parameters is a good way to forecast storage requirements. We can schedule a job to pull the data into a central repository and then develop a report on capacity planning and forecasting across a broader spectrum of database backups. That way, we have insight into the sizing of every backup type: Full, Differential, and Log. With such information, we can easily decide on the type of backup required at a very granular level, i.e., at the database level.
Going this way, we can optimize our investments in storage and backup. It helps us decide on alternatives if needed. The basis for every decision is accurate data; that accuracy gives us a baseline from which to choose viable options for backing up the SQL databases. The idea behind this implementation is to gather the useful data using a technique native to SQL Server, and have brainstorming sessions based on it, to identify the best design and solution for the backup problem.
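The kind of forecasting described above is easy to prototype once the msdb history sits in one place. As a minimal sketch (in Python, with made-up sample rows standing in for msdb.dbo.backupset data and an assumed schedule), per-type average sizes can be projected into a weekly storage estimate:

```python
# Hypothetical backup-history rows: (database, backup type, size in MB).
backups = [
    ("SalesDB", "full", 10240.0),
    ("SalesDB", "differential", 512.0),
    ("SalesDB", "log", 64.0),
    ("SalesDB", "log", 72.0),
]

# Assumed schedule: how many backups of each type run per week.
per_week = {"full": 1, "differential": 6, "log": 24 * 7}

def weekly_storage_mb(rows, schedule):
    # Average observed size per type, times weekly frequency, summed.
    from collections import defaultdict
    sums, counts = defaultdict(float), defaultdict(int)
    for _, btype, size in rows:
        sums[btype] += size
        counts[btype] += 1
    return sum((sums[t] / counts[t]) * schedule[t] for t in sums)

print(weekly_storage_mb(backups, per_week))  # 24736.0
```

With real msdb data, the same aggregation shows which databases dominate storage and whether differential or log backups deserve a different cadence.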
There are many ways to gather data into a central server repository, such as using T-SQL and PowerShell. In this section, I'm going to discuss the data-gathering tasks using PowerShell cmdlets.
The prerequisites are:
New cmdlets have been introduced with the SqlServer module, which replaces SQLPS while retaining its old functionality and adding a rich set of libraries. It is safe to remove SQLPS and load the SqlServer module instead.
You can remove SQLPS using the command below:
Remove-Module SQLPS
Load the SqlServer module using the command below:
Import-Module SqlServer
The new cmdlets Read-SqlTableData and Write-SqlTableData are introduced to load the data as a SQL table.
With these cmdlets, a query can be executed and the results stored in a very simple way; it's like storing everything as a table.
Invoke-Sqlcmd -Query "SELECT name AS object_name
     , SCHEMA_NAME(schema_id) AS schema_name
     , type_desc
     , create_date
     , modify_date
FROM sys.objects
WHERE modify_date > GETDATE() - 10
ORDER BY modify_date;" -OutputAs DataTables |
    Write-SqlTableData -ServerInstance hqdbst11 -DatabaseName SQLShackDemo -SchemaName dbo -TableName SQLSchemaChangeCapture -Force
Let’s run the T-SQL to see the output yielding the desired data; the T-SQL is available at . The output of the T-SQL is given below.
Let us now try to get the org-level information that would help us with planning. We’re going to use another T-SQL script, coupled with PowerShell. The PowerShell script is available at .
The PowerShell output window shows the data in a simple manner. With very few lines of code, the data is pulled and stored in a central repository (SqlShackDemo, in the BackupInfo table).
The T-SQL code is shown in gray. It’s just those few lines that do all the magic.
As administrators, we should understand the business impact that any amount of downtime causes. Business impact is measured by weighing the cost of the downtime and the loss of data against the cost of reducing the downtime and minimizing the data loss.
One of the avenues for cost optimization is proper backup and restoration. Optimizing costs requires decision-making, and decision-making requires data. We fetch the necessary data and store it in a centralized repository, which is a great place to understand how the system is functioning. This data also helps us generate projections for capacity planning and forecasting in terms of database backup.
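That cost trade-off can be expressed as simple arithmetic. A sketch (in Python, with hypothetical dollar figures): a mitigation is worth its price only when the downtime cost it avoids exceeds that price.

```python
# Hypothetical cost comparison: expected downtime cost versus the cost
# of a mitigation (e.g., a better backup/restore setup) that avoids it.

def downtime_cost(hours_down: float, cost_per_hour: float) -> float:
    return hours_down * cost_per_hour

def mitigation_worthwhile(hours_saved: float, cost_per_hour: float,
                          mitigation_cost: float) -> bool:
    # Invest only if the avoided downtime cost exceeds the mitigation cost.
    return downtime_cost(hours_saved, cost_per_hour) > mitigation_cost

# Saving 4 hours of downtime at $10k/hour justifies a $25k investment.
print(mitigation_worthwhile(4, 10_000, 25_000))  # True
```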
Having a good understanding of the data and the various test procedures for backup will help administrators prevent various forms of failure by maintaining a low MTTR (Mean Time To Recover) and a high MTBF (Mean Time Between Failures).
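These two metrics combine into the standard steady-state availability formula, availability = MTBF / (MTBF + MTTR). A quick check (illustrative numbers, not from the article):

```python
# Steady-state availability from MTBF and MTTR: the fraction of time
# the system is up is MTBF / (MTBF + MTTR).

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    return mtbf_hours / (mtbf_hours + mttr_hours)

# 1000 h between failures with a 1 h recovery gives ~99.9% availability.
print(round(availability(1000, 1) * 100, 2))  # 99.9
```

The formula makes the lever obvious: well-rehearsed restores shrink MTTR, which raises availability even when failures themselves cannot be prevented.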
Also, the use of the SQL Server PowerShell module along with the new cmdlets has made our lives a lot easier, in that the PowerShell script output can be stored as SQL tables with minimal effort. The SqlServer module replaces SQLPS; the old functionality still exists, but has been merged with a number of new features.
The implementation discussed in this post gives us ample data to decide on a good backup strategy and planning methodology that meets our business needs.
DECLARE @Database VARCHAR(50) = 'BI_DW'
--@Database VARCHAR(50) = NULL

-- collect base data
SELECT  a.[database_name]
      , a.backup_set_id
      , a.backup_start_date  AS backup_start_date_time
      , a.backup_finish_date AS backup_finish_date_time
      , a.[type]
      , CASE a.[type] WHEN 'D' THEN 'full'   WHEN 'L' THEN 'log'    WHEN 'I' THEN 'differential' ELSE 'other' END AS backup_type
      , CASE a.[type] WHEN 'D' THEN 'Weekly' WHEN 'L' THEN 'Hourly' WHEN 'I' THEN 'Daily'        ELSE 'other' END AS backup_freq_def
      , CAST(a.backup_finish_date AS DATE) AS backup_finish_date
      , a.backup_size/1024/1024.00            AS backup_size_MB
      , a.compressed_backup_size/1024/1024.00 AS cbackup_size_MB
      --, ROW_NUMBER() OVER(PARTITION BY a.[database_name], a.[type] ORDER BY a.backup_finish_date) AS Row_Seq
      , ROW_NUMBER() OVER(ORDER BY a.[database_name], a.[type], a.backup_finish_date DESC) AS Row_Seq -- this makes the last queries easier than the previous method
INTO    #Base
FROM    msdb.dbo.backupset a
WHERE   ( a.[database_name] = @Database OR @Database IS NULL )
AND     a.backup_finish_date >= DATEADD(day, -21, CAST(GETDATE() AS DATE)) -- get 21 days so at least 2 weeks of FULL backups come through; the next query filters to 14

-- create details from base
SELECT  a.[database_name]
      , a.backup_finish_date
      , a.[type]
      , a.backup_start_date_time
      , a.backup_finish_date_time
      , b.backup_finish_date_time AS prev_backup_finish_date_time
      , a.backup_type
      , CAST(a.backup_size_MB  AS DECIMAL(10,2)) AS backup_size_MB
      , CAST(b.backup_size_MB  AS DECIMAL(10,2)) AS prev_backup_size_MB
      , x.backup_size_MB_CHANGE
      , CAST(a.cbackup_size_MB AS DECIMAL(10,2)) AS cbackup_size_MB
      , CAST(b.cbackup_size_MB AS DECIMAL(10,2)) AS prev_cbackup_size_MB
      , x.cbackup_size_MB_CHANGE
      , a.Row_Seq
INTO    #Details
FROM    #Base a
INNER JOIN #Base b -- INNER: skip the row with no previous record; nothing to compare to except NULL
        ON  b.[database_name] = a.[database_name]
        AND b.[type]          = a.[type]
        AND b.Row_Seq         = a.Row_Seq + 1 -- get the previous row (to compare the difference in size)
CROSS APPLY ( SELECT  CAST(a.backup_size_MB  - b.backup_size_MB  AS DECIMAL(20,2)) AS backup_size_MB_CHANGE
                    , CAST(a.cbackup_size_MB - b.cbackup_size_MB AS DECIMAL(10,2)) AS cbackup_size_MB_CHANGE ) x
WHERE   a.backup_finish_date >= DATEADD(day, -14, CAST(GETDATE() AS DATE)) -- limit result set to the last 14 days of backups

-- Show details
SELECT  a.[database_name]
      , a.backup_finish_date
      , a.[type]
      , a.backup_start_date_time
      , a.backup_finish_date_time
      , a.prev_backup_finish_date_time
      , a.backup_type
      , a.backup_size_MB
      , a.prev_backup_size_MB
      , a.backup_size_MB_CHANGE
      , a.cbackup_size_MB
      , a.prev_cbackup_size_MB
      , a.cbackup_size_MB_CHANGE
FROM    #Details a
ORDER BY a.Row_Seq -- can order by this since all rows are sequenced, not partitioned first

-- Show the latest backup of each type per database, with size changes from the previous backup
SELECT  a.[database_name]
      , a.backup_finish_date
      , a.[type]
      , a.backup_start_date_time
      , a.backup_finish_date_time
      , a.prev_backup_finish_date_time
      , a.backup_type
      , a.backup_size_MB
      , a.prev_backup_size_MB
      , a.backup_size_MB_CHANGE
      , a.cbackup_size_MB
      , a.prev_cbackup_size_MB
      , a.cbackup_size_MB_CHANGE
      --, x.Row_Seq
FROM    #Details a
CROSS APPLY ( SELECT TOP 1 -- get the latest backup record per database per type
                     b.Row_Seq
              FROM   #Details b
              WHERE  b.[database_name] = a.[database_name]
              AND    b.[type]          = a.[type]
              ORDER BY b.backup_finish_date_time DESC ) x
WHERE   x.Row_Seq = a.Row_Seq -- only display the rows that match the latest backup per database per type
ORDER BY a.Row_Seq

-- get the latest backups ONLY per database, also showing the other backup types in the same row
SELECT  a.[database_name]
      --, a.backup_finish_date AS latest_backup_finish_date
      --, a.backup_type AS latest_backup_type
      --, a.backup_finish_date_time
      --, a.prev_backup_finish_date_time
      , ISNULL( ( a.backup_type + ' – '
                  + LTRIM(ISNULL(STR(ABS(DATEDIFF(DAY, GETDATE(), a.backup_finish_date_time))) + ' days ago', 'NEVER')) + ' – '
                  + CONVERT(VARCHAR(20), a.backup_start_date_time, 103) + ' ' + CONVERT(VARCHAR(20), a.backup_start_date_time, 108) + ' – '
                  + CONVERT(VARCHAR(20), a.backup_finish_date_time, 103) + ' ' + CONVERT(VARCHAR(20), a.backup_finish_date_time, 108)
                  + ' (' + CAST(DATEDIFF(second, a.backup_start_date_time, a.backup_finish_date_time) AS VARCHAR(4)) + ' seconds)' -- duration: start to finish
                ), '-' ) AS LastBackup
      , a.backup_size_MB        AS Last_backup_size_MB
      , a.prev_backup_size_MB
      , a.backup_size_MB_CHANGE
      , a.cbackup_size_MB       AS Last_cbackup_size_MB
      , a.prev_cbackup_size_MB
      , a.cbackup_size_MB_CHANGE
      --, x.Row_Seq
      , f.* -- full backup sizes: latest, previous, and change
      , d.* -- differential backup sizes: latest, previous, and change
      , l.* -- log backup sizes: latest, previous, and change
FROM    #Details a
CROSS APPLY ( SELECT TOP 1 -- get the latest backup record per database
                     b.Row_Seq
              FROM   #Details b
              WHERE  b.[database_name] = a.[database_name]
              ORDER BY b.backup_finish_date_time DESC ) x
CROSS APPLY ( SELECT TOP 1 -- get the latest full backup record per database
                     b.backup_size_MB        AS Last_FullBackup_size_MB
                   , b.prev_backup_size_MB   AS Prev_FullBackup_size_MB
                   , b.backup_size_MB_CHANGE AS FullBackup_size_MB_CHANGE
                   , b.cbackup_size_MB        AS Last_CFullBackup_size_MB
                   , b.prev_cbackup_size_MB   AS Prev_CFullBackup_size_MB
                   , b.cbackup_size_MB_CHANGE AS CFullBackup_size_MB_CHANGE
              FROM   #Details b
              WHERE  b.[database_name] = a.[database_name]
              AND    b.backup_type     = 'full'
              ORDER BY b.backup_finish_date_time DESC ) f
CROSS APPLY ( SELECT TOP 1 -- get the latest differential backup record per database
                     b.backup_size_MB        AS Last_DiffBackup_size_MB
                   , b.prev_backup_size_MB   AS Prev_DiffBackup_size_MB
                   , b.backup_size_MB_CHANGE AS DiffBackup_size_MB_CHANGE
                   , b.cbackup_size_MB        AS Last_CDiffBackup_size_MB
                   , b.prev_cbackup_size_MB   AS Prev_CDiffBackup_size_MB
                   , b.cbackup_size_MB_CHANGE AS CDiffBackup_size_MB_CHANGE
              FROM   #Details b
              WHERE  b.[database_name] = a.[database_name]
              AND    b.backup_type     = 'differential'
              ORDER BY b.backup_finish_date_time DESC ) d
CROSS APPLY ( SELECT TOP 1 -- get the latest log backup record per database
                     b.backup_size_MB        AS Last_LogBackup_size_MB
                   , b.prev_backup_size_MB   AS Prev_LogBackup_size_MB
                   , b.backup_size_MB_CHANGE AS LogBackup_size_MB_CHANGE
                   , b.cbackup_size_MB        AS Last_CLogBackup_size_MB
                   , b.prev_cbackup_size_MB   AS Prev_CLogBackup_size_MB
                   , b.cbackup_size_MB_CHANGE AS CLogBackup_size_MB_CHANGE
              FROM   #Details b
              WHERE  b.[database_name] = a.[database_name]
              AND    b.backup_type     = 'log'
              ORDER BY b.backup_finish_date_time DESC ) l
WHERE   x.Row_Seq = a.Row_Seq -- only display the rows that match the latest backup per database
ORDER BY a.Row_Seq

DROP TABLE #Base
DROP TABLE #Details
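The heart of the script above is the self-join on ROW_NUMBER, which pairs each backup with the previous backup of the same type for the same database in order to compute the size change. The same pairing logic, sketched in Python with hypothetical history rows:

```python
from collections import defaultdict

# Hypothetical rows: (database, backup type, sequence, size in MB),
# standing in for the #Base temp table built from msdb.dbo.backupset.
history = [
    ("SalesDB", "full", 1, 10000.0),
    ("SalesDB", "full", 2, 10250.0),
    ("SalesDB", "log", 1, 60.0),
    ("SalesDB", "log", 2, 75.0),
]

def size_changes(rows):
    # Group by (database, type), keeping chronological order.
    groups = defaultdict(list)
    for db, btype, order, size in sorted(rows, key=lambda r: r[2]):
        groups[(db, btype)].append(size)
    # Compare each backup to its predecessor within the group,
    # mirroring the b.Row_Seq = a.Row_Seq + 1 join.
    return {
        key: [cur - prev for prev, cur in zip(sizes, sizes[1:])]
        for key, sizes in groups.items()
    }

print(size_changes(history))
# {('SalesDB', 'full'): [250.0], ('SalesDB', 'log'): [15.0]}
```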
The T-SQL is executed across all the SQL instances listed in the input file.
$filepath = '\\hq6021\c$\serverlist.csv'

$query = @'
DECLARE
    --@Database VARCHAR(50) = 'BI_DW'
    @Database  VARCHAR(50) = NULL
  , @DaysBack  INT = 8
  , @StartDate DATE

SET @StartDate = ( SELECT DATEADD(day, -1 * @DaysBack, CAST(GETDATE() AS DATE)) )

DECLARE @Base AS TABLE -- match table data types, except for fields not in the table
(
    DBName         VARCHAR(128)
  , StartDateTime  DATETIME
  , FinishDateTime DATETIME
  , TypeCode       CHAR(1)
  , TypeDesc       VARCHAR(15)
  , FinishDate     DATE
  , SizeMB         NUMERIC(35,13)
  , cSizeMB        NUMERIC(35,13)
  , RowSeq         INT
)

-- collect base data
INSERT INTO @Base
SELECT  a.[database_name]     AS DBName
      , a.backup_start_date   AS StartDateTime
      , a.backup_finish_date  AS FinishDateTime
      , a.[Type]              AS TypeCode
      , CASE a.[Type] WHEN 'D' THEN 'full' WHEN 'L' THEN 'log' WHEN 'I' THEN 'differential' ELSE 'other' END AS TypeDesc
      , CAST(a.backup_finish_date AS DATE) AS FinishDate
      , a.backup_size/1024/1024.00            AS SizeMB
      , a.compressed_backup_size/1024/1024.00 AS cSizeMB
      , ROW_NUMBER() OVER(ORDER BY a.[database_name], a.[Type], a.backup_finish_date DESC) AS RowSeq
FROM    msdb.dbo.backupset a
WHERE   ( a.[database_name] = @Database OR @Database IS NULL )
AND     a.backup_finish_date >= DATEADD(week, -1, @StartDate) -- get an extra week so 2 weeks of FULL backups come through

-- create details from base
;WITH CTE_Details AS
(
    SELECT  a.DBName
          , a.FinishDate
          , a.TypeCode
          , a.StartDateTime
          , a.FinishDateTime
          , b.FinishDateTime AS FinishDateTime_prev
          , a.TypeDesc
          , x.SizeMB
          , x.SizeMB_prev
          , x.SizeMB_chng
          , x.cSizeMB
          , x.cSizeMB_Prev
          , x.cSizeMB_chng
          , a.RowSeq
    FROM    @Base a
    INNER JOIN @Base b -- INNER: skip the row with no previous record; nothing to compare to except NULL
            ON  b.DBName   = a.DBName
            AND b.TypeCode = a.TypeCode
            AND b.RowSeq   = a.RowSeq + 1 -- get the previous row to compare the difference in size
    CROSS APPLY ( SELECT  CAST(a.SizeMB  - b.SizeMB  AS DECIMAL(20,2)) AS SizeMB_chng
                        , CAST(a.cSizeMB - b.cSizeMB AS DECIMAL(10,2)) AS cSizeMB_chng
                        , CAST(a.SizeMB  AS DECIMAL(10,2)) AS SizeMB
                        , CAST(b.SizeMB  AS DECIMAL(10,2)) AS SizeMB_prev
                        , CAST(a.cSizeMB AS DECIMAL(10,2)) AS cSizeMB
                        , CAST(b.cSizeMB AS DECIMAL(10,2)) AS cSizeMB_Prev ) x
    WHERE   a.FinishDateTime >= @StartDate -- limit result set to the last @DaysBack days of backups
)
-- get the latest backups ONLY per database, also showing the other backup types in the same row,
-- plus the total log size since the last differential and the differential size since the last full
SELECT  @@SERVERNAME Servername
      , CONVERT(VARCHAR(25), a.DBName) AS dbName
      , CONVERT(VARCHAR(10), DATABASEPROPERTYEX(a.DBName, 'status')) [Status]
      , ( SELECT COUNT(1)           FROM sys.sysaltfiles WHERE DB_NAME(dbid) = DB.name AND groupid != 0 ) DataFiles
      , ( SELECT SUM((size*8)/1024) FROM sys.sysaltfiles WHERE DB_NAME(dbid) = DB.name AND groupid != 0 ) [DataMB]
      , ( SELECT COUNT(1)           FROM sys.sysaltfiles WHERE DB_NAME(dbid) = DB.name AND groupid  = 0 ) LogFiles
      , ( SELECT SUM((size*8)/1024) FROM sys.sysaltfiles WHERE DB_NAME(dbid) = DB.name AND groupid  = 0 ) [LogMB]
      , ( SELECT SUM((size*8)/1024) FROM sys.sysaltfiles WHERE DB_NAME(dbid) = DB.name AND groupid != 0 )
        + ( SELECT SUM((size*8)/1024) FROM sys.sysaltfiles WHERE DB_NAME(dbid) = DB.name AND groupid = 0 ) TotalSizeMB
      , CONVERT(sysname, DatabasePropertyEx(name, 'Recovery')) RecoveryModel
      , CONVERT(VARCHAR(20), crdate, 103) + ' ' + CONVERT(VARCHAR(20), crdate, 108) AS [Creationdate]
      , ISNULL( a.TypeDesc + ' – '
                + LTRIM(ISNULL(STR(ABS(DATEDIFF(DAY, GETDATE(), a.FinishDateTime))) + ' days ago', 'NEVER')) + ' – '
                + CONVERT(VARCHAR(20), a.StartDateTime, 103) + ' ' + CONVERT(VARCHAR(20), a.StartDateTime, 108) + ' – '
                + CONVERT(VARCHAR(20), a.FinishDateTime, 103) + ' ' + CONVERT(VARCHAR(20), a.FinishDateTime, 108)
                + ' (' + CAST(DATEDIFF(second, a.StartDateTime, a.FinishDateTime) AS VARCHAR(4)) + ' seconds)' -- duration: start to finish
              , '-' ) AS LastBackup
      , a.SizeMB  AS SizeMB_last
      , a.SizeMB_prev
      , a.SizeMB_chng
      , a.cSizeMB AS cSizeMB_last
      , a.cSizeMB_prev
      , a.cSizeMB_chng
      , f.* -- full backup sizes: latest, previous, and change
      , d.* -- differential backup sizes: latest, previous, and change
      , l.* -- log backup sizes: latest, previous, and change
      , ( SELECT SUM(c.SizeMB) -- total log size since the last differential
          FROM   CTE_Details c
          WHERE  c.TypeDesc = 'log'
          AND    c.DBName   = a.DBName
          AND    c.FinishDateTime >= d.Diff_FinishDateTime_last -- since the last differential
          GROUP BY c.DBName, c.TypeDesc ) AS Log_Total_SizeMB_SinceLastDiff
      , ( SELECT SUM(c.SizeMB) -- total differential size since the last full
          FROM   CTE_Details c
          WHERE  c.TypeDesc = 'differential'
          AND    c.DBName   = a.DBName
          AND    c.FinishDateTime >= f.Full_FinishDateTime_last -- since the last full
          GROUP BY c.DBName, c.TypeDesc ) AS Diff_Avg_SizeMB_SinceLastFull
FROM    CTE_Details a
INNER JOIN ( SELECT -- do this rather than CROSS APPLY; it doesn't perform a query for each row
                    MAX(b.FinishDateTime) AS FinishDateTime
                  , b.DBName
             FROM   CTE_Details b
             GROUP BY b.DBName ) x
        ON  x.DBName         = a.DBName
        AND x.FinishDateTime = a.FinishDateTime
--CROSS APPLY ( SELECT TOP 1 -- get the latest backup record per database
--                     b.RowSeq
--              FROM   CTE_Details b
--              WHERE  b.DBName = a.DBName
--              ORDER BY b.FinishDateTime DESC ) x
OUTER APPLY ( SELECT TOP 1 -- get the latest full backup record per database
                     b.SizeMB         AS Full_SizeMB_last
                   , b.SizeMB_prev    AS Full_SizeMB_prev
                   , b.SizeMB_chng    AS Full_SizeMB_chng
                   , b.cSizeMB        AS Full_cSizeMB_last
                   , b.cSizeMB_Prev   AS Full_cSizeMB_prev
                   , b.cSizeMB_chng   AS Full_cSizeMB_chng
                   , b.FinishDateTime AS Full_FinishDateTime_last
              FROM   CTE_Details b
              WHERE  b.DBName   = a.DBName
              AND    b.TypeDesc = 'full'
              ORDER BY b.FinishDateTime DESC ) f
OUTER APPLY ( SELECT TOP 1 -- get the latest differential backup record per database
                     b.SizeMB         AS Diff_SizeMB_last
                   , b.SizeMB_prev    AS Diff_SizeMB_prev
                   , b.SizeMB_chng    AS Diff_SizeMB_chng
                   , b.cSizeMB        AS Diff_cSizeMB_last
                   , b.cSizeMB_Prev   AS Diff_cSizeMB_prev
                   , b.cSizeMB_chng   AS Diff_cSizeMB_chng
                   , b.FinishDateTime AS Diff_FinishDateTime_last
              FROM   CTE_Details b
              WHERE  b.DBName   = a.DBName
              AND    b.TypeDesc = 'differential'
              ORDER BY b.FinishDateTime DESC ) d
OUTER APPLY ( SELECT TOP 1 -- get the latest log backup record per database
                     b.SizeMB         AS Log_SizeMB_last
                   , b.SizeMB_prev    AS Log_SizeMB_prev
                   , b.SizeMB_chng    AS Log_SizeMB_chng
                   , b.cSizeMB        AS Log_cSizeMB_last
                   , b.cSizeMB_Prev   AS Log_cSizeMB_prev
                   , b.cSizeMB_chng   AS Log_cSizeMB_chng
                   , b.FinishDateTime AS Log_FinishDateTime_last
              FROM   CTE_Details b
              WHERE  b.DBName   = a.DBName
              AND    b.TypeDesc = 'log'
              ORDER BY b.FinishDateTime DESC ) l
INNER JOIN sys.sysdatabases DB
        ON  DB.name = a.DBName
--WHERE x.RowSeq = a.RowSeq -- only display the rows that match the latest backup per database
'@

Import-Csv $filepath -Header ServerName | ForEach-Object {
    Write-Output "processing $($_.ServerName)"
    Invoke-Sqlcmd -ServerInstance $_.ServerName -Database master -OutputAs DataTables -Query $query |
        Write-SqlTableData -ServerInstance hqdbt01 -DatabaseName SQLShackDemo -SchemaName dbp -TableName BackupInfoTable -Force
}
Translated from:
Reprint source: http://pwnwd.baihongyu.com/