
How to minimize/plan ahead the effect of running sp_delete_backuphistory? Measure the gains too!


While running the following query:



-- DATEADD (datepart , number , date )  

DECLARE @dt DATETIME
SELECT @dt = DATEADD(month,-6,getdate())
select @dt
EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @dt


I get lots of locks and blocks, possibly because this command has not been run for a while, if ever.



[Screenshot of the resulting locks and blocks omitted.]



Is there a way to find out how much there is to delete in each of the involved tables before I actually run, or plan to run, this command?



I use DATEADD to calculate the 6-month cutoff date.



Within sp_delete_backuphistory the following tables are trimmed:




sp_delete_backuphistory must be run from the msdb database and affects
the following tables:

  • backupfile
  • backupfilegroup
  • backupmediafamily
  • backupmediaset
  • backupset
  • restorefile
  • restorefilegroup
  • restorehistory
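
To estimate how much there is to delete before actually running the purge, a minimal sketch along the following lines can be run against msdb. It is only an illustration: it assumes the backup_finish_date cutoff used in the query above and the backup_set_id relationships between these tables; the remaining tables (restorefile, restorefilegroup, backupmediafamily, backupmediaset) follow the same pattern through restore_history_id and media_set_id.

-- Rough pre-check: how many rows each history table stands to lose for a given cutoff.
DECLARE @oldest_date DATETIME = DATEADD(month, -6, GETDATE());

SELECT 'backupset' AS table_name, COUNT(*) AS rows_to_delete
FROM msdb.dbo.backupset
WHERE backup_finish_date < @oldest_date
UNION ALL
SELECT 'backupfile', COUNT(*)
FROM msdb.dbo.backupfile
WHERE backup_set_id IN (SELECT backup_set_id FROM msdb.dbo.backupset
                        WHERE backup_finish_date < @oldest_date)
UNION ALL
SELECT 'backupfilegroup', COUNT(*)
FROM msdb.dbo.backupfilegroup
WHERE backup_set_id IN (SELECT backup_set_id FROM msdb.dbo.backupset
                        WHERE backup_finish_date < @oldest_date)
UNION ALL
SELECT 'restorehistory', COUNT(*)
FROM msdb.dbo.restorehistory
WHERE backup_set_id IN (SELECT backup_set_id FROM msdb.dbo.backupset
                        WHERE backup_finish_date < @oldest_date);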











Tags: sql-server, backup, delete, scripting, monitoring






asked 1 hour ago by marcello miorelli

  • The msdb database doesn't have indexes on the system tables, so if you have had many backup/restore operations over time, purging records might take a long time. I don't know how to see the records to delete beforehand, but you can try creating indexes as suggested in the following post and doing your purge in batches (with very old dates first, then closer to the last 6 months); a sketch of that batched approach follows these comments. weblogs.sqlteam.com/geoffh/2008/01/21/msdb-performance-tuning – EzLo, 1 hour ago

  • @EzLo thank you for the link MSDB Performance Tuning – marcello miorelli, 55 mins ago
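
Below is a minimal sketch of the batched approach described in the comment above. The index definitions and the month-by-month loop are illustrative assumptions, not taken from the linked post, and should be tried on a non-production copy first.

-- Illustrative supporting indexes (index names are made up for this sketch).
CREATE NONCLUSTERED INDEX IX_backupset_backup_finish_date
    ON msdb.dbo.backupset (backup_finish_date);
CREATE NONCLUSTERED INDEX IX_restorehistory_backup_set_id
    ON msdb.dbo.restorehistory (backup_set_id);

-- Purge month by month, oldest history first, until only the last 6 months remain.
DECLARE @cutoff    DATETIME = DATEADD(month, -6, GETDATE());
DECLARE @oldest    DATETIME;
DECLARE @batch_end DATETIME;

SELECT @oldest = MIN(backup_finish_date) FROM msdb.dbo.backupset;

WHILE @oldest IS NOT NULL AND @oldest < @cutoff
BEGIN
    SET @batch_end = DATEADD(month, 1, @oldest);
    IF @batch_end > @cutoff SET @batch_end = @cutoff;

    EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @batch_end;

    SELECT @oldest = MIN(backup_finish_date) FROM msdb.dbo.backupset;
END;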














2 Answers
My gripes with this proc go back a long way:




  • The Annals of Hilariously Bad Code, Part 1: Critique the Code

  • The Annals of Hilariously Bad Code, Part 2


The problem you run into when deleting large amounts of data is the crappy estimate you get from the table variables.



I've had pretty good luck creating a new version of the proc using temp tables. You could also try just adding recompile hints, but hey, this way we get useful indexes.
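
As an aside (not part of the original answer), the recompile-hint alternative would look roughly like the following on one of the deletes; the table variable here is a self-contained stand-in, not necessarily the one the shipped procedure populates.

-- Sketch only: OPTION (RECOMPILE) lets the optimizer see the real row count in
-- the table variable at execution time instead of guessing at it.
DECLARE @backup_set_ids TABLE (backup_set_id INT PRIMARY KEY);

INSERT INTO @backup_set_ids (backup_set_id)
SELECT DISTINCT backup_set_id
FROM msdb.dbo.backupset
WHERE backup_finish_date < DATEADD(month, -6, GETDATE());

DELETE FROM msdb.dbo.backupfile
WHERE backup_set_id IN (SELECT backup_set_id FROM @backup_set_ids)
OPTION (RECOMPILE);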



As a side note: if you still run into this blocking because it runs for a long time, you can try either removing the transaction code, or changing it so that each individual delete is wrapped in its own transaction (though at that point the benefits are negligible); a sketch of the per-statement variant follows the procedure.



CREATE PROCEDURE [dbo].[sp_delete_backuphistory_pro]
    @oldest_date datetime
AS
BEGIN
    SET NOCOUNT ON

    -- Temp tables instead of table variables, so the optimizer gets real row counts
    -- and the id lookups below can use the clustered primary keys.
    CREATE TABLE #backup_set_id      (backup_set_id      INT PRIMARY KEY CLUSTERED)
    CREATE TABLE #media_set_id       (media_set_id       INT PRIMARY KEY CLUSTERED)
    CREATE TABLE #restore_history_id (restore_history_id INT PRIMARY KEY CLUSTERED)

    -- Collect the ids of everything that falls behind the cutoff.
    INSERT INTO #backup_set_id WITH (TABLOCKX) (backup_set_id)
    SELECT DISTINCT backup_set_id
    FROM msdb.dbo.backupset
    WHERE backup_finish_date < @oldest_date

    INSERT INTO #media_set_id WITH (TABLOCKX) (media_set_id)
    SELECT DISTINCT media_set_id
    FROM msdb.dbo.backupset
    WHERE backup_finish_date < @oldest_date

    INSERT INTO #restore_history_id WITH (TABLOCKX) (restore_history_id)
    SELECT DISTINCT restore_history_id
    FROM msdb.dbo.restorehistory
    WHERE backup_set_id IN (SELECT backup_set_id FROM #backup_set_id)

    -- Delete child rows first, then parents, bailing out on the first error.
    BEGIN TRANSACTION

    DELETE FROM msdb.dbo.backupfile
    WHERE backup_set_id IN (SELECT backup_set_id FROM #backup_set_id)
    IF (@@error > 0)
        GOTO Quit

    DELETE FROM msdb.dbo.backupfilegroup
    WHERE backup_set_id IN (SELECT backup_set_id FROM #backup_set_id)
    IF (@@error > 0)
        GOTO Quit

    DELETE FROM msdb.dbo.restorefile
    WHERE restore_history_id IN (SELECT restore_history_id FROM #restore_history_id)
    IF (@@error > 0)
        GOTO Quit

    DELETE FROM msdb.dbo.restorefilegroup
    WHERE restore_history_id IN (SELECT restore_history_id FROM #restore_history_id)
    IF (@@error > 0)
        GOTO Quit

    DELETE FROM msdb.dbo.restorehistory
    WHERE restore_history_id IN (SELECT restore_history_id FROM #restore_history_id)
    IF (@@error > 0)
        GOTO Quit

    DELETE FROM msdb.dbo.backupset
    WHERE backup_set_id IN (SELECT backup_set_id FROM #backup_set_id)
    IF (@@error > 0)
        GOTO Quit

    -- Media rows are only removed once no remaining backup set references them.
    DELETE msdb.dbo.backupmediafamily
    FROM msdb.dbo.backupmediafamily bmf
    WHERE bmf.media_set_id IN (SELECT media_set_id FROM #media_set_id)
      AND ((SELECT COUNT(*)
            FROM msdb.dbo.backupset
            WHERE media_set_id = bmf.media_set_id) = 0)
    IF (@@error > 0)
        GOTO Quit

    DELETE msdb.dbo.backupmediaset
    FROM msdb.dbo.backupmediaset bms
    WHERE bms.media_set_id IN (SELECT media_set_id FROM #media_set_id)
      AND ((SELECT COUNT(*)
            FROM msdb.dbo.backupset
            WHERE media_set_id = bms.media_set_id) = 0)
    IF (@@error > 0)
        GOTO Quit

    COMMIT TRANSACTION
    RETURN

Quit:
    ROLLBACK TRANSACTION

END
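
To illustrate the side note above (this is not part of the original answer), wrapping each delete in its own short transaction could look like the following; the same pattern would repeat for each history table.

-- Sketch: one delete in its own transaction instead of one large transaction.
DECLARE @oldest_date DATETIME = DATEADD(month, -6, GETDATE());

BEGIN TRANSACTION;

DELETE FROM msdb.dbo.backupfile
WHERE backup_set_id IN (SELECT backup_set_id
                        FROM msdb.dbo.backupset
                        WHERE backup_finish_date < @oldest_date);

IF (@@ERROR > 0)
    ROLLBACK TRANSACTION;
ELSE
    COMMIT TRANSACTION;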





answered 36 mins ago (edited 26 mins ago) by Erik Darling
  • +1 Now that's my man! Keep training those biceps and improving those procedures! I will accept the answer straight away, I am sure you have done all the testing! – marcello miorelli, 33 mins ago

  • @marcellomiorelli 😂 – Erik Darling, 30 mins ago



















Here is something you could try.




  • Restore a backup of your MSDB database to a test server and call it
    something like MSDB_TEST.

  • Once restored, go into the sp_delete_backuphistory stored procedure
    in the MSDB_TEST database, search/replace msdb. with
    msdb_test., and alter it.

  • Capture the current row count of the tables you are interested in
    (see the sketch after this list).

  • Now, run the altered version of the sp_delete_backuphistory stored procedure in the MSDB_TEST database.

  • Compare the current row counts to the previously captured ones.
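
A hedged sketch of the row-count capture step (not part of the original answer); it assumes the restored copy is named MSDB_TEST as in the steps above, and uses sys.partitions so all the counts come back in one result set.

-- Capture row counts for the backup/restore history tables in the restored copy.
-- Run before and after executing the altered procedure, then compare the results.
SELECT t.name      AS table_name,
       SUM(p.rows) AS row_count
FROM MSDB_TEST.sys.tables AS t
JOIN MSDB_TEST.sys.partitions AS p
    ON  p.object_id = t.object_id
    AND p.index_id IN (0, 1)   -- heap or clustered index
WHERE t.name IN ('backupfile', 'backupfilegroup', 'backupmediafamily',
                 'backupmediaset', 'backupset', 'restorefile',
                 'restorefilegroup', 'restorehistory')
GROUP BY t.name
ORDER BY t.name;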






answered 1 hour ago by Scott Hodgin


  • +1 for always being helpful - Yes that could be a way - one that I was trying to avoid to be honest – marcello miorelli, 53 mins ago

  • @marcellomiorelli - understood - I'm afraid that's all I have for this problem. – Scott Hodgin, 50 mins ago










