Archive for January, 2013

Resolution for the Maintenance plan error “The INSERT statement conflicted with the FOREIGN KEY constraint”

January 26, 2013 3 comments

Topic: Resolution for the Maintenance plan error “The INSERT statement conflicted with the FOREIGN KEY constraint ‘FK_sysmaintplan_log_subplan_id’.” in SQL Server 2008 R2.



We faced a strange issue in SQL Server 2008 R2: all of the backup maintenance plans suddenly started failing with the above error.


Reason: This is caused by multiple connections stored within the maintenance plan. When we create or edit a maintenance plan from another instance, it stores an additional connection. This typically happens when we register the SQL Server from another box and modify an existing maintenance plan from there: the plan automatically saves a new connection and then tries to insert the job history through that new connection, causing the FK violation, and the job fails.


Resolution: Upgrade the SQL Server 2008 R2 environment to Service Pack 2 to resolve the issue permanently, or recreate the maintenance plan from scratch to match the old one. To avoid such incidents, always log in to the same SQL Server instance whenever you wish to update or modify a maintenance plan, and do not connect to the server from another box.
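To see which subplans the maintenance plan log rows are allowed to reference, you can inspect the msdb metadata tables behind the failing foreign key. A read-only diagnostic sketch (column names per the msdb system tables; verify on your build):

```sql
-- Diagnostic sketch: list the subplans that rows in sysmaintplan_log must
-- reference through FK_sysmaintplan_log_subplan_id. If the failing job step
-- carries a subplan_id that is not in this list, the plan is holding a
-- stale or extra connection.
USE msdb;

SELECT sp.subplan_id,
       sp.subplan_name,
       sp.plan_id
FROM dbo.sysmaintplan_subplans AS sp
ORDER BY sp.subplan_name;
```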

Resolution for Microsoft OLE DB Provider for SQL Server: Creating or altering table ‘Test’ failed because the minimum row size would be 8204, including 255 bytes of internal overhead. This exceeds the maximum allowable table row size of 8060 bytes.

January 26, 2013 Leave a comment

We faced this issue a few weeks back and followed the workaround described below.

Workaround: The error log clearly indicates that we are trying to create a table whose FIXED-length data columns exceed 8060 bytes. Row-overflow data, i.e. rows larger than 8060 bytes, is only possible with VARIABLE-length data types.

If we change our table syntax to the following, it works without issue: CREATE TABLE dbo.Test (x char(60) NOT NULL, y varchar(8000) NOT NULL)

Some important facts:

  • A table row can hold more than 8060 bytes (2 GB max).
  • varchar, nvarchar, varbinary, sql_variant, and CLR user-defined type columns can each hold a maximum of 8000 bytes.
  • varchar(max), nvarchar(max), varbinary(max), text, image, and xml data type columns have no such restriction.
  • All other data type columns (those not covered by the three points above) must still fit, in total, under the 8060-byte row limit.
  • An index can only be created on columns that fall within the 8060-byte row limit.

Note: The length of individual columns must still fall within the limit of 8,000 bytes for varchar, nvarchar, varbinary, sql_variant, and CLR user-defined type columns. Only their combined lengths can exceed the 8,060-byte row limit of a table.
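The behaviour above can be demonstrated with a short script. This is a sketch; the table names dbo.OverflowDemo and dbo.FixedFail are made up for illustration:

```sql
-- Variable-length columns whose combined maximum (10000 bytes) exceeds
-- 8060 bytes: the CREATE succeeds, and oversized rows are pushed to
-- row-overflow pages.
CREATE TABLE dbo.OverflowDemo (
    a varchar(5000) NOT NULL,
    b varchar(5000) NOT NULL
);

INSERT INTO dbo.OverflowDemo
VALUES (REPLICATE('x', 5000), REPLICATE('y', 5000));  -- stored via row-overflow

-- By contrast, fixed-length columns totalling 10000 bytes fail at
-- creation time with the error discussed above:
-- CREATE TABLE dbo.FixedFail (a char(5000) NOT NULL, b char(5000) NOT NULL);
```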

Categories: SQL Server 2008

How do I know which advanced features are used in a database in SQL Server 2008 Enterprise Edition

January 26, 2013 Leave a comment

How do I know which advanced features are used in a database in SQL Server 2008 Enterprise Edition


As we know, all features are available in Enterprise Edition, but not all of them are available in Standard Edition. If a business requirement forces us to move a database from Enterprise to Standard, it is important to know which Enterprise features have been enabled. To retrieve them, you just need to issue the simple SQL statement given here:


SELECT * FROM sys.dm_db_persisted_sku_features (only available from SQL Server 2008 onwards)


Features available in Enterprise Edition that are not available in Standard Edition:


Compression: indicates that at least one table or index uses data compression

Partitioning: indicates that the database contains partitioned tables, indexes, schemes, or functions

Transparent Data Encryption: indicates that the database has been encrypted with transparent data encryption

Change Capture: indicates that the database has change data capture enabled


For example, if even one table in the database is compressed, the output indicates:




feature_name        feature_id
Compression         100
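To run the same check across every user database at once, the undocumented but widely used sp_MSforeachdb procedure can iterate the DMV for you. A sketch (test before relying on an undocumented procedure):

```sql
-- Sketch: report persisted Enterprise-only features for every user database.
-- DB_ID() > 4 skips master, tempdb, model and msdb.
EXEC sp_MSforeachdb
    'USE [?];
     IF DB_ID() > 4
         SELECT DB_NAME() AS database_name, feature_name, feature_id
         FROM sys.dm_db_persisted_sku_features;';
```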

Categories: SQL Server 2008

Modified UPDATE STATISTICS script with the FULLSCAN option for performance improvement

January 26, 2013 Leave a comment

Topic: Modified UPDATE STATISTICS script with the FULLSCAN option for performance improvement



As we are aware, the system stored procedure sp_updatestats does not have the ability to update statistics with the FULLSCAN option, so the optimizer may not be able to build the best execution plan, and I have observed that most indexes are skipped. Because of this, over the last couple of months we faced severe performance issues on most of the servers in our project. I have modified the script to use the full-scan parameter: Exec sp_MSForEachtable 'update statistics ? with fullscan'.


I have tested it in our Citagus test environment and it works as expected. I captured the index statistics report before and after to make sure the script works perfectly. If the statistics have not been updated for a long time, the initial run will take longer, but completion time decreases once it runs on a daily basis, because only the tables that require it are updated. Please check it from your end and share your comments. You can wrap the script below in a stored procedure and implement it as a job, or put it directly into a job step, to ensure it runs on the schedule appropriate for your environment.


Note: The script is applicable to SQL Server 2005 and SQL Server 2008.





DECLARE @dbname varchar(1000),
        @parentname varchar(255),
        @SQLString varchar(1000),
        @ctrl char(2),
        @command varchar(1000)

SET @ctrl = CHAR(13) + CHAR(10)

DECLARE UpdateCursor CURSOR FOR
    select [name]
    from sys.sysdatabases
    where name not in ('tempdb')  -- example exclusion only; adjust the list for your environment
    order by 1

OPEN UpdateCursor

FETCH NEXT FROM UpdateCursor INTO @dbname

WHILE @@FETCH_STATUS = 0
BEGIN
    select @command = 'use [' + @dbname + ']' + @ctrl +
        'Exec sp_MSForEachtable ''update statistics ? with fullscan'''

    exec (@command)

    FETCH NEXT FROM UpdateCursor INTO @dbname
END

CLOSE UpdateCursor
DEALLOCATE UpdateCursor
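As suggested above, the cursor script can be wrapped in a stored procedure so a SQL Agent job step only has to call it. A minimal sketch, assuming the hypothetical name dbo.usp_UpdateStatsFullScan (not from the original post):

```sql
-- Sketch: update statistics WITH FULLSCAN in every online user database.
-- The procedure name is a hypothetical example.
CREATE PROCEDURE dbo.usp_UpdateStatsFullScan
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @dbname sysname, @command nvarchar(2000);

    DECLARE db_cur CURSOR FOR
        SELECT name
        FROM sys.databases
        WHERE database_id > 4          -- skip system databases
          AND state_desc = 'ONLINE';

    OPEN db_cur;
    FETCH NEXT FROM db_cur INTO @dbname;

    WHILE @@FETCH_STATUS = 0
    BEGIN
        SET @command = 'USE ' + QUOTENAME(@dbname) +
            '; EXEC sp_MSforeachtable ''UPDATE STATISTICS ? WITH FULLSCAN'';';
        EXEC (@command);
        FETCH NEXT FROM db_cur INTO @dbname;
    END

    CLOSE db_cur;
    DEALLOCATE db_cur;
END
```

A job step would then simply run EXEC dbo.usp_UpdateStatsFullScan.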



Rectified a page corruption issue using the page restore option

January 26, 2013 1 comment

Rectified a page corruption issue using the page restore option

Database corruption is a very common issue in the DBA world, but we can restore just the specific page that is corrupted using the page restore option. We implemented this successfully in our project and wrote the article below. Suppose the DBCC CHECKDB job has failed and produced the following output:

Object ID 2088535921, index ID 0, partition ID 72345201021503994, alloc unit ID 72345201051571606 (type In-row data): Page (1:94299) could not be processed. See other errors for details.
Msg 8939, Level 16, State 98, Line 1
Table error: Object ID 2088535921, index ID 0, partition ID 72345201021503994, alloc unit ID 72345201051571606 (type In-row data), page (1:94299). Test (IS_OFF (BUF_IOERR, pBUF->bstat)) failed.
CHECKDB found 0 allocation errors and 2 consistency errors in table ‘DimSalesReason’ (object ID 2088535921).
CHECKDB found 0 allocation errors and 2 consistency errors in database ‘AdventureWorksLT2008’.
repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (AdventureWorksLT2008).


Workaround:

From this output you can see which page is corrupted (1:94299).

The first thing to do is check whether it is data in a heap, in a clustered index, or in a non-clustered index. In the text above you can see it is index ID 0. You could also examine the page (1:94299 in database ‘yourdb’) as follows:

DBCC TRACEON (3604, -1)
DBCC PAGE('AdventureWorksLT2008', 1, 94299, 3)
In the output you will see something like:

Metadata: IndexId = n

Note: If n is greater than 1 it is a non-clustered index and can safely be dropped and recreated. If n is 0 or 1 you have data corruption and need to perform one of the options described below.
Option 1: Restoring the specific corrupted page from a backup
If the recovery model is FULL (or BULK_LOGGED, with some limitations), you can back up the tail of the log, perform a restore (with no recovery) from the last clean full backup, followed by subsequent log backups and finally the tail of the log.

If only a few pages are affected you have the option of selectively restoring only the bad pages, as follows:

RESTORE DATABASE AdventureWorksLT2008 PAGE = '1:94299'
FROM DISK = 'E:\AdventureWorksLT2008.bak'
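The single RESTORE ... PAGE statement above is only the first step; the page must then be rolled forward with the log backups and the tail of the log, as described earlier. A sketch of the full sequence (file names and paths are illustrative):

```sql
-- Sketch of a page restore under the FULL recovery model.
-- All paths and file names are illustrative.

-- 1. Restore the corrupted page from the last clean full backup, no recovery.
RESTORE DATABASE AdventureWorksLT2008
    PAGE = '1:94299'
    FROM DISK = 'E:\AdventureWorksLT2008.bak'
    WITH NORECOVERY;

-- 2. Back up the tail of the log.
BACKUP LOG AdventureWorksLT2008
    TO DISK = 'E:\AdventureWorksLT2008_tail.trn';

-- 3. Restore any subsequent log backups WITH NORECOVERY, then finish with
--    the tail of the log WITH RECOVERY to bring the page fully up to date.
RESTORE LOG AdventureWorksLT2008
    FROM DISK = 'E:\AdventureWorksLT2008_tail.trn'
    WITH RECOVERY;
```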
Option 2: If the recovery model is SIMPLE you don't have that option, and you must accept that a restore from the last full backup will lose subsequent transactions. In this case, or if you have no backups at all, you may decide that an automatic repair is the only option.





Categories: Backup and Restore

Delete specific files using the SQL Server xp_cmdshell command

January 26, 2013 Leave a comment

Topic: Delete specific files using the SQL Server xp_cmdshell command


We use a centralized backup server to keep the backups, and it requires some maintenance to delete old files. Use the scripts below to delete backup files with the xp_cmdshell command from SQL Server.


Delete all .sql files in the C:\Backup directory and its subfolders where the file modified date is 1/20/2013 or earlier:


EXEC xp_cmdshell 'FORFILES /p c:\BACKUP /s /m *.sql /d -1/20/2013 /c "CMD /C del /Q /F @FILE"'


Delete all .sql files in the C:\Backup directory and its subfolders where the file modified date is more than 30 days old.


EXEC xp_cmdshell 'FORFILES /p c:\BACKUP /s /m *.sql /d -30 /c "CMD /C del /Q /F @FILE"'


Delete all .sql files in the C:\Backup directory and its subfolders where the file modified date is more than 30 days old and the file name starts with “F_”:


EXEC xp_cmdshell 'FORFILES /p c:\BACKUP /s /m F_*.sql /d -30 /c "CMD /C del /Q /F @FILE"'
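Note that xp_cmdshell is disabled by default for security reasons; it must be enabled once (by a sysadmin) before any of the commands above will run:

```sql
-- Enable xp_cmdshell (disabled by default; requires sysadmin).
-- Consider the security implications before enabling it on production.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;
```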

Rebuild job failure in a SQL Server 2008 R2 environment due to incorrect settings: ‘QUOTED_IDENTIFIER’

January 26, 2013 2 comments

We have received the below error:




ALTER INDEX failed because the following SET options have incorrect settings: ‘QUOTED_IDENTIFIER’. Verify that SET options are correct for use with indexed views and/or indexes on computed columns and/or filtered indexes and/or query notifications and/or XML data type methods and/or spatial index operations. [SQLSTATE 42000] (Error 1934).  The step failed.





Paste the required SET statements at the top of the rebuild job step.
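The exact lines pasted in the original post were lost. Based on the error message, a typical fix is to set the options explicitly at the top of the job step; the following is a sketch, not the original post's exact text:

```sql
-- Sketch: set the options named by error 1934 explicitly at the top of the
-- index-rebuild job step. QUOTED_IDENTIFIER is the one the error names;
-- ANSI_NULLS is commonly set alongside it.
SET QUOTED_IDENTIFIER ON;
SET ANSI_NULLS ON;
-- ...followed by the existing ALTER INDEX ... REBUILD statements.
```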










SET ANSI_NULLS and SET QUOTED_IDENTIFIER are two of the most widely used SET options in SQL Server. Most developers explicitly set these options while creating stored procedures, triggers, and user-defined functions, but many are unclear on why we need to SET them explicitly, and why they are special compared to other options.

The default behavior is ON in any database. As per SQL Server 2008 Books Online, these options will always be set to ON in future releases of SQL Server, and any explicit SET to OFF will result in an error. So avoid explicitly setting them to OFF in future development work.

These two SET options are special because whenever a stored procedure, trigger, or user-defined function is created or modified with these options explicitly SET, SQL Server remembers those settings in the associated object metadata. Every time the object (stored procedure, trigger, etc.) is executed, SQL Server uses the stored settings irrespective of the current user session settings. So the behavior of the stored procedure is not altered by the calling session's settings, and the SET option behavior inside the procedure is always guaranteed.

We can retrieve these settings for any procedure, trigger, or function from the sys.sql_modules metadata view.

SELECT uses_ansi_nulls, uses_quoted_identifier  FROM sys.sql_modules WHERE object_id = object_id('SampleProcedure')
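To find every module in the database that was created with one of these options OFF, and is therefore a candidate for the rebuild failure above, you can scan sys.sql_modules. A sketch:

```sql
-- Sketch: list modules created with QUOTED_IDENTIFIER or ANSI_NULLS OFF.
SELECT OBJECT_SCHEMA_NAME(object_id) AS schema_name,
       OBJECT_NAME(object_id)        AS module_name,
       uses_ansi_nulls,
       uses_quoted_identifier
FROM sys.sql_modules
WHERE uses_quoted_identifier = 0
   OR uses_ansi_nulls = 0;
```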
Categories: SQL Server 2008