Microsoft SQL Server – TempDB usage per active session


Lists the TempDB usage for each active session.
It helps identify the sessions that use tempdb heavily for internal objects.

When internal-object usage is high, the session is probably using large hash tables or spooling to worktables. It can be a symptom of an inefficient plan or a missing index.

Shrinking a tempdb full of internal objects will probably have no effect, because the engine will not release the deallocated space. The only alternatives are restarting the service or killing the specified SPID after receiving approval from the business owner.

It helps to identify the root cause of why tempdb is growing rapidly.

Script: I have already tested this in our Citagus test environment and it works as expected.

;WITH task_space_usage AS (
    -- SUM alloc/dealloc pages
    SELECT session_id,
           request_id,
           SUM(internal_objects_alloc_page_count) AS alloc_pages,
           SUM(internal_objects_dealloc_page_count) AS dealloc_pages
    FROM sys.dm_db_task_space_usage WITH (NOLOCK)
    WHERE session_id <> @@SPID
    GROUP BY session_id, request_id
)
SELECT TSU.session_id,
       TSU.alloc_pages * 1.0 / 128 AS [internal object MB space],
       TSU.dealloc_pages * 1.0 / 128 AS [internal object dealloc MB space],
       EST.text,
       -- Extract statement from sql text
       ISNULL(
           NULLIF(
               SUBSTRING(
                   EST.text,
                   ERQ.statement_start_offset / 2,
                   CASE WHEN ERQ.statement_end_offset < ERQ.statement_start_offset
                        THEN 0
                        ELSE (ERQ.statement_end_offset - ERQ.statement_start_offset) / 2
                   END
               ), ''
           ), EST.text
       ) AS [statement text],
       EQP.query_plan
FROM task_space_usage AS TSU
INNER JOIN sys.dm_exec_requests ERQ WITH (NOLOCK)
    ON  TSU.session_id = ERQ.session_id
    AND TSU.request_id = ERQ.request_id
OUTER APPLY sys.dm_exec_sql_text(ERQ.sql_handle) AS EST
OUTER APPLY sys.dm_exec_query_plan(ERQ.plan_handle) AS EQP
WHERE EST.text IS NOT NULL OR EQP.query_plan IS NOT NULL
ORDER BY 3 DESC, 5 DESC

By: Rajib Kundu

Posted in Microsoft SqlServer Administration

6 Jun

Should I take the backup with or without compression? Which option is best?

In our day-to-day work, business owners frequently ask this question.

Explanation:

File Size Summary
The SQL 2008 Backup without compression was 3.5GB
The SQL 2008 Backup with compression was 986MB. The reduced space is 2.5GB; the compressed file is only 28% of the original size!

Note: Not all databases will have this compression factor or CPU usage due to systems and data types etc.

The SQL 2008 Backup without compression took approx 6 minutes and the SQL 2008 Backup with compression took approx 3 minutes. You might be asking – how is that possible?

Ans: The CPU is used to compress the data before it is written to disk, so LESS data is written to disk. That is why the backup job takes less time with the compression feature.
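The comparison above can be reproduced with a plain BACKUP statement; this is a minimal sketch, and the backup paths are placeholders, not values from the test:

```sql
-- Compressed backup: the COMPRESSION option overrides the
-- server-level 'backup compression default' setting.
BACKUP DATABASE [AdventureWorks]
TO DISK = N'E:\Backup\AdventureWorks_compressed.bak'
WITH COMPRESSION, INIT, STATS = 10;

-- For comparison, the same backup without compression.
BACKUP DATABASE [AdventureWorks]
TO DISK = N'E:\Backup\AdventureWorks_full.bak'
WITH NO_COMPRESSION, INIT, STATS = 10;
```

Comparing the two .bak files on disk shows the compression ratio for your own data.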

Script to check SQL Server Cluster Nodes


The following script will help you check SQL Server cluster nodes. It queries a clustered server instance and determines how many nodes exist in the SQL Server cluster.

For SQL Server 2000

-- Returns the list of nodes on which the virtual server can run. Such information is useful in failover clustering environments.

-- If the current server instance is not a clustered server, fn_virtualservernodes returns an empty rowset.

SELECT * FROM ::fn_virtualservernodes()

 

For SQL Server 2005 /2008 / 2008 R2

-- Determine the nodes on a SQL Server clustered server instance:

SELECT * FROM sys.dm_os_cluster_nodes

For SQL Server 2005/2008/2008 R2 and SQL Server 2011 (Denali)

-- VIEW SERVER STATE permission is required to execute this function.

-- If the current server instance is not a clustered server, fn_virtualservernodes returns an empty rowset.

SELECT * FROM fn_virtualservernodes()

 


Performance Analysis of Logs (PAL) Tool


Logman

Manages and schedules performance counter and event trace log collections on local and remote systems.

Data Collection

First, we need to identify what Objects and Counters we need. We can automate creating one perfmon log file for each day. Then move the old files to a network share for later reference.

Cut and paste the contents below to a file named "SQL2005BaselineCounters.config" and save the file in "E:\perflogs". You can choose any local drive/folder; make sure there is enough space in it. Try not to use the same storage that your SQL Server databases use, because this can slow down your I/O throughput.

Note in the code below I am collecting SQL Server data for an instance named "MSSQL$DEV". This will need to be replaced for your server. If you just have a default instance this will just be "SQLServer".

"\Memory\Available MBytes"
"\Memory\Free System Page Table Entries"
"\Memory\Pages Input/sec"
"\Memory\Pages/sec"
"\MSSQL$DEV:Access Methods\Full Scans/sec"
"\MSSQL$DEV:Access Methods\Page Splits/sec"
"\MSSQL$DEV:Access Methods\Workfiles Created/sec"
"\MSSQL$DEV:Access Methods\Worktables Created/sec"
"\MSSQL$DEV:Buffer Manager\Buffer cache hit ratio"
"\MSSQL$DEV:Buffer Manager\Checkpoint pages/sec"
"\MSSQL$DEV:Buffer Manager\Free pages"
"\MSSQL$DEV:Buffer Manager\Lazy writes/sec"
"\MSSQL$DEV:Buffer Manager\Page life expectancy"
"\MSSQL$DEV:Buffer Manager\Page reads/sec"
"\MSSQL$DEV:Buffer Manager\Page writes/sec"
"\MSSQL$DEV:Buffer Manager\Stolen pages"
"\MSSQL$DEV:General Statistics\Logins/sec"
"\MSSQL$DEV:General Statistics\Logouts/sec"
"\MSSQL$DEV:General Statistics\User Connections"
"\MSSQL$DEV:Latches\Average Latch Wait Time (ms)"
"\MSSQL$DEV:Locks(_Total)\Average Wait Time (ms)"
"\MSSQL$DEV:Locks(_Total)\Lock Requests/sec"
"\MSSQL$DEV:Locks(_Total)\Number of Deadlocks/sec"
"\MSSQL$DEV:Memory Manager\Target Server Memory (KB)"
"\MSSQL$DEV:Memory Manager\Total Server Memory (KB)"
"\MSSQL$DEV:SQL Statistics\Batch Requests/sec"
"\MSSQL$DEV:SQL Statistics\SQL Compilations/sec"
"\MSSQL$DEV:SQL Statistics\SQL Re-Compilations/sec"
"\Paging File(_Total)\% Usage"
"\Paging File(_Total)\% Usage Peak"
"\PhysicalDisk(_Total)\Avg. Disk Read Queue Length"
"\PhysicalDisk(_Total)\Avg. Disk sec/Read"
"\PhysicalDisk(_Total)\Avg. Disk sec/Transfer"
"\PhysicalDisk(_Total)\Avg. Disk sec/Write"
"\PhysicalDisk(_Total)\Avg. Disk Write Queue Length"
"\Process(sqlservr)\% Privileged Time"
"\Process(sqlservr)\% Processor Time"
"\Processor(_Total)\% Privileged Time"
"\Processor(_Total)\% Processor Time"
"\System\Context Switches/sec"
"\System\Processor Queue Length"

Fig 1

The next step is to create a counter log in perfmon that will use the above counters. From a command prompt, execute the statement below on your SQL Server box.

logman create counter SQL2005Perf -f bin -b 01/01/2009 00:00:05 -E 01/01/2009 23:59:00 -si 05 -v mmddhhmm -o "E:\perflogs\SQL2005Perf" -cf "E:\perflogs\SQL2005BaselineCounters.config" -u domain\username *

You will be prompted for the password.

Fig 2

This will create a new counter log named “SQL2005Perf”. To verify this, open perfmon and click “counter logs” under Performance logs and alerts.

Fig 3

Here are the notes for each option used above:

  • SQL2005Perf – name of the counter log
  • -f bin – binary format, with a .blg extension
  • -b 01/01/2009 00:00:05 – begin date and time; input a datetime to start the counter log at a later time. If you input a datetime that has already passed, the counter log will start immediately
  • -E 01/01/2009 23:59:00 – end date and time
  • -si 05 – 5 second interval
  • -v mmddhhmm – filename will be named as SQL2005Perf_mmddhhmm.blg
  • -o “E:\perflogs\SQL2005Perf” – output folder name
  • -cf “E:\perflogs\SQL2005BaselineCounters.config” – config file name with the counters
  • -u domain\username * – the * will prompt for the password for the domain user account.

You can manually start and stop the above counter logs with below commands:

Logman start SQL2005Perf 

Logman stop SQL2005Perf

To get a list of all your counter logs you can run this command.

Logman query

Automate Data Collection

 

It would be nice to have performance data for each day. This can be done by stopping the counter log at 11:59:59 PM and starting it again at 12:00:01 AM daily. Once you stop the counter log the log file is closed, and starting the counter log creates a new log file.

Let's create a scheduled task which will do this.

Let's first create a batch file "StopCopyStart.bat" that will be executed by the scheduled task. Copy the contents below and save it in the folder "E:\perflogs". Make sure to change "Servername" and "SourceServerName" to match your servers.

logman stop sql2005perf
sleep 1
move E:\perflogs\SQL2005Perf\*.blg \\Servername\E$\perflogs\SourceServerName
sleep 1
logman start sql2005perf

Fig 4

From the command prompt execute this statement to create the scheduled task.

schtasks /create /tn “Stop Copy And Start SQL2005Perf Perfmon Counter Log” /tr E:\perflogs\StopCopyStart.bat /sc daily /st 23:59:58 /ed 01/01/2099 /ru system

Fig 5

Here are the notes for each option used above:

  • schtasks /create – create a new scheduled task
  • /tn “Stop Copy And Start SQL2005Perf Perfmon Counter Log” – scheduled task name
  • /tr E:\perflogs\StopCopyStart.bat – run this batch file
  • /sc daily /st 23:59:58 – schedule type and start time
  • /ed 01/01/2099 – end date
  • /ru system – run the task under the system account

A new scheduled task with name “Stop Copy And Start SQL2005Perf Perfmon Counter Log” will now be created. From the command prompt type “schtasks”, this will list all the scheduled tasks.

Next Steps

  • The idea is to move the performance data .blg file from each SQL Server to a common share. Create one folder for each server and move the .blg file there. The user account that executes the scheduled task therefore needs permissions on the share where we move the performance log file.
  • You should do some housekeeping on this network share to delete files older than 30 or 60 days.
  • Always keep one week (one file for each day of the week) of performance data for each server to be used as a baseline. If you have any performance problems you can compare against the baseline.

Performance Analysis of Logs (PAL) Tool

The Performance Analysis of Logs (PAL) tool is a powerful tool that reads a performance monitor counter log and analyzes it using known thresholds.

Features

Threshold files for most of the major Microsoft products such as IIS, MOSS, SQL Server, BizTalk, Exchange, and Active Directory.

An easy-to-use GUI interface that makes creating batch files for the PAL.ps1 script simple.

A GUI editor for creating or editing your own threshold files.

Creates an HTML-based report for ease of copy/pasting into other applications.

Analyzes performance counter logs using thresholds that change their criteria based on the computer's role or hardware specs.

For more information

http://pal.codeplex.com/

Once PAL is installed on your local system or a shared server you can analyze the perfmon counters.

  • Copy all the perfmon files to the C:\PAL location (any folder).
  • Edit PAL_reports_creation.bat and make sure all the file names are the same as the files you copied to C:\PAL.
  • Run the batch file, and all the reports will be generated and copied to output_path.

Create a batch file and paste the script for each server.

The script below is for server1; if you have 5 servers then you need the script below 5 times, with the parameters set as per each server's configuration.

Batch file:-

——————————————————————————————————————-

CScript "C:\Program Files\PAL\PAL v1.3.5\PAL.vbs" /LOG:"server1_perfmon_log_path.csv" /THRESHOLDFILE:"C:\Program Files\PAL\PAL v1.3.5\SQLServer2005.xml" /INTERVAL:"AUTO" /ISOUTPUTHTML:True /ISOUTPUTXML:False /OUTPUTDIR:"output_path" /HTMLOUTPUTFILENAME:"[LogFileName]_PAL_ANALYSIS_[DateTimeStamp]_[GUID].htm" /NumberOfProcessors:12 /ThreeGBSwitch:False /SixtyFourBit:True /TotalMemory:16 /KernelDumpOnCDrive:True

————————————————————————————————————————–

Is it possible to restore a SQL 2005 backup in SQL 2000?


As per Microsoft there is no workaround: SQL Server 2005 does not support downgrades or restores to SQL Server 2000, even if a server contains both SQL 2000 and SQL 2005 instances.

Or

There is no “Restore” functionality, but there is a workaround to copy 2005 databases to 2000:

Right-click on DB -> tasks -> generate scripts
select DB and click “script all objects…”, hit next
select any options you want, specifically changing “script for server version” to SQL Server 2000
next through and run the script

Now just export data from the 2005 database to the newly-created 2000 database.
Right-click on DB -> tasks -> export…
set source and hit next
set destination and hit next
select “copy data from one or more tables”, hit next
select all, check “optimize for many tables” and “run in a transaction”
you may have to edit each table mapping and check “enable identity insert”

Very interesting feature of Backup compression


I came across a very interesting feature of backup compression.

If you take a database backup with a T-SQL statement that includes the compression option, the backup is compressed, but the size can differ. If you want the full benefit of the backup compression feature, enable the advanced options and turn on the server-level setting for backup compression. With the option on, taking the backup produces a different size.

I took an AdventureWorks database backup without the server setting and the size was 135235 KB; after enabling the option, the backup size was 34906 KB.

 

To enable the option
USE master;
GO
EXEC sp_configure 'show advanced option', '1';
RECONFIGURE
GO
EXEC sp_configure 'backup compression default', '1';
RECONFIGURE WITH OVERRIDE;
GO
EXEC sp_configure 'show advanced option', '0';
RECONFIGURE
GO

Proactive Monitoring method Database Log space status to avoid log file full error


As per the customer's requirement, we had to build a script that notifies us whenever a database (including tempdb) has less than 50% unused transaction log space. We monitor for this condition to proactively discover long-running, transaction-log-consuming queries. Such queries can take a long time to roll back and can sometimes be the result of a user error – for example, an accidental delete of many records. The DBA at this customer knew he had to take action when log space was low, to investigate such potential user error.

With the monitoring scripts we used, the alarm went off regularly on tempdb. Because of the checkpoint behavior described above, unused transaction log space is only reclaimed at 70% log usage of tempdb. The customer still wanted the headroom of the 50% limit, so he had to be able to distinguish between the "by design" checkpoint at 70% and a troublemaking, long-running transaction preventing log space from being reclaimed.

The solution for the customer was to monitor the column log_reuse_wait_desc in sys.databases in combination with the unused transaction log space. If this column has a value other than CHECKPOINT or NOTHING, the log usage is the result of a long-running, log-space-consuming transaction in tempdb.

The script below illustrates how this is done:

SELECT name
, db.log_reuse_wait_desc
, ls.cntr_value AS size_kb
, lu.cntr_value AS used_kb
, CAST(lu.cntr_value AS FLOAT) / CAST(ls.cntr_value AS FLOAT)
AS used_percent
, CASE WHEN CAST(lu.cntr_value AS FLOAT) / CAST(ls.cntr_value AS FLOAT) > .5 THEN
CASE
/* tempdb special monitoring */
WHEN db.name = 'tempdb'
AND log_reuse_wait_desc NOT IN ('CHECKPOINT', 'NOTHING') THEN 'WARNING'
/* all other databases, monitor for the 50% fill case */
WHEN db.name <> 'tempdb' THEN 'WARNING'
ELSE 'OK'
END
ELSE 'OK' END
AS log_status
FROM sys.databases db
JOIN sys.dm_os_performance_counters lu
ON db.name = lu.instance_name
JOIN sys.dm_os_performance_counters ls
ON db.name = ls.instance_name
WHERE lu.counter_name LIKE 'Log File(s) Used Size (KB)%'
AND ls.counter_name LIKE 'Log File(s) Size (KB)%'

SQL Server stored-procedure new features in SQL 2008-2012


Stored procedures today:

One of the biggest stored-procedure improvements in SQL Server 2008 and 2008 R2 is table-valued parameters. This parameter type groups multiple rows of data into sets and then sends them all at once to the database server, reducing round trips between client and server, and it doesn’t create temporary tables or numerous parameters. Table-valued parameters don’t require any locking to initially populate client data. They also enable the client to specify sort order.

The table-valued structure is easy to understand and use, yet it can accommodate complex business logic. You use table-valued parameters the same way you use classic stored-procedure parameters. First, declare a user-defined table type and create a stored procedure with the input parameter of that table type. Next, declare a variable of the table type and reference it. Use an INSERT statement to populate the table variable instead of a SET or SELECT statement. Finally, plug the filled table variable into the stored procedure as an input parameter.
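The four steps above can be sketched end to end; the type, table, and procedure names here are illustrative, not from the article:

```sql
-- 1. Declare a user-defined table type (illustrative names).
CREATE TYPE dbo.OrderLineType AS TABLE
(
    ProductId INT,
    Qty       INT
);
GO
-- 2. Create a stored procedure with an input parameter of that
--    table type; table-valued parameters must be READONLY.
CREATE PROCEDURE dbo.InsertOrderLines
    @Lines dbo.OrderLineType READONLY
AS
BEGIN
    INSERT INTO dbo.OrderLines (ProductId, Qty)
    SELECT ProductId, Qty FROM @Lines;
END
GO
-- 3. Declare a variable of the table type, populate it with INSERT,
--    and pass all rows to the procedure in one round trip.
DECLARE @Batch dbo.OrderLineType;
INSERT INTO @Batch (ProductId, Qty)
VALUES (101, 2), (205, 1), (330, 5);
EXEC dbo.InsertOrderLines @Lines = @Batch;
```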

SQL Server 2008 also introduced a MERGE statement that allows for multiple data manipulation language (DML) operations in a single T-SQL statement. With the appropriate indexes on joined tables, you'll gain increased query performance. That's because a single MERGE statement reduces the number of times the data in source and target tables is processed. In SQL Server 2005 and earlier versions, each INSERT, DELETE, and UPDATE statement had to process the data one time. MERGE is useful, say, when you are synchronizing two tables.
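A minimal sketch of synchronizing two tables with a single MERGE; the table and column names are assumed for illustration:

```sql
-- Synchronize dbo.Target with dbo.Source in one DML statement.
MERGE dbo.Target AS t
USING dbo.Source AS s
    ON t.Id = s.Id
WHEN MATCHED AND t.Value <> s.Value THEN
    UPDATE SET t.Value = s.Value               -- rows in both, but changed
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Id, Value) VALUES (s.Id, s.Value)  -- new rows from the source
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;                                    -- rows removed from the source
```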

In SQL Server 2008 and 2008 R2 stored procedures, grouping operations have been enhanced by a new GROUPING SETS syntax. A simple GROUP BY clause returns one row for each combination of all column values, whereas GROUPING SETS returns one row for each unique value in each column. Without this new feature, the latter would take multiple GROUP BY statements for each column combined in a UNION structure. That would inevitably require more resources, like disk I/O operations, memory and runtime.
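A small sketch of the difference; the Sales table and its columns are assumed for illustration:

```sql
-- One result row per unique Region plus one per unique Product,
-- computed in a single statement instead of two GROUP BY queries
-- combined with UNION ALL.
SELECT Region, Product, SUM(Amount) AS Total
FROM dbo.Sales
GROUP BY GROUPING SETS ( (Region), (Product) );
```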

SQL Server 2008 and 2008 R2 also use a new row constructor to insert multiple rows in a single INSERT statement in stored-procedure programming. Database administrators may already be familiar with this syntax: “INSERT INTO TABLE (COLa, COLb, COLc) … VALUES (V1a, V1b, V1c …), (V2a, V2b, V2c …), … (V1000a, V1000b, V1000c …)”. Here the number 1000 indicates the maximum allowed rows in one INSERT statement. The new INSERT syntax is cleaner because it removes duplicate strings. It also allows for faster round trips to and from the server by reducing network traffic.
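A short sketch of the row constructor syntax; the table is illustrative:

```sql
-- Three rows in one INSERT statement; up to 1000 row value
-- expressions are allowed per statement.
INSERT INTO dbo.Colors (Id, Name)
VALUES (1, 'Red'),
       (2, 'Green'),
       (3, 'Blue');
```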

In my triple-trial test, I ran 10 batches of 1,000 new-style inserts per batch and 10,000 individual inserts in parallel. Each test counted as one round trip, but compared with the old-school INSERTs, the simplified row constructor cut down the number of transactions and bytes sent from the server 1,000-fold and reduced the amount of client-sent data by a factor of three and client processing time by a factor of 300. Total execution time was reduced by more than 50%.

Quick way to handle “Transaction log file full” issue


In our day-to-day work we very frequently get the error that the transaction log grows unexpectedly or becomes full. It happens frequently in most production systems.

Errors from SQL server error log:

Error: 9002, Severity: 17, State: 2.

The transaction log for database 'test' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases.

If you get an error like "Error: 9002, Severity: 17, State: 2.", check the log_reuse_wait_desc column in sys.databases by running the command below and see what value it returns.

SELECT name, recovery_model_desc, log_reuse_wait_desc FROM sys.databases

A log_reuse_wait_desc of "NOTHING" means nothing is preventing the log space from being reused. If any other value is shown, we should take a transaction log backup for the respective database.
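For example, a log backup for the 'test' database from the error message above could look like this; the backup path is a placeholder:

```sql
-- Back up the transaction log so inactive log space can be reused.
BACKUP LOG [test]
TO DISK = N'E:\Backup\test_log.trn'
WITH INIT, STATS = 10;
```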

Script to shrink all user databases when each database has a single log file:

sp_msforeachdb 'use [?]; dbcc shrinkfile(2, TRUNCATEONLY) WITH NO_INFOMSGS'

Note: the command will work fine only if every user database has a single log file.

If databases use multiple log files on different drives, it is a bit tough to quickly shrink all user database log files together. Use the script below to shrink all user database log files at the same time.

Script if databases have multiple log files:

DECLARE @DBName varchar(255)
DECLARE @DBLogicalFileName varchar(255)

DECLARE DATABASES_CURSOR CURSOR FOR
SELECT DATABASE_NAME = db_name(MaTableMasterFiles.database_id), MaTableMasterFiles.name
FROM sys.master_files MaTableMasterFiles
WHERE
-- ONLINE
MaTableMasterFiles.state = 0
-- Only look at databases to which we have access
AND has_dbaccess(db_name(MaTableMasterFiles.database_id)) = 1
-- Not master, tempdb or model
AND db_name(MaTableMasterFiles.database_id) NOT IN ('master', 'tempdb', 'model')
AND type_desc LIKE 'log'
GROUP BY MaTableMasterFiles.database_id, MaTableMasterFiles.name
ORDER BY 1

OPEN DATABASES_CURSOR

FETCH NEXT FROM DATABASES_CURSOR INTO @DBName, @DBLogicalFileName

WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC ('USE [' + @DBName + '] DBCC SHRINKFILE ([' + @DBLogicalFileName + '])')
    FETCH NEXT FROM DATABASES_CURSOR INTO @DBName, @DBLogicalFileName
END

CLOSE DATABASES_CURSOR
DEALLOCATE DATABASES_CURSOR

SQL Server 2008/2008 R2 Upgrades and Migration


Business Requirements/Needs

It's widely recognized that database sizes are growing significantly, and that the growth is being driven by many factors, such as companies requiring more data to be available online for longer or an increasing amount of data being digitized for storage. SQL Server 2008 R2 has emerged as a data platform for large-scale industries which stores and manages several terabytes of data efficiently in a variety of different formats including XML, e-mail, time/calendar, file, document, geospatial, and so on. SQL Server 2008 R2 is very much capable of handling this data explosion while providing a rich set of services to interact with the data: search, query, data analysis, reporting, data integration, and robust synchronization.

Our goal is to migrate 100 SQL Server 2005 Enterprise Edition servers to SQL Server 2008 R2 and consolidate the existing dozen servers into a few servers in a virtual environment for easier management and maintenance. So there are 2 things that we want to accomplish:

1. Upgrade to SQL Server 2008 R2

2. Consolidate existing servers in a VMware environment

Key Challenges:

We need to define the criteria that determine the success of our database upgrade. To prepare the success criteria, we should review each phase and step of the overall database upgrade plan and ask ourselves several questions. The following questions will help us understand what we need to do to declare that the phase or step is successful:

How can I measure whether this step is successful?

How can I test that measurement?

How can I compare my test results against what they would have been in the old database?

Although creating an upgrade plan reduces the likelihood of problems occurring during the upgrade process, problems do arise that can prevent the upgrade process from completing. Most organizations rely heavily on the data contained in their databases, and having that data unavailable due to an upgrade might cause problems in business operations and even have financial implications. We follow a proper plan to recover from each phase and step of the upgrade process to help minimize data loss and reduce the time that data might be unavailable.

Project Execution:

We are following the two methods below to upgrade to the SQL Server 2008 and 2008 R2 environment.

Methods to Upgrade

There are two methods to upgrade from SQL Server 2000(SP4)/ SQL Server 2005(SP2) to SQL Server 2008.

In-place Upgrade – Organizations that do not have resources available to host multiple database environments commonly use an in-place upgrade. An in-place upgrade overwrites a previous installation of SQL Server 2000 or 2005 with an installation of SQL Server 2008. In other words, SQL Server 2008 R2 gives us the ability to automatically upgrade an instance of SQL Server 2000 or 2005 to SQL Server 2008. It is called an in-place upgrade because a target instance of SQL Server 2000 or 2005 is actually replaced with a SQL Server 2008 instance. We do not have to worry about copying data from the old instance to the new instance, as the old data files are automatically converted to the new format. This upgrade method is the easiest way to upgrade the database to the newer version.

Side by Side Upgrade – Database environments that have additional server resources can perform a side-by-side migration of their SQL Server 2000 or 2005 installations to SQL Server 2008. In this upgrade method, a new instance is created on the same server or on a new server, and the new instance runs in parallel with the old legacy database. As the old instance is untouched during this type of upgrade, the old legacy database is still available and online for the application. Having the old environment still active during the upgrade process allows for the continuous operation of the original database environment while we install and test the upgraded environment. Side-by-side migrations can often minimize the amount of downtime for the SQL Server.

A side-by-side migration does not overwrite the SQL Server files of our current installation, nor does it move the databases to the new SQL Server 2008 installation. We/DBAs need to manually move the databases and other supporting objects (jobs, DTS/SSIS packages, etc.) to the new SQL Server 2008 installation after a side-by-side installation.

Checklist:

The following tips can help us perform a secure and successful upgrade:

Create a series of checklists: DBAs and developers should prepare a series of checklists that need to be performed before, during, and after a database upgrade.

Back up all important files: Back up all SQL Server database files from the instance to be upgraded, as well as any application files, script files, extract files, and so on so that you can completely restore them if necessary.

Ensure database consistency: Run DBCC CHECKDB on databases to be upgraded to ensure that they are in a consistent state before performing the upgrade.

Reserve enough disk space: Estimate the disk space required to upgrade SQL Server components, user databases, and any database files that might need to be created during the upgrade process. We might need two to four times the amount of disk space during the upgrade process as we will need after the upgrade is finished.

Ensure space for system databases: Configure system databases (master, model, msdb, and tempdb) to auto grow during the upgrade process, and make sure that they have enough disk space for this growth.

Transfer login information: Ensure that all database servers have login information in the master database before upgrading the database. This step is important for restoring a database because system login information resides in the master database and must be re-created in the new instance.

Disable all startup stored procedures: The upgrade process will usually stop and start services multiple times on the SQL Server instance being upgraded. Stored procedures set to execute on startup might block the upgrade process.

Stop replication: Stop replication and make sure that the replication log is empty for starting the upgrade process.

Quit all applications: Certain applications, including all services with SQL Server dependencies, might cause the upgrade process to fail if local applications are connected to the instance being upgraded.

Register your servers after the upgrade: The upgrade process removes registry settings for the previous SQL Server instance. After upgrading, we must reregister our servers.

Repopulate full-text catalogs: The upgrade process marks your databases as full-text disabled. Catalogs must be repopulated, but Setup doesn’t run this operation automatically because it can be time-consuming. Because this operation enhances the performance of your SQL Server 2008 installation, we should plan to repopulate full-text catalogs at a convenient time after the upgrade.(Full Text Search has been integrated in SQL Server 2008, please refer to the section given above for more details)

Update statistics: To help optimize query performance, update statistics on all databases following the upgrade.

Update usage counters: In earlier versions of SQL Server, the values for the table and index row counts and page counts can become incorrect. To correct any invalid row or page counts, run DBCC UPDATEUSAGE on all databases following the upgrade.
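The consistency-check, statistics, and usage-counter steps from this checklist boil down to a few commands, sketched here for a single database; the database name is a placeholder:

```sql
-- Before the upgrade: ensure the database is in a consistent state.
DBCC CHECKDB (N'YourDatabase') WITH NO_INFOMSGS;

-- After the upgrade: refresh optimizer statistics...
USE YourDatabase;
EXEC sp_updatestats;

-- ...and correct any invalid row or page counts.
DBCC UPDATEUSAGE (N'YourDatabase');
```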

Migration Plan

Before upgrade

  • Review de-supported (removed) features (for example, Notification Services, backup with TRUNCATE_ONLY, etc. are no longer available in SQL Server 2008).
  • Document existing system.
  • Develop validation tests / performance benchmarks and capture baseline data
  • Formulate and test rollback plan
  • As discussed above, we should run upgrade advisor for all databases (including system databases) and fix any issue/blocker for upgrade.
  • Take proper down time before starting the upgrade. The outage required would depend upon the size of the databases and the environment in which database servers are hosted.
  • Once all the issues and blockers are solved, and all applications are closed, take complete backup of all the databases (including master, model and msdb).
  • Also take a transaction log backup for all user databases and then disable all the jobs, including jobs for taking full backups, jobs to defrag, and other maintenance plans.
  • It is also recommended to take an image of your database server; this can be very handy in case you will have to rollback.
  • If the operating system of the server is Windows 2003, make sure that Windows service pack 1 is installed. This needs to be done on all the nodes of cluster in case the database server is on Failover Cluster environment.
  • We can save upgrade time by installing a few prerequisites on the database server: the .NET 2.0 Framework and SQL Native Client. If we have servers on a cluster, install these setups on each node.
  • Get ready with the new SQL 2008 DVD and the edition you want to install.
  • Make sure that there is enough space in the drive where SQL Server 2000/2005 is installed in case of in-place upgrade.
  • Full Text indexes applied on tables in SQL 2000/2005 are not compatible with SQL Server 2008. Refer to Full Text Integration section in this document for more details.

During upgrade

  • Make sure all the points from the "Before upgrade" section are taken care of.
  • Start installing by clicking SetUp.exe from SQL Server 2008 DVD.
  • The setup program is designed in such a way that it automatically updates the database binaries. It altogether creates a different folder “100″ to contain its new binaries. The “100” folder specifies the compatibility of the database. It also upgrades the other SQL Servers on clusters automatically but only the database engine and not other components like reporting services or analysis services. This is very important. Therefore you will have to install other components separately in each node.
  • You can see the progress of installation on installation screen, in case installation is done on cluster, setup will populate all the nodes in a drop down list.
  • Once all the nodes are upgraded with the database engine and all components are installed individually on servers, install the latest SQL Server service pack(if there is any). This need to be done on all the cluster nodes.
  • A reboot may be required to complete the installation because of open file locks during the installation process. If the installation log does not contain any 3010 errors, a reboot is not required.

After upgrade

  • Review all logs
  • Revisit Upgrade Advisor recommendations
  • Update statistics to ensure performance
  • For very large tables, consider updating statistics with a sample rather than a full scan.
  • Reconfigure Log Shipping
  • Verify Agent jobs and maintenance tasks
  • Verify security settings, especially cross server and/or cross-domain access privileges
  • Recreate all the full-text indexes that were removed from the databases.
  • Re-populate the full-text catalogs.
  • Update statistics for all databases. This step may take time depending on the size of the databases.
  • Update usage counters: in earlier versions of SQL Server, table and index row counts and page counts can become incorrect. To correct any invalid counts, run DBCC UPDATEUSAGE on all databases following the upgrade.
  • After SQL Server 2000/2005 databases are first restored to SQL Server 2008, their compatibility level remains 80/90 (corresponding to SQL Server 2000/2005 respectively). To take advantage of all the new features of SQL Server 2008, change the compatibility level to 100.
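The statistics, usage-counter, and compatibility-level steps above can be combined into a short post-upgrade pass per database. A minimal sketch, assuming a user database named YourUserDb (substitute your own database names, or wrap it in a loop over all databases):

```sql
USE YourUserDb;
GO
-- Refresh optimizer statistics after the upgrade
EXEC sp_updatestats;

-- Correct row and page counts carried over from SQL Server 2000/2005
DBCC UPDATEUSAGE (0);  -- 0 = current database

-- Raise the compatibility level so the new SQL Server 2008 features are available
ALTER DATABASE YourUserDb SET COMPATIBILITY_LEVEL = 100;
```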

Monitoring multiple jobs from one central location: SQL Server 2008 R2 Central Management Server


Good database administration is essential to protecting critical business data and ensuring business applications operate efficiently. A database administrator has to perform many different tasks, some reactive and some proactive; the challenge is achieving the correct balance rather than being driven by the reactive tasks.

I monitor multiple jobs from one central location and receive a single email each day with the backup and daily job status for 250 production servers, and we implement the same for our existing clients. This also covers backup and recovery to ensure business continuity in the event of loss or corruption, as well as ongoing availability by means of replication or other techniques. Automation of administration can typically save 15% or more in costs by reducing administration effort and relieving DBAs of trivial and routine tasks.

To achieve this, we use a SQL Server 2008 R2 Central Management Server to manage 250 remote servers as if they were a single fleet.

Set Up a Central Management Server:

To configure it, open SSMS 2008 and go to the Registered Servers window. Right-click Central Management Servers for the options to set one up. From there, it works much like your local registered server list, only centralized:

There we can create a group such as "Production Servers" and register all the servers under it, as below:

Once the servers are registered, right-click the production servers group and choose New Query; the query runs against every server in the group.

Execute the script below to get a failed-job report across all the registered servers.

USE msdb
GO

SET NOCOUNT ON

-- Latest history row (highest instance_id) per job
SELECT job_id, MAX(instance_id) AS Max_ID
INTO #tempInstance
FROM sysjobhistory
GROUP BY job_id

SELECT CAST(@@SERVERNAME AS varchar(20)) AS 'server',
       j.name AS 'Job_Name',
       CASE js.run_status
            WHEN 0 THEN 'Failed'
            WHEN 1 THEN 'Successful'
            WHEN 3 THEN 'Cancelled'
            WHEN 4 THEN 'In Progress'
       END AS 'Job_Status',
       js.run_date,
       sjs.next_run_date
FROM sysjobs j
     INNER JOIN sysjobhistory js ON j.job_id = js.job_id
     INNER JOIN sysjobschedules sjs ON js.job_id = sjs.job_id
     INNER JOIN #tempInstance t ON j.job_id = t.job_id
                               AND js.instance_id = t.Max_ID
WHERE js.run_status IN (0)  -- latest run failed
  AND j.enabled = 1
  AND j.category_id = 0     -- local (uncategorized) jobs only

DROP TABLE #tempInstance

Output: