Capture DB usage before planning a decommission


In large environments with many servers, there are always some servers or databases that are no longer used but remain operational. Those servers or databases should be decommissioned so they can be repurposed. Identifying unused servers and databases should be a continuous process; an organization can save substantial storage and licensing costs by reclaiming them.
Before we decide to decommission such servers or databases, we need to be sure they are not used by any application or user. Tracking this is not always easy. Sometimes a database is simply taken offline and the DBA team waits for someone to raise an alarm that they can no longer connect to a database they depend on; only then is the database brought back online.
This approach has obvious flaws: the application becomes unavailable, the teams that use it are inconvenienced, and it creates downtime.
We therefore need a more proactive approach: analyze usage first, then decide whether to decommission a server or database.
To do this, we have created a job that captures the usage of all databases, along with the logins using them and the host names the connections come from. This information is stored in a table. We can let the job run for a week or two, review who is connecting to which database, and then decide whether it is safe to decommission.
You can run 'SELECT * FROM DBAdmin..LoginAudit' to see the information captured.
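
For example, once the job has collected some data, a summary like the one below (a sketch over the same LoginAudit table) shows which logins and hosts touched each database during the capture window and how often they were seen:

----------------- Summarize captured usage per database (sketch) -----------------
SELECT DBName, Login, HostName, ProgramName,
       COUNT(*) AS ConnectionSamples
FROM DBAdmin..LoginAudit
GROUP BY DBName, Login, HostName, ProgramName
ORDER BY DBName, ConnectionSamples DESC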

The following code is for SQL Server 2005 and above.

------------------------- Create a table in the DBAdmin database -------------------------
USE [DBAdmin]
GO

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[LoginAudit](
[Login] [varchar](100) NULL,
[HostName] [varchar](100) NULL,
[DBName] [varchar](100) NULL,
[Command] [varchar](255) NULL,
[LastBatch] [varchar](255) NULL,
[ProgramName] [varchar](255) NULL
) ON [PRIMARY]

GO
SET ANSI_PADDING OFF

--------------- Create job to run every 30 minutes -- change the mail operator plus the start and end dates ---------------
USE [msdb]
GO

BEGIN TRANSACTION
DECLARE @ReturnCode INT
SELECT @ReturnCode = 0

IF NOT EXISTS (SELECT name FROM msdb.dbo.syscategories WHERE name=N'[Uncategorized (Local)]' AND category_class=1)
BEGIN
EXEC @ReturnCode = msdb.dbo.sp_add_category @class=N'JOB', @type=N'LOCAL', @name=N'[Uncategorized (Local)]'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

END

DECLARE @jobId BINARY(16)
EXEC @ReturnCode = msdb.dbo.sp_add_job @job_name=N'Login Audit Job',
@enabled=1,
@notify_level_eventlog=2,
@notify_level_email=2,
@notify_level_netsend=0,
@notify_level_page=0,
@delete_level=0,
@description=N'No description available.',
@category_name=N'[Uncategorized (Local)]',
@owner_login_name=N'sa',
@notify_email_operator_name=N'DBATeam@SQLDBA.com', -- (your email operator)
@job_id = @jobId OUTPUT
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
/****** Object: Step [Login Audit Step] Script Date: 03/27/2015 03:25:49 ******/
EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'Login Audit Step',
@step_id=1,
@cmdexec_success_code=0,
@on_success_action=1,
@on_success_step_id=0,
@on_fail_action=2,
@on_fail_step_id=0,
@retry_attempts=0,
@retry_interval=0,
@os_run_priority=0, @subsystem=N'TSQL',
@command=N'CREATE TABLE #sp_who2 (SPID INT,Status VARCHAR(255),
Login VARCHAR(255),HostName VARCHAR(255),
BlkBy VARCHAR(255),DBName VARCHAR(255),
Command VARCHAR(255),CPUTime INT,
DiskIO INT,LastBatch VARCHAR(255),
ProgramName VARCHAR(255),SPID2 INT,
REQUESTID INT)
--------------------------------------
INSERT INTO #sp_who2 EXEC sp_who2
--------------------------------------
INSERT INTO DBAdmin..LoginAudit
SELECT Login, HostName, DBName, Command, LastBatch, ProgramName
FROM #sp_who2
WHERE Status <> ''Background''
and SPID > 49
and ProgramName not like ''%SQLAgent%''
and Login <> ''Null''
and Login <> ''''
ORDER BY LastBatch ASC
---------------------------------------
DROP TABLE #sp_who2',
@database_name=N'DBAdmin',
@flags=0
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_update_job @job_id = @jobId, @start_step_id = 1
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobschedule
@job_id=@jobId,
@name=N'Every Half an hour',
@enabled=1,
@freq_type=4,
@freq_interval=1,
@freq_subday_type=4,
@freq_subday_interval=30, -- every 30 minutes
@freq_relative_interval=0,
@freq_recurrence_factor=0,
@active_start_date=20150327, -- Monitoring (data capture) start date
@active_end_date=20150411, -- Monitoring (data capture) end date
@active_start_time=0,
@active_end_time=235959
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
COMMIT TRANSACTION
GOTO EndSave
QuitWithRollback:
IF (@@TRANCOUNT > 0) ROLLBACK TRANSACTION
EndSave:
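
As a side note, not part of the job above: on newer versions the same details can also be pulled straight from the dynamic management views instead of sp_who2. A rough equivalent of the job step command is sketched below (sys.dm_exec_sessions exposes database_id only from SQL Server 2012 onward; on older builds stick with the sp_who2 approach):

------------- Alternative capture using DMVs instead of sp_who2 (sketch, SQL Server 2012+) -------------
INSERT INTO DBAdmin..LoginAudit (Login, HostName, DBName, Command, LastBatch, ProgramName)
SELECT s.login_name,
       s.host_name,
       DB_NAME(s.database_id),                               -- database_id available from SQL Server 2012
       ISNULL(r.command, 'AWAITING COMMAND'),
       CONVERT(varchar(255), s.last_request_start_time, 120),
       s.program_name
FROM sys.dm_exec_sessions AS s
LEFT JOIN sys.dm_exec_requests AS r ON r.session_id = s.session_id
WHERE s.is_user_process = 1
  AND s.program_name NOT LIKE '%SQLAgent%'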

The following code is for SQL Server 2000.

-------------- Create a table in the DBAdmin database --------------
CREATE TABLE [LoginAudit] (
[Login] [varchar] (100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[HostName] [varchar] (100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[DBName] [varchar] (100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[Command] [varchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[LastBatch] [varchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[ProgramName] [varchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
) ON [PRIMARY]
GO

--------------- Create job to run every 30 minutes -- change the mail operator plus the start and end dates ---------------
BEGIN TRANSACTION
DECLARE @JobID BINARY(16)
DECLARE @ReturnCode INT
SELECT @ReturnCode = 0
IF (SELECT COUNT(*) FROM msdb.dbo.syscategories WHERE name = N'[Uncategorized (Local)]') < 1
EXECUTE msdb.dbo.sp_add_category @name = N'[Uncategorized (Local)]'

— Delete the job with the same name (if it exists)
SELECT @JobID = job_id
FROM msdb.dbo.sysjobs
WHERE (name = N'Login Audit Job')
IF (@JobID IS NOT NULL)
BEGIN
— Check if the job is a multi-server job
IF (EXISTS (SELECT *
FROM msdb.dbo.sysjobservers
WHERE (job_id = @JobID) AND (server_id <> 0)))
BEGIN
— There is, so abort the script
RAISERROR (N'Unable to import job ''Login Audit Job'' since there is already a multi-server job with this name.', 16, 1)
GOTO QuitWithRollback
END
ELSE
— Delete the [local] job
EXECUTE msdb.dbo.sp_delete_job @job_name = N'Login Audit Job'
SELECT @JobID = NULL
END

BEGIN

— Add the job
EXECUTE @ReturnCode = msdb.dbo.sp_add_job
@job_id = @JobID OUTPUT ,
@job_name = N'Login Audit Job',
@owner_login_name = N'sa',
@description = N'No description available.',
@category_name = N'[Uncategorized (Local)]',
@enabled = 1,
@notify_level_email = 2,
@notify_level_page = 0,
@notify_level_netsend = 0,
@notify_level_eventlog = 2,
@delete_level= 0,
@notify_email_operator_name = N'DBATeam@SQLDBA.com' -- (your email operator)

IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

— Add the job steps
EXECUTE @ReturnCode = msdb.dbo.sp_add_jobstep @job_id = @JobID, @step_id = 1, @step_name = N'Login Audit Step',
@command = N'CREATE TABLE #sp_who2 (SPID INT,Status VARCHAR(255),
Login VARCHAR(255),
HostName VARCHAR(255),
BlkBy VARCHAR(255),
DBName VARCHAR(255),
Command VARCHAR(255),
CPUTime INT,
DiskIO INT,
LastBatch VARCHAR(255),
ProgramName VARCHAR(255),
SPID2 INT)

--------------------------------------
INSERT INTO #sp_who2 EXEC sp_who2
--------------------------------------
INSERT INTO DBAdmin..LoginAudit
SELECT Login, HostName, DBName, Command, LastBatch, ProgramName
FROM #sp_who2
WHERE Status <> ''Background''
and SPID > 49
and ProgramName not like ''%SQLAgent%''
and Login <> ''Null''
and Login <> ''''
ORDER BY LastBatch ASC
---------------------------------------
DROP TABLE #sp_who2',

@database_name = N'DBAdmin',
@server = N'',
@database_user_name = N'',
@subsystem = N'TSQL',
@cmdexec_success_code = 0,
@flags = 0,
@retry_attempts = 0,
@retry_interval = 1,
@output_file_name = N'',
@on_success_step_id = 0,
@on_success_action = 1,
@on_fail_step_id = 0,
@on_fail_action = 2
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXECUTE @ReturnCode = msdb.dbo.sp_update_job @job_id = @JobID, @start_step_id = 1

IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

— Add the job schedules
EXECUTE @ReturnCode = msdb.dbo.sp_add_jobschedule
@job_id = @JobID,
@name = N'Every Half an hour',
@enabled = 1,
@freq_type = 4,
@active_start_date = 20150327, -- Monitoring (data capture) start date
@active_start_time = 0,
@freq_interval = 1,
@freq_subday_type = 4,
@freq_subday_interval = 30,
@freq_relative_interval = 0,
@freq_recurrence_factor = 0,
@active_end_date = 20150411, -- Monitoring (data capture) end date
@active_end_time = 235959
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

— Add the Target Servers
EXECUTE @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @JobID, @server_name = N'(local)'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

END
COMMIT TRANSACTION
GOTO EndSave
QuitWithRollback:
IF (@@TRANCOUNT > 0) ROLLBACK TRANSACTION
EndSave:

Common DBA project challenges and resolutions


Problem

To me, the biggest blunder is knowing that a problem exists and either ignoring it or procrastinating on implementing the resolution. This tip focuses on common DBA project challenges that could have been prevented.

Solution

The reality is that nothing is perfect. As technical professionals, we need to build a realistic solution with the time and budget available, and then communicate any potential issues to the business so they are aware of them.

To deliver better application performance, DBAs should consider the following tips:

  • Be proactive and make end-user experience a shared objective across the entire IT organization by looking at application performance, and the impact the database has on it, continuously rather than only when it becomes a major problem.
  • Measure performance based not on infrastructure resource metrics but on end-user wait times. Wait-time analysis gives DBAs a view into what end users are waiting for and what the database is waiting for, providing clear visibility into bottlenecks (a basic wait-stats query is sketched after this list).
  • Implement monitoring tools that provide visibility across the entire application stack, including all the infrastructure that supports the database – virtualization layers, database servers, hosts, storage systems, networks, etc.
  • Establish historic baselines of application and database performance that look at how applications performed at the same time on the same day last week, and the week before that, to detect any anomalies before they become larger problems.
  • Have a common set of goals, metrics and SLAs across all databases, ideally based on application response times, not only uptime.
  • Use tools that provide a single dashboard of performance and the ability to drill down across database technologies and deployment methods, including cloud.
  • Document a consistent set of processes for ensuring integrity and security: backup and restore processes, encryption at rest and in transit, and detection of anomalies and potential security events in logs, to name a few.
  • Establish a strategy, roadmap, and guidelines for moving to the cloud (or not) and for reducing workload costs by moving databases to lower-license-cost versions or open-source alternatives.
  • Make sure team members can escape firefighting mode and spend enough time proactively optimizing performance of the databases and taking care of important maintenance tasks, which can result in significant cost savings and prevent problems in the future.
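
As a starting point for the wait-time analysis mentioned above, a query such as the following (a minimal sketch against the standard sys.dm_os_wait_stats view, with a few common benign wait types filtered out) shows what the instance has spent most of its time waiting on since the last restart:

----------------- Top waits on the instance since the last restart (sketch) -----------------
SELECT TOP (10)
       wait_type,
       wait_time_ms / 1000.0 AS wait_time_sec,
       signal_wait_time_ms / 1000.0 AS signal_wait_sec,
       waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP', N'CHECKPOINT_QUEUE',
                        N'SQLTRACE_BUFFER_FLUSH', N'WAITFOR', N'BROKER_TASK_STOP')
ORDER BY wait_time_ms DESC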

Database project operations issues:

  • All senior team members on vacation – When you have a major deployment make sure to have your key staff members on site and available to meet the project needs. Do not fool yourself that a junior team member will be able to just push a button and deploy a solution, especially when a minimal amount of testing is conducted. When a problem arises it is the senior team member’s knowledge and expertise that is needed to quickly resolve issues. If all of those people are out on the beach and will be back next week, it makes sense to wait a week for the deployment to have your team onsite and available to address any unexpected issues.
  • Putting all of your eggs in one basket – When you work through an enterprise upgrade, whether it is an application or hardware firmware, do not upgrade all of the systems (including the DR site) at once. Take a step back and be sure to keep some systems out of sync for a short period of time so you can fall back to a known stable platform if an unexpected issue arises.
  • Not validating backups on a daily basis – If a serious issue occurs, make sure you have a solid last line of defense: a consistent and reliable set of daily backups. In addition, make sure your backup plan includes retiring tapes on a weekly, monthly or quarterly basis so you can roll back to some point in time rather than go out of business. Also check in with the business to ensure backups are not needed for legal or regulatory reasons. A quick recency check against the backup history is sketched after this list.
  • Not changing passwords – As an administrator you have the keys to the kingdom and need to recognize the responsibility that you have. As such, make sure your passwords are complex, change them frequently and do not share your passwords.
  • Password expiration – This is almost the opposite of the previous bullet. With SQL Server 2005, password policies can be set up for standard logins so that passwords expire and accounts get locked out. When this happens, your applications will not be accessible if one of these accounts is in use. Setting password expiration is a good idea; just be sure to change the password on schedule and coordinate the change with your team.
  • Letting the primary filegroup fill up – With the rate of data growth, be sure to either cap your database size, monitor the size on a daily, weekly or monthly basis, or permit your databases to grow automatically. In any case, watch your disk space so that you do not fill up the disk and end up with two problems (a full filegroup and a full disk drive). A simple file-size and free-space check is sketched after this list.
  • Hot data centers – High temperatures mean failure for servers. The failure could be a controller card or a disk drive, but one sustained spike in the room temperature could be a critical problem that is not fully realized for a three to six month time period. Make sure your temperature is properly regulated, has a backup conditioning system and can notify your team before an issue arises.
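
Referring back to the backup-validation item above, a quick recency check such as the following (a sketch against the standard msdb backup history; adjust the one-day threshold to your own backup SLA) lists databases with no full backup in the last 24 hours:

----------------- Databases without a recent full backup (sketch) -----------------
SELECT d.name AS database_name,
       MAX(b.backup_finish_date) AS last_full_backup
FROM sys.databases AS d
LEFT JOIN msdb.dbo.backupset AS b
       ON b.database_name = d.name
      AND b.type = 'D'                         -- 'D' = full database backup
WHERE d.name <> 'tempdb'
GROUP BY d.name
HAVING MAX(b.backup_finish_date) IS NULL
    OR MAX(b.backup_finish_date) < DATEADD(DAY, -1, GETDATE())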
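
And for the file-growth item, the size and internal free space of the current database's files can be checked with something like this (a sketch using sys.database_files and FILEPROPERTY; free space on the disk itself still needs to be watched at the OS or monitoring-tool level):

----------------- File size and free space for the current database (sketch) -----------------
SELECT name AS logical_file_name,
       type_desc,
       size / 128 AS size_mb,                                        -- size is reported in 8 KB pages
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb,
       (size - FILEPROPERTY(name, 'SpaceUsed')) / 128 AS free_mb,
       physical_name
FROM sys.database_files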