Archive

Author Archive

Capture DB usage before planning decommissioning


In large environments with many servers, there is always a chance that some servers or databases are no longer used yet remain operational. Those servers or databases should be decommissioned so they can be repurposed. Identifying unused servers and databases should be a continuous process: an organization can save significant storage and licensing cost by reclaiming them.
Before we decommission such servers or databases, we need to ensure they are not used by any application or user. This needs to be tracked, but it is not always easy. Sometimes a database is simply taken offline and the DBA team waits for someone to raise an alarm that they can no longer connect to a particular database used for a particular purpose; only then is the database brought back online.
This approach has obvious flaws: the application becomes unavailable, the teams using it are inconvenienced, and downtime is created.
We therefore need a more proactive approach: analyze usage first, then decide whether to decommission a server or database.
To do this, the job below captures usage of all databases along with the logins using them and the host names the connections come from, and stores that information in a table. Let the job run for a week or two, then review who is connecting to which database and decide whether it can be decommissioned.
You can run 'select * from DBAdmin..LoginAudit' to see the captured information.
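
Once the job has run for a week or two, a simple summary of the captured rows makes the decision easier. The query below is only a suggested sketch against the LoginAudit table created by the script that follows; adjust the grouping and filters to taste:

-- Summarize captured activity per database, login, host and program
SELECT DBName,
       [Login],
       HostName,
       ProgramName,
       COUNT(*)       AS ConnectionSamples,
       MAX(LastBatch) AS LastSeenBatch   -- LastBatch is stored as text, so this is only approximate
FROM DBAdmin..LoginAudit
GROUP BY DBName, [Login], HostName, ProgramName
ORDER BY DBName, ConnectionSamples DESC;

Databases that never appear in this output over the whole capture window are the candidates for decommissioning.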

The following code is for SQL Server 2005 and above.

------------------------- Create a table in the DBAdmin database ---------------------------
USE [DBAdmin]
GO

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[LoginAudit](
[Login] [varchar](100) NULL,
[HostName] [varchar](100) NULL,
[DBName] [varchar](100) NULL,
[Command] [varchar](255) NULL,
[LastBatch] [varchar](255) NULL,
[ProgramName] [varchar](255) NULL
) ON [PRIMARY]

GO
SET ANSI_PADDING OFF

--------------- Create job to run every 30 min -- change the mail operator plus the start and end dates -----------
USE [msdb]
GO

BEGIN TRANSACTION
DECLARE @ReturnCode INT
SELECT @ReturnCode = 0

IF NOT EXISTS (SELECT name FROM msdb.dbo.syscategories WHERE name=N'[Uncategorized (Local)]' AND category_class=1)
BEGIN
EXEC @ReturnCode = msdb.dbo.sp_add_category @class=N'JOB', @type=N'LOCAL', @name=N'[Uncategorized (Local)]'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

END

DECLARE @jobId BINARY(16)
EXEC @ReturnCode = msdb.dbo.sp_add_job @job_name=N'Login Audit Job',
@enabled=1,
@notify_level_eventlog=2,
@notify_level_email=2,
@notify_level_netsend=0,
@notify_level_page=0,
@delete_level=0,
@description=N'No description available.',
@category_name=N'[Uncategorized (Local)]',
@owner_login_name=N'sa',
@notify_email_operator_name=N'DBATeam@SQLDBA.com', -- (your email operator)
@job_id = @jobId OUTPUT
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
/****** Object: Step [Login Audit Step] Script Date: 03/27/2015 03:25:49 ******/
EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'Login Audit Step',
@step_id=1,
@cmdexec_success_code=0,
@on_success_action=1,
@on_success_step_id=0,
@on_fail_action=2,
@on_fail_step_id=0,
@retry_attempts=0,
@retry_interval=0,
@os_run_priority=0, @subsystem=N'TSQL',
@command=N'CREATE TABLE #sp_who2 (SPID INT,Status VARCHAR(255),
Login VARCHAR(255),HostName VARCHAR(255),
BlkBy VARCHAR(255),DBName VARCHAR(255),
Command VARCHAR(255),CPUTime INT,
DiskIO INT,LastBatch VARCHAR(255),
ProgramName VARCHAR(255),SPID2 INT,
REQUESTID INT)
--------------------------------------
INSERT INTO #sp_who2 EXEC sp_who2
--------------------------------------
INSERT INTO DBAdmin..LoginAudit
SELECT Login, HostName, DBName, Command, LastBatch, ProgramName
FROM #sp_who2
WHERE Status <> ''Background''
and SPID > 49
and ProgramName not like ''%SQLAgent%''
and Login <> ''Null''
and Login <> ''''
ORDER BY LastBatch ASC
---------------------------------------
DROP TABLE #sp_who2',
@database_name=N'DBAdmin',
@flags=0
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_update_job @job_id = @jobId, @start_step_id = 1
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobschedule
@job_id=@jobId,
@name=N'Every Half an hour',
@enabled=1,
@freq_type=4,
@freq_interval=1,
@freq_subday_type=4,
@freq_subday_interval=30,
@freq_relative_interval=0,
@freq_recurrence_factor=0,
@active_start_date=20150327, -- Monitoring (data capture) start date
@active_end_date=20150411, -- Monitoring (data capture) end date
@active_start_time=0,
@active_end_time=235959
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
COMMIT TRANSACTION
GOTO EndSave
QuitWithRollback:
IF (@@TRANCOUNT > 0) ROLLBACK TRANSACTION
EndSave:

The following code is for SQL Server 2000.

-------------- Create table in the DBAdmin database ---------------
CREATE TABLE [LoginAudit] (
[Login] [varchar] (100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[HostName] [varchar] (100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[DBName] [varchar] (100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[Command] [varchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[LastBatch] [varchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[ProgramName] [varchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
) ON [PRIMARY]
GO

--------------- Create job to run every 30 min -- change the mail operator plus the start and end dates -----------
BEGIN TRANSACTION
DECLARE @JobID BINARY(16)
DECLARE @ReturnCode INT
SELECT @ReturnCode = 0
IF (SELECT COUNT(*) FROM msdb.dbo.syscategories WHERE name = N'[Uncategorized (Local)]') < 1
EXECUTE msdb.dbo.sp_add_category @name = N'[Uncategorized (Local)]'

— Delete the job with the same name (if it exists)
SELECT @JobID = job_id
FROM msdb.dbo.sysjobs
WHERE (name = N'Login Audit Job')
IF (@JobID IS NOT NULL)
BEGIN
— Check if the job is a multi-server job
IF (EXISTS (SELECT *
FROM msdb.dbo.sysjobservers
WHERE (job_id = @JobID) AND (server_id <> 0)))
BEGIN
— There is, so abort the script
RAISERROR (N'Unable to import job ''Login Audit Job'' since there is already a multi-server job with this name.', 16, 1)
GOTO QuitWithRollback
END
ELSE
— Delete the [local] job
EXECUTE msdb.dbo.sp_delete_job @job_name = N'Login Audit Job'
SELECT @JobID = NULL
END

BEGIN

— Add the job
EXECUTE @ReturnCode = msdb.dbo.sp_add_job
@job_id = @JobID OUTPUT ,
@job_name = N'Login Audit Job',
@owner_login_name = N'sa',
@description = N'No description available.',
@category_name = N'[Uncategorized (Local)]',
@enabled = 1,
@notify_level_email = 2,
@notify_level_page = 0,
@notify_level_netsend = 0,
@notify_level_eventlog = 2,
@delete_level= 0,
@notify_email_operator_name = N'DBATeam@SQLDBA.com' -- (your email operator)

IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

— Add the job steps
EXECUTE @ReturnCode = msdb.dbo.sp_add_jobstep @job_id = @JobID, @step_id = 1, @step_name = N'Login Audit Step',
@command = N'CREATE TABLE #sp_who2 (SPID INT,Status VARCHAR(255),
Login VARCHAR(255),
HostName VARCHAR(255),
BlkBy VARCHAR(255),
DBName VARCHAR(255),
Command VARCHAR(255),
CPUTime INT,
DiskIO INT,
LastBatch VARCHAR(255),
ProgramName VARCHAR(255),
SPID2 INT)

--------------------------------------
INSERT INTO #sp_who2 EXEC sp_who2
--------------------------------------
INSERT INTO DBAdmin..LoginAudit
SELECT Login, Hostname, DBName, Command, LastBatch, ProgramName
FROM #sp_who2
WHERE Status <> ''Background''
and SPID > 49
and ProgramName not like ''%SQLAgent%''
and Login <> ''Null''
and Login <> ''''
ORDER BY LastBatch ASC
---------------------------------------
DROP TABLE #sp_who2',

@database_name = N'DBAdmin',
@server = N'',
@database_user_name = N'',
@subsystem = N'TSQL',
@cmdexec_success_code = 0,
@flags = 0,
@retry_attempts = 0,
@retry_interval = 1,
@output_file_name = N'',
@on_success_step_id = 0,
@on_success_action = 1,
@on_fail_step_id = 0,
@on_fail_action = 2
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXECUTE @ReturnCode = msdb.dbo.sp_update_job @job_id = @JobID, @start_step_id = 1

IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

— Add the job schedules
EXECUTE @ReturnCode = msdb.dbo.sp_add_jobschedule
@job_id = @JobID,
@name = N'Every Half an hour',
@enabled = 1,
@freq_type = 4,
@active_start_date = 20150327, -- Monitoring (data capture) start date
@active_start_time = 0,
@freq_interval = 1,
@freq_subday_type = 4,
@freq_subday_interval = 30,
@freq_relative_interval = 0,
@freq_recurrence_factor = 0,
@active_end_date = 20150411, -- Monitoring (data capture) end date
@active_end_time = 235959
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

— Add the Target Servers
EXECUTE @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @JobID, @server_name = N'(local)'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

END
COMMIT TRANSACTION
GOTO EndSave
QuitWithRollback:
IF (@@TRANCOUNT > 0) ROLLBACK TRANSACTION
EndSave:

Capacity Planning using SQL Server Backup History


Problem Statement: Storage has always been one of the IT infrastructure components organizations invest in heavily, and optimal utilization of storage is a key factor in saving cost. We should have a fair picture of the current storage requirement as well as the ability to forecast its growth. This is critical so that we do not end up wasting storage by over-allocating where the data is not growing, and struggling to add storage where it is.
SQL Server backs many applications in most organizations, and the same storage story applies to its databases. When a new database is created, we may predict the storage requirement from a baseline of the application's design, or simply let it grow without knowing the growth rate. If monitoring tools are in place that track growth, forecasting and allocating storage is straightforward. But what if we have no such tools to provide the growth history of a database or its files?
Solution: In any stable and healthy environment we maintain a good backup strategy. Let us assume we have a daily backup in place for a database called Test, and the backup history is retained in the msdb database. This Test database has multiple data files residing on multiple drives.
We can perform the steps below to build the database/data file growth report.

1. Gather the data from the backup history. The query below fetches these details.
select a.backup_set_id, b.backup_finish_date, b.database_name,
a.physical_name, sum(a.file_size/128/1024) as fileSizeInMB
from msdb..backupfile a, msdb..backupset b
where a.backup_set_id = b.backup_set_id
and b.database_name in ('Test')
and b.type = 'D'
and a.file_type = 'D'
group by a.backup_set_id, b.backup_finish_date, b.database_name,
a.physical_name
order by b.backup_finish_date desc

2. Copy the data into an Excel sheet.

3. Use a pivot table and chart to represent the data growth. We will not discuss how to work with Excel here; refer to the accompanying Excel file for a better understanding.

These simple steps reveal the data file growth pattern within a database. You can now slice and dice the data in Excel.
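
If Excel is not handy, a rough version of the same analysis can be done directly in T-SQL (SQL Server 2005 or later). The following is only a sketch, reusing the database name Test and the size expression from the query in step 1; it compares the oldest and newest full-backup file sizes per data file:

-- Compare the first and last recorded size of each data file in the backup history
;WITH sizes AS (
    SELECT  a.physical_name,
            b.backup_finish_date,
            a.file_size/128/1024 AS fileSizeMB,
            ROW_NUMBER() OVER (PARTITION BY a.physical_name
                               ORDER BY b.backup_finish_date ASC)  AS first_rn,
            ROW_NUMBER() OVER (PARTITION BY a.physical_name
                               ORDER BY b.backup_finish_date DESC) AS last_rn
    FROM msdb..backupfile a
    JOIN msdb..backupset b ON a.backup_set_id = b.backup_set_id
    WHERE b.database_name = 'Test'
      AND b.type = 'D'       -- full backups only
      AND a.file_type = 'D'  -- data files only
)
SELECT  f.physical_name,
        f.fileSizeMB                AS OldestSizeMB,
        l.fileSizeMB                AS LatestSizeMB,
        l.fileSizeMB - f.fileSizeMB AS GrowthMB,
        DATEDIFF(DAY, f.backup_finish_date, l.backup_finish_date) AS DaysCovered
FROM sizes f
JOIN sizes l ON l.physical_name = f.physical_name
WHERE f.first_rn = 1
  AND l.last_rn = 1;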

Categories: Backup and Recovery

Did I make the right decision in moving my data warehouse solution to the cloud? Here's why.


Traditional data warehousing has hit a roadblock. Most organizations have aging information management systems, typically built in an age when inflexible systems working within silos were sufficient for the data needs of that era: limited data sources, infrequent changes, lower transaction volumes and little competition. Today, the same systems have been rendered ineffective by the surge in data sources and volumes.

What a modern data warehouse demands:
• Designed for sheer volume and pace of data.
• Can be used by individual departments like marketing, finance, development, and sales at organizations of all types and size
• Affordable to small and mid-sized organizations, very easy to adapt dynamic changes in data volume and analytics workloads
• Data available immediately and at every step of modification, supporting data exploration, business intelligence and reporting
• No Limitations on number of users
• Data insights can be always up to date and directly accessible to everyone who needs them
• Empowers enterprises to shift their focus from systems management to analysis.
• Operates painlessly at any scale and makes it possible to combine diverse data, both structured and semi-structured

The cloud data warehouse will replace the on-premises warehouse as the main source of decision support and business analytics.

Cost:
One of the most obvious advantages of cloud deployment is the up-front cost savings. Buying or upgrading hardware and software is both costly and time-consuming. Moving the data warehouse to the cloud can give your business a competitive advantage over businesses that keep theirs on premises.
Matching capacity to demand:
When we install servers on premises, we have the infrastructure we have. It may be perfect for today, or even next week, but totally inadequate for what is coming in 18 months. Or it may have far more capacity than we currently need or will need for a while, and we keep paying for that idle capacity regardless.
Cloud-based deployments eliminate such problems through elasticity: resources are dynamically reassigned as workloads change. We can grow the environment, shrink it, or, for example, spin up a whole new environment just for a two-week promotion.
Transformation: Traditional data warehouses consist of data models; extract, transform, and load processes; and data governance, with BI tools sitting on top. Instead of doing things the old way (structure, ingest, analyze), enterprise data warehouses need to flip the paradigm and ingest, analyze, then structure by utilizing the cloud, data lakes, and polyglot warehousing. We need to think of the data warehouse not as a single technology but as a collection of technologies.
Agility: Many business functions not traditionally associated with BI have taken to data analytics for justifying spend, analyzing performance, and so on. It is unproductive for these lines of business to wait for central IT to provision a data warehouse before they can start analyzing their data. The cloud offers a relatively quick and robust way to cater to these warehousing needs, whereas on-premises procurement and deployment cycles are very long, with the added pain of upgrades every two to three years.
Intersection with big data: Big data has empowered the world to tap any kind of unstructured data source for insights. Cloud data warehousing can be a bridge that brings the structured data of legacy on-premises warehouses together with these newer big data sources.
To conclude, on-premises workloads will continue to shift to the cloud. In the days to come, the cloud data warehouse will replace the on-premises warehouse as the main source of decision support and business analytics. Azure SQL Data Warehouse, a cloud-based data warehouse hosted on Microsoft Azure, is capable of processing massive volumes of data and can provide your business the speed and scale it needs to manage enterprise data.

Categories: Cloud

AWS Certified Solutions Architect and Professional certification Path

April 27, 2018

SQL Server database migration to SQL Database in the cloud environment without Downtime



We need to ensure that the source database is compatible with Azure SQL Database using the Data Migration Assistant (DMA). SQL Database V12 is approaching feature parity with SQL Server, other than issues related to server-level and cross-database operations. Databases and applications that rely on partially supported or unsupported functions need some re-engineering to fix these incompatibilities before the SQL Server database can be migrated.

 

When we cannot afford to take the SQL Server database out of production while the migration is occurring, we can use SQL Server transactional replication as the migration solution. To use this method, the source database must meet the requirements for transactional replication and be compatible with Azure SQL Database.

To use this solution, configure Azure SQL Database as a subscriber to the SQL Server instance you wish to migrate. The transactional replication distributor synchronizes data from the database to be migrated (the publisher) while new transactions continue to occur.

With transactional replication, all changes to data or schema show up in Azure SQL Database. Once the synchronization is complete and we are ready to migrate, we change the connection strings of the applications to point them to the Azure SQL Database. Once transactional replication has drained any remaining changes from the source database and all applications point to the Azure database, we can remove transactional replication. The Azure SQL Database is now the production system.
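
For orientation, a heavily simplified sketch of the publisher-side setup is shown below. All names (SourceDB, AzureMigrationPub, the Orders table, the Azure server and TargetDB) are hypothetical, a distributor is assumed to be configured already, and the snapshot and distribution agents are omitted; treat it as an outline rather than a finished script:

-- Run on the SOURCE instance (the publisher); a distributor must already be configured.
USE SourceDB;
GO
-- Enable the source database for transactional replication
EXEC sp_replicationdboption
     @dbname  = N'SourceDB',
     @optname = N'publish',
     @value   = N'true';

-- Create a transactional publication
EXEC sp_addpublication
     @publication = N'AzureMigrationPub',
     @description = N'Transactional publication used to migrate to Azure SQL Database',
     @repl_freq   = N'continuous',
     @status      = N'active';

-- Add one article per table to be migrated (repeat as needed)
EXEC sp_addarticle
     @publication   = N'AzureMigrationPub',
     @article       = N'Orders',
     @source_owner  = N'dbo',
     @source_object = N'Orders';

-- Register the Azure SQL Database as a push subscriber
EXEC sp_addsubscription
     @publication       = N'AzureMigrationPub',
     @subscriber        = N'yourserver.database.windows.net',
     @destination_db    = N'TargetDB',
     @subscription_type = N'Push',
     @sync_type         = N'automatic';
-- A snapshot agent (sp_addpublication_snapshot) and a push distribution agent
-- (sp_addpushsubscription_agent, using SQL authentication for the Azure server)
-- still need to be created; they are omitted here for brevity.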

Categories: Migration

SQL Server database migration from DB2, MySQL, Oracle, Sybase ASE and Access



The Microsoft SQL Server Migration Assistant, which is often abbreviated to just SSMA, comes in different versions that support the migration from several of the most common database formats.

The one we're most interested in is SSMA for Access, but there are also separate versions that migrate from MySQL, Oracle, Sybase and others. SSMA is a free tool that greatly simplifies the database migration process when moving from Access to SQL Server in four key ways. First, SSMA for Access automates the conversion of Microsoft Access database objects to their SQL Server counterparts. Then, it gives you fine control over the data type mappings.

It then loads the objects into SQL Server, followed by a migration of your data from Microsoft Access to its new home in SQL Server. Finally, the Migration Assistant will even update your Access database with links to the new data tables, so that your downtime is kept to a minimum during the transition. SSMA for Access is currently at version 6.0, but is designed to support migration from Microsoft Access 97 and higher to all editions of SQL Server 2005, 2008, 2012, and 2014, as well as Microsoft’s cloud hosted SQL Azure databases.

Migration Sources

Once you decide to migrate to SQL Server, the SmartDog SQL service is just what you need. We help clients build a road map for the migration.

Migration should be conducted in three phases:

  • Pre-migration inventory / checks
  • Database migration
  • Post-migration configuration / checks

 

Pre-migration phase: Review and document the following tasks on the SOURCE server.

  • Database sizes
  • Data and Log file location
  • Server and Database properties (Collation, Auto Stats, DB Owner, Recovery Model, Compatibility level, etc.)
  • Collect information on dependent applications; make sure application services will be stopped during the database migration
  • Database logins, users and their permissions (a quick inventory sketch follows this list)
  • Dependent objects (SQL Agent Jobs and Linked Servers)
  • Document maintenance plans

On the TARGET server, conduct the following tasks:

  • Verify adequate disk space exists
  • Appropriate folders have been created
  • SQL Server is correctly installed and configured with latest Service Pack and Cumulative Update
  • Set SQL Server properties: memory, tempdb size, autogrowth and drive locations
  • Appropriate connectivity exists with application servers, web servers, etc.
  • Create databases consistent with planned database migrations
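
A quick way to capture the login and user inventory mentioned in the SOURCE checklist above is sketched here (SQL Server 2005 or later); AppDB is a placeholder database name, and only SQL logins, Windows logins and Windows groups are covered:

-- Server-level logins (run on the SOURCE instance)
SELECT name, type_desc, is_disabled, default_database_name
FROM sys.server_principals
WHERE type IN ('S', 'U', 'G')     -- SQL logins, Windows logins, Windows groups
  AND name NOT LIKE '##%';        -- skip internal certificate-based principals

-- Database users and their role membership (repeat per user database)
USE AppDB;
GO
SELECT dp.name AS database_user,
       dp.type_desc,
       r.name  AS database_role
FROM sys.database_principals dp
LEFT JOIN sys.database_role_members rm ON rm.member_principal_id = dp.principal_id
LEFT JOIN sys.database_principals r    ON r.principal_id = rm.role_principal_id
WHERE dp.type IN ('S', 'U', 'G');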

Database migration phase

  • From the SOURCE server, get a current list of databases and their file locations
  • On the SOURCE SQL Server, script out the user LOGIN information for migration between servers; save the output for a later step
  • Use native SQL Server backups to back up the databases from SOURCE, with backup compression to reduce the backup size (optionally, detach the databases, copy the database files, then reattach them)
  • On the SOURCE server, place the databases in read-only mode
  • Restore the databases to TARGET (see the T-SQL sketch after this list)
  • On the TARGET SQL Server, apply the script to migrate users/group security (created in the login-scripting step above)
  • Update any custom SSRS DSNs to point to the TARGET server
  • On the SOURCE server, script out any required SQL Server Agent jobs (select the job, right-click, Script Job as > CREATE To > File)
  • On TARGET, transfer and execute the generated file to create the jobs; create schedules as needed
  • Watch out for data type differences if databases are being migrated from a different source such as Oracle or Sybase
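
A minimal sketch of the read-only, backup and restore steps above, using a hypothetical database AppDB and made-up file paths; adjust the logical file names, paths and the compression option (SQL Server 2008 and later) to your environment:

-- SOURCE: stop further changes, then take a compressed, copy-only full backup
ALTER DATABASE AppDB SET READ_ONLY WITH ROLLBACK IMMEDIATE;

BACKUP DATABASE AppDB
TO DISK = N'\\fileshare\migration\AppDB.bak'
WITH COMPRESSION, COPY_ONLY, STATS = 10;

-- TARGET: restore, relocating the files to the planned folders
RESTORE DATABASE AppDB
FROM DISK = N'\\fileshare\migration\AppDB.bak'
WITH MOVE N'AppDB'     TO N'E:\SQLData\AppDB.mdf',
     MOVE N'AppDB_log' TO N'F:\SQLLogs\AppDB_log.ldf',
     RECOVERY, STATS = 10;

-- TARGET: make the restored copy writable again
ALTER DATABASE AppDB SET READ_WRITE;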

Post-migration phase

  • Point the application to the new database server address, or to the availability group LISTENER for AlwaysOn Availability Groups (connection strings, etc. to be altered by the application support team)
  • Restart/review network connections between all stakeholder servers (network team)
  • Check the SQL Server error log and Windows event logs for any failures (see the sketch after this list)
  • Confirm application functionality with end users
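
Two quick checks for the error-log and connectivity items above are sketched below; sp_readerrorlog is the standard (if undocumented) error-log reader, and sys.dm_exec_sessions shows who is actually connected to the new instance:

-- Scan the current SQL Server error log for entries containing 'error'
EXEC sys.sp_readerrorlog 0, 1, N'error';

-- Confirm the expected applications and logins are connecting to the TARGET server
SELECT login_name, host_name, program_name, COUNT(*) AS sessions
FROM sys.dm_exec_sessions
WHERE is_user_process = 1
GROUP BY login_name, host_name, program_name
ORDER BY sessions DESC;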

 

Categories: Migration

100% uptime with SQL Azure DR Plan


With the allure of cost savings, greater flexibility, and more agility, many organizations are eyeing the cloud as an alternative for deploying new applications, including those with high database performance requirements. In fact, technology research firm TechNavio predicts a 62 percent annual growth rate of cloud-based databases through 2018. Here we will build a SQL Server cluster setup on the Azure platform.

HADR deployment architectures

 

SQL Server HADR technologies that are supported in Azure include:

  • Always On Availability Groups
  • Always On Failover Cluster Instances
  • Log Shipping
  • SQL Server Backup and Restore with Azure Blob Storage Service

 

  • Database Mirroring – Deprecated in SQL Server 2016

 

SQL Server Cluster in Azure

As the solution that provides the most options and flexibility for implementing a highly available SQL Server deployment, we will discuss how to build a Windows Server Failover Cluster (WSFC) for SQL Server using Azure virtual machines. One of the requirements for a WSFC is access to shared storage, which is not natively available in Azure; by integrating a solution such as SIOS DataKeeper Cluster Edition into the mix, we can still build a SQL Server cluster on Azure virtual machines. SIOS DataKeeper Cluster Edition is a Microsoft Azure Certified product, which means this is a fully supported solution.

  • Provision the virtual machines
  • Add the Failover Clustering feature to the virtual machines
  • Create a file share witness
  • Install DataKeeper and setup the storage

SIOS DataKeeper allows us to get around the need for shared storage by creating disk volumes that are replicated between the nodes of the cluster. The data in these volumes is synchronized using real-time, block-level replication, so the data exists on every node in the cluster, which also removes the single point of failure of one shared storage source.

Once the volume or volumes have been created on the cluster nodes using SIOS DataKeeper, this storage becomes available within the Failover Cluster Manager as available storage.

After the storage has been configured, the following steps remain:

  • Install SQL Server on the first cluster node using the “New SQL Server failover cluster installation option”. When you go through the setup of SQL Server you will see the available storage that has been configured using SIOS DataKeeper.
  • Install SQL Server on the second cluster node using the “Add node to a SQL Server failover cluster” option.
  • Configure the Azure internal load balancer so that clients can connect to the active node.
Categories: Azure SQL Server