
Archive for the ‘SQL server DBA administration’ Category

RoboNext RPA Framework


RoboNext is a robust, proven framework to take full advantage of Robotic Process Automation and Artificial Intelligence. Taking care of everything for you, we deliver a full, end-to-end range of automation services — from building the business case to running the new virtual workforce. All our automations are technology agnostic, platform independent and non-intrusive to current IT Infrastructure.

Our framework allows you to:
• Reduce your operational costs
• Increase the agility and speed of your operations
• Redirect your people to tasks that add value
• Improve employee satisfaction
• Reduce your time to market
• Improve the quality and consistency of services for customers
• Improve the accuracy of repetitive processes
• Increase the level of business insight available to your organization

Our solutions lay the path towards Robotic Process Automation. We help you understand what you need versus what you want.
For example:
• RPA vs process automation? Robot vs Bot?
• Cognitive, cognition and machine learning? RPA vs intelligent automation?
• Desktop vs enterprise vs virtual?
• Outsourcer vs enterprise perspective?
• Which RPA strategy: process automation, bot farm, or virtual workforce?
• Repatriate work? Business led or IT led?
• Technology configuration vs end use application?

We help you navigate these and other complexities to ensure the best decision for your organization.

Upgrade to SQL Server 2016 is justified from a Hybrid Cloud perspective



Are Oracle ERP Cloud Applications the right choice in today's world?

Capture DB usage before planning a decommission


In large establishments with many servers, there are always some servers or databases that are no longer used but remain operational. Those servers or databases should be decommissioned so they can be repurposed, and analyzing which servers or databases are not in use should be a continuous process: an organization can save a huge amount on storage and licensing by reusing them.
Before we decide to decommission such servers or databases, we need to ensure they are not used by any application or user. This needs to be tracked, but it is not always easy. Sometimes a database is simply put into offline mode and the DBA team waits for someone to raise an alarm that they can no longer connect to such-and-such database used for such-and-such purpose; only then is the database brought back online.
This approach has many flaws: the application becomes unavailable, the teams using it are inconvenienced, and it creates downtime.
We therefore need a more proactive approach: analyze first, then decide whether to decommission a server or database.
To do this, we have created a job that captures the usage of all databases, along with the logins using them and the hostnames the connections come from. This information is captured in a table. Let the job run for a week or two, then review who is connecting to which database and decide whether it can be decommissioned.
You can run 'SELECT * FROM DBAdmin..LoginAudit' to see the captured information.
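
Once some data has accumulated, a summary query along these lines (a simple sketch against the LoginAudit table created below, not part of the original job) shows at a glance which logins and hosts touch each database:

-- Summarize captured activity per database (hypothetical analysis query)
SELECT DBName, Login, HostName, ProgramName,
COUNT(*) AS Connections,
MAX(LastBatch) AS LastSeen -- LastBatch is stored as text, so this is only a rough "latest" value
FROM DBAdmin..LoginAudit
GROUP BY DBName, Login, HostName, ProgramName
ORDER BY DBName, Connections DESC

A database that shows no rows at all after a couple of weeks is a strong decommissioning candidate.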

The following code is for SQL Server 2005 and above.

------------------------- Create a table in the DBAdmin database ---------------------------
USE [DBAdmin]
GO

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[LoginAudit](
[Login] [varchar](100) NULL,
[HostName] [varchar](100) NULL,
[DBName] [varchar](100) NULL,
[Command] [varchar](255) NULL,
[LastBatch] [varchar](255) NULL,
[ProgramName] [varchar](255) NULL
) ON [PRIMARY]

GO
SET ANSI_PADDING OFF

--------------- Create a job to run every 30 minutes -- change the mail operator plus the start and end dates -----------
USE [msdb]
GO

BEGIN TRANSACTION
DECLARE @ReturnCode INT
SELECT @ReturnCode = 0

IF NOT EXISTS (SELECT name FROM msdb.dbo.syscategories WHERE name=N'[Uncategorized (Local)]' AND category_class=1)
BEGIN
EXEC @ReturnCode = msdb.dbo.sp_add_category @class=N'JOB', @type=N'LOCAL', @name=N'[Uncategorized (Local)]'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

END

DECLARE @jobId BINARY(16)
EXEC @ReturnCode = msdb.dbo.sp_add_job @job_name=N'Login Audit Job',
@enabled=1,
@notify_level_eventlog=2,
@notify_level_email=2,
@notify_level_netsend=0,
@notify_level_page=0,
@delete_level=0,
@description=N'No description available.',
@category_name=N'[Uncategorized (Local)]',
@owner_login_name=N'sa',
@notify_email_operator_name=N'DBATeam@SQLDBA.com', -- (your email operator)
@job_id = @jobId OUTPUT
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
/****** Object: Step [Login Audit Step] Script Date: 03/27/2015 03:25:49 ******/
EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'Login Audit Step',
@step_id=1,
@cmdexec_success_code=0,
@on_success_action=1,
@on_success_step_id=0,
@on_fail_action=2,
@on_fail_step_id=0,
@retry_attempts=0,
@retry_interval=0,
@os_run_priority=0, @subsystem=N'TSQL',
@command=N'CREATE TABLE #sp_who2 (SPID INT,Status VARCHAR(255),
Login VARCHAR(255),HostName VARCHAR(255),
BlkBy VARCHAR(255),DBName VARCHAR(255),
Command VARCHAR(255),CPUTime INT,
DiskIO INT,LastBatch VARCHAR(255),
ProgramName VARCHAR(255),SPID2 INT,
REQUESTID INT)
--------------------------------------
INSERT INTO #sp_who2 EXEC sp_who2
--------------------------------------
INSERT INTO DBAdmin..LoginAudit
SELECT Login, HostName, DBName, Command, LastBatch, ProgramName
FROM #sp_who2
WHERE Status <> ''Background''
AND SPID > 49
AND ProgramName NOT LIKE ''%SQLAgent%''
AND Login <> ''NULL''
AND Login <> ''''
ORDER BY LastBatch ASC
---------------------------------------
DROP TABLE #sp_who2',
@database_name=N'DBAdmin',
@flags=0
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_update_job @job_id = @jobId, @start_step_id = 1
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobschedule
@job_id=@jobId,
@name=N'Every Half an hour',
@enabled=1,
@freq_type=4,
@freq_interval=1,
@freq_subday_type=4,
@freq_subday_interval=30, -- every 30 minutes
@freq_relative_interval=0,
@freq_recurrence_factor=0,
@active_start_date=20150327, -- monitoring (data capture) start date
@active_end_date=20150411, -- monitoring (data capture) end date
@active_start_time=0,
@active_end_time=235959
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
COMMIT TRANSACTION
GOTO EndSave
QuitWithRollback:
IF (@@TRANCOUNT > 0) ROLLBACK TRANSACTION
EndSave:

The following code is for SQL Server 2000.

-------------- Create the table in the DBAdmin database ---------------
CREATE TABLE [LoginAudit] (
[Login] [varchar] (100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[HostName] [varchar] (100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[DBName] [varchar] (100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[Command] [varchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[LastBatch] [varchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[ProgramName] [varchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
) ON [PRIMARY]
GO

--------------- Create a job to run every 30 minutes -- change the mail operator plus the start and end dates -----------
BEGIN TRANSACTION
DECLARE @JobID BINARY(16)
DECLARE @ReturnCode INT
SELECT @ReturnCode = 0
IF (SELECT COUNT(*) FROM msdb.dbo.syscategories WHERE name = N'[Uncategorized (Local)]') < 1
EXECUTE msdb.dbo.sp_add_category @name = N'[Uncategorized (Local)]'

— Delete the job with the same name (if it exists)
SELECT @JobID = job_id
FROM msdb.dbo.sysjobs
WHERE (name = N'Login Audit Job')
IF (@JobID IS NOT NULL)
BEGIN
— Check if the job is a multi-server job
IF (EXISTS (SELECT *
FROM msdb.dbo.sysjobservers
WHERE (job_id = @JobID) AND (server_id <> 0)))
BEGIN
— There is, so abort the script
RAISERROR (N'Unable to import job ''Login Audit Job'' since there is already a multi-server job with this name.', 16, 1)
GOTO QuitWithRollback
END
ELSE
— Delete the [local] job
EXECUTE msdb.dbo.sp_delete_job @job_name = N'Login Audit Job'
SELECT @JobID = NULL
END

BEGIN

— Add the job
EXECUTE @ReturnCode = msdb.dbo.sp_add_job
@job_id = @JobID OUTPUT ,
@job_name = N'Login Audit Job',
@owner_login_name = N'sa',
@description = N'No description available.',
@category_name = N'[Uncategorized (Local)]',
@enabled = 1,
@notify_level_email = 2,
@notify_level_page = 0,
@notify_level_netsend = 0,
@notify_level_eventlog = 2,
@delete_level = 0,
@notify_email_operator_name = N'DBATeam@SQLDBA.com' -- (your email operator)

IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

— Add the job steps
EXECUTE @ReturnCode = msdb.dbo.sp_add_jobstep @job_id = @JobID, @step_id = 1, @step_name = N'Login Audit Step',
@command = N'CREATE TABLE #sp_who2 (SPID INT,Status VARCHAR(255),
Login VARCHAR(255),
HostName VARCHAR(255),
BlkBy VARCHAR(255),
DBName VARCHAR(255),
Command VARCHAR(255),
CPUTime INT,
DiskIO INT,
LastBatch VARCHAR(255),
ProgramName VARCHAR(255),
SPID2 INT)

--------------------------------------
INSERT INTO #sp_who2 EXEC sp_who2
--------------------------------------
INSERT INTO DBAdmin..LoginAudit
SELECT Login, HostName, DBName, Command, LastBatch, ProgramName
FROM #sp_who2
WHERE Status <> ''Background''
AND SPID > 49
AND ProgramName NOT LIKE ''%SQLAgent%''
AND Login <> ''NULL''
AND Login <> ''''
ORDER BY LastBatch ASC
---------------------------------------
DROP TABLE #sp_who2',

@database_name = N'DBAdmin',
@server = N'',
@database_user_name = N'',
@subsystem = N'TSQL',
@cmdexec_success_code = 0,
@flags = 0,
@retry_attempts = 0,
@retry_interval = 1,
@output_file_name = N'',
@on_success_step_id = 0,
@on_success_action = 1,
@on_fail_step_id = 0,
@on_fail_action = 2
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXECUTE @ReturnCode = msdb.dbo.sp_update_job @job_id = @JobID, @start_step_id = 1

IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

— Add the job schedules
EXECUTE @ReturnCode = msdb.dbo.sp_add_jobschedule
@job_id = @JobID,
@name = N'Every Half an hour',
@enabled = 1,
@freq_type = 4,
@active_start_date = 20150327, -- monitoring (data capture) start date
@active_start_time = 0,
@freq_interval = 1,
@freq_subday_type = 4,
@freq_subday_interval = 30,
@freq_relative_interval = 0,
@freq_recurrence_factor = 0,
@active_end_date = 20150411, -- monitoring (data capture) end date
@active_end_time = 235959
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

— Add the Target Servers
EXECUTE @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @JobID, @server_name = N'(local)'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

END
COMMIT TRANSACTION
GOTO EndSave
QuitWithRollback:
IF (@@TRANCOUNT > 0) ROLLBACK TRANSACTION
EndSave:

AWS Certified Solutions Architect and Professional Certification Path

April 27, 2018

Common DBA project challenges along with resolutions


Problem

To me the biggest blunder is knowing that a problem exists and either ignoring it or procrastinating on implementing the resolution. This tip focuses on common DBA project challenges that could have been prevented.

Solution

The reality is that nothing is perfect and as technical professionals we need to build a realistic solution with the time and budget available, then communicate any potential issues to the business so they are aware of them.

To deliver better application performance, DBAs should consider the following tips:

  • Be proactive and align behind end-user experience as a shared objective across the entire IT organization by looking at application performance and the impact that the database has on it continuously, not only when it becomes a major problem.
  • Measure performance based not on an infrastructure resources perspective, but on end-user wait times. Wait-time analysis gives DBAs a view into what end-users are waiting for and what the database is waiting for, providing clear visibility into bottlenecks.
  • Implement monitoring tools that provide visibility across the entire application stack, including all the infrastructure that supports the database – virtualization layers, database servers, hosts, storage systems, networks, etc.
  • Establish historic baselines of application and database performance that look at how applications performed at the same time on the same day last week, and the week before that, to detect any anomalies before they become larger problems. (A minimal baseline sketch follows this list.)
  • Have a common set of goals, metrics and SLAs across all databases, ideally based on application response times, not only uptime.
  • Use tools that provide a single dashboard of performance and the ability to drill down across database technologies and deployment methods, including cloud.
  • Document a consistent set of processes for ensuring integrity and security: backup and restore processes, encryption at rest and on transit, detection of anomalies and potential security events in logs, to name a few.
  • Establish a strategy, roadmap, and guidelines for moving to the cloud (or not) and for reducing workload costs by moving databases to lower-license-cost versions or open-source alternatives.
  • Make sure team members can escape firefighting mode and spend enough time proactively optimizing performance of the databases and taking care of important maintenance tasks, which can result in significant cost savings and prevent problems in the future.
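
As a starting point for the wait-time and baseline bullets above, a minimal sketch along these lines snapshots sys.dm_os_wait_stats into a history table on a schedule. The table name and the scheduling choice are assumptions for illustration, not something prescribed by these tips:

-- Hypothetical baseline table for periodic wait-stats snapshots
CREATE TABLE DBAdmin.dbo.WaitStatsBaseline (
CaptureTime DATETIME NOT NULL DEFAULT GETDATE(),
wait_type NVARCHAR(60) NOT NULL,
waiting_tasks_count BIGINT NOT NULL,
wait_time_ms BIGINT NOT NULL,
signal_wait_time_ms BIGINT NOT NULL
)
GO
-- Run this on a schedule (for example every 30 minutes, like the audit job above)
INSERT INTO DBAdmin.dbo.WaitStatsBaseline (wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms)
SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_time_ms > 0

Comparing snapshots taken a week apart shows which wait types are growing, which is exactly the kind of anomaly the baseline bullet is after.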

Database project Operations issue:

  • All senior team members on vacation – When you have a major deployment, make sure your key staff members are on site and available to meet the project needs. Do not fool yourself that a junior team member will be able to just push a button and deploy a solution, especially when only a minimal amount of testing has been conducted. When a problem arises, it is the senior team members' knowledge and expertise that is needed to quickly resolve issues. If all of those people are out on the beach and will be back next week, it makes sense to wait a week for the deployment so that your team is onsite and available to address any unexpected issues.
  • Putting all of your eggs in one basket – When you work through an enterprise upgrade, whether it is an application or hardware firmware, do not upgrade all of the systems (including the DR site) at once. Take a step back and be sure to keep some systems out of sync for a short period of time so you can migrate back to a known stable platform in case an unexpected issue arises.
  • Not validating backups on a daily basis – If a serious issue occurs, make sure you have a solid last line of defense: a consistent and reliable set of daily backups. In addition, make sure your backup plan includes retiring tapes on a weekly, monthly or quarterly basis so you are able to roll back to some point in time rather than going out of business. Also check in with the business to ensure backups are not needed for legal or regulatory reasons. (A quick backup-history check follows this list.)
  • Not changing passwords – As an administrator you have the keys to the kingdom and need to recognize the responsibility that you have. As such, make sure your passwords are complex, change them frequently and do not share your passwords.
  • Password expiration – This is almost the opposite of the previous bullet. Starting with SQL Server 2005, password policies can be set up for standard logins so that passwords expire and accounts get locked out. When this happens, your applications will not be accessible if one of these accounts is in use. Setting password expiration is a good idea; just be sure to change the password and coordinate the change with your team.
  • Letting the primary file group fill up – With the rate of data growth, be sure to either cap your database size, monitor the size on a daily, weekly or monthly basis, or permit your databases to grow automatically. In any circumstance, be sure to watch your disk space so that you do not fill up your disk and end up with two problems (a full file group and a full disk drive).
  • Hot data centers – High temperatures mean failure for servers. The failure could be a controller card or a disk drive, but one sustained spike in the room temperature could be a critical problem that is not fully realized for a three to six month time period. Make sure your temperature is properly regulated, has a backup conditioning system and can notify your team before an issue arises.
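
For the backup bullet above, a simple check against the standard msdb backup history tables lists each database's most recent full backup so gaps stand out. This is a sketch for SQL Server 2005 and above:

-- Most recent full backup per database (type 'D' = full database backup)
SELECT d.name AS DatabaseName,
MAX(b.backup_finish_date) AS LastFullBackup
FROM sys.databases d
LEFT JOIN msdb.dbo.backupset b
ON b.database_name = d.name AND b.type = 'D'
WHERE d.name <> 'tempdb'
GROUP BY d.name
ORDER BY LastFullBackup

Any database whose LastFullBackup is NULL or older than your recovery point objective deserves immediate attention.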

Hosting DBA service with SQL Server 2014

March 19, 2017

The database industry has seen significant change over the last 5 years, from the introduction of new technologies like clustering, big data, and in-memory data platforms to the recent variants of popular open source SQL alternatives like Percona Server and MariaDB making their way into popular technology stacks. Even with these new technologies becoming increasingly common, many enterprises still trust their mission critical database operation to Microsoft SQL Server. Recently, Microsoft released its latest iteration of the technology, Microsoft SQL Server 2014, with notable advancements in performance including in-memory functionality and added security with high availability.

 

In-Memory OLTP

As one of the first to market with support for this new functionality, SmartERP is focused on increasing database performance by allowing some data to be queried and processed in RAM instead of on constrained disk resources. Previous technology forced users to load ALL information into RAM, making large data sets problematic. Now, database administrators can choose which data tables to process in-memory, giving those operations priority access to valuable processing resources. By speeding up overall query performance across a database environment, users can effectively do more with fewer resources, which reduces the need to scale into expensive hardware solutions. SmartERP teams are already helping users plan their migration to SQL Server 2014 by evaluating target data sets to load into memory.
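
To make the idea concrete, the SQL Server 2014 syntax looks roughly like the sketch below; the database, filegroup, and table names are hypothetical placeholders:

-- A memory-optimized filegroup is required before any in-memory tables can be created
ALTER DATABASE SalesDB ADD FILEGROUP imoltp_fg CONTAINS MEMORY_OPTIMIZED_DATA
GO
ALTER DATABASE SalesDB ADD FILE (NAME = 'imoltp_data', FILENAME = 'C:\Data\imoltp_data')
TO FILEGROUP imoltp_fg
GO
-- Only this table lives in RAM; SCHEMA_AND_DATA keeps it fully durable
CREATE TABLE dbo.ShoppingCart (
CartId INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
UserId INT NOT NULL,
CreatedDate DATETIME2 NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)

The rest of the database stays on disk, which is exactly the selective in-memory placement described above.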

High Availability

While replication and failover have long been part of any core strategy for business continuity, achieving true high availability for a database environment has been problematic. Organizations often must decide what level of inconsistency between database environments is acceptable: hours or days? However, with a new functionality called AlwaysOn, and strong connections between multi-cloud datacenters, including ones belonging to SmartERP and Microsoft Azure, users can now implement nearly 100 percent resilient database environments that protect against both database operational failure and datacenter outages. Even though SmartERP has an industry-leading record in uptime and a 100 percent uptime SLA, we want our users to meet their unique availability requirements, which may include a highly available database environment. We can help you meet those requirements with a team of DBAs that have successfully implemented high availability for many customers. Check out this report on our support of AlwaysOn for more information.
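
For orientation, the core of an AlwaysOn availability group definition looks roughly like this sketch; the server, endpoint, and database names are hypothetical, and the Windows cluster and mirroring endpoints must already be in place:

-- Two replicas: synchronous commit for local HA, asynchronous for a DR copy
CREATE AVAILABILITY GROUP SalesAG
FOR DATABASE SalesDB
REPLICA ON
N'SQLNODE1' WITH (
ENDPOINT_URL = N'TCP://SQLNODE1.corp.local:5022',
AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
FAILOVER_MODE = AUTOMATIC),
N'SQLNODE2' WITH (
ENDPOINT_URL = N'TCP://SQLNODE2.corp.local:5022',
AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
FAILOVER_MODE = MANUAL)

The synchronous replica gives near-zero data loss locally, while the asynchronous one covers the datacenter-level failure scenario mentioned above.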

DBA Services

Recently, we shared with you the economic benefits of our DBA services offering, including a full time DBA available 24×7 at a fraction of the cost of an in-house resource. We have seen growing interest in our DBA coverage and released a tiered offering to accommodate a variety of supplemental DBA needs. We have also witnessed some organizations relying solely on our DBA services to ensure database functionality, while others have formed close partnerships with their core database team and our DBA service. This allows our customers to focus on important development and optimization efforts without the burden of maintaining and monitoring the health of the existing database. Moving between any database iteration requires some thought and planning. Our DBA team is helping customers understand the scope of work and code change required to not only migrate, but also utilize new functionality. Our partnership with Microsoft ensures that our DBA Services team is ready to address even the more challenging scenarios when switching.

Flexible Licensing

At SmartERP, customers can consume fully managed SQL application licensing along with support at a low monthly cost. This allows users to move in-between SQL versions and hardware profiles without needing to address their software license agreement. We want to make sure you always have the best combination of optimized hardware and SQL versioning to meet your objectives.

Hybrid Potential

Lastly, while many web applications are moving to the cloud, many database users still require the performance consistency and controls of a dedicated server. Often, users partnering with a single cloud or managed hosting provider are forced to make sacrifices: either not moving applications and web assets to the cloud because of database restrictions, or sacrificing database performance and reliability by trying to operate the database in a utility cloud service. With a Managed Private Cloud powered by Microsoft Cloud Platform, and its built-in Database-as-a-Service capability, customers get a fully managed, physically isolated private cloud environment in which they can programmatically spin up SQL Server 2014 databases on demand. Customers can also strengthen their high availability and disaster recovery capabilities by choosing from a number of available destinations, including Microsoft Azure, as a designated disaster recovery location.
SmartERP can bridge public and private clouds running SQL Server over a strong network connection, letting users leverage both cloud and dedicated server resources in a single environment. Dominoes UK spoke about their use of hybrid to serve web content via the cloud while still keeping a reliable database on dedicated hosting. You can read more about that here.

SmartERP DBA services professionals are ready to help you wrap your brain around the cool new features in Microsoft SQL Server 2014. If you want to know more, please download our whitepaper and visit go.SmartERP.com/SQL2014.