Why Do Clients Choose Oracle Cloud?


Some customers can't move their data and applications to the public cloud because of data privacy concerns, industry regulations, or unique security constraints. Fortunately, they can still take advantage of the scalability, affordability, and ease of public cloud technology by using Oracle Cloud technology in their own data center. Based on a flexible subscription model, Oracle Cloud at Customer is ideal when data must remain on premises for regulatory, privacy, legal, or performance reasons. This is a service that only Oracle offers. In other words, if you can't move to the public cloud, Oracle will move the cloud to you by deploying and operating an instance of Oracle Cloud Machine or Oracle Database Exadata Cloud at Customer behind your firewall. You enjoy the same robust cloud platform services, the same automatic software updates, and the same subscription-based pricing model as all Oracle Cloud customers do. Rather than purchasing hardware and software, you can simply subscribe to it and let Oracle handle every aspect of installation, configuration, patching, lifecycle management, upgrading, and monitoring.
Customers with existing on-premises licenses can leverage that investment to use Oracle Database Cloud at a fraction of the old PaaS price. Oracle Bring Your Own License (BYOL) allows Oracle Database customers to migrate to the cloud seamlessly. With Oracle's 100 percent workload compatibility and license mobility, we can preserve our existing investment.
Data Warehouse in the Cloud
Oracle Cloud is ideal for data warehouse workloads. We can migrate mixed workloads, such as online transaction processing (OLTP) and analytics, from small applications to enterprise information systems. Oracle Cloud supports structured and unstructured data such as NoSQL and Hadoop. We can also take advantage of a wide range of platform services for business intelligence, as well as use Oracle's cloud-based integration services to accommodate third-party analytics. For high-performance data warehouses, consider Oracle Database Exadata Cloud Service, which includes preconfigured hardware and software to eliminate costly data warehouse builds and offers extreme performance for instant analytics.
Some companies sign on with a commodity cloud vendor, and then realize that they need advanced platform capabilities that the vendor can't offer. Only Oracle supports hybrid cloud deployments that give us the freedom to move workloads from our own data center to the public cloud and back again, with mature migration tools to simplify the move.
We subscribe to Oracle Bare Metal server and storage infrastructure, then install the exact operating systems, middleware, databases, and applications that we need. Dedicated servers deliver predictable performance and extensive control so you can optimize your applications. In addition, you can take advantage of industry-leading technologies such as Oracle Database, Oracle RAC, and the wide range of Oracle IaaS and Oracle PaaS cloud services.

Oracle Compute Cloud Deployment Process
1. Download Images
2. Select Type of Environment to Deploy: Dev/Test or Production
3. Provision Hardware
4. Allocate Compute Resources
5. Create the Instances
6. Apply Patches
7. Configure the Application
8. Release to the Users

The Superiority of Oracle Cloud
• Unified management tools for monitoring applications and infrastructure
• Superior utilities for capacity planning, compliance, scheduling, and log analytics
• Compute services that are 11.5 times faster than commodity servers, yet cost about 20 percent less
• Storage capacity at about one-seventh the cost of commodity cloud alternatives
The reasons below help us choose Oracle Cloud:
• Engineered Systems dramatically increase compute performance and scalability while generally achieving economies that reduce IT costs.
• Engineered Systems reallocate much of the IT labor needed for highly technical and specialized hardware and compute services from the customer to the vendor.
• Exalogic configurations available for private cloud deployment are not limited to large enterprise companies. The offerings go down to an eighth-rack size, made up of four servers, making them a viable option for SMBs.
• Oracle has the infrastructure in place to support high demand and massive enterprises, with fairly straightforward pricing that favors this level of usage. After all, Oracle is one of the pioneers in the database field, and it brings this expertise to PaaS and IaaS. Oracle's big data management system is particularly enticing, especially for organizations that are already heavily invested in Oracle products.

Capture DB Usage Before Planning Decommissioning


In large establishments with many servers, there is always a chance that some servers or databases are no longer being used but remain operational. Those servers or databases need to be decommissioned so that they can be reused for other purposes. Identifying servers or databases that are not in use should be a continuous process; an organization can save substantial storage and licensing costs by reclaiming them.
Before we decide to decommission such servers or databases, we need to ensure they are not used by any application or user. This needs to be tracked, but it is not always easy. Sometimes a database is simply taken offline and the DBA team waits for someone to raise an alarm that they can no longer connect to a database they were using for some purpose; only then is the database brought back online.
This approach has many flaws: the application becomes unavailable, which inconveniences the teams using it and creates downtime.
We therefore need a more proactive approach: analyze usage first, then decide whether to decommission a server or database.
To do this, we have created a job that captures the usage of all databases, along with the logins using them and the host names the connections come from. This information is stored in a table. We can let the job run for a week or two, then review who is connecting to which database and decide whether or not to decommission it.
You can run 'select * from DBAdmin..LoginAudit' to see the information captured.

The following code is for SQL Server 2005 and above.

-------------------------- Create a Table in DBAdmin database --------------------------
USE [DBAdmin]
GO

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[LoginAudit](
[Login] [varchar](100) NULL,
[HostName] [varchar](100) NULL,
[DBName] [varchar](100) NULL,
[Command] [varchar](255) NULL,
[LastBatch] [varchar](255) NULL,
[ProgramName] [varchar](255) NULL
) ON [PRIMARY]

GO
SET ANSI_PADDING OFF

-------------- Create Job to run every 30 min -- Change mail operator plus start and end dates --------------
USE [msdb]
GO

BEGIN TRANSACTION
DECLARE @ReturnCode INT
SELECT @ReturnCode = 0

IF NOT EXISTS (SELECT name FROM msdb.dbo.syscategories WHERE name=N'[Uncategorized (Local)]' AND category_class=1)
BEGIN
EXEC @ReturnCode = msdb.dbo.sp_add_category @class=N'JOB', @type=N'LOCAL', @name=N'[Uncategorized (Local)]'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

END

DECLARE @jobId BINARY(16)
EXEC @ReturnCode = msdb.dbo.sp_add_job @job_name=N'Login Audit Job',
@enabled=1,
@notify_level_eventlog=2,
@notify_level_email=2,
@notify_level_netsend=0,
@notify_level_page=0,
@delete_level=0,
@description=N'No description available.',
@category_name=N'[Uncategorized (Local)]',
@owner_login_name=N'sa',
@notify_email_operator_name=N'DBATeam@SQLDBA.com', -- (your email operator)
@job_id = @jobId OUTPUT
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
/****** Object: Step [Login Audit Step] Script Date: 03/27/2015 03:25:49 ******/
EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'Login Audit Step',
@step_id=1,
@cmdexec_success_code=0,
@on_success_action=1,
@on_success_step_id=0,
@on_fail_action=2,
@on_fail_step_id=0,
@retry_attempts=0,
@retry_interval=0,
@os_run_priority=0, @subsystem=N'TSQL',
@command=N'CREATE TABLE #sp_who2 (SPID INT,Status VARCHAR(255),
Login VARCHAR(255),HostName VARCHAR(255),
BlkBy VARCHAR(255),DBName VARCHAR(255),
Command VARCHAR(255),CPUTime INT,
DiskIO INT,LastBatch VARCHAR(255),
ProgramName VARCHAR(255),SPID2 INT,
REQUESTID INT)
--------------------------------------
INSERT INTO #sp_who2 EXEC sp_who2
--------------------------------------
INSERT INTO DBAdmin..LoginAudit
SELECT Login, Hostname, DBName, Command, LastBatch, ProgramName
FROM #sp_who2
WHERE Status <> ''Background''
and SPID > 49
and programName not like ''%SQLAgent%''
and Login <> ''Null''
and Login <> ''''
ORDER BY LastBatch ASC
---------------------------------------
DROP TABLE #sp_who2',
@database_name=N'DBAdmin',
@flags=0
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_update_job @job_id = @jobId, @start_step_id = 1
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobschedule
@job_id=@jobId,
@name=N'Every Half an hour',
@enabled=1,
@freq_type=4,
@freq_interval=1,
@freq_subday_type=4,
@freq_subday_interval=30,
@freq_relative_interval=0,
@freq_recurrence_factor=0,
@active_start_date=20150327, -- Monitoring (data capture) start date
@active_end_date=20150411, -- Monitoring (data capture) end date
@active_start_time=0,
@active_end_time=235959
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
COMMIT TRANSACTION
GOTO EndSave
QuitWithRollback:
IF (@@TRANCOUNT > 0) ROLLBACK TRANSACTION
EndSave:
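Before waiting for the schedule, you can kick the job off once by hand to confirm it is wired up correctly and that rows land in DBAdmin..LoginAudit (sp_start_job is a standard msdb procedure; the job name matches the script above):

EXEC msdb.dbo.sp_start_job @job_name = N'Login Audit Job'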

The following code is for SQL Server 2000.

-------------- Create Table in DBAdmin Database --------------
CREATE TABLE [LoginAudit] (
[Login] [varchar] (100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[HostName] [varchar] (100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[DBName] [varchar] (100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[Command] [varchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[LastBatch] [varchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[ProgramName] [varchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
) ON [PRIMARY]
GO

-------------- Create Job to run every 30 min -- Change mail operator plus start and end dates --------------
BEGIN TRANSACTION
DECLARE @JobID BINARY(16)
DECLARE @ReturnCode INT
SELECT @ReturnCode = 0
IF (SELECT COUNT(*) FROM msdb.dbo.syscategories WHERE name = N'[Uncategorized (Local)]') < 1
EXECUTE msdb.dbo.sp_add_category @name = N'[Uncategorized (Local)]'

-- Delete the job with the same name (if it exists)
SELECT @JobID = job_id
FROM msdb.dbo.sysjobs
WHERE (name = N'Login Audit Job')
IF (@JobID IS NOT NULL)
BEGIN
-- Check if the job is a multi-server job
IF (EXISTS (SELECT *
FROM msdb.dbo.sysjobservers
WHERE (job_id = @JobID) AND (server_id <> 0)))
BEGIN
-- There is, so abort the script
RAISERROR (N'Unable to import job ''Login Audit Job'' since there is already a multi-server job with this name.', 16, 1)
GOTO QuitWithRollback
END
ELSE
-- Delete the [local] job
EXECUTE msdb.dbo.sp_delete_job @job_name = N'Login Audit Job'
SELECT @JobID = NULL
END

BEGIN

-- Add the job
EXECUTE @ReturnCode = msdb.dbo.sp_add_job
@job_id = @JobID OUTPUT ,
@job_name = N'Login Audit Job',
@owner_login_name = N'sa',
@description = N'No description available.',
@category_name = N'[Uncategorized (Local)]',
@enabled = 1,
@notify_level_email = 2,
@notify_level_page = 0,
@notify_level_netsend = 0,
@notify_level_eventlog = 2,
@delete_level= 0,
@notify_email_operator_name = N'DBATeam@SQLDBA.com' -- (your email operator)

IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

-- Add the job steps
EXECUTE @ReturnCode = msdb.dbo.sp_add_jobstep @job_id = @JobID, @step_id = 1, @step_name = N'Login Audit Step',
@command = N'CREATE TABLE #sp_who2 (SPID INT,Status VARCHAR(255),
Login VARCHAR(255),
HostName VARCHAR(255),
BlkBy VARCHAR(255),
DBName VARCHAR(255),
Command VARCHAR(255),
CPUTime INT,
DiskIO INT,
LastBatch VARCHAR(255),
ProgramName VARCHAR(255),
SPID2 INT)

--------------------------------------
INSERT INTO #sp_who2 EXEC sp_who2
--------------------------------------
INSERT INTO DBAdmin..LoginAudit
SELECT Login, Hostname, DBName, Command, LastBatch, ProgramName
FROM #sp_who2
WHERE Status <> ''Background''
and SPID > 49
and programName not like ''%SQLAgent%''
and Login <> ''Null''
and Login <> ''''
ORDER BY LastBatch ASC
---------------------------------------
DROP TABLE #sp_who2',

@database_name = N'DBAdmin',
@server = N'',
@database_user_name = N'',
@subsystem = N'TSQL',
@cmdexec_success_code = 0,
@flags = 0,
@retry_attempts = 0,
@retry_interval = 1,
@output_file_name = N'',
@on_success_step_id = 0,
@on_success_action = 1,
@on_fail_step_id = 0,
@on_fail_action = 2
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXECUTE @ReturnCode = msdb.dbo.sp_update_job @job_id = @JobID, @start_step_id = 1

IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

-- Add the job schedules
EXECUTE @ReturnCode = msdb.dbo.sp_add_jobschedule
@job_id = @JobID,
@name = N'Every Half an hour',
@enabled = 1,
@freq_type = 4,
@active_start_date = 20150327, -- Monitoring (data capture) start date
@active_start_time = 0,
@freq_interval = 1,
@freq_subday_type = 4,
@freq_subday_interval = 30,
@freq_relative_interval = 0,
@freq_recurrence_factor = 0,
@active_end_date = 20150411, -- Monitoring (data capture) end date
@active_end_time = 235959
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

-- Add the Target Servers
EXECUTE @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @JobID, @server_name = N'(local)'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

END
COMMIT TRANSACTION
GOTO EndSave
QuitWithRollback:
IF (@@TRANCOUNT > 0) ROLLBACK TRANSACTION
EndSave:
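Once the job has collected data for a week or two, the raw output of 'select * from DBAdmin..LoginAudit' can be hard to scan. The query below is a minimal summary sketch against the LoginAudit table created above: it groups the captured rows by database, login, and host so you can quickly spot databases with little or no real activity, which are the decommissioning candidates.

-- Summarize captured activity per database, login, and host.
-- A sketch against DBAdmin..LoginAudit as created above; LastBatch is
-- stored as varchar, so MAX(LastBatch) is only a rough "last seen" value.
SELECT DBName,
       [Login],
       HostName,
       COUNT(*) AS ConnectionsSeen,
       MAX(LastBatch) AS LastSeen
FROM DBAdmin..LoginAudit
GROUP BY DBName, [Login], HostName
ORDER BY DBName, ConnectionsSeen DESC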

Capacity Planning using SQL Server Backup History


Problem Statement: Storage has always been one of the IT infrastructure components that organizations invest in heavily. Optimal utilization of storage is one of the key factors in saving cost. We should have fair detail on the current storage requirement and also the ability to forecast likely growth. This is critical so that we do not end up wasting storage by allocating extra where the data is not growing, or struggling to add storage where the data is growing.
SQL Server is used as the database for many applications in any organization, and the same storage story applies to SQL Server databases as well. When a new database is created, we may predict the storage requirement from a baseline of the application's design, or simply let it grow without knowing the growth rate. If we have tools that help us understand the growth, there is no issue in forecasting it and allocating storage accordingly. But what if we do not have any such tools in place to provide the growth history of a database or its files?
Solution: In any stable and healthy environment we maintain a good backup strategy. Let us assume that we have a daily backup in place for a database called Test and that we keep the backup history in the msdb database. The Test database has multiple data files residing on multiple drives.
We can perform the steps below to build the database/data-file growth report.

1. Gather the data from the backup history. The query below fetches these details.
select a.backup_set_id, b.backup_finish_date, b.database_name,
a.physical_name, sum(a.file_size/128/1024) as fileSizeInMB
from msdb..backupfile a, msdb..backupset b
where a.backup_set_id = b.backup_set_id
and b.database_name in ('Test')
and b.type = 'D'
and a.file_type = 'D'
group by a.backup_set_id, b.backup_finish_date, b.database_name,
a.physical_name
order by b.backup_finish_date desc

2. Copy the data into an Excel sheet.

3. Use a pivot table and a graph to represent the data growth. We will not discuss how to work with Excel here; refer to the accompanying Excel file for a better understanding.

These easy steps will reveal the pattern of data-file growth within a database. You can then slice and dice the data in Excel for further analysis.
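If you prefer to stay in T-SQL instead of pivoting in Excel, the same backup history can be rolled up directly. The query below is a rough sketch: it aggregates the history to one row per data file per month, reusing the size conversion from the query in step 1, so the growth trend can be read straight off the result set.

-- Monthly growth trend per data file for the Test database, derived from
-- full-backup history in msdb. A sketch: the size conversion follows the
-- query in step 1; adjust the database name and filters as needed.
select b.database_name,
       a.physical_name,
       convert(char(7), b.backup_finish_date, 120) as backup_month, -- YYYY-MM
       max(a.file_size/128/1024) as maxFileSizeInMB
from msdb..backupfile a
join msdb..backupset b on a.backup_set_id = b.backup_set_id
where b.database_name in ('Test')
and b.type = 'D'        -- full database backups
and a.file_type = 'D'   -- data files only
group by b.database_name, a.physical_name,
         convert(char(7), b.backup_finish_date, 120)
order by a.physical_name, backup_month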

Did I Make the Right Decision in Moving My Data Warehouse Solution to the Cloud? Here's Why.


Traditional data warehousing has hit a roadblock. Most organizations have ancient information management systems, typically built in an age when inflexible systems working within silos were sufficient to address the data needs of that era: limited data sources, infrequent changes, lower transaction volumes, and little competition. Today, those same systems have been rendered ineffective by the surge in data sources as well as data volumes.

Demands on the Modern Data Warehouse:
• Designed for the sheer volume and pace of today's data
• Can be used by individual departments such as marketing, finance, development, and sales at organizations of all types and sizes
• Affordable to small and mid-sized organizations, and able to adapt easily to dynamic changes in data volume and analytics workloads
• Makes data available immediately and at every step of modification, supporting data exploration, business intelligence, and reporting
• No limitations on the number of users
• Keeps data insights up to date and directly accessible to everyone who needs them
• Empowers enterprises to shift their focus from systems management to analysis.
• Operates painlessly at any scale and makes it possible to combine diverse data, both structured and semi-structured

The cloud data warehouse will replace the on-premises warehouse as the main source of decision support and business analytics.

Cost:
One of the most obvious advantages of cloud deployment is the up-front cost savings. Buying or upgrading hardware and software is both costly and time-consuming. Moving the data warehouse to the cloud can give your business a competitive advantage over businesses that keep theirs on premises.
Matching capacity to demand:
When we install servers on premises, we have the infrastructure we have. It may be perfect for today, or even next week, but totally inadequate for what's coming in 18 months. Or it may have far more capacity than we currently need or will need for a while, in which case we are paying for resources that sit idle.
Cloud-based deployments eliminate such problems through elasticity. Resources are dynamically reassigned as workloads change, so we can grow our environment, shrink it, or, for example, spin off a whole new environment just for a two-week promotion.
Transformation: Traditional data warehouses consist of data models; extract, transform, and load (ETL) processes; and data governance, with BI tools sitting on top. Instead of doing things the old way (structure, ingest, then analyze), enterprise data warehouses need to flip the paradigm and ingest, analyze, and then structure, by utilizing the cloud, data lakes, and polyglot warehousing. We need to think of the data warehouse not as a single technology but as a collection of technologies.
Agility: Many business functions not previously associated with BI have taken to data analytics for justifying spending, analyzing performance, and so on. It would be unproductive for these lines of business to wait for central IT to provision a data warehouse before they can start analyzing their data. The cloud offers a relatively quick and robust way to cater to these warehousing needs. On-premises infrastructure, by contrast, has long procurement and deployment cycles, plus the pain of going through upgrades every two to three years.
Intersection with Big Data: Big data has empowered the world to tap any kind of unstructured data source to gain insights. Cloud data warehousing can be a bridge that brings the structured data of legacy on-premises data warehouses together with these newer big data sources.
To conclude, on-premises workloads will continue to shift to the cloud. In the days to come, the cloud data warehouse will replace the on-premises warehouse as the main source of decision support and business analytics. Azure SQL Data Warehouse, a cloud-based data warehouse hosted on Microsoft Azure, is capable of processing massive volumes of data and can provide your business the speed and scale it needs to manage enterprise data.