Performance boost with Optimize for Ad Hoc Workloads

I faced a serious issue where a sales calculation was taking more than 7 hours, whereas according to IBM it should complete within 3 hours. The client uses the Varicent sales application (IBM software) to produce sales reports. The server has 64 GB of RAM and an 8-core processor, yet SQL Server was still under memory pressure. When I investigated the memory pressure, I found a plan cache problem: by its nature the application issues a large number of ad hoc queries. I resolved the issue by enabling the optimize for ad hoc workloads option, and the run time came down to 2 hours 30 minutes. The details of the option below explain how it helps improve performance.

In SQL Server 2008 and later, we can use optimize for ad hoc workloads, which is a server-level setting, i.e. it will affect every single database on the server (as opposed to forced parameterization, which is database-specific).

With this setting enabled, SQL Server does not cache the plan of an ad hoc query the first time it is encountered. Instead, a plan stub is cached that just indicates that the query has been seen before. Only if the query is seen a second time is the plan cached. This won’t reduce the number of compiles for ad hoc queries, but it will make it less likely that the plan cache will grow as much, since the initial stub takes up very little memory. As such, it reduces the chances that other plans which could be reusable will be discarded due to memory pressure.

To enable the optimize for ad hoc workloads setting, use sp_configure, as shown below:

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'optimize for ad hoc workloads', 1
RECONFIGURE
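
To verify the option took effect, and to watch plan stubs accumulate once it is enabled, a quick check along these lines can help (a minimal sketch; the cacheobjtype value 'Compiled Plan Stub' is what sys.dm_exec_cached_plans reports for the stubs described above):

-- Confirm the configured and running values of the option
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name = 'optimize for ad hoc workloads'

-- Compare plan stubs with full ad hoc plans in the cache
SELECT cacheobjtype, COUNT(*) AS plan_count,
       SUM(CAST(size_in_bytes AS bigint)) / 1024 AS size_kb
FROM sys.dm_exec_cached_plans
WHERE objtype = 'Adhoc'
GROUP BY cacheobjtype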

Procedure to follow to determine a SAN issue
Normally everything is fine, but over the past few weeks we have been seeing huge lag spikes in SQL Server. It is clear that SQL Server is waiting on disk I/O, yet I keep being told that the cause is SQL Server issuing abnormally high I/O, which is not the case. The steps below show how to establish whether the SAN is at fault.
Identifying I/O issues caused by the SAN:
Identify I/O bottlenecks by examining the latch waits. Run the following DMV query to find I/O latch wait statistics.
select wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms,
    wait_time_ms / waiting_tasks_count as avg_wait_ms_per_task
from sys.dm_os_wait_stats
where wait_type like 'PAGEIOLATCH%' and waiting_tasks_count > 0
order by wait_type

Identify the number of pending I/Os that are waiting to be completed for the entire SQL Server instance:
SELECT SUM(pending_disk_io_count) AS [Number of pending I/Os] FROM sys.dm_os_schedulers

Capture the details of the pending I/Os reported by the above query:
SELECT * FROM sys.dm_io_pending_io_requests
To identify the stalled I/O for each database file in the SQL Server instance, run the query below:
SELECT DB_NAME(database_id) AS [Database], file_id, io_stall_read_ms, io_stall_write_ms, io_stall FROM sys.dm_io_virtual_file_stats(NULL, NULL)

Note: The values are cumulative since SQL Server started. You can use the sample_ms column to compare two snapshots of the output and find which file is causing the stalled I/O (sample_ms is the number of milliseconds since the computer was started). Once you identify the file and the drive with high I/O stalls, the next step is to capture the I/O latency:
SELECT
--virtual file latency
[ReadLatency] =
    CASE WHEN [num_of_reads] = 0
        THEN 0 ELSE ([io_stall_read_ms] / [num_of_reads]) END,
[WriteLatency] =
    CASE WHEN [num_of_writes] = 0
        THEN 0 ELSE ([io_stall_write_ms] / [num_of_writes]) END,
[Latency] =
    CASE WHEN ([num_of_reads] = 0 AND [num_of_writes] = 0)
        THEN 0 ELSE ([io_stall] / ([num_of_reads] + [num_of_writes])) END,
--avg bytes per IOP
[AvgBPerRead] =
    CASE WHEN [num_of_reads] = 0
        THEN 0 ELSE ([num_of_bytes_read] / [num_of_reads]) END,
[AvgBPerWrite] =
    CASE WHEN [num_of_writes] = 0 -- guard on num_of_writes to avoid divide-by-zero
        THEN 0 ELSE ([num_of_bytes_written] / [num_of_writes]) END,
[AvgBPerTransfer] =
    CASE WHEN ([num_of_reads] = 0 AND [num_of_writes] = 0)
        THEN 0 ELSE
            (([num_of_bytes_read] + [num_of_bytes_written]) /
            ([num_of_reads] + [num_of_writes])) END,
LEFT ([mf].[physical_name], 2) AS [Drive],
DB_NAME ([vfs].[database_id]) AS [DB],
--[vfs].*,
[mf].[physical_name]
FROM sys.dm_io_virtual_file_stats (NULL, NULL) AS [vfs]
JOIN sys.master_files AS [mf]
    ON [vfs].[database_id] = [mf].[database_id]
    AND [vfs].[file_id] = [mf].[file_id]
-- WHERE [vfs].[file_id] = 2 -- log files
-- ORDER BY [Latency] DESC
-- ORDER BY [ReadLatency] DESC
ORDER BY [WriteLatency] DESC;
GO
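
Because these counters are cumulative since startup, latency averaged over months can mask a recent problem. Here is a minimal sketch for measuring activity over a fixed interval instead; the temp table name and the 60-second delay are my own choices, not part of the original procedure:

-- Snapshot 1
SELECT database_id, file_id, num_of_reads, io_stall_read_ms,
       num_of_writes, io_stall_write_ms
INTO #io_snap
FROM sys.dm_io_virtual_file_stats(NULL, NULL);

WAITFOR DELAY '00:01:00'; -- sample interval: 60 seconds

-- Snapshot 2 minus snapshot 1 = activity during the interval
SELECT DB_NAME(v.database_id) AS [Database], v.file_id,
       v.num_of_reads - s.num_of_reads AS reads_in_interval,
       v.io_stall_read_ms - s.io_stall_read_ms AS read_stall_ms_in_interval,
       v.num_of_writes - s.num_of_writes AS writes_in_interval,
       v.io_stall_write_ms - s.io_stall_write_ms AS write_stall_ms_in_interval
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS v
JOIN #io_snap AS s
    ON v.database_id = s.database_id AND v.file_id = s.file_id
ORDER BY write_stall_ms_in_interval DESC;

DROP TABLE #io_snap;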

First, see which database files have the biggest I/O bottleneck by running this query (query by Glenn Berry):
SELECT DB_NAME(fs.database_id) AS [Database Name] ,
mf.physical_name ,
io_stall_read_ms ,
num_of_reads ,
CAST(io_stall_read_ms / ( 1.0 + num_of_reads ) AS NUMERIC(10, 1)) AS [avg_read_stall_ms] ,
io_stall_write_ms ,
num_of_writes ,
CAST(io_stall_write_ms / ( 1.0 + num_of_writes ) AS NUMERIC(10, 1)) AS [avg_write_stall_ms] ,
io_stall_read_ms + io_stall_write_ms AS [io_stalls] ,
num_of_reads + num_of_writes AS [total_io] ,
CAST(( io_stall_read_ms + io_stall_write_ms ) / ( 1.0 + num_of_reads + num_of_writes ) AS NUMERIC(10, 1)) AS [avg_io_stall_ms]
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS fs
INNER JOIN sys.master_files AS mf WITH ( NOLOCK ) ON fs.database_id = mf.database_id
AND fs.[file_id] = mf.[file_id]
ORDER BY avg_io_stall_ms DESC
OPTION ( RECOMPILE );
Then run this query to see the top ten events your server is waiting on (query by Jonathan Kehayias). You will also find a similar query in Glenn Berry’s diagnostic queries.
SELECT TOP 10
    wait_type ,
    max_wait_time_ms wait_time_ms ,
    signal_wait_time_ms ,
    wait_time_ms - signal_wait_time_ms AS resource_wait_time_ms ,
    100.0 * wait_time_ms / SUM(wait_time_ms) OVER ( ) AS percent_total_waits ,
    100.0 * signal_wait_time_ms / SUM(signal_wait_time_ms) OVER ( ) AS percent_total_signal_waits ,
    100.0 * ( wait_time_ms - signal_wait_time_ms )
        / SUM(wait_time_ms) OVER ( ) AS percent_total_resource_waits
FROM sys.dm_os_wait_stats
WHERE wait_time_ms > 0 -- remove zero wait_time
    AND wait_type NOT IN -- filter out additional irrelevant waits
    ( 'SLEEP_TASK', 'BROKER_TASK_STOP', 'BROKER_TO_FLUSH', 'SQLTRACE_BUFFER_FLUSH',
      'CLR_AUTO_EVENT', 'CLR_MANUAL_EVENT', 'LAZYWRITER_SLEEP', 'SLEEP_SYSTEMTASK',
      'SLEEP_BPOOL_FLUSH', 'BROKER_EVENTHANDLER', 'XE_DISPATCHER_WAIT',
      'FT_IFTSHC_MUTEX', 'CHECKPOINT_QUEUE', 'FT_IFTS_SCHEDULER_IDLE_WAIT',
      'BROKER_TRANSMITTER', 'KSOURCE_WAKEUP', 'LOGMGR_QUEUE', 'ONDEMAND_TASK_QUEUE',
      'REQUEST_FOR_DEADLOCK_SEARCH', 'XE_TIMER_EVENT', 'BAD_PAGE_PROCESS',
      'DBMIRROR_EVENTS_QUEUE', 'BROKER_RECEIVE_WAITFOR',
      'PREEMPTIVE_OS_GETPROCADDRESS', 'PREEMPTIVE_OS_AUTHENTICATIONOPS', 'WAITFOR',
      'DISPATCHER_QUEUE_SEMAPHORE', 'XE_DISPATCHER_JOIN', 'RESOURCE_QUEUE' )
ORDER BY wait_time_ms DESC
Once you have this information at hand, it is much easier to troubleshoot the problem.

Once you identify the file and the drive with high I/O stalls, the next step is to capture logical disk counters using Perfmon (Windows Performance Monitor):
Logical Disk
Avg. Disk sec/read
Avg. Disk sec/write
Avg. Disk sec/transfer
%Disk time
Disk Read Bytes/sec
Disk Write Bytes/sec
Split IO/sec
Process
IO Data Bytes/sec
IO Read Data Bytes/sec
IO Write Data Bytes/sec
Remember to capture the counters for the specific drive or process, not the _Total instance.
Average Disk Sec/Transfer on a SQL Server machine is typically less than 15ms. If this value climbs, it may indicate that the I/O subsystem is not keeping up with I/O demand. To gauge I/O properly, here are the values for reference:
Any number Less than 10ms – very good
Between 10 – 20ms – okay
Between 20 – 50ms – slow, needs attention
Greater than 50ms – Serious I/O bottleneck
If this value is above the recommended range, check whether SQL Server itself is flooding the I/O subsystem with poorly written, I/O-intensive queries, and explore the opportunity of tuning them: adding an index, creating missing or updating existing statistics, or rewriting the problematic query might all help reduce the I/O. One starting point for the index angle is sketched below.
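
As a starting point for the index angle, a minimal sketch using the missing-index DMVs (the impact formula is a common community heuristic, not an official threshold):

-- Rank missing-index suggestions by a rough impact estimate
SELECT TOP 10
    DB_NAME(mid.database_id) AS [Database],
    mid.[statement] AS [Table],
    mid.equality_columns, mid.inequality_columns, mid.included_columns,
    migs.user_seeks * migs.avg_total_user_cost * (migs.avg_user_impact / 100.0) AS est_impact
FROM sys.dm_db_missing_index_details AS mid
JOIN sys.dm_db_missing_index_groups AS mig ON mid.index_handle = mig.index_handle
JOIN sys.dm_db_missing_index_group_stats AS migs ON mig.index_group_handle = migs.group_handle
ORDER BY est_impact DESC;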

CPU pressure and optimal number of CPUs to handle the workload
As DBAs we often face this question from a customer: "I am running a DSS/OLAP-like SQL job and it takes x amount of time. How can I reduce the time so the job completes sooner? Can I add more CPUs, and if so, how many?" We are also getting high-CPU alerts, so we need to analyze whether the problem can be resolved by adding CPUs.

I followed the queries below step by step to determine the degree of CPU pressure.

You can see the top wait stats and the percentage of waits at the current point in time by running the following query by Tim Ford and Glenn Berry.

 

WITH Waits AS
(
    SELECT
        wait_type,
        wait_time_ms / 1000. AS wait_time_s,
        100. * wait_time_ms / SUM(wait_time_ms) OVER() AS pct,
        ROW_NUMBER() OVER(ORDER BY wait_time_ms DESC) AS rn
    FROM sys.dm_os_wait_stats
    WHERE wait_type NOT IN
        ('CLR_SEMAPHORE', 'LAZYWRITER_SLEEP', 'RESOURCE_QUEUE',
         'SLEEP_TASK', 'SLEEP_SYSTEMTASK', 'SQLTRACE_BUFFER_FLUSH', 'WAITFOR',
         'CLR_AUTO_EVENT', 'CLR_MANUAL_EVENT')
) -- filter out additional irrelevant waits
SELECT W1.wait_type,
    CAST(W1.wait_time_s AS DECIMAL(12, 2)) AS wait_time_s,
    CAST(W1.pct AS DECIMAL(12, 2)) AS pct,
    CAST(SUM(W2.pct) AS DECIMAL(12, 2)) AS running_pct
FROM Waits AS W1
INNER JOIN Waits AS W2 ON W2.rn <= W1.rn
GROUP BY W1.rn, W1.wait_type, W1.wait_time_s, W1.pct
HAVING SUM(W2.pct) - W1.pct < 95; -- percentage threshold

 

Note: SOS_SCHEDULER_YIELD is a fairly common wait type and occurs when there is CPU pressure. SQL Server runs multiple threads and tries to allow all threads to run without problems. However, if all the threads are busy on each scheduler and can’t let other threads run, it will yield itself for another thread, which in turn creates the SOS_SCHEDULER_YIELD wait type.

 

To see an average of current tasks and current waiting tasks you can use the following query:

 

SELECT AVG(current_tasks_count) AS [Avg Current Task],
       AVG(runnable_tasks_count) AS [Avg Wait Task]
FROM sys.dm_os_schedulers
WHERE scheduler_id < 255
    AND status = 'VISIBLE ONLINE'

With the following query we can see how many processors SQL Server is using:

 

SELECT COUNT(*) AS proc# 
FROM sys.dm_os_schedulers 
WHERE status = 'VISIBLE ONLINE' 
AND is_online = 1

 

The following query will show current tasks running, CPU time, wait type, etc.:

 

SELECT s2.text, 
session_id,
start_time,
status, 
cpu_time, 
blocking_session_id, 
wait_type,
wait_time, 
wait_resource, 
open_transaction_count
FROM sys.dm_exec_requests a
CROSS APPLY sys.dm_exec_sql_text(a.sql_handle) AS s2  
WHERE status <> 'background'

 

Query to find out how many CPUs, on average, would help scale the workload better.

 

When concurrent users apply a simultaneous DSS/OLAP-like CPU-intensive workload, there can be CPU pressure. We can confirm the presence of CPU pressure using the commands above.

 

In this case, one can run the query below to find out how many CPUs, on average, would help scale the workload better. It is more informative to collect this information at short intervals (many samples) rather than just once, so you can see at which point in the workload the CPU pressure was highest. A single sample only yields the average number of additional CPUs needed over the entire workload duration.

 

1. Reset wait stats:

dbcc sqlperf('sys.dm_os_wait_stats', clear) --example provided by http://www.sqlworkshops.com

2. Apply the workload (you can find the sample workload query at the end of this article; you need to execute it simultaneously in many sessions to simulate concurrent user tasks).

3. Run the query below to find the additional CPUs necessary. It is important to run it right after the workload completes to get reliable information.

select round(((convert(float, ws.wait_time_ms) / ws.waiting_tasks_count) / (convert(float, si.os_quantum) / si.cpu_ticks_in_ms) * cpu_count), 2) as Additional_Sockets_Necessary
from sys.dm_os_wait_stats ws cross apply sys.dm_os_sys_info si where ws.wait_type = 'SOS_SCHEDULER_YIELD'

--example provided by http://www.sqlworkshops.com

The SQL Server 2008 query:

select round(((convert(float, ws.wait_time_ms) / ws.waiting_tasks_count) / si.os_quantum * scheduler_count), 2) as Additional_Sockets_Necessary
from sys.dm_os_wait_stats ws cross apply sys.dm_os_sys_info si where ws.wait_type = 'SOS_SCHEDULER_YIELD'

--example provided by http://www.sqlworkshops.com
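
A quick worked example of the formula, with made-up numbers purely for illustration: if SOS_SCHEDULER_YIELD shows wait_time_ms = 800,000 across waiting_tasks_count = 100,000 tasks, the average yield wait is 8 ms. With an OS quantum of 4 ms and cpu_count = 4, the estimate is (8 / 4) * 4 = 8, i.e. roughly eight additional CPUs would be needed to absorb the scheduler pressure for this workload.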

 

How do you confirm that SQL Server performance has degraded due to memory pressure?


Let’s take a look at sys.dm_os_ring_buffers to understand the scenario.

Running this query against the system:

SELECT * FROM sys.dm_os_ring_buffers AS dorb;

There are only four columns, but the final column is XML and that’s where the good information can be found. There are a number of different ring buffers but we’re only interested in the type called RING_BUFFER_RESOURCE_MONITOR. This records changes to memory allocations, which is a great way to see when you’re running out of memory because a message that your memory is low is a pretty good indicator that you might be seeing memory pressure.

The information presented in the MemoryNode and the MemoryRecord elements is useful for attempting to figure out what went wrong, but the key points are up in the ResourceMonitor element with the Notification, IndicatorsProcess, and IndicatorsSystem values. First, the notification tells us that this was a RESOURCE_MEMPHYSICAL_LOW message that was captured in the ring buffers. That means that physical memory was low. The next two indicators let us know what was low. If IndicatorsProcess returns 0 and IndicatorsSystem returns a value, then the problem was system-wide. But in our case IndicatorsProcess has a value and IndicatorsSystem is returning 0. This means that this alert was for a single process that suffered from low memory, not the entire system. The values break down as follows:

Value   Meaning
1       High Physical Memory
2       Low Physical Memory
4       Low Virtual Memory
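
Rather than reading the raw XML by eye, a minimal sketch for pulling just the RESOURCE_MONITOR entries and shredding the indicator values out of the record column (the column aliases are my own):

SELECT
    CAST(dorb.record AS xml).value('(//Notification)[1]', 'varchar(50)') AS Notification,
    CAST(dorb.record AS xml).value('(//IndicatorsProcess)[1]', 'int') AS IndicatorsProcess,
    CAST(dorb.record AS xml).value('(//IndicatorsSystem)[1]', 'int') AS IndicatorsSystem,
    dorb.timestamp
FROM sys.dm_os_ring_buffers AS dorb
WHERE dorb.ring_buffer_type = N'RING_BUFFER_RESOURCE_MONITOR';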

Database statistics update issue along with a resolution


In our environment we execute an sp_updatestats job daily at 5 AM, but we still faced a severe performance issue due to inaccurate statistics; as a result, queries were unable to use the optimal execution plan.

Finding the state of your statistics

 

Up-to-date statistics help ensure that the most appropriate index is chosen to obtain the underlying data. They also help ensure that the correct index access mechanism is chosen, for example, seek or lookup. Running the SQL script given in the following listing will identify the current state of your statistics.

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED

SELECT
    ss.name AS SchemaName
    , st.name AS TableName
    , s.name AS IndexName
    , STATS_DATE(s.id, s.indid) AS 'Statistics Last Updated'
    , s.rowcnt AS 'Row Count'
    , s.rowmodctr AS 'Number Of Changes'
    , CAST((CAST(s.rowmodctr AS DECIMAL(28,8)) / CAST(s.rowcnt AS DECIMAL(28,2)) * 100.0)
        AS DECIMAL(28,2)) AS '% Rows Changed'
FROM sys.sysindexes s
INNER JOIN sys.tables st ON st.[object_id] = s.[id]
INNER JOIN sys.schemas ss ON ss.[schema_id] = st.[schema_id]
WHERE s.id > 100
    AND s.indid > 0
    AND s.rowcnt >= 500
ORDER BY SchemaName, TableName, IndexName

Reason:

The system stored procedure sp_updatestats does not provide the ability to update statistics with the FULLSCAN option. When we are faced with an urgent performance issue, updating statistics WITH FULLSCAN can be exactly what SQL Server needs to find the optimal execution plan.

Workaround:

We can run EXEC sp_MSforeachtable 'UPDATE STATISTICS ? WITH FULLSCAN' at the specific database level to update the statistics fully, or we can schedule an index rebuild job on a daily basis if the business owner permits it and it does not conflict with any other job; a short example follows.
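
A minimal sketch of the workaround (sp_MSforeachtable is an undocumented but widely used system procedure; the database and table names here are placeholders):

USE YourDatabase; -- placeholder database name
GO
-- Update statistics on every user table with a full scan
EXEC sp_MSforeachtable 'UPDATE STATISTICS ? WITH FULLSCAN';
GO
-- Or target a single problem table directly
UPDATE STATISTICS dbo.YourLargeTable WITH FULLSCAN; -- placeholder table name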

Troubleshooting a SQL Server performance issue due to user-defined scalar functions


We faced a strange issue on a database server where it took 20 hours to upload 1 million rows. The issue was not related to database locking or resource consumption; the server has 16 CPUs and 32 GB of RAM. To fix this kind of issue, first look for an inefficient query plan. When generating an execution plan for a query, the SQL Server optimizer attempts to choose the plan that provides the fastest response time. The most significant input the optimizer uses in evaluating the cost of alternative plans is the cardinality estimate for each operator, which you can see in the showplan output (the EstimateRows and EstimateExecutions attributes).

Using the steps below, I was able to figure out that the batch upload needed to use some other process instead of a user-defined scalar function. When a scalar UDF is used in the select list, it is invoked while creating the results of the query to pass to the next stage of query processing. It is this row-by-row processing that slows SQL Server down the most. It causes no visible problem for a small number of rows, but the penalty shows up as poor query execution time when a query applies a UDF to a large number of rows, typically 1,000 or more. No matter how simple the UDF, a large performance penalty is paid when it is used, because the database engine must set up its own internal cursor-like processing and invoke the UDF once per row, and it can turn a Seek into a Scan.
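
As an illustration of the penalty, a minimal sketch contrasting a scalar UDF with an equivalent inline expression (the function, table, and column names are hypothetical):

-- Hypothetical scalar UDF: invoked once per row of the query below
CREATE FUNCTION dbo.fn_NetAmount (@amount money, @discount decimal(5,2))
RETURNS money
AS
BEGIN
    RETURN @amount * (1.0 - @discount / 100.0);
END;
GO

-- Slow: row-by-row UDF invocation for every row in Sales
SELECT SaleID, dbo.fn_NetAmount(Amount, Discount) AS NetAmount
FROM dbo.Sales;

-- Faster: the same logic inlined as a set-based expression
SELECT SaleID, Amount * (1.0 - Discount / 100.0) AS NetAmount
FROM dbo.Sales;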

Step by step to figure out the inefficient query plan:


A DBA can use Profiler for this, but the steps below are more convenient.

 

Step 1:  The query against sys.dm_exec_query_stats is an efficient way to determine which query is using the most cumulative CPU.

select
    highest_cpu_queries.plan_handle,
    highest_cpu_queries.total_worker_time,
    q.dbid,
    q.objectid,
    q.number,
    q.encrypted,
    q.[text]
from
    (select top 50
        qs.plan_handle,
        qs.total_worker_time
    from sys.dm_exec_query_stats qs
    order by qs.total_worker_time desc) as highest_cpu_queries
cross apply sys.dm_exec_sql_text(plan_handle) as q
order by highest_cpu_queries.total_worker_time desc

 

Step 2: We can use the following query to find the top user sessions that are allocating internal objects, including currently active tasks.

 

SELECT
    t1.session_id,
    (t1.internal_objects_alloc_page_count + task_alloc) as allocated,
    (t1.internal_objects_dealloc_page_count + task_dealloc) as deallocated
from sys.dm_db_session_space_usage as t1,
    (select session_id,
        sum(internal_objects_alloc_page_count) as task_alloc,
        sum(internal_objects_dealloc_page_count) as task_dealloc
    from sys.dm_db_task_space_usage group by session_id) as t2
where t1.session_id = t2.session_id and t1.session_id > 50
order by allocated DESC

 

Here is a sample output.

session_id allocated            deallocated
---------- -------------------- --------------------
52         5120                 5136
51         16                   0

 

Step 3: Once you have isolated the task or tasks that are generating a lot of internal object allocations, you can find the Transact-SQL statement and its query plan for a more detailed analysis.

 

select
    t1.session_id,
    t1.request_id,
    t1.task_alloc,
    t1.task_dealloc,
    t2.sql_handle,
    t2.statement_start_offset,
    t2.statement_end_offset,
    t2.plan_handle
from (select session_id,
        request_id,
        sum(internal_objects_alloc_page_count) as task_alloc,
        sum(internal_objects_dealloc_page_count) as task_dealloc
    from sys.dm_db_task_space_usage
    group by session_id, request_id) as t1,
    sys.dm_exec_requests as t2
where t1.session_id = t2.session_id and
    (t1.request_id = t2.request_id)
order by t1.task_alloc DESC

 

Here is a sample output.

session_id request_id  task_alloc           task_dealloc
---------------------------------------------------------
52         0           1024                 1024

sql_handle                                          statement_start_offset
---------------------------------------------------------------------------
0x02000000D490961BDD2A8BE3B0FB81ED67655EFEEB360172  356

statement_end_offset  plan_handle
---------------------------------
-1                    0x06000500D490961BA8C19503000000000000000000000000

 

You can use the sql_handle and plan_handle to get the SQL statement and the query plan as follows:

 

DECLARE @sql_handle varbinary(64) = 0x02000000D490961BDD2A8BE3B0FB81ED67655EFEEB360172 -- value from the output above
DECLARE @plan_handle varbinary(64) = 0x06000500D490961BA8C19503000000000000000000000000
select text from sys.dm_exec_sql_text(@sql_handle)
select * from sys.dm_exec_query_plan(@plan_handle)

Memory settings in a SQL Server 2005/2008 64-bit environment when SSIS and SSAS reside on the same server


Setting memory correctly is important to avoid memory pressure. Per the Microsoft recommendation, on a dedicated x64 SQL Server database server running only the database engine, we would set max server memory to 20 GB if the total RAM is 24 GB. If SSIS and SSAS reside on the same database server, we have to rethink the max memory parameter. Although SSIS and SSAS form part of the SQL Server stack, they have completely different memory architectures, so we have to reduce SQL Server's max memory setting to ensure SSIS and SSAS have enough room, and monitor how high their memory usage goes under peak load.

In my experience on servers where we had to run SSAS/SSIS alongside SQL Server, the SSIS/SSAS processes peak at a certain level depending on the workload thrown at them, and we would adjust SQL Server's memory settings accordingly, or simply add memory if the peak usage of the SSIS/SSAS processes took up too many resources. I would also highly recommend adjusting SQL Server's max degree of parallelism setting; otherwise SQL Server will saturate the CPUs and deny SSIS/SSAS CPU time whenever, for example, data loading or transformation is taking place.

 

In this kind of environment, if the total RAM is 24 GB, we would set max server memory to 16 GB and leave the remaining 8 GB for the operating system, SSIS, and SSAS; otherwise SSIS and SSAS end up competing for the memory occupied by sqlservr.exe. A sketch of the corresponding settings follows.
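
A minimal sketch of these settings (the 16 GB cap and the MAXDOP value are examples for the 24 GB scenario above, not universal recommendations):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Cap the buffer pool so SSIS/SSAS have headroom (value in MB)
EXEC sp_configure 'max server memory (MB)', 16384;
RECONFIGURE;
-- Limit parallelism so SQL Server does not saturate every CPU
EXEC sp_configure 'max degree of parallelism', 4;
RECONFIGURE;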

Microsoft SQL Server – TempDB usage per active session


The script below lists the TempDB usage for each active session.
It helps identify the sessions that use tempdb heavily with internal objects.

When the internal objects usage is high, the session is probably using big hash tables or spooling in worktables. It could be a symptom of an inefficient plan or a missing index.

Shrinking a TempDB full of internal objects will probably have no effect, because the engine will not release the deallocated space. The only alternatives are restarting the service or killing the offending SPID after receiving approval from the business owner.

It helps identify the root cause of rapid TempDB growth.

Script: I have already tested this in our Citagus test environment and it works as expected.

;WITH task_space_usage AS (
    -- SUM alloc/dealloc pages
    SELECT session_id,
        request_id,
        SUM(internal_objects_alloc_page_count) AS alloc_pages,
        SUM(internal_objects_dealloc_page_count) AS dealloc_pages
    FROM sys.dm_db_task_space_usage WITH (NOLOCK)
    WHERE session_id <> @@SPID
    GROUP BY session_id, request_id
)
SELECT TSU.session_id,
    TSU.alloc_pages * 1.0 / 128 AS [internal object MB space],
    TSU.dealloc_pages * 1.0 / 128 AS [internal object dealloc MB space],
    EST.text,
    -- Extract statement from sql text
    ISNULL(
        NULLIF(
            SUBSTRING(
                EST.text,
                ERQ.statement_start_offset / 2,
                CASE WHEN ERQ.statement_end_offset < ERQ.statement_start_offset THEN 0 ELSE (ERQ.statement_end_offset - ERQ.statement_start_offset) / 2 END
            ), ''
        ), EST.text
    ) AS [statement text],
    EQP.query_plan
FROM task_space_usage AS TSU
INNER JOIN sys.dm_exec_requests ERQ WITH (NOLOCK)
    ON TSU.session_id = ERQ.session_id
    AND TSU.request_id = ERQ.request_id
OUTER APPLY sys.dm_exec_sql_text(ERQ.sql_handle) AS EST
OUTER APPLY sys.dm_exec_query_plan(ERQ.plan_handle) AS EQP
WHERE EST.text IS NOT NULL OR EQP.query_plan IS NOT NULL
ORDER BY 3 DESC, 5 DESC

By: Rajib Kundu


Should I take the backup with or without compression? Which option is best?

In our day-to-day work, business owners frequently ask this question.

Explanation:

File Size Summary
The SQL 2008 Backup without compression was 3.5GB
The SQL 2008 Backup with compression was 986MB. The reduced space is 2.5GB; the compressed file is only 28% of the original size!

Note: Not all databases will have this compression factor or CPU usage due to systems and data types etc.

The SQL 2008 Backup without compression took approx 6 minutes and the SQL 2008 Backup with compression took approx 3 minutes. You might be asking – how is that possible?

Ans: The CPU is used to compress the data before it is written to disk, so LESS data is written to disk. That is why the backup job takes less time with the compression feature. A short example of both variants follows.
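
A minimal sketch of both backup variants (the database name and backup paths are placeholders):

-- Uncompressed backup
BACKUP DATABASE YourDatabase
TO DISK = N'D:\Backup\YourDatabase_full.bak'
WITH INIT, STATS = 10;

-- Compressed backup (requires SQL Server 2008 Enterprise, or Standard from 2008 R2 onward)
BACKUP DATABASE YourDatabase
TO DISK = N'D:\Backup\YourDatabase_full_compressed.bak'
WITH INIT, COMPRESSION, STATS = 10;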