Resolution for Microsoft OLE DB Provider for SQL Server: Creating or altering table ‘Test’ failed because the minimum row size would be 8204, including 255 bytes of internal overhead. This exceeds the maximum allowable table row size of 8060 bytes.


We faced this issue a few weeks back; the workaround we followed is described below.

Workaround: As the error message clearly indicates, we are trying to create a table whose FIXED-length data columns add up to more than 8060 bytes per row. Row-overflow data, that is, rows in excess of 8060 bytes, is only possible with VARIABLE-length data types.

If we change our table syntax to the following, it works without issue:

CREATE TABLE dbo.Test (x char(60) NOT NULL, y varchar(8000) NOT NULL)
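For contrast, a minimal repro of the failing case. The original column list behind the 8204-byte figure is not shown, so the widths below are an assumption chosen to push the fixed-length row past 8060 bytes:

-- Fails: the fixed-length columns alone exceed the 8060-byte in-row limit
CREATE TABLE dbo.Test (x char(60) NOT NULL, y char(8000) NOT NULL)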

Some important facts:

  • A table row can hold more than 8060 bytes (up to 2 GB).
  • varchar, nvarchar, varbinary, sql_variant, and CLR user-defined type columns can each hold a maximum of 8000 bytes.
  • varchar(max), nvarchar(max), varbinary(max), text, image, and xml data type columns have no such restriction.
  • All other data type columns (those not covered by the three points above) must together still fit within the 8060-byte row limit.
  • An index can only be created on columns that fall within the 8060-byte row limit.

Note: The length of individual columns must still fall within the limit of 8,000 bytes for varchar, nvarchar, varbinary, sql_variant, and CLR user-defined type columns. Only their combined lengths can exceed the 8,060-byte row limit of a table.

How do I know which advanced features are used in a database on SQL Server 2008 Enterprise Edition

 

As we know, all features are available in Enterprise Edition, but not all of them are available in Standard Edition. If a business requirement means we have to move a database from Enterprise to Standard, it is important to know which Enterprise features have been enabled. To retrieve details of which SQL Server Enterprise features are in use, issue the simple SQL statement given here:

 

SELECT * FROM sys.dm_db_persisted_sku_features

(This DMV is available from SQL Server 2008 onward and must be run in the context of the database being checked.)

 

Features available in Enterprise Edition that are not available in Standard Edition:

 

Compression – Indicates at least one table or index uses data compression.

Partitioning – Indicates the database contains partitioned tables, indexes, schemes, or functions.

Transparent Data Encryption – Indicates the database has been encrypted with transparent data encryption.

Change Capture – Indicates the database has change data capture enabled.

 

If even a single table in the database is compressed, the query indicates it.

 

Output:

feature_name                   feature_id
Compression                    100
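To run the same check across every database on the instance in one pass, one option is the undocumented sp_MSforeachdb procedure; a minimal sketch, assuming that procedure is available on your build:

EXEC sp_MSforeachdb 'USE [?];
SELECT DB_NAME() AS database_name, feature_name, feature_id
FROM sys.dm_db_persisted_sku_features;'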

Rebuild job failure in a SQL Server 2008 R2 environment: incorrect SET option ‘QUOTED_IDENTIFIER’


We received the error below:

 

Error:

 

ALTER INDEX failed because the following SET options have incorrect settings: ‘QUOTED_IDENTIFIER’. Verify that SET options are correct for use with indexed views and/or indexes on computed columns and/or filtered indexes and/or query notifications and/or XML data type methods and/or spatial index operations. [SQLSTATE 42000] (Error 1934).  The step failed.

 

 

Resolution:

 

Paste the following three lines at the top of the rebuild job step:

 

SET QUOTED_IDENTIFIER ON

GO

SET ANSI_NULLS ON

GO

SET ARITHABORT ON
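Combined with the rebuild command, the job step would look something like this sketch (dbo.SampleTable is a placeholder for the actual table being rebuilt):

SET QUOTED_IDENTIFIER ON
GO
SET ANSI_NULLS ON
GO
SET ARITHABORT ON
GO
-- Placeholder rebuild command; substitute the job's real ALTER INDEX statement
ALTER INDEX ALL ON dbo.SampleTable REBUILD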

 

 

Explanation:

QUOTED_IDENTIFIER and ANSI_NULLS are two widely used SET options in SQL Server. Most developers explicitly set these options when creating stored procedures, triggers, and user-defined functions, but many are unclear on why we need to SET them explicitly, and why they are special compared to other options.

The default behavior is ON in any database. Per BOL 2008, ANSI_NULLS will always be set to ON in future releases of SQL Server, and any explicit SET to OFF will result in an error, so avoid explicitly turning this option OFF in future development work.

These two SET options are special because whenever a stored procedure, trigger, or user-defined function is created or modified with these options explicitly SET, SQL Server remembers those settings in the associated object metadata. Every time the object (stored procedure, trigger, etc.) is executed, SQL Server uses the stored settings irrespective of the current user session settings. So the behavior of the stored procedure is not altered by the calling session's settings, and the SET option behavior inside the procedure is always guaranteed.

We can get the settings of any procedure, trigger, or function for these options from the sys.sql_modules catalog view.

SELECT uses_ansi_nulls, uses_quoted_identifier  FROM sys.sql_modules WHERE object_id = object_id('SampleProcedure')

Some lesser-known facts about log shipping


We have verified these lesser-known behaviors in our test environment and share the details below.

 

Can a database be mirrored and log shipped simultaneously?

 

Yes, a database can be simultaneously mirrored and log shipped. Typically, when combining log shipping and database mirroring, the mirroring session is established before log shipping, although this is not required. The current principal database is then configured as the log shipping primary (the principal/primary database), along with one or more remote secondary databases. The mirror database must also be configured as a log shipping primary (the mirror/primary database). The log shipping secondary databases should be on different server instances than either the principal/primary server or the mirror/primary server.

 

Setting Up Mirroring and Log Shipping Together

  1. Restore backups of the principal/primary database WITH NORECOVERY onto another server instance, to be used later as the database mirroring mirror database for the principal/primary database.
  2. Set up database mirroring.
  3. Restore backups of the principal/primary database onto other server instances, to be used later as log shipping secondary databases for the primary database.
  4. Set up log shipping on the principal database as the primary database for one or more secondary databases.

You need to set up a single share as the backup directory (a backup share). This ensures that after a role switch between the principal and mirror servers, backup jobs continue to write to the same directory as before. A best practice is to ensure that this share is located on a different physical server from the servers hosting the databases involved in mirroring and log shipping.

Manually failover from the principal to the mirror.

Steps to manually fail over a database mirroring session:

  1. Connect to the principal server.
  2. Set the database context to the master database:

USE master;

  3. Issue the following statement on the principal server, where database_name is the mirrored database:

ALTER DATABASE database_name SET PARTNER FAILOVER;

This initiates an immediate transition of the mirror server to the principal role.

On the former principal, clients are disconnected from the database and in-flight transactions are rolled back.

Set up log shipping on the new principal (previously mirror) as the primary database.

 

Is log shipping possible between different SQL Server versions?

 

SQL Server 2008 → 2005: Log shipping is not possible, because a 2008 database cannot be restored on a 2005 server.

SQL Server 2005 → 2008: It is possible, but the secondary database can only stay in recovering mode; you cannot put it in read-only standby mode.

It is always recommended that the primary and secondary servers run the same version.

SQL Server stored procedure new features in SQL 2008–2012


Stored procedures today:

One of the biggest stored-procedure improvements in SQL Server 2008 and 2008 R2 is table-valued parameters. This parameter type groups multiple rows of data into sets and then sends them all at once to the database server, reducing round trips between client and server, and it doesn’t create temporary tables or numerous parameters. Table-valued parameters don’t require any locking to initially populate client data. They also enable the client to specify sort order.

The table-valued structure is easy to understand and use, yet it can accommodate complex business logic. You use table-valued parameters the same way you use classic stored-procedure parameters. First, declare a user-defined table type and create a stored procedure with the input parameter of that table type. Next, declare a variable of the table type and reference it. Use an INSERT statement to populate the table variable instead of a SET or SELECT statement. Finally, plug the filled table variable into the stored procedure as an input parameter.
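A minimal sketch of that sequence; the type, procedure, and table names here are illustrative assumptions, and dbo.OrderLine is a hypothetical target table:

-- 1. Declare a user-defined table type
CREATE TYPE dbo.OrderLineType AS TABLE
(
    ProductId int NOT NULL,
    Quantity  int NOT NULL
);
GO

-- 2. Create a stored procedure whose input parameter is of that table type
--    (table-valued parameters must be declared READONLY)
CREATE PROCEDURE dbo.InsertOrderLines
    @Lines dbo.OrderLineType READONLY
AS
BEGIN
    INSERT INTO dbo.OrderLine (ProductId, Quantity)
    SELECT ProductId, Quantity FROM @Lines;
END
GO

-- 3. Declare a variable of the table type, populate it with INSERT,
--    and plug it into the procedure as an input parameter
DECLARE @NewLines dbo.OrderLineType;
INSERT INTO @NewLines (ProductId, Quantity)
VALUES (101, 2), (205, 1);

EXEC dbo.InsertOrderLines @Lines = @NewLines;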

SQL Server 2008 also introduced the MERGE statement, which allows multiple data manipulation language (DML) operations in a single T-SQL statement. With the appropriate indexes on the joined tables, you'll gain increased query performance, because a single MERGE statement reduces the number of times the data in the source and target tables is processed. In SQL Server 2005 and earlier versions, each INSERT, DELETE, and UPDATE statement had to process the data one time. MERGE is useful, say, when you are synchronizing two tables.
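A short sketch of such a synchronization; the table and column names are assumptions for illustration:

MERGE dbo.TargetProduct AS t
USING dbo.SourceProduct AS s
    ON t.ProductId = s.ProductId
WHEN MATCHED THEN
    UPDATE SET t.Name = s.Name, t.Price = s.Price
WHEN NOT MATCHED BY TARGET THEN
    INSERT (ProductId, Name, Price)
    VALUES (s.ProductId, s.Name, s.Price)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;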

In SQL Server 2008 and 2008 R2 stored procedures, grouping operations have been enhanced by a new GROUPING SETS syntax. A simple GROUP BY clause returns one row for each combination of all column values, whereas GROUPING SETS returns one row for each unique value in each column. Without this new feature, the latter would take multiple GROUP BY statements for each column combined in a UNION structure. That would inevitably require more resources, like disk I/O operations, memory and runtime.
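For example, a hedged sketch against a hypothetical dbo.Sales table:

-- One result row per distinct Region and one per distinct [Year],
-- instead of one row per (Region, Year) combination
SELECT Region, [Year], SUM(Amount) AS Total
FROM dbo.Sales
GROUP BY GROUPING SETS ((Region), ([Year]));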

SQL Server 2008 and 2008 R2 also offer a new row constructor to insert multiple rows in a single INSERT statement in stored-procedure programming. Database administrators may already be familiar with this syntax: INSERT INTO TABLE (COLa, COLb, COLc) VALUES (V1a, V1b, V1c), (V2a, V2b, V2c), ... (V1000a, V1000b, V1000c). Here the number 1000 indicates the maximum number of rows allowed in one INSERT statement. The new INSERT syntax is cleaner because it removes duplicate strings, and it allows for faster round trips to and from the server by reducing network traffic.
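A concrete miniature against a hypothetical table:

INSERT INTO dbo.Color (ColorId, Name)
VALUES (1, 'Red'), (2, 'Green'), (3, 'Blue');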

In my triple-trial test, I ran 10 batches of 1,000 new-style inserts per batch against 10,000 individual inserts in parallel. Each batch counted as one round trip; compared with the old-school INSERTs, the simplified row constructor cut the number of transactions and bytes sent from the server 1,000-fold, reduced the amount of client-sent data by a factor of three, and cut client processing time by a factor of 300. Total execution time was reduced by more than 50%.

SQL Server 2008/2008 R2 Upgrades and Migration


Business Requirements/Needs

It’s widely recognized that database sizes are growing significantly, driven by many factors, such as companies requiring more data to be available online for longer and an increasing amount of data being digitized for storage. SQL Server 2008 R2 has emerged as a data platform for large-scale industries, storing and managing several terabytes of data efficiently in a variety of formats, including XML, e-mail, time/calendar, file, document, geospatial, and so on. SQL Server 2008 R2 is very capable of handling this data explosion while providing a rich set of services for interacting with the data: search, query, data analysis, reporting, data integration, and robust synchronization.

Our goal is to migrate 100 SQL Server 2005 Enterprise Edition servers to SQL Server 2008 R2 and consolidate the existing servers onto a few machines in a virtual environment for easier management and maintenance. So there are two things we want to accomplish:

1. Upgrade to SQL Server 2008 R2

2. Consolidate the existing servers in a VMware environment

Key Challenges:

We need to define the criteria that determine the success of our database upgrade. To prepare the success criteria, we should review each phase and step of the overall database upgrade plan and ask ourselves several questions. The following questions will help us understand what we need to do to declare that the phase or step is successful:

How can I measure whether this step is successful?

How can I test that measurement?

How can I compare my test results against what they would have been in the old database?

Although creating an upgrade plan reduces the likelihood of problems occurring during the upgrade process, problems that prevent the upgrade from completing do arise. Most organizations rely heavily on the data contained in their databases, and having that data unavailable due to an upgrade might disrupt business operations and even have financial implications. We follow a proper recovery plan for each phase and step of the upgrade process to help minimize data loss and reduce the time that data might be unavailable.

Project Execution:

We follow the two methods below to upgrade servers to the SQL Server 2008 and 2008 R2 environment.

Methods to Upgrade

There are two methods to upgrade from SQL Server 2000(SP4)/ SQL Server 2005(SP2) to SQL Server 2008.

In-place Upgrade – Organizations that do not have resources available to host multiple database environments commonly use an in-place upgrade. An in-place upgrade overwrites a previous installation of SQL Server 2000 or 2005 with an installation of SQL Server 2008. In other words, SQL Server 2008 R2 gives us the ability to automatically upgrade an instance of SQL Server 2000 or 2005 to SQL Server 2008. It is called an in-place upgrade because the target instance of SQL Server 2000 or 2005 is actually replaced with a SQL Server 2008 instance. We do not have to worry about copying data from the old instance to the new instance, as the old data files are automatically converted to the new format. This upgrade method is the easiest way to upgrade a database to the newer version.

Side-by-Side Upgrade – Database environments that have additional server resources can perform a side-by-side migration of their SQL Server 2000 or 2005 installations to SQL Server 2008. In this upgrade method, a new instance is created on the same server or on a new server, and the new instance runs in parallel to the old legacy database, which is left untouched. The old legacy database therefore remains available and online for the application. Having the old environment still active during the upgrade process allows continuous operation of the original database environment while we install and test the upgraded environment. Side-by-side migrations can often minimize the amount of downtime for SQL Server.

A side-by-side migration does not overwrite the SQL Server files of the current installation, nor does it move the databases to the new SQL Server 2008 installation. After a side-by-side installation, we/DBAs need to manually move the databases and other supporting objects (jobs, DTS/SSIS packages, etc.) to the new SQL Server 2008 installation.

Checklist:

The following tips can help us perform a secure and successful upgrade:

Create a series of checklists: DBAs and developers should prepare a series of checklists that need to be performed before, during, and after a database upgrade.

Back up all important files: Back up all SQL Server database files from the instance to be upgraded, as well as any application files, script files, extract files, and so on so that you can completely restore them if necessary.

Ensure database consistency: Run DBCC CHECKDB on databases to be upgraded to ensure that they are in a consistent state before performing the upgrade.
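For instance, a pre-upgrade consistency check might look like this (the database name is a placeholder):

DBCC CHECKDB ('SalesDB') WITH NO_INFOMSGS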

Reserve enough disk space: Estimate the disk space required to upgrade SQL Server components, user databases, and any database files that might need to be created during the upgrade process. We might need two to four times the amount of disk space during the upgrade process as we will need after the upgrade is finished.

Ensure space for system databases: Configure system databases (master, model, msdb, and tempdb) to auto grow during the upgrade process, and make sure that they have enough disk space for this growth.

Transfer login information: Ensure that all database servers have login information in the master database before upgrading the database. This step is important for restoring a database because system login information resides in the master database and must be re-created in the new instance.

Disable all startup stored procedures: The upgrade process will usually stop and start services multiple times on the SQL Server instance being upgraded. Stored procedures set to execute on startup might block the upgrade process.

Stop replication: Stop replication and make sure that the replication log is empty for starting the upgrade process.

Quit all applications: Certain applications, including all services with SQL Server dependencies, might cause the upgrade process to fail if local applications are connected to the instance being upgraded.

Register your servers after the upgrade: The upgrade process removes registry settings for the previous SQL Server instance. After upgrading, we must reregister our servers.

Repopulate full-text catalogs: The upgrade process marks your databases as full-text disabled. Catalogs must be repopulated, but Setup doesn’t run this operation automatically because it can be time-consuming. Because this operation enhances the performance of your SQL Server 2008 installation, we should plan to repopulate full-text catalogs at a convenient time after the upgrade.(Full Text Search has been integrated in SQL Server 2008, please refer to the section given above for more details)

Update statistics: To help optimize query performance, update statistics on all databases following the upgrade.

Update usage counters: In earlier versions of SQL Server, the values for the table and index row counts and page counts can become incorrect. To correct any invalid row or page counts, run DBCC UPDATEUSAGE on all databases following the upgrade.

Migration Plan

Before upgrade

  • Review de-supported (removed) features (for example, Notification Services and backup WITH TRUNCATE_ONLY are no longer available in SQL Server 2008).
  • Document existing system.
  • Develop validation tests / performance benchmarks and capture baseline data
  • Formulate and test rollback plan
  • As discussed above, we should run upgrade advisor for all databases (including system databases) and fix any issue/blocker for upgrade.
  • Take proper down time before starting the upgrade. The outage required would depend upon the size of the databases and the environment in which database servers are hosted.
  • Once all the issues and blockers are solved, and all applications are closed, take complete backup of all the databases (including master, model and msdb).
  • Also take a transaction log backup of all user databases, and then disable all jobs, including the jobs that take full backups, defrag jobs, and other maintenance plans.
  • It is also recommended to take an image of your database server; this can be very handy in case you have to roll back.
  • If the operating system of the server is Windows 2003, make sure that Windows Service Pack 1 is installed. This needs to be done on all nodes of the cluster if the database server is in a failover cluster environment.
  • We can save upgrade time by installing a few prerequisites on the database server beforehand: the .NET 2.0 Framework and SQL Native Client. If the servers are clustered, install these on each node.
  • Get ready with the new SQL 2008 DVD and the edition you want to install.
  • Make sure that there is enough space in the drive where SQL Server 2000/2005 is installed in case of in-place upgrade.
  • Full Text indexes applied on tables in SQL 2000/2005 are not compatible with SQL Server 2008. Refer to Full Text Integration section in this document for more details.

During upgrade

  • Make sure all the points from the “Before upgrade” section are taken care of.
  • Start the installation by running Setup.exe from the SQL Server 2008 DVD.
  • The setup program is designed to upgrade the database binaries automatically. It creates a separate folder, “100”, to contain the new binaries; the “100” folder reflects the compatibility level of the database. On a cluster, setup also upgrades the other nodes automatically, but only the database engine, not other components such as Reporting Services or Analysis Services. This is very important: you will have to install the other components separately on each node.
  • You can see the progress of the installation on the installation screen; if the installation is done on a cluster, setup will populate all the nodes in a drop-down list.
  • Once all the nodes are upgraded with the database engine and all components are installed individually on the servers, install the latest SQL Server service pack (if there is any). This needs to be done on all the cluster nodes.
  • A reboot may be required to complete the installation due to open file locks during the installation process. If the installation log does not contain any 3010 error, then a reboot is not required.

After upgrade

  • Review all logs
  • Revisit Upgrade Advisor recommendations
  • Update statistics to ensure performance; for very large tables, sampled statistics may be sufficient.
  • Reconfigure Log Shipping
  • Verify Agent jobs and maintenance tasks
  • Verify security settings, especially cross server and/or cross-domain access privileges
  • Recreate all the full text indexes removed from the databases.
  • Re-populate Full-Text catalogs
  • Update the statistics for all the databases. This step may take time depending on the size of database.
  • Update usage counters: in earlier versions of SQL Server, the values for the table and index row counts and page counts can become incorrect. To correct any invalid row or page counts, run DBCC UPDATEUSAGE on all databases following the upgrade.
  • After initially restoring SQL Server 2000/2005 databases to SQL Server 2008, the compatibility level will still be 80/90, which corresponds to SQL Server 2000/2005 respectively. To take advantage of all the new features of SQL Server 2008, we must change the compatibility level to 100, which corresponds to SQL Server 2008 (see the sketch after this list).
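A minimal post-upgrade sketch covering those last items, assuming a hypothetical database named SalesDB:

-- Correct row and page counts carried over from the old version
DBCC UPDATEUSAGE ('SalesDB')

-- Refresh optimizer statistics after the upgrade
USE SalesDB
GO
EXEC sp_updatestats

-- Unlock SQL Server 2008 features
ALTER DATABASE SalesDB SET COMPATIBILITY_LEVEL = 100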

Monitoring multiple jobs from one central location: SQL Server 2008 R2 Central Management Server


Good database administration is essential to protecting critical business data and ensuring business applications operate efficiently. As database administrators we have to perform many different tasks, many of them reactive and some proactive; the challenge is achieving the correct balance rather than being driven by the reactive tasks.

In particular, we monitor multiple jobs from one central location and receive a single e-mail each day with the backup and daily job status for 250 production servers, and we have implemented the same for our existing client. This also covers backup and recovery, to ensure business continuity in the event of loss or corruption, as well as ongoing availability by means of replication or some other mechanism. Automation of administration can typically save 15% or more in costs through reduced administration effort, relieving DBAs from trivial and routine tasks.

To achieve this, we use a SQL Server 2008 R2 Central Management Server to manage the 250 remote servers as if they were a single fleet.

Set Up a Central Management Server:

To configure it, open SSMS 2008 and go into the Registered Servers window. Right-click on Central Management Servers and you get options to set one up. From there, it’s basically the same as your local registered server list – only it’s centralized:

There we can create a group, such as Production Servers, and register all the servers under it:

Once the servers are registered, you can click on the production servers folder and open a new query window.

Execute the script below to get the job report for all of the registered servers.

USE msdb
GO
SET NOCOUNT ON

-- Capture the most recent history row for each job
SELECT job_id, MAX(instance_id) AS 'Max_ID'
INTO #tempInstance
FROM sysjobhistory
GROUP BY job_id

SELECT CAST(@@SERVERNAME AS varchar(20)) AS 'server',
       j.name AS 'Job_Name',
       CASE js.run_status
            WHEN 0 THEN 'Failed'
            WHEN 1 THEN 'Successful'
            WHEN 3 THEN 'Cancelled'
            WHEN 4 THEN 'In Progress'
       END AS 'Job_Status',
       js.run_date,
       sjs.next_run_date
FROM sysjobs j, sysjobhistory js, sysjobschedules sjs, #tempInstance t
WHERE j.job_id = js.job_id AND
      js.job_id = sjs.job_id AND
      j.job_id = t.job_id AND
      js.job_id = t.job_id AND
      js.instance_id = t.Max_ID AND
      js.run_status IN (0) AND  -- failed jobs only
      j.enabled = 1 AND
      j.category_id = 0         -- local (non-multiserver) jobs

DROP TABLE #tempInstance

Output:

Auditing when someone outside the DBA team runs a DBCC command, and DELETEs issued against an important table, in SQL Server 2008


Auditing an instance of SQL Server or a SQL Server database involves tracking and logging events that occur on the system. Based on the information accumulated, we can track changes to the database, access to the database, and so on. In this article we audit whether someone outside the DBA team runs a DBCC command, and DELETEs issued against an important table.

While we are working with SQL Server 2008 auditing we need to keep four things in mind:

  1. SQL Server Audit
  2. Server Audit Specification (Events to capture on the Server Instance Level)
  3. Database Audit Specification (Events to capture on a specific database)
  4. Target (Where would be the events be logged)

SQL Server Audit

The SQL Server Audit object collects a single instance of server or database-level actions and groups of actions to monitor. The audit is at the SQL Server instance level. You can have multiple audits per SQL Server instance. When you define an audit, you specify the location for the output of the result. This is the audit destination. The audit is created in a disabled state, and does not automatically audit any actions. After the audit is enabled, the audit destination receives data from the audit.

Server Audit Specification

The Server Audit Specification object belongs to an audit. You can create one server audit specification per audit, because both are created at the SQL Server instance scope.

Database Audit Specification

The Database Audit Specification object also belongs to a SQL Server Audit. You can create one database audit specification per SQL Server database per audit. The database audit specification collects database-level audit actions raised by the Extended Events feature. You can add either audit action groups or audit events to a database audit specification.

 

Target

The results of an audit are sent to a target, which can be a file, the Windows Security event log, or the Windows Application event log. Logs must be reviewed and archived periodically to make sure that the target has sufficient space to write additional records. Writing to the Windows Security log requires the SQL Server service account to be added to the Generate security audits policy. By default, the Local System, Local Service, and Network Service accounts are part of this policy.

Steps to follow:

  • Audit DBCC usage.
  • Audit DELETEs issued against a table.
1. It is very easy to capture information about activities in the database through audits. We can create an audit and associate events to capture at the server and/or database level. Start with an audit that stores its information in the Application Log. Within the instance's Security node, expand to Audits.

Right-click Audits and add an Audit:
Name: Demo Audit
Audit Destination: Application Log
OK

Right-click the new Audit and ‘Enable’ it.

2. Just below the audit, add a server-level audit specification based on DBCC_GROUP, so we can determine who is running any DBCC command and which one. Add a New Server Audit Specification:
Name: DBCC Usage
Audit: Demo Audit
Uncheck Enabled (very important with CTP 6)
Audit Action Type: DBCC_GROUP
OK

Right-click the new specification and ‘Enable’ it.

3. Additionally, add a Database Audit Specification to AdventureWorks2008 to capture every DELETE attempted against Production.TransactionHistoryArchive. Add a New Database Audit Specification:
Name: Archive Deletes
Audit: Demo Audit
Uncheck Enabled (very important with CTP 6)
Audit Action Type: DELETE on Production.TransactionHistoryArchive
Principal: choose the ellipsis and check all users.
OK

Right-click the new specification and ‘Enable’ it.

4. Let's see what type of entries are created, first by clearing our procedure cache and then by deleting five records from the transaction history archive. Use the Audit.sql script:

DBCC FREEPROCCACHE

GO

USE AdventureWorks2008
GO

BEGIN TRAN

DELETE TOP (5) FROM Production.TransactionHistoryArchive

ROLLBACK

5. Open the Event Viewer and look at the results. Find the two latest Audit Success items and review them for the DBCC call and the deletion of data from the table.
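The same setup can also be scripted in T-SQL rather than clicked through SSMS; a hedged sketch mirroring the names used above:

USE master
GO
-- Server audit writing to the Application Log
CREATE SERVER AUDIT [Demo Audit] TO APPLICATION_LOG;
ALTER SERVER AUDIT [Demo Audit] WITH (STATE = ON);

-- Server-level specification: who runs which DBCC command
CREATE SERVER AUDIT SPECIFICATION [DBCC Usage]
FOR SERVER AUDIT [Demo Audit]
ADD (DBCC_GROUP)
WITH (STATE = ON);
GO

USE AdventureWorks2008
GO
-- Database-level specification: DELETEs against the archive table
CREATE DATABASE AUDIT SPECIFICATION [Archive Deletes]
FOR SERVER AUDIT [Demo Audit]
ADD (DELETE ON OBJECT::Production.TransactionHistoryArchive BY public)
WITH (STATE = ON);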

HOW WE CAN IMPROVE THE PERFORMANCE OF MICROSOFT DYNAMICS CRM 4.0 WITH MICROSOFT SQL SERVER 2008 R2


We upgraded 100 servers to SQL Server 2008 R2 last quarter for our existing client, because we evaluated that SQL Server 2008 R2 features can significantly improve the performance of a Microsoft Dynamics CRM 4.0 implementation while also yielding space savings.

In addition, deploying these features does not require invasive infrastructure changes, and the features are easily enabled and disabled.

Microsoft SQL Server 2008 R2 contains a variety of features that, when implemented properly, can improve the performance of a Microsoft Dynamics CRM 4.0 implementation.

These Microsoft SQL Server 2008 R2 features include:

  • Row Compression
  • Page Compression
  • Sparse Columns
  • Backup Compression

ROW COMPRESSION

ROW compression maps fixed-length data types to variable-length physical storage to save the space used to store the data; basically, it compresses the columns in each row. For example, a CHAR(100) column stored in a variable-length storage format only uses the amount of storage required by the actual data. With ROW compression enabled, storing “SQL Server 2008” in the column requires storing only 15 characters (not the full 100), a savings of 85%. Also, with ROW compression enabled, storing zero or NULL values requires no storage space. Microsoft SQL Server 2008 R2 supports ROW and PAGE compression for both tables and indexes.
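Enabling compression is a one-line rebuild per table or index; a sketch against a hypothetical table:

-- Rebuild with row compression
ALTER TABLE dbo.AccountBase REBUILD WITH (DATA_COMPRESSION = ROW)

-- Or with page compression, the superset described next
ALTER TABLE dbo.AccountBase REBUILD WITH (DATA_COMPRESSION = PAGE)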

PAGE COMPRESSION

A superset of ROW compression, PAGE compression takes into account the redundant data in one or more rows on a page to save the space used to store the data. PAGE compression uses column prefixes and a page-level dictionary technique for this task. In other words, with both page compression techniques, the storage engine reduces the amount of data that is repeated in the page.

SPARSE COLUMNS

Sparse columns are ordinary columns that have optimized storage for NULL values. Sparse columns reduce the space requirements for NULL values, at the cost of greater CPU overhead to retrieve non-NULL values. Sparse columns enable applications, such as Windows SharePoint Services, to efficiently store and access a large number of user-defined properties by using SQL Server 2008 R2.

INSERT, UPDATE, and DELETE statements can reference the sparse columns by name. Additionally, you can also view and work with all the sparse columns of a table that are combined into a single XML column. This column is called a column set.
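Declaring a column sparse is part of the table definition; a minimal sketch (the table and column names are illustrative):

CREATE TABLE dbo.CustomAttribute
(
    Id        int IDENTITY PRIMARY KEY,
    Name      varchar(100) NOT NULL,
    LegacyRef int SPARSE NULL  -- mostly-NULL column stored sparsely
)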

BACKUP COMPRESSION

Because a compressed backup is smaller than an uncompressed backup of the same data, compressing a backup typically reduces device I/O and therefore usually increases backup speed significantly.
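Turning it on is a single backup option; a sketch with placeholder names and paths:

BACKUP DATABASE SalesDB
TO DISK = 'D:\Backup\SalesDB.bak'
WITH COMPRESSION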

IMPORTANT: Creating compressed backups is supported only in SQL Server 2008 Enterprise Edition and later versions, but every edition of SQL Server 2008 and later can restore a compressed backup. Also, backup compression is not recommended for use with TDE. Based on our evaluation, we came up with the recommendations below.

IMPROVING PERFORMANCE BY USING SQL SERVER 2008 R2 COMPRESSION

Columns in Microsoft Dynamics CRM tables are typically sparsely populated, which our analysis of the customer database used in the testing efforts confirmed. Except for certain columns such as Account ID, Name, Phone Number and Email Address, the Account table was sparsely populated. Both row and page compression are very effective on Microsoft Dynamics CRM entity tables, which also contain a lot of binary columns and columns that specify pick list values.

RECOMMENDATIONS

When considering the use of SQL Server 2008 R2 Compression to improve the performance of a Microsoft Dynamics CRM 4.0 implementation, keep in mind the following recommendations:

  • Identify the largest tables in the Microsoft Dynamics CRM database and consult SQL Server best practices to pick the best candidates for compression.
  • Note: Microsoft Dynamics CRM tables (such as metadata tables) that are frequently accessed or continuously updated may not be good candidates for compression. An example is the PrincipleObjectAccess table, which manages the privileges for the Microsoft Dynamics CRM system. In these cases, the performance impact of managing these compressed tables may outweigh the space savings from compression.
  • Estimate the savings for each table by using the stored procedure sp_estimate_data_compression_savings (see the sketch after this list).
  • Enable Page Compression on tables with mostly static data.
  • Enable Row Compression on entity tables.
  • Verify the performance impact.
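A hedged usage sketch of that procedure (the schema and table names are assumptions):

-- Estimate the size of one table with and without PAGE compression
EXEC sp_estimate_data_compression_savings
     @schema_name      = 'dbo',
     @object_name      = 'AccountBase',
     @index_id         = NULL,
     @partition_number = NULL,
     @data_compression = 'PAGE'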

IMPORTANT: Compressing every table in the database may put a lot of load on the computer running SQL Server and compression should be limited to large tables as per SQL Server best practices.

IMPROVING PERFORMANCE BY USING SQL SERVER 2008 R2 SPARSE COLUMNS

As mentioned previously, columns in Microsoft Dynamics CRM tables typically are sparsely populated. The sparse columns feature in SQL 2008 is perfectly suited to Microsoft Dynamics CRM tables because it reduces the space required to store data in user-specified columns. In addition, by setting the columns as sparse, Microsoft Dynamics CRM administrators can optimize access to frequently accessed tables in which certain columns are rarely accessed and contain all or mostly NULL values.

RECOMMENDATIONS:

Designating a column as sparse is only useful when the column contains mostly NULL values; consider using sparse columns when the space saved is at least 20 to 40 percent, to strike a balance between space savings and additional CPU overhead. As you work to improve Microsoft Dynamics CRM 4.0 performance by using SQL Server 2008 R2 sparse columns, keep in mind the following recommendations:

  • Identify large tables in the Microsoft Dynamics CRM database.
  • Consult the SQL Server best practices documented in the article mentioned previously to select the columns to be marked as sparse; typically, use sparse storage for columns in which most values are NULL.
  • Note: Marking non-sparse columns as sparse will significantly increase the amount of space needed to store the data.
  • Mark the columns as sparse and rebuild the indexes on the table.
  • Verify the performance impact.

PERFORMING AND MAINTAINING BACKUPS MORE EFFICIENTLY BY USING SQL SERVER 2008 R2 BACKUP COMPRESSION

As expected, results confirmed that enabling Backup Compression yields significant savings in the time required to perform backups and in the space required for storing those backups.

RESULTS

MEASURE              UNCOMPRESSED    COMPRESSED    PERCENT CHANGE
BACKUP TIME (MIN)    9:03:09         2:38:51       -70.75
BACKUP SIZE (KB)     12,299,686      2,791,721     -77.03

NOTE: The size of the database used in this test was 15.5 gigabytes.

RECOMMENDATIONS

Enable backup compression to increase the efficiency of performing and maintaining backups. However, keep in mind that compression can significantly increase CPU usage and that the additional overhead might adversely impact concurrent operations. As a result, when using backup compression, be sure to verify that the overall system performs at desired levels. Additionally, consider creating low-priority compressed backups in a session whose CPU usage is limited by Resource Governor.