Database Mail Configuration script


I recently had to set up Database Mail on dozens of SQL Server instances. Rather than perform this tedious task through the SSMS GUI, I developed a script that saved me a lot of time, which I’m sharing here.

Setup Script

Below is the template script I used for my task. The sysmail_add_account_sp @username and @password parameters might be required, depending on your SMTP server’s authentication configuration.

-- Enable Database Mail for this instance
EXECUTE sp_configure 'show advanced options', 1;
RECONFIGURE;
EXECUTE sp_configure 'Database Mail XPs', 1;
RECONFIGURE;
GO

 

-- Create a Database Mail account
EXECUTE msdb.dbo.sysmail_add_account_sp
    @account_name = 'Primary Account',
    @description = 'Account used by all mail profiles.',
    @email_address = 'myaddress@mydomain.com',
    @replyto_address = 'myaddress@mydomain.com',
    @display_name = 'Database Mail',
    @mailserver_name = 'mail.mydomain.com';

-- Create a Database Mail profile
EXECUTE msdb.dbo.sysmail_add_profile_sp
    @profile_name = 'Default Public Profile',
    @description = 'Default public profile for all users';

-- Add the account to the profile
EXECUTE msdb.dbo.sysmail_add_profileaccount_sp
    @profile_name = 'Default Public Profile',
    @account_name = 'Primary Account',
    @sequence_number = 1;

-- Grant access to the profile to all msdb database users
EXECUTE msdb.dbo.sysmail_add_principalprofile_sp
    @profile_name = 'Default Public Profile',
    @principal_name = 'public',
    @is_default = 1;
GO

-- Send a test email
EXECUTE msdb.dbo.sp_send_dbmail
    @subject = 'Test Database Mail Message',
    @recipients = 'testaddress@mydomain.com',
    @query = 'SELECT @@SERVERNAME';
GO

Backup and Restore Command Status


When restoring or backing up a large database, it can be hard to give the business owner an ETA. The query below reports the percentage of completion along with an estimated completion time.

 

 

Script:

SELECT d.name,
       req.percent_complete,
       DATEADD(SECOND, req.estimated_completion_time / 1000, GETDATE()) AS estimated_completion_time,
       GETDATE() AS now,
       DATEDIFF(MINUTE, req.start_time, GETDATE()) AS running_minutes,
       req.estimated_completion_time / 1000 / 60 AS minutes_to_go,
       req.start_time,
       req.command
FROM sys.dm_exec_requests req
INNER JOIN sys.sysdatabases d ON d.dbid = req.database_id
WHERE req.command IN ('RESTORE DATABASE', 'BACKUP DATABASE', 'RESTORE LOG', 'BACKUP LOG');

Shrinking all databases’ log files at once


In day-to-day DBA work, “log file full” errors are common, and on a SQL instance hosting many databases it can be tedious to figure out which database’s log file is full. The impact can be severe, because a database can be marked suspect when it runs out of space. The command below truncates the inactive portion of the log file for every database on the instance in one pass. It does not break the log chain, so it can also be used on a replication server or configured as a SQL Agent job.

 

Shrink command for all databases:

EXECUTE sp_MSforeachdb 'USE [?]; DBCC SHRINKFILE(2, TRUNCATEONLY) WITH NO_INFOMSGS;';
-- File ID 2 is the transaction log in a database with the default single log file.
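To find out which database’s log is actually full before shrinking anything, a quick check with a standard DBCC command (no assumptions beyond a SQL Server instance):

```sql
-- Report log file size and the percentage of log space used for every database
DBCC SQLPERF (LOGSPACE);
```

The “Log Space Used (%)” column makes the offending database obvious at a glance.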

 

Some useful SQL Server Agent job queries that help us get the details quickly


/* Failed jobs report. This query lists the names of all jobs that failed on their last attempt: */

SELECT A.name
FROM msdb.dbo.sysjobs A
INNER JOIN msdb.dbo.sysjobservers B ON A.job_id = B.job_id
WHERE B.last_run_outcome = 0;

 

 

/* Disabled jobs. Ever have a critical job that someone decided to disable? Catch it with this script: */

SELECT name FROM msdb.dbo.sysjobs WHERE enabled = 0 ORDER BY name;

 

/* Running jobs. Need to know which jobs are still running?

Ever come into work on Monday morning to find several critical jobs have “piled up”,
run far too long, or hung? This query lists the jobs currently running (whether normally
scheduled or not). It is also good for making sure your Distribution or Merge agent job
is still running. I call sp_get_composite_job_info, loading in a bunch of NULLs and a “1”
to indicate running jobs: */

EXECUTE msdb.dbo.sp_get_composite_job_info NULL, NULL, NULL, NULL, NULL, NULL, 1, NULL, NULL;

How can we use varchar in an encrypted column?


This method is a little different: here we work through views, with INSTEAD OF triggers handling the encryption much as a stored-procedure technique would. The full demonstration is below.

 

 

The goal of the encryption process is to abstract the tables with cell-level encryption by overlaying them with views. This allows the application (with the proper permissions) to continue to work with the encrypted data seamlessly via normal development methodologies, without having to repeatedly write the code to decrypt/encrypt the columns (cells). This process is generally the same for new applications and for existing applications that need to be modified to encrypt data, with the key difference being the requirement to open the keys for new applications to provide additional security.

 

The SQL Server encryption functions return varbinary data, so all columns to be encrypted need to be changed to varbinary. The use of views to overlay the tables provides an abstraction layer so the data types can be properly viewed by data modelers, developers, administrators, analysts, and other end users. For example, to encrypt an SSN stored as varchar(9), the column in the base table must be implemented as varbinary(200).

 

Users who access the views but do not have permission to open the proper key(s) will not receive an error; the encrypted columns will simply not be decrypted and will be returned as NULL.

 

Database developers will need to work closely with the data modeling team and the administrators to ensure the data types, views, encryption keys, and the backup and restore of keys are implemented properly. Data is not recoverable if the backups of the keys and databases are not implemented properly.
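Since lost keys mean unrecoverable data, it is worth scripting the key backups immediately after creating them. A minimal sketch (the file paths and backup passwords are illustrative placeholders; the certificate name matches the demo below):

```sql
-- Back up the database master key and the certificate with its private key.
-- Store these files and passwords securely, away from the database backups.
BACKUP MASTER KEY TO FILE = 'C:\keys\encrypt_test_master.key'
    ENCRYPTION BY PASSWORD = '<strong backup password 1>';

BACKUP CERTIFICATE cert_sk TO FILE = 'C:\keys\cert_sk.cer'
    WITH PRIVATE KEY (
        FILE = 'C:\keys\cert_sk.pvk',
        ENCRYPTION BY PASSWORD = '<strong backup password 2>'
    );
```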

 

Guidelines for implementing cell-level encryption:

 

All tables that include an encrypted cell (column) follow the normal naming conventions, with the addition of an underscore and the word “base” to the table name: {table name}_base. All columns follow the standard implementation except those to be encrypted, which must use a varbinary data type. The length of the varbinary column is determined by the maximum length returned from the encryption function.

 

A view is created that follows the normal standards, except its name is derived from the base table by dropping the underscore and the word “base” (this allows legacy code to continue to work). The view uses the decryption function and the proper CAST and CONVERT calls to turn the varbinary column back into the correct unencrypted data type.

 

Inserts, updates, and deletes are handled in the normal manner, with the T-SQL statements being executed against the view, NOT the underlying tables. INSTEAD OF triggers are created on each view to handle the inserts, updates, and deletes. This provides an additional layer of abstraction so that consistency can be maintained in the code.

 

A short T-SQL script example:

 

-- Database creation
CREATE DATABASE encrypt_test;
GO

USE encrypt_test;
GO

-- Create the master key and certificate
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'knights12$gzmlauncher#1@%dmissionisclear*()';
GO

CREATE CERTIFICATE cert_sk WITH SUBJECT = 'Certificate for accessing symmetric keys - for use by App';
GO

-- SQL Server can use the following algorithms for encryption: DES, TRIPLE_DES, RC2,
-- RC4, RC4_128, DESX, AES_128, AES_192 and AES_256.
-- RC4_128 is kept here to match the original demo; AES_256 is the stronger choice.
CREATE SYMMETRIC KEY sk_encrypt_test
    WITH ALGORITHM = RC4_128
    ENCRYPTION BY CERTIFICATE cert_sk;
GO

 

 

-- Table creation
CREATE TABLE dbo.Client_Base
(
    Client_Id   int IDENTITY(1,1) PRIMARY KEY,
    SSN         varbinary(200),
    Amount_Due  varbinary(200),
    Comments    varchar(1000)
);
GO

 

-- Create the view with CONVERT clauses so the encrypted columns read back in their logical types
CREATE VIEW dbo.Client
AS
    SELECT Client_Id,
           CONVERT(varchar(9), DECRYPTBYKEYAUTOCERT(CERT_ID('cert_sk'),
                 NULL, SSN, 1, CONVERT(varchar(10), Client_Id))) AS SSN,
           CONVERT(money, CONVERT(varchar(10), DECRYPTBYKEYAUTOCERT(CERT_ID('cert_sk'),
                 NULL, Amount_Due, 1, CONVERT(varchar(10), Client_Id)))) AS Amount_Due,
           Comments
    FROM dbo.Client_Base;
GO

SELECT * FROM dbo.Client;
GO

 

-- Create the INSTEAD OF INSERT trigger
CREATE TRIGGER dbo.trg_Client_Insert
    ON dbo.Client
INSTEAD OF INSERT
AS
BEGIN
    DECLARE @Client_Id int, @SSN varchar(9), @Amount_Due money,
            @Comments varchar(1000);

    OPEN SYMMETRIC KEY sk_encrypt_test DECRYPTION BY CERTIFICATE cert_sk;

    DECLARE cur_Client CURSOR FOR
        SELECT SSN, Amount_Due, Comments FROM inserted;

    OPEN cur_Client;
    FETCH NEXT FROM cur_Client INTO @SSN, @Amount_Due, @Comments;

    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- Insert the plain column first, then encrypt into the new row
        INSERT INTO dbo.Client_Base (Comments) VALUES (@Comments);
        SET @Client_Id = SCOPE_IDENTITY();

        UPDATE dbo.Client_Base
           SET SSN = ENCRYPTBYKEY(KEY_GUID('sk_encrypt_test'), @SSN, 1, CONVERT(varchar(10), @Client_Id)),
               Amount_Due = ENCRYPTBYKEY(KEY_GUID('sk_encrypt_test'), CONVERT(varchar(10), @Amount_Due), 1, CONVERT(varchar(10), @Client_Id))
         WHERE Client_Id = @Client_Id;

        FETCH NEXT FROM cur_Client INTO @SSN, @Amount_Due, @Comments;
    END

    CLOSE cur_Client;
    DEALLOCATE cur_Client;

    CLOSE SYMMETRIC KEY sk_encrypt_test;
END
GO

 

-- Create the INSTEAD OF UPDATE trigger
CREATE TRIGGER dbo.trg_Client_Update ON dbo.Client
INSTEAD OF UPDATE
AS
BEGIN
    OPEN SYMMETRIC KEY sk_encrypt_test DECRYPTION BY CERTIFICATE cert_sk;

    UPDATE Client_Base
       SET SSN = ENCRYPTBYKEY(KEY_GUID('sk_encrypt_test'), i.SSN, 1, CONVERT(varchar(10), i.Client_Id)),
           Amount_Due = ENCRYPTBYKEY(KEY_GUID('sk_encrypt_test'), CONVERT(varchar(10), i.Amount_Due), 1, CONVERT(varchar(10), i.Client_Id))
      FROM dbo.Client_Base
      INNER JOIN inserted i ON Client_Base.Client_Id = i.Client_Id;

    CLOSE SYMMETRIC KEY sk_encrypt_test;
END
GO

 

-- Create the INSTEAD OF DELETE trigger
CREATE TRIGGER dbo.trg_Client_Delete
    ON dbo.Client
INSTEAD OF DELETE
AS
BEGIN
    DELETE db
    FROM dbo.Client_Base db
    INNER JOIN deleted d ON db.Client_Id = d.Client_Id;
END
GO

 

----------------------------------------------------------

-- The master key MUST be open or all insert statements will FAIL
OPEN MASTER KEY
    DECRYPTION BY PASSWORD = N'knights12$gzmlauncher#1@%dmissionisclear*()';
GO

-- Insert a value
INSERT INTO dbo.Client (SSN, Amount_Due, Comments)
    VALUES ('123456789', 256.01, 'This is a test of encryption');
GO

SELECT * FROM dbo.Client;
SELECT * FROM dbo.Client_Base;
GO

-- Update the value (the view exposes SSN as varchar, so compare as strings)
UPDATE dbo.Client
    SET SSN = '987654321',
        Amount_Due = 100
    WHERE SSN = '123456789';
GO

SELECT * FROM dbo.Client;
GO

CLOSE MASTER KEY;

Is it possible to restore a SQL Server 2005 backup on SQL Server 2000?


Ans.

 

As per Microsoft, there is no workaround: SQL Server 2005 does not support downgrading a database, and a SQL Server 2005 backup cannot be restored on a SQL Server 2000 instance.

 

Or:

There is no “restore” functionality, but there is a workaround to copy a 2005 database to 2000:

  1. Right-click on the database -> Tasks -> Generate Scripts
  2. Select the database and click “script all objects…”, hit Next
  3. Select any options you want, specifically changing “Script for server version” to SQL Server 2000
  4. Next through and run the script

Now just export the data from the 2005 database to the newly created 2000 database:

  1. Right-click on the database -> Tasks -> Export…
  2. Set the source and hit Next
  3. Set the destination and hit Next
  4. Select “copy data from one or more tables”, hit Next
  5. Select all, check “optimize for many tables” and “run in a transaction”
  6. You may have to edit each table mapping and check “enable identity insert”
  7. Next through to finish

Resolve the Deadlock issue



 

Deadlock:

 

Deadlocking occurs when two or more SQL Server processes hold locks on separate database objects and each process tries to acquire a lock on an object that another process has already locked. For example, process one holds an exclusive lock on object one, process two holds an exclusive lock on object two, process one also wants an exclusive lock on object two, and process two wants an exclusive lock on object one. Because two processes can’t hold an exclusive lock on the same object at the same time, the two processes become entangled in a deadlock, with neither process willing to yield of its own accord.

 

The application log captured the following message:

 

Error captured:

Msg 1205, Sev 13, State 56, Line 1 : Transaction (Process ID 276) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction. [SQLSTATE 40001]

 

Steps for resolution:

  • Enable the deadlock trace flags
  • Capture the deadlock information with Profiler
  • Capture the deadlock resource IDs with a saved Profiler trace

 

 

 

To identify the deadlock, I enabled the trace flags on the SQL Server, so that any deadlock encountered is logged in the SQL error log and we can work out its root cause.

 

Enable the trace:

I turned on the deadlock trace flags: DBCC TRACEON (1204, 1222, -1)

The -1 argument applies the flags globally, enabling deadlock tracing for all existing and new connections. Trace flag 1204 reports deadlock information formatted by each node involved in the deadlock. Trace flag 1222 reports deadlock information formatted first by processes and then by resources. So if any deadlock occurs, it will at least be logged in the SQL error log.
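To confirm the flags are active after turning them on (standard DBCC commands, nothing assumed beyond sysadmin rights):

```sql
-- Enable the deadlock trace flags globally (-1) and verify their status
DBCC TRACEON (1204, 1222, -1);
DBCC TRACESTATUS (1204, 1222);
```

To keep the flags enabled across restarts, add -T1204 and -T1222 as SQL Server startup parameters instead.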

SQL error log report:

 

Deadlock encountered .... Printing deadlock information
2011-05-19 06:31:42.18 spid3
2011-05-19 06:31:42.18 spid3     Wait-for graph
2011-05-19 06:31:42.18 spid3
2011-05-19 06:31:42.18 spid3     Node:1
2011-05-19 06:31:42.20 spid3     PAG: 9:1:4122955               CleanCnt:2 Mode: SIU Flags: 0x2
2011-05-19 06:31:42.20 spid3      Grant List 0::
2011-05-19 06:31:42.20 spid3        Owner:0x38661880 Mode: S        Flg:0x0 Ref:0 Life:00000001 SPID:754 ECID:0
2011-05-19 06:31:42.20 spid3        SPID: 754 ECID: 0 Statement Type: DELETE Line #: 448
2011-05-19 06:31:42.20 spid3        Input Buf: RPC Event: sp_create_order;1
2011-05-19 06:31:42.20 spid3      Grant List 1::
2011-05-19 06:31:42.20 spid3      Requested By:
2011-05-19 06:31:42.20 spid3        ResType:LockOwner Stype:'OR' Mode: IX SPID:549 ECID:0 Ec:(0x5523F520) Value:0x1ef2dae0 Cost:(0/3C)
2011-05-19 06:31:42.20 spid3
2011-05-19 06:31:42.20 spid3     Node:2
2011-05-19 06:31:42.20 spid3     PAG: 9:1:3598450               CleanCnt:2 Mode: SIU Flags: 0x2
2011-05-19 06:31:42.20 spid3      Grant List 0::
2011-05-19 06:31:42.20 spid3      Grant List 1::
2011-05-19 06:31:42.20 spid3        Owner:0x38611f00 Mode: S        Flg:0x0 Ref:0 Life:00000001 SPID:549 ECID:0
2011-05-19 06:31:42.20 spid3        SPID: 549 ECID: 0 Statement Type: DELETE Line #: 448
2011-05-19 06:31:42.20 spid3        Input Buf: RPC Event: sp_executesql;1
2011-05-19 06:31:42.20 spid3      Requested By:
2011-05-19 06:31:42.20 spid3        ResType:LockOwner Stype:'OR' Mode: IX SPID:754 ECID:0 Ec:(0x47939548) Value:0x38661340 Cost:(0/3C)

 

In the “Wait-for graph” entry, you have Node 1 and Node 2. Each node has a grant section and a request section. The grant section is the “Grant List”, and the request section is the “Requested By”.
In each node, you can identify the following:

  • The SPID.
  • The command the SPID was executing.
  • The resource.
  • The lock mode on the resource.

Query:

SELECT * FROM sysobjects WHERE id = 4122955;
SELECT * FROM sysindexes WHERE indid = 1 AND id = 4122955;

 

 

Capturing the deadlock with Profiler:

SQL Server 2005 added new events to Profiler that make identifying deadlocks very easy, and I used the same technique.

 

Events selected in Profiler:

  • Deadlock graph
  • Lock: Deadlock
  • Lock: Deadlock Chain
  • RPC:Completed
  • SP:StmtCompleted
  • SQL:BatchCompleted
  • SQL:BatchStarting

 


Capturing the deadlock resource IDs with a saved Profiler trace:

Use SQL Profiler to trace deadlock events and get the resource ID of the table or index under contention. The steps are:

  1. Start SQL Profiler
  2. On the Trace Properties dialog box, on the General tab, check “Save to file” and specify a path to save the trace
  3. Click the Events tab, and add only Locks\Lock:Deadlock and Locks\Lock:Deadlock Chain
  4. Click the Data Columns tab, and add DatabaseID, IndexID, and ObjectID

This trace will record all deadlocks on this SQL Server instance, along with the ID of the source table of contention. To translate the database ID and object ID into names (although DatabaseName and ObjectName are selectable columns, the data is not always present in the trace), you can use the DB_NAME() and OBJECT_NAME() functions.
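For example (a sketch; 9 and 4122955 stand in for the DatabaseID and ObjectID captured in your own trace):

```sql
-- Translate the IDs from the trace into names
SELECT DB_NAME(9) AS database_name;          -- run anywhere on the instance
SELECT OBJECT_NAME(4122955) AS object_name;  -- run inside the database in question
```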


To help minimize deadlocks:

  • Access objects in the same order.
  • Avoid user interaction in transactions.
  • Keep transactions short and in one batch.
  • Use a lower isolation level.
  • Use a row versioning-based isolation level.
    • Set READ_COMMITTED_SNAPSHOT database option ON to enable read-committed transactions to use row versioning.
    • Use snapshot isolation.

 

Why should I use the SNAPSHOT isolation level? Advantage and disadvantage of Statement-Level Snapshot Isolation



 

SQL Server 2005 introduced a new SNAPSHOT isolation level that is intended to enhance concurrency for online transaction processing (OLTP) applications. The SNAPSHOT isolation level does not take locks while reading rows. As a true isolation level, SNAPSHOT is used within transactions and is set at the session level with the SET TRANSACTION ISOLATION LEVEL statement. During the course of a transaction, any changes to data that were read during the transaction are versioned: the original unchanged copy is stored in the versioning system in tempdb. For the remainder of the transaction, any SELECT statements read changed data from the version store and unchanged data directly from the database tables.

The great benefit of the new SNAPSHOT isolation level is that transactions no longer block each other because of SELECT statements. Transactions may still undergo blocking because of data change commands such as INSERT, UPDATE, and DELETE, because those commands still obtain locks. With SNAPSHOT isolation, you get the consistency of READ COMMITTED, and the concurrency of READ UNCOMMITTED. However, in order to use SNAPSHOT isolation, you must begin explicit transactions with the SET TRANSACTION ISOLATION LEVEL command. The SNAPSHOT isolation level has no application to SELECT statements outside explicit transactions.
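As a sketch of the mechanics described above (database and table names are illustrative, borrowed from the AdventureWorks sample), enabling and then using SNAPSHOT isolation looks like this:

```sql
-- Allow snapshot transactions in the database (one-time, per database)
ALTER DATABASE AdventureWorks SET ALLOW_SNAPSHOT_ISOLATION ON;
GO

-- Use it inside an explicit transaction at the session level
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
    -- Reads see a consistent snapshot as of the start of the transaction,
    -- without taking shared locks
    SELECT COUNT(*) FROM Sales.SalesOrderDetail;
COMMIT TRANSACTION;
```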

 

Statement-Level Snapshot Isolation:
The READ_COMMITTED_SNAPSHOT option extends the benefits of SNAPSHOT isolation level to all SELECT statements in a database by changing the way the READ COMMITTED isolation level behaves. It is a database option, not a session level setting like the SNAPSHOT isolation level. However, you do have to enable it with ALTER DATABASE, as in the following command:

   ALTER DATABASE AdventureWorks

      SET READ_COMMITTED_SNAPSHOT ON

 

And that’s it: no other commands are required. (But note: there can be no other users in the database when you execute the ALTER DATABASE command with the READ_COMMITTED_SNAPSHOT option.) Automatically, the READ COMMITTED isolation level starts using row versioning, and SELECT statements no longer take shared locks.
Costs and Benefits of Statement-Level Snapshots
Like the SNAPSHOT isolation level, one of the major benefits of the database option READ_COMMITTED_SNAPSHOT is the elimination of blocking combined with reading only committed data.

A Key Benefit: Consistency
There is another subtle but arguably the most important benefit for the statement-level READ_COMMITTED_SNAPSHOT option: because the data returned from a SELECT statement is a consistent snapshot of the data as of the beginning of the statement, you are assured that the SELECT statement will return consistent aggregate values.

For example, suppose your SELECT statement sums up certain values from one or more tables. You are assured by READ_COMMITTED_SNAPSHOT that the summary values will be based on a snapshot of the data consistent at the starting time of the SELECT statement. Even though the underlying data may change during the execution of the SELECT statement, those changes are not read by the SELECT statement because only the version of the data current at the start time of the query is read.

Costs of Statement-Level Snapshots
Like the SNAPSHOT isolation level, the versioned rows for READ_COMMITTED_SNAPSHOT are kept in tempdb. Enabling this option database-wide implies that a considerable amount of activity in tempdb will be required to store and maintain those versions. This activity could potentially exceed that of the SNAPSHOT isolation level because the versioning is now applied database-wide. As a result, you must plan for increased space usage and increased I/O demands on the tempdb database. To ensure smooth running of a production system using this specific isolation level, the database administrator must allocate enough disk space for tempdb to ensure there is always approximately 10 percent free space. When free space falls below 10 percent, system throughput will degrade because the version cleanup process will spend more time trying to reclaim space in the version store. If I/O performance in tempdb becomes an issue, we recommend the database administrator create more than one file for tempdb on different disks to increase I/O bandwidth. In fact, on multiprocessor computers, increasing the number of files to match the number of processors can often yield even greater gains.

READ_COMMITTED_SNAPSHOT option can eliminate deadlocking without any change to your code. Because SELECT statements are no longer blocked by transactions that change data, deadlocks caused by the interaction of shared locks and exclusive locks are eliminated. Because the UPDATE statements apply to different rows, they do not conflict. As the SELECT statements do not require any shared locks, no blocking between the transactions occurs, and the deadlock does not occur.

For more information about how to set the isolation level within an application, see Adjusting Transaction Isolation Levels.

 

Limitations of Transactions Using Row Versioning-based Isolation Levels

 

Consider the following limitations when working with row versioning-based isolation levels:

  • READ_COMMITTED_SNAPSHOT cannot be enabled in tempdb, msdb, or master.

 

  • Global temp tables are stored in tempdb. When accessing global temp tables inside a snapshot transaction, one of the following must happen:
    • Set the ALLOW_SNAPSHOT_ISOLATION database option ON in tempdb.
    • Use an isolation hint to change the isolation level for the statement.

 

  • Snapshot transactions fail when:
  • A database is made read-only after the snapshot transaction starts, but before the snapshot transaction accesses the database.
  • When accessing objects from multiple databases, a database’s state changed in such a way that database recovery occurred after the snapshot transaction started but before the snapshot transaction accessed the database (for example, the database was set OFFLINE and then ONLINE, auto-closed and reopened, or detached and reattached).
  • Distributed transactions, including queries in distributed partitioned databases, are not supported under snapshot isolation.

 

  • SQL Server does not keep multiple versions of system metadata. Data definition language (DDL) statements on tables and other database objects (indexes, views, data types, stored procedures, and common language runtime functions) change metadata. If a DDL statement modifies an object, any concurrent reference to the object under snapshot isolation causes the snapshot transaction to fail. Read-committed transactions do not have this limitation when the READ_COMMITTED_SNAPSHOT database option is ON.

For example, a database administrator executes the following ALTER INDEX statement.

USE AdventureWorks2008R2;
GO
ALTER INDEX AK_Employee_LoginID
    ON HumanResources.Employee REBUILD;
GO

Any snapshot transaction that is active when the ALTER INDEX statement is executed receives an error if it attempts to reference the HumanResources.Employee table after the ALTER INDEX statement is executed. Read-committed transactions using row versioning are not affected.