SQL Server Error Logs


* Error logs maintain events raised by the SQL Server database engine or Agent.
* Error logs are the main source for troubleshooting SQL Server problems.
* SQL Server supports 2 types of error logs
* SQL Server Logs
* SQL Agent Logs
FAQ: - What is recorded in error logs?
1. SQL Server startup events including database recovery.
2. Backup and restore details.
3. Any failed SQL Server jobs.
4. User-defined error messages raised with the WITH LOG clause.
5. Maintenance-related DBCC statements, such as DBCC CHECKDB and DBCC CHECKALLOC.
6. Turning trace flags on or off.
7. SQL Server's usage of a particular session for a long period of time.
8. Starting and stopping Profiler traces.
* By default SQL Server supports
1 - Current Log
6 - Archive Logs
* Error logs are present in LOG folder of respective instance.
* We can read error logs using
sp_readerrorlog
xp_readerrorlog
* By default the error logs are recycled automatically when the server is restarted. We can recycle error logs manually using
sp_cycle_errorlog
* We can configure up to 99 error logs
* How to Configure?
* Go to Object Explorer
* Management
* R.C on SQL Server Logs
* Configure
* Select checkbox "Limit the no of error logs..................."
Max no of logs = 20
* OK
* To filter the events from the error log
sp_readerrorlog 0,1,'Error',null
Parameter 1: Value of the error log file you want to read
0 = current, 1 = Archive #1, 2 = Archive #2, etc.
Parameter 2: Log file type: 1 or NULL = error log, 2 = SQL Agent log
Parameter 3: Search string 1 - string you want to search for
Parameter 4: Search string 2 - second string to further refine the results
Parameter 5: Start Date
Parameter 6: End Date

Scripts
WMI Error Solution
Copy the following script into Notepad and save it as "wmi.bat".
Double-click the file; once the WMI service is reinstalled, continue with setup.
@echo on
cd /d c:\temp
if not exist %windir%\system32\wbem goto TryInstall
cd /d %windir%\system32\wbem
net stop winmgmt
winmgmt /kill
if exist Rep_bak rd Rep_bak /s /q
rename Repository Rep_bak
for %%i in (*.dll) do RegSvr32 -s %%i
for %%i in (*.exe) do call :FixSrv %%i
for %%i in (*.mof,*.mfl) do Mofcomp %%i
net start winmgmt
goto End
:FixSrv
if /I (%1) == (wbemcntl.exe) goto SkipSrv
if /I (%1) == (wbemtest.exe) goto SkipSrv
if /I (%1) == (mofcomp.exe) goto SkipSrv
%1 /RegServer
:SkipSrv
goto End
:TryInstall
if not exist wmicore.exe goto End
wmicore /s
net start winmgmt
:End
Installing and Configuring SQL Server 2008
System Requirements
* O/S    : Windows 2003 with SP2 / Windows Server 2008
* Memory : 1GB
* CPU    : >1GHz
* HDD    : 2684MB (611MB for the database server)
* SQL Native Client
* .NET Framework 3.5 with SP1
* Windows Services
: WMI (Windows Management Instrumentation)
: RPC (Remote Procedure Call)
: DTC (Distributed Transaction Coordinator)
Installing SQL Server
* SQL Server supports 2 types of installations
1. Stand Alone Environment
2. Cluster-based Environment
1. Stand Alone installation
Steps
* Creating service account
* R.C on My Computer --> Manage
* Local users and groups
* R.C on Users
* New User
Name: SQLDBEngine_user
Password: hyd@123
Confirm password: hyd@123
Uncheck "User must change...."
Click on Create button
* Go to SQL Server dump or use DVD
* Double click on Setup
* Click on "Installation"
* Click on "New Stand Alone ........"
* OK
* Under Setup Support Files --> Click Install
FAQ: - I am installing SQL Server on the d: drive where 20GB of free space is available, but on the system drive (c:) there is only 100MB of free space. Will setup continue or fail?
Ans:
* Setup support files are installed on the system drive, where 118MB of free space is required. Setup fails.
* Next
* Under Installation Type select
Perform a new ............. option
Next

* Next
* Select "I Accept........"
* Next
* Under Feature Selection
select
Database Engine Services
SQL Server Replication
Full Text Search....
* Next
Note:
1. The first time, select shared features also.
2. Always install SQL Server on NTFS, not FAT32. NTFS provides higher security and better performance.
* Select "Named Instance" and enter: TEST
Instance Root Directory: d:\SQLSERVER
Next
* Next
* Under "Server Configuration" Click on button
Use Same account..........
Enter
Account Name: SQLDBEngine_user
password
hyd@123
OK
* Next
FAQ:- Once I install SQL Server what are the logins created automatically?
Ans:
SS 2005
* sa
* BUILTIN\Administrators
SS 2008
* sa
* Login for service account
* Logins for added users in wizard

* Under Database Engine Configuration


* Click on "Add Current User"
* Next
* Next
* Next
* Install

FAQ: - What are the differences in the installation of SS 2005 and SS 2008?
Ans
* In SQL Server 2005 there is no
* Installation Center
* Upgrading, edition change, rebuilding system databases etc. have to be performed by running setup from the command prompt.
* There is no "ConfigurationFile.ini"
* Next
* Close
Post Installation Steps
1. Verifying installation
* We can verify installation process using Summary.txt file, present in
C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log
2. Verifying instance folders
* For every instance the following folders are created in installation directory.
MSSQL10.<instanceName>
MSSQL
* BACKUP
* Binn - Consists of .exe and .dlls
* DATA - data and T.log files
* FTData
* Install - consists of .sql files
* Jobs - jobs and maintenance plans
* Log
- Error logs
* repldata- Replication snapshot folder
* Upgrade (2008)
3. Configuring Error logs
* Error logs maintain events raised by SQL Server.
* They consist of both errors and information.
* Error logs are present in the LOG folder of the respective instance.
* SS supports
1 Current Log
6 Archive logs by default.
* The Current Log has all the events since the last server restart.
* Once the server is restarted, all the events are flushed from the current log into Archive1, from Archive1 to Archive2, etc. This process is called recycling error logs.
* We can recycle error logs explicitly without a server restart using

sp_cycle_errorlog
* We can read error logs using sp_readerrorlog
* SS supports min 6 and max 99 error logs
* How to configure?
* Go to SSMS --> Object Explorer --> Management
* R.C on SQL Server Logs --> Configure
select checkbox "Limit no of ............"
Maximum logs..........: 20
* OK
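The same limit can be set from T-SQL. This writes the instance registry value behind the GUI checkbox; xp_instance_regwrite is a real but undocumented procedure, so treat this as a sketch:

```sql
-- Set the maximum number of error log files to 20
-- (equivalent to the GUI option above)
USE master;
GO
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE',
     N'Software\Microsoft\MSSQLServer\MSSQLServer',
     N'NumErrorLogs', REG_DWORD, 20;
GO
```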
FAQ: - When I start SQL Server it does not start successfully. What are the possible scenarios and how do we troubleshoot them?
Possible Scenarios
1. Logon Failure
* Problem with service account.
2. 17113
* Master files are moved or corrupted
3. 3417
* After moving master files we have not granted read/write permissions on new folder
* Or any one startup parameter is missing.
How to troubleshoot?
* Check windows event log
* start --> run --> eventvwr
* System
* In the right pane double click on Error
* Using SQL Server error log
* Go to LOG folder of respective instance
* Open ERRORLOG in notepad and check for errors
Dead Locks in SQL Server
One thing that you will almost certainly face at some time as a DBA is dealing with deadlocks. A deadlock occurs when two processes are trying to update the same record or set of records, but the processing is done in a different order, and therefore SQL Server selects one of the processes as a deadlock victim and rolls back its statements.
Say you have two sessions updating the same data: session 1 starts a transaction and updates table A; then session 2 starts a transaction, updates table B, and then updates the same records in table A.

Session 1 then tries to update the same records in table B. At this point it is impossible for both transactions to be committed, because the data was updated in a different order, so SQL Server selects one of the processes as a deadlock victim.
To further illustrate how deadlocks work you can run the following code in the Northwind
database.
To create a deadlock you can issue commands similar to the commands below.
Step Commands
1
--open a query window (1) and run these commands
begin tran
update products set supplierid = 2
2
-- open another query window (2) and run these commands
begin tran
update employees set firstname = 'Bob'
update products set supplierid = 1
3
-- go back to query window (1) and run these commands
update employees set firstname = 'Greg'
At this point SQL Server will select one of the processes as a deadlock victim and roll back its statement.
4
--issue this command in query window (1) to undo all of the changes
Rollback
5
--go back to query window (2) and run these commands to undo changes
Rollback
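Application code typically handles being chosen as the victim by catching error 1205 and retrying the whole transaction. A minimal sketch, assuming the Northwind tables from the demo above (TRY/CATCH is available from SQL Server 2005):

```sql
-- Retry the unit of work up to 3 times if chosen as a deadlock victim
DECLARE @retries int;
SET @retries = 3;
WHILE @retries > 0
BEGIN
    BEGIN TRY
        BEGIN TRAN;
        UPDATE products SET supplierid = 2;
        UPDATE employees SET firstname = 'Greg';
        COMMIT TRAN;
        SET @retries = 0;                 -- success: stop looping
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRAN;
        IF ERROR_NUMBER() = 1205
            SET @retries = @retries - 1;  -- deadlock victim: retry
        ELSE
        BEGIN
            SET @retries = 0;             -- other error: give up
            -- re-raise or log the error here as required
        END
    END CATCH
END
```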

Steps to capture deadlock information into the error log file
1. Enable trace flag 1204
DBCC TRACEON (1204)
2. Create an event alert for error number 1205 so that it sends a response to the required operator.
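DBCC TRACEON without the -1 argument enables the flag only for the current session; for deadlock logging you normally want it server-wide. Trace flag 1222 is the XML-format alternative available from SQL Server 2005:

```sql
-- Enable deadlock trace flags for all sessions (-1 = global).
-- 1204 writes node-oriented output; 1222 writes XML-style output.
DBCC TRACEON (1204, 1222, -1);

-- Verify which trace flags are active
DBCC TRACESTATUS (-1);
```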
Capturing deadlocks with Profiler
a. Start --> Run --> Profiler
b. Go to File menu --> New Trace
c. Select Server Name
d. Click on Options
e. Connect to a database = Browse Server
f. Yes
g. Select Northwind (required database)
h. OK
i. Connect
j. Enter Trace Name: Northwind_DeadLocks_Trace
k. Use the Template: Tuning
l. Select checkbox Save to File --> Save
m. Select Events Selection tab
n. Select checkbox Show all events

o. Under Locks node select Deadlock graph and Deadlock chain
p. Run
q. Go to SSMS --> run the deadlock commands from the steps above (the two query windows updating products and employees) until SQL Server selects one of the processes as a deadlock victim and rolls back its statement.
r. Stop the trace in Profiler
s. Under Event Class click on Deadlock graph
Resources That Can Deadlock

Each user session might have one or more tasks running on its behalf where each task might acquire or wait to
acquire a variety of resources. The following types of resources can cause blocking that could result in a deadlock.
Locks. Waiting to acquire locks on resources, such as objects, pages, rows, metadata, and
applications can cause deadlock. For example, transaction T1 has a shared (S) lock on row r1 and is waiting to get
an exclusive (X) lock on r2. Transaction T2 has a shared (S) lock on r2 and is waiting to get an exclusive (X) lock on
row r1. This results in a lock cycle in which T1 and T2 wait for each other to release the locked resources.
Worker threads. A queued task waiting for an available worker thread can cause deadlock. If the
queued task owns resources that are blocking all worker threads, a deadlock will result. For example, session S1
starts a transaction and acquires a shared (S) lock on row r1 and then goes to sleep. Active sessions running on all
available worker threads are trying to acquire exclusive (X) locks on row r1. Because session S1 cannot acquire a
worker thread, it cannot commit the transaction and release the lock on row r1. This results in a deadlock.
Memory. When concurrent requests are waiting for memory grants that cannot be satisfied with the available memory, a deadlock can occur. For example, two concurrent queries, Q1 and Q2, execute as user-defined functions that acquire 10MB and 20MB of memory respectively. If each query needs 30MB and the total available memory is 20MB, then Q1 and Q2 must wait for each other to release memory, and this results in a deadlock.
Parallel query execution-related resources. Coordinator, producer, or consumer threads associated with an exchange port may block each other, causing a deadlock, usually when at least one other process that is not part of the parallel query is involved. Also, when a parallel query starts execution, SQL Server determines
the degree of parallelism, or the number of worker threads, based upon the current workload. If the system workload
unexpectedly changes, for example, where new queries start running on the server or the system runs out of worker
threads, then a deadlock could occur.
Multiple Active Result Sets (MARS) resources. These resources are used to control interleaving
of multiple active requests under MARS
User resource. When a thread is waiting for a resource that is potentially controlled by a
user application, the resource is considered to be an external or user resource and is treated like a lock.
Session mutex. The tasks running in one session are interleaved, meaning that only one
task can run under the session at a given time. Before the task can run, it must have exclusive access to the session
mutex.
Transaction mutex. All tasks running in one transaction are interleaved, meaning that
only one task can run under the transaction at a given time. Before the task can run, it must have exclusive access to
the transaction mutex.
Deadlock Detection

All of the resources listed in the section above participate in the Database Engine deadlock detection scheme.
Deadlock detection is performed by a lock monitor thread that periodically initiates a search through all of the tasks
in an instance of the Database Engine. The following points describe the search process:
The default interval is 5 seconds.
If the lock monitor thread finds deadlocks, the deadlock detection interval will drop from 5
seconds to as low as 100 milliseconds depending on the frequency of deadlocks.
If the lock monitor thread stops finding deadlocks, the Database Engine increases the intervals
between searches to 5 seconds.
If a deadlock has just been detected, it is assumed that the next threads that must wait for a lock
are entering the deadlock cycle. The first couple of lock waits after a deadlock has been detected will immediately
trigger a deadlock search rather than wait for the next deadlock detection interval. For example, if the current
interval is 5 seconds, and a deadlock was just detected, the next lock wait will kick off the deadlock detector
immediately. If this lock wait is part of a deadlock, it will be detected right away rather than during next deadlock
search.
To help minimize deadlocks:
Access objects in the same order.
Avoid user interaction in transactions.
Keep transactions short and in one batch.
Use a lower isolation level.
Use a row versioning-based isolation level.
Set READ_COMMITTED_SNAPSHOT database option ON to enable read-committed
transactions to use row versioning.
Use snapshot isolation.
Use bound connections.
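The last few items on the list can be applied with ALTER DATABASE. A short example (the database name is illustrative; switching READ_COMMITTED_SNAPSHOT requires no other active connections in the database):

```sql
-- Let read-committed transactions use row versioning instead of shared locks
ALTER DATABASE AdventureWorks SET READ_COMMITTED_SNAPSHOT ON;

-- Or allow transactions to request snapshot isolation explicitly
ALTER DATABASE AdventureWorks SET ALLOW_SNAPSHOT_ISOLATION ON;
```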
Query Architecture
Performance Tuning, Monitoring and Troubleshooting
* As part of performance tuning we have to analyze and work with
* Physical I/O and Logical I/O
* CPU usage
* Memory usage
* Database Design
* Application's db programming methods
Query Architecture
* When a query is submitted to the Database Engine for the first time it performs the following
* Parsing (Compiling)
* Resolving (Verifying syntax, table and column names etc.)
* Optimizing (Generating execution plan)
* Executing (Executing query)
* The next time, if the query is executed with the same case and the same number of characters, i.e. with no extra spaces, the query is executed using the existing plan.
* To display cached plans
SELECT cp.objtype AS PlanType,
OBJECT_NAME(st.objectid,st.dbid) AS ObjectName,
cp.refcounts AS ReferenceCounts, cp.usecounts AS UseCounts,
st.text AS SQLBatch, qp.query_plan AS QueryPlan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st;
GO
* To remove plans from cache memory
DBCC FREEPROCCACHE
Execution Plan
* The step-by-step process followed by SS to execute a query is called the execution plan.
* It is prepared by the Query Optimizer using STATISTICS.
* The Query Optimizer prepares the execution plan and stores it in the Procedure Cache.
* Execution plans are different for
* Different-case statements
* Different-size statements (spaces)
* To view graphical execution plan
* select the query --> press ctrl+M/L
* To view xml execution plan
* set showplan_xml on/off
* Execute the query
* To view text based execution plan
* set showplan_text on/off
* Execute the query.
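As a quick illustration of the SHOWPLAN options above (the sample query assumes the AdventureWorks HumanResources.Employee table used elsewhere in these notes):

```sql
-- Show the estimated XML plan without executing the query
SET SHOWPLAN_XML ON;
GO
SELECT * FROM HumanResources.Employee;
GO
SET SHOWPLAN_XML OFF;
GO
```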
Statistics
* Consist of metadata of the table or index.
* If statistics are out of date, the query optimizer may prepare a poor plan.
* We should update statistics weekly with a maintenance plan.
USE master
GO
-- Enable Auto Update of Statistics
ALTER DATABASE AdventureWorks SET AUTO_UPDATE_STATISTICS ON;
GO
-- Update Statistics for whole database
EXEC sp_updatestats
GO
-- Get List of All the Statistics of Employee table
sp_helpstats 'HumanResources.Employee', 'ALL'
GO
-- Get List of statistics of AK_Employee_NationalIDNumber index
DBCC SHOW_STATISTICS ("HumanResources.Employee",AK_Employee_NationalIDNumber)
-- Update Statistics for single table
UPDATE STATISTICS HumanResources.Employee
GO
-- Update Statistics for single index on single table

UPDATE STATISTICS HumanResources.Employee AK_Employee_NationalIDNumber


GO
Index
* An index is another database object which can be used
* To reduce the searching process
* To enforce uniqueness
* By default SS searches for rows using a process called table scan.
* If the table consists of huge data then a table scan provides less performance.
* An index is created in a tree-like structure which consists of root, node and leaf levels.
* At the leaf level, index pages are present by default.
* We can place max 250 indexes per table (1 clustered + 249 nonclustered; SQL Server 2008 raises the nonclustered limit to 999).
* Indexes are automatically placed if we place
* Primary key (clustered index)
* Unique (unique nonclustered index)
* We can place indexes as follows
create [unique][clustered/nonclustered] index <indexName> on
<tname>/<viewName>(col1,col2,....)
[include(.....)]
Types
------
* Clustered
* NonClustered
1. Clustered Index
------------------
* It physically sorts the rows in the table.
* A table can have only ONE clustered index.
* Both data and index pages are merged and stored at the third level (leaf level).
* We can place it on columns which are used to search a range of rows.
Ex:
Create table prods(pid int,pname varchar(40), qty int)
insert prods values(4,'Books',50),(2,'Pens',400)
select * from prods (run the query by pressing ctrl+L)
create clustered index pid_indx on prods(pid)
select * from prods -- check the rows are sorted in asc order of pid

FAQ:- Difference between P.K and Clustered Index?
* A primary key enforces uniqueness and allows establishing relationships. By default a clustered index cannot.

select * from prods where pid=2 -- press ctrl+L to check execution plan
insert prods values(3,'Pencils',500) -- Check this row is inserted as the second record.
Note: A table without a clustered index is called a HEAP, where the rows and pages of the table are not present in any order.

NonClustered Index
------------------
* It cannot sort the rows physically.
* We can place max 249 nonclustered indexes on a table.
* Data and index pages are stored separately.
* It locates rows either from the heap (table scan) or from the clustered index.
* Always place the clustered index first, then nonclustered indexes.
* If the table is a heap the index page consists of
IndexKeyColValues
RowReference
* If the table has a clustered index then the index page consists of
IndexKeyColValues
ClusteredIndexKeyColValues
* Nonclustered indexes are rebuilt when
* the clustered index is created/dropped/modified
Ex: Create nonclustered index on pname column of prods table.
create index indx1 on prods(pname)
select * from prods where pname='Books' -- check execution plan
* To display indexes present on a table
sp_helpindex <tname>
* To drop an index
drop index prods.pid_indx
* To display space used by the index
sp_spaceused prods
Using Included Columns in a NonClustered Index
----------------------------------------------
* We can maintain regularly used columns in the nonclustered index so that SQL Server does not need to fetch data from the heap or the clustered index.
* If the number of rows is large it provides better performance.
Ex:
--step1
USE AdventureWorks
GO
CREATE NONCLUSTERED INDEX IX_Address_PostalCode
ON Person.Address (PostalCode)
INCLUDE (AddressLine1, AddressLine2, City, StateProvinceID)
GO
--step2

SELECT AddressLine1, AddressLine2, City, StateProvinceID, PostalCode
FROM Person.Address
WHERE PostalCode BETWEEN '98000' AND '99999';
GO
Index Management
FillFactor
-----------
* Percentage of space used in leaf-level index pages.
* By default it is 100%.
* To reduce page splits when the data is manipulated in the base table we can set a proper FillFactor.
* Online index processing: while an index rebuild is in progress, users can continue working with the table.

Page Split
-----------
* Due to regular changes in the table, if the index pages are full, then to allocate memory for the index key columns SS moves the remaining rows into a new page. This process is called a page split.
* Page splits increase the size of the index, and the order of the index pages changes.
* The situation where unused free space is available and the index pages are not in the order of the key column values is called fragmentation.
* To find the fragmentation level we can use
dbcc showcontig
or
the sys.dm_db_index_physical_stats DMF as follows
SELECT a.index_id, name, avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats
(DB_ID('AdventureWorks'),
OBJECT_ID('Production.Product'), NULL, NULL, NULL)
AS a JOIN sys.indexes AS b
ON a.object_id = b.object_id
AND a.index_id =b.index_id;
* To control fragmentation we can either reorganize the index or rebuild the index.
1. Reorganizing Index
* It is the process of arranging the index pages according to the order of the index key column values.
* If the fragmentation level is more than 5-8% and less than 28-30% then we can reorganize the indexes.
* It cannot reduce the index size, and statistics are not updated.
syn:
ALTER INDEX <indexName>/<ALL> on <tname> REORGANIZE
2. Index Rebuilding
* It is the process of deleting and creating a fresh index.
* It reduces the size of the index and updates statistics.
* If the fragmentation level is more than 30% then we can rebuild indexes.
syn:
ALTER INDEX <indexName>/<ALL> on <tname> REBUILD
Note:
If we have mentioned the ONLINE INDEX PROCESSING option then rebuilding takes space in TEMPDB.
To check consistency of a database we can use DBCC CHECKDB('dbName'); it displays whether any corrupted pages are present, and it uses space in tempdb.
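Tying the thresholds above to the syntax, a short sketch against the AdventureWorks table used in the fragmentation query (the 30% cutoff follows the guideline above; ONLINE rebuild requires Enterprise edition):

```sql
-- Reorganize lightly fragmented indexes (roughly 5-30%)
ALTER INDEX ALL ON Production.Product REORGANIZE;

-- Rebuild heavily fragmented indexes (>30%), keeping the table available
ALTER INDEX ALL ON Production.Product REBUILD WITH (ONLINE = ON);

-- Verify database consistency afterwards
DBCC CHECKDB('AdventureWorks');
```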
Transactions and Locks
----------------------------
* A transaction is a single unit of work which may consist of one or more commands.
* Transactions work with ACID properties
* Atomicity
* Consistency
* Isolation
* Durability
* SQL Server supports 2 types of transactions
* Implicit
* Explicit
* By default SS supports implicit transactions, where for every insert, update and delete the records are stored in the T.Log file
Begin tran
insert/update/delete
commit tran
* To implement business logic, i.e. when we want to commit or rollback the changes according to the requirement, we can use explicit transactions.
Begin Tran
---
commit/rollback tran
* Any transaction which consists of manipulations places locks on the tables.
* By default when we make a db the current db, automatically a Shared lock is placed.
* While working with insert, update and delete, by default SS places an Exclusive lock.
* The type of locks placed on objects depends on the isolation level.
Isolation Levels
------------------* It is a transaction property.
* Types of locks placed by SS on the resource depends on isolation levels.
* SS supports 5 isolation levels
* Read Committed (Default)
* Read Uncommitted
* Repeatable Reads
* Snapshot
* Serializable
* To check the isolation level
dbcc useroptions
* To set the isolation level
SET TRANSACTION ISOLATION LEVEL <requiredisolationlevel>
* To handle the concurrency related problems SS places locks
* SS supports 2 types of concurrencies
* Optimistic Concurrency
* Uses Shared Locks
* More concurrency
* Pessimistic Concurrency
* Uses Exclusive Locks
* Low concurrency
Ex: Open new query window
--user1
use Test
go
begin tran
update emp set sal=5000
Take new query -->
--user2
use Test
go
select * from emp (--query waits until user1's session releases the lock)
Take new query
--user3
set transaction isolation level read uncommitted
select * from emp
--Take new query
sp_lock
-- To view locks information
or
select * from sys.dm_tran_locks
--check blocking using
sp_who/sp_who2
-- To check locks placed by a particular session
sp_lock <spid>
sp_lock 56
Database Mirroring
Introduction

Database mirroring is another high-availability feature, available from SQL Server 2005. Previous versions support the simpler high-availability feature called Transaction Log Shipping. Log shipping has its own limitations: it doesn't support automatic failover, and there may be data loss. All these limitations can be overcome with database mirroring.
Database mirroring supports automatic failover, and transactions are applied to the standby (mirror) server immediately once they are committed at the principal server. Unlike log shipping, there is no need for backup, copy and restore operations and jobs.
Points to remember
On the Principal server the database is in the ONLINE state.
On the mirror server the database is in a restoring state, which means it is not available for incoming requests. However, we can create a database snapshot of the mirror database, which provides a point-in-time read-only view of the database.
Advantages and benefits:
* Protection against database failures
* Automatic failure detection and failover
* Support for easy manual failover
* Automatic client redirection
* Multiple operating modes
* No special hardware requirements
* Minimized chance of data loss
* Relative ease of setup and configuration
FAQ: - What are new features introduced in SQL Server 2008 mirroring?
1. Automatic page repair
Database mirroring can, however, recover from the following errors:
Error 823: Operating system Cyclic Redundancy Check (CRC) failure
Error 824: Logical errors including a bad page checksum or torn write
Error 829: Page has been marked as restore pending
* To view the repaired pages
Select * from sys.dm_db_mirroring_auto_page_repair
2. Log Stream Compression
Log stream compression between the principal and the mirror server to minimize network bandwidth.
2. Mirroring Architecture
Mirroring Operating Modes
* Synchronous
* High Availability (High safety with automatic failover)
* Principal, Mirror and Witness
* Supports automatic failover
* High Protection
* Principal, Mirror
* No automatic failover
* Asynchronous
* High Performance
* Principal, Mirror
* No automatic failover
FAQ: - How to enable mirroring feature in SS 2005 RTM?

Requirements
1. SQL Server 2005 with SP1 or SQL Server 2008
2. Database should be in FULL recovery model.
3. Service Broker should be enabled on the database.
4. Both the servers should have either Enterprise or Standard editions.
5. Both the servers should have the same edition.
6. Witness server can have any edition.
Configuring Mirroring Steps
1. Configuring security and communication between instances
a. Configuring endpoints
b. Creating logins for the other servers' service accounts
c. Granting CONNECT permission to these logins on the endpoints
2. Create mirror database
a. Take full and T.Log backups from the principal server and restore them on the mirror server WITH NORECOVERY.
3. Establish the mirroring session using the ALTER DATABASE command
Steps
1. Go to SSMS
2. Connect 2 or 3 instances, for example
a. CLASS2\sql2K8 - Principal
b. CLASS2\FIRST - Mirror
c. CLASS2\THIRD - Witness
3. Note down the above instances' service accounts
a. CLASS2\SQL2K8 (CLASS2\KAREEM)
b. CLASS2\FIRST (CLASS2\KAREEM)
c. CLASS2\THIRD (CLASS2\SQLUSER)
4. Verify whether both Principal and Mirror have the same edition (Enterprise or Standard) by running the following command on both servers
Select serverproperty('edition')
5. Go to the Principal server and create a sample database (in a real-time environment we have to use an existing database) with the name OptimizeSQL
6. Create one sample table in the database with some rows.
7. Take FULL and Transaction Log backups of the OptimizeSQL database on the principal server.
Use master
go
backup database OptimizeSQL to disk='\\Class2\backups\OptimizeSQL.bak'

go
backup log OptimizeSQL to disk='\\Class2\backups\OptimizeSQL.bak'
go
8. Go to the Mirror Server, create a folder with the name d:\OptimizeSQL_Files and grant read/write permissions to the service account. Restore the database using the recovery state WITH NORECOVERY.
RESTORE DATABASE OptimizeSQL
FROM DISK='\\Class2\backups\OptimizeSQL.bak'
WITH FILE= 1,
MOVE 'OptimizeSQL' TO 'd:\OptimizeSQL_Files\OptimizeSQL.mdf',
MOVE 'OptimizeSQL_log' TO 'd:\OptimizeSQL_Files\OptimizeSQL_1.ldf' ,NORECOVERY
GO
RESTORE LOG OptimizeSQL
FROM DISK='\\Class2\backups\OptimizeSQL.bak'
WITH FILE= 2,NORECOVERY
GO
Configuring Mirroring
9. Go to Principal Server --> Right Click on database OptimizeSQL --> Tasks --> Mirror
10. Click on Configure Security --> Click Next
11. Next
12. Select Yes if you have a witness instance, otherwise select No.
13. Next --> Next
14. Select principal instance --> Next
15. Click on Connect --> select Mirror Server instance name (e.g. class2\First)
16. Select Connect --> Next
17. Once again click on the Connect button --> select Witness Server instance name (CLASS2\THIRD) --> Next
18. Enter service accounts
19. Click Next --> Finish
20. Close.
21. Select Do Not Start Mirroring.
22. Select Start Mirroring
23. Check the status --> OK

FAQ: - While configuring mirroring what errors have you faced?

Answer:

Points to Remember
1. One job is created on both the servers: Database Mirroring Monitor Job
2. The default Partner Timeout is 10 sec.
3. How can you say that both the dbs are 100% in sync?
a. We can view the unsent log and unrestored log values. If both are 0 then they are 100% in sync. (In Mirroring Monitor)
b. We can view the Mirroring Failover LSN and Replication LSN with sys.database_mirroring. Both should be the same.
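A short example of the catalog-view check in point 3b (run on the principal; the columns come from sys.database_mirroring):

```sql
-- Inspect the mirroring state and failover LSN for the mirrored database
SELECT mirroring_state_desc,
       mirroring_role_desc,
       mirroring_failover_lsn
FROM sys.database_mirroring
WHERE database_id = DB_ID('OptimizeSQL');
```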

4. Mirroring States
1. Synchronizing
2. Synchronized
3. Disconnected (if the mirror or principal failed)
4. Suspended (if the principal is unavailable or unable to send transactions to the mirror)
5. Pending Failover (if the unsent log is > 0)
5. To change the mirroring timeout (run on the principal server)
Alter database OptimizeSQL SET PARTNER TIMEOUT 30
Monitoring Mirroring
We can monitor mirroring using the following options
* Using MSDB tables and views
* Using Database Mirroring Monitor
* Using Performance Monitor
* Using Profiler
Using MSDB tables and views
* To view complete details of mirroring (on the Principal Server)
Select * from sys.database_mirroring
* To view mirroring endpoint details (on the Principal Server)
Select * from sys.database_mirroring_endpoints
* To view Principal and Mirror server details and the mirroring state, run the following query on the witness server
Select * from sys.database_mirroring_witnesses
Using Database Mirroring Monitor
We can monitor the following features
* Unsent Log (at principal)
* Un restored Log (at mirror)
* Transaction Rate
* Commit Overhead (Transactions applied rate at mirror)
Ex: Go to principal server and run the following query
use OptimizeSQL
go
declare @n int=100
while @n<=1000000
begin
insert emp values(@n,'Rajesh',60)
set @n+=1
end
b. Right click on OptimizeSQL db --> Tasks -->Launch Database Mirroring Monitor
c. Select "Database Mirroring Monitor"
d. Click on Register Mirror databases
e. Click on Connect and select Mirror Server
f. Select the database OptimizeSQL --> OK
g. Observe the parameters by refreshing (F5) the monitor.
Configuring Thresholds
Go to Mirroring Monitor --> select the Warnings tab --> set thresholds.
Using Performance Monitor
We can monitor the following counters for Mirrored databases
* We can use the performance object called

"<instanceName>:Database Mirroring"
* Counters which we have to observe regularly
* Bytes Sent/sec
* Log Harden Time (commit overhead)
* Sends/sec
* Transaction Delay (at the principal)
* Pages Sent/sec
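The same counters can also be read from T-SQL through the sys.dm_os_performance_counters DMV, without opening perfmon:

```sql
-- Lists all database mirroring counters for this instance
SELECT object_name, counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%:Database Mirroring%';
```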

Steps
1. Start --> Run --> perfmon
2. Add counters (Ctrl + I) or click on the + symbol present on the toolbar.
3. Add the required counters by selecting the MSSQL$SQL2K8:Database Mirroring performance object.
4. To view the changes, run the previous script.
Configuring Alerts
Performing Failover
The failover process depends on the operating mode.
If the operating mode is "High safety with automatic failover", the witness server brings the mirror database online automatically within the configured timeout.
In case of the other operating modes we have to perform failover manually.
In High Performance
Run the following command on the mirror server
ALTER DATABASE <dbname> SET PARTNER FORCE_SERVICE_ALLOW_DATA_LOSS
Transfer the logins.
Make the database available to the users and applications.
In High Protection
Run the following commands on the mirror server:
ALTER DATABASE <dbname> SET PARTNER OFF; (to break mirroring)
The database comes into the restoring state; run the following command to bring it online:
RESTORE DATABASE <dbname> WITH RECOVERY
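For comparison, in high-safety mode with both partners connected and synchronized, a planned manual failover is issued from the principal (OptimizeSQL is the example database used earlier):

```sql
-- Run on the principal while the session state is SYNCHRONIZED;
-- the partners swap roles without data loss
ALTER DATABASE OptimizeSQL SET PARTNER FAILOVER;
```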

Threads created for database mirroring


FAQ :- If the mirror server fails, what is the effect on the principal database's T.Log file?
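In short: the principal keeps accepting transactions, but its transaction log cannot be truncated past the oldest unsent record, so the T.Log file keeps growing until the mirror reconnects or mirroring is removed. This shows up as a DATABASE_MIRRORING log-reuse wait:

```sql
-- Run on the principal; DATABASE_MIRRORING here means log truncation
-- is blocked waiting on the mirror
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'OptimizeSQL';
```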

Quorum
Quorum is a relationship that exists when two or more server instances in a database mirroring session
are connected to each other. Typically, quorum involves three interconnected server instances. When a

witness is set, quorum is required to make the database available. Designed for high-safety mode with
automatic failover, quorum makes sure that a database is owned by only one partner at a time.
Three types of quorum are possible:
* A full quorum includes both partners and the witness.
* A witness-to-partner quorum consists of the witness and either partner.
* A partner-to-partner quorum consists of the two partners.
Possible Failures during Database Mirroring
As part of mirroring, generally we have two types of errors:
* Soft errors
* Hard errors
Soft Errors
Errors identified by the SQL Server service itself (sqlservr.exe) are called soft errors. Examples:
* Network errors such as TCP link time-outs, dropped or corrupted packets, or packets that are in an incorrect order.
* A hanging operating system, server, or database state.
* A Windows server timing out.

Hard Errors
Errors identified by Windows and reported to sqlservr.exe are called hard errors. Examples:
* A broken connection or wire
* A bad network card
* A router change
* Changes in the firewall
* Endpoint reconfiguration
* Loss of the drive where the transaction log resides
* Operating system or process failure

Transferring Logins from one instance to another
The first two steps should be executed on the primary server; then copy the output of step 2 and run it on the standby (second) server.
--step1
USE master
GO
IF OBJECT_ID ('sp_hexadecimal') IS NOT NULL

DROP PROCEDURE sp_hexadecimal


GO
CREATE PROCEDURE sp_hexadecimal

@binvalue varbinary(256),
@hexvalue varchar (514) OUTPUT
AS
DECLARE @charvalue varchar (514)
DECLARE @i int
DECLARE @length int
DECLARE @hexstring char(16)
SELECT @charvalue = '0x'
SELECT @i = 1
SELECT @length = DATALENGTH (@binvalue)
SELECT @hexstring = '0123456789ABCDEF'
WHILE (@i <= @length)
BEGIN
DECLARE @tempint int
DECLARE @firstint int
DECLARE @secondint int
SELECT @tempint = CONVERT(int, SUBSTRING(@binvalue,@i,1))
SELECT @firstint = FLOOR(@tempint/16)
SELECT @secondint = @tempint - (@firstint*16)
SELECT @charvalue = @charvalue +
SUBSTRING(@hexstring, @firstint+1, 1) +
SUBSTRING(@hexstring, @secondint+1, 1)
SELECT @i = @i + 1
END
SELECT @hexvalue = @charvalue
GO
IF OBJECT_ID ('sp_help_revlogin') IS NOT NULL
DROP PROCEDURE sp_help_revlogin
GO
CREATE PROCEDURE sp_help_revlogin @login_name sysname = NULL AS
DECLARE @name sysname
DECLARE @type varchar (1)
DECLARE @hasaccess int
DECLARE @denylogin int
DECLARE @is_disabled int
DECLARE @PWD_varbinary varbinary (256)
DECLARE @PWD_string varchar (514)
DECLARE @SID_varbinary varbinary (85)
DECLARE @SID_string varchar (514)
DECLARE @tmpstr varchar (1024)
DECLARE @is_policy_checked varchar (3)
DECLARE @is_expiration_checked varchar (3)

DECLARE @defaultdb sysname


IF (@login_name IS NULL)
DECLARE login_curs CURSOR FOR
SELECT p.sid, p.name, p.type, p.is_disabled, p.default_database_name, l.hasaccess, l.denylogin
FROM
sys.server_principals p LEFT JOIN sys.syslogins l
ON ( l.name = p.name ) WHERE p.type IN ( 'S', 'G', 'U' ) AND p.name <> 'sa'
ELSE
DECLARE login_curs CURSOR FOR

SELECT p.sid, p.name, p.type, p.is_disabled, p.default_database_name, l.hasaccess, l.denylogin


FROM
sys.server_principals p LEFT JOIN sys.syslogins l
ON ( l.name = p.name ) WHERE p.type IN ( 'S', 'G', 'U' ) AND p.name = @login_name
OPEN login_curs
FETCH NEXT FROM login_curs INTO @SID_varbinary, @name, @type, @is_disabled, @defaultdb,
@hasaccess, @denylogin
IF (@@fetch_status = -1)
BEGIN
PRINT 'No login(s) found.'
CLOSE login_curs
DEALLOCATE login_curs
RETURN -1
END
SET @tmpstr = '/* sp_help_revlogin script '
PRINT @tmpstr
SET @tmpstr = '** Generated ' + CONVERT (varchar, GETDATE()) + ' on ' + @@SERVERNAME + ' */'
PRINT @tmpstr
PRINT ''
WHILE (@@fetch_status <> -1)
BEGIN
IF (@@fetch_status <> -2)
BEGIN
PRINT ''
SET @tmpstr = '-- Login: ' + @name
PRINT @tmpstr
IF (@type IN ( 'G', 'U'))
BEGIN -- NT authenticated account/group
SET @tmpstr = 'CREATE LOGIN ' + QUOTENAME( @name ) + ' FROM WINDOWS WITH DEFAULT_DATABASE = [' + @defaultdb + ']'
END
ELSE BEGIN -- SQL Server authentication
-- obtain password and sid
SET @PWD_varbinary = CAST( LOGINPROPERTY( @name, 'PasswordHash' ) AS varbinary
(256) )
EXEC sp_hexadecimal @PWD_varbinary, @PWD_string OUT
EXEC sp_hexadecimal @SID_varbinary,@SID_string OUT
-- obtain password policy state
SELECT @is_policy_checked = CASE is_policy_checked WHEN 1 THEN 'ON' WHEN 0 THEN
'OFF' ELSE NULL END FROM sys.sql_logins WHERE name = @name
SELECT @is_expiration_checked = CASE is_expiration_checked WHEN 1 THEN 'ON' WHEN 0
THEN 'OFF' ELSE NULL END FROM sys.sql_logins WHERE name = @name
SET @tmpstr = 'CREATE LOGIN ' + QUOTENAME( @name ) + ' WITH PASSWORD = ' + @PWD_string + ' HASHED, SID = ' + @SID_string + ', DEFAULT_DATABASE = [' + @defaultdb + ']'
IF ( @is_policy_checked IS NOT NULL )
BEGIN
SET @tmpstr = @tmpstr + ', CHECK_POLICY = ' + @is_policy_checked
END
IF ( @is_expiration_checked IS NOT NULL )
BEGIN
SET @tmpstr = @tmpstr + ', CHECK_EXPIRATION = ' + @is_expiration_checked
END
END
IF (@denylogin = 1)
BEGIN -- login is denied access
SET @tmpstr = @tmpstr + '; DENY CONNECT SQL TO ' + QUOTENAME( @name )
END
ELSE IF (@hasaccess = 0)
BEGIN -- login exists but does not have access
SET @tmpstr = @tmpstr + '; REVOKE CONNECT SQL TO ' + QUOTENAME( @name )
END
IF (@is_disabled = 1)
BEGIN -- login is disabled
SET @tmpstr = @tmpstr + '; ALTER LOGIN ' + QUOTENAME( @name ) + ' DISABLE'
END
PRINT @tmpstr
END
FETCH NEXT FROM login_curs INTO @SID_varbinary, @name, @type, @is_disabled, @defaultdb,
@hasaccess, @denylogin
END
CLOSE login_curs

DEALLOCATE login_curs
RETURN 0
GO
--step2
EXEC sp_help_revlogin
--step3:
The above stored procedure generates output such as the following:
/* sp_help_revlogin script
** Generated May 25 2009 9:11PM on ONLINE */
-- Login: BUILTIN\Administrators
CREATE LOGIN [BUILTIN\Administrators] FROM WINDOWS WITH DEFAULT_DATABASE = [master]
-- Login: NT AUTHORITY\SYSTEM
CREATE LOGIN [NT AUTHORITY\SYSTEM] FROM WINDOWS WITH DEFAULT_DATABASE = [master]
Copy, paste above output in Stand by server instance and run for required logins.
Alternatively you can download the script from
http://support.microsoft.com/kb/918992/
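sp_help_revlogin also accepts an optional login name, so a single login can be scripted instead of all of them (MyAppLogin below is a hypothetical example name):

```sql
-- Generates the CREATE LOGIN statement for one login only
EXEC sp_help_revlogin @login_name = 'MyAppLogin';
```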
Steps to configure replication
1. Configuring distributor
2. Configuring publisher
3. Creating publication of the required type
4. Creating subscription(s)
Step 1: Configuring distributor and publisher
a. Take three instances.
b. Go to the second instance --> right click on Replication --> Configure Distribution.
c. Next --> select "SERVER2 will act as its own distributor".
d. Next.
e. Next.
f. Next.
g. Next.
h. Uncheck the check box present at Server2 --> Add.
i. Select instance Server1.
j. Next.
k. Enter a strong password. (Automatically one login is created in the distributor with the name Distributor_Admin.)
l. Next.
m. Next.
n. Next.
o. Finish.


Observations
* Go to the distributor --> Databases --> find the new database Distribution.
* Go to Security --> Logins --> find a new login Distributor_admin.
* Go to Server Objects --> Linked Servers --> find the new linked server repl_distributor.
* Right click on Replication --> select Distributor Properties. Transactions stored in the distribution database are removed after 72 hrs and agent history is removed after 48 hrs.
* To view the snapshot folder path: click on Publishers --> click on the browse button () present to the right side of the publisher name.
* Go to SQL Server Agent --> Jobs --> find 6 new jobs created automatically.
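The distributor settings shown in the wizard can also be inspected from T-SQL with the replication helper procedures (run on the publisher/distributor instance):

```sql
-- Distributor name, distribution database and working directory
EXEC sp_helpdistributor;
-- Distribution database settings, including the retention values
-- (72 h transactions / 48 h history by default)
EXEC sp_helpdistributiondb;
```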

Configuring Peer to Peer Replication


Steps
1. Connect to SSMS and take 2 instances.
2. Make both the instances publisher as well as distributor.
3. Go to the first instance and create one transactional publication from the required database.
4. Right click on the publication and take Properties.
5. Select Subscription Options and set Allow peer-to-peer subscriptions = True.
6. OK.
7. Go to Node1, i.e. the first server, and take a full backup of the Galactic database. (We have to take a full backup of the database from which the publication was created.)
use master
go
BACKUP DATABASE Galactic TO DISK='c:\backups\Galactic.bak'
Go


8. Now restore the backup on Node2 (server2). Create a folder for the data and transaction log files on server2 (e.g. C:\Galactic_Files) and grant read/write permissions to the server2 service account.
9. Restore the database: right click on Databases in the second server --> Restore Database...
10. To database = Galactic.
11. Select From device --> click on the browse button --> Add.
12. Select the backup file (c:\backups\Galactic.bak) --> OK --> OK.
13. Select the checkbox under the Restore option.
14. Click Options; under Restore As, change the file paths.
15. OK.
16. Right click on the publication and select Configure Peer-To-Peer Topology.
17. Click Next --> (select the publication) --> Next.
18. Right click on the surface and select Add New Peer Node.
19. Select the second instance --> Connect.
20. Select Database = Galactic and Peer Originator ID = 2 --> OK.
21. Right click on the first node --> select Connect to all displayed nodes --> Next.
22. Configure Log Reader Agent security --> Next; configure Distribution Agent security --> Next.
23. Select the first option --> Next --> Next --> Finish --> Close.
24. Test by performing changes on both the servers, and observe that the publication is created on the second node.
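A quick convergence test can be sketched as follows; the table and column names (emp, eno) are hypothetical and assume the emp table from the earlier examples is part of the Galactic publication:

```sql
-- On Node1 (server1)
INSERT INTO Galactic.dbo.emp VALUES (500001, 'FromNode1', 10);
-- On Node2 (server2)
INSERT INTO Galactic.dbo.emp VALUES (500002, 'FromNode2', 20);
-- After the distribution agents run, both rows should be
-- visible on both nodes:
SELECT * FROM Galactic.dbo.emp WHERE eno IN (500001, 500002);
```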
Transparent Data Encryption
* To provide security for data we can use the encryption option.
* To provide security for the data and T.Log files of a database, as well as its backups, we need TDE, which was introduced in SQL Server 2008.
Steps
1. Create a master key.
2. Create or obtain a certificate protected by the master key.
3. Create a database encryption key and protect it with the certificate.
4. Set the database you want to protect to use encryption.
--step1:

CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'p@ssw0rd';


GO
--step2:
CREATE CERTIFICATE MyTDECert WITH SUBJECT = 'My TDE Certificate'
GO
--step3: To check existing certificates
SELECT * FROM sys.certificates where [name] = 'MyTDECert'
GO
--step4:
Use AdventureWorks
GO
CREATE DATABASE ENCRYPTION KEY WITH ALGORITHM = AES_128
ENCRYPTION BY SERVER CERTIFICATE MyTDECert
GO
--step5
ALTER DATABASE AdventureWorks SET ENCRYPTION ON
GO
--Verifying TDE
--step1
* R.C on database --> Properties --> Options --> check Encryption Enabled: True
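The encryption state and progress can also be verified from the sys.dm_database_encryption_keys DMV (encryption_state 2 = encryption in progress, 3 = encrypted):

```sql
SELECT DB_NAME(database_id) AS database_name,
       encryption_state,
       percent_complete
FROM sys.dm_database_encryption_keys;
```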
--step2:
Find the current path of the files
sp_helpdb AdventureWorks
--step3: Detach database
use master
go
sp_detach_db AdventureWorks
go
--step4:
Copy data and T.Log files of AdventureWorks into
d:\AdventureWorks_Files
--step5: Connect to another instance of SQL Server
CREATE DATABASE [AdventureWorks] ON
( FILENAME = 'D:\AdventureWorks_Files\AdventureWorks.mdf'),
( FILENAME = 'D:\AdventureWorks_Files\AdventureWorks_log.ldf')
FOR ATTACH
GO
--The above step fails, because the database encryption key is protected by a certificate that does not exist on the second instance
--step6: Go to first server take backup of certificate
Use Master
GO
BACKUP CERTIFICATE MyTDECert TO FILE = 'D:\MyTDECert.cert'
WITH PRIVATE KEY

(
FILE = 'D:\EncryptPrivateKey.key',
ENCRYPTION BY PASSWORD = 'TryToUseOnlyStrongPassword'
)
GO
--step7: Go to second server where need to attach db
USE [master]
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'p@ssw0rd';
GO
CREATE CERTIFICATE MyTDECert
FROM FILE = 'D:\MyTDECert.cert'
WITH PRIVATE KEY (
FILE = 'D:\EncryptPrivateKey.key'
, DECRYPTION BY PASSWORD = 'TryToUseOnlyStrongPassword'
)
--step8: Run step 5 on the second server. Now the database is attached successfully.
