
Oracle Grid Infrastructure Architecture

Copyright 2010, Oracle and/or its affiliates. All rights reserved.

Objectives
After completing this lesson, you should be able to:
- Explain the principles and purposes of clusters
- Describe cluster hardware best practices
- Describe the Oracle Clusterware architecture
- Describe how Grid Plug and Play affects Clusterware
- Describe the Automatic Storage Management (ASM) architecture
- Describe the components of ASM

1-2


Oracle Grid Infrastructure

ASM and Oracle Clusterware are installed into a single home directory as part of Oracle Grid Infrastructure 11g Release 2. This directory is referred to as the Grid Infrastructure home.

1-3


Module 1: Oracle Clusterware Concepts

1-4


What Is a Cluster?
- A group of independent, but interconnected, computers that act as a single system
- Usually deployed to increase availability and performance, or to balance a dynamically changing workload

(Diagram: users reach the cluster over the public network; the nodes communicate over the interconnect and share a storage network.)

1-5


What Is Clusterware?
Software that provides various interfaces and services for a cluster. Typically, this includes capabilities that:
- Allow the cluster to be managed as a whole
- Protect the integrity of the cluster
- Maintain a registry of resources across the cluster
- Deal with changes to the cluster
- Provide a common view of resources

1-6


Oracle Clusterware
Oracle Clusterware is:
- A key part of Oracle Grid Infrastructure
- Integrated with Oracle Automatic Storage Management (ASM)
- The basis for ASM Cluster File System (ACFS)
- A foundation for Oracle Real Application Clusters (RAC)
- A generalized cluster infrastructure for all kinds of applications
1-7


Oracle Clusterware Architecture and Services


- Shared disk cluster architecture supporting application load balancing and failover
- Services include:
  - Cluster management
  - Node monitoring
  - Event services
  - Time synchronization
  - Network management
  - High availability

1-8


Goals for Oracle Clusterware


- Easier installation
- Easier management
- Continuing tight integration with Oracle RAC
- ASM enhancements with benefits for all applications
- No additional clusterware required

1-9


Oracle Clusterware Networking


- Each node must have at least two network adapters.
- Each public network adapter must support TCP/IP.
- The interconnect adapter must support:
  - User Datagram Protocol (UDP) or Reliable Datagram Sockets (RDS) on UNIX and Linux for database communication
  - TCP on Windows platforms for database communication
- All platforms use Grid Interprocess Communication (GIPC).


(Diagram: the cluster nodes are attached to both the public network and the private interconnect network.)
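Once Grid Infrastructure is installed, the role assigned to each interface can be checked with the oifcfg utility from the Grid home. A minimal check is sketched below; the sample output reuses the interface names and subnets that appear in the GPnP profile later in this lesson, purely as an illustration.

$ oifcfg getif
eth0  192.0.2.0    global  public
eth1  192.168.1.0  global  cluster_interconnect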


1 - 10


Interconnect Link Aggregation: Single Switch


- Link aggregation can be used to increase redundancy for higher availability with an Active/Standby configuration.
- Link aggregation can be used to increase bandwidth for performance with an Active/Active configuration.
(Diagram: on each node, two interconnect NICs are bonded as bond0 and attached to the interconnect switch; the left example shows an Active/Standby bond, the right example an Active/Active bond.)
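As an illustration of the Active/Standby case, a Linux bonding configuration might look like the sketch below. The file paths, NIC names, address, and option values are assumptions for a typical Red Hat-style system, not values defined in this lesson.

# cat /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=192.168.1.101            # private interconnect address of this node (example)
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=active-backup miimon=100"   # Active/Standby; an 802.3ad mode would give Active/Active

# cat /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
MASTER=bond0                    # enslave eth1 to bond0; eth2 is configured the same way
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none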

1 - 12


Interconnect Link Aggregation: Multiswitch


Redundant switches connected with an Inter-Switch Trunk may be used for an enhanced highly available design. This is the best practice configuration for the interconnect.

(Diagram: on each node, eth1 and eth2 are bonded as bond0 in an Active/Standby configuration, with each NIC cabled to a different switch; the two redundant switches are joined by an Inter-Switch Trunk.)

1 - 14


Interconnect NIC Guidelines


Optimal interconnect NIC settings can vary depending on the driver used. Consider the following guidelines:
- Configure the interconnect NIC on the fastest PCI bus.
- Ensure that NIC names and slots are identical on all nodes.
- Define flow control: receive=on, transmit=off (see the ethtool sketch below).
- Define the full bit rate supported by the NIC.
- Define full duplex autonegotiate.
- Ensure compatible switch settings:
  - If 802.3ad is used on the NIC, it must be used and supported on the switch.
  - The Maximum Transmission Unit (MTU) should be the same on the NIC and the switch.
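On Linux, the speed, duplex, and flow-control settings above can be inspected and adjusted with ethtool. A minimal sketch, assuming the interconnect NIC is eth1:

# ethtool eth1                  # reports link speed, duplex, and autonegotiation
# ethtool -a eth1               # reports current flow-control (pause) settings
# ethtool -A eth1 rx on tx off  # flow control: receive=on, transmit=off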

1 - 15

Driver settings can change between software releases.



Additional Interconnect Guidelines


- UDP socket buffer (rx):
  - Default settings are adequate for the majority of customers.
  - It may be necessary to increase the allocated buffer size when:
    - The MTU size has been increased
    - The netstat command reports errors
    - The ifconfig command reports dropped packets or overflow (see the checks sketched below)
- Jumbo frames:
  - Are not an Institute of Electrical and Electronics Engineers (IEEE) standard
  - Are useful for Network-Attached Storage (NAS)/iSCSI storage
  - Have network device interoperability concerns
  - Need to be configured with care and tested rigorously
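The buffer-related checks mentioned above can be run with standard Linux tools. A brief sketch; the kernel parameter value shown is only an example, not a recommendation from this lesson:

# netstat -su                            # look for UDP receive errors and buffer overflows
# ifconfig eth1                          # look for dropped packets and overruns on the interconnect NIC
# sysctl net.core.rmem_max               # current maximum socket receive buffer size
# sysctl -w net.core.rmem_max=4194304    # example of raising the limit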
1 - 16


Quiz
Each cluster node's public Ethernet adapter must support UDP or RDS.
1. True
2. False

1 - 17


Module 2: Oracle Clusterware Architecture

1 - 18


Oracle Clusterware Startup


- Oracle Clusterware is started by the OS init daemon.

(Diagram: the operating system init daemon runs the Clusterware startup script /etc/init.d/init.ohasd, which starts the Oracle Clusterware processes: ohasd.bin, octssd.bin, oraagent.bin, diskmon.bin, ocssd.bin, evmd.bin, cssdagent, orarootagent.bin, oclskd.bin, crsd.bin, gipcd.bin, mdnsd.bin, gpnpd.bin, and scriptagent.bin.)

- Oracle Clusterware installation modifies the /etc/inittab file to restart ohasd in the event of a crash.

# cat /etc/inittab
..
h1:35:respawn:/etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null
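Once the stack is up, its state can be verified with crsctl from the Grid home; a quick check (output not shown):

$ crsctl check crs     # reports whether OHAS, CRS, CSS, and EVM are online
$ crsctl stat res -t   # tabular status of all Clusterware-managed resources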


1 - 19


Oracle Clusterware Process Architecture


Clusterware processes are organized into several component groups. They include:

Component                                     Processes                       Owner
Cluster Ready Service (CRS)                   crsd                            root
Cluster Synchronization Service (CSS)         ocssd, cssdmonitor, cssdagent   grid owner, root, root
Event Manager (EVM)                           evmd, evmlogger                 grid owner
Cluster Time Synchronization Service (CTSS)   octssd                          root
Oracle Notification Service (ONS)             ons, eons                       grid owner
Oracle Agent                                  oraagent                        grid owner
Oracle Root Agent                             orarootagent                    root
Grid Naming Service (GNS)                     gnsd                            root
Grid Plug and Play (GPnP)                     gpnpd                           grid owner
Multicast domain name service (mDNS)          mdnsd                           grid owner

1 - 20
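On a running node, these daemons and the OS user each one runs as can be observed with standard tools; for example:

# ps -eo user,pid,cmd | egrep 'ohasd|crsd|ocssd|evmd|octssd|gpnpd|mdnsd|gipcd' | grep -v egrep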


Grid Plug and Play

In previous releases, adding or removing servers in a cluster required extensive manual preparation. In Oracle Database 11g Release 2, GPnP allows each node to perform the following tasks dynamically:
- Negotiating appropriate network identities for itself
- Acquiring additional information from a configuration profile
- Configuring or reconfiguring itself using profile data, making host names and addresses resolvable on the network

To add a node, simply connect the server to the cluster and allow the cluster to configure the node.

1 - 22


GPnP Domain
The GPnP domain is a collection of nodes belonging to a single cluster served by the GPnP service:
- Cluster name: cluster01
- Network domain: example.com
- GPnP domain: cluster01.example.com

Each node participating in a GPnP domain has the following characteristics:
- At least one routable interface with connectivity outside of the GPnP domain, for the public interface
- An identifier that is unique within the GPnP domain
- A personality affected by the GPnP profile, physical characteristics, and software image of the node

1 - 23


GPnP Components
Software image: a software image is a read-only collection of software to be run on nodes of the same type. At a minimum, the image must contain:
- An operating system
- The GPnP software
- A security certificate from the provisioning authority
- Other software required to configure the node when it starts up

1 - 24


GPnP Profile
The profile.xml file:
$ cat GRID_HOME/gpnp/profiles/peer/profile.xml
<?xml version="1.0" encoding="UTF-8"?>
<gpnp:GPnP-Profile Version="1.0" xmlns="http://www.grid-pnp.org/2005/11/gpnp-profile" ...
  xsi:schemaLocation="http://www.grid-pnp.org/2005/11/gpnp-profile gpnp-profile.xsd"
  ProfileSequence="4" ClusterUId="2deb88730e0b5f1bffc9682556bd548e"
  ClusterName="cluster01" PALocation="">
 <gpnp:Network-Profile>
  <gpnp:HostNetwork id="gen" HostName="*">
   <gpnp:Network id="net1" IP="192.0.2.0" Adapter="eth0" Use="public"/>
   <gpnp:Network id="net2" IP="192.168.1.0" Adapter="eth1" Use="cluster_interconnect"/>
  </gpnp:HostNetwork>
 </gpnp:Network-Profile>
 <orcl:CSS-Profile id="css" DiscoveryString="+asm" LeaseDuration="400"/>
 <orcl:ASM-Profile id="asm" DiscoveryString="/dev/sd*" SPFile="+data/spfile.ora"/>
 <ds:Signature ...>
  <ds:SignedInfo>
   <ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
   <ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/>
   <ds:Reference URI="">
    <ds:Transforms>
     <ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/>
     <ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#">
      <InclusiveNamespaces xmlns="http://www.w3.org/2001/10/xml-exc-c14n#" PrefixList="gpnp orcl xsi"/>
  ...
  <ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
  <ds:DigestValue>gIBakmtUNi9EVW/XQoE1mym3Bnw=</ds:DigestValue>
  ...
  <ds:SignatureValue>cgw3yhP/2oEm5DJzdachtfDMbEr2RSfFFUlZujLemnOgsM...=</ds:SignatureValue>
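The profile in use on a node can also be dumped with the gpnptool utility shipped in the Grid home; a minimal sketch, assuming an Oracle 11g Release 2 Grid Infrastructure installation:

$ gpnptool get     # prints the signed profile XML currently in use on this node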

1 - 25


Grid Naming Service


- GNS is an integral component of Grid Plug and Play.
- The only static IP address required for the cluster is the GNS virtual IP address.
- The cluster subdomain is defined as a delegated domain.

[root@my-dns-server ~]# cat /etc/named.conf
// Default initial "Caching Only" name server configuration
...
# Delegate to gns on cluster01
cluster01.example.com  #cluster sub-domain#  NS cluster01-gns.example.com
# Let the world know to go to the GNS vip
cluster01-gns.example.com  192.0.2.155  # cluster GNS Address

A request to resolve cluster01-scan.cluster01.example.com would be forwarded to the GNS on 192.0.2.155. Each node in the cluster runs a multicast DNS (mDNS) process.

1 - 26

Single Client Access Name


- The single client access name (SCAN) is the address used by clients connecting to the cluster.
- The SCAN is a fully qualified host name in the GNS subdomain that is registered to three IP addresses.
# dig @192.0.2.155 cluster01-scan.cluster01.example.com
...
;; QUESTION SECTION:
;cluster01-scan.cluster01.example.com.        IN    A
;; ANSWER SECTION:
cluster01-scan.cluster01.example.com.  120    IN    A    192.0.2.244
cluster01-scan.cluster01.example.com.  120    IN    A    192.0.2.246
cluster01-scan.cluster01.example.com.  120    IN    A    192.0.2.245
;; AUTHORITY SECTION:
cluster01.example.com.                 10800  IN    A    192.0.2.155
;; SERVER: 192.0.2.155#53(192.0.2.155)

The SCAN provides a stable, highly available name for clients to use, independent of the nodes that make up the cluster.
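For example, a client could connect through the SCAN with an easy connect string; the user and the service name (orcl) are placeholders, not objects defined in this lesson:

$ sqlplus scott/tiger@//cluster01-scan.cluster01.example.com:1521/orcl

The SCAN listener that answers hands the connection off to a local listener on one of the cluster nodes.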

1 - 27

GPnP Architecture Overview


(Diagram: a client resolves the SCAN name, plus the port and service, through DNS; DNS delegates the cluster subdomain to the GNS static VIP, and GNS returns the three SCAN VIPs and resolves node VIP names. Node VIPs and SCAN VIPs are dynamic addresses obtained from DHCP by orarootagent, the node and SCAN VIP agent. Each node runs GPnP and mDNS daemons that handle GPnPd discovery and replicate profile.xml across the cluster. The SCAN listeners (remote_listener) balance the load and hand each connection to the local listener (local_listener) on the least-loaded node offering the requested service.)

1 - 29


How GPnP Works: Cluster Node Startup


1. IP addresses are negotiated for public interfaces using DHCP:
   - VIPs
   - SCAN VIPs
2. A GPnP agent is started from the node's Clusterware home.
3. The GPnP agent either gets its profile locally or from one of the peer GPnP agents that responds.
4. Shared storage is configured to match profile requirements.
5. Service startup is specified in the profile, which includes:
   - Grid Naming Service for external names resolution
   - Single client access name (SCAN) listener
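Once a node is up, the resources that GPnP manages can be confirmed with srvctl; a brief sketch (output not shown):

$ srvctl config gns             # GNS configuration, including its static VIP
$ srvctl config scan            # the SCAN name and its three VIPs
$ srvctl status scan_listener   # state of the SCAN listeners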
1 - 31


How GPnP Works: Client Database Connections


(Diagram: a database client resolves the SCAN through DNS and GNS, reaches one of the three SCAN listeners, and is handed off to a local listener: Listener1, Listener2, or Listener3.)

1 - 32


Quiz
The init.ohasd entry in the /etc/inittab file is responsible for:
1. Starting Oracle Clusterware when the node boots
2. Mounting shared volumes as required by Oracle Clusterware
3. Managing node evictions
4. Restarting ohasd in the event of a crash

1 - 33


Quiz
Which of the following statements regarding Grid Naming Service is not true?
1. GNS is an integral component of Grid Plug and Play.
2. Each node in the cluster runs a multicast DNS (mDNS) process.
3. The GNS virtual IP address must be assigned by DHCP.
4. The cluster subdomain is defined as a delegated domain.

1 - 34


Module 3: ASM Architecture

1 - 35


What Is Oracle ASM?

(Diagram: a traditional storage stack places a file system and a logical volume manager between the application and the operating system and hardware; with ASM, the application sits directly on ASM, which replaces the file system and logical volume manager layers on top of the operating system and hardware.)

1 - 36


ASM and ASM Cluster File System


- ASM manages Oracle database files.
- ACFS manages other files.
- Spreads data across disks to balance load
- Provides integrated mirroring across disks
- Solves many storage management challenges

(Diagram: databases store their files directly in ASM; applications use ACFS on ASM Dynamic Volume Manager (ADVM) volumes, or a third-party file system, on top of the operating system.)

1 - 37


ASM Key Features and Benefits


- Stripes files rather than logical volumes
- Provides redundancy on a file basis
- Enables online disk reconfiguration and dynamic rebalancing
- Significantly reduces the time to resynchronize after a transient failure by tracking changes while a disk is offline
- Provides adjustable rebalancing speed
- Is cluster-aware
- Supports reading from a mirrored copy instead of the primary copy in extended clusters
- Is automatically installed as part of the Grid Infrastructure

1 - 39


ASM Instance Designs: Nonclustered ASM and Oracle Databases


(Diagram: on a single-instance database server, one ASM instance serves the Oracle DB A, DB B, and DB C instances, which store their files in ASM disk groups A and B.)

1 - 40


ASM Instance Designs: Clustered ASM for Clustered Databases


(Diagram: on Oracle RAC servers, ASM instances 1 through 4 run one per node and serve the clustered Oracle DB A and DB B instances, all sharing ASM disk groups A and B.)

1 - 41


ASM Instance Designs: Clustered ASM for Mixed Databases


(Diagram: a clustered ASM configuration of ASM instances 1 through 3 on database servers serves single-instance databases DB A and DB B alongside clustered database DB C instances 1 and 2, all sharing ASM disk groups A and B.)

1 - 42


ASM System Privileges


An ASM instance does not have a data dictionary, so the only way to connect to ASM is by using these system privileges:

ASM Privilege   Privilege Group   Privilege
SYSASM          OSASM             Full administrative privilege
SYSDBA          OSDBA for ASM     Access to data stored on ASM; create and delete files; grant and revoke file access
SYSOPER         OSOPER for ASM    Limited privileges to start and stop the ASM instance, along with a set of nondestructive ALTER DISKGROUP commands

The SYS user on ASM is automatically created with the SYSASM privilege.

1 - 43

ASM OS Groups with Role Separation


To separate the duties of ASM administrators and DBAs, there are six OS groups:
Group                For    Example OS Group   Privilege
OSASM                ASM    asmadmin           SYSASM
OSDBA                ASM    asmdba             SYSDBA
OSOPER               ASM    asmoper            SYSOPER
oraInventory group   Both   oinstall           -
OSDBA                DB     dba                SYSDBA
OSOPER               DB     oper               SYSOPER

1 - 44


Authentication for Accessing ASM Instances


There are three modes of connecting to ASM instances:
- Local connection using operating system authentication:

  $ sqlplus / AS SYSASM
  SQL> CONNECT / AS SYSOPER

- Local connection using password file authentication:

  $ sqlplus fred/xyzabc AS SYSASM
  SQL> CONNECT bill/abc123 AS SYSASM

- Remote connection using Oracle Net Services and password authentication:

  $ sqlplus bill/abc123@asm1 AS SYSASM
  SQL> CONNECT fred/xyzabc@asm2 AS SYSDBA


1 - 45


Password-Based Authentication for ASM


- Password-based authentication:
  - Uses a password file
  - Can work both locally and remotely
- REMOTE_LOGIN_PASSWORDFILE must be set to a value other than NONE to enable remote password-based authentication.
- A password file is created initially:
  - By Oracle Universal Installer when installing ASM
  - Manually with the orapwd utility
  - Containing only the SYS and ASMSNMP users
- Users can be added to the password file using:
  - The SQL*Plus GRANT command
  - The ASMCMD orapwuser command

1 - 46


Managing the ASM Password File


For the ASM instance, the password file:
- Can be created by a user that owns the ASM software
- Holds roles assigned to users
- Is required for Oracle Enterprise Manager to connect to ASM remotely
- Can be viewed from:
  - SQL*Plus: SELECT * FROM V$PWFILE_USERS
  - ASMCMD: lspwusr
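For example, both views of the password file can be taken from a node running ASM; a short illustration (output not shown):

$ sqlplus / AS SYSASM
SQL> SELECT * FROM V$PWFILE_USERS;    -- lists each user with its SYSDBA/SYSOPER/SYSASM flags

$ asmcmd lspwusr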

1 - 47


Using a Single OS Group

Role/Software                                               Software Owner   Groups/Privileges
Oracle ASM administrator/Oracle Grid Infrastructure home    oracle           dba/SYSASM, SYSDBA, SYSOPER
Database administrator 1/Database home 1                    oracle           dba/SYSASM, SYSDBA, SYSOPER
Database administrator 2/Database home 2                    oracle           dba/SYSASM, SYSDBA, SYSOPER
Operating system disk device owner                          oracle           dba

1 - 48


Using Separate OS Groups


Role/Software                                               Software Owner   Groups/Privileges/OS Group
Oracle ASM administrator/Oracle Grid Infrastructure home    grid             asmadmin/SYSASM/OSASM, asmdba/SYSDBA/OSDBA for ASM, asmoper/SYSOPER/OSOPER for ASM
Database administrator 1/Database home 1                    oracle1          asmdba/SYSDBA, dba1/SYSDBA for db1, oper1/SYSOPER for db1
Database administrator 2/Database home 2                    oracle2          asmdba/SYSDBA, dba2/SYSDBA, oper2/SYSOPER
Operating system disk device owner                          grid             asmadmin

1 - 49


ASM Components: Software


For the ASM software installation:
- The directories are located by operating system environment variables:
  - ORACLE_BASE is the top-level directory for a particular software owner.
  - ORACLE_HOME identifies the top-level directory of the Grid Infrastructure software.
- Use a common ORACLE_BASE for all Oracle products owned by the same user.
- Use an ORACLE_HOME location that is isolated from other Oracle products, even if they are the same version.
- Do not place the Grid ORACLE_HOME below ORACLE_BASE.
- ORACLE_HOME requires 3 GB to 5 GB of disk space.

1 - 50
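As an illustration only, a typical environment for the Grid software owner might look like the sketch below; the paths follow common conventions and are not values defined in this lesson.

$ export ORACLE_BASE=/u01/app/grid          # top-level directory for the grid owner (example path)
$ export ORACLE_HOME=/u01/app/11.2.0/grid   # Grid Infrastructure home, outside ORACLE_BASE (example path)
$ export PATH=$ORACLE_HOME/bin:$PATH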

ASM Components: ASM Instance


The ASM instance comprises the process and memory components for ASM.
(Diagram: the ASM instance consists of System Global Area (SGA) memory, including the shared pool, large pool, ASM cache, and free memory, plus CPU or process components: RBAL, MARK, ARBn, GMON, Onnn, PZ9n, and other miscellaneous processes.)

1 - 51


ASM Components: ASM Instance Primary Processes


The ASM instance primary processes are responsible for ASM-related activities.

Process   Description
RBAL      Opens all device files as part of discovery and coordinates the rebalance activity
ARBn      One or more slave processes that do the rebalance activity
GMON      Responsible for managing disk-level activities such as drop or offline and advancing the ASM disk group compatibility
MARK      Marks ASM allocation units as stale when needed
Onnn      One or more ASM slave processes forming a pool of connections to the ASM instance for exchanging messages
PZ9n      One or more parallel slave processes used to fetch data from GV$ views on clustered ASM installations

1 - 53


ASM Components: Node Listener


The node listener is a process that helps establish network connections from ASM clients to the ASM instance. It:
- Runs by default from the Grid $ORACLE_HOME/bin directory
- Listens on port 1521 by default
- Is the same as a database instance listener
- Is capable of listening for all database instances on the same machine in addition to the ASM instance
- Can run concurrently with separate database listeners or be replaced by a separate database listener
- Is named tnslsnr on the Linux platform
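The node listener can be checked like any other Oracle listener; for example:

$ lsnrctl status    # run from the Grid home; lists the services registered with the listener, including the ASM instance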

1 - 54


ASM Components: Configuration Files


The ASM software installation uses several configuration files to define the environment.
- ASM instance configuration files include:
  - The server parameter file (SPFILE), which initializes the ASM instance and defines startup parameters
  - orapw+ASM, the binary password file used for remote authentication to the ASM instance
- Node listener configuration files include:
  - listener.ora, a text file that defines the node listener
  - sqlnet.ora, an optional text file that provides additional listener options
- Other miscellaneous text configuration files include:
  - /etc/oratab, which lists all the instances on the host machine
  - /etc/oraInst.loc, which defines the Oracle inventory directory

1 - 55


ASM Components: Group Services


- Group services provided by Oracle Clusterware allow cooperating applications to communicate in a peer environment.
- Group services for the Oracle environment:
  - Provide information necessary to establish connections
  - Provide assistance in doing lock recovery
  - Guarantee ASM disk group number uniqueness
  - Monitor node membership, evictions, and cluster locks
- Oracle Clusterware is responsible for:
  - Starting and stopping ASM instances
  - Starting and stopping dependent database instances
  - Mounting and dismounting disk groups
  - Mounting and dismounting ACFS volumes

1 - 56

ASM Components: ASM Disk Group


The ASM disk group is the fundamental object that ASM manages; it:
- Consists of one or more ASM disks that provide space
- Includes self-contained metadata and logging information for management of space within each disk group
- Is the basis for storage of ASM files
- Supports ASM files from multiple databases
- Supports three disk group redundancy levels:
  - Normal defaults to internal two-way mirroring of ASM files.
  - High defaults to internal three-way mirroring of ASM files.
  - External uses no ASM mirroring and relies on external disk hardware or a redundant array of inexpensive disks (RAID) to provide redundancy.

1 - 57



ASM Disk Group: Failure Groups


- A failure group is a subset of the disks in a disk group that could fail at the same time because of shared hardware.
- Failure groups enable the mirroring of metadata and user data.
- The default failure group creation puts every disk in its own failure group.
- Multiple disks can be placed in a single failure group at disk group creation.
- Failure groups apply only to normal and high redundancy disk groups:
  - A normal redundancy disk group requires at least two failure groups to implement two-way mirroring of files.
  - A high redundancy disk group requires at least three failure groups to implement three-way mirroring of files.
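As an illustrative sketch (the disk group name, device paths, and failure group names are assumptions, not objects from this lesson), a normal redundancy disk group with two failure groups could be created as follows:

SQL> CREATE DISKGROUP data NORMAL REDUNDANCY
       FAILGROUP fg1 DISK '/dev/sdb1', '/dev/sdc1'
       FAILGROUP fg2 DISK '/dev/sdd1', '/dev/sde1';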
1 - 58


ASM Components: ASM Disks


- ASM disks are the storage devices provisioned to ASM disk groups.
- Are formed from five sources, as follows:
  - A disk or partition from a storage array
  - An entire physical disk or partitions of a physical disk
  - Logical volumes (LV) or logical units (LUN)
  - Network File System (NFS) files
  - Exadata grid disks
- Are named when added to a disk group, using a different name than the operating system device name
- May use different operating system device names on different nodes in a cluster for the same ASM disk
- Are divided into allocation units (AU), with allowed sizes of 1, 2, 4, 8, 16, 32, or 64 MB

1 - 59

ASM Components: ASM Files


- ASM files are a limited set of file types stored in an ASM disk group. Some supported file types:
  - Control files
  - Data files
  - Temporary data files
  - Online redo logs
  - Archive logs
  - Flashback logs
  - DB SPFILE
  - Data Pump dump sets
  - Data Guard configuration
  - RMAN backup sets
  - RMAN data file copies
  - Transport data files
  - Change tracking bitmaps
  - OCR files
  - ASM SPFILE
- Are stored as a set or collection of data extents
- Are striped across all disks in a disk group
- Use names that begin with a plus sign (+), which are automatically generated or from user-defined aliases

1 - 60

ASM Files: Extents and Striping


- ASM can use variable-size data extents to support larger files, reduce memory requirements, and improve performance.
- Each data extent resides on an individual disk.
- Data extents consist of one or more allocation units.
- The data extent size is:
  - Equal to 1 AU for the first 20,000 extents (0-19999)
  - Equal to 4 AU for the next 20,000 extents (20000-39999)
  - Equal to 16 AU for extents above 40,000
- ASM stripes files using extents, with a coarse method for load balancing or a fine method to reduce latency.
- Coarse-grained striping is always equal to the effective AU size.
- Fine-grained striping is always equal to 128 KB.
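A quick worked example, assuming a 1 MB allocation unit: the first 20,000 extents are 1 AU each and cover the first 20,000 MB (roughly 19.5 GB) of a file; the next 20,000 extents are 4 AU (4 MB) each and cover a further 80,000 MB; every extent beyond that is 16 AU (16 MB).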
1 - 61


ASM Files: Mirroring


- ASM mirroring is specified at the file level. Two files can share the same disk group with one file being mirrored while the other is not.
- ASM allocates the extents for a file with the primary and mirrored copies in different failure groups.
- The mirroring options for the ASM disk group types are:

Disk Group Type       Supported Mirroring Levels               Default Mirroring Level
External redundancy   Unprotected (none)                       Unprotected (none)
Normal redundancy     Two-way, three-way, unprotected (none)   Two-way
High redundancy       Three-way                                Three-way

1 - 62


ASM Components: ASM Clients


- Any active database instance that is using ASM storage and is currently connected to the ASM instance is an ASM client.
- ASM clients are tracked in the V$ASM_CLIENT dynamic performance view.
- Each file in ASM is associated with a single database.

(Diagram: one ASM instance serves the Oracle DB A, DB B, and DB C instances as clients, and also supports the OCR, ADVM volumes, and mounted ACFS file systems.)
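For example, the connected clients can be listed from the ASM instance; a minimal query using a few of the view's columns:

SQL> SELECT instance_name, db_name, status FROM V$ASM_CLIENT;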

1 - 63


ASM Components: ASM Utilities


Many utilities can be used for ASM administration. These utilities include:
- Oracle Universal Installer (OUI)
- ASM Configuration Assistant (ASMCA)
- Oracle Enterprise Manager (EM)
- SQL*Plus
- ASM Command-Line utility (ASMCMD)
- Listener control utility (lsnrctl)
- Server control utility (srvctl)
- XML DB (FTP and HTTP)

1 - 64


ASM Scalability
ASM imposes the following limits:
- 63 disk groups in a storage system
- 10,000 ASM disks in a storage system
- Two-terabyte maximum storage for each ASM disk (non-Exadata)
- Four-petabyte maximum storage for each ASM disk (Exadata)
- 40-exabyte maximum storage for each storage system
- 1 million files for each disk group
- ASM file size limits (database limit is 128 TB):
  - External redundancy: maximum file size is 140 PB
  - Normal redundancy: maximum file size is 42 PB
  - High redundancy: maximum file size is 15 PB
1 - 65


Summary
In this lesson, you should have learned how to:
- Explain the principles and purposes of clusters
- Describe cluster hardware best practices
- Describe the Oracle Clusterware architecture
- Describe how Grid Plug and Play affects Clusterware
- Describe the ASM architecture
- Describe the components of ASM

1 - 66

