Oracle Databases on VMware Best Practices Guide
Oracle Workloads
Version 1.0
May 2016
VMware, Inc.
3401 Hillview Ave
Palo Alto, CA 94304
www.vmware.com
Contents
1. Introduction
2. vSphere
3. VMware Support for Oracle Databases on vSphere
3.1 VMware Oracle Support Policy
3.2 VMware Oracle Support Process
List of Figures
Figure 1. VMware vSphere Oracle Support Process
Figure 2. Example of a 12 vCPU Wide Virtual Machine
Figure 3. Different Layers of Storage Technology
Figure 4. VMFS Multi-Writer Flag Supported/Unsupported Features
Figure 5. Virtual SAN Cluster Datastore
Figure 6. Virtual SAN Cluster for Oracle RAC
Figure 7. Example Network Layout of Oracle RAC Database on VMware
Figure 8. Standard Switch
Figure 9. Distributed Switch
Figure 10. Architecture Deploying VMware NSX
Figure 11. Jumbo Frame Network
Figure 12. Oracle Database Details (1 of 3)
Figure 13. Oracle Database Details (2 of 3)
Figure 14. Oracle Database Details (3 of 3)
Figure 15. Top 5 Timed Foreground Events Listing
Figure 16. vSphere Data Protection
Figure 18. Heartbeat and Status Signals
Figure 19. vSphere Fault Tolerance
Figure 20. vSphere Metro Storage Cluster
Figure 23. Oracle Data Guard Based Replication of Oracle Databases
Figure 24. VMware Continuent Features
Figure 26. vSphere Based Replication of Virtual Machine Containing Oracle Databases
Figure 27. Storage-Based Replication of Virtual Machine Containing Oracle Databases
Figure 28. Site Recovery Manager Components
Figure 29. EVO:RAIL Capabilities
Figure 30. EVO:RAIL Hardware System Details
Figure 31. EVO:RACK Configuration
Figure 32. EVO:RACK Model
Figure 33. Virtual Machine Memory Settings
List of Tables
Table 1. Host Systems Best Practices
Table 2. BIOS Settings Maximized for Performance
Table 3. Virtual CPU-Related Best Practices
Table 4. resxtop NUMA Metrics
Table 5. Memory-Related Best Practices
Table 6. Recommended Storage Best Practices
Table 7. VMFS and Raw Disk Mapping Trade-Offs
Table 8. Recommended Networking-Related Best Practices
Table 9. Recommendation for Host Processes
Table 10. Virtual Machine Timekeeping Best Practice Recommendation
Table 11. Performance Monitoring Recommendation
Table 12. ESX/ESXi Performance Counters
Table 13. High Availability Best Practice Recommendation
1. Introduction
This Oracle Databases on VMware Best Practices Guide provides best practice guidelines for deploying
Oracle databases on VMware vSphere®. The recommendations in this guide are not specific to any
particular set of hardware, or size and scope of any particular Oracle database implementation. The
examples and considerations provide guidance, but do not represent strict design requirements.
The successful deployment of Oracle on vSphere 5.x/6.0 is not significantly different from deploying
Oracle on physical servers. DBAs can fully leverage their current skill set while also delivering the benefits
associated with virtualization.
In addition to this guide, VMware has created separate best practice documents for storage, networking,
and performance.
This document also includes information from two white papers, Performance Best Practices for VMware vSphere 5.5 (https://www.vmware.com/pdf/Perf_Best_Practices_vSphere5.5.pdf) and Performance Best Practices for VMware vSphere 6.0 (http://www.vmware.com/files/pdf/techpaper/VMware-PerfBest-Practices-vSphere6-0.pdf).
See Section 17, References for a list of other documents that can help you successfully deploy Oracle on
vSphere.
2. vSphere
VMware virtualization solutions provide numerous benefits to database administrators (DBAs). VMware virtualization
creates a layer of abstraction between the resources required by an application and operating system,
and the underlying hardware that provides those resources. This abstraction layer provides value for the
following:
• Consolidation – VMware technology allows multiple application servers to be consolidated onto one
physical server, with little or no decrease in overall performance.
• Ease of provisioning – VMware virtualization encapsulates an application into an image that can be
duplicated or moved, greatly reducing the cost of application provisioning and deployment.
• Manageability – Virtual machines can be moved from server to server with no downtime using
VMware vSphere® vMotion®, which simplifies common operations such as hardware maintenance,
and reduces planned downtime.
• Availability – If an unplanned hardware failure occurs, VMware vSphere High Availability (HA) restarts
affected virtual machines on another host in a VMware cluster. With VMware HA you can reduce
unplanned downtime and provide higher service levels to an application. VMware vSphere Fault
Tolerance (FT) features zero downtime and zero data loss, providing continuous availability in the
face of server hardware failures for any application running in a virtual machine.
4. Server Guidelines
4.1 General Guidelines
The following table lists general best practices for host systems.
Table 1. Host Systems Best Practices
Recommendation: Create a computing environment optimized for vSphere.
Justification: The VMware ESX® or VMware ESXi™ host BIOS settings can be adjusted specifically to maximize the compute resources available to Oracle databases (for example, by disabling unnecessary processes and peripherals).

Recommendation: Create golden images of optimized operating systems using vSphere cloning technologies.
Justification: After the operating system has been prepared with the appropriate patches and kernel settings, Oracle can be installed in a virtual machine the same way it is installed on a physical system. This speeds up the installation of a new database.

Recommendation: Upgrade to the latest version of ESXi and vSphere.
Justification: VMware and database administrators can realize a significant performance boost after upgrading from prior versions to the latest vSphere release.

Recommendation: Allow vSphere to choose the best virtual machine monitor based on the CPU and guest operating system combination.
Justification: Confirm that the virtual machine has Automatic selected for the CPU/MMU Virtualization option.

Recommendation: Verify that all hardware in the system is on the hardware compatibility list for the specific version of VMware software you will be running.
Justification: Verify that the hardware meets the minimum configuration supported by the VMware software installed.

The following BIOS settings (excerpted from Table 2. BIOS Settings Maximized for Performance) are also recommended:

Setting: Power Management – OS Controlled Mode
Justification: Allow ESXi to control CPU power-saving features.

Setting: Execute Disable – Yes
Justification: Required for VMware vSphere vMotion® and VMware vSphere Distributed Resource Scheduler™ (DRS) features.
Supported network throughput by vSphere release:
• vSphere 4.0: 30 Gb/s
• vSphere 5.0: > 36 Gb/s
• vSphere 5.1: > 36 Gb/s
• vSphere 5.5: > 40 Gb/s
• vSphere 6.0: > 80 Gb/s
vSphere supports large capacity virtual machines, so it can accommodate larger-sized Oracle databases
and SGA footprints. vSphere 6.0 host and virtual machine specifications are as follows:
• Each ESXi host supports up to 6 TB RAM, 1024 virtual machines, and 4096 virtual CPUs.
• Each virtual machine can support up to 128 virtual CPUs and 4 TB RAM.
For more information, see the Configuration Maximums documentation listed in Appendix C.
Table 3. Virtual CPU-Related Best Practices

Recommendation: Do not over-allocate vCPUs; try to match the exact workload.
Justification: If monitoring of the actual workload shows that the Oracle database is not benefitting from the added virtual CPUs, the excess vCPUs impose scheduling constraints and can degrade overall performance of the virtual machine.

Recommendation: Enable hyperthreading for Intel Core i7 processors.
Justification: With the release of Intel Xeon 5500 series processors, enabling hyperthreading is recommended.

Recommendation: Keep the default of one core per socket so that vNUMA matches the physical NUMA topology, and try to align VMs with physical NUMA boundaries.
Justification: If vNUMA differs from the actual physical NUMA topology, it might result in NUMA imbalance and can degrade overall performance of the virtual machine.

Recommendation: Leave the Latency Sensitivity setting at the default of Normal.
Justification: Examples of applications that require the setting to be High include VOIP, media player applications, and applications that require frequent access to mouse or keyboard devices.
VMware recommends the following practices for allocating CPU to Oracle Business Critical Application
(BCA) database virtual machines:
• Start with a thorough understanding of your workload. Database server utilization varies widely by
application.
o If the application is commercial, follow published guidelines where appropriate.
o If the application is custom-written, work with the application developers to determine resource
requirements.
• Capacity Planner can analyze your current environment and provide resource utilization metrics that
can aid in the sizing process.
• If the exact workload is not known, start with fewer virtual CPUs and increase the number later if
necessary. Allocate multiple vCPUs to a virtual machine only if the anticipated database workload can
take advantage of all the vCPUs.
• If unsure of the workload, use hardware vendor recommended Oracle sizing guidelines.
• After the workload is established, vCPU overcommitment can be introduced with caution. Verify that proper monitoring processes and procedures are in place to monitor the system.
• For Tier 1 production BCA databases, the recommendation is to avoid overcommitment of processor resources (maintain a 1:1 ratio of physical cores to vCPUs).
• For lower-tiered workloads, reasonable overcommitment can increase aggregate throughput and maximize license savings. The consolidation ratio varies depending on workloads.
• When consolidating multiple virtual machines on a single ESX/ESXi host, proper hardware sizing is critical for optimal performance. Confirm that the cumulative physical CPU resources on a host are adequate to meet the needs of the virtual machines by testing your workload in the planned virtualized environment (see the sizing sketch after this list).
• Base CPU overcommitment on actual performance data to avoid adversely affecting virtual machine performance.
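The following minimal sketch illustrates the vCPU-to-physical-core check; the VM inventory and core count are hypothetical placeholders, and the printed verdicts simply restate the tiering guidance above.

    # Minimal vCPU-to-physical-core sizing check (illustrative values only).
    # planned_vms maps VM names to vCPU counts; host_cores is the number of
    # physical cores on the ESX/ESXi host (hyperthreads not counted).
    planned_vms = {"oradb-prod01": 8, "oradb-prod02": 8, "oradb-dev01": 4}
    host_cores = 20

    total_vcpus = sum(planned_vms.values())
    ratio = total_vcpus / host_cores
    print(f"Total vCPUs: {total_vcpus}, physical cores: {host_cores}, ratio {ratio:.2f}:1")
    if ratio > 1.0:
        print("Overcommitted: acceptable only for lower-tier workloads with monitoring in place.")
    else:
        print("1:1 or better: suitable for Tier 1 production BCA databases.")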
5.3 Hyperthreading
Hyperthreading enables a single physical processor core to behave like two logical processors, allowing
two independent threads to run simultaneously. Unlike having twice as many processor cores—which can
roughly double performance—hyperthreading can provide anywhere from a slight to a significant increase
(up to 24 percent) in system performance by keeping the processor pipeline busier.
For example, an ESX/ESXi host with hyperthreading enabled on an 8-core server sees 16 logical processors. With the release of Intel Xeon 5500 series processors,
enabling hyperthreading is recommended. Prior to the 5500 series, VMware had no uniform
recommendation with respect to hyperthreading because the measured performance results were not
consistent across applications, run environments, or database workloads.
Avoid CPU affinity on systems with hyperthreading. Pinning vCPUs from one or more SMP virtual machines to both logical processors on a single core causes poor performance because the logical processors share core resources. CPU affinity also prevents the NUMA scheduler from rebalancing virtual machines across NUMA nodes for fairness.
The recommendation is to keep the default of one core per socket (with the number of virtual sockets
therefore equal to the number of vCPUs) for vNUMA to match physical NUMA topology. Try to align VMs
with physical NUMA boundaries.
Factors to keep in mind about vNUMA include the following:
• vNUMA requires virtual hardware version 8 or later.
• vNUMA is enabled by default for virtual machines with more than eight vCPUs (set numa.vcpu.min to a lower value if there is a need to expose vNUMA to a guest with fewer vCPUs).
• Hot add CPU – When vCPU hot plug is enabled, the virtual machine starts without virtual NUMA and instead uses uniform memory access with interleaved memory access. See vNUMA is disabled if VCPU hotplug is enabled (http://kb.vmware.com/kb/2040375).
• Use esxtop/resxtop to monitor NUMA performance at the vSphere level. The following table lists a few resxtop metrics to consider. NMIG is the key metric to examine for NUMA imbalances.
Table 4. resxtop NUMA Metrics

Metric: GST_NDx (MB)
Explanation: The guest memory allocated for the VM on NUMA node x, where "x" is the node number.

Metric: OVD_NDx (MB)
Explanation: The VMM overhead memory allocated for the VM on NUMA node x.
For additional details, refer to “ESXi CPU Considerations” in Performance Best Practices for VMware
vSphere 6.0 (http://www.vmware.com/pdf/Perf_Best_Practices_vSphere6.0.pdf).
In some circumstances, enabling Oracle NUMA support might improve performance. The Oracle documentation suggests testing it in a test environment before deciding to use it with a production system.
VMware recommends keeping NUMA enabled in the server hardware BIOS and at the guest operating system level; these are also the default NUMA settings on most servers and guest operating systems.
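The NUMA alignment rule above can be illustrated with a minimal sketch; the host topology values below are hypothetical placeholders that would come from your own hardware inventory or esxtop.

    # Check whether a proposed Oracle VM fits within one physical NUMA node.
    # Hypothetical host: 2 sockets x 10 cores, 128 GB RAM per NUMA node.
    cores_per_node = 10
    mem_per_node_gb = 128

    def fits_in_numa_node(vcpus: int, vram_gb: int) -> bool:
        """True if the VM can be scheduled entirely inside one NUMA node."""
        return vcpus <= cores_per_node and vram_gb <= mem_per_node_gb

    for vcpus, vram in [(8, 96), (12, 96), (8, 160)]:
        verdict = "fits in one node" if fits_in_numa_node(vcpus, vram) else "spans nodes - rely on vNUMA alignment"
        print(f"{vcpus} vCPU / {vram} GB: {verdict}")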
6. Memory Guidelines
6.1 General Guidelines for Memory
The following table lists memory-related best practices.
Table 5. Memory-Related Best Practices

Recommendation: Set the memory reservation equal to the sum of the size of the Oracle SGA, the Oracle PGA, the Oracle background processes' stack space, and operating system used memory.
Justification: Because Oracle databases can be memory-intensive, the memory reservation should be large enough to avoid kernel swapping between ESX and the guest OS.

Recommendation: Use large memory pages.
Justification: Large page support is enabled by default in ESX 3.5 and later, and is supported from Oracle 9i R2 for Linux operating systems and 10g R2 for Windows. Enable large pages in the guest OS to improve the performance of Oracle databases on vSphere.

Recommendation: Do not turn off ESXi memory management mechanisms unless directed by VMware support.
Justification: ESXi uses various memory management mechanisms to reclaim virtual machine memory when under memory duress. Disabling them can lead to performance degradation and a possible crash.
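As a minimal arithmetic sketch of the reservation rule in Table 5 (all values are hypothetical placeholders, not recommendations):

    # Memory reservation sizing sketch for an Oracle VM (example values).
    sga_gb = 48              # Oracle SGA
    pga_gb = 12              # Oracle PGA (aggregate target)
    bg_stack_gb = 2          # Oracle background processes' stack space
    os_used_gb = 4           # guest operating system used memory

    reservation_gb = sga_gb + pga_gb + bg_stack_gb + os_used_gb
    print(f"Set the VM memory reservation to at least {reservation_gb} GB")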
Appendix A provides a description of virtual machine memory settings that are discussed in this section.
For further background on VMware memory management concepts, see vSphere Resource Management
(https://pubs.vmware.com/vsphere-60/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-60-
resource-management-guide.pdf).
In production environments, carefully consider the impact of overcommitting memory and overcommit
only after collecting data to determine the amount of overcommitment possible. To determine the
effectiveness of memory sharing and the degree of acceptable overcommitment for a given database, run
the workload and use resxtop or esxtop to observe the actual savings.
The general recommendation is to not disable any of the above memory management mechanisms, including the balloon driver, transparent page sharing (TPS), and memory compression.
Starting with vSphere 6.0, hot-adding memory to a virtual machine will distribute the new memory regions
across the vNUMA nodes.
In environments where some memory overcommitment is acceptable, a DBA can introduce it to take advantage of VMware memory reclamation features and techniques. Even in these environments, the type and number of databases that can be deployed using overcommitment largely depend on their actual workload.
In addition to the usual 4 KB memory pages, ESXi also provides 2 MB memory pages. These large pages improve performance by significantly reducing TLB misses, especially for applications with large active memory working sets.
ESXi uses large pages to back the guest operating system memory pages even if the guest operating system does not make use of large memory pages, but the full benefit occurs when the guest operating system uses them as well. ESXi does not share large pages unless it is under memory pressure. See Transparent Page Sharing (TPS) in hardware MMU systems (https://kb.vmware.com/kb/1021095) and Use of large pages can cause memory to be fully allocated (https://kb.vmware.com/kb/1021896).
VMware recommends using huge memory pages at the guest operating system level.
Huge pages are not compatible with the Oracle 11g Automatic Memory Management (AMM) model. The recommendation is to use huge pages with the 10g Automatic Shared Memory Management (ASMM) model. See My Oracle Support (Metalink) note 401749.1, Shell Script to Calculate Values Recommended Huge Pages/Huge TLB.
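In the spirit of that note's shell script, the following minimal sketch (assuming 2 MB pages and a placeholder SGA size) shows the calculation; verify the result against the note for your platform.

    # Rough HugePages sizing for a given SGA (Linux guest, 2 MB pages).
    # The SGA size is a placeholder - derive it from your instance settings;
    # the result corresponds to the vm.nr_hugepages kernel parameter.
    sga_bytes = 48 * 1024**3      # example: 48 GB SGA
    hugepage_bytes = 2 * 1024**2  # 2 MB huge page size

    nr_hugepages = -(-sga_bytes // hugepage_bytes)  # ceiling division
    print(f"vm.nr_hugepages = {nr_hugepages}")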
Oracle recommends disabling transparent huge pages because they cause performance issues with Oracle databases and node reboots in RAC. See My Oracle Support (Metalink) note 1557478.1.
7. Storage Guidelines
7.1 General Guidelines
The following table describes storage-related best practices.
Table 6. Recommended Storage Best Practices
Recommendation: Enable jumbo frames for IP-based storage using iSCSI and NFS.
Justification: Jumbo frames enable Ethernet frames to have a larger payload, allowing for improved performance.

Recommendation: Create dedicated datastores to service database workloads.
Justification: The creation of dedicated datastores for I/O-intensive databases is analogous to provisioning dedicated LUNs in the physical world. This is a typical design for a mission-critical enterprise workload.

Recommendation: Use VMware vSphere VMFS for Oracle database deployments.
Justification: To balance performance and manageability in a virtual environment, deploy Oracle using VMFS.

Recommendation: Align VMFS properly.
Justification: Like other disk-based file systems, VMFS suffers a penalty when the partition is unaligned. Use VMware vCenter® to create VMFS partitions because it automatically aligns the partitions.

Recommendation: Use Oracle Automatic Storage Management (ASM).
Justification: Oracle ASM provides integrated clustered file system and volume management capabilities for managing Oracle database files. ASM simplifies database file creation while delivering near-raw device file system performance.

Recommendation: Use your storage vendor's best practices documentation when laying out the Oracle database.
Justification: Oracle ASM cannot determine the optimal data placement or LUN selection with respect to the underlying storage infrastructure. For that reason, Oracle ASM is not a substitute for close communication between the storage administrator and the database administrator.

Recommendation: Avoid silos when designing the storage architecture.
Justification: At a minimum, designing the optimized architecture should involve the database administrator, storage administrator, network administrator, VMware administrator, and application owner.

Recommendation: Use paravirtualized SCSI adapters for Oracle data files with demanding workloads.
Justification: The combination of the paravirtualized SCSI driver (PVSCSI) and additional ESX/ESXi kernel-level storage stack optimizations dramatically improves storage I/O performance.
Storage configuration is essential for any successful database deployment, especially in virtual
environments where you can consolidate many different Oracle database workloads on a single ESX /
ESXi host. Your storage subsystem should provide sufficient I/O throughput as well as storage capacity to
accommodate the cumulative needs of all virtual machines running on your ESX / ESXi hosts.
Table 7. VMFS and Raw Disk Mapping Trade-Offs

VMFS:
• A volume can host many virtual machines (or can be dedicated to one virtual machine).
• Increases storage utilization, and provides better flexibility and easier administration and management.
• Can potentially support clustering software that does not issue SCSI reservations, such as Oracle Clusterware. To configure, follow the procedures given in Enabling or disabling simultaneous write protection provided by VMFS using the multi-writer flag (http://kb.vmware.com/kb/1034165).
• Supports Oracle RAC node live migration.

RDM:
• Maps a single LUN to one virtual machine, so only one virtual machine is possible per LUN.
• More LUNs are required, so it is easier to reach the limit of 256 LUNs that can be presented to an ESX/ESXi host.
• Might be required to leverage third-party storage array-level backup and replication tools.
• RDM volumes can help facilitate migrating physical Oracle databases to virtual machines; alternatively, they enable quick migration back to physical in rare Oracle support cases.
• Required for MSCS quorum disks.
Jumbo frames as well as 10 GbE connectivity are recommended for IP-based storage using iSCSI and NFS. Jumbo frames must be enabled for each virtual switch through the VMware vSphere Client. If you use an ESX/ESXi host, you must also create a VMkernel network interface with jumbo frames enabled. In addition, jumbo frames must be enabled on the hardware, including the network switches and storage arrays. When employing jumbo frames, every network hop in the path must have them enabled.
See Enabling Jumbo Frames on virtual distributed switches at https://kb.vmware.com/kb/1038827.
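As a sketch, the MTU can be raised on a standard switch and then verified end to end with commands along these lines (the switch name and target IP are placeholders; the 8972-byte payload leaves room for IP/ICMP headers within a 9000-byte MTU):

    esxcli network vswitch standard set -v vSwitch0 -m 9000
    vmkping -d -s 8972 <storage-array-ip>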
Also align the data disks within the guest operating system using disk partitioning utilities. Partition misalignment can add significant latency to high-end workloads because a single I/O can be forced to cross physical boundaries. Partition alignment reduces the number of I/Os the controller sends to disk, thus reducing latency.
It is considered a best practice to do the following:
• Create VMFS partitions from within vCenter because they are aligned by default.
• Align the data disk for heavy I/O workloads using diskpart, fdisk, or parted, depending on the operating system (see the example after this list).
• Consult with the storage vendor for alignment recommendations on their hardware.
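As one illustration (a sketch; the device name is a placeholder and the appropriate tool depends on the guest operating system), parted on Linux can verify that the first partition of a data disk starts on an optimally aligned boundary:

    parted /dev/sdb align-check opt 1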
For more information, see the white paper Performance Best Practices for VMware vSphere 6.0
(http://www.vmware.com/pdf/Perf_Best_Practices_vSphere6.0.pdf).
Refer to the following article for iSCSI design and deployment best practices:
• Best Practices for Running VMware vSphere on iSCSI
http://www.vmware.com/files/pdf/iSCSI_design_deploy.pdf
Flash Read Cache is a feature in vSphere 5.5 that utilizes the vSphere flash infrastructure layer to
provide a host-level caching functionality for virtual machine I/Os using flash storage. The goal of
introducing the Flash Read Cache feature is to enhance performance of certain I/O workloads that exhibit
characteristics suitable for caching.
Flash Read Cache is a volatile write-through cache that is enabled on a per-VMDK basis with a configurable cache block size (4 KB to 1 MB). See the VMware vSphere Flash Read Cache 1.0 FAQ at www.vmware.com/files/pdf/techpaper/VMware-vSphere-Flash-Read-Cache-FAQ.pdf.
VMware recommends deploying Flash Read Cache for single-instance databases only; currently, there is no support for Oracle RAC. See Enabling or disabling simultaneous write protection provided by VMFS using the multi-writer flag at https://kb.vmware.com/kb/1034165. The following figure is a snapshot of a table from this Knowledge Base article.
Figure 4. VMFS Multi-Writer Flag Supported/Unsupported Features
• Deficiencies in earlier versions of VMFS sometimes led to the use of RDMs, because they were a
superior virtual storage option at the time. This is no longer true.
Backing up multi-terabyte databases is a major challenge because backup windows are minimal and restricted, and the data churn itself can be substantial. Full backups of multi-terabyte databases are often not feasible within the allotted backup windows.
There are three different levels of triggering backups of Oracle databases on vSphere:
• Application (for example, using Oracle RMAN)
• vSphere (for example, using VMware snapshots)
• Storage (for example, storage-based snapshots, disk sync/split, and so on)
Backup solutions such as Oracle RMAN and SQL-level backups provide fine-grained control over database backups, but often are not the fastest option. Virtual machine snapshots of the Oracle database would be an ideal backup mechanism, but as described in A snapshot removal can stop a virtual machine for long time (http://kb.vmware.com/kb/1002836), the brief stun of the VM can potentially cause performance issues. Storage-based snapshots are the fastest of all the options, but a datastore/LUN-level snapshot is not the same as a VM-level snapshot, so there is no VMDK-level granularity with storage-level snapshots.
An ideal backup solution combines the capabilities at the storage level with the granularity of a VM level
snapshot:
• The solution should be able to trigger backups and clones with VMDK granularity at the same time
from a virtual machine level.
• The solution should match the speed of storage-based snapshots.
VMware vSphere Virtual Volumes™ is the technology that provides the solution because it has the
capability of achieving both objectives—VMDK level granularity with storage level snapshot capability.
Figure 5. vSphere Virtual Volumes Architecture
For a detailed description of VMware vSphere Virtual Volumes, see the VMware vSphere Virtual Volumes
white paper at http://www.vmware.com/files/pdf/products/vvol/vmware-oracle-on-virtual-volumes.pdf.
Virtual SAN supports a hybrid disk architecture that leverages flash-based devices for performance and
magnetic disks for capacity and persistent data storage. In addition, Virtual SAN can use flash-based
devices for both caching and persistent storage. It is a distributed object storage system that leverages
the vSphere Storage Policy-Based Management (SPBM) feature to deliver centrally managed,
application-centric storage services and capabilities. Administrators can specify storage attributes, such as capacity, performance, and availability, as a policy on a per-VMDK level. The policies dynamically self-tune and load balance the system so that each virtual machine has the appropriate level of resources.
Oracle single-instance and Oracle Real Application Clusters (RAC) databases can be deployed on Virtual SAN.
For more details, see the Oracle Real Application Clusters on VMware Virtual SAN reference architecture
(http://www.vmware.com/files/pdf/products/vsan/vmware-oracle-real-application-clusters-on-vmware-
virtual-san-reference-architecture.pdf).
8. Networking Guidelines
8.1 General Networking Guidelines
The following table describes networking-related best practices.
Table 8. Recommended Networking-Related Best Practices
Recommendation: Use the VMXNET family of paravirtualized network adapters.
Justification: The paravirtualized network adapters in the VMXNET family implement an optimized network interface that passes network traffic between the virtual machine and the physical network interface cards with minimal overhead.

Recommendation: Separate infrastructure traffic from virtual machine traffic for security and isolation.
Justification: Virtual machines should not see infrastructure traffic (a security violation) and should not be impacted by infrastructure traffic bursts (for example, vSphere vMotion operations).

Recommendation: Use NIC teaming for availability and load balancing.
Justification: NIC teams can share the traffic load among some or all of their members, or provide passive failover in the event of a hardware failure or a network outage.

Recommendation: Take advantage of Network I/O Control to converge network and storage traffic onto 10 GbE.
Justification: This can reduce cabling requirements, simplify management, and reduce cost.

Recommendation: Enable jumbo frames for Oracle interconnect traffic.
Justification: This can reduce cache fusion traffic, thereby enhancing performance.
• Use the VMXNET3 network adapter in vSphere. This is a paravirtualized device that works only if VMware Tools™ is installed on the guest operating system. The VMXNET3 adapter is optimized for virtual environments and designed to provide high performance. For more information about network adapters and compatibility with the ESX/ESXi release and supported guest operating systems, see Choosing a network adapter for your virtual machine (http://kb.vmware.com/kb/1001805).
• Use VMXNET3 paravirtualized adapter drivers for Oracle RAC private interconnect traffic.
• Allocate separate NICs for vSphere vMotion, FT logging traffic, ESXi console access management, and the Oracle RAC interconnect.
• vSphere 5.0 and later support the use of more than one NIC for vSphere vMotion, allowing more simultaneous vSphere vMotion instances; this capability was added specifically for memory-intensive applications such as databases.
Figure 7. Example Network Layout of Oracle RAC Database on VMware
(The example layout separates management, Oracle RAC public and private, and vMotion traffic onto distinct networks.)
A distributed switch separates the data plane from the management plane. The management functionality of the distributed switch resides on the vCenter Server system, which lets you administer the networking configuration of your environment at the data center level. The data plane remains locally on every host that is associated with the distributed switch. The data plane section of the distributed switch is called a host proxy switch. The networking configuration that you create on vCenter Server (the management plane) is automatically pushed down to all host proxy switches (the data plane).
Figure 9. Distributed Switch
Note that this new configuration option is only available in ESXi 5.0 and later. An alternative way to
disable virtual interrupt coalescing for all virtual NICs on the host, which affects all VMs, not just the
latency-sensitive ones, is to set the advanced networking performance option (Configuration >
Advanced Settings > Net) CoalesceDefaultOn to 0 (disabled).
For the Oracle RAC interconnect, VMware recommends disabling virtual interrupt coalescing for the VMXNET3 private NIC by setting ethernetX.coalescingScheme = disabled, where ethernetX is the virtual network adapter used for the Oracle RAC interconnect.
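For example (a sketch assuming the RAC private interconnect is the virtual machine's third virtual NIC, ethernet2), the corresponding .vmx entry would be:

    ethernet2.coalescingScheme = "disabled"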
Recommendation: Disable unnecessary foreground and background processes within the guest operating system to save CPU cycles.
Justification: Unnecessary foreground and background processes in the guest operating system waste CPU cycles that could otherwise serve the database.
VMware recommends disabling unnecessary foreground and background processes within the guest
operating system.
• Examples of unnecessary Linux processes are: anacron, apmd, atd, autofs, cups, cupsconfig,
gpm, isdn, iptables, kudzu, netfs, and portmap.
• Examples of unnecessary Windows processes are: alerter, automatic updates, clip book, error
reporting, help and support, indexing, messenger, netmeeting, remote desktop, and system restore
services.
• For Linux installs, the database administrator (DBA) should request that the system administrator
compile a monolithic kernel to load only the necessary features. Whether you intend to run Windows
or Linux as the final optimized operating system, these host installs should be cloned by the VMware
administrator for reuse.
• After the operating system has been prepared, install Oracle the same way as installing for a physical
environment. Use the recommended kernel parameters listed in the Oracle Installation guide. It is a
good practice to check with Oracle Support for the latest settings to use prior to beginning the
installation process.
Recommendation: To minimize time drift in virtual machines, follow the guidelines in the relevant VMware Knowledge Base articles.
Justification: High timer-interrupt rates in some operating systems can lead to time synchronization errors.
Most operating systems track the passage of time by configuring the underlying hardware to provide
periodic interrupts. The rate at which those interrupts are configured to arrive varies for different operating
systems. High timer-interrupt rates can incur overhead that affects a virtual machine's performance. The
amount of overhead increases with the number of vCPUs assigned to a virtual machine. The impact of
these high timer-interrupt rates can lead to time synchronization errors.
To address timekeeping issues when running Oracle databases, follow the guidelines in the following
VMware Knowledge Base articles:
• Timekeeping best practices for Linux guests
http://kb.vmware.com/kb/1006427
• Timekeeping best practices for Windows, including NTP
http://kb.vmware.com/kb/1318
For Oracle RAC environments, follow the guidelines in the following VMware Knowledge Base article (a settings sketch appears after the list below):
• Disabling Time Synchronization
http://kb.vmware.com/kb/1189
Even when the time synchronization check box is unselected in the VMware Tools control panel, you might experience these symptoms:
• When you suspend a virtual machine, the next time you resume that virtual machine it synchronizes
the time to adjust it to the host.
• Time is resynchronized when you migrate the virtual machine using vSphere vMotion, take a
snapshot, restore to a snapshot, shrink the virtual disk, or restart the VMware Tools service in the
virtual machine (including rebooting the virtual machine).
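Per that Knowledge Base article, fully disabling periodic and event-driven synchronization requires .vmx entries along these lines (consult the article for the authoritative list for your VMware Tools version):

    tools.syncTime = "0"
    time.synchronize.continue = "0"
    time.synchronize.restore = "0"
    time.synchronize.resume.disk = "0"
    time.synchronize.shrink = "0"
    time.synchronize.tools.startup = "0"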
ASM is not storage-aware. In other words, whatever disks are provisioned to a DBA can be used to
create a disk group. Oracle ASM cannot determine the optimal data placement or LUN selection with
respect to the underlying storage infrastructure. For that reason, Oracle ASM is not a substitute for close
communication between the storage administrator and the database administrator. Refer to your Oracle
installation guide to create ASM disk groups.
The following figure represents an example storage design for a virtualized Oracle OLTP database. The
design is based on the following principles:
• At a minimum, an optimized architecture requires joint collaboration among the database, VMware,
and storage administrators.
• Follow storage vendor best practices for database layout on their arrays (as is done in the physical
world).
Note that this figure illustrates only an example, and actual configurations for customer deployments can
differ.
Recommendation: Use vCenter and/or the esxtop/resxtop utility for performance monitoring in the virtual environment.
Justification: Guest OS counters give only a rough idea of performance within the virtual machine; for example, CPU and memory usage reported within the guest OS can differ from what ESX/ESXi reports.
11.1 ESXTOP/rESXTOP
The resxtop and esxtop command-line utilities provide a detailed look at how ESX/ESXi uses resources in real time. You can start either utility in one of three modes: interactive (default), batch, or replay.
The fundamental difference between resxtop and esxtop is that resxtop can be used remotely (or locally), whereas esxtop can be started only through the service console of a local ESX host.
Always use the VI Client or vSphere Client, esxtop, or resxtop to measure resource utilization. CPU and memory usage reported within the guest OS can be different from what ESX/ESXi reports.
Oracle DBAs should pay close attention to the counters listed in the following table.
See VMware Communities: Interpreting esxtop Statistics (http://communities.vmware.com/docs/DOC-
9279) for a full list of counters.
Table 12. ESX/ESXi Performance Counters
Of the CPU counters, the used time indicates system load, and ready time indicates overloaded CPU
resources.
A significant swap rate in the memory counters is a clear indication of a shortage of ESX / ESXi memory,
and high device latencies in the storage section point to an overloaded or misconfigured array.
Network traffic is not frequently the cause of most database performance problems except when large
amounts of iSCSI storage traffic are using a single network line. Check total throughput on the NICs to
see if the network is saturated.
To determine whether there is any swapping within the guest operating system, use the in-guest counters in the same manner as in physical environments.
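These checks can also be automated against a batch-mode capture (for example, esxtop -b -d 5 -n 60 > esxtop.csv). A minimal sketch follows; the exact counter names vary by ESXi build, so the "% Ready" match is an assumption to adapt to your output:

    import csv

    # Flag VM groups whose CPU ready time exceeds a threshold in an
    # esxtop batch-mode capture (perfmon-style CSV, first row = headers).
    THRESHOLD = 10.0  # percent ready considered problematic (rule of thumb)

    with open("esxtop.csv", newline="") as f:
        rows = csv.reader(f)
        headers = next(rows)
        ready_cols = [i for i, h in enumerate(headers)
                      if "Group Cpu" in h and "% Ready" in h]
        for row in rows:
            for i in ready_cols:
                if i < len(row) and row[i] and float(row[i]) > THRESHOLD:
                    print(f"{headers[i]}: {row[i]}% ready")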
Relevant performance attributes and measurement methods are covered under the Virtual CPU Guidelines (Chapter 5), Memory Guidelines (Chapter 6), Storage Guidelines (Chapter 7), and Networking Guidelines (Chapter 8).
VMware vRealize Operations and Blue Medora Adapter
VMware vRealize® Operations™ is built on a scale-out, resilient platform designed to deliver intelligent
operational insights to simplify and automate management of applications and infrastructure across
virtual, physical and cloud environments—from vSphere to Hyper-V, Amazon Web Services (AWS), and
more. With vRealize Operations comprehensive visibility across applications and infrastructure in one
place, IT organizations of all sizes can improve performance, avoid business disruption, and become
more efficient.
vRealize Operations delivers:
• Intelligent operations – Self-learning tools, predictive analytics, intelligent workload management, and
Smart Alerts about application and infrastructure health enable proactive identification and
remediation of emerging performance, capacity, and configuration issues.
• Policy-based automation – Out-of-the-box and customizable policies for critical IT operations are
associated with Smart Alerts, guided remediation, and compliance standards to deliver
recommendations, or trigger actions, that optimize performance and capacity and enforce
configuration standards.
• Unified management – An open and extensible platform, supported by third-party management packs
for Microsoft, SAP, and others, provides complete visibility in a single console across applications,
storage, and network devices.
The Blue Medora - VMware vRealize Operations Management Pack for Oracle Enterprise Manager
extends the VMware vRealize Operations Enterprise Edition by integrating with Oracle Enterprise
Manager and providing comprehensive visibility and insights into the health, capacity, and performance of
Oracle Databases, Oracle Middleware, and Oracle business critical applications. This management
package helps to detect capacity and performance issues so they can be corrected before they cause a
major impact.
Oracle Enterprise Manager (OEM) is Oracle’s systems management platform and is deployed by many
Oracle customers. The latest version is Oracle Enterprise Manager Cloud Control (EM12c). EM12c is
used to manage, provision, and patch the Oracle infrastructure landscape including Oracle database,
Oracle middleware (WebLogic, Tuxedo, IdM, and so on), various Oracle applications (PeopleSoft, Siebel,
JD Edwards, Oracle EBS, Fusion, and so on), and also Oracle hardware (storage, server, Exadata, and
so on).
For each OEM target (for example, a database, an operating system, a WebLogic J2EE instance, and so
on), the Oracle Enterprise Manager collects and stores up to hundreds of near-real-time health,
performance, availability, and configuration metrics. These are the same metrics that are used within
OEM to visualize Oracle environments and generate alerts on issues.
The Blue Medora - VMware vRealize Operations Management Pack for Oracle Enterprise Manager
makes all of these Oracle-related metrics available within vRealize Operations. It allows the use of
vRealize Operations capabilities to identify performance issues and gain deep insights into the health, risk
and efficiency of your virtual and physical Oracle workloads (the underlying operating systems and in
particular related VMware infrastructure, if applicable). This management pack helps customers recognize
and address symptoms that threaten performance and availability, before the symptoms become issues
that affect business users and the business itself.
For the Blue Medora - vRealize Operations Management Pack for Oracle Enterprise Manager, go to
http://www.bluemedora.com/products/vrops-management-pack-oracle-em/.
Snapshots are sets of historical data for specific time periods that are used for performance comparisons.
By default, AWR automatically generates snapshots of the performance data once every hour and retains
the statistics in the workload repository for 7 days. The data in the snapshot interval is then analyzed by
the Oracle DBAs for performance troubleshooting. AWR compares the difference between snapshots to
determine which SQL statements to capture based on the effect on the system load.
When looking at an AWR report, a good place to start is the Top 5 Timed Foreground Events section,
near the top of the report. This gives you an indication of the bottlenecks in the system during this sample
period.
Figure 15. Top 5 Timed Foreground Events Listing
After you have identified the top events, drill down to see which SQL and PL/SQL statements are consuming the majority of those resources. In the Main Report section, click the SQL Statistics link. These statistics indicate the top events and the wait time that took place during the workload.
For Oracle statistics gathering and AWR report analysis, see the Oracle documentation at
https://docs.oracle.com/database/121/TGDBA/gather_stats.htm#TGDBA167.
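As a hedged sketch (assuming the python-oracledb driver, a suitably privileged account, and Diagnostics Pack licensing; all connection details are placeholders), an on-demand snapshot can be taken and the snapshot settings adjusted:

    import oracledb

    # Connection details are placeholders - supply your own credentials/DSN.
    conn = oracledb.connect(user="system", password="...", dsn="dbhost/orclpdb1")
    cur = conn.cursor()

    # Take an on-demand AWR snapshot (outside the default hourly cadence).
    cur.execute("BEGIN DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT; END;")

    # Keep 14 days of snapshots at a 30-minute interval (values in minutes).
    cur.execute("""
        BEGIN
            DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(
                retention => 14 * 24 * 60,
                interval  => 30);
        END;
    """)
    conn.close()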
vSphere Data Protection is a backup and recovery solution that is deployed as a virtual appliance and managed using the vSphere Web Client. vSphere Data Protection can back up and restore entire virtual machines, and leverages VMware snapshots through the vSphere Storage APIs – Data Protection to back up virtual machines.
Figure 16. vSphere Data Protection
Before vSphere 6.0 and vSphere Data Protection 6.0, there were two editions of vSphere Data Protection: vSphere Data Protection, included with vSphere, and vSphere Data Protection Advanced, which was sold separately. With the release of vSphere Data Protection 6.0, all vSphere Data Protection Advanced functionality has been consolidated into vSphere Data Protection 6.0 and is included with vSphere 6.0 Essentials Plus Kit and higher editions.
For more information on vSphere Data Protection, see VMware documentation at
https://www.vmware.com/products/vsphere/features/data-protection.html.
You can back up Oracle databases on vSphere by choosing any of the storage technologies previously
mentioned. Keep in mind that the snapshot or clone contains VMDKs of other virtual machines unless the
VMware datastore is dedicated to a particular virtual machine.
Some examples of storage-based backup tools include EMC Replication Manager, EMC Avamar, and
NetApp SnapManager for Oracle.
Recommendation: Do not disable vSphere HA.
Justification: vSphere HA monitors vSphere hosts and virtual machines to detect hardware and guest operating system failures, and restarts virtual machines on other vSphere hosts in the cluster without manual intervention when a server outage is detected.
For information about setting up shared VMDKs for an Oracle RAC cluster, see Enabling or disabling
simultaneous write protection provided by VMFS using the multi-writer flag
(http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1
034165).
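Per that article, each shared VMDK is marked with the multi-writer flag; as a sketch (the scsi1:0 and scsi1:1 positions are placeholders for your own disk layout), the .vmx entries take this form:

    scsi1:0.sharing = "multi-writer"
    scsi1:1.sharing = "multi-writer"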
For best practices for deploying Oracle RAC on VMware with respect to compute, memory, networking, and storage, see Section 5, Virtual CPU Guidelines; Section 6, Memory Guidelines; Section 7, Storage Guidelines; and Section 8, Networking Guidelines.
See the following Oracle documentation for setting up an optimal Oracle Real Application Clusters environment:
• Real Application Clusters Administration and Deployment Guide
https://docs.oracle.com/database/121/RACAD/admcon.htm#RACAD1111
• Oracle Databases on VMware RAC Deployment Guide
https://www.vmware.com/files/pdf/solutions/oracle/Oracle_Databases_VMware_RAC_Deployment_G
uide.pdf.
See also the Oracle Real Application Clusters on VMware Virtual SAN reference architecture for more
details about deploying Oracle RAC on Virtual SAN
(http://www.vmware.com/files/pdf/products/vsan/vmware-oracle-real-application-clusters-on-vmware-
virtual-san-reference-architecture.pdf).
Typical Oracle database virtual machines have more than four vCPUs, and therefore, are not a good fit
for the vSphere FT use case. However, see the VMware vSphere 6.0 documentation for best practices if
you need to set up vSphere Fault Tolerance for Oracle database virtual machines
(http://pubs.vmware.com/vsphere-60/index.jsp).
Using a stretched cluster to provide this active balancing of resources should always be the primary design and implementation goal. Although often associated with disaster recovery, vSphere metro storage cluster infrastructures are not recommended as primary solutions for pure disaster recovery.
Figure 19. vSphere Metro Storage Cluster
• Oracle RAC and Oracle RAC One Node on Extended Distance (Stretched) Clusters
http://www.oracle.com/technetwork/products/clustering/overview/extendedracversion11-435972.pdf
Oracle Data Guard maintains these standby databases as copies of the production database. If the
production database becomes unavailable because of a planned or an unplanned outage, Oracle Data
Guard can switch any standby database to the production role, minimizing the downtime associated with
the outage. A standby database can be either a single-instance Oracle database or an Oracle RAC
database.
Two types of standby databases can be implemented:
• Physical standby database – Provides a physically identical copy of the primary database, with on-disk database structures that are identical to the primary database on a block-for-block basis. A physical standby database is kept synchronized with the primary database through Redo Apply, which recovers the redo data received from the primary database and applies the redo to the physical standby database.
• Logical standby database – Contains the same logical information as the production database, although the physical organization and structure of the data can be different. The logical standby database is kept synchronized with the primary database through SQL Apply, which transforms the data in the redo received from the primary database into SQL statements and then executes the SQL statements on the standby database.
Oracle Active Data Guard increases performance, availability, and data protection wherever it is used for real-time data protection. An Oracle Active Data Guard standby database can be used to offload reporting, ad-hoc queries, data extracts, and backups from the primary database, making it a very effective way to insulate interactive users and critical business tasks on the production system from the overhead of long-running operations.
Oracle Active Data Guard provides read-only access to a physical standby database while it is
synchronized with a primary database, enabling minimal latency between reporting and production data.
Oracle Active Data Guard automatically repairs physical corruption on either the primary or standby
database, increasing availability and maintaining data protection at all times.
The steps for setting up an Oracle Data Guard on vSphere are the same as setting it up on a physical
environment. VMware recommends referring to Oracle best practices as documented in the Oracle
documentation at https://docs.oracle.com/database/121/SBYDB/concepts.htm#SBYDB00010.
• VMware Continuent for Analytics and Big Data – Provides replication from MySQL to various Hadoop distributions (including Pivotal HD, MapR, Hortonworks, and Cloudera), HP Vertica, and Amazon Redshift.
Figure 21. VMware Continuent Features
vSphere Replication does not use VMware snapshots to perform replication. After replication has been
configured for a virtual machine, vSphere Replication begins the initial full synchronization of the source
virtual machine to the target location. A copy of the VMDKs to be replicated can be created and shipped
to the target location and used as “seeds,” reducing the time and network bandwidth consumed by the
initial full synchronization. After the initial full synchronization, changes to the protected virtual machine
are tracked and replicated on a regular basis. The transmissions of these changes are referred to as
“lightweight delta syncs.” The transmission frequency is determined by the RPO that is configured for the
virtual machine. A lower RPO requires more-frequent replication.
vSphere Replication cannot provide the application-level replication that Oracle Data Guard provides
because it is not a native Oracle tool.
vSphere Replication can be used to replicate virtual machines housing Oracle databases based on the
RPO settings. The Oracle database on the target site can be recovered through an Oracle crash
consistent recovery because the database is crash consistent at the point of the replication cycle.
Using vSphere Replication consistency groups and quiescing the Oracle database before every replication cycle is generally not acceptable in an Oracle production database environment because it introduces database performance issues and the resulting business SLA deficiencies; therefore, many DBAs choose to use the crash-consistent copy of the database on the target site.
With storage-based replication on vSphere, the replication is done at the actual VMware datastore level.
The backup might contain VMDKs from other virtual machines that share the same datastore, unless the datastore is dedicated to an Oracle virtual machine.
Storage-based replication will result in a crash-consistent copy at the target side which can be recovered
through an Oracle crash-consistent recovery because the database is crash consistent at the point of the
replication cycle.
Using storage-based consistency groups and quiescing the Oracle database before every replication cycle is generally not acceptable in an Oracle production database environment because it introduces database performance issues and the resulting business SLA deficiencies; therefore, many DBAs choose to use the crash-consistent copy of the database on the target site.
Site Recovery Manager servers coordinate the operations of the vCenter Server instances at two sites, so that as virtual machines at the protected site are shut down, copies of those virtual machines start up at the recovery site. By using the data replicated from the protected site, these virtual machines assume responsibility for providing the same services.
Migration of protected inventory and services from one site to the other is controlled by a recovery plan
that specifies the order in which virtual machines are shut down and started up, the resource pools to
which they are allocated, and the networks they can access. Site Recovery Manager enables the testing
of recovery plans, using a temporary copy of the replicated data, and isolated networks in a way that does
not disrupt ongoing operations at either site. Multiple recovery plans can be configured to migrate
individual applications and entire sites, providing finer control over which virtual machines are failed over
and failed back. This also enables flexible testing schedules.
For more information about VMware Site Recovery Manager, see the documentation at
https://www.vmware.com/support/pubs/srm_pubs.html.
In conclusion, the choice of Oracle database replication technology depends on many factors, often based on the service level agreement (SLA), the level of effort needed to recover in case of a disaster, the cost of the replication technology, and the infrastructure in place.
15.1 EVO:RAIL
EVO:RAIL combines VMware compute, network, storage, and management resources into a hyper-
converged infrastructure appliance to create a simple, easy-to-manage, all-in-one solution for all your
virtualized workloads, including tier-1 production and mission-critical applications. Offered by selected
partners, EVO:RAIL is backed by a single point of contact for software and hardware support.
EVO:RAIL delivers the following benefits:
• Radically simple: Meet accelerated business demands by simplifying infrastructure design with
predictable sizing and scaling, and by streamlining purchase and deployment. With EVO:RAIL, you
go from initial power-on to VM deployment in minutes, and easily scale to 16 appliances. Manage
your VMs, appliance, and cluster from a single pane of glass.
• Resilient and secure: Four independent hosts, a distributed Virtual SAN datastore, built-in VM security policies, and best-in-class VMware availability capabilities enable zero application downtime during planned maintenance or during disk, network, or host failures.
• Trusted and proven: Powered by the VMware industry-leading infrastructure virtualization technology,
including vSphere, vCenter Server and Virtual SAN, EVO:RAIL delivers the first hyper-converged
infrastructure appliance designed, powered, and validated by VMware.
EVO:RAIL capabilities are shown in the following figure.
Figure 25. EVO:RAIL Capabilities
EVO:RAIL software is integrated into a pre-specified and optimized 2U/4 node hardware platform.
EVO:RAIL software consists of the EVO:RAIL Engine, vSphere Enterprise Plus, Virtual SAN, vCenter
Server, and VMware vRealize Log Insight™.
EVO:RAIL 2U chassis are tightly prescribed by VMware, and are pre-built and tested by EVO:RAIL
partners. Each appliance has four servers to ensure resiliency, and is optimized for performance and
flexibility.
Figure 26. EVO:RAIL Hardware System Details
EVO:RAIL is interoperable and optimized for the entire VMware software stack, including VMware NSX® and vRealize Operations. EVO:RAIL includes vSphere Data Protection and vSphere Data Protection Advanced for backup protection, and vSphere Replication for replication. EVO:RAIL data can also be protected by any third-party backup product that supports vSphere Enterprise Plus.
15.2 EVO:RACK
EVO:RACK, a hyper-converged infrastructure project, simplifies how companies buy, deploy, and operate
software-defined data centers, helping IT organizations rapidly provision applications and services at data
center scale.
Designed as a hyper-converged infrastructure solution that can scale to tens of racks, EVO:RACK meets the increasing demands of private clouds at medium-to-large enterprises. It can run on a range of pre-integrated hardware configurations, from Open Compute Project-based hardware designs to industry-standard OEM servers and converged infrastructure.
Figure 27. EVO:RACK Configuration
EVO:RACK also leverages the full line-up of VMware software solutions for the SDDC, including VMware vCloud Suite®, Virtual SAN, and VMware NSX. EVO:RACK is designed to support integrated virtual and physical networking, and brings it all together through EVO software specifically developed to simplify the deployment and ongoing lifecycle management of the SDDC software. The EVO software creates a scalable management solution across a distributed, multi-rack hyper-converged cloud infrastructure.
This approach provides a virtual abstraction (across racks and other form factors) that pools
compute/storage/network and uses this abstracted resource pool as a unit of SDDC instantiation and
operation. The result is a dramatically simplified model for deployment, configuration, and provisioning.
Figure 28. EVO:RACK Model
EVO:RACK provides streamlined and automated lifecycle management of the SDDC components, including non-disruptive patching and upgrading of software. The initial target for such an HCI is an environment with workloads spanning IaaS and VDI, with plans to extend support to PaaS and big data workloads in the future.
16. Summary
The best practices and guidelines discussed in this document are summarized in this section.
Recommendations
Create golden images of optimized operating systems using vSphere cloning technologies.
Allow vSphere to choose the best virtual machine monitor based on the CPU and guest
operating system combination.
Keep the default of one core per socket for vNUMA to match the physical NUMA topology, and try to align VMs with physical NUMA boundaries.
Set memory reservations equal to the sum of the size of the Oracle SGA, the Oracle PGA, the Oracle background processes' stack space, and operating system used memory.
Do not turn off ESXi memory management mechanisms unless directed by VMware support.
Enable jumbo frames for IP-based storage using iSCSI and NFS.
Use your storage vendor’s best practices documentation when laying out the Oracle database.
Use paravirtualized SCSI adapters for Oracle data files with demanding workloads.
Separate infrastructure traffic from virtual machine traffic for security and isolation.
Take advantage of Network I/O Control to converge network and storage traffic onto 10 GbE.
Disable unnecessary foreground and background processes within the guest operating system
to save CPU cycles.
To minimize time drift in virtual machines, follow guidelines in relevant VMware Knowledge
Base articles.
Use vCenter and/or the esxtop/resxtop utility for performance monitoring in the virtual
environment.
17. References
• VMware Compatibility Guide
http://www.vmware.com/resources/compatibility/search.php
• vSphere Performance Best Practices guides
http://www.vmware.com/files/pdf/techpaper/VMware-PerfBest-Practices-vSphere6-0.pdf
https://www.vmware.com/pdf/Perf_Best_Practices_vSphere5.5.pdf
• Deploying Extremely Latency-Sensitive Applications in VMware vSphere 5.5
http://www.vmware.com/files/pdf/techpaper/latency-sensitive-perf-vsphere55.pdf
• Best Practices for Performance Tuning of Latency-Sensitive Workloads in vSphere VMs
http://www.vmware.com/files/pdf/techpaper/VMW-Tuning-Latency-Sensitive-Workloads.pdf
• Virtualizing Performance-Critical Database Applications in VMware vSphere 6.0
http://www.vmware.com/files/pdf/products/vsphere/vmware-database-apps-perf-vsphere6.pdf
• Enabling or disabling simultaneous write protection provided by VMFS using the multi-writer flag
http://kb.vmware.com/kb/1034165
• Oracle Databases on VMware RAC Deployment Guide
http://www.vmware.com/files/pdf/solutions/oracle/Oracle_Databases_VMware_RAC_Deployment_Gu
ide.pdf
• Oracle Real Application Clusters on VMware Virtual SAN
http://www.vmware.com/files/pdf/products/vsan/vmware-oracle-real-application-clusters-on-vmware-
virtual-san-reference-architecture.pdf
• VMware vSphere Virtual Volumes
http://www.vmware.com/files/pdf/products/vvol/vmware-oracle-on-virtual-volumes.pdf
• VMware vSphere Resource Management
https://pubs.vmware.com/vsphere-60/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-60-
resource-management-guide.pdf
• VMware vSphere Networking
https://pubs.vmware.com/vsphere-60/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-60-
networking-guide.pdf
• VMware vSphere Storage
https://pubs.vmware.com/vsphere-60/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-60-
storage-guide.pdf
• VMware Communities: Interpreting esxtop Statistics
http://communities.vmware.com/docs/DOC-9279
Appendix A: Virtual Machine Memory Settings
Definition of terms:
• Configured memory – Memory size of virtual machine assigned at creation.
• Active memory – Memory recently accessed by applications in the virtual machine.
• Reservation – Guaranteed lower bound on the amount of memory that the host reserves for the
virtual machine, which cannot be reclaimed by ESX/ESXi for other virtual machines.
• Swappable – Virtual machine memory that can be reclaimed by the balloon driver or, in the worst case, by ESX/ESXi swapping. This is the automatic size of the swap file (.vswp) that is created for each virtual machine on the VMFS file system.
For more information about ESX / ESXi memory management concepts and the balloon driver, see
VMware vSphere Resource Management at https://pubs.vmware.com/vsphere-
60/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-60-resource-management-guide.pdf.