Oracle Fusion Middleware
Tuning Performance of Oracle WebLogic Server
14c (14.1.1.0.0)
F18313-07
October 2023
Oracle Fusion Middleware Tuning Performance of Oracle WebLogic Server, 14c (14.1.1.0.0)
F18313-07
This software and related documentation are provided under a license agreement containing restrictions on
use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your
license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license,
transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse
engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is
prohibited.
The information contained herein is subject to change without notice and is not warranted to be error-free. If
you find any errors, please report them to us in writing.
If this is software, software documentation, data (as defined in the Federal Acquisition Regulation), or related
documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S.
Government, then the following notice is applicable:
U.S. GOVERNMENT END USERS: Oracle programs (including any operating system, integrated software,
any programs embedded, installed, or activated on delivered hardware, and modifications of such programs)
and Oracle computer documentation or other Oracle data delivered to or accessed by U.S. Government end
users are "commercial computer software," "commercial computer software documentation," or "limited rights
data" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental
regulations. As such, the use, reproduction, duplication, release, display, disclosure, modification, preparation
of derivative works, and/or adaptation of i) Oracle programs (including any operating system, integrated
software, any programs embedded, installed, or activated on delivered hardware, and modifications of such
programs), ii) Oracle computer documentation and/or iii) other Oracle data, is subject to the rights and
limitations specified in the license contained in the applicable contract. The terms governing the U.S.
Government's use of Oracle cloud services are defined by the applicable contract for such services. No other
rights are granted to the U.S. Government.
This software or hardware is developed for general use in a variety of information management applications.
It is not developed or intended for use in any inherently dangerous applications, including applications that
may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you
shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its
safe use. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this
software or hardware in dangerous applications.
Oracle®, Java, and MySQL are registered trademarks of Oracle and/or its affiliates. Other names may be
trademarks of their respective owners.
Intel and Intel Inside are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are
used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Epyc,
and the AMD logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered
trademark of The Open Group.
This software or hardware and documentation may provide access to or information about content, products,
and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly
disclaim all warranties of any kind with respect to third-party content, products, and services unless otherwise
set forth in an applicable agreement between you and Oracle. Oracle Corporation and its affiliates will not be
responsible for any loss, costs, or damages incurred due to your access to or use of third-party content,
products, or services, except as set forth in an applicable agreement between you and Oracle.
Contents
Preface
Audience xiii
Documentation Accessibility xiii
Diversity and Inclusion xiii
Related Documentation xiv
Conventions xiv
Achieve Performance Objectives 2-5
Tuning Tips 2-5
How Many Work Managers are Needed? 5-5
What are the SLA Requirements for Each Work Manager? 5-5
Understanding the Differences Between Work Managers and Execute Queues 5-5
Migrating from Previous Releases 5-6
Tuning the Stuck Thread Detection Behavior 5-6
Tuning Network I/O 5-6
Tuning Muxers 5-7
Java Non-Blocking IO (NIO) Muxer 5-7
Native Muxers 5-7
Server Location and Supported Platforms 5-8
Network Channels 5-9
Reducing the Potential for Denial of Service Attacks 5-10
Tuning Message Size 5-10
Tuning Complete Message Timeout 5-10
Tuning Number of File Descriptors 5-10
Tuning Connection Backlog Buffering 5-11
Tuning Cached Connections 5-11
Tuning the Work Manager Queue Size 5-11
Optimize Java Expressions 5-12
Using WebLogic Server Clusters to Improve Performance 5-12
Scalability and High Availability 5-12
How to Ensure Scalability for WebLogic Clusters 5-13
Database Bottlenecks 5-14
Session Replication 5-14
Asynchronous HTTP Session Replication 5-14
Invalidation of Entity EJBs 5-15
Invalidation of HTTP sessions 5-16
JNDI Binding, Unbinding and Rebinding 5-16
Running Multiple Server Instances on Multi-Core Machines 5-16
Monitoring a WebLogic Server Domain 5-16
Using the Administration Console to Monitor WebLogic Server 5-17
Using the WebLogic Diagnostic Framework 5-17
Using JMX to Monitor WebLogic Server 5-17
Using WLST to Monitor WebLogic Server 5-17
Resources to Monitor WebLogic Server 5-17
Tuning Class and Resource Loading 5-17
Filtering Loader Mechanism 5-18
Class Caching 5-18
SSL Considerations 5-19
6 Tuning the WebLogic Persistent Store
Overview of Persistent Stores 6-1
Using the Default Persistent Store 6-2
Using Custom File Stores and JDBC Stores 6-2
Using a JDBC TLOG Store 6-2
Using JMS Paging Stores 6-3
Using Flash Storage to Page JMS Messages 6-3
Using Diagnostic Stores 6-4
Best Practices When Using Persistent Stores 6-4
Tuning JDBC Stores 6-4
Tuning File Stores 6-5
Basic Tuning Information 6-5
Tuning a File Store Direct-Write-With-Cache Policy 6-6
Using Flash Storage to Increase Performance 6-7
Additional Considerations 6-7
Tuning the File Store Direct-Write Policy 6-8
Tuning the File Store Block Size 6-9
Setting the Block Size for a File Store 6-10
Determining the File Store Block Size 6-10
Determining the File System Block Size 6-11
Converting a Store with Pre-existing Files 6-11
Using a Network File System 6-11
Configuring Synchronous Write Policies 6-11
Test Server Restart Behavior 6-12
Handling NFS Locking Errors 6-12
Solution 1 – Using NFS v4 Instead of NFS v3 6-13
Solution 2 - Copying Data Files to Remove NFS Locks 6-13
Solution 3 - Disabling File Locks in WebLogic Server File Stores 6-14
7 Database Tuning
General Suggestions 7-1
Database-Specific Tuning 7-2
Oracle 7-2
Microsoft SQL Server 7-3
Sybase 7-3
Tuning the Stateful Session Bean Cache 8-2
Tuning the Entity Bean Cache 8-2
Transaction-Level Caching 8-3
Caching between Transactions 8-3
Ready Bean Caching 8-3
Tuning the Query Cache 8-3
Tuning EJB Pools 8-4
Tuning the Stateless Session Bean Pool 8-4
Tuning the MDB Pool 8-4
Tuning the Entity Bean Pool 8-5
CMP Entity Bean Tuning 8-5
Use Eager Relationship Caching 8-5
Using Inner Joins 8-6
Use JDBC Batch Operations 8-6
Tuned Updates 8-6
Using Field Groups 8-7
include-updates 8-7
call-by-reference 8-7
Bean-level Pessimistic Locking 8-8
Concurrency Strategy 8-8
Tuning In Response to Monitoring Statistics 8-9
Cache Miss Ratio 8-9
Lock Waiter Ratio 8-10
Lock Timeout Ratio 8-10
Pool Miss Ratio 8-11
Destroyed Bean Ratio 8-11
Pool Timeout Ratio 8-11
Transaction Rollback Ratio 8-12
Transaction Timeout Ratio 8-12
Thread Utilization for MDBs that Process Messages from Foreign Destinations 9-6
Token-based Message Polling for Transactional MDB Listening on Queues/Topics 9-6
Compatibility for WLS 10.0 and Earlier-style Polling 9-7
11 Tuning Transactions
Improving Throughput Using XA Transaction Cluster Affinity 11-1
Logging Last Resource Transaction Optimization 11-2
LLR Tuning Guidelines 11-2
Read-only, One-Phase Commit Optimizations 11-3
Configure XA Transactions without TLogs 11-3
Message Compression for JMS Servers 12-12
Message Compression for Store-and-Forward Sending Agents 12-13
Paging Out Messages To Free Up Memory 12-13
Specifying a Message Paging Directory 12-14
Tuning the Message Buffer Size Option 12-14
Defining Quota 12-14
Quota Resources 12-15
Destination-Level Quota 12-15
JMS Server-Level Quota 12-16
Blocking Senders During Quota Conditions 12-16
Defining a Send Timeout on Connection Factories 12-16
Specifying a Blocking Send Policy on JMS Servers 12-17
Subscription Message Limits 12-17
Controlling the Flow of Messages on JMS Servers and Destinations 12-18
How Flow Control Works 12-19
Configuring Flow Control 12-19
Flow Control Thresholds 12-20
Handling Expired Messages 12-21
Defining a Message Expiration Policy 12-21
Configuring an Expiration Policy on Topics 12-22
Configuring an Expiration Policy on Queues 12-22
Configuring an Expiration Policy on Templates 12-23
Defining an Expiration Logging Policy 12-23
Expiration Log Output Format 12-24
Tuning Active Message Expiration 12-24
Configuring a JMS Server to Actively Scan Destinations for Expired Messages 12-25
Tuning Applications Using Unit-of-Order 12-25
Best Practices 12-25
Using UOO and Distributed Destinations 12-26
Migrating Old Applications to Use UOO 12-26
Using JMS 2.0 Asynchronous Message Sends 12-26
Using One-Way Message Sends 12-28
Configure One-Way Sends On a Connection Factory 12-29
One-Way Send Support In a Cluster With a Single Destination 12-30
One-Way Send Support In a Cluster With Multiple Destinations 12-30
When One-Way Sends Are Not Supported 12-30
Different Client and Destination Hosts 12-30
XA Enabled On Client's Host Connection Factory 12-31
Higher QOS Detected 12-31
Destination Quota Exceeded 12-31
Change In Server Security Policy 12-31
Change In JMS Server or Destination Status 12-31
Looking Up Logical Distributed Destination Name 12-32
Hardware Failure 12-32
One-Way Send QOS Guidelines 12-32
Tuning the Messaging Performance Preference Option 12-33
Messaging Performance Configuration Parameters 12-33
Compatibility With the Asynchronous Message Pipeline 12-34
Client-side Thread Pools 12-35
Best Practices for JMS .NET Client Applications 12-35
Considerations for Oracle Data Guard Environments 12-35
Pause Destinations for Planned Down Time 12-36
Migrate JMS Services for Unexpected Outages 12-36
Use Custom JSP Tags 16-2
Precompile JSPs 16-2
Use HTML Template Compression 16-2
Use Service Level Agreements 16-2
Related Reading 16-3
Session Management 16-3
Managing Session Persistence 16-3
Minimizing Sessions 16-4
Aggregating Session Data 16-4
Pub-Sub Tuning Guidelines 16-4
Enabling GZIP Compression 16-4
A Capacity Planning
Capacity Planning Factors A-1
Programmatic and Web-based Clients A-2
RMI and Server Traffic A-3
SSL Connections and Performance A-3
WebLogic Server Process Load A-3
Database Server Capacity and User Storage Requirements A-4
Concurrent Sessions A-4
Network Load A-4
Clustered Configurations A-5
Server Migration A-5
Application Design A-5
Assessing Your Application Performance Objectives A-5
Hardware Tuning A-6
Benchmarks for Evaluating Performance A-6
Supported Platforms A-6
Network Performance A-6
Determining Network Bandwidth A-6
Related Information A-7
Preface
This documentation is intended for administrators who monitor the performance of Oracle
WebLogic Server 14.1.1.0.0 and tune components such as JVMs, EJBs, databases, persistent
stores, data sources, and messaging servers.
• Audience
• Documentation Accessibility
• Diversity and Inclusion
• Related Documentation
• Conventions
Audience
This document is written for people who monitor performance and tune the components in a
WebLogic Server environment. It is assumed that readers know server administration and
hardware performance tuning fundamentals, WebLogic Server, XML, and the Java
programming language.
Documentation Accessibility
For information about Oracle's commitment to accessibility, visit the Oracle Accessibility
Program website at http://www.oracle.com/pls/topic/lookup?ctx=acc&id=docacc.
Related Documentation
New and Changed WebLogic Server Features
For a comprehensive listing of the new WebLogic Server features introduced in this
release, see What's New in Oracle WebLogic Server.
Conventions
The following text conventions are used in this document.
Convention   Meaning
boldface     Boldface type indicates graphical user interface elements associated with an
             action, or terms defined in text or the glossary.
italic       Italic type indicates book titles, emphasis, or placeholder variables for which
             you supply particular values.
monospace    Monospace type indicates commands within a paragraph, URLs, code in
             examples, text that appears on the screen, or text that you enter.
1
Top Tuning Recommendations for WebLogic
Server
Tuning Oracle WebLogic Server and your WebLogic Server application is a complex and
iterative process. To get you started, Oracle recommends various tuning techniques to
optimize your application's performance. These tuning techniques are applicable to nearly all
WebLogic applications.
• Tune Pool Sizes
• Use the Prepared Statement Cache
• Use Logging Last Resource Optimization
• Tune Connection Backlog Buffering
• Use Optimistic or Read-only Concurrency
• Use Local Interfaces
• Use eager-relationship-caching
• Tune HTTP Sessions
• Tune Messaging Applications
• Tune Pool Sizes
Provide pool sizes (such as pools for JDBC connections, Stateless Session EJBs, and
MDBs) that maximize concurrency for the expected thread utilization.
• Use the Prepared Statement Cache
The prepared statement cache keeps compiled SQL statements in memory, avoiding a
round-trip to the database when the same statement is used later. (A WLST sketch for this
and for pool sizing follows this list.)
• Use Logging Last Resource Optimization
When using transactional database applications, consider using the JDBC data source
Logging Last Resource (LLR) transaction policy instead of XA.
• Tune Connection Backlog Buffering
You can tune the number of connection requests that a WebLogic Server instance
accepts before refusing additional requests. This tunable applies primarily to Web
applications.
• Use Optimistic or Read-only Concurrency
Use optimistic concurrency with cache-between-transactions or read-only concurrency
with query-caching for CMP EJBs to leverage the Entity Bean cache provided by the EJB
container.
• Use Local Interfaces
Use local-interfaces or use call-by-reference semantics to avoid the overhead of
serialization when one EJB calls another or an EJB is called by a servlet/JSP in the same
application.
• Use eager-relationship-caching
Use eager-relationship-caching to allow the EJB container to load related beans using a
single SQL statement.
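As a hedged illustration of the first two recommendations, the following WLST sketch adjusts
a JDBC data source's pool sizing and prepared statement cache. It is a minimal, hypothetical
example: the admin URL, credentials, the data source name MyDataSource, and all numeric
values are placeholders to replace with values appropriate for your environment.

connect('admin_user', 'admin_password', 't3://adminhost:7001')
edit()
startEdit()
# Assumes a data source named MyDataSource (placeholder).
dsBean = getMBean('/JDBCSystemResources/MyDataSource/JDBCResource/MyDataSource')
poolParams = dsBean.getJDBCConnectionPoolParams()
poolParams.setInitialCapacity(50)     # pre-create connections for the expected concurrency
poolParams.setMaxCapacity(50)         # cap the pool at the expected number of concurrent threads
poolParams.setStatementCacheSize(20)  # prepared statements cached per pooled connection
save()
activate()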
consistency guarantees with the performance gain of caching. See Tuning WebLogic
Server EJBs.
• Query-caching is a WebLogic Server 9.0 feature that allows the EJB container to cache
results for arbitrary non-primary-key finders defined on read-only EJBs. All of these
parameters can be set in the application/module deployment descriptors. See
Concurrency Strategy.
Use eager-relationship-caching
Use eager-relationship-caching to allow the EJB container to load related beans using a
single SQL statement.
Eager relationship caching improves performance by reducing the number of database calls
needed to load related beans when a bean and its related beans are expected to be used in
the same transaction. See Tuning WebLogic Server EJBs.
2
Performance Tuning Roadmap and
Guidelines
Use the performance tuning roadmap to understand your performance objectives and tune
your Oracle WebLogic Server application environment for optimal performance.
• Performance Tuning Roadmap
• Tuning Tips
• Performance Tuning Roadmap
The performance tuning roadmap includes the methods you use to quantify your
performance objectives, such as measuring your performance metrics, locating
bottlenecks in the system, and minimizing the impact of those bottlenecks.
• Tuning Tips
Follow the tuning tips and guidelines when tuning overall system performance.
Tip:
Even if you find that the CPU is 100 percent utilized, you should profile your
application for performance improvements.
your performance objectives. For the scope of this document, this includes (from most
important to least important):
• Tune Your Application
• Tune your DB
• Tune WebLogic Server Performance Parameters
• Tune Your JVM
• Tune the Operating System
• Tuning the WebLogic Persistent Store
Tune your DB
Your database can be a major enterprise-level bottleneck. Database optimization can
be complex and vendor dependent. See Database Tuning.
Tuning Tips
Follow the tuning tips and guidelines when tuning overall system performance.
• Performance tuning is not a silver bullet. Simply put, good system performance depends
on: good design, good implementation, defined performance objectives, and performance
tuning.
• Performance tuning is an ongoing process. Implement mechanisms that provide
performance metrics which you can compare against your performance objectives,
allowing you to schedule a tuning phase before your system fails.
• The objective is to meet your performance objectives, not to eliminate all bottlenecks.
Resources within a system are finite. By definition, at least one resource (CPU, memory,
or I/O) will be a bottleneck in the system. Tuning allows you to minimize the impact of
bottlenecks on your performance objectives.
• Design your applications with performance in mind:
– Keep things simple - avoid inappropriate use of published patterns.
– Apply Java EE performance patterns.
– Optimize your Java code.
3
Tuning Java Virtual Machines (JVMs)
The Java virtual machine (JVM) in Oracle WebLogic Server is a virtual "execution engine"
instance that executes the bytecodes in Java class files on a microprocessor. How you tune
your JVM affects the performance of WebLogic Server and your applications. Configure the
JVM tuning options for WebLogic Server.
• JVM Tuning Considerations
• Garbage Collection
• Increasing Java Heap Size for Managed Servers
• JVM Tuning Considerations
Examine some general JVM tuning considerations for WebLogic Server.
• Changing To a Different JVM
When you create a domain, you choose the JVM that you want to run your domain and
the Configuration Wizard configures the Oracle start scripts based on your choice. To change
the JVM after the domain is created, modify the values for the JAVA_HOME and JAVA_VENDOR
variables in the start scripts.
• Garbage Collection
Garbage collection is the VM's process of freeing up unused Java objects in the Java
heap.
• Increasing Java Heap Size for Managed Servers
For better performance, increase the heap size for each Managed Server in your
environment.
Garbage Collection
Garbage collection is the VM's process of freeing up unused Java objects in the Java
heap.
The following sections provide information on tuning your VM's garbage collection:
• VM Heap Size and Garbage Collection
• Choosing a Garbage Collection Scheme
• Using Verbose Garbage Collection to Determine Heap Size
• Specifying Heap Size Values
• Tuning Tips for Heap Sizes
• Java HotSpot VM Heap Size Options
• Manually Requesting Garbage Collection
• Requesting Thread Stacks
2. Use the -verbosegc option to turn on verbose garbage collection output for your
JVM and redirect both the standard error and standard output to a log file.
This places thread dump information in the proper context with WebLogic Server
informational and error messages, and provides a more useful log for diagnostic
purposes.
For example, on Windows and Solaris, enter the following:
% java -ms32m -mx200m -verbosegc -classpath $CLASSPATH
-Dweblogic.Name=%SERVER_NAME% -Dbea.home="C:\Oracle\Middleware"
-Dweblogic.management.username=%WLS_USER%
-Dweblogic.management.password=%WLS_PW%
-Dweblogic.management.server=%ADMIN_URL%
-Dweblogic.ProductionModeEnabled=%STARTMODE%
-Djava.security.policy="%WL_HOME%\server\lib\weblogic.policy"
weblogic.Server >> logfile.txt 2>&1
where >> logfile.txt 2>&1 redirects both the standard error and standard output to a
log file.
3. Analyze the following data points:
a. How often is garbage collection taking place? In the weblogic.log file, compare
the time stamps around the garbage collection.
b. How long is garbage collection taking? Full garbage collection should not take
longer than 3 to 5 seconds.
c. What is your average memory footprint? In other words, what does the heap
settle back down to after each full garbage collection? If the heap always
settles to 85 percent free, you might set the heap size smaller.
4. Review the New generation heap sizes, see Java HotSpot VM Heap Size Options.
5. Make sure that the heap size is not larger than the available free RAM on your
system.
Use as large a heap size as possible without causing your system to "swap" pages
to disk. The amount of free RAM on your system depends on your hardware
configuration and the memory requirements of running processes on your
machine. See your system administrator for help in determining the amount of free
RAM on your system.
6. If you find that your system is spending too much time collecting garbage (your
allocated virtual memory is more than your RAM can handle), lower your heap
size.
Typically, you should use 80 percent of the available RAM (not taken by the
operating system or other processes) for your JVM.
7. If you find that you have a large amount of available free RAM remaining, run
more instances of WebLogic Server on your machine.
Remember, the goal of tuning your heap size is to minimize the time that your JVM
spends doing garbage collection while maximizing the number of clients that
WebLogic Server can handle at a given time.
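For example (hypothetical numbers): on a dedicated machine with 16 GB of RAM where the
operating system and other processes consume about 4 GB, 80 percent of the remaining
12 GB suggests a heap in the range of 9 to 10 GB. If garbage collection pauses then exceed
a few seconds, reduce the heap or run additional server instances rather than enlarging it
further.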
You must specify Java heap size values each time you start an instance of
WebLogic Server. This can be done either from the java command line or by modifying the
default values in the sample startup scripts that are provided with the WebLogic distribution
for starting WebLogic Server.
• Tuning Tips for Heap Sizes
• Java HotSpot VM Heap Size Options
For example, when you start a WebLogic Server instance from a java command line,
you could specify the HotSpot VM heap size values as follows:
$ java -XX:NewSize=128m -XX:MaxNewSize=128m -XX:SurvivorRatio=8 -Xms512m -Xmx512m
The default size for these values is measured in bytes. Append the letter 'k' or 'K' to
the value to indicate kilobytes, 'm' or 'M' to indicate megabytes, and 'g' or 'G' to indicate
gigabytes. The example above allocates 128 megabytes of memory to the New
generation and maximum New generation heap sizes, and 512 megabytes of memory
to the minimum and maximum heap sizes for the WebLogic Server instance running in
the JVM.
• Other Java HotSpot VM Options
See Starting Managed Servers with a Startup Script in Administering Server Startup and
Shutdown for Oracle WebLogic Server.
4
Tuning WebLogic Diagnostic Framework and
Java Flight Recorder Integration
Follow the recommended tips and guidelines to tune WebLogic Diagnostic Framework
(WLDF) and Java Flight Recorder of Oracle WebLogic Server.
• Using Java Flight Recorder
• Using WLDF
• Tuning Considerations
• Using Java Flight Recorder
Java Flight Recorder is a performance monitoring and profiling tool that records
diagnostic information on a continuous basis, making it always available, even in the
wake of catastrophic failure such as a system crash.
• Using WLDF
If WebLogic Server is configured with Oracle HotSpot, and the Java Flight Recorder is
enabled, the Java Flight Recorder data is automatically also captured in the diagnostic
image capture. This data can be extracted from the diagnostic image capture and viewed
in Java Mission Control. If Java Flight Recorder is not enabled, or if WebLogic Server is
configured with a different JVM, the Java Flight Recorder data is not captured in the
diagnostics image capture.
• Tuning Considerations
In most environments, there is little performance impact when the Diagnostic Volume is
set to Low and the most performance impact if Diagnostic Volume is set to High. The
volume of diagnostic data produced by WebLogic Server needs to be weighed against
potential performance loss.
Using WLDF
If WebLogic Server is configured with Oracle HotSpot, and the Java Flight Recorder is
enabled, the Java Flight Recorder data is automatically also captured in the diagnostic image
capture. This data can be extracted from the diagnostic image capture and viewed in Java
Mission Control. If Java Flight Recorder is not enabled, or if WebLogic Server is configured
with a different JVM, the Java Flight Recorder data is not captured in the diagnostics
image capture.
The volume of Java Flight Recorder data that is captured can be configured using the
Diagnostic Volume attribute in the WebLogic Server Administration Console; see
Configuring WLDF Diagnostic Volume in Configuring and Using the Diagnostics
Framework for Oracle WebLogic Server. You can also set the volume using WLST.
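For example, the following hedged WLST sketch sets the volume to Low for a server named
myserver; the server name, credentials, and admin URL are placeholders.

connect('admin_user', 'admin_password', 't3://adminhost:7001')
edit()
startEdit()
cd('/Servers/myserver/ServerDiagnosticConfig/myserver')
cmo.setWLDFDiagnosticVolume('Low')   # accepted values include Off, Low, Medium, and High
save()
activate()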
Tuning Considerations
In most environments, there is little performance impact when the Diagnostic Volume
is set to Low and the most performance impact if Diagnostic Volume is set to High.
The volume of diagnostic data produced by WebLogic Server needs to be weighed
against potential performance loss.
5
Tuning WebLogic Server
Learn how to tune Oracle WebLogic Server to match your application.
• Setting Java Parameters for Starting WebLogic Server
• Development vs. Production Mode Default Tuning Values
• Deployment
• Thread Management
• Tuning Network I/O
• Tuning the Work Manager Queue Size
• Optimize Java Expressions
• Using WebLogic Server Clusters to Improve Performance
• Monitoring a WebLogic Server Domain
• Tuning Class and Resource Loading
• SSL Considerations
• Setting Java Parameters for Starting WebLogic Server
Java parameters must be specified whenever you start WebLogic Server.
• Development vs. Production Mode Default Tuning Values
You can indicate whether a domain is to be used in a development environment or a
production environment. WebLogic Server uses different default values for various
services depending on the type of environment you specify.
• Deployment
Learn techniques to improve deployment performance.
• Thread Management
WebLogic Server provides the following mechanisms to manage threads to perform work.
• Tuning Network I/O
Learn about network communication between clients and servers (including T3 and IIOP
protocols, and their secure versions).
• Tuning the Work Manager Queue Size
By default, the queue size for the Work Manager’s maximum threads constraint is 8,192
(8K). During times of high load (when the machine CPU runs at 100% utilization), Work
Manager instances may be unable to process messages in the queue quickly enough
using this default setting.
• Optimize Java Expressions
Set the optimize-java-expression element to optimize Java expressions to improve
runtime performance.
• Using WebLogic Server Clusters to Improve Performance
A WebLogic Server cluster is a group of WebLogic Server instances that together
provide fail-over and replicated services to support scalable, high-availability operations
for clients within a domain. A cluster appears to its clients as a single server but is in fact
a group of servers acting as one to provide increased scalability and reliability.
• Change the value of the variable JAVA_HOME to the location of your JDK. For
example:
set JAVA_HOME=myjdk_location
where myjdk_location is the path to your supported JDK for this release. See
Oracle Fusion Middleware Supported System Configurations.
• For higher performance throughput, set the minimum Java heap size equal to the
maximum heap size. For example:
"%JAVA_HOME%\bin\java" -server –Xms512m –Xmx512m -classpath %CLASSPATH% -
See Specifying Heap Size Values for details about setting heap size options.
For information about how the security and performance-related configuration parameters
differ when switching from one domain mode to another, see How Domain Mode Affects the
Default Security Configuration in Securing a Production Environment for Oracle WebLogic
Server.
Deployment
Learn techniques to improve deployment performance.
• On-demand Deployment of Internal Applications
• Use FastSwap Deployment to Minimize Redeployment Time
• Generic Overrides
Generic Overrides
Generic overrides allow you to override application-specific property files without having to
crack open a JAR file, by placing the files to be overridden into the optional AppFileOverrides
subdirectory. For more information on how to use and configure this feature, see Generic
File Loading Overrides in Deploying Applications to WebLogic Server.
Thread Management
WebLogic Server provides the following mechanisms to manage threads to perform
work.
• Tuning a Work Manager
• Self-Tuning Thread Pool Size
• How Many Work Managers are Needed?
• What are the SLA Requirements for Each Work Manager?
• Understanding the Differences Between Work Managers and Execute Queues
• Migrating from Previous Releases
• Tuning the Stuck Thread Detection Behavior
depending on the demand from the various Work Managers. The position of a request
in the execute queue is determined by its internal priority:
• The higher the priority, the closer it is placed to the head of the execute queue.
• The closer a request is to the head of the queue, the more quickly it is dispatched to a
thread.
Work Managers give you better control of thread utilization (and therefore server
performance) than execute queues, primarily because of the many ways you can
specify scheduling guidelines for the priority-based thread pool. These scheduling
guidelines can be set either as numeric values or as the capacity of a server-managed
resource, such as a JDBC connection pool.
Tuning Muxers
WebLogic Server uses software modules called muxers to read incoming requests on the
server and incoming responses on the client. WebLogic Server supports the following muxer
types:
• Java Non-Blocking IO (NIO) Muxer
• Native Muxers
• Server Location and Supported Platforms
Native Muxers
Native Muxers are not recommended for most environments. If you must enable these
muxers, the value of the MuxerClass attribute must be explicitly set:
The POSIX Native Muxer provides performance improvements for larger messages and
payloads on UNIX-like systems that support the poll system call, such as Solaris and HP-UX:
-Dweblogic.MuxerClass=weblogic.socket.PosixSocketMuxer
Native muxers use platform-specific native binaries to read data from sockets. Most
platforms provide some mechanism to poll a socket for data. For example, Unix systems
use the poll system call and the Windows architecture uses completion ports. Native
muxers implement a non-blocking thread model. When a native muxer is used, the
server creates a fixed number of threads dedicated to reading incoming requests. Prior
to WebLogic Server 12.1.2, Oracle recommended using native muxers, which were
referred to as performance packs.
For WebLogic Server 12.1.2 and later releases, the Java Non-Blocking IO (NIO) muxer
is the default and recommended muxer. However, Oracle still provides native muxers as
an option for users upgrading from WebLogic Server versions prior to 12.1.2, to maximize
consistency of the runtime environment after upgrading. See Enable Native IO in
Oracle WebLogic Server Administration Console Help.
With native muxers, you may be able to improve throughput for some CPU-bound
applications by using the following:
-Dweblogic.socket.SocketMuxer.DELAY_POLL_WAKEUP=xx
Install Location
Supported Platforms
The native library supports the following platforms:
oracle.wls.core.app.server.nativelib/template.xml: dest="server/native/solaris/sparc64/libmuxer.so" source="wlserver/server/native/solaris/sparc64/libmuxer.so"
oracle.wls.core.app.server.nativelib/template.xml: dest="server/native/solaris/sparc/libmuxer.so" source="wlserver/server/native/solaris/sparc/libmuxer.so"
oracle.wls.core.app.server.nativelib/template.xml: dest="server/native/solaris/x64/libmuxer.so" source="wlserver/server/native/solaris/x64/libmuxer.so"
oracle.wls.core.app.server.nativelib/template.xml: dest="server/native/solaris/x86/libmuxer.so" source="wlserver/server/native/solaris/x86/libmuxer.so"
oracle.wls.core.app.server.nativelib/template.xml: dest="server/native/linux/s390x/libmuxer.so" source="wlserver/server/native/linux/s390x/libmuxer.so"
oracle.wls.core.app.server.nativelib/template.xml: dest="server/native/linux/ia64/libmuxer.so" source="wlserver/server/native/linux/ia64/libmuxer.so"
oracle.wls.core.app.server.nativelib/template.xml: dest="server/native/aix/ppc64/libmuxer.so" source="wlserver/server/native/aix/ppc64/libmuxer.so"
oracle.wls.core.app.server.nativelib/template.xml: dest="server/native/aix/ppc/libmuxer.so" source="wlserver/server/native/aix/ppc/libmuxer.so"
oracle.wls.core.app.server.nativelib/template.xml: dest="server/native/macosx/libmuxer.jnilib" source="wlserver/server/native/macosx/libmuxer.jnilib"
oracle.wls.core.app.server.nativelib/template.xml: dest="server/native/hpux11/IPF64/libmuxer.so" source="wlserver/server/native/hpux11/IPF64/libmuxer.so"
oracle.wls.core.app.server.tier1nativelib/template.xml: dest="server/native/linux/i686/libmuxer.so" source="wlserver/server/native/linux/i686/libmuxer.so"
oracle.wls.core.app.server.tier1nativelib/template.xml: dest="server/native/linux/x86_64/libmuxer.so" source="wlserver/server/native/linux/x86_64/libmuxer.so"
Network Channels
Network channels, also called network access points, allow you to specify different quality of
service (QOS) parameters for network communication. Each network channel is associated
with its own exclusive socket using a unique IP address and port. By default, T3 requests
from a multi-threaded client are multiplexed over the same remote connection and the server
instance reads requests from the socket one at a time. If the request size is large, this
becomes a bottleneck.
Although the primary role of a network channel is to control the network traffic for a server
instance, you can leverage the ability to create multiple custom channels to allow a multi-
threaded client to communicate with a server instance over multiple connections, reducing
the potential for a bottleneck. To configure custom multi-channel communication, use the
following steps:
1. Configure multiple network channels using different IP and port settings. See Configure
custom network channels in Oracle WebLogic Server Administration Console Online
Help. (A WLST sketch of this step follows the list.)
2. In your client-side code, use a JNDI URL pattern similar to the pattern used in clustered
environments. The following is an example for a client using two network channels:
t3://<ip1>:<port1>,<ip2>:<port2>
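The following hedged WLST sketch corresponds to step 1 and creates two custom T3
channels on a server named myserver. All names, addresses, and ports are placeholders,
and the attributes you set may vary with your topology.

connect('admin_user', 'admin_password', 't3://adminhost:7001')
edit()
startEdit()
cd('/Servers/myserver')
cmo.createNetworkAccessPoint('Channel-1')
cd('/Servers/myserver/NetworkAccessPoints/Channel-1')
cmo.setProtocol('t3')
cmo.setListenAddress('10.0.0.5')   # placeholder address
cmo.setListenPort(7010)
cd('/Servers/myserver')
cmo.createNetworkAccessPoint('Channel-2')
cd('/Servers/myserver/NetworkAccessPoints/Channel-2')
cmo.setProtocol('t3')
cmo.setListenAddress('10.0.0.5')
cmo.setListenPort(7011)
save()
activate()

A client configured as in step 2 would then use a URL such as
t3://10.0.0.5:7010,10.0.0.5:7011 (again, addresses and ports are placeholders).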
Note that when you tune the number of file descriptors for WebLogic Server, your changes
should be in balance with any changes made to the complete message timeout parameter. A
higher complete message timeout setting results in a socket not closing until the message
timeout occurs, which therefore results in a longer hold on the file descriptor. So if the
complete message timeout setting is high, the file descriptor limit should also be set high.
This balance provides optimal system availability with reduced potential for denial-of-service
attacks.
For information about how to tune the number of available file descriptors, consult your UNIX
vendor's documentation.
In the following example, the target is specified by the server name (Server-0), and the queue
size is increased to 65,536 (64K).
<max-threads-constraint>
<name>ClusterMessaging-max</name>
<target>Server-0</target>
<count>1</count>
<queue-size>65536</queue-size>
</max-threads-constraint>
<work-manager>
<name>ClusterMessaging</name>
<target>Server-0</target>
<max-threads-constraint>ClusterMessaging-max</max-threads-constraint>
</work-manager>
You can specify the target using either a server name or a cluster name.
Note:
Provided that you have resolved all application and environment bottleneck issues,
adding additional servers to a cluster should provide linear scalability. When doing
benchmark or initial configuration test runs, isolate issues in a single server
environment before moving to a clustered environment.
Database Bottlenecks
In many cases where a cluster of WebLogic Server instances fails to scale, the database is
the bottleneck. In such situations, the only solutions are to tune the database or to reduce
load on the database by exploring other options. See Database Tuning and Tuning
Data Sources.
Session Replication
User session data can be stored in two standard ways in a Java EE application:
stateful session EJBs or HTTP sessions. By themselves, they rarely impact
cluster scalability. However, when coupled with a session replication mechanism
required to provide high-availability, bottlenecks are introduced. If a Java EE
application has Web and EJB components, you should store user session data in
HTTP sessions:
• HTTP session management provides more options for handling fail-over, such as
replication, a shared DB or file.
• Superior scalability.
• Replication of the HTTP session state occurs outside of any transactions. Stateful
session bean replication occurs in a transaction which is more resource intensive.
• The HTTP session replication mechanism is more sophisticated and provides
optimizations for a wider variety of situations than stateful session bean replication.
See Session Management.
Topology   Behavior
LAN        Replication to a secondary server within the same cluster occurs asynchronously
           with the "async-replication" setting in the webapp.
MAN        Replication to a secondary server in a remote cluster occurs asynchronously with
           the "async-replication" setting in the webapp.
WAN        Replication to a secondary server within the cluster occurs asynchronously with
           the "async-replication" setting in the webapp. Persistence to a database through a
           remote cluster occurs asynchronously regardless of whether "async-replication"
           or "replication" is chosen.
ReadOnly with a read-write pattern—In this pattern, persistent data that would
otherwise be represented by a single EJB is actually represented by two EJBs: one
read-only and the other updateable. When the state of the updateable bean changes,
the container automatically invalidates the corresponding read-only EJB instance. If
updates to the EJBs are frequent, the work done by the servers to invalidate the read-
only EJBs becomes a serious bottleneck.
In this example, resource filtering has been configured for the exact resource name
"x/y" and for any resource whose name starts with "z". '*' is the only wildcard pattern
allowed. Resources with names matching these patterns are searched for only on the
application classpath; the system classpath search is skipped.
Note:
If you add a class or resource to the filtering configuration and subsequently
get exceptions indicating the class or resource isn't found, the most likely
cause is that the class or resource is on the system classpath, not on the
application classpath.
Class Caching
WebLogic Server allows you to enable class caching for faster startup. Once you
enable caching, the server records all the classes loaded until a specific criterion is
reached and persists the class definitions in an invisible file. When the server restarts,
the cache is checked for validity against the existing code sources and the server uses
the cache file to bulk load the same sequence of classes recorded in the previous run.
If any change is made to the system classpath or its contents, the cache will be invalidated
and re-built on server restart.
The advantages of using class caching are:
• Reduces server startup time.
• The package level index reduces search time for all classes and resources.
See Configuring Class Caching in Developing Applications for Oracle WebLogic Server.
Note:
Class caching is supported in development mode when starting the server using a
startWebLogic script. Class caching is disabled by default and is not supported in
production mode. The decrease in startup time varies among different JRE vendors.
SSL Considerations
If WebLogic Server is configured with JDK 7, you may find that out-of-the-box SSL
performance is slower than in previous WebLogic Server releases. This performance change
is due to the stronger cipher and MAC algorithm used by default when JDK 7 is used with the
JSSE-based SSL provider in WebLogic Server.
See SSL Performance Considerations in Administering Security for Oracle WebLogic Server.
6
Tuning the WebLogic Persistent Store
The persistent store provides a built-in, high-performance storage solution for Oracle
WebLogic Server subsystems and services that require persistence. Tune the persistent
store by tuning JDBC stores, file stores, and following the best practices when using
persistent stores.
• Overview of Persistent Stores
• Best Practices When Using Persistent Stores
• Tuning JDBC Stores
• Tuning File Stores
• Using a Network File System
• Overview of Persistent Stores
Each server instance, including the Administration Server, has a default persistent store
that requires no configuration. In addition to using the default file store, you can also
configure a file-based store, a JDBC-accessible store, a JDBC TLOG store, and a file-
based paging store.
• Best Practices When Using Persistent Stores
Learn the best practices for using WebLogic persistent stores.
• Tuning JDBC Stores
Review information on tuning JDBC stores.
• Tuning File Stores
Learn about tuning file stores.
• Using a Network File System
Learn about using a WebLogic persistent store with a Network File System (NFS).
Note:
Paged persistent messages are potentially physically stored in two different places:
• Always in a recoverable default or custom store.
• Potentially in a paging directory.
Note:
Most Flash storage devices are a single point of failure and are typically only
accessible as a local device. They are suitable for JMS server paging stores, which
do not recover data after a failure and automatically reconstruct themselves as
needed.
In most cases, Flash storage devices are not suitable for custom or default stores,
which typically contain data that must be safely recoverable. A configured
Directory attribute of a default or custom store should not normally reference a
directory that is on a single point of failure device.
Use the following steps to use a Flash storage device to page JMS messages:
1. Set the JMS server Message Paging Directory attribute to the path of your flash storage
device, see Specifying a Message Paging Directory.
2. Tune the Message Buffer Size attribute (it controls when paging becomes active). You
may be able to use lower threshold values as faster I/O operations provide improved load
absorption. See Tuning the Message Buffer Size Option.
3. Tune JMS server quotas to safely account for any Flash storage space limitations.
This ensures that your JMS servers will not attempt to page more messages than
the device can store, which could otherwise lead to runtime errors or automatic
shutdowns. As a conservative rule of thumb, assume page file usage will be at
least 1.5 * ((maximum number of active messages) * (512 + average message
body size)), rounded up to the nearest 16 MB. See Defining Quota.
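A hedged WLST sketch of these three steps follows; the JMS server name, flash mount
point, and all sizes are hypothetical placeholders rather than recommendations.

connect('admin_user', 'admin_password', 't3://adminhost:7001')
edit()
startEdit()
cd('/JMSServers/MyJMSServer')
cmo.setPagingDirectory('/mnt/flash/jms-paging')  # step 1: Message Paging Directory on the flash device
cmo.setMessageBufferSize(33554432)               # step 2: lower threshold (bytes) so paging starts sooner
# Step 3 (worked sizing example, hypothetical numbers): 100,000 active messages with
# 2 KB average bodies need roughly 1.5 * 100,000 * (512 + 2048) bytes, about 384 MB of
# page file space, so keep quotas low enough that this fits on the device with room to spare.
cmo.setMessagesMaximum(100000)
cmo.setBytesMaximum(268435456)                   # about 256 MB of message body quota
save()
activate()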
JDBC stores in Oracle WebLogic Server Administration Console Online Help and Using
the WebLogic Persistent Store in Administering the WebLogic Persistent Store.
Note:
Certain older versions of Microsoft Windows may incorrectly report
storage device synchronous write completion if the Windows default
Write Cache Enabled setting is used. This violates the transactional
semantics of transactional products (not specific to Oracle), including file
stores configured with a Direct-Write (default) or Direct-Write-With-
Cache policy, as a system crash or power failure can lead to a loss or a
duplication of records/messages. One visible symptom of this problem is
persistent message/transaction throughput that exceeds the physical
capabilities of your storage device.
You can address the problem by applying a Microsoft supplied patch,
disabling the Windows Write Cache Enabled setting, or by using a
power-protected storage device.
• When performing head-to-head vendor comparisons, make sure all the write
policies for the persistent store are equivalent. Some non-WebLogic vendors
default to the equivalent of Disabled.
• Depending on the synchronous write policy, custom and default stores have a
variety of additional tunable attributes that may improve performance. These
include CacheDirectory, MaxWindowBufferSize, IOBufferSize, BlockSize,
InitialSize, and MaxFileSize. See JMSFileStoreMBean in the MBean
Reference for Oracle WebLogic Server.
Note:
The JMSFileStoreMBean is deprecated, but the individual bean attributes
apply to the non-deprecated beans for custom and default file stores.
primary files should be stored in remote storage for high availability, whereas cache files are
strictly for performance and not for high availability and can be stored locally.
When the Direct-Write-With-Cache synchronous write policy is selected, there are several
additional tuning options that you should consider:
• Setting the CacheDirectory. For performance reasons, the cache directory should be
located on a local file system. It is placed in the operating system temp directory by
default.
• Increasing the MaxWindowBufferSize and IOBufferSize attributes. These tune native
memory usage of the file store.
• Increasing the InitialSize and MaxFileSize tuning attributes. These tune the initial size
of a store, and the maximum file size of a particular file in the store respectively.
• Tune the BlockSize attribute. See Tuning the File Store Block Size.
For more information on individual tuning parameters, see the JMSFileStoreMBean in the
MBean Reference for Oracle WebLogic Server.
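A hedged WLST sketch of these settings on a custom file store follows; the store name,
cache directory, and all sizes are placeholders rather than recommended values, and the
same attributes can also be set in the Administration Console or configuration files.

connect('admin_user', 'admin_password', 't3://adminhost:7001')
edit()
startEdit()
cd('/FileStores/MyFileStore')
cmo.setSynchronousWritePolicy('Direct-Write-With-Cache')
cmo.setCacheDirectory('/tmp/myfilestore-cache')  # local file system; primary files stay on shared storage
cmo.setMaxWindowBufferSize(2097152)              # native memory window buffer, in bytes
cmo.setInitialSize(1073741824)                   # pre-allocate 1 GB for the store
cmo.setMaxFileSize(1342177280)                   # maximum size of an individual store file, in bytes
cmo.setBlockSize(4096)                           # ideally matches the file system block size
save()
activate()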
• Using Flash Storage to Increase Performance
• Additional Considerations
Additional Considerations
Consider the following when tuning the Direct-Write-With-Cache policy:
• There may be additional security and file locking considerations when using the Direct-
Write-With-Cache synchronous write policy. See Securing a Production Environment for
Oracle WebLogic Server and the CacheDirectory and LockingEnabled attributes of the
JMSFileStoreMBean in the MBean Reference for Oracle WebLogic Server.
– The JMSFileStoreMBean is deprecated, but the individual bean attributes apply to the
non-deprecated beans for custom and default file stores.
• It is safe to delete a cache directory while the store is not running, but this may slow
down the next store boot. Cache files are re-used to speed up the file store boot and
recovery process, but only if the store's host WebLogic server has been shut down
cleanly prior to the current boot (not after kill -9, nor after an OS/JVM crash) and there
was no off-line change to the primary files (such as a store admin compaction). If the
existing cache files cannot be safely used at boot time, they are automatically discarded
and new files are created. In addition, a Warning log 280102 is generated. After a
migration or failover event, this same Warning message is generated, but can be ignored.
• If a Direct-Write-With-Cache file store fails to load a wlfileio native driver, the
synchronous write policy automatically changes to the equivalent of Direct-Write with
Note:
The AvoidDirectIO properties described in this section are still supported in
this release, but have been deprecated as of 11gR1PS2. Use the
configurable Direct-Write-With-Cache synchronous write policy as an
alternative to the Direct-Write policy.
For file stores with the synchronous write policy of Direct-Write, you may be directed
by Oracle Support or a release note to set weblogic.Server options on the command
line or start script of the JVM that runs the store:
• Globally changes all stores running in the JVM:
-Dweblogic.store.AvoidDirectIO=true
• For the default store, where server-name is the name of the server hosting the
store:
-Dweblogic.store._WLS_server-name.AvoidDirectIO=true
Setting AvoidDirectIO on an individual store overrides the setting of the global -
Dweblogic.store.AvoidDirectIO option. For example: If you have two stores, A and
B, and set the following options:
-Dweblogic.store.AvoidDirectIO=true
-Dweblogic.store.A.AvoidDirectIO=false
then store B (and any other store in the JVM) avoids direct I/O, while store A does not.
Note:
Setting the AvoidDirectIO option may have performance implications which
often can be mitigated using the block size setting described in Tuning the
File Store Block Size.
Note:
The BlockSize command line properties that are described in this section
are still supported in 11gR1PS2, but are deprecated. Oracle recommends
using the BlockSize configurable on custom and default file stores instead.
To set the block size of a store, use one of the following properties on the command
line or start script of the JVM that runs the store:
• Globally sets the block size of all file stores that don't have pre-existing files.
-Dweblogic.store.BlockSize=block-size
• Sets the block size for a specific file store that doesn't have pre-existing files.
-Dweblogic.store.store-name.BlockSize=block-size
• Sets the block size for the default file store, if the store doesn't have pre-existing
files:
-Dweblogic.store._WLS_server-name.BlockSize=block-size
The value used to set the block size is an integer between 512 and 8192 which is
automatically rounded down to the nearest power of 2.
Setting BlockSize on an individual store overrides the setting of the global -
Dweblogic.store.BlockSize option. For example: If you have two stores, A and B,
and set the following options:
-Dweblogic.store.BlockSize=8192
-Dweblogic.store.A.BlockSize=512
then store B has a block size of 8192 and store A has a block size of 512.
Note:
Setting the block size using command line properties only takes effect for file
stores that have no pre-existing files. If a store has pre-existing files, the
store continues to use the block size that was set when the store was first
created.
File locking errors are caused when two file stores using the same files are started
simultaneously. The following information can be used to prevent or correct the errors:
• If two file stores with the same name from two different domains are configured
to use the same directory, shut down WebLogic Server and change the
configuration of the conflicting stores to use different directories. This cannot
occur within a single domain, because WebLogic Server prevents two file stores
in the same domain from being configured with the same name.
• WebLogic does not support starting multiple deployments of the same domain
from different sites when they are using shared persistent storage locations.
• WebLogic does not support starting multiple WebLogic Servers of the same name
from different sites when they are using shared persistent storage locations.
File locking errors can also be caused by an "abandoned lock," which occurs when the
owning file store is no longer running after a machine failure, operating system crash,
or virtual machine destruction. The NFS storage device does not become aware of
the problem immediately, so any subsequent attempt by WebLogic Server to acquire
locks on the previously locked files may fail. You can perform the tasks
described in the following solutions to unlock the logs and data files:
For more information about locking of files stored in NFS mounted directories on the storage
device, see your storage vendor’s documentation.
• Solution 1 – Using NFS v4 Instead of NFS v3
• Solution 2 - Copying Data Files to Remove NFS Locks
• Solution 3 - Disabling File Locks in WebLogic Server File Stores
Note:
You should be very cautious when using this option. It is critical to ensure that a file
store is shut down before copying the file store’s files. Otherwise, the files may get
corrupted. Additional procedural precautions must be implemented to avoid any
human error and to ensure that no instance of file store is manually started at any
given point in time during the copy. Similarly, extra precautions must be taken to
ensure that no two domains have a store with the same name that references the
same directory.
Manually unlock the default store, paging store, and JMS store data files and start the servers
by first ensuring that there are no running file stores that are using the files, then creating a
copy of the locked persistence store file, and using the copy for subsequent operations.
To create a copy of the locked persistence store file, rename the file, and then copy it back to
its original name. The following sample steps assume that transaction logs are stored in the /
shared/tlogs directory and JMS data is stored in the /shared/jms directory.
cd /shared/tlogs
mv _WLS_SOA_SERVER1000000.DAT _WLS_SOA_SERVER1000000.DAT.old
cp _WLS_SOA_SERVER1000000.DAT.old _WLS_SOA_SERVER1000000.DAT
cd /shared/jms
mv SOAJMSFILESTORE_AUTO_1000000.DAT SOAJMSFILESTORE_AUTO_1000000.DAT.old
cp SOAJMSFILESTORE_AUTO_1000000.DAT.old SOAJMSFILESTORE_AUTO_1000000.DAT
mv UMSJMSFILESTORE_AUTO_1000000.DAT UMSJMSFILESTORE_AUTO_1000000.DAT.old
cp UMSJMSFILESTORE_AUTO_1000000.DAT.old UMSJMSFILESTORE_AUTO_1000000.DAT
In this solution, the WebLogic file locking mechanism continues to provide protection
from the data corruption that can occur when multiple instances of the same server
are started accidentally. However, the servers must be restarted manually after
abrupt machine failures. In addition, file stores create multiple consecutively numbered
.DAT files when they are used to store large amounts of data. All files may need to
be copied and renamed when this occurs.
Note:
With this solution, because WebLogic Server file locking is disabled,
automated server restarts and failovers are expected to succeed. However,
you should be very cautious when using this option. The WebLogic file
locking feature is designed to help prevent severe file corruption that can
occur in concurrency scenarios.
using database tables and prevents automated restart of more than one instance of the
same WebLogic Server.
• Avoid disabling file locks on a file store that is using Automatic Service Migration (ASM).
– ASM requires file store locking to work safely and is activated in the following
scenarios:
1. A custom file store target is set to a Migratable Target and the Migratable Target
is configured with a Migration Policy other than 'manual' (the default).
2. A custom file store target is set to a WebLogic cluster when the store is
configured with a Migration Policy other than 'Off' (the default).
3. A WebLogic Server is configured with a JTA Migratable Target with a Migration
Policy value other than ‘manual’ (the default), as this in turn can lead to default
file store migrations.
– If both ASM and disabling file locks are required, store your critical data in database
stores instead of file stores to avoid the risk of file corruptions. For example, use a
custom JDBC store instead of a file store and configure JTA to use a JDBC TLOG
store instead of each WebLogic Server’s default file store.
• Additional procedural precautions must be implemented to avoid any human error and to
ensure that only one instance of a server is manually started at any given point in time.
Similarly, precautions must be taken to ensure that no two domains have a store with the
same name that references the same directory.
You can use a system property or WebLogic Server configuration to disable WebLogic file
locking mechanisms for the default file store, custom file store, a JMS paging file store, and a
Diagnostics file store, as described in the following sections:
• Disabling File Locking for all Stores Using a System Property
• Disabling File Locking for the Default File Store
• Disabling File Locking for a Custom File Store
• Disabling File Locking for a JMS Paging File Store
• Disabling File Locking for a Diagnostics File Store
<block-size>-1</block-size>
<initial-size>0</initial-size>
<file-locking-enabled>false</file-locking-enabled>
<target>examplesServer</target>
</file-store>
Example 6-6 Example config.xml Entry for Disabling File Locking for a
Diagnostics File Store
<server>
<name>examplesServer</name>
...
<server-diagnostic-config>
<diagnostic-store-dir>data/store/diagnostics</diagnostic-store-dir>
<diagnostic-store-file-locking-enabled>false</diagnostic-store-file-locking-enabled>
<diagnostic-data-archive-type>FileStoreArchive</diagnostic-data-archive-type>
<data-retirement-enabled>true</data-retirement-enabled>
<preferred-store-size-limit>100</preferred-store-size-limit>
<store-size-check-period>1</store-size-check-period>
</server-diagnostic-config>
</server>
7
Database Tuning
Follow the Oracle WebLogic Server tuning guidelines to prevent your database from
becoming a major enterprise-level bottleneck by configuring it for optimal performance.
• General Suggestions
• Database-Specific Tuning
• General Suggestions
Review general database tuning suggestions.
• Database-Specific Tuning
Consider the basic tuning suggestions for Oracle, SQL Server, and Sybase databases.
General Suggestions
Review general database tuning suggestions.
• Good database design — Distribute the database workload across multiple disks to avoid
or reduce disk overloading. Good design also includes proper sizing and organization of
tables, indexes, and logs.
• Disk I/O optimization — Disk I/O optimization is related directly to throughput and
scalability. Access to even the fastest disk is orders of magnitude slower than memory
access. Whenever possible, optimize the number of disk accesses. In general, selecting
a larger block/buffer size for I/O reduces the number of disk accesses and might
substantially increase throughput in a heavily loaded production environment.
• Checkpointing — This mechanism periodically flushes all dirty cache data to disk, which
increases the I/O activity and system resource usage for the duration of the checkpoint.
Although frequent checkpointing can increase the consistency of on-disk data, it can also
slow database performance. Most database systems have checkpointing capability, but
not all database systems provide user-level controls. Oracle, for example, allows
administrators to set the frequency of checkpoints, while users have no control over
SQL Server 7.x checkpoints. For recommended settings, see the product documentation
for the database you are using.
• Disk and database overhead can sometimes be dramatically reduced by batching
multiple operations together and/or increasing the number of operations that run in
parallel (increasing concurrency). Examples:
– Increasing the value of the Messaging Bridge BatchSize or the Store-and-Forward
WindowSize can improve performance, as larger batch sizes produce fewer but larger
I/Os.
– Programmatically leveraging JDBC's batch APIs (see the sketch after this list).
– Use the MDB transaction batching feature. See Tuning Message-Driven Beans.
– Increasing concurrency by increasing max-beans-in-free-pool and thread pool size
for MDBs (or decreasing it if batching can be leveraged).
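The JDBC batching suggestion above can be illustrated with a minimal sketch. The ORDERS table and its columns are hypothetical; the batching calls themselves (addBatch and executeBatch) are part of the standard java.sql API:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

public class OrderBatchInsert {
    // Queues many inserts and sends them to the database in one batch,
    // trading many small round trips for one larger operation.
    public static void insertOrders(DataSource ds, int[] orderIds, double[] amounts)
            throws SQLException {
        try (Connection con = ds.getConnection();
             PreparedStatement ps = con.prepareStatement(
                     "INSERT INTO ORDERS (ORDER_ID, AMOUNT) VALUES (?, ?)")) {  // hypothetical table
            for (int i = 0; i < orderIds.length; i++) {
                ps.setInt(1, orderIds[i]);
                ps.setDouble(2, amounts[i]);
                ps.addBatch();      // queue the statement instead of executing it immediately
            }
            ps.executeBatch();      // execute all queued statements in one batch
        }
    }
}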
Database-Specific Tuning
Consider the basic tuning suggestions for Oracle, SQL Server, and Sybase databases.
• Oracle
• Microsoft SQL Server
• Sybase
Note:
Always check the tuning guidelines in your database-specific vendor
documentation.
Oracle
This section describes performance tuning for Oracle.
• Number of processes — On most operating systems, each connection to the
Oracle server spawns a shadow process to service the connection. Thus, the
maximum number of processes allowed for the Oracle server must account for the
number of simultaneous users, as well as the number of background processes
used by the Oracle server. The default number is usually not big enough for a
system that needs to support a large number of concurrent operations. For
platform-specific issues, see your Oracle administrator's guide. The current setting
of this parameter can be obtained with the following query:
SELECT name, value FROM v$parameter WHERE name = 'processes';
• Buffer pool size — The buffer pool usually is the largest part of the Oracle server
system global area (SGA). This is the location where the Oracle server caches
data that it has read from disk. For read-mostly applications, the single most
important statistic that affects database performance is the buffer cache hit ratio.
The buffer pool should be large enough to provide upwards of a 95% cache hit
ratio. Set the buffer pool size by changing the value, in database blocks, of the
db_cache_size parameter in the init.ora file.
• Shared pool size — The shared pool is an important part of the Oracle server
system global area (SGA). The SGA is a group of shared memory structures that
contain data and control information for one Oracle database instance. If multiple
users are concurrently connected to the same instance, the data in the instance's
SGA is shared among the users. The shared pool portion of the SGA caches data
for two major areas: the library cache and the dictionary cache. The library cache
stores SQL-related information and control structures (for example, parsed SQL
statements, locks). The dictionary cache stores operational metadata for SQL
processing.
For most applications, the shared pool size is critical to Oracle performance. If the shared
pool is too small, the server must dedicate resources to managing the limited amount of
available space. This consumes CPU resources and causes contention because Oracle
imposes restrictions on the parallel management of the various caches. The more you
use triggers and stored procedures, the larger the shared pool must be. The
SHARED_POOL_SIZE initialization parameter specifies the size of the shared pool in bytes.
The following query monitors the amount of free memory in the shared pool:
SELECT * FROM v$sgastat
WHERE name = 'free memory' AND pool = 'shared pool';
• Maximum opened cursors — To prevent any single connection from taking all the resources in
the Oracle server, the OPEN_CURSORS initialization parameter allows administrators to limit
the maximum number of opened cursors for each connection. Unfortunately, the default
value for this parameter is too small for systems such as WebLogic Server. Cursor
information can be monitored using the following query:
SELECT name, value FROM v$sysstat
WHERE name LIKE 'opened cursor%';
• Database block size — A block is Oracle's basic unit for storing data and the smallest unit
of I/O. One data block corresponds to a specific number of bytes of physical database
space on disk. This concept of a block is specific to Oracle RDBMS and should not be
confused with the block size of the underlying operating system. Since the block size
affects physical storage, this value can be set only during the creation of the database; it
cannot be changed once the database has been created. The current setting of this
parameter can be obtained with the following query:
SELECT name, value FROM v$parameter WHERE name = 'db_block_size';
• Sort area size — Increasing the sort area increases the performance of large sorts
because it allows the sort to be performed in memory during query processing. This can
be important, as there is only one sort area for each connection at any point in time. The
default value of this init.ora parameter is usually the size of 6–8 data blocks. This value
is usually sufficient for OLTP operations but should be increased for decision support
operations, large bulk operations, or large index-related operations (for example,
recreating an index). When performing these types of operations, you should tune the
following init.ora parameters (which are currently set for 8K data blocks):
sort_area_size = 65536
sort_area_retained_size = 65536
Sybase
The following guidelines pertain to performance tuning parameters for Sybase databases. For
more information about these parameters, see your Sybase documentation.
8
Tuning WebLogic Server EJBs
Tune the Oracle WebLogic Server EJBs for your application environment by following the
general EJB tuning tips, and tuning EJB caches and pools.
• General EJB Tuning Tips
• Tuning EJB Caches
• Tuning EJB Pools
• CMP Entity Bean Tuning
• Tuning In Response to Monitoring Statistics
• General EJB Tuning Tips
Use the general EJB tuning tips to optimize the application's performance.
• Tuning EJB Caches
Learn how to tune EJB caches.
• Tuning EJB Pools
Learn how to tune EJB pools.
• CMP Entity Bean Tuning
The largest performance gains in entity beans are achieved by using caching to minimize
the number of interactions with the data base. However, in most situations, it is not
realistic to be able to cache entity beans beyond the scope of a transaction. Learn about
the WebLogic Server EJB container features that you can use to minimize database
interaction safely.
• Tuning In Response to Monitoring Statistics
The WebLogic Server Administration Console reports a wide variety of EJB runtime
monitoring statistics, many of which are useful for tuning the performance of your EJBs.
Learn how some of these statistics can help you tune the performance of EJBs.
Tuning EJB Caches
• Transaction-Level Caching
• Caching between Transactions
• Ready Bean Caching
Transaction-Level Caching
Once an entity bean has been loaded from the database, it is always retrieved from the
cache whenever it is requested when using the findByPrimaryKey or invoked from a cached
reference in that transaction. Getting an entity bean using a non-primary key finder always
retrieves the persistent state of the bean from the data base.
Is it safe to cache the state? This depends on the concurrency-strategy for that bean. The
entity-bean cache is really only useful when cache-between-transactions can be safely set
to true. In cases where ejbActivate() and ejbPassivate() callbacks are expensive, it is still
a good idea to ensure the entity-cache size is large enough. Even though the persistent state
may be reloaded at least once per transaction, the beans in the cache are already activated.
The value of the cache-size is set by the deployment descriptor parameter max-beans-in-
cache and should be set to maximize cache-hits. In most situations, the value need not be
larger than the product of the number of rows in the table associated with the entity bean and
the number of threads expected to access the bean concurrently.
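As a hypothetical worked example of this rule of thumb, if the table backing an entity bean holds roughly 1,000 rows and at most 50 threads are expected to access the bean concurrently, the upper bound for max-beans-in-cache would be 1,000 * 50 = 50,000; a smaller value that still produces a high cache-hit rate is usually preferable because each cached bean consumes memory.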
CMP Entity Bean Tuning
This element should only be set to true if the CMP bean's related beans can never be
null or an empty set.
The default value is false. If you specify its value as true, all relationship cache queries
on the entity bean use an inner join instead of a left outer join to execute the select
query clause.
Tuned Updates
When an entity EJB is updated, the EJB container automatically updates in the database
only those fields that have actually changed. As a result, the update statements
are simpler, and if a bean has not been modified, no database call is made. Because
different transactions may modify different sets of fields, more than one form of update
statement may be used to store the bean in the database. It is important that you
account for the types of update statements that may be used when setting the size of
the prepared statement cache in the JDBC connection pool. See Cache Prepared and
Callable Statements.
Note:
Be careful to ensure that fields that are accessed in the same transaction are not
configured into separate field-groups. If that happens, multiple database calls occur
to load the same bean, when one would have been enough.
include-updates
This flag causes the EJB container to flush all modified entity beans to the database before
executing a finder. If the application modifies the same entity bean more than once and
executes a non-pk finder in between in the same transaction, multiple updates to the
database are issued. This flag is turned on by default to comply with the EJB specification.
If the application has transactions where two invocations of the same or different finders
could return the same bean instance and that bean instance could have been modified
between the finder invocations, it makes sense to leave include-updates turned on. If not,
this flag may be safely turned off. This eliminates an unnecessary flush to the database if the
bean is modified again after executing the second finder. This flag is specified for each finder
in the cmp-rdbms descriptor.
call-by-reference
When call-by-reference is turned off, method parameters to an EJB are passed by value, which involves
serialization. For mutable, complex types, this can be significantly expensive. Consider using
call-by-reference for better performance when:
• The application does not require call-by-value semantics; for example, method parameters
are not modified by the EJB.
or
• If parameters are modified by the EJB, the changes do not need to be invisible to the caller of the method.
This flag applies to all EJBs, not just entity EJBs. It also applies to EJB invocations between
servlets/JSPs and EJBs in the same application. The flag is turned off by default to comply
with the EJB specification. This flag is specified at the bean level in the WebLogic-specific
deployment descriptor.
Note:
If the lock is not an exclusive lock, you may encounter deadlock conditions. If
the database lock is a shared lock, there is potential for deadlocks when
using that RDBMS.
Concurrency Strategy
The concurrency-strategy deployment descriptor tells the EJB container how to
handle concurrent access of the same entity bean by multiple threads in the same
server instance. Set this parameter to one of four values:
• Exclusive—The EJB container ensures there is only one instance of an EJB for a
given primary key and this instance is shared among all concurrent transactions in
the server, with the container serializing access to it. This concurrency setting
generally does not provide good performance unless the EJB is used infrequently
and the chance of concurrent access is small.
• Database—This is the default value and the most commonly used concurrency
strategy. The EJB container defers concurrency control to the database. The
container maintains multiple instances of an EJB for a given primary key and each
transaction gets its own copy. In combination with this strategy, the database
isolation-level and bean-level pessimistic locking play a major role in determining if
concurrent access to the persistent state should be allowed. It is possible for
multiple transactions to access the bean concurrently so long as it does not need
to go to the database, as would happen when the value of cache-between-
transactions is true. However, setting the value of cache-between-transactions
to true is unsafe and not recommended with the Database concurrency strategy.
• Optimistic—The goal of the optimistic concurrency strategy is to minimize locking
at the database while continuing to provide data consistency. The basic
assumption is that the persistent state of the EJB is changed very rarely. The
container attempts to load the bean in a nested transaction so that the isolation-
level settings of the outer transaction do not cause locks to be acquired at the
database. At commit time, if the bean has been modified, a predicated update is
used to ensure its persistent state has not been changed by some other
transaction. If it has, an OptimisticConcurrencyException is thrown and must be
handled by the application.
Since EJBs that can use this concurrency strategy are rarely modified, turning
cache-between-transactions on can boost performance significantly. This
strategy also allows commit-time verification of beans that have been read, but not
changed. This is done by setting the verify-rows parameter to Read in the cmp-
rdbms descriptor. This provides very high data consistency while at the same time
minimizing locks at the database. However, it does slow performance somewhat. It is
recommended that the optimistic verification be performed using a version column: it is
faster, followed closely by timestamp, and more distantly by modified and read. The
modified value does not apply if verify-rows is set to Read.
When an optimistic concurrency bean is modified in a server that is part of a cluster, the
server attempts to invalidate all instances of that bean cluster-wide in the expectation that
it will prevent OptimisticConcurrencyExceptions. In some cases, it may be more cost
effective to simply let other servers throw an OptimisticConcurrencyException. In this
case, turn off the cluster-wide invalidation by setting the cluster-invalidation-
disabled flag in the cmp-rdbms descriptor.
• ReadOnly—The ReadOnly value is the most performant. When selected, the container
assumes the EJB is non-transactional and automatically turns on cache-between-
transactions. Bean states are updated from the database at periodic, configurable
intervals or when the bean has been programmatically invalidated. The interval between
updates can cause the persistent state of the bean to become stale. This is the only
concurrency-strategy for which query-caching can be used. See Caching between
Transactions.
A high cache miss ratio could be indicative of an improperly sized cache. If your application
uses a certain subset of beans (that is, primary keys) more frequently than others, it would be
ideal to size your cache large enough so that the commonly used beans can remain in
the cache as less commonly used beans are cycled in and out upon demand. If this is
the nature of your application, you may be able to decrease your cache miss ratio
significantly by increasing the maximum size of your cache.
If your application doesn't necessarily use a subset of beans more frequently than
others, increasing your maximum cache size may not affect your cache miss ratio. We
recommend testing your application with different maximum cache sizes to determine
which gives the lowest cache miss ratio. It is also important to keep in mind that your
server has a finite amount of memory and therefore there is always a trade-off to
increasing your cache size.
A high lock waiter ratio can indicate a suboptimal concurrency strategy for the bean. If
acceptable for your application, a concurrency strategy of Database or Optimistic will
allow for more parallelism than an Exclusive strategy and remove the need for locking
at the EJB container level.
Because locks are generally held for the duration of a transaction, reducing the
duration of your transactions will free up beans more quickly and may help reduce
your lock waiter ratio. To reduce transaction duration, avoid grouping large amounts of
work into a single transaction unless absolutely necessary.
The lock timeout ratio is closely related to the lock waiter ratio. If you are concerned
about the lock timeout ratio for your bean, first take a look at the lock waiter ratio and
our recommendations for reducing it (including possibly changing your concurrency
strategy). If you can reduce or eliminate the number of times a thread has to wait for a
lock on a bean, you will also reduce or eliminate the amount of timeouts that occur
while waiting.
A high lock timeout ratio may also be indicative of an improper transaction timeout
value. The maximum amount of time a thread will wait for a lock is equal to the current
transaction timeout value.
If the transaction timeout value is set too low, threads may not be waiting long enough
to obtain access to a bean and timing out prematurely. If this is the case, increasing
the trans-timeout-seconds value for the bean may help reduce the lock timeout ratio.
Take care when increasing the trans-timeout-seconds, however, because doing so can
cause threads to wait longer for a bean and threads are a valuable server resource.
Also, doing so may increase the request time, as a request may wait longer before
timing out.
If your pool miss ratio is high, you must determine what is happening to your bean instances.
There are three things that can happen to your beans.
• They are in use.
• They were destroyed.
• They were removed.
Follow these steps to diagnose the problem:
1. Check your destroyed bean ratio to verify that bean instances are not being destroyed.
2. Investigate the cause and try to remedy the situation.
3. Examine the demand for the EJB, perhaps over a period of time.
One way to check this is via the Beans in Use Current Count and Idle Beans Count displayed
in the WebLogic Server Administration Console. If demand for your EJB spikes during a
certain period of time, you may see a lot of pool misses as your pool is emptied and unable to
fill additional requests.
As the demand for the EJB drops and beans are returned to the pool, many of the beans
created to satisfy requests may be unable to fit in the pool and are therefore removed. If this
is the case, you may be able to reduce the number of pool misses by increasing the
maximum size of your free pool. This may allow beans that were created to satisfy demand
during peak periods to remain in the pool so they can be used again when demand once
again increases.
To reduce the number of destroyed beans, Oracle recommends against throwing non-
application exceptions from your bean code except in cases where you want the bean
instance to be destroyed. A non-application exception is an exception that is either a
java.rmi.RemoteException (including exceptions that inherit from RemoteException) or is not
defined in the throws clause of a method of an EJB's home or component interface.
In general, you should investigate which exceptions are causing your beans to be destroyed,
as they may be hurting performance and may indicate a problem with the EJB or a resource
used by the EJB.
Pool Timeout Ratio = (Pool Total Timeout Count / Pool Total Access Count) * 100
A high pool timeout ratio could be indicative of an improperly sized free pool.
Increasing the maximum size of your free pool via the max-beans-in-free-pool
setting will increase the number of bean instances available to service requests and
may reduce your pool timeout ratio.
Another factor affecting the number of pool timeouts is the configured transaction
timeout for your bean. The maximum amount of time a thread will wait for a bean from
the pool is equal to the default transaction timeout for the bean. Increasing the trans-
timeout-seconds setting in your weblogic-ejb-jar.xml file will give threads more
time to wait for a bean instance to become available.
Users should exercise caution when increasing this value, however, since doing so
may cause threads to wait longer for a bean and threads are a valuable server
resource. Also, request time might increase because a request will wait longer before
timing out.
A high transaction timeout ratio could be caused by the wrong transaction timeout
value. For example, if your transaction timeout is set too low, you may be timing out
transactions before the thread is able to complete the necessary work. Increasing your
transaction timeout value may reduce the number of transaction timeouts.
You should exercise caution when increasing this value, however, since doing so can
cause threads to wait longer for a resource before timing out. Also, request time might
increase because a request will wait longer before timing out.
A high transaction timeout ratio could be caused by a number of things such as a
bottleneck for a server resource. We recommend tracing through your transactions to
investigate what is causing the timeouts so the problem can be addressed.
9
Tuning Message-Driven Beans
Use the tuning and best practice information of Oracle WebLogic Server for Message-Driven
Beans (MDBs).
• Use Transaction Batching
• MDB Thread Management
• Best Practices for Configuring and Deploying MDBs Using Distributed Topics
• Using MDBs with Foreign Destinations
• Token-based Message Polling for Transactional MDBs Listening on Queues/Topics
• Compatibility for WLS 10.0 and Earlier-style Polling
• Use Transaction Batching
MDB transaction batching allows several JMS messages to be processed in one
container managed transaction. Batching amortizes the cost of transactions over multiple
messages and when used appropriately, can reduce or even eliminate the throughput
difference between 2PC and 1PC processing.
• MDB Thread Management
Thread management for MDBs is described in terms of concurrency—the number of
MDB instances that can be active at the same time. Review information about MDB
concurrency.
• Best Practices for Configuring and Deploying MDBs Using Distributed Topics
Message-driven beans provide a number of application design and deployment options
that offer scalability and high availability when using distributed topics. Follow the best
practices for configuring and deploying MDBs.
• Using MDBs with Foreign Destinations
Review information on the behavior of WebLogic Server when using MDBs that consume
messages from foreign destinations.
• Token-based Message Polling for Transactional MDB Listening on Queues/Topics
The token-based polling approach provides better control of the concurrent poller
thread count under changing message loads. Transactional WebLogic MDB uses a
synchronous polling mechanism to retrieve messages from JMS destinations. With
synchronous polling, one or more WebLogic polling threads synchronously receive
messages from the MDB's source destination and then invoke the MDB application's
onMessage callback.
• Compatibility for WLS 10.0 and Earlier-style Polling
In WLS 10.0 and earlier, transactional MDBs with batching enabled created a dedicated
polling thread for each deployed MDB. This polling thread was not allocated from the pool
specified by dispatch-policy; it was an entirely new thread in addition to all the other
threads running on the system.
MDB Thread Management
Note:
Every application is unique; select a concurrency strategy based on how your
application performs in its environment.
Note:
You must configure the max-threads-constraint parameter to override the
default concurrency of 16.
• In WebLogic Server 8.1, you could increase the size of the default execute queue
knowing that a larger default pool means a larger maximum MDB concurrency. Default
thread pool MDBs upgraded to WebLogic Server 9.0 will have a fixed maximum of 16. To
achieve MDB concurrency numbers higher than 16, you will need to create a custom
work manager or custom execute queue. See Table 9-1.
Note:
The term "foreign destination" in this context refers to destinations that are hosted
by a non-WebLogic JMS provider. It does not refer to remote WebLogic
destinations.
10
Tuning Data Sources
To get the best performance from your Oracle WebLogic Server data sources, use the
recommended tips to tune the data sources.
• Tune the Number of Database Connections
• Waste Not
• Use Test Connections on Reserve with Care
• Cache Prepared and Callable Statements
• Using Pinned-To-Thread Property to Increase Performance
• Database Listener Timeout under Heavy Server Loads
• Disable Wrapping of Data Type Objects
• Advanced Configurations for Oracle Drivers and Databases
• Use Best Design Practices
• Tune the Number of Database Connections
Creating a database connection is a relatively expensive process in any environment. A
straightforward and easy way to boost performance of a data source in WebLogic Server
applications is to set the value of Initial Capacity equal to the value for Maximum
Capacity when configuring connection pools in your data source.
• Waste Not
Another simple way to boost performance is to avoid wasting resources. Read about
situations in which you can avoid wasting JDBC related resources.
• Use Test Connections on Reserve with Care
When Test Connections on Reserve is enabled, the server instance checks a database
connection prior to returning the connection to a client. This reduces the risk of passing
invalid connections to clients.
• Cache Prepared and Callable Statements
When you use a prepared statement or callable statement in an application or EJB, there
is considerable processing overhead for the communication between the application
server and the database server and on the database server itself. To minimize the
processing costs, WebLogic Server can cache prepared and callable statements used in
your applications.
• Using Pinned-To-Thread Property to Increase Performance
To minimize the time it takes for an application to reserve a database connection from a
data source and to eliminate contention between threads for a database connection, add
the Pinned-To-Thread property in the connection Properties list for the data source, and
set its value to true.
• Database Listener Timeout under Heavy Server Loads
In some situations where WebLogic Server is under heavy loads, the database listener
may timeout and throw an exception while creating a new connection. To workaround this
issue, increase the listener timeout on the database server.
Tune the Number of Database Connections
Note that if you configure the value of Initial Capacity to be zero, WebLogic Server
does not get a connection during startup. This provides a big startup performance
gain, especially if several data sources are available. But more importantly, it allows
the data source to be deployed on startup, even if the database is not available or has
problems at startup (or it could be a standby data source that is not even available
when the primary service is running).
There are two situations in which a connection is reserved, even if Initial Capacity
is zero:
1. For a multi data source configured for LLR, a connection is reserved on each
member data source to determine if the underlying database is an Oracle Real
Application Clusters (Oracle RAC) database. If it is Oracle RAC, only one of the
member data sources must be available.
2. For an Active GridLink (AGL) data source configured with auto-ONS (that is, with
no ONS host and port pairs provided), a connection is created to get the ONS
configuration information from the database.
See Tuning Data Source Connection Pool Options in Administering JDBC Data
Sources for Oracle WebLogic Server.
Waste Not
Another simple way to boost performance is to avoid wasting resources. Read about
situations in which you can avoid wasting JDBC related resources.
• JNDI lookups are relatively expensive, so caching an object that required a lookup in
client code or application code avoids incurring this performance hit more than once.
• Once client or application code has a connection, maximize the reuse of this connection
rather than closing and reacquiring a new connection. While acquiring and returning an
existing connection is much less expensive than creating a new one, excessive acquisitions
and returns to pools create contention in the connection pool and degrade application
performance.
• Don't hold connections any longer than is necessary to achieve the work needed. Getting
a connection once, completing all necessary work, and returning it as soon as possible
provides the best balance for overall performance.
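The points above can be combined into a minimal sketch. The JNDI name java:comp/env/jdbc/ExampleDS and the CUSTOMER table are hypothetical; the pattern shown is simply to look up and cache the DataSource once, then reserve a connection, do all related work, and close it promptly so it returns to the pool:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class CustomerDao {
    // Cache the looked-up DataSource; JNDI lookups are relatively expensive.
    private static volatile DataSource dataSource;

    private static DataSource getDataSource() throws NamingException {
        if (dataSource == null) {
            dataSource = (DataSource) new InitialContext()
                    .lookup("java:comp/env/jdbc/ExampleDS");   // hypothetical JNDI name
        }
        return dataSource;
    }

    public String findName(int customerId) throws Exception {
        // Reserve the connection, do all the work, and return it to the pool promptly.
        try (Connection con = getDataSource().getConnection();
             PreparedStatement ps = con.prepareStatement(
                     "SELECT NAME FROM CUSTOMER WHERE ID = ?")) {   // hypothetical table
            ps.setInt(1, customerId);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString(1) : null;
            }
        }
    }
}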
Using Pinned-To-Thread Property to Increase Performance
In this release, the Pinned-To-Thread feature does not work with multi data sources,
Oracle RAC, and IdentityPool. These features rely on the ability to return a connection
to the connection pool and reacquire it if there is a connection failure or connection
identity does not match.
See JDBC Data Source: Configuration: Connection Pool in the Oracle WebLogic
Server Administration Console Online Help.
11
Tuning Transactions
Learn tuning guidelines of Oracle WebLogic Server to optimize transaction performance.
• Improving Throughput Using XA Transaction Cluster Affinity
• Logging Last Resource Transaction Optimization
• Read-only_ One-Phase Commit Optimizations
• Configure XA Transactions without TLogs
• Improving Throughput Using XA Transaction Cluster Affinity
XA transaction cluster affinity allows server instances that are participating in a global
transaction to service related requests rather than load-balancing these requests to
other member servers.
• Logging Last Resource Transaction Optimization
The Logging Last Resource (LLR) transaction optimization through JDBC data sources
safely reduces the overhead of two-phase transactions involving database inserts,
updates, and deletes. Two-phase transactions occur when two different resources
participate in the same global transaction (global transactions are often referred to as
"XA" or "JTA" transactions).
• Read-only, One-Phase Commit Optimizations
When resource managers, such as the Oracle Database (including Oracle AQ and
Oracle RAC), provide read-only optimizations, Oracle WebLogic can provide a read-only,
one-phase commit optimization that provides a number of benefits – even when enabling
multiple connections in the same XA transaction – such as eliminating
XAResource.prepare network calls and transaction log writes, both in Oracle WebLogic
and in the resource manager.
• Configure XA Transactions without TLogs
Improve XA transaction performance by eliminating TLogs when XA transactions span a
single Transaction Manager (TM). XA transaction resources (Determiners) are used
during transaction recovery when a TLog is not present.
See Configure clusters in Oracle WebLogic Server Administration Console Online Help
and XA Transaction Affinity in Administering Clusters for Oracle WebLogic Server.
12
Tuning WebLogic JMS
Get the most out of your applications by implementing the administrative performance tuning
features available with Oracle WebLogic Server JMS.
• JMS Performance & Tuning Check List
• Handling Large Message Backlogs
• Cache and Re-use Client Resources
• Tuning Distributed Queues
• Tuning Topics
• Tuning for Large Messages
• Defining Quota
• Blocking Senders During Quota Conditions
• Subscription Message Limits
• Controlling the Flow of Messages on JMS Servers and Destinations
• Handling Expired Messages
• Tuning Applications Using Unit-of-Order
• Using JMS 2.0 Asynchronous Message Sends
• Using One-Way Message Sends
• Tuning the Messaging Performance Preference Option
• Client-side Thread Pools
• Best Practices for JMS .NET Client Applications
• Considerations for Oracle Data Guard Environments
• JMS Performance & Tuning Check List
Review a checklist of items to consider when tuning WebLogic JMS.
• Handling Large Message Backlogs
When message senders inject messages faster than consumers can process them, messages
accumulate into a message backlog.
• Cache and Re-use Client Resources
JMS client resources are relatively expensive to create in comparison with sending and
receiving messages. These resources should be cached or pooled for re-use rather than
recreating them with each message. They include contexts, destinations, connection
factories, connections, sessions, consumers, and producers. A minimal caching sketch
follows this list.
• Tuning Distributed Queues
Each distributed queue member is individually advertised in JNDI as jms-server-
name@distributed-destination-jndi-name. If produced messages are failing to load
balance evenly across all distributed queue members, you may wish to change the
configuration of your producer connection factories to disable server affinity (enabled by
default) or set Producer Load Balancing Policy to Per-JVM.
• Tuning Topics
Review information on how to tune WebLogic Topics.
• Tuning for Large Messages
Learn how to improve JMS performance when handling large messages.
• Defining Quota
It is highly recommended to always configure message count quotas. Quotas help
prevent large message backlogs from causing out-of-memory errors, and
WebLogic JMS does not set quotas by default.
• Blocking Senders During Quota Conditions
• Subscription Message Limits
In Oracle WebLogic JMS 12.2.1.3.0 and later, you can help prevent overloaded
subscriptions from using all the available resources by configuring a message limit
for a topic or a template. To configure a message limit, set the
MessagesLimitOverride attribute on a destination template, a standalone topic, or
a uniform distributed topic.
• Controlling the Flow of Messages on JMS Servers and Destinations
With the Flow Control feature, you can direct a JMS server or destination to slow
down message producers when it determines that it is becoming overloaded.
• Handling Expired Messages
Active message expiration ensures that expired messages are cleaned up
immediately. Expired message auditing gives you the option of tracking expired
messages, either by logging when a message expires or by redirecting expired
messages to a defined error destination.
• Tuning Applications Using Unit-of-Order
Message Unit-of-Order is a WebLogic Server value-added feature that enables a
stand-alone message producer, or a group of producers acting as one, to group
messages into a single unit with respect to the processing order (a sub-ordering).
This single unit is called a Unit-of-Order (or UOO) and requires that all messages
from that unit be processed sequentially in the order they were created.
• Using JMS 2.0 Asynchronous Message Sends
WebLogic Server 12.2.1.0 introduced a standard way to do asynchronous sends
that is flexible, powerful, and supported by the standard JMS 2.0 asynchronous
send method. A minimal asynchronous send sketch follows this list.
• Using One-Way Message Sends
One-way message sends can greatly improve the performance of applications that
are bottle-necked by senders, but do so at the risk of introducing a lower QOS
(quality-of-service). By enabling the One-Way Send Mode options, you allow
message producers created by a user-defined connection factory to do one-way
message sends, when possible.
• Tuning the Messaging Performance Preference Option
The Messaging Performance Preference tuning option on JMS destinations
enables you to control how long a destination should wait (if at all) before creating
full batches of available messages for delivery to consumers.
• Client-side Thread Pools
WebLogic client thread pools are configured differently than WebLogic server
thread-pools, and are not self tuning. Use the -Dweblogic.ThreadPoolSize=n
command-line property to configure the thread pools.
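As noted in the Cache and Re-use Client Resources item above, a minimal caching sketch might look like the following. The JNDI names are hypothetical; the point is that the context, connection factory, destination, connection, session, and producer are created once and reused for many sends, rather than recreated per message. Note that a JMS session and producer should not be shared by multiple threads at once.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.InitialContext;

public class CachedJmsSender {
    // Created once and reused; these objects are expensive relative to a single send.
    private final Connection connection;
    private final Session session;
    private final MessageProducer producer;

    public CachedJmsSender() throws Exception {
        InitialContext ctx = new InitialContext();
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/ExampleCF");   // hypothetical
        Queue queue = (Queue) ctx.lookup("jms/ExampleQueue");                     // hypothetical
        connection = cf.createConnection();
        session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        producer = session.createProducer(queue);
    }

    public void send(String text) throws Exception {
        // Only the message itself is created per send; everything else is cached.
        producer.send(session.createTextMessage(text));
    }

    public void close() throws Exception {
        connection.close();   // closing the connection also closes the session and producer
    }
}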
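For the JMS 2.0 asynchronous send item above, a minimal sketch using the standard javax.jms API might look like the following. The connectionFactory and queue parameters are assumed to have been looked up from JNDI elsewhere (hypothetical names); the CompletionListener callback is invoked when the send completes on the server.

import javax.jms.CompletionListener;
import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;
import javax.jms.Message;
import javax.jms.Queue;

public class AsyncSendExample {
    // connectionFactory and queue are assumed to be obtained from JNDI by the caller.
    public static void sendAsync(ConnectionFactory connectionFactory, Queue queue, String text) {
        try (JMSContext context = connectionFactory.createContext()) {
            context.createProducer()
                   .setAsync(new CompletionListener() {
                       @Override
                       public void onCompletion(Message message) {
                           // The send has completed successfully on the server.
                       }
                       @Override
                       public void onException(Message message, Exception exception) {
                           // The send failed; the application decides how to recover.
                           exception.printStackTrace();
                       }
                   })
                   .send(queue, text);
            // Closing the context blocks until outstanding asynchronous sends complete.
        }
    }
}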
singleton destination, target the connection factory to the same server that hosts
the destination.
• If JTA transactions include both JMS and JDBC operations, consider enabling the
JDBC LLR optimization. LLR is a commonly used safe "ACID" optimization that
can lead to significant performance improvements, with some drawbacks. See
Tuning Transactions.
• If you are using Java clients, avoid thin Java clients except when a small jar size is
more important than performance. Thin clients use the slower IIOP protocol even
when T3 is specified, so use a full Java client instead. See Developing Standalone
Clients for Oracle WebLogic Server.
• Tune JMS Store-and-Forward according to Tuning WebLogic JMS Store-and-
Forward.
• Tune a WebLogic Messaging Bridge according to Tuning WebLogic Message Bridge.
• For asynchronous message sends, see Using JMS 2.0 Asynchronous Message
Sends (preferred), or if JMS 2.0 is not an option, and you are using non-persistent
non-transactional remote producer clients, then consider enabling one-way calls.
See Using One-Way Message Sends.
• Consider using JMS distributed queues. See Using Distributed Queues in
Developing JMS Applications for Oracle WebLogic Server.
• If you are already using distributed queues, see Tuning Distributed Queues.
• Consider using advanced distributed topic features (PDTs). See Developing
Advanced Pub/Sub Applications in Developing JMS Applications for Oracle
WebLogic Server.
• If your applications use Topics, see Tuning Topics.
• Avoid configuring sorted destinations, including priority-sorted destinations. FIFO
or LIFO destinations are the most efficient. Destination sorting can be expensive
when there are large message backlogs; even a backlog of a few hundred
messages can lower performance.
• Use careful selector design. See Filtering Messages in Developing JMS
Applications for Oracle WebLogic Server.
• Run applications on the same WebLogic Servers that are also hosting
destinations. This eliminates networking and some or all marshalling overhead,
and can heavily reduce network and CPU usage. It also helps ensure that
transactions are local to a single server. This is one of the major advantages of
using an application server's embedded messaging.
• Can lead to high garbage collection (GC) overhead. A JVM's GC overhead is partially
proportional to the number of live objects in the JVM.
• Improving Message Processing Performance
• Controlling Message Production
that the producer threads are running in a size limited dedicated thread pool, as this
ensures that the blocking threads do not interfere with activity in other thread pools. For
example, if an EJB or servlet is calling a "send" that might block for a significant time:
configure a custom work manager with a max threads constraint, and set the dispatch-
policy of the EJB/servlet to reference this work-manager.
Once created, a JMS consumer remains pinned to a particular queue member. This can lead
to situations where consumers are not evenly load balanced across all distributed queue
members, particularly if new members become available after all consumers have been
initialized. If consumers fail to load balance evenly across all distributed queue members, the
best option is to use an MDB that's targeted to a cluster designed to process the messages.
WebLogic MDBs automatically ensure that all distributed queue members are serviced. If
MDBs are not an option, here are some suggestions to improve consumer load balancing:
• Ensure that your application is creating enough consumers and the consumer's
connection factory is tuned using the available load balancing options. In particular,
consider disabling the default server affinity setting and consider setting the Producer
Load Balancing Policy to Per-JVM.
• Change applications to periodically close and recreate consumers. This forces
consumers to re-load balance.
• Consume from individual queue members instead of from the distributed queue's
logical name.
• Configure the distributed queue to enable forwarding. Distributed queue
forwarding automatically forwards messages that have been idle on a
member destination without consumers to a member that has consumers. This
approach may not be practical for high message load applications.
Note:
Queue forwarding is not compatible with the WebLogic JMS Unit-of-
Order feature, as it can cause messages to be delivered out of order.
Tuning Topics
Review information on how to tune WebLogic Topics.
• You may want to convert singleton topics to distributed topics.
• Oracle highly recommends leveraging MDBs to process Topic messages,
especially when working with Distributed Topics. MDBs automate the creation and
servicing of multiple subscriptions and also provide high scalability options to
automatically distribute the messages for a single subscription across multiple
Distributed Topic members.
• There is a Sharable subscription extension that allows messages on a single topic
subscription to be processed in parallel by multiple subscribers on multiple JVMs.
WebLogic MDBs leverage this feature when they are not in Compatibility mode.
• If the application can tolerate the deletion of old messages without having them be
processed by a consumer, consider using message expirations or subscription
limits. See Defining an Expiration Logging Policy and Subscription Message
Limits.
• If produced messages are failing to load balance evenly across the members of a
Distributed Topic, you may need to change the configuration of your producer
connection factories to disable server affinity (enabled by default) or set Producer
Load Balancing Policy to Per-JVM.
• Before using any of these previously mentioned advanced features, Oracle
recommends fully reviewing the following related documentation:
– Configuring and Deploying MDBs Using Distributed Topics in Developing
Message-Driven Beans for Oracle WebLogic Server
– Developing Advanced Pub/Sub Applications in Administering JMS Resources
for Oracle WebLogic Server
– Advanced Programming with Distributed Destinations Using the JMS
Destination Availability Helper API in Administering JMS Resources for Oracle
WebLogic Server
• Tuning Non-durable Topic Publishers
Tuning MessageMaximum
WebLogic JMS pipelines messages that are delivered to asynchronous consumers
(otherwise known as message listeners) or prefetch-enabled synchronous consumers. This
action aids performance because messages are aggregated when they are internally pushed
from the server to the client. The message backlog (the size of the pipeline) between the
JMS server and the client is tunable by configuring the MessagesMaximum setting on the
connection factory. See Asynchronous Message Pipeline in Developing JMS Applications for
Oracle WebLogic Server.
In some circumstances, tuning the MessagesMaximum parameter may improve performance
dramatically, such as when the JMS application defers acknowledgements or commits. In this
case, Oracle suggests setting the MessagesMaximum value to:
2 * (ack or commit interval) + 1
For example, if the JMS application acknowledges 50 messages at a time, set the
MessagesMaximum value to 101.
Note:
This setting applies to all WebLogic Server network packets delivered to the
client, not just JMS related packets.
For instructions on configuring default compression thresholds using the WebLogic Server
Administration Console, see:
• Connection factories — Configure default delivery parameters in the Oracle WebLogic
Server Administration Console Online Help.
• Store-and-Forward (SAF) remote contexts — Configure SAF remote contexts in the
Oracle WebLogic Server Administration Console Online Help.
Once configured, message compression is triggered on producers for client sends, on
connection factories for message receives and message browsing, or through SAF
forwarding. Messages are compressed using GZIP. Compression only occurs when message
producers and consumers are located on separate server instances where messages must
cross a JVM boundary, typically across a network connection when WebLogic domains reside
on different machines. Decompression automatically occurs on the client side and only when
the message content is accessed, except for the following situations:
• Using message selectors on compressed XML messages can cause decompression,
since the message body must be accessed in order to filter them. For more information
on defining XML message selectors, see Filtering Messages in Developing JMS
Applications for Oracle WebLogic Server.
• Interoperating with earlier versions of WebLogic Server can cause decompression. For
example, when using the Messaging Bridge, messages are decompressed when sent
from the current release of WebLogic Server to a receiving side that is an earlier version
of WebLogic Server.
On the server side, messages always remain compressed, even when they are written to
disk.
Store Compression
WebLogic Server provides the ability to configure message compression for JMS Store I/O
operations.
By selecting an appropriate message body compression option, JMS store I/O performance
may improve for:
• Persistent messages that are read from or written to disk.
• Persistent and non-persistent messages that are paged in or paged out when JMS paging
is enabled.
The following sections provide information on how to configure message compression:
• Selecting a Message Compression Option
• Message Compression for JMS Servers
• Message Compression for Store-and-Forward Sending Agents
For general tuning information on JMS message compression, see Threshold Compression
for Remote Producers.
• Selecting a Message Compression Option
This section provides information on the types of message compression available for use
when message body compression is enabled.
• Message Compression for JMS Servers
• Message Compression for Store-and-Forward Sending Agents
To configure message body compression for SAF Sending Agents:
Note:
The performance of each compression option is dependent on the operating
environment, data type, and data size. Oracle recommends users test their
environments to determine the most appropriate compression option.
To configure the Message Paging Directory attribute, see "Configure general JMS
server properties" in Oracle WebLogic Server Administration Console Online Help.
Defining Quota
It is highly recommended to always configure message count quotas. Quotas help
prevent large message backlogs from causing out-of-memory errors, and WebLogic
JMS does not set quotas by default.
There are many options for setting quotas, but in most cases it is enough to simply set
a Messages Maximum quota on each JMS Server rather than using destination level
quotas. Keep in mind that each current JMS message consumes JVM memory even
when the message has been paged out, because paging pages out only the message
bodies but not message headers. A good rule of thumb for queues is to assume that
each current JMS message consumes 512 bytes of memory. A good rule of thumb for
topics is to assume that each current JMS message consumes 256 bytes of memory
plus an additional 256 bytes of memory for each subscriber that hasn't acknowledged
the message yet. For example, if there are 3 subscribers on a topic, then a single
published message that hasn't been processed by any of the subscribers consumes
256 + 256*3 = 1024 bytes even when the message is paged out. Although message
header memory usage is typically significantly less than these rules of thumb indicate, it is a
best practice to make conservative estimates on memory utilization.
In prior releases, there were multiple levels of quotas: destinations had their own quotas and
would also have to compete for quota within a JMS server. In this release, there is only one
level of quota: destinations can have their own private quota or they can compete with other
destinations using a shared quota.
In addition, a destination that defines its own quota no longer also shares space in the JMS
server's quota. Although JMS servers still allow the direct configuration of message and byte
quotas, these options are only used to provide quota for destinations that do not refer to a
quota resource.
• Quota Resources
• Destination-Level Quota
• JMS Server-Level Quota
Quota Resources
A quota is a named configurable JMS module resource. It defines a maximum number of
messages and bytes, and is then associated with one or more destinations and is responsible
for enforcing the defined maximums. Multiple destinations referring to the same quota share
available quota according to the sharing policy for that quota resource.
Quota resources include the following configuration parameters:
Attribute: Bytes Maximum and Messages Maximum
Description: The Messages Maximum/Bytes Maximum parameters for a quota resource define the maximum number of messages and/or bytes allowed for that quota resource. No consideration is given to messages that are pending; that is, messages that are in-flight, delayed, or otherwise inhibited from delivery still count against the message and/or bytes quota.

Attribute: Quota Sharing
Description: The Shared parameter for a quota resource defines whether multiple destinations referring to the same quota resource compete for resources with each other.

Attribute: Quota Policy
Description: The Policy parameter defines how individual clients compete for quota when no quota is available. It affects the order in which send requests are unblocked when the Send Timeout feature is enabled on the connection factory, as described in Tuning for Large Messages.
For more information about quota configuration parameters, see QuotaBean in the MBean
Reference for Oracle WebLogic Server. For instructions on configuring a quota resource
using the WebLogic Server Administration Console, see Create a quota for destinations in the
Oracle WebLogic Server Administration Console Online Help.
Destination-Level Quota
Destinations no longer define byte and messages maximums for quota, but can use a quota
resource that defines these values, along with quota policies on sharing and competition.
The Quota parameter of a destination defines which quota resource is used to enforce
quota for the destination. This value is dynamic, so it can be changed at any time.
However, if there are unsatisfied requests for quota when the quota resource is
changed, then those requests will fail with a
javax.jms.ResourceAllocationException.
Note:
Outstanding requests for quota will fail when the quota resource is changed. This
refers not to changes to the message and byte attributes of the quota resource, but
to a destination switching to a different quota resource.
Blocking Senders During Quota Conditions
2. In the Send Timeout field, enter the amount of time, in milliseconds, that a sender will block when there is insufficient space on the message destination. Once the specified waiting period ends, one of the following occurs:
• If sufficient space becomes available before the timeout period ends, the operation
continues.
• If sufficient space does not become available before the timeout period ends, you
receive a resource allocation exception.
If you choose not to enable the blocking send policy by setting this value to 0, then
you will receive a resource allocation exception whenever sufficient space is not
available on the destination.
For more information about the Send Timeout field, see JMS Connection Factory:
Configuration: Flow Control in the Oracle WebLogic Server Administration Console
Online Help.
3. Click Save.
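For example, a JMS client that sends to a quota-constrained destination can catch the resulting exception. The following is a minimal sketch; the JNDI names are placeholders, and it assumes a connection factory on which a Send Timeout has been configured as described above:
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.ResourceAllocationException;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

public class BlockingSendExample {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();
        // Placeholder JNDI names; substitute the resources configured in your domain.
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/BlockingSendFactory");
        Queue queue = (Queue) ctx.lookup("jms/QuotaQueue");

        try (Connection connection = cf.createConnection()) {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            TextMessage message = session.createTextMessage("example payload");
            try {
                // Blocks for up to the configured Send Timeout if the destination is over quota.
                producer.send(message);
            } catch (ResourceAllocationException quotaExceeded) {
                // Space did not become available before the timeout expired (or the timeout is 0).
                System.err.println("Send rejected: " + quotaExceeded.getMessage());
            }
        }
    }
}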
is deleted on one subscription, then the message can still be received by the other
subscriptions.
A subscription limit differs from a quota in multiple ways. A topic that has reached its
quota disallows new messages until existing messages have been processed or
expired; on the other hand, a subscription that has reached its subscription limit allows
the new message and makes room for it by deleting current messages. Also, a topic
that has reached its quota affects all subscriptions on the topic, as this disallows new
messages from being added to any subscription. By contrast, a subscription limit only
affects subscriptions that have reached their limits.
Controlling the Flow of Messages on JMS Servers and Destinations
The following connection factory attributes configure producer flow control:
• Flow Control Enabled: Determines whether a producer can be flow controlled by the JMS server.
• Flow Maximum: The maximum number of messages per second for a producer that is experiencing a threshold condition. If a producer is not currently limiting its flow when a threshold condition is reached, the initial flow limit for that producer is set to Flow Maximum. If a producer is already limiting its flow when a threshold condition is reached (the flow limit is less than Flow Maximum), then the producer continues at its current flow limit until the next time the flow is evaluated.
Once a threshold condition has subsided, the producer is not permitted to ignore its flow limit immediately. If its flow limit is less than the Flow Maximum, then the producer must gradually increase its flow to the Flow Maximum each time the flow is evaluated. When the producer finally reaches the Flow Maximum, it can then ignore its flow limit and send without limiting its flow.
• Flow Minimum: The minimum number of messages per second for a producer that is experiencing a threshold condition. This is the lower boundary of a producer's flow limit; that is, WebLogic JMS will not further slow down a producer whose message flow limit is at its Flow Minimum.
• Flow Interval: An adjustment period of time, defined in seconds, in which a producer adjusts its flow from the Flow Maximum number of messages to the Flow Minimum amount, or vice versa.
• Flow Steps: The number of steps used when a producer is adjusting its flow from the Flow Minimum amount of messages to the Flow Maximum amount, or vice versa. Specifically, the Flow Interval adjustment period is divided into the number of Flow Steps (for example, 60 seconds divided by 6 steps is 10 seconds per step). Also, the movement (that is, the rate of adjustment) is calculated by dividing the difference between the Flow Maximum and the Flow Minimum into steps. At each Flow Step, the flow is adjusted upward or downward, as necessary, based on the current conditions, as follows:
– The downward movement (the decay) is geometric over the specified period of time (Flow Interval) and according to the specified number of Flow Steps (for example, 100, 50, 25, 12.5).
– The movement upward is linear; the difference is simply divided by the number of Flow Steps.
For more information about the flow control fields, and the valid and default values for
them, see JMS Connection Factory: Configuration: Flow Control in the Oracle
WebLogic Server Administration Console Online Help.
The following JMS server and destination attributes set the thresholds that arm and disarm flow control:
• Bytes/Messages Threshold High: When the number of bytes/messages exceeds this threshold, the JMS server/destination becomes armed and instructs producers to limit their message flow.
• Bytes/Messages Threshold Low: When the number of bytes/messages falls below this threshold, the JMS server/destination becomes unarmed and instructs producers to begin increasing their message flow. Flow control is still in effect for producers that are below their message flow maximum; producers can move their rate upward until they reach their flow maximum, at which point they are no longer flow controlled.
For detailed information about other JMS server and destination threshold and quota
fields, and the valid and default values for them, see the following pages in the Oracle
WebLogic Server Administration Console Online Help:
• JMS Server: Configuration: Thresholds and Quotas
Handling Expired Messages
• For more information about the Expiration Policy options for a queue, see JMS
Queue: Configuration: Delivery Failure in the Oracle WebLogic Server Administration
Console Online Help.
3. If you selected the Log expiration policy in the previous step, use the Expiration Logging
Policy field to define what information about the message is logged.
For more information about valid Expiration Logging Policy values, see Defining an
Expiration Logging Policy.
4. Click Save.
For example:
<ExpiredJMSMessage JMSMessageID='ID:P<851839.1022176920343.0' >
<HeaderFields JMSPriority='7' JMSRedelivered='false' />
<UserProperties Make='Honda' Model='Civic' Color='White' Weight='2680' />
</ExpiredJMSMessage>
If a message has no header fields to log, the header fields line is not displayed. Likewise, if there are no user properties to log, that line is not displayed. If there are no header fields and no properties, the closing </ExpiredJMSMessage> tag is not required because the opening tag can be terminated with a closing bracket (/>).
For example:
<ExpiredJMSMessage JMSMessageID='ID:N<223476.1022177121567.1' />
All values are delimited with quotation marks. All string values are limited to 32 characters in length. Requested fields and/or properties that do not exist are not displayed. Requested fields and/or properties that exist but have no value (a null value) are displayed as null (without quotation marks). Requested fields and/or properties that are empty strings are displayed as a pair of single quotes with no space between them.
For example:
<ExpiredJMSMessage JMSMessageID='ID:N<851839.1022176920344.0' >
<UserProperties First='Any string longer than 32 char ...' Second=null
Third='' />
</ExpiredJMSMessage>
Best Practices
The following sections provide best practice information when using UOO:
• Ideal for applications that have strict message ordering requirements. UOO
simplifies administration and application design, and in most applications improves
performance.
• Use MDB batching to:
– Speed-up processing of the messages within a single sub-ordering.
– Consume multiple messages at a time under the same transaction.
See Tuning Message-Driven Beans.
• You can configure a default UOO for the destination. Only one consumer on the
destination processes messages for the default UOO at a time.
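As a rough illustration of the practices above, the following sketch assigns an explicit Unit-of-Order name to a producer so that its messages are processed sequentially. It assumes the weblogic.jms.extensions.WLMessageProducer extension interface and its setUnitOfOrder method; the JNDI names are placeholders for your own configured resources. For example:
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.InitialContext;
import weblogic.jms.extensions.WLMessageProducer;

public class UnitOfOrderSendExample {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();
        // Placeholder JNDI names; substitute the resources configured in your domain.
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/OrderedFactory");
        Queue queue = (Queue) ctx.lookup("jms/OrderedQueue");

        try (Connection connection = cf.createConnection()) {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);

            // Cast to the WebLogic extension to assign a named Unit-of-Order;
            // all messages sent under this name are processed one at a time, in order.
            ((WLMessageProducer) producer).setUnitOfOrder("order-12345");

            producer.send(session.createTextMessage("first"));
            producer.send(session.createTextMessage("second"));
        }
    }
}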
Using JMS 2.0 Asynchronous Message Sends
The JMS 2.0 asynchronous send feature allows messages to be sent asynchronously without
waiting for a JMS Server to accept them. This feature may yield a substantial performance
gain, even a 'multi-x' gain, for applications that are bottlenecked on message send latency,
especially for batches of small non-persistent messages.
Asynchronous send calls each get an asynchronous reply from the server indicating the
message has been successfully sent with the same degree of confidence as if a synchronous
send had been performed. The JMS provider notifies the application by invoking the callback
method onCompletion, on an application-specified CompletionListener object. For a given
message producer, callbacks to the CompletionListener will be performed, single threaded
per session, in the same order as the corresponding calls to the asynchronous send method.
Note:
Oracle recommends using JMS 2.0 asynchronous sends instead of the proprietary
WebLogic one-way message sends as described in Using One-Way Message
Sends.
The JMS 2.0 asynchronous send performs similarly to one-way sends.
The JMS 2.0 asynchronous send:
• Can handle both non-persistent and persistent messages.
• Can handle Unit of Order messages.
• Does not suffer degraded performance when a client's connection host is a different server in the cluster than the one hosting the producer's target destination.
• Provides best-effort flow control (blocking) internally, without a need for special tuning, when the amount of outstanding asynchronously sent data that has not yet received a completion event grows too high.
See JMS 2.0 javadoc for send() calls with CompletionListeners.
See What's New in JMS 2.0, Part Two—New Messaging Features for example usage.
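As a minimal sketch (placeholder JNDI names, simplified error handling), an asynchronous send with a CompletionListener might look like the following:
import javax.jms.CompletionListener;
import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;
import javax.jms.Message;
import javax.jms.Queue;
import javax.naming.InitialContext;

public class AsyncSendExample {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();
        // Placeholder JNDI names; substitute the resources configured in your domain.
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/MyConnectionFactory");
        Queue queue = (Queue) ctx.lookup("jms/MyQueue");

        try (JMSContext jmsContext = cf.createContext()) {
            CompletionListener listener = new CompletionListener() {
                @Override
                public void onCompletion(Message message) {
                    // Invoked once the JMS server has accepted the message.
                    System.out.println("Send completed");
                }
                @Override
                public void onException(Message message, Exception exception) {
                    System.err.println("Send failed: " + exception);
                }
            };
            // setAsync() switches the producer to asynchronous sends; callbacks arrive
            // single threaded per session, in the same order as the send calls.
            jmsContext.createProducer().setAsync(listener).send(queue, "example payload");
        }
    }
}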
Using One-Way Message Sends
Note:
Oracle recommends using the JMS 2.0 asynchronous send feature instead
of the proprietary WebLogic one-way send feature. The asynchronous send
feature was introduced in 12.2.1.0 and has fewer activation restrictions. For
example, the JMS 2.0 asynchronous send feature works well in a cluster
without requiring additional configuration changes.
Typical message sends from a JMS producer are termed two-way sends because they
include both an internal request and an internal response. When a producer application calls
send(), the call generates a request that contains the application's message and then waits
for a response from the JMS server to confirm its receipt of the message. This call-and-
response mechanism regulates the producer, since the producer is forced to wait for the JMS
server's response before the application can make another send call. Eliminating the
response message eliminates this wait, and yields a one-way send. WebLogic Server
supports a configurable one-way send option for non-persistent, non-transactional
messaging; no application code changes are required to leverage this feature.
When the One-Way Send Mode is active, the associated producers can send messages
without internally waiting for a response from the target destination's host JMS server. You
can choose to allow queue senders and topic publishers to do one-way sends, or to limit this
capability to topic publishers only. You must also specify a One-Way Window Size to
determine when a two-way message is required to regulate the producer before it can
continue making additional one-way sends.
• Configure One-Way Sends On a Connection Factory
• One-Way Send Support In a Cluster With a Single Destination
• One-Way Send Support In a Cluster With Multiple Destinations
• When One-Way Sends Are Not Supported
• Different Client and Destination Hosts
• XA Enabled On Client's Host Connection Factory
• Higher QOS Detected
• Destination Quota Exceeded
• Change In Server Security Policy
• Change In JMS Server or Destination Status
• Looking Up Logical Distributed Destination Name
• Hardware Failure
• One-Way Send QOS Guidelines
Note:
One-way message sends are disabled if your connection factory is configured with
"XA Enabled". This setting disables one-way sends whether or not the sender
actually uses transactions.
Hardware Failure
A hardware or network failure will disable one-way sends. In such cases, the JMS
producer is notified by an OnException or by the next two-way message send. (Even
in one-way mode, clients will send a two-way message every One Way Send Window
Size number of messages configured on the client's connection factory.) The producer
will be closed. In the worst case, all messages sent after the last successful two-way message before the failure can be lost.
detected by WebLogic Server and the producer is automatically closed. See Hardware Failure for more information.
Tuning the Messaging Performance Preference Option
Note:
This is an advanced option for fine tuning. It is normally best to explore other tuning
options first.
At the minimum value, batching is disabled. Tuning above the default value increases the
amount of time a destination is willing to wait before batching available messages. The
maximum message count of a full batch is controlled by the JMS connection factory's
Messages Maximum per Session setting.
Using the WebLogic Server Administration Console, this advanced option is available on the
General Configuration page for both standalone and uniform distributed destinations (or via
the DestinationBean API), as well as for JMS templates (or via the TemplateBean API).
It may take some experimentation to find out which value works best for your system.
For example, if you have a queue with many concurrent message consumers, by
selecting the WebLogic Server Administration Console's Do Not Batch Messages
value (or specifying "0" on the DestinationBean MBean), the queue will make every
effort to promptly push messages out to its consumers as soon as they are available.
Conversely, if you have a queue with only one message consumer that doesn't require
fast response times, by selecting the console's High Waiting Threshold for Message
Batching value (or specifying "100" on the DestinationBean MBean), then the queue
will strongly attempt to only push messages to that consumer in batches, which will
increase the waiting period but may improve the server's overall throughput by
reducing the number of sends.
For instructions on configuring Messaging Performance Preference parameters on standalone destinations, uniform distributed destinations, or JMS templates using the
WebLogic Server Administration Console, see the following sections in the
Administration Console Online Help:
• Configure advanced topic parameters
• Configure advanced queue parameters
• Uniform distributed topics - configure advanced parameters
• Uniform distributed queues - configure advanced parameters
• Configure advanced JMS template parameters
For more information about these parameters, see DestinationBean and
TemplateBean in the MBean Reference for Oracle WebLogic Server.
Message Pipeline, see Receiving Messages in Developing JMS Applications for Oracle
WebLogic Server.
13
Tuning WebLogic JMS Store-and-Forward
Oracle WebLogic Server JMS provides advanced Store-and-Forwarding (SAF) capability for
high-performance message forwarding from a local server instance to a remote JMS
destination. Get the best performance from SAF applications by following the recommended
practices and tips.
• Best Practices for JMS SAF
• Tuning Tips for JMS SAF
See Understanding the Store-and-Forward Service in Administering the Store-and-Forward
Service for Oracle WebLogic Server.
• Best Practices for JMS SAF
Learn the best practices for JMS SAF.
• Tuning Tips for JMS SAF
For better performance of JMS SAF, use the recommended tuning tips.
Note:
A Messaging Bridge is still required to store-and-forward messages to foreign
destinations and destinations from releases prior to WebLogic 9.0.
• Configure separate SAF Agents for JMS SAF and Web Services Reliable Messaging
Agents (WS-RM) to simplify administration and tuning.
• Sharing the same WebLogic Store between subsystems provides increased performance
for subsystems requiring persistence. For example, transactions that include SAF and
JMS operations, transactions that include multiple SAF destinations, and transactions
that include SAF and EJBs. See Tuning the WebLogic Persistent Store.
• Tune message load balancing to match your preference. See SAF Load Balancing in
Administering the Store-and-Forward Service.
Tuning Tips for JMS SAF
Note:
For a distributed queue, WindowSize is ignored and the batch size is set
internally at 1 message.
• Increase the JMS SAF Window Interval. By default, a JMS SAF agent has a Window Interval value of 0, which forwards messages as soon as they arrive. This can lower performance because it can make the effective Window Size much smaller than the configured value. A more appropriate initial value for Window Interval is 500 milliseconds; you can then optimize this value for your environment. In this context, small messages are less than a few KB, while large messages are on the order of tens of KB.
Changing the Window Interval improves performance only in cases where the
forwarder is already able to forward messages as fast as they arrive. In this case,
instead of immediately forwarding newly arrived messages, the forwarder pauses
to accumulate more messages and forward them as a batch. The resulting larger
batch size improves forwarding throughput and reduces overall system disk and
CPU usage at the expense of increasing latency.
Note:
For a distributed queue, Window Interval is ignored.
14
Tuning WebLogic Message Bridge
Learn how to improve message bridge performance using the best practices available in
Oracle WebLogic Server.
• Best Practices
• Changing the Batch Size
• Changing the Batch Interval
• Changing the Quality of Service
• Using Multiple Bridge Instances
• Changing the Thread Pool Size
• Avoiding Durable Subscriptions
• Co-locating Bridges with Their Source or Target Destination
• Changing the Asynchronous Mode Enabled Attribute
• Tuning Environments with Many Bridges
• Best Practices
Learn the best practices for tuning WebLogic message bridge.
• Changing the Batch Size
• Changing the Batch Interval
• Changing the Quality of Service
• Using Multiple Bridge Instances
If message ordering is not required, consider deploying multiple bridges.
• Changing the Thread Pool Size
A general bridge configuration rule is to provide a thread for each bridge instance
targeted to a server instance. You can change the thread pool size to ensure that an
adequate number of threads is available for your environment.
• Avoiding Durable Subscriptions
• Co-locating Bridges with Their Source or Target Destination
If a messaging bridge source or target is a WebLogic destination, deploy the bridge to the
same WebLogic Server as the destination.
• Changing the Asynchronous Mode Enabled Attribute
• Tuning Environments with Many Bridges
Learn to improve system boot time and general performance of systems that deploy
many bridge instances.
Best Practices
Learn the best practices for tuning WebLogic message bridge.
• Avoid using a message bridge if remote destinations are already highly available.
JMS clients can send directly to remote destinations. Use a messaging bridge in
situations where remote destinations are not highly available, such as an
unreliable network or different maintenance schedules.
• Use the better performing JMS store-and-forward feature instead of using a
message bridge when forwarding messages to remote destinations. In general, a
JMS SAF agent is significantly faster than a message bridge. One exception is when sending messages in non-persistent, exactly-once mode.
Note:
A message bridge is still required to store-and-forward messages to
foreign destinations and destinations from releases prior to WebLogic
9.0.
When the Exactly-once quality of service is used, the bridge must undergo a two-
phase commit with both JMS servers in order to ensure the transaction semantics and
this operation can be very expensive. However, unlike the other qualities of service, the
bridge can batch multiple operations together using Exactly-once service.
You may need to experiment with this parameter to get the best possible performance. For
example, if the queue is not very busy or if non-persistent messages are used, Exactly-once
batching may be of little benefit. See Configure messaging bridge instances in Oracle
WebLogic Server Administration Console Online Help.
1 If the source destination is a non-WebLogic JMS provider and the QOS is Exactly-once, then the
Asynchronous Mode Enabled attribute is disabled and the messages are processed in synchronous
mode.
15
Tuning Resource Adapters
Learn the best practices available in Oracle WebLogic Server to tune resource adapters.
• Classloading Optimizations for Resource Adapters
• Connection Optimizations
• Thread Management
• InteractionSpec Interface
• Classloading Optimizations for Resource Adapters
You can package resource adapter classes in one or more JAR files, and then place the
JAR files in the RAR file. These are called nested JARs. When you nest JAR files in the
RAR file, and classes need to be loaded by the classloader, the JARs within the RAR file
must be opened and closed and iterated through for each class that must be loaded.
• Connection Optimizations
Oracle recommends that resource adapters implement the optional enhancements
described in the Connection Optimization section of the J2CA 1.5 Specification.
• Thread Management
Resource adapter implementations use the WorkManager to launch operations that need
to run in a new thread, rather than creating new threads directly. WebLogic Server
manages and monitors these threads.
• InteractionSpec Interface
An InteractionSpec holds properties for driving an Interaction with an EIS instance. The
CCI specification defines a set of standard properties for an InteractionSpec. The
InteractionSpec implementation class must provide getter and setter methods for each of
its supported properties.
Connection Optimizations
Oracle recommends that resource adapters implement the optional enhancements
described in the Connection Optimization section of the J2CA 1.5 Specification.
See http://www.oracle.com/technetwork/java/index.html. Implementing these
interfaces allows WebLogic Server to provide several features that will not be available
without them.
Lazy Connection Association allows the server to automatically clean up unused
connections and prevent applications from hogging resources. Lazy Transaction
Enlistment allows applications to start a transaction after a connection is already
opened.
Thread Management
Resource adapter implementations use the WorkManager to launch operations that
need to run in a new thread, rather than creating new threads directly. WebLogic
Server manages and monitors these threads.
See Chapter 10, "Work Management" in the J2CA 1.5 Specification at http://
www.oracle.com/technetwork/java/index.html.
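A minimal sketch of this pattern is shown below; the adapter class and the work it schedules are hypothetical, but the javax.resource.spi.work calls are the standard J2CA 1.5 ones. For example:
import javax.resource.spi.ActivationSpec;
import javax.resource.spi.BootstrapContext;
import javax.resource.spi.ResourceAdapter;
import javax.resource.spi.ResourceAdapterInternalException;
import javax.resource.spi.endpoint.MessageEndpointFactory;
import javax.resource.spi.work.Work;
import javax.resource.spi.work.WorkException;
import javax.resource.spi.work.WorkManager;
import javax.transaction.xa.XAResource;

public class ExampleResourceAdapter implements ResourceAdapter {
    private WorkManager workManager;

    @Override
    public void start(BootstrapContext context) throws ResourceAdapterInternalException {
        // Obtain the server-managed WorkManager instead of creating threads directly.
        workManager = context.getWorkManager();
        try {
            // scheduleWork() returns immediately; the server runs the Work
            // on one of its own managed, monitored threads.
            workManager.scheduleWork(new Work() {
                @Override
                public void run() {
                    // Long-running setup, such as establishing an EIS listener.
                }
                @Override
                public void release() {
                    // Signal run() to return promptly when the server shuts the work down.
                }
            });
        } catch (WorkException e) {
            throw new ResourceAdapterInternalException(e);
        }
    }

    @Override
    public void stop() { }

    @Override
    public void endpointActivation(MessageEndpointFactory factory, ActivationSpec spec) { }

    @Override
    public void endpointDeactivation(MessageEndpointFactory factory, ActivationSpec spec) { }

    @Override
    public XAResource[] getXAResources(ActivationSpec[] specs) { return new XAResource[0]; }
}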
InteractionSpec Interface
An InteractionSpec holds properties for driving an Interaction with an EIS instance.
The CCI specification defines a set of standard properties for an InteractionSpec. The
InteractionSpec implementation class must provide getter and setter methods for each
of its supported properties.
WebLogic Server supports the Common Client Interface (CCI) for EIS access, as
defined in Chapter 17, "Common Client Interface" in the J2CA 1.5 Specification at
http://www.oracle.com/technetwork/java/index.html. The CCI defines a standard
client API for application components that enables application components and EAI
frameworks to drive interactions across heterogeneous EISes.
As a best practice, you should not store the InteractionSpec class that the CCI
resource adapter is required to implement in the RAR file. Instead, you should
package it in a separate JAR file outside of the RAR file, so that the client can access
it without having to put the InteractionSpec interface class in the generic
CLASSPATH.
With respect to the InteractionSpec interface, it is important to remember that when
all application components (EJBs, resource adapters, Web applications) are packaged
in an EAR file, all common classes can be placed in the APP-INF/lib directory. This is
the easiest possible scenario.
This is not the case for standalone resource adapters (packaged as RAR files). If the
interface is serializable (as is the case with InteractionSpec), then both the client and
the resource adapter need access to the InteractionSpec interface as well as the
implementation classes. However, if the interface extends java.io.Remote, then the
client only needs access to the interface class.
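For illustration, a bare-bones InteractionSpec implementation might look like the following sketch; the property names follow the standard CCI properties, and the class itself is hypothetical:
import javax.resource.cci.InteractionSpec;

public class SampleInteractionSpec implements InteractionSpec {

    private static final long serialVersionUID = 1L;

    // Each supported property has a matching getter and setter, per the CCI specification.
    private String functionName;
    private int interactionVerb = SYNC_SEND_RECEIVE;
    private int executionTimeout;

    public String getFunctionName() { return functionName; }
    public void setFunctionName(String functionName) { this.functionName = functionName; }

    public int getInteractionVerb() { return interactionVerb; }
    public void setInteractionVerb(int interactionVerb) { this.interactionVerb = interactionVerb; }

    public int getExecutionTimeout() { return executionTimeout; }
    public void setExecutionTimeout(int executionTimeout) { this.executionTimeout = executionTimeout; }
}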
16
Tuning Web Applications
Learn the best practices available in Oracle WebLogic Server for tuning Web applications and
managing sessions.
• Best Practices
• Session Management
• Pub-Sub Tuning Guidelines
• Best Practices
Learn the best practices for tuning Web applications.
• Session Management
Optimize your application so that it does as little work as possible when handling session
persistence and sessions. Learn to design a session management strategy that suits
your environment and application.
• Pub-Sub Tuning Guidelines
Follow the general tuning guidelines for a pub-sub server such as increasing the file
descriptors, tuning the JVM options, and so on.
• Enabling GZIP Compression
The WebLogic Server Web container supports HTTP content-encoding GZIP
compression, which is part of HTTP/1.1. With GZIP compression, you can reduce the
size of the data that a Web browser has to download, improving network bandwidth. You
can tune Web applications by enabling and configuring GZIP compression at either the
domain level or Web application level.
Best Practices
Learn the best practices for tuning Web applications.
• Disable Page Checks
• Use Custom JSP Tags
• Precompile JSPs
• Use HTML Template Compression
• Use Service Level Agreements
• Related Reading
Precompile JSPs
You can configure WebLogic Server to precompile your JSPs when a Web Application
is deployed or re-deployed or when WebLogic Server starts up by setting the
precompile parameter to true in the jsp-descriptor element of the weblogic.xml
deployment descriptor. To avoid recompiling your JSPs each time the server restarts
and when you target additional servers, precompile them using weblogic.appc and
place them in the WEB-INF/classes folder and archive them in a .war file. Keeping
your source files in a separate directory from the archived .war file eliminates the
possibility of errors caused by a JSP having a dependency on one of the class files.
For a complete explanation on how to avoid JSP recompilation, see Avoiding
Unnecessary JSP Recompilation.
See jsp-descriptor in Developing Web Applications, Servlets, and JSPs for Oracle
WebLogic Server.
Related Reading
• Servlet Best Practices in Developing Web Applications, Servlets, and JSPs for Oracle
WebLogic Server.
• Servlet and JSP Performance Tuning at https://www.infoworld.com/article/2072812/servlet-and-
jsp-performance-tuning.html, by Rahul Chaudhary, JavaWorld, June 2004.
Session Management
Optimize your application so that it does as little work as possible when handling session
persistence and sessions. Learn to design a session management strategy that suits your
environment and application.
• Managing Session Persistence
• Minimizing Sessions
• Aggregating Session Data
Minimizing Sessions
Configuring how WebLogic Server manages sessions is a key part of tuning your
application for best performance. Consider the following:
• Use of sessions involves a scalability trade-off.
• Use sessions sparingly. In other words, use sessions only for state that cannot realistically be kept on the client or if URL rewriting support is required. For example, keep simple bits of state, such as a user's name, directly in cookies. You can also write a wrapper class to "get" and "set" these cookies, in order to simplify the work of servlet developers working on the same project (see the sketch after this list).
• Keep frequently used values in local variables.
See Setting Up Session Management in Developing Web Applications, Servlets, and
JSPs for Oracle WebLogic Server.
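A minimal sketch of such a cookie wrapper is shown below; the class name, cookie lifetime, and path are illustrative choices, not WebLogic requirements:
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical helper that keeps simple, non-sensitive state in cookies
// instead of the HTTP session.
public final class CookieState {

    private CookieState() { }

    public static void set(HttpServletResponse response, String name, String value) {
        Cookie cookie = new Cookie(name, value);
        cookie.setMaxAge(60 * 60 * 24); // keep for one day; adjust to your needs
        cookie.setPath("/");
        response.addCookie(cookie);
    }

    public static String get(HttpServletRequest request, String name) {
        Cookie[] cookies = request.getCookies(); // may be null if no cookies were sent
        if (cookies != null) {
            for (Cookie cookie : cookies) {
                if (cookie.getName().equals(name)) {
                    return cookie.getValue();
                }
            }
        }
        return null;
    }
}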
Enabling GZIP Compression
The WebLogic Server Web container supports HTTP content-encoding GZIP compression, which is part of HTTP/1.1. With GZIP compression, you can reduce the size of the data that a Web browser has to download, improving network bandwidth. You can
tune Web applications by enabling and configuring GZIP compression at either the domain
level or Web application level.
See Enabling GZIP Compression for Web Applications in Developing Web Applications,
Servlets, and JSPs for Oracle WebLogic Server.
17
Tuning Web Services
Use best practices available in Oracle WebLogic Server for designing, developing, and
deploying WebLogic Web Services applications and application resources.
• Web Services Best Practices
• Tuning Web Service Reliable Messaging Agents
• Tuning Heavily Loaded Systems to Improve Web Service Performance
• Web Services Best Practices
Design and architectural decisions have a strong impact on runtime performance and
scalability of Web Service applications. Follow the key recommendations to achieve best
performance.
• Tuning Web Service Reliable Messaging Agents
Web Service Reliable Messaging provides advanced store-and-forward capability for
high-performance message forwarding from a local server instance to a remote
destination.
• Tuning Heavily Loaded Systems to Improve Web Service Performance
The asynchronous request-response, reliable messaging, and buffering features are all
pre-tuned for minimum system resource usage to support a small number of clients
(under 10). If you plan on supporting a larger number of clients or high message
volumes, adjust the tuning parameters to accommodate the additional load.
Tuning Web Service Reliable Messaging Agents
Note:
WindowSize also tunes JMS SAF behavior, so it may not be appropriate
to tune this parameter for SAF agents of type both.
• Ensure that retry delay is not set too low. This may cause the system to make
unnecessary delivery attempts.
Tuning Heavily Loaded Systems to Improve Web Service Performance
Often, these resources can be released sooner. Executing the store cleaner more
frequently can help to reduce the size of the persistent store and minimize the time
required to clean it.
By default, the store cleaner runs every two minutes (120000 ms). Oracle
recommends that you set the store cleaner interval to one minute (60000 ms) using
the following Java system property:
-Dweblogic.wsee.StateCleanInterval=60000
18
Tuning WebLogic Tuxedo Connector
The WebLogic Tuxedo Connector (WTC) provides interoperability between Oracle WebLogic
Server applications and Tuxedo services. WTC allows WebLogic Server clients to invoke
Tuxedo services and Tuxedo clients to invoke WebLogic Server Enterprise Java Beans
(EJBs) in response to a service request.
Get the best performance from WebLogic Tuxedo Connector (WTC) applications using the
tips provided.
• Configuration Guidelines
• Best Practices
See Introduction to Oracle WebLogic Tuxedo Connector Programming in Developing Oracle
WebLogic Tuxedo Connector Applications for Oracle WebLogic Server.
• Configuration Guidelines
Refer to the recommended guidelines when configuring WebLogic Tuxedo Connector.
• Best Practices
Learn the best practices for using WebLogic Tuxedo Connector.
Configuration Guidelines
Refer to the recommended guidelines when configuring WebLogic Tuxedo Connector.
• You may have more than one WTC Service in your configuration.
• You can only target one WTC Service to a server instance.
• WTC does not support connection pooling. WTC multiplexes requests though a single
physical connection.
• Configuration changes are implemented as follows:
– Changing the session/connection configuration (local APs, remote APs, Passwords,
and Resources) before a connection/session is established. The changes are
accepted and are implemented in the new session/connection.
– Changing the session/connection configuration (local APs, remote APs, Passwords, and Resources) after a connection/session is established. The changes are accepted but are not implemented in the existing connection/session until the connection is disconnected and reconnected. See Target WTC servers in Oracle WebLogic Server Administration Console Online Help.
– Changing the Imported and Exported services configuration. The changes are
accepted and are implemented in the next inbound or outbound request. Oracle does
not recommend this practice as it can leave in-flight requests in an unknown state.
– Changing the tBridge configuration. Any change in a deployed WTC service causes
an exception. You must untarget the WTC service before making any tBridge
configuration changes. After untargeting and making configuration changes, you
must target the WTC service to implement the changes.
Best Practices
Learn the best practices for using WebLogic Tuxedo Connector.
• When configuring the connection policy, use ON_STARTUP and INCOMING_ONLY. ON_STARTUP and INCOMING_ONLY are always paired. For example, if a WTC remote access point is configured with ON_STARTUP, the DM_TDOMAIN section of the Tuxedo domain configuration must be configured with the remote access point as INCOMING_ONLY. In this case, WTC always acts as the session initiator. See Configuring the Connections Between Access Points in the Administering WebLogic Tuxedo Connector for Oracle WebLogic Server.
• Avoid using connection policy ON_DEMAND. The preferred connection policy is
ON_STARTUP and INCOMING_ONLY. This reduces the chance of service request
failure due to the routing semantics of ON_DEMAND. See Configuring the
Connections Between Access Points in the Administering WebLogic Tuxedo
Connector for Oracle WebLogic Server.
• Consider using the following WTC features when designing your application: Link-Level Failover, Service-Level Failover, and load balancing. See Configuring Failover and Failback in the Administering WebLogic Tuxedo Connector for Oracle WebLogic Server.
• Consider using WebLogic Server clusters to provide additional load balancing and
failover. To use WTC in a WebLogic Server cluster:
– Configure a WTC instance on all the nodes of the WebLogic Server cluster.
– Each WTC instance in each cluster node must have the same configuration.
See How to Manage WebLogic Tuxedo Connector in a Clustered Environment in
the Administering WebLogic Tuxedo Connector for Oracle WebLogic Server.
• If your WTC to Tuxedo connection uses the internet, use the following security
settings:
– Set the value of Security to DM_PW. See Authentication of Remote Access
Points in the Administering WebLogic Tuxedo Connector for Oracle WebLogic
Server.
– Enable Link-level encryption and set the min-encrypt-bits parameter to 40
and the max-encrypt-bits to 128. See Link-Level Encryption in the
Administering WebLogic Tuxedo Connector for Oracle WebLogic Server.
• Your application logic should provide mechanisms to manage and interpret error
conditions in your applications.
– See Application Error Management in the Developing Oracle WebLogic
Tuxedo Connector Applications for Oracle WebLogic Server.
– See System Level Debug Settings in the Administering WebLogic Tuxedo
Connector for Oracle WebLogic Server.
• Avoid using embedded TypedFML32 buffers inside TypedFML32 buffers. See Using
FML with WebLogic Tuxedo Connector in the Developing Oracle WebLogic
Tuxedo Connector Applications for Oracle WebLogic Server.
• If your application handles heavy loads, consider configuring more remote Tuxedo
access points and let WTC load balance the work load among the access points.
See Configuring Failover and Failback in the Administering WebLogic Tuxedo Connector
for Oracle WebLogic Server.
• When using transactional applications, try to make the remote services involved in the
same transaction available from the same remote access point. See WebLogic Tuxedo
Connector JATMI Transactions in the Developing Oracle WebLogic Tuxedo Connector
Applications for Oracle WebLogic Server.
• The number of client threads available when dispatching services from the gateway may
limit the number of concurrent services running. There is no WebLogic Tuxedo Connector
attribute to increase the number of available threads. Use a reasonable thread model
when invoking services. See Thread Management and Using Work Managers to Optimize
Scheduled Work in Administering Server Environments for Oracle WebLogic Server.
• WebLogic Server releases 9.2 and higher provide improved routing algorithms that enhance transaction performance. Specifically, performance is improved when more than one Tuxedo service request is involved in a two-phase commit (2PC) transaction. If your application makes only a single service request to the Tuxedo domain, you can disable this feature by setting the following WebLogic Server command line parameter:
-Dweblogic.wtc.xaAffinity=false
• Call the constructor TypedFML32 using the maximum number of objects in the buffer. Even
if the maximum number is difficult to predict, providing a reasonable number improves
performance. You can approximate the maximum number by multiplying the number of fields
by 1.33.
Note:
This performance tip does not apply to TypedFML buffer type.
For example:
If there are 50 fields in a TypedFML32 buffer type, then the maximum number is 63. Calling the constructor TypedFML32(63, 50) performs better than TypedFML32().
If there are 50 fields in a TypedFML32 buffer type and each can have a maximum of 10 occurrences, then calling the constructor TypedFML32(625, 50) gives better performance than TypedFML32().
• When configuring Tuxedo applications that act as servers interoperating with WTC clients, take into account the parallelism that may be achieved by carefully configuring different servers on different Tuxedo machines.
• Be aware of the possibility of database access deadlock in Tuxedo applications. You can
avoid deadlock through careful Tuxedo application configuration.
• If you are using WTC load balancing or service-level failover, Oracle recommends that you do not disable WTC transaction affinity.
• For load balancing outbound requests, configure the imported service with multiple entries using a different key. The imported service uses a composite key to determine each record's uniqueness. The composite key is composed of "the service name + the local access point + the primary route in the remote access point list".
The following is an example of how to correctly configure load balancing requests for
service1 between TDomainSession(WDOM1,TUXDOM1) and
TDomainSession(WDOM1,TUXDOM2):
A
Capacity Planning
Capacity planning in Oracle WebLogic Server is the process of determining what type of
hardware and software configuration is required to meet application needs adequately.
Capacity planning is not an exact science. Every application is different and every user
behavior is different.
• Capacity Planning Factors
• Assessing Your Application Performance Objectives
• Hardware Tuning
• Network Performance
• Related Information
• Capacity Planning Factors
A number of factors influence how much capacity a given hardware configuration will
need in order to support a WebLogic Server instance and a given application. The
hardware capacity required to support your application depends on the specifics of the
application and configuration.
• Assessing Your Application Performance Objectives
Capacity planning for server hardware focuses on the maximum performance
requirements and sets measurable objectives for capacity. Assess your application
performance by gathering information about the level of activity expected on your server,
the anticipated number of users, the number of requests, acceptable response time, and
preferred hardware configuration.
• Hardware Tuning
The hardware capacity required to support your application depends on the specifics of
the application and configuration. Consider how each factor applies to your configuration
and application.
• Network Performance
Network performance is affected when the supply of resources is unable to keep up with
the demand for resources. It is important to continually monitor your network performance
to troubleshoot potential performance bottlenecks.
• Related Information
Information on topics related to capacity planning is available from numerous third-party
software sources. The Oracle Technology Network provides detailed documentation for
WebLogic Server.
Capacity Planning Factors
The following sections discuss several of these factors. Understanding these factors
and considering the requirements of your application will aid you in generating server
hardware requirements for your configuration. Consider the capacity planning
questions in Table A-1.
The stateless nature of HTTP requires that the server handle more overhead than is the case
with programmatic clients. However, the benefits of HTTP clients are numerous, such as the
availability of browsers and firewall compatibility, and are usually worth the performance
costs.
Programmatic clients are generally more efficient than HTTP clients because T3 does more
of the presentation work on the client side. Programmatic clients typically call directly into
EJBs while Web clients usually go through servlets. This eliminates the work the server must
do for presentation. The T3 protocol operates using sockets and has a long-standing
connection to the server.
A WebLogic Server installation that relies only on programmatic clients should be able to
handle more concurrent clients than an HTTP proxy that is serving installations. If you are
tunneling T3 over HTTP, you should not expect this performance benefit. In fact, performance
of T3 over HTTP is generally 15 percent worse than typical HTTP and similarly reduces the
optimum capacity of your WebLogic Server installation.
Concurrent Sessions
How many transactions must run concurrently? Determine the maximum number of
concurrent sessions WebLogic Server will be called upon to handle. For each session,
you will need to add more RAM for efficiency. Oracle recommends that you install a
minimum of 256 MB of memory for each WebLogic Server installation that will be
handling more than minimal capacity.
Next, research the maximum number of clients that will make requests at the same
time, and how frequently each client will be making a request. The number of user
interactions per second with WebLogic Server represents the total number of
interactions that should be handled per second by a given WebLogic Server
deployment. Typically for Web deployments, user interactions access JSP pages or
servlets. User interactions in application deployments typically access EJBs.
Consider also the maximum number of transactions in a given period to handle spikes
in demand. For example, in a stock report application, plan for a surge after the stock
market opens and before it closes. If your company is broadcasting a Web site as part
of an advertisement during the World Series or World Cup Soccer playoffs, you should
expect spikes in demand.
Network Load
Is the bandwidth sufficient? WebLogic Server requires enough bandwidth to handle all
connections from clients. In the case of programmatic clients, each client JVM will
have a single socket to the server. Each socket requires bandwidth. A WebLogic
Server handling programmatic clients should have 125 to 150 percent of the bandwidth
that a server with Web-based clients would handle. If you are interested in the
bandwidth required to run a web server, you can assume that each 56kbps (kilobits
per second) of bandwidth can handle between seven and ten simultaneous requests
depending upon the size of the content that you are delivering. If you are handling only HTTP
clients, expect a similar bandwidth requirement as a Web server serving static pages.
The primary factor affecting the requirements for a LAN infrastructure is the use of replicated
sessions for servlets and stateful session EJBs. In a cluster, replicated sessions are the
biggest consumer of LAN bandwidth. Consider whether your application will require the
replication of session information for servlets and EJBs.
To determine whether you have enough bandwidth in a given deployment, look at the network
tools provided by your network operating system vendor. In most cases, including Windows
and Solaris platforms, you can inspect the load on the network system. If the load is very
high, bandwidth may be a bottleneck for your system.
Clustered Configurations
Clusters greatly improve efficiency and failover. Customers using clustering should not see
any noticeable performance degradation. A number of WebLogic Server deployments in
production involve placing a cluster of WebLogic Server instances on a single multiprocessor
server.
Large clusters performing replicated sessions for Enterprise JavaBeans (EJBs) or servlets
require more bandwidth than smaller clusters. Consider the size of session data and the size
of the cluster.
Server Migration
Are your servers configured for migration? Migration in WebLogic Server is the process of
moving a clustered WebLogic Server instance or a component running on a clustered
instance elsewhere in the event of failure. In the case of whole server migration, the server
instance is migrated to a different physical machine upon failure, either manually or
automatically.
For capacity planning in a production environment, keep in mind that server startup during
migration taxes CPU utilization. You cannot assume that because a machine can handle x
number of servers running concurrently that it also can handle that same number of servers
starting up on the same machine at the same time.
Application Design
How well-designed is the application? WebLogic Server is a platform for user applications.
Badly designed or unoptimized user applications can drastically slow down the performance
of a given configuration from 10 to 50 percent. The prudent course is to assume that every
application that is developed for WebLogic Server will not be optimal and will not perform as
well as benchmark applications. Increase the maximum capacity that you calculate or expect.
See Tune Your Application.
The numbers that you calculate from using one of our sample applications are of
course just a rough approximation of what you may see with your application. There is
no substitute for benchmarking with the actual production application using production
hardware. In particular, your application may reveal subtle contention or other issues
not captured by our test applications.
Hardware Tuning
The hardware capacity required to support your application depends on the specifics
of the application and configuration. Consider how each factor applies to your
configuration and application.
When you examine performance, a number of factors influence how much capacity a
given hardware configuration will need in order to support WebLogic Server and a
given application.
• Benchmarks for Evaluating Performance
• Supported Platforms
Supported Platforms
See Supported Configurations in What's New in Oracle WebLogic Server for links to
the latest certification information on the hardware/operating system platforms that are
supported for each release of WebLogic Server.
Network Performance
Network performance is affected when the supply of resources is unable to keep up
with the demand for resources. It is important to continually monitor your network
performance to troubleshoot potential performance bottlenecks.
Today's enterprise-level networks are very fast and are now rarely the direct cause of performance problems in well-designed applications.
problem with one or more network components (hardware or software), work with your
network administrator to isolate and eliminate the problem. You should also verify that
you have an appropriate amount of network bandwidth available for WebLogic Server
and the connections it makes to other tiers in your architecture, such as client and
database connections.
• Determining Network Bandwidth
case of programmatic clients, each client JVM has a single socket to the server, and each
socket requires dedicated bandwidth. A WebLogic Server instance handling programmatic
clients should have 125–150 percent of the bandwidth that a similar Web server would
handle. If you are handling only HTTP clients, expect a bandwidth requirement similar to a
Web server serving static pages.
To determine whether you have enough bandwidth in a given deployment, you can use the
network monitoring tools provided by your network operating system vendor to see what the
load is on the network system. You can also use common operating system tools, such as the
netstat command for Solaris or the System Monitor (perfmon) for Windows, to monitor your
network utilization. If the load is very high, bandwidth may be a bottleneck for your system.
Also monitor the amount of data being transferred across your network by checking the
data transferred between the application and the application server, and between the
application server and the database server. This amount should not exceed your network
bandwidth; otherwise, your network becomes the bottleneck. To verify this, monitor the
network statistics for retransmission and duplicate packets, as follows:
netstat -s -P tcp
Related Information
Information on topics related to capacity planning is available from numerous third-party
software sources. The Oracle Technology Network provides detailed documentation for
WebLogic Server.
See https://www.oracle.com/middleware/technologies/weblogic.html.