SG 247943
Lifecycle topics
David Shute
Daniel Dickerson
Richard Kinard
Manuel Carrizosa
Bruno Neves
Pablo Sanchez
Byron Braswell
ibm.com/redbooks
International Technical Support Organization
June 2011
SG24-7943-00
Note: Before using this information and the product it supports, read the information in
“Notices” on page vii.
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
The team who wrote this book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . xii
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Chapter 1. Planning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Business framework planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Architectural map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.1 Inclusive asset universe . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.2 Use case scenario map. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3 Base configuration items . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3.1 Hardware install . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.3.2 Device initialization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.3.3 Network integration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.3.4 Application domains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.3.5 User accounts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.3.6 Monitoring and logging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.3.7 Configuration management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.4 Application development . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.5 Life cycle phases. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.5.1 Revision control system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.5.2 Development environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.5.3 Deployment packages. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.5.4 Test methodologies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.5.5 Production . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
5.5.3 Web Services bridged to AS2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
5.5.4 XB60 / MQ FTE integration pattern . . . . . . . . . . . . . . . . . . . . . . . . . 119
5.5.5 ebMS data exchange with CPA . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
5.5.6 Health Level 7 clinical data exchange with standards conversion . 125
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area.
Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product, program, or service that
does not infringe any IBM intellectual property right may be used instead. However, it is the user's
responsibility to evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document.
The furnishing of this document does not give you any license to these patents. You can send license
inquiries, in writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer
of express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may
make improvements and/or changes in the product(s) and/or the program(s) described in this publication at
any time without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without
incurring any obligation to you.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm
the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on
the capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the
sample programs are written. These examples have not been thoroughly tested under all conditions. IBM,
therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
Java, and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other
countries, or both.
Microsoft, and the Windows logo are trademarks of Microsoft Corporation in the United States, other
countries, or both.
Other company, product, or service names may be trademarks or service marks of others.
viii DataPower SOA Appliance Service Planning, Implementation, and Best Practices
Preface
This IBM® Redbooks® publication will help you to better understand the effective
use of the WebSphere® DataPower® family of appliances. It provides guidance
on the best methods identified to date for building the various components that
implement solutions, such as handling MQ-based message flows or creating
authentication and authorization policies. The information and recommendations
in this publication are the result of real world experiences using the appliances.
Such experience shows that taking the time to plan a solution implementation
before beginning the work yields the greatest savings in time and energy and the
highest quality outcome. This publication begins with a checklist of items to
consider when planning a DataPower solution.
The following authors also contributed to the content and creation of this book.
Debbie Willmschen
Stephen Smith
Linda Robinson
Tamikia Barrow
Shari Deiana
International Technical Support Organization, Raleigh Center
Moses C Allotey-pappoe
IBM US
Bryon Kataoka
iSOA Group
Oswaldo Gago
IBM US
John Rasmussen
IBM US
Lingachary Eswarachary
IBM US
Find out more about the residency program, browse the residency index, and
apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
Stay connected to IBM Redbooks
Find us on Facebook:
http://www.facebook.com/IBMRedbooks
Follow us on Twitter:
http://twitter.com/ibmredbooks
Look for us on LinkedIn:
http://www.linkedin.com/groups?home=&gid=2130806
Explore new Redbooks publications, residencies, and workshops with the
IBM Redbooks weekly newsletter:
https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm
Stay current on recent Redbooks publications with RSS Feeds:
http://www.redbooks.ibm.com/rss.html
Chapter 1. Planning
The DataPower® device can provide robust solutions to a wide range of
enterprise needs, including, but not limited to:
Security
Multi-Protocol Bridging
Message Content Transformation
Message Routing
Policy Enforcement
Monitoring and Logging
The flexibility of the device offers great benefits at a reduced cost of time and
effort because of the configuration-driven (rather than code-driven) method of
implementation offered.
In summary, the business requirements for any solution can easily involve a
complex set of implementation requirements. For this reason, careful planning is
the best way to minimize risk, ensure completeness, and contain the time and
effort required to build, test, and deploy a DataPower-based solution. This
chapter provides a template for the planning process from which a plan emerges.
The collection of these policies or rules can help to guide the choice of service to
use to implement a solution. These rules also determine the actions taken on
each message and inform such aspects as error handling.
It is also helpful to consider the life cycle phases of the projected solution.
Understanding the environments to be used for each phase, development, test
and production, can inform such items as naming conventions, resource
conflicts, or time lines.
(Figure 1-1, an architectural map of the inclusive asset universe, shows the
range of enterprise assets involved: access control registries, database
systems, message transport systems, application servers, and mainframes.)
1.2.2 Use case scenario map
The particular use case scenario might use only a subset of these services.
Figure 1-2 displays a map showing the DataPower device used in two common
locations:
In the demilitarized zone (DMZ) between an enterprise and external partners,
where DataPower typically performs primarily security services
Within the enterprise as an enterprise service bus (ESB), interconnecting
disparate enterprise assets in a meaningful way
(Figure 1-2, the use case scenario map, shows the device in both locations,
interacting with WebSphere Message Broker, WebSphere MQ publishers and
subscribers, TFIM, and WebSphere Application Server ND with UDDI and
TAI/LTPA-secured applications. Security enforcement in the demilitarized zone
includes authentication, credential acquisition, authorization, single
sign-on, policy enforcement, and auditing.)
You can use the device to support more than one business service. To keep the
planning process simple enough to manage:
1. Create a plan for one service at a time.
2. As your plan develops, update the corresponding map.
3. As additional services are added, refer to existing plans to identify possible
areas of reusability or conflict.
1.3.1 Hardware install
Decide on the items in this list before the device is removed from the shipping
container:
Rack location
The 9235 DP device is a 1U device that includes its own rails.
Power requirements
Use the provided power cords to connect both power supply modules to an
AC power source. Connect each power supply module; otherwise, the unconnected
module is reported as being in a failed state. The Type 9235 contains two 650-watt
power modules that accept 110 or 220V current. Both power supply modules
must be connected to the same power source to prevent ground voltage
potential differences between the two power modules.
Serial port
Will the serial port of the device be connected to a terminal server? If so, what
is the console and port address?
Users and groups can be restricted to access only particular domains. This
separation provides a good way to protect device-wide settings (such as IP
address) from inadvertent changes that can disrupt all services that are running
on the device and prevent collisions when more than one group of developers
use the same machine.
Identify the number of domains needed to support the anticipated use of the
machine. Domains can be added, altered, or deleted at any time.
1.3.5 User accounts
When planning for the configuration of user accounts on the device, consider
these requirements:
Will full administrative access to the device be limited?
Will authentication of users employ off-device services, such as RADIUS or
LDAP? What will be the fallback user in the event that the remote
authentication system fails?
Will access be partitioned or segregated in some way, for example, network
operations, application development, production monitoring? What
permissions will be assigned to each group of users? What domains can
groups access?
Client-side transport protocols
What protocols will the device need to support to receive or retrieve
messages for processing on the client side (or front side) of the device?
List each protocol. For each protocol, include as much detail as possible,
such as the need for keys and certificates to support secure connections, or
the need for transactionality to support the lowest possible risk of message
loss.
Enterprise-side transport protocols
What protocols will the device need to use to send messages for processing
to enterprise services? What protocols will be used to retrieve or accept
responses?
List each protocol. For each protocol, include as much detail as possible,
such as the need for keys and certificates to support secure connections or
the need for transactionality to support the lowest possible risk of message
loss.
Authenticate/authorize connections or messages
Will the service perform authentication and authorization of requests before
forwarding to enterprise services?
What methods will be employed? What information must be contained in
client-side requests to enable the desired authentication or authorization
system? What authority system (for example, Tivoli Access Manager or
LDAP) will be used? What are the connection requirements to access that
authority?
Identify all of the necessary details to implement this requirement.
Message filtering
The device can filter messages in several ways:
– It can perform schema validation on XML messages and reject those that
do not pass.
– It can perform custom filtering using a custom stylesheet.
– It can perform virus checking and filter for SQL Injection attacks.
Identify any and all forms of filtering required.
Message-level security and integrity measures
Will the device perform security functions, such as encrypting and decrypting
all or part of messages (including headers and cookies), verifying signatures,
or signing messages?
Identify the security operations needed along with the cryptographic material
required to support those operations.
can interact smoothly with SNMP monitoring, standard load balancer health
checks, and return status information through the XML Management
Interface. How will the service deliver monitoring information to external
monitoring systems?
Monitoring might also include Service Level Agreement enforcement, which
can affect the flow of messages through the device.
Identify the methods and information required to monitor the services that run
on the device and the external monitoring tools that will interact with the
device.
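As an illustration of the last option, a status request to the XML Management Interface can be sketched as follows. The request envelope follows the DataPower management schema; the target domain and the status class shown are example values, not a prescription:

```xml
<!-- Sketch of a status request to the XML Management Interface.
     The domain and status class shown are example values. -->
<env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/">
  <env:Body>
    <dp:request domain="default"
                xmlns:dp="http://www.datapower.com/schemas/management">
      <!-- Ask for one status provider; many other classes exist -->
      <dp:get-status class="CPUUsage"/>
    </dp:request>
  </env:Body>
</env:Envelope>
```

The response carries the matching status records, which an external monitoring system can poll on a schedule.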
Each phase presents needs that require separate tools. In addition, methods
are needed to move a solution from one phase to the next,
such as from application development to test. This movement typically requires
the migration of device configurations from one machine to another or in the case
of movement from test to production, from one machine to potentially many
others.
In the next sections, we discuss some key areas to consider when planning for
life cycle phase migrations.
– Might contain more than one service, not necessarily only the service
under development. Can also contain unwanted or orphaned objects.
The entire device, especially including the configuration of the default domain
(and thus such values as the IP addresses assigned to Ethernet ports):
– Guaranteed to include all necessary elements
– No duplication on same device possible
– Requires special care for default domain values such as IP addresses
The use of a centralized, multi-device management system can also affect how
solution deployment packages are defined. These tools might support only
domain-level or device-level deployments.
Some issues to consider when moving a solution through the test phase are:
Will the testbed environment mirror all of the services used in the production
environment?
Does the test suite include all of the possible inputs that the solution might
encounter?
What utilities will be used to drive the load for stability and performance
testing?
How will error paths be tested?
Is the testbed using devices that are dedicated to the test cycle?
The device provides a range of troubleshooting tools, all of which are available
during the test phase. Some of these tools, such as the Probe, must not be used
in a production environment because of their impact on performance. Some of
these tools, such as First Failure Data Capture (FFDC), might be needed in a
production environment.
1.5.5 Production
In this phase of the life cycle, monitoring the health of the device and the services
running on the device becomes most important along with robust logging.
The reliability and frequency of backups gain importance, as does the ability to
restore a configuration in the event of a disaster or other need.
Here are some areas to consider in this phase of the life cycle:
What tools will be used to monitor the device? Are the tools tested?
What logging system will be used to capture logs?
What users with what permissions are allowed to access the device?
How will new solutions be deployed to the production environment?
How will maintenance activities be performed, such as firmware updates?
While overview descriptions of the major services are included here, this chapter
does not attempt to describe the range of capabilities presented by each service.
See the official documentation Information Center at:
http://publib.boulder.ibm.com/infocenter/wsdatap/v3r8m1/index.jsp?topic=/xa35/welcome.htm
Many of the practices mentioned here apply to features that can be used in any
of the services, such as an Authentication, Authorization, Auditing (AAA) Action
or an SLM Policy. Some features apply only to a particular service, such as the
use of WSDLs for configuration, which applies only to the Web Service Proxy.
Those practices that apply only to a particular service are grouped under that
service heading.
All of the available protocols on which the Multi-Protocol Gateway can receive
incoming requests can also be used on the server-side to forward the request to
its destination. The client-side protocol does not need to match the server-side
protocol.
Propagate URI
This option must be turned off to support MQ-based back-end destinations.
The Write Unique Filename if Trailing Slash settings can be found on the FTP
Client Policies tab, in the User Agent settings. To find the User Agent settings,
access the XML Manager settings (click “…” near it) of the service.
Note: The transfer fails if the FTP server does not support the STOU
command.
The FTP Server Front-Side Handler provides an FTP server that can be used to
submit files for processing by the system. When acting as an FTP server in
passive mode and using the FTP Server Front-side Handler, DataPower can limit
the port range used for the data connection (feature released on firmware version
3.7.2). This feature is called Limit Port Range for Passive Connections. When
enabled, lowest and highest values can be set to limit the port range that the
DataPower appliance uses for FTP data connections.
To avoid these problems, use the Basic Auth property of the User Agent. This
property is on the User Agent Configuration settings at the Basic-Auth Policy tab.
Configure a policy that associates a set of URLs with a specific username and
password for Basic-Auth authentication. After the username and password are
set, they no longer need to be provided in the FTP client address (for example,
ftp://myserver:port rather than ftp://user:password@myserver:port).
Note: Create a new User Agent for this purpose rather than altering the
default XML Manager User Agent to avoid unintended consequences.
Furthermore, the Response Type of the Gateway in use must be set to either
Non-XML or Pass-thru to allow file and directory listings to pass through the
service correctly.
Streaming
The FTP-based Front-Side Handlers for a Multi-Protocol Gateway support
streaming of large files under the correct circumstances. Clients connecting to a
Server FSH can be authenticated using the AAA Policies configured as part of
the Server Handler itself without affecting streaming.
2.2.2 MQ
This section addresses configuration of the MQ Service.
MQ headers
The MQ headers are represented in DataPower as XML serialized to string. As
shown in Example 2-1, the first tag is always the same as the header type, and
internal tags correspond to field names.
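Example 2-1 is not reproduced in this excerpt. As a hedged illustration of the serialized form (the field values here are invented), an MQMD header might look like the following:

```xml
<!-- Illustrative only: an MQMD header serialized as an XML string.
     The outer tag names the header type; inner tags are MQ field names. -->
<MQMD>
  <Format>MQSTR</Format>
  <Persistence>1</Persistence>
  <ReplyToQ>REPLY.QUEUE.EXAMPLE</ReplyToQ>
</MQMD>
```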
In the event that the Queue Manager is down or a connection is not established,
this example configuration allows the MQ QM object to retry six times at
10-second intervals. After six attempts, the MQ QM object retries every 10 minutes.
The total number of open connections at any time is affected by the Queue
Manager cache timeout value. See the next section for more information about
the cache timeout.
Note: By default, the Cache Timeout has an empty value, which means that idle
connections are not closed.
In addition, check the size of messages that the remote Queue Manager channel
and queues allow.
Note: Start by checking the MQQM object and the XML Manager message size
properties when DataPower shows an MQRC 2010 error for large message
transfer attempts.
In cases where some messages are large but most are smaller, consider using
a separate Queue Manager for the large message traffic.
Dynamic queues
Use temporary dynamic queues whenever possible to conserve resources
because temporary queues are deleted automatically when usage completes.
MQ single-phase COMMIT
DataPower supports single-phase COMMIT. To support this feature, the following
conditions must be true:
The same MQ Queue Manager must be used in the MQ Front-Side Handlers
and any MQ URLs, including the back-end destination.
All processing actions must be synchronous.
The same connection is shared across all MQ operations within a transaction.
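For example, a back-end MQ URL that reuses the queue manager object already used by the Front-Side Handler might look like the following (the object and queue names are hypothetical):

```
dpmq://sameQmgrObject/?RequestQueue=BACKEND.REQUEST;Sync=true
```

Because the URL names the same queue manager object as the handler, the back-end PUT can participate in the same unit of work.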
Use a response rule to capture the response code by using the style sheet code
snippet, as shown in Example 2-2.
Note: For Datagram traffic, set “Process Backend Errors = off” if the
processing policy is not handling MQ errors. This “Process Backend Errors”
field is visible under the MPGW's advanced tab.
Handling errors
For handling MQ errors, use a response rule to capture the response code by
using the code snippet as follows:
<xsl:variable name="mqrc"
    select="dp:response-header('x-dp-response-code')"/>
<xsl:variable name="ecode"
    select="dp:variable('var://service/error-code')"/>
<xsl:choose>
  <!-- MQ reason codes are four-digit values starting with 2 -->
  <xsl:when test="(starts-with($mqrc, '2') and
                  (string-length(normalize-space($mqrc)) = 4)) or
                  ($ecode != '0x00000000')">
    <dp:xreject reason="'MQ Error'" override="true"/>
  </xsl:when>
  <xsl:otherwise>
    <dp:accept/>
  </xsl:otherwise>
</xsl:choose>
When Units of Work is set to 1, the appliance rolls back a message to its
configured Backout queue if it cannot deliver the message to the destination
queue. The error conditions are handled by the device.
However, if you want to handle the error conditions using an error rule, the device
has to COMMIT the transaction after the MQ PUT is done. To do this, set the
variable “var://service/error-ignore” to “1” in the error rule to make sure that the
transaction is committed when the message is PUT to an alternate queue, not to
the Backout queue.
Further, the MQ URL must contain the “Sync=true” tag that allows the queue
manager to COMMIT immediately after the MQ PUT is done. If the “Sync=true”
tag is not used, there can be uncommitted message(s) in the alternate queue.
Here is an example of using the MQ URL in the error rule:
dpmq://qmgr-object/?RequestQueue=QUEUE4;Sync=true
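A minimal error-rule style sheet combining these settings might look like the following sketch. This is an illustration, not the book's own listing; the queue manager object and queue name follow the example URL above:

```xml
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:dp="http://www.datapower.com/extensions"
    extension-element-prefixes="dp">
  <xsl:template match="/">
    <!-- Ignore the error so the transaction is committed after the PUT -->
    <dp:set-variable name="'var://service/error-ignore'" value="'1'"/>
    <!-- PUT the failed message to an alternate queue; Sync=true makes
         the queue manager COMMIT immediately after the PUT -->
    <dp:url-open target="dpmq://qmgr-object/?RequestQueue=QUEUE4;Sync=true"
                 response="ignore">
      <xsl:copy-of select="/"/>
    </dp:url-open>
  </xsl:template>
</xsl:stylesheet>
```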
Using this approach, performance increases significantly and four separate calls
become a single one. To inject MQOD headers for the back-end MQ Queue
Manager, use the DataPower extension function <dp:set-request-header
name="MQOD" value="$mqodStr"/> in the custom style sheet or the MPGW's
Header Injection tab.
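As a sketch of that header-injection approach (the queue names are hypothetical, and the exact MQOD/MQOR field layout should be checked against the MQ header documentation), the serialized header string can be built and injected in a style sheet as follows:

```xml
<!-- Sketch: build an MQOD string carrying a distribution list of
     queues, then inject it as the MQOD header for the back end. -->
<xsl:variable name="mqodStr">
  <xsl:text>&lt;MQOD&gt;</xsl:text>
  <xsl:text>&lt;MQOR&gt;&lt;ObjectName&gt;QUEUE1&lt;/ObjectName&gt;&lt;/MQOR&gt;</xsl:text>
  <xsl:text>&lt;MQOR&gt;&lt;ObjectName&gt;QUEUE2&lt;/ObjectName&gt;&lt;/MQOR&gt;</xsl:text>
  <xsl:text>&lt;/MQOD&gt;</xsl:text>
</xsl:variable>
<dp:set-request-header name="MQOD" value="$mqodStr"/>
```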
The WSDL file provides critical information about a Web service, including
endpoint locations, instructions for binding to these endpoints, and expected
message schemas. With just the WSDL and a Front-Side Handler, a Web
Service Proxy has the basic configuration. Additional configuration can be
defined to meet your case requirements, such as AAA, document
transformations, message encryption, and so forth.
In addition to uploading or fetching a WSDL file, a Web Service Proxy can be
configured through a subscription to a UDDI registry or WSRR server. Through
subscription, the Web Service Proxy
receives automatic updates of the WSDL file or dynamically looks up the
endpoints for the service.
UDDI subscriptions
Another way to configure a WSP service is to retrieve the WSDL document from
a Universal Description Discovery and Integration (UDDI) repository. The
documentation of what is needed to configure a UDDI subscription is at the
following URL:
DataPower configuration with UDDI:
http://www-01.ibm.com/support/docview.wss?uid=swg21329054
The use of a WSRR registry as the source of WSDL documents offers a number
of benefits:
The registry provides a centralized, managed store for WSDL documents.
Here are some general considerations for using WSRR subscriptions to obtain
WSDL files:
Use “Saved Search” subscriptions when “loose coupling” is required between
the device and WSRR regarding which web services are being proxied by the
device.
Use “Concept” for a similar use case as previously described, but using
“Concept” tags instead of a named WSRR query.
Use “WSDL” for “tight coupling” of the device and WSRR registry information
regarding which web service to proxy. This selection allows only one WSDL
per subscription.
Use “Polling” as the default synchronization method to eliminate the need for
manual intervention.
The polling cycle time must be tuned depending on factors such as: (1)
frequency of changes to WSRR registry, (2) frequency of desired change
because the customer might decide that one time a week is good enough to
roll out new service updates, (3) expected appliance load, and (4) governance
requirements (this might also force the synchronization method to be
manual).
Use the WSRR governance process, which associates Governance states to
registry objects, to force (or prevent) the device from proxying published web
services, for example:
– Create a Concept tag called “PublishToGW” and only mark those WSDLs
desired with that tag to make them visible to the device.
– Create a WSRR query that performs some arbitrary matching of object
name/version/classification/govern state, and save that query as a “Saved
Search” in WSRR. On the device, configure a proxy to use the saved
search, making it possible to manage published and available web
services through their life cycles using WSRR rather than the device.
The device logs all polls that result in retrieving zero WSDLs as an error in the
logging system. If this occurs, check the spelling of the WSDL or Concept or
saved search name. Changes might be required on the WSRR side as well.
If the issue is on the WSRR side, correct the problem in the WSRR registry
and manually force a synchronization between the device and the registry. To
force a synchronization, select Status → Web Service → WSRR
Subscription Service Status, and click Synchronize.
The following IBM Redpapers explain the benefits of WSRR subscriptions and
the detailed steps to configure a subscription to a WSRR concept:
IBM WebSphere DataPower SOA Appliances Part IV: Management and
Governance, REDP-4366
Integrating WebSphere Service Registry and Repository with WebSphere
DataPower, REDP-4559
Caching WSDLs
The WSDL Cache Policy of the Web Service Proxy service controls how often a
WSDL is automatically refreshed. The WSDL Cache Policy is accessible through
the WebGUI main navigation bar under Objects → Services → Web Service
Proxy.
Select the appropriate WSP object and the WSDL Cache Policy tab. The
following fields are available to configure the caching policy:
URL Match Expression that is used to specify the WSDL
The Time to Live (TTL) in seconds as the refresh interval
Caching policy: The caching policy is only available for WSDL documents
that are hosted outside the appliance. The caching policy does not apply when
the WSDL document is stored in the device file system or in an external
repository, such as UDDI or WSRR.
The caching policy configured for a WSDL document also applies to the schema
files referenced by the WSDL document and other schemas referenced in turn by
those files.
Note: When using one WSDL to configure multiple WSP services in the same
or different domains, set a unique local Ethernet address for each one of the
WSDL documents. This is done by creating a separate FSH for each one of
the configured WSDL documents.
Note: To maximize the availability of your services upon restart, be sure to set
the automatic retry option that is under the appropriate Web Service Proxy
service configuration in the appliance objects. This way, even if the device fails
to retrieve the remote attachments and references on the initial try, it will retry.
Example 2-4 reflects the Endpoint Policy Subjects that can be shown using the
“?wsdl” query.
In all three cases, the ?wsdl command returns the merged policy attached to the
port, not the binding or the portType.
The ?wsdl command does not return what is actually in the WSDL document, but
rather the compiled version of it, modified to reflect the service that the client
sees based on all of the configuration information associated with the WSP
service.
2. Use different local URIs for each service version. Figure 2-1 shows the local
URI in the WSDL configuration window.
It is possible to change the port rather than the URI that is bound to one of the
WSDLs. Changing the port value requires creating a new Front-Side Handler.
More than one Web Service Proxy can use the same Front-Side Handler. This
configuration makes it possible to publish a single Internet Address and port
number to support a larger range of services split across more than one Web
Service Proxy. Using a unique URI for each service provides separation of
services for tracking and debugging purposes.
When an SLM policy is being enforced by more than one device, administrators
must also set an SLM Update Interval on the SLM tab of the XML Management
Interface configuration page, which is found only in the default domain.
The SLM Update Interval must be equal to or less than the threshold interval set
in any SLM Policy statement in use on the device. If not, the updates deliver
outdated information to the individual policies and are ignored, defeating the
purpose of peering.
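The constraint amounts to a simple check, sketched here (the function name is hypothetical):

```python
def slm_update_interval_ok(update_interval, threshold_intervals):
    """Return True if the SLM Update Interval (seconds) is less than or
    equal to every SLM policy threshold interval in use on the device.
    Illustrative sketch only."""
    return all(update_interval <= t for t in threshold_intervals)
```

For example, an update interval of 10 seconds is valid against policies with 15- and 30-second thresholds, but an interval of 20 seconds is not.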
In addition, all of the standard benefits of a proxy apply. Clients do not access the
web service directly, obscuring its location.
(Figure: clients on the Internet reach a load balancer, which distributes traffic
across DataPower appliances that each front multiple application servers.)
In some cases, the post processing stage of a policy used at this level can
produce results that do not meet the desired outcome. In such cases, use an
AAA action in the default Request rule of the Proxy Processing Policy.
Execute a style sheet on the device to add a time stamp to the message:
1. Place a Transform action after the AAA Action in the processing policy of the
service.
2. Remove the requirement for a time stamp contained in the Read-Only
LTPA-related Policy Set used by the server.
3. Make a copy of the Policy Set document to do this modification.
Prior to firmware version 3.8.1.x, only the option to generate a SAML assertion
with an Authentication statement was available. This SAML assertion can then
be signed, if desired. This option remains available in current releases as a
means of supporting compatibility with earlier releases.
The information gathered by this AAA Policy can be accessed and used in
processing policy actions by executing a stylesheet that makes the following call:
dp:auth-info('basic-auth-name')
The username, password, and certificate details are available in this manner.
To use this information for an LDAP Authorization Phase Group Member lookup,
use a custom style sheet to place the order of tokens inside the DN in the order
that LDAP expects, such as:
cn=bob,ou=users,o=mycorp,dc=agency
The variable containing the list of destinations has the content in Example 2-7.
Each destination can take unique input by changing the entry as follows:
<url input="contextname">URL</url>
As each delivery is made to each destination, the results are stored in a unique
Output context. Those contexts can then be recombined as needed by a style
sheet in a Transform action, as shown in Example 2-8.
Use an Event-Sink action to cause the processing rule to wait for asynchronous
actions to complete. Note that following an asynchronous action immediately with
an Event-Sink action often negates the advantage of asynchronous execution.
Any asynchronous action handling large files must be concluded before the end
of a processing rule by an Event-Sink action, because asynchronous actions can
otherwise continue executing beyond the end of a processing rule, consuming
memory and resources.
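The relationship between asynchronous actions and the Event-Sink action can be sketched with an asyncio analogy, where `gather` plays the Event-Sink role (names and timing here are illustrative, not the appliance's behavior):

```python
import asyncio

async def deliver(destination, payload):
    # Hypothetical asynchronous delivery, standing in for an async action
    await asyncio.sleep(0)
    return f"{destination}:delivered"

async def processing_rule(payload):
    # Start deliveries asynchronously so other work can proceed ...
    tasks = [asyncio.create_task(deliver(d, payload)) for d in ("a", "b")]
    # ... then, before the rule ends, wait for them (the Event-Sink role),
    # so the work does not outlive the rule and hold memory
    return await asyncio.gather(*tasks)

results = asyncio.run(processing_rule("msg"))
```

Placing the wait immediately after launching the tasks would serialize the work, which mirrors the note above about negating the advantage of asynchronous execution.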
Technically, there is also a fourth way to use SQL: through an sql:// url-open.
The Multistep action uses this under the covers, but there are cases where this
construction cannot be used, such as the backend URL for an XML firewall;
therefore, use the previously discussed methods in most situations.
Although it is the most flexible, the extension element can have more overhead
than the extension function and action. Depending on the database, fully
specifying the precision, scale, type, mode, and nullable attributes on all
<argument> elements might result in a performance optimization (this is
especially true for DB2).
Stored procedures are the only way to guarantee atomicity within a transaction
requiring more than one operation. Put differently, if a customer needs to execute
several SQL statements (for example, insert a record and update a balance) as
part of a logical unit of work, a stored procedure is the only way to guarantee that
those operations occur atomically (with the exception of batched updates).
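The unit-of-work idea can be illustrated locally with a sqlite3 transaction; on the appliance the equivalent guarantee comes from the stored procedure on the database, so this is an analogy only (table and column names are hypothetical):

```python
import sqlite3

def record_and_update_balance(conn, account, amount):
    # One transaction: the insert and the update commit together or not at all
    with conn:
        conn.execute(
            "INSERT INTO ledger (account, amount) VALUES (?, ?)",
            (account, amount),
        )
        conn.execute(
            "UPDATE balances SET balance = balance + ? WHERE account = ?",
            (amount, account),
        )

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ledger (account TEXT, amount INTEGER)")
conn.execute("CREATE TABLE balances (account TEXT, balance INTEGER)")
conn.execute("INSERT INTO balances VALUES ('acct1', 100)")
record_and_update_balance(conn, "acct1", 50)
```

If either statement fails, the context manager rolls the transaction back, so the ledger and the balance never disagree.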
Developers can configure error rules that automatically catch errors thrown at
any of these points by using an appropriate Match action in the Error rule
designed to handle that error. A Match Action can match on event codes,
providing targeted error handling, if desired. As with request and response rules,
error rules are evaluated in the order in which they are listed in the processing
policy. Figure 2-5 shows the error rules in a processing policy.
Developers might want to do one or more of the following when an error occurs:
Change an HTTP 500 to an HTTP 200 so that a requesting agent does not
receive an error response from DataPower, but rather a response that the
requester can then process. To do this, execute a custom style sheet in the
error rule. The style sheet uses the following two lines:
<dp:set-variable name="'var://service/error-protocol-response'"
value="'200'"/>
<dp:set-variable
name="'var://service/error-protocol-reason-phrase'"
value="'OK'"/>
Return a custom message to the requesting agent, replacing the standard
responses. Usually this custom message contains more information about the
error that occurred. This process requires that a custom style sheet runs in
the error rule processing. The following read-only variables provide more
information about errors:
var://service/error-code
var://service/error-sub-code
The following variable contains an error message determined by the service.
This variable is both read and write:
var://service/error-message
This message is returned to the client.
Error rules can also generate log messages or take other actions as needed. The
type and degree of error handling needed for a particular implementation
depends on the policy of the enterprise.
In general, service chaining is robust and fast and does run in production
environments. However, designing a solution that requires only one service
provides better results.
Furthermore, it is recommended that each service employ its own unique XML
Manager to eliminate the possibility that changes made to the object to support
one service inadvertently break the behavior of another service in the same
domain.
This can be achieved by configuring the XML Manager Document Cache Policy
Type as Fixed with a URL Match Expression designed to match map files (that is,
*.dpa). The TTL value then determines how often the XML Manager retrieves a
fresh copy of the map files, as shown in Figure 2-7.
2.7 Streaming
This section discusses the file streaming within the DataPower appliance along
with its advantages and considerations.
Streaming is the only way to safely transfer large files. Processing such files in
the normal operation modes can exhaust the available memory on the device
and cause a throttle or other failure.
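The principle behind streaming — processing data in bounded chunks so memory use stays flat regardless of message size — can be sketched as:

```python
import io

def stream_copy(src, dst, chunk_size=64 * 1024):
    """Copy src to dst in fixed-size chunks; peak memory is bounded by
    chunk_size no matter how large the input is (illustrative sketch)."""
    while True:
        chunk = src.read(chunk_size)
        if not chunk:
            break
        dst.write(chunk)

src = io.BytesIO(b"x" * 200_000)
dst = io.BytesIO()
stream_copy(src, dst)
```

The constructs listed below break this model because they force the device to buffer or revisit the input document.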
Using any of the following functions and commands can compromise streaming:
xsl:if statements
xsl:choose and xsl:when statements (true for any condition except
xsl:for-each statements)
Boolean tests between node sets where one is the input document
Any style sheet that checks a node for two different purposes
If a processing policy uses more than one action (for example, both a Transform
action and a Validate action, or perhaps more than one Transform action), the
contexts that are used to connect the actions must be set to PIPE. Thus, the
processing policy might contain the following actions in a processing rule:
Note: Style sheets must also be compatible with the DataPower processing
criteria. Otherwise, your service might not be streamable even when using the
right processing actions.
Processing rules
When you must transfer files of several types and sizes, use more than one
processing rule. For example, dedicate one rule to streaming large files only, and
use other rules to deal with the remaining files. This approach makes it easier to
build streamable services and rules on DataPower.
Streaming mode
The Streaming mode provides limited processing when compared to the Allow
mode. Because performance is better in this mode, it is the best option when
processing large attachments.
Unprocessed mode
The Unprocessed mode supports messages with attachments but does not
provide any processing of the attachments, which makes it the best option when
large attachments must pass through without processing. In most cases, the root
part of the message contains SOAP content, and DataPower can still apply
filters and transforms to that part if needed.
Note: If the attachment is referenced in any of the style sheets or actions, the
attachment is buffered.
Public key cryptography uses a certificate (which contains a public key) and a
private key. Public keys and private keys can be imported onto the appliance or
generated directly on the appliance.
Session keys are secured with public key cryptography and then used to securely
transfer data using SSL.
The three common SSL scenarios using the DataPower appliance depend on
which side of the connection the appliance is responsible for securing:
DataPower appliance acts as an SSL client as a forward proxy: If the
appliance is set to authenticate and validate certificates, the appliance must
have a certificate that contains the public key to trust the back end server.
DataPower appliance acts as an SSL server as a reverse proxy: The appliance
must have a private key when acting as an SSL server.
DataPower appliance acts as a “two-way” proxy: The appliance must be
configured to both initiate connections as an SSL client and terminate
connections as an SSL Server.
3.2 Usage
Taking the components of cryptography and putting them together within the
DataPower appliance configuration requires an understanding of the various
relationships that these pieces have with each other.
A Crypto Key configuration object maps to a private key, and a Crypto Certificate
configuration object maps to the certificate. Pairing a corresponding Crypto Key
object and Crypto Certificate object creates a Crypto Identification Credentials
object.
If the Crypto Profile also contains an optional Validation Credential object, the
appliance can request the peer (the SSL client) to authenticate itself as part of
the SSL handshake for mutual authentication.
Because SSL is terminated with the DataPower appliance, the appliance can act
as both a server to receive an incoming SSL connection and as a client to initiate
an outbound SSL connection. In this case, two sets of identification and
validation credentials are required, one for each connection. The SSL direction is
set to “two-way” and both a “reverse” and a “forward” Crypto Profile are defined.
There is an SSL proxy profile object that refers to Crypto Profile objects. An SSL
proxy profile might refer to a forward (DataPower as an SSL client) Crypto Profile
or to a reverse Crypto Profile (DataPower as an SSL server).
The HSM provides acceleration capability similar to the non-HSM appliances,
but additionally provides a secure private key storage location that is separate
from the flash file system.
If using an HSM appliance, the keys must be stored on the HSM. If using a
non-HSM appliance, the keys are stored in the flash file system.
Unless you use an appliance with HSM hardware, keys cannot be imported or
exported.
Note: The HSM has received FIPS 140-2 certification at level 2 or level 3,
depending on how it is configured.
Generate Key
The Generate Key action generates a private cryptographic key on the appliance
and optionally a corresponding self-signed certificate. By default, the Generate
Key action also creates a corresponding Certificate Signing Request (CSR) that
is needed by a Certificate Authority (CA). CA policies can vary with the amount of
information they require in a CSR; therefore, we recommend checking with the
CA before generating a CSR to ensure that sufficient information is provided.
Figure 3-3 on page 63 shows the appliance key generating panel.
Note: If the output format includes private fields of the key, the designated file
must reside in the same directory as the configured file of the private key
object. The OpenSSH pubkey format does not contain any private fields.
Either upload certificates directly to the file system or use the Import Crypto
Objects tab of the Crypto Tools panel to import certificate objects.
Using the Import Crypto Object tool to import certificates automatically creates a
corresponding Crypto Certificate object.
If directly uploading the certificate files, the Crypto Certificate objects must be
created manually.
After the key is uploaded, the key cannot be retrieved, copied off, or included in
an appliance configuration backup.
A key can be exported off of a non-HSM equipped appliance only in the following cases:
If the key is generated on the appliance with the option for export explicitly
enabled, a copy can be found in the temporary directory.
If the appliance is set for a secure backup using a Disaster Recovery setting
enabled during the appliance initialization.
The non-HSM equipped appliance can only use Crypto Key objects pointing at
the appliance flash file directory (for example, cert:///privatekey.pem).
Use the Import Crypto Object tool to import the key onto the HSM. The full
security benefit of the HSM is utilized when using the keygen to create a private
key that has never left the inside of the HSM or when the imported key file was
exported from another HSM.
The file can be in any of the supported private key formats: DER, PEM, PKCS#8,
or PKCS#12. It can also be the output of the Crypto Tool key export from an
HSM-equipped appliance.
The HSM only stores RSA private keys. It does not store DSA keys (public or
private), RSA public keys, X.509 certificates, or symmetric keys (AES or 3DES).
Note: Do not use HSM to store files other than keys. HSM is not a file system.
HSM initialization
The HSM arrives in an uninitialized state. The HSM can then be initialized in
FIPS 140-2 level 2 mode or in FIPS 140-2 level 3 mode. An uninitialized HSM
cannot store keys and is limited in most RSA operations (basically only SSH and
SSL will be able to do RSA).
To initialize the HSM, the hsm-reinit command is executed from the CLI (to put
it into either level 2 mode or level 3 mode), and then the appliance must be
rebooted to complete reinitialization.
Note: Be careful when switching HSM modes because all keys inside are
permanently destroyed during this operation.
PED keys
There are four PED keys: grey, blue, red, and black. The grey and blue keys
represent the security officer (SO) role in the FIPS documentation. The red key
controls key sharing between two HSMs. The black key represents the USER
role in the FIPS documentation.
If using the HSM in level 2 mode only, then PEDs are not required. If using it in
HSM level 3 mode, at least one is required. One PED can be shared between
any number of appliances (it is a matter of logistics to physically move it around
between them, though). Two different kinds of PED cable might be needed if only
one PED is available to administer a mix of RoHS and non-RoHS HSM
appliances.
FIPS 140-2 Level 3 requires a secure channel for the login of the HSM user
before secrets can be accessed. The PED is how this particular HSM chose to
implement that secure channel. Each HSM does this differently.
After all of this is complete, private keys can move from system-to-system with
crypto-export and crypto-import.
Note that the non-HSM appliance can export keys immediately at keygen time,
but never at a later time. To export keys at keygen time, use the export-key
parameter (not to be confused with the exportable option that controls later
exportability on HSM appliances).
Note: The appliance supports only CRLs that are in the DER format, and they
must conform to RFC 3280.
By default, the appliance does not disable certificates and credential sets that
use the expired certificate when the certificate expires.
If the certificate is expired and not set to ignore expiration, the Certificate Monitor
can be configured to mark the key object and any dependent objects down.
If Disable Expired Certificates is set to on, all objects that use the expired
certificate (either directly or through inheritance) are disabled and are no longer
in service, for example, certificate expiration triggers the disablement of the
associated certificate. Disablement of the certificate triggers the disablement of
all firewall credentials, identification credentials, and validation credentials that
use the expired certificate. In turn, crypto profiles that use disabled identification
credentials and validation credentials are disabled, which leads to the
disablement of SSL proxy profiles that depend on the now-disabled crypto
profiles. If this security measure is enforced, the DataPower service can be
disabled as the result of a referenced certificate expiration.
An alternative to using the Certificate Monitor is to monitor for the log event
shown here using SOMA, SNMP, a log target, or some other monitoring method:
0x01b6000c cryptoinfo Certificate is about to expire
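A monitoring script can watch for that event code in collected log lines. A minimal sketch, assuming plain-text log lines (adapt to your log target's format):

```python
EXPIRY_EVENT_CODE = "0x01b6000c"

def expiring_cert_events(log_lines):
    # Keep only lines carrying the certificate-expiry event code
    return [line for line in log_lines if EXPIRY_EVENT_CODE in line]

logs = [
    "0x01b6000c cryptoinfo Certificate is about to expire: mycert",
    "0x00350015 network Connection accepted",
]
alerts = expiring_cert_events(logs)
```

Feeding such alerts into an existing monitoring system gives advance warning without relying on the Certificate Monitor's object-disablement behavior.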
If a certificate is exported using the Export Crypto Object tool, the certificate is
encapsulated within DataPower configuration object XML and might need to be
edited to revert to a standard PEM file format.
Note: Certificates, keys, and other such sensitive material must not be stored
in the local: directory.
If HSM private key operations are running slowly, check to see if any Crypto Key
objects reside in the flash file directory (cert:///alice.pem). If Crypto Keys are
found in a flash directory, use the crypto-import action to move the Crypto Key
object inside of the HSM (hsm://hsm1/alice).
Using a Crypto Key in the flash directory is much slower because the key must
be imported into the HSM, used, and then deleted on every single RSA
operation. Certificates and public keys do not have this problem.
When importing a private key, choose a new Object Name for the imported copy
of that key. After importing the keys, update the configuration so that the keys
use the imported object name rather than the name of the key stored on the file
system. For example, in a processing policy where a key is used in a Sign
action, change the key reference in the pull-down menu at the bottom of the
action configuration to the imported key object name.
After they are generated on the HSM, private keys are tied to the domain in
which they were created and cannot be imported from one domain to another.
Keys that were imported from an external source must be re-imported on the
new appliance.
Keys contained within the appliance are listed on the Objects → Crypto
Configuration → Crypto Key page.
Completely test the appliance before introducing the appliance back in to the
production, development, test, or other environment.
4.2 Benefits
The DataPower appliance offers many serviceability and troubleshooting
capabilities. When something behaves unexpectedly in a complex system,
understanding the troubleshooting tools that are available can greatly reduce
the time needed to resolve the problem. The major troubleshooting and
serviceability components of the DataPower appliance are:
Testing network connectivity
Appliance statistics and status providers
System and Audit logs
Error reports and failure notifications
XML file captures and Multistep Probe
4.3 Usage
There are several troubleshooting tools and techniques available for the
DataPower appliance. Understanding the capabilities of each tool helps you
make a more informed decision when selecting which set of tools to use for a
particular problem.
In general, certain tools are suited to different parts of the development life
cycle, while the error report can be used at any phase. The most granular tools,
such as the debug loglevel and the multistep probe, are best used during the
development and test phases because they can be intrusive and generate a
large amount of data in a high-load environment.
In production, use the status providers, logs, and error report to check on the
operational state.
Figure 4-1 shows the Monitoring and Troubleshooting Panel in the Control Panel.
In the Monitoring and Troubleshooting Panel, there are various tools and settings
available to aid in identifying and resolving a problem, which we describe in the
following sections.
Setting the loglevel to Debug and recreating the issue can capture valuable
details about processing behavior and transaction flows, and help correlate
them with any errors that are seen. You can then view the logs by selecting
View Logs in the Control Panel or directly from the file system, for example:
temporary:///default-log
The log file is displayed in the WebGUI as a table where each row represents a
particular event on the appliance. Figure 4-2 on page 76 shows an example of
the log view in the WebGUI.
The latency numbers measure in milliseconds the time elapsed since the
beginning of a transaction. Usually, this is at the start of the HTTP transaction.
Table 4-2 describes the latency log format (numbers are in milliseconds).
The Probe can display useful transaction information like metadata, variables,
headers, attachments, parameters, extensions, and execution tracing.
To enable the Probe function, there are basically two paths to follow:
Activate the Probe function directly from the service:
Service configuration → Show Probe → Enable Probe
Activate the Probe function from the Troubleshooting menu:
Control Panel → Troubleshooting → Debug Probe → Select intended
service → Add Probe
Consider using the two additional settings when the workload on the appliance
is high and manual capture of the intended transaction might be difficult.
This function can only be activated from the default domain, and after it is active,
it captures XML files for all domains running on the appliance. The captured
files can be viewed through Main Menu → Administration → Debug → Browse
Captured Files. A window similar to Figure 4-5 is displayed.
Because it captures every XML file reaching any active service, handle this
function carefully: it can significantly affect the performance of the appliance
when enabled.
The current object status can be viewed using the View Status icon in the
Control Panel.
The DataPower appliance can also gather statistical data about various
resources on the appliance. By default, the gathering of statistics is disabled for
performance reasons and must be enabled for certain status providers. If a
status provider requires statistics to be enabled and they are not, the window
shows the “Statistics is currently disabled” message.
Enabling statistics allows the calculation of certain status providers, such as
CPU Usage and Transaction Rates.
The Failure Notification function generates an error report and additionally can
be configured to include the following diagnostics:
Internal State
Packet Capture
Log Capture
Memory Trace
Enabling options: The Packet Capture, Log Capture, and Memory Trace
features are not enabled by default. The administrator must consider
serviceability and enable these options.
To use Failure Notification, you must enable the configuration and allow the error
report to be uploaded. The impact on performance after enabling these features
largely depends on the configuration. Test these features throughout the
development life cycle to fully understand the impact.
When upload error report is enabled, the Failure Notification status provider is
enabled. This status provider, in combination with the report history, tracks the
error reports that the appliance generates, the reason why the appliance
generated the error report, and its upload status to the specific destination.
The destination of the error report can be set to NFS, iSCSI, RAID, SMTP, FTP,
or the temporary file directory. When the appliance generates error reports, the
These messages are independent of messages written to log and trace targets,
and this feature is not governed by any logging target configuration.
When enabled and if memory falls below an internal threshold, the appliance
tracks all memory allocations. When the appliance reaches a critical condition, it
generates an error report that contains information about memory allocation. The
configuration of the Throttle Settings affects this feature and can prevent the
appliance from reaching the internal threshold.
The Report History specifies the maximum number of local error reports to
maintain when using the upload error report feature. After reaching this limit, the
next local error report overwrites the oldest local error report. To view the history,
use the Failure Notification status provider found in the Status section in the left
navigation bar.
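The overwrite behavior of the report history is that of a fixed-size buffer, as this small sketch illustrates:

```python
from collections import deque

# A report history limit of 3: once the limit is reached,
# each new report evicts the oldest one
history = deque(maxlen=3)
for report in ["report1", "report2", "report3", "report4"]:
    history.append(report)
```

After the fourth report arrives, report1 has been overwritten and only the three most recent reports remain.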
The first step is to observe the issue’s behavior and try to isolate the issue to a
specific application domain, service, interface, message, error message, and so
on.
After these are confirmed, check to see if there are any log messages indicating
where the issue occurred.
Confirm that the firmware upgrade was successful by checking the appliance
firmware level using the WebGUI or the show version CLI command because
this message is typically informational.
Note: Take a secure backup after any firmware update because secure
backup restore requires the device to be at the same firmware level as the
backup.
Useful data can typically be found by enabling debug loglevel and the multistep
probe, and then recreating the issue.
The error report can show the correlating log messages and configuration
settings referenced during the issue. Correlating the probe with the error report
can help isolate configuration and connectivity issues.
After you recreate the issue, generate an error report for review.
Some general AAA best practices to aid in serviceability and troubleshooting are:
Ensure that the output from the Extract Identity step is unique to the current
user when using Authorization caching; otherwise, inaccurate results can occur.
Avoid using the probe to determine the input to each step for custom steps.
Avoid using custom templates where possible to take advantage of the proven
functionality of existing templates.
Do not use the var://context/WSM variables in or outside of an AAA action.
Save them to a custom context variable instead or extract the values from the
XSLT input.
Do not configure more than one Post Processing method. Using multiple
methods can result in unexpected behavior.
The Extract Identity output might be more specific than expected or not
cacheable; if so, unexpected results can occur.
The basic component of RBM is an access profile, which consists of one or more
access policies. Each access policy is a URL-style expression that grants or
denies rights to a resource.
In the policy:
<device-ip> is the device management IP address, or * for any
<domain> is the resource domain, or * for any
<resource> is a unique resource type descriptor, or * for any
(key=value) can optionally be used to narrow down the resource instances using
Perl Compatible Regular Expressions (PCRE), for example:
Name
LocalPort
LocalAddress
Directory
Note: Be sure to frame your match expression with ^ and $ if you intend an
exact match.
Access is explicitly denied through the use of the keyword NONE, for
example:
*/*/*/services/xmlfirewall?Access=NONE
This example denies any type of access to any firewall in any domain on any
device.
Access is evaluated against the policy with the longest best match, so the order
is not important, for example:
*/*/services/xmlfirewall?Access=NONE
*/*/services/xmlfirewall?Access=r+w&Name=^myName
This example denies access to all firewalls except for firewalls whose names
start with “myName”.
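The longest-best-match evaluation can be sketched as follows. This is a simplified illustration, not the appliance's actual matcher; the (key=value) PCRE refinements are ignored here, and the policy strings are examples:

```python
import re

def best_match_access(resource, policies):
    """Return the Access clause of the policy whose address part gives the
    longest (most specific) match for `resource`. Simplified sketch of the
    longest-best-match rule; not the appliance's actual matcher."""
    best = None
    for policy in policies:
        addr, _, query = policy.partition("?")
        # Treat '*' as a single path-segment wildcard
        pattern = "^" + re.escape(addr).replace(r"\*", "[^/]*") + "$"
        if re.match(pattern, resource):
            # More literal (non-wildcard) characters means more specific
            specificity = len(addr.replace("*", ""))
            if best is None or specificity > best[0]:
                best = (specificity, query)
    return best[1] if best else None

policies = [
    "*/*/services/xmlfirewall?Access=NONE",
    "*/*/services/*?Access=r",
]
```

Here an XML firewall resource picks up the more specific NONE policy, while any other service type falls through to the read-only policy, regardless of the order the policies are listed in.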
It is recommended to use only one default gateway to help prevent traffic from
being routed to an unreachable backend by accident.
To identify a network issue, the following tools are commonly used in conjunction
with each other:
Error report
Packet captures
On-board status outputs:
– Show route
– Show int
– Show int mode
– Show network
– Show tcp / show tcp summary
After this data is captured, review the data for any unexpected values, and try to
identify any network-related errors in the error report.
Use a network protocol analyzer to view the file, such as the commonly used
Wireshark utility.
Packet traces can be taken on all interfaces, including the loopback, or on a
single specified interface.
Example 4-3 shows an example of several CLI network status provider outputs.
Collect the following documentation from the time of the event. This is key to
determining root cause.
Generate an error report. This can be done from the default domain
Troubleshooting panel or from the command line using the co and save
error-report commands. This generates the error report into the device's
temporary: directory, from which it can be downloaded and sent to IBM Support.
A full device backup is always helpful. If a service request (PMR) is already open
with IBM DataPower support and you submitted a device backup, indicate this to
the L2 support person with whom you are working. Submit any and all statistical
data about the device leading up to and during the event, whether gathered
through SNMP, XML Management retrieval, or other methods such as the
command line interface (CLI), along with snapshots of the current state and any
other information pertaining to the problem. Submit all off-device logging leading
up to and during the time of the event, including syslog, NFS, SOAP, or other
methods of off-device logging.
Device monitoring through the command line interface (CLI) is available for
gathering statistics. This can help with a number of issues, from unexpected
reloads and reboots to slow response, throttling, and so on. The interval at
which you perform the CLI captures depends on the speed at which the problem
occurs. For normal operating conditions, this can be as infrequent as once
every 12 hours.
If the appliance has a throttle in place, the high memory usage (load) might be
causing the throttle to refuse connections. To view the settings, click
Administration → Device → Throttle Settings in the left navigation pane. Finding
the cause of excessive memory use can involve the use of other tools.
After you set the tracing to on, capture the output of the following commands over
several iterations while the issue is occurring for comparison:
show clock
show load
show cpu
show throughput
show tcp
show int
diag
show mem
show mem details
show handles
show activity 50
show connections
exit
##************************************************************
## **** Edit next 3 lines according to your environment *****
##************************************************************
## Hostname or ip address of the DataPower device
DPHOST=datapower.device.company.com
## The INFILE is created then used each time the SSH connection is made
INFILE=cli.txt
## The file name prefix; a date and time stamp are added to each output file
OUTFILE=cli_output.
##************************************************************
cat << EOF > $INFILE
DP_PRIVILEGED_USER_ID
PASSWORD
echo show clock
show clock
echo show load
show load
echo show cpu
show cpu
echo show throughput
show throughput
echo show tcp
show tcp
echo show int
show int
diag
echo show mem
show mem
echo show mem details
show mem details
echo show connections
show connections
echo show handles
show handles
echo show activity 50
show activity 50
EOF
## Feed the credentials and CLI commands to the device over SSH and
## keep a time-stamped copy of the session output
ssh $DPHOST < $INFILE > ${OUTFILE}$(date +%Y%m%d%H%M%S).txt
echo "Complete"
Insert the IP address and any other necessary settings for the desired
environment.
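To collect this data on a regular schedule rather than ad hoc, the capture can be wrapped in a loop. The following sketch reuses the hostname and file name placeholders from the sample above (all of them assumptions for your environment) and assumes the DataPower CLI reads the user ID, password, and commands from standard input over SSH:

```shell
#!/bin/sh
## Hypothetical scheduling wrapper for the CLI capture script above.
## DPHOST, INFILE, and OUTFILE are the same placeholders as in the sample.
DPHOST=datapower.device.company.com
INFILE=cli.txt
OUTFILE=cli_output.
INTERVAL=43200   # capture every 12 hours, per the monitoring guidance

## Build a time-stamped output file name for one capture run
capture_name() {
  echo "${OUTFILE}$(date +%Y%m%d_%H%M%S).txt"
}

## One capture: pipe the whole input file (credentials plus commands)
## to the device over SSH and save the session output
capture_once() {
  ssh "$DPHOST" < "$INFILE" > "$(capture_name)"
}

## Uncomment to capture indefinitely at the chosen interval:
## while true; do capture_once; sleep "$INTERVAL"; done
```

Shorten `INTERVAL` when the problem reproduces quickly; the 12-hour value matches the normal-operation guidance earlier in this section.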
Example 4-5 A sample CLI script that can capture a specific network connection
co
loglevel debug
show clock
show system
show version
show filesystem
show interface
show interface mode
show interface eth0
show interface eth1
show interface eth2
show interface eth4
show vlan-interface
show vlan-sub-interface
show network
show ethernet-mau
show ethernet-mii
show standby
show self-balanced
show route
show netarp
show dns
show load
show throughput
show tcp
packet-capture-advanced all temporary://pcap1 -1 5000 9000 "ip host <ip>"
ping <ip>
#<wait for ping to return before proceeding>
test tcp-connection <ip> <port> <timeout>
#<wait for tcp connection test to return before proceeding>
no packet-capture all temporary://pcap1
service show component-firmware
save error-report
loglevel error
exit
diag
show mem
show mem details
show handles
show activity 50
show connections
exit
This CLI script can generate useful command output, an error report,
and a packet trace of the tested connection failure.
Chapter 5. Business-to-business service implementation
This chapter provides a brief introduction to the WebSphere DataPower
business-to-business (B2B) Appliance and describes some of the challenges
and best practices surrounding device deployment and the implementation of the
B2B Service to support the transport of files utilizing both the
business-to-business messaging protocols and standard application layer
protocols.
If you are new to B2B, a good historical overview of the technology is in IBM
WebSphere DataPower B2B Appliance XB60 Revealed, SG24-7745.
The WebSphere DataPower B2B Appliance builds upon the functionality in the
WebSphere DataPower Integration Appliance by adding trading partner profile
management, B2B transaction viewing capabilities, and industry standards
based B2B messaging protocols to the already robust integration capabilities of
the core appliance. These three key capabilities are at the heart of the B2B
appliance. They are designed in such a way that the B2B appliance is positioned
well to handle simple partner connections with data passing through directly to
end applications for further processing. If more complex data flows are required,
the application integration capabilities of the B2B appliance can be used to
perform data validation, transformation, rules based enforcement, and content
based routing.
Figure 5-1 shows the components that make up the B2B Gateway Object in the
XB60.
Figure 5-1 B2B Gateway object components: front-side handlers for partner connections, external partner destinations, partner profiles, a metadata store (database), a document store (hard disk drive), and the B2B Viewer
Dynamically routes based on any message content attributes, such as the
originating IP, requested URL, protocol headers, and data within the
message, such as SOAP headers, XML, non-XML content, and so on.
Provides Service Level Management (SLM) to protect your applications from
over-utilization, using frequency based on concurrency or on messages per
time period. Take action when a custom threshold is exceeded (Notify,
Shape, or Throttle). Combine SLM with routing to make intelligent failover
decisions when a threshold is exceeded.
Figure 5-2 depicts how the WebSphere DataPower B2B Appliance utilizes B2B
services, integration services, and network services together to provide a robust
and complete B2B integration solution.
Figure 5-2 B2B services (partner provisioning, community management, non-repudiation), integration services (protocol bridging, content-based routing, any-to-any transformation), and network services (IP address filtering / host aliasing, VLAN support / standby control, packet trace / SNMP traps), separated by firewalls
Because most multi-gig large files are binary in nature and have no partner
information in the actual payload for us to extract, a multiprotocol gateway in
streaming mode must be used to route such files to their internal destinations.
When streaming data, no processing policy can be used and the data is passed
from the input to the output of the service untouched.
5.4 B2B performance testing best practices
This section provides information that is useful for capacity planning and for
deciding how much to scale to meet your specific connection and throughput
requirements.
Understanding the maximum capacity of a single device can help you determine
how many devices are needed to support your throughput requirements.
Performance results are subject to many factors, which include but are not
limited to:
Network latency
Firewalls, routers and switches in the path of the flow
Average file/payload size
Peak volume and measurement period
Usage of data encryption
Usage of data signatures
Usage of message disposition notifications (Sync or Async)
Usage of transport protocols
Usage of connection security, such as SSL or TLS
Usage of authentication
Usage of authorization
Usage of processing policy
Usage of transformation and/or validation
Method used to measure throughput
The number of services actively running
Concurrent connections
With so many factors to consider, the matrix of possible test case variation is
rather large. For this reason, IBM does not publish performance results. Any
results obtained from IBM lab testing are of little value with regard to how the
appliance functions with your data in your environment. Take the following best
practices into consideration during testing:
Configure the B2B appliances to handle the appropriate capacity that is
expected in production. Review the Capacity planning section in “Chapter 6
B2B Configuration and Administration” in the IBM Redbooks publication
DataPower Administration, Deployment, and Best Practices Volume 1,
SG24-7901, before performing performance testing.
Test between two separate B2B appliances (source/sending partner to
target/receiving partner) on the same network subnet (for example, no router
or firewall between the two devices) to be sure data can be consumed and
produced as fast as possible. Be sure all testing tools are also on the same
subnet as the B2B appliances. This allows you to establish a benchmark
baseline that can guide you when making decisions about how to scale to
meet your throughput needs.
When testing AS protocols or ebMS, first test a few transactions without
security, AS MDNs, or ebMS acknowledgements, and correct any configuration
issues before adding them back in.
The B2B appliance receiving the B2B messages from the sender must have a
back side that can consume the files as fast as, if not faster than, they are
sent by the sending B2B appliance. If this is not possible, a Multi-Protocol
Gateway service on the device can be configured as a simulated back end,
set to consume and discard the files that it receives and to respond with a
200 OK back to the B2B Gateway service.
The best way to measure throughput is with controlled test runs of a fixed
number of documents: in the B2B Viewer, find the time stamp of the first
message in and the time stamp of the last message out, and calculate
TimeOUT - TimeIN (converted to seconds) = Elapsed time in seconds, then
Documents processed / Elapsed time = TPS. By measuring with this manual
method, you avoid the overhead of turning on throughput monitors in the
XB60.
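The arithmetic above is simple enough to script. The following sketch uses hypothetical time stamps read from the B2B Viewer, already converted to seconds (for example, with `date +%s`):

```shell
#!/bin/sh
## Throughput from a controlled run: TPS = documents / (TimeOUT - TimeIN).
## The time stamps and document count below are hypothetical example values.
TIME_IN=36000      # first message in,  10:00:00 as seconds since midnight
TIME_OUT=36250     # last message out,  10:04:10
DOCS=1000          # fixed number of documents sent in the run

ELAPSED=$((TIME_OUT - TIME_IN))                       # elapsed seconds
TPS=$(awk "BEGIN { printf \"%.2f\", $DOCS / $ELAPSED }")
echo "Elapsed: ${ELAPSED}s  Throughput: ${TPS} TPS"
```

For this example the run took 250 seconds, giving 4.00 TPS; substitute your own measured values.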
Run each test three times and use the average of the three runs as your final
result for each testing variation; this provides a more accurate value than
any single test.
Run a Purge Now process on the B2B Appliance between each test run to
keep the payload store and metadata store clean, preventing drive space
issues.
If the intent is to use multiple services on the B2B Appliance in production,
your configuration must reflect this, and your testing must send to the
Front-Side Handlers of all services at the same time. This gives you a more
accurate result when determining the saturation point of the device.
Saturation happens when CPU utilization reaches the upper 90s to 100%.
The best way to monitor the B2B Appliance is to watch the system usage and
CPU status of both the sending and receiving devices; when either hits
between 95% and 100%, you have reached the maximum threshold you can
expect from the device.
Tip: Information about monitoring methods for DataPower is available at the
IBM developerWorks web site:
http://www.ibm.com/developerworks/websphere/library/techarticles/1003_rasmussen/1003_rasmussen.html
After you have a baseline result for the B2B Appliances in isolation, you have a
good starting point to isolate any bottlenecks that might exist when you test your
B2B flows end-to-end. Because the end-to-end performance of the B2B flow is
going to only be as fast as the slowest link in the testing chain, the baseline value
from the systems in isolation is really a sanity check that the devices are meeting
your minimum requirements. After you add firewalls, routers, the Internet, and
network latency to the mix, you will find that you might have to utilize service-level
monitoring to compensate for the network deficiencies.
Figure 5-3 Complement to the IBM application integration middleware software solutions (WebSphere MQ FTE, WebSphere Transformation Extender / Trading Manager, the DataPower B2B Appliance, partners, and applications)
Before deploying B2B appliances in production, it is recommended that you
understand how much capacity you will need for both the metadata store and
the document store. This information helps you determine how often to archive
and whether you can store documents off the device. Section 5.4, “B2B
performance testing best practices” on page 108 provides detailed
information regarding capacity planning.
Note: When working with your external partners, the most common
configuration issue that prevents a successful exchange of data is typically
related to connection security, data security, or network security, for example,
IP and port blocking at your firewalls and the partner’s firewall.
The B2B appliance has the ability to natively consume and produce B2B
messages that utilize the AS1, AS2, or AS3 protocol. This pattern, as depicted in
Figure 5-4 on page 114, demonstrates the B2B appliance’s ability to consume an
AS2 message from a trading partner that contains an EDI payload and to
transform the payload to XML.
This section does not describe the configuration steps needed to implement this
scenario; instead, it only covers the best practices, limitations, and variations of
the pattern needed to guide you when implementing similar patterns in your
environment. If you are interested in learning the configuration steps for this type
of pattern, refer to Chapter 11 of the IBM WebSphere DataPower B2B Appliance
XB60 Revealed, SG24-7745 Redbooks publication, which describes how to
configure a variation of this scenario.
Figure 5-4 EDI over AS2: (1) EDI into the hub, (2) AS2 message over the Internet, (3) AS2 processing in the B2B Gateway service, (4) XML to the back-end application, (5) MDN returned to the sender; transactions are visible in the Transaction Viewer browser
The following list refers to the numbered items in Figure 5-4:
1. An EDI file is passed from Partner A’s back end application into their B2B hub
and is wrapped in an AS2 envelope based on settings in Partner B’s profile
configuration.
2. Partner A sends the AS2 message to Partner B over HTTP or HTTPS.
3. Partner B’s B2B Gateway service unwraps the AS2 message, transforms the
EDI file to XML using a WTX DPA mode map, and sends the XML file to
Partner B’s back end application using any protocol supported by the B2B
appliance.
4. Partner B’s B2B Gateway service routes the XML file to the back end
application.
5. Partner B generates an AS2 MDN and sends it back to Partner A over HTTP
or HTTPS. Partner A receives the MDN, correlates it to the outbound file, and
logs the transaction as complete.
Note: MDNs are optionally requested by the sender, who dictates whether to
return the MDN synchronously or asynchronously. If sent back
asynchronously, the sender also provides a return-to address in the AS
headers for the MDN.
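For reference, the MDN request travels in the AS2 headers of the original message, as defined by the AS2 specification (RFC 4130). The following sketch uses hypothetical partner names and URLs; Receipt-Delivery-Option appears only for an asynchronous MDN, and omitting it requests a synchronous MDN on the same connection:

```
AS2-From: PartnerA
AS2-To: PartnerB
Disposition-Notification-To: edi@partnera.example.com
Disposition-Notification-Options: signed-receipt-protocol=optional, pkcs7-signature; signed-receipt-micalg=optional, sha1
Receipt-Delivery-Option: https://b2b.partnera.example.com/mdn
```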
Best practices
The best practices listed in this section are specific to this pattern and are in
addition to any best practices that are listed in section 5.5.1, “Best practices
common to all patterns” on page 112.
Limitations
The limitations that are listed in this section are in addition to the appliance
limitations, as described in section 5.3, “B2B appliance known limitations” on
page 107:
EDI functional acknowledgements are not natively supported in the B2B
appliance. It is possible to run a map in the response rule to create a
functional acknowledgement that simply states success or failure. If a
functional acknowledgement with detailed failure information is required, it is
recommended to use an external EDI processing engine in a downstream
process.
There is no support for data security that is not governed by the AS or ebMS
protocols for B2B messages (for example, PGP).
Pattern variations
This section describes the different patterns that are derivative of the pattern
depicted in the example:
The pattern example in this section depicts an inbound flow. An outbound flow
pattern is similar with the main difference being that the input to the B2B
Gateway service is typically over a protocol that is required for integration with
the systems on the back end (MQ, WebSphere JMS, NFS, HTTP, and so on).
The payloads are not wrapped in a B2B messaging envelope, and the output
of the B2B Gateway service is typically an AS enveloped payload.
Use this pattern as a basis for any of the AS protocols supported in the B2B
appliance. Configuration of the destinations is slightly different because each
EDIINT protocol supports separate application layer protocols.
Use this pattern as a basis for a file transfer flow over any of the non-B2B
protocols supported in the appliance. If the file is EDI or XML, and the sender
and receiver information can be extracted from the payload, the B2B Gateway
service automatically extracts the partner information and locates the profile.
If the file is anything other than EDI or XML, or does not have sender and
receiver information in the payload, the partner information must be sent
Figure: EDI exchange through a VAN: Partners A, B, and C send EDI through the VAN (AS2 ID: thevan) to the B2B Gateway service, which maps the VAN subscriber IDs (zzpartner_a, zzpartner_b, zzpartner_c) to a single external profile and uses internal profile IDs (partner_d, zzpartner_d) for Partner D and the back-end application; transactions are visible in the Transaction Viewer browser
5.5.3 Web Services bridged to AS2
The Web Services bridging pattern is a common pattern for companies that need
to consume a B2B payload over Web Services but want to pass all inbound B2B
data as a canonical B2B messaging format into their B2B Gateway service.
Typically, this is because of a trading partner’s requirement to exchange data
with external partners using only the Web Services protocol.
The benefit of tying other DataPower services to the B2B Gateway service is that
it provides you with the flexibility to utilize all of the integration functionality
included in the device to connect to a wide variety of trading partners, who
typically demand that you communicate in a manner that is convenient for them.
Essentially, the other services on the B2B appliance can act as a pre- or
post-process to the B2B Gateway service, giving you the extensibility needed to
support the most demanding B2B transaction flows.
This section does not describe the configuration steps needed to implement this
scenario; instead, it only covers the best practices, limitations, and variations of
the pattern needed to guide you when implementing similar patterns in your
environment. If you are interested in learning the configuration steps for this type
of pattern, refer to Chapter 14 in the IBM WebSphere DataPower B2B Appliance
XB60 Revealed, SG24-7745 Redbooks publication, which describes how to
configure a variation of this scenario.
Figure: Web Services bridged to AS2: Partner A sends a flat file (1) over Web Services to Partner B, where it is passed into the B2B Gateway service and delivered as a flat file (5) to the back end; transactions are visible in the Transaction Viewer browser
Best practices
The best practices listed in this section are specific to this pattern and are in
addition to any best practices that are listed in section 5.5.1, “Best practices
common to all patterns” on page 112:
When integrating to a B2B Gateway service from other services on the
appliance, use an EDIINT B2B messaging protocol with the minimal AS
header information required for the flow. This makes it easy for the B2B
Gateway service to process any payload type without requiring it to parse the
payload to find sender and receiver information.
Tip: A processing policy can be used to add minimal AS2 header information
to the payload before passing it to the B2B Gateway service. The benefits of
passing the payloads into the B2B Gateway service are persistence of
off-the-wire files for legal purposes and visibility of the state of the transaction
flow in the B2B Viewer.
Limitations
The limitations that are listed in this section are in addition to the appliance
limitations as described in section 5.3, “B2B appliance known limitations” on
page 107.
This pattern will not work for traditional Web Services request/response
processing where you are simply proxying to a Web Service host. This pattern is
best used when using Web Services or SOAP to transfer files over the Internet
that have an internal persistent destination in your network for the received file.
Pattern variations
This section describes the various patterns that are derivative of the pattern
depicted in the example:
The pattern example in this section depicts an inbound flow. An outbound flow
pattern is similar with the main difference being that the input to the B2B
Gateway service is typically over a protocol that is required for integration with
the systems on the back end (MQ, WebSphere JMS, NFS, HTTP, and so on).
Output of the B2B Gateway service is typically a SOAP wrapped payload that
is the input into the Web Services Gateway and sent to the partner.
This pattern can be accomplished with a Multi-Protocol Gateway service in
place of the Web Service Proxy service when no WSDL is used. This
variation is documented in great detail in Chapter 14 in the book, IBM
WebSphere DataPower B2B Appliance XB60 Revealed, SG24-7745.
This section does not describe the configuration steps needed to implement this
scenario; instead, it only covers the best practices, limitations, and variations of
the pattern needed to guide you when implementing similar patterns in your
environment. If you are interested in learning the configuration steps for this type
of pattern, refer to Chapter 7 in the Multi-Enterprise File Transfer with
WebSphere Connectivity, SG24-7886 Redbooks publication, which describes
how to configure a variation of this scenario.
Figure 5-7 B2B appliance integration with WebSphere MQ File Transfer Edition: Partner A sends files to Partner B’s B2B appliance, which hands them off through MQ to the back-end applications; monitoring is available through the Transaction Viewer (Admin, Partner View, and LOB User browsers), a DB logger (DB2 or Oracle), and MQ Explorer
Data flow
The following list refers to the numbered items in Figure 5-7 on page 120:
1. Partner A sends a file into Partner B’s B2B Gateway service over any
supported protocol. The B2B Gateway uses profile management to identify the
partner and process any messaging envelopes that might exist (security,
compression, acknowledgements, and so on, depending on the standard used).
2. The B2B Gateway routes the file to an MQ queue that is shared with an MQ
FTE agent. 2a: Optionally, a processing policy can be used in the B2B
Gateway to set RFH2 headers and/or trigger the MQ FTE file transfer.
3. The B2B Gateway recognizes the responses from MQ, and if a B2B
messaging protocol (AS1, AS2, AS3) was used, it generates a message
disposition notification and sends it to the trading partner.
4. The Source Agent moves the file to the Target Agent, based either on XML
command file instructions or on the agent being set to poll the shared MQ
queue.
5. The Target Agent moves the file off of the MQ Queue to the file system
destination.
6. The back end application uses the file to complete the flow.
Best practices
The best practices listed in this section are specific to this pattern and are in
addition to any best practices that are listed in section 5.5.1, “Best practices
common to all patterns” on page 112:
If transferring data outside of the protected network, IBM recommends using a
B2B messaging protocol to secure the data. Additional security can also be
realized by using SSL to secure the connection.
The security and partner management of the B2B appliance is not a
substitute for WebSphere MQ File Transfer Edition security. Use the security
of both offerings together to best mitigate risk.
Limitations
The limitations that are listed in this section are in addition to the appliance
limitations, as described in section 5.3, “B2B appliance known limitations” on
page 107.
Although MQ FTE can handle extremely large files when transferring between
agents, the B2B Gateway in the B2B appliance is limited in its ability to handle
such large files.
Pattern variations
This section describes the various patterns that are derivative of the pattern
depicted in the example:
The pattern example in this section depicts an inbound flow. An outbound flow
pattern is similar with the main difference being that the input to the B2B
Gateway service is through a MQ FTE front-side protocol handler, and the
output of the B2B Gateway service is any protocol supported by the B2B
appliance.
This pattern can be accomplished using a B2B Gateway service and a
Multi-Protocol Gateway service together to transfer the file over NFS and
trigger it using a MQ FTE command message. This variation is documented
in great detail in chapter 7 of Multi-Enterprise File Transfer with WebSphere
Connectivity, SG24-7886:
– This pattern does not provide the capability to correlate files between MQ
FTE and the B2B appliance.
– This pattern does not allow B2B appliance users to see the state of the file
transfer end-to-end.
This pattern also works well when connecting to a value-added network (VAN)
from the B2B Gateway. In this configuration, you associate all of the VAN
subscriber trading partner IDs with a single external trading partner profile.
Each trading partner has their own Collaboration Protocol Profile (CPP) object
that describes their abilities in an XML format. The ebXML Message Service
Specification (ebMS) describes a communication-neutral Message Service
Handler (MSH) that must be implemented to exchange business documents.
ebMS 2.0 is built as an extension on top of the SOAP with Attachments
specification and is the most widely used version of the specification.
The B2B appliance provides a CPA import utility that maps the public-side
definitions of the internal party in the CPA file to B2B Gateway structures, saves
the certificates defined in the CPA file to the file system, and automatically
configures the Gateway with CPA entries, two partner profiles, front-side
protocol handlers, and crypto objects. The import process attempts to capture
as much of the semantics contained in the CPA file as possible in the
DataPower configuration. After the import, users must perform essential
configurations to make the Gateway service operational (for example, attach a
private key to the newly created Crypto Key object, because private key
material cannot be included in the CPA file) and define the internal-side
interfaces, such as a front-side protocol handler for accepting documents
coming from an internal application in an outbound gateway, or the internal
partner's destination for an inbound gateway.
Figure 5-8 ebMS exchange through the WebSphere DataPower B2B Appliance, with transactions visible in the Transaction Viewer browser
Data flow
The following list refers to the numbered items in Figure 5-8:
1. An external partner sends an ebMS message into Partner B’s B2B Gateway
service over HTTP or HTTPS.
2. The B2B Gateway uses profile management in combination with CPA entries
associated with the B2B Gateway service to identify the ebXML collaboration
and process the ebMS message.
3. The B2B Gateway routes the XML payload to the back-end applications.
Best practices
The best practices listed in this section are specific to this pattern and are in
addition to any best practices that are listed in section 5.5.1, “Best practices
common to all patterns” on page 112.
Although the CPA Import Wizard can create the B2B Gateway service for you,
IBM recommends that you import your CPAs into an existing B2B Gateway
service that already has the required Front-Side Handlers configured to support
your back side connections.
Limitations
The limitations that are listed in this section are in addition to the appliance
limitations, as described in section 5.3, “B2B appliance known limitations” on
page 107.
IBM implemented all of the functionality required to certify interoperability with
ebMS; however, there are many optional elements and mechanisms in the
ebMS v2.0 specification that are not needed for interoperability. These items are:
No support for SMTP as a transport protocol.
No support for RSAData/DSAData key formats.
No support for the MessageOrder and Multi-hop modules.
The following items are not supported when using collaboration partner
agreements:
No support for multiple <ChannelId> elements
No support for multiple <Endpoint> elements
No support for nested delivery channels
No support for the packaging element
No support for StartDate / EndDate
No support for CPA document-level verification
No support for the <ConversationConstraints> element
Limited support for the <AccessAuthentication> element
Limited support for the <Status> element
Pattern variations
This section describes the various patterns that are derivative of the pattern
depicted in the example:
The pattern example in this section depicts an inbound flow. An outbound flow
pattern is similar with the main difference being that the input to the B2B
Gateway service is through any of the B2B appliance supported front-side
protocol handlers, and the output of the B2B Gateway service is an ebMS
packaged file based on information in the collaboration partner agreement
and the associated collaboration partner profiles.
Another popular pattern is to simply use ebMS to securely exchange files
between trading partners without the use of a CPA. In this scenario, the
standard B2B profiles are used, and the Action and Service are set in the
destination attributes or passed from the back side in MQ or JMS headers.
The B2B Appliance XB60 supports this pattern well in that it uses DataPower's
implementation of WebSphere Transformation Extender to execute maps
(DataPower Mode Maps) that are created in the WTX Design Studio and
compiled to run on DataPower. The maps transform the HL7 EDI format (v2) into
a canonical HL7 XML format (v3) before routing the data to trading partners or
the back side healthcare applications.
This section does not describe the configuration steps needed to implement this
scenario; instead, it only covers the best practices, limitations, and variations of
the pattern needed to guide you when implementing similar patterns in your
environment.
Figure 5-9 on page 126 shows the HL7 clinical data exchange example.
Figure 5-9 HL7 clinical data exchange: a hospital sends HL7 v3.x over any transport through the Internet to the regional center’s B2B Gateway, which uses external and internal profiles to transform the data and deliver HL7 v2.x over any transport to the back-end healthcare applications; transactions are visible in the Transaction Viewer
Data flow
The following list refers to the numbered items in Figure 5-9:
1. Partner A sends an HL7 v3.0 XML file wrapped in an AS2 envelope into
Partner B’s B2B Gateway service over HTTP or HTTPS.
2. The B2B Gateway service uses profile management to identify the sender
and receiver partner profiles and routes the HL7 XML file into a processing
policy in the internal partner profile.
3. The B2B Gateway service validates the HL7 XML payload against its schema
and transforms the file into an HL7 EDI file using the processing policy.
4. The B2B Gateway service transfers the HL7 EDI file to the back end
healthcare applications using any B2B appliance-supported protocol.
5. After the HL7 payload is successfully transferred to the back end, the B2B
Gateway Service generates an AS2 message disposition notification and
sends it to Partner A.
Best practices
The best practices listed in this section are specific to this pattern and are in
addition to any best practices that are listed in section 5.5.1, “Best practices
common to all patterns” on page 112.
If you expect to support a large number of HL7 transaction types and need to
transform the HL7 documents, IBM recommends using the WebSphere
Transformation Extender HL7 Industry Pack as a starting point for map
development.
Tip: If your maps will be used in WebSphere Message Broker with the WTX
node, you can compile them as WTX native maps; otherwise, if you intend to
use the HL7 maps on the B2B appliance, you can compile the same map as a
DataPower Mode map.
Limitations
The limitations that are listed in this section are in addition to the appliance
limitations as described in section 5.3, “B2B appliance known limitations” on
page 107:
HL7 v2.x data does not adhere to the EDI X12 specification when it comes to
segments, and thus it has no ISA segment, but rather an MSH segment.
Because the B2B Gateway does not natively parse the MSH segment, and
because the elements used to identify sender and receiver are optional, HL7
data must be handled as binary data when passing it into a B2B Gateway for
outbound processing.
Tip: Use the binary routing style sheet to set the sender and receiver IDs of
HL7 EDI payloads for outbound data flows.
The B2B appliance does not support the HL7 MLLP protocol for exchanging
files.
The B2B appliance does not support HL7 sequencing; however, when the
appliance is used with WebSphere MQ, sequencing can be accomplished.
Pattern variations
This section describes the various patterns that are derivative of the pattern
depicted in Figure 5-9 on page 126:
The pattern example in this section depicts an inbound flow. An outbound flow
pattern is similar with the main difference being that the input to the B2B
Gateway service is through any of the B2B appliance supported front-side
protocol handlers, and the output of the B2B Gateway service is an HL7 file
wrapped in an AS2 message envelope.
Abbreviations and acronyms
A2A application-to-application
AAA Authentication, Authorization, Auditing
AES Advanced Encryption Standard
ANSI American National Standards Institute
APIs application programming interfaces
AS Applicability Statements
AU Authentication
B2B business-to-business
FIPS Federal Information Processing Standard
FTP File Transfer Protocol
HSM Hardware Security Module
HTML Hypertext Markup Language
HTTP Hypertext Transfer Protocol
HTTPS HTTP over SSL
IBM International Business Machines Corporation
Related publications
The publications listed in this section are considered particularly suitable for a
more detailed discussion of the topics covered in this book.
IBM Redbooks
The following IBM Redbooks publications provide additional information about
the topic in this document. Note that some publications referenced in this list
might be available in softcopy only:
DataPower SOA Appliance Administration, Deployment, and Best Practices, SG24-7901
DataPower Architecture Design Patterns: Integrating and Securing Services Across Domains, SG24-7620
IBM WebSphere DataPower B2B Appliance XB60 Revealed, SG24-7745
WebSphere DataPower SOA Appliance: The XML Management Interface, REDP-4446-00
IBM WebSphere DataPower SOA Appliances Part I: Overview and Getting Started, REDP-4327-00
IBM WebSphere DataPower SOA Appliances Part II: Authentication and Authorization, REDP-4364-00
IBM WebSphere DataPower SOA Appliances Part III: XML Security Guide, REDP-4365-00
IBM WebSphere DataPower SOA Appliances Part IV: Management and Governance, REDP-4366-00
You can search for, view, or download Redbooks, Redpapers, Technotes, draft
publications and Additional materials, as well as order hardcopy Redbooks
publications, at this Web site:
ibm.com/redbooks
Online resources
These Web sites are also relevant as further information sources:
Monitoring WebSphere DataPower SOA Appliances
http://www.ibm.com/developerworks/websphere/library/techarticles/1003_rasmussen/1003_rasmussen.html
Managing multiple DataPower Appliances with the WebSphere Appliance Management Toolkit, Part 1: Introduction to the WebSphere Appliance Management Toolkit
http://www.ibm.com/developerworks/websphere/library/techarticles/1011_burke/1011_burke.html
Managing multiple DataPower Appliances with the WebSphere Appliance Management Toolkit, Part 2: Scripting with the WebSphere Appliance Management Toolkit
http://www.ibm.com/developerworks/websphere/library/techarticles/1102_burke/1102_burke.html
Extending WebSphere DataPower with centralized appliance management
http://www.ibm.com/developerworks/websphere/techjournal/0809_roytman/0809_roytman.html
Managing WebSphere DataPower SOA Appliance configurations for high availability, consistency, and control, Part 1
http://www.ibm.com/developerworks/websphere/library/techarticles/0801_rasmussen/0801_rasmussen.html
Managing WebSphere DataPower SOA Appliance configurations for high availability, consistency, and control, Part 2: Application promotion strategies
http://www.ibm.com/developerworks/websphere/library/techarticles/0904_rasmussen/0904_rasmussen.html
WebSphere DataPower SOA Appliance performance tuning
http://www.ibm.com/developerworks/webservices/library/ws-dpperformance/index.html
Managing WebSphere DataPower SOA Appliances via the WebSphere Application Server V7 Administrative Console
http://www.ibm.com/developerworks/websphere/library/techarticles/1003_das/1003_das.html
WebSphere DataPower SOA Appliances developerWorks library
http://www.ibm.com/developerworks/websphere/zones/businessintegration/dp.html
Capacity Planning for WebSphere DataPower B2B Appliance XB60
http://www-01.ibm.com/support/docview.wss?uid=swg21329746
WebSphere DataPower V3.8.2 Information Center
http://publib.boulder.ibm.com/infocenter/wsdatap/v3r8m2/index.jsp
136 DataPower SOA Appliance Service Planning, Implementation, and Best Practices
high CPU 96

I
Identification Credentials 59–60
implementation plan 2
import 88
Import Crypto Object 64–65
import keys 71
Include Internal State 86
infinite loop 25
internal integration 104

K
Keep Alive Interval 25
Key Object 63–64
key sharing domain 68
key storage 65
key wrapping key 68

L
latency log 78
latency messages 78
LDAP 8, 10, 40, 42, 89, 91–92
   Group Membership 40
   version 42
level 2 mode 66–68
level 3 mode 66–68
lifecycle 12
Limit Port Range for Passive Connections 20
load balancer 8
log file 75
logging 8, 11
loglevel 75
Long Retry Interval 24
LTPA
   token 41

M
management console 9
management interface 7
Manager User Agent 21
managing certificates 70
managing keys 70
Map Credentials 89, 92
Map Resource 89
Maximum Message Size 26
memory allocation 86
memory growth 96
message filtering 10
message routing 11
Message Service Handler 122
mgmt0 7
minimizing memory usage 44
monitoring 11
Monitoring and Troubleshooting 75
monitoring console 8
MQ Distribution Lists 29
MQ errors 28
MQ File Transfer Edition 119
MQ header 21–22
   recommendations 23
MQ Queue Manager 24–27
MQ return codes 27
Multi-Protocol Gateway 18, 21, 34, 36
   streaming 54
   XML Manager 50
Multistep action 45

N
network interface 6
network issues 94
   debugging 93
network protocol analyzer 94

O
On-Error action 48
Open Database Connectivity 45
optimize policy execution 44

P
Packet Capture 80
performance testing 108
   B2B 108
Pin Entry Device 67–68
   keys 67
PIPE 53–54
PIPE context 44
planning 2
port range 20
private key 58–59, 66–68, 70–71
   export 67
Probe function 80–82
Probe Settings 81
Probe Triggers 81
processing policy 18, 39
Processing Rule 19, 54
Propagate URI 18
Proxy Processing Policy 39
proxy profile 61
public key 58–59, 66
PublishToGW 31
Purge Now 111

Q
Queue Manager 24

R
RADIUS 8
Rate limits 37
Redbooks Web site 131
   Contact us xii
removing crypto objects 71
Report History 86
request headers 22
Request Type 19
response code 27
response headers 22
response rule 27
Response Type 21
REST interface 34
REST services 34
RESTful 19, 34
restore 9
Retry Attempts 24
Retry Interval 24
return codes 27
revision control system 12
Role-Based Management 91–92
   debugging 90
routing table 93
RSA private key 66
RSA public key 66

S
SAML assertion 41
SAML Attribute Assertion 39
SAML Authentication Assertion 39
schema validation 10
Schema Validation Method 19
Secure Sockets Layer 58
service chaining
   best practice 49
service implementation 3
Service Level Agreement 12
Service Level Management 107
service-level monitoring 30, 111
session keys 58
session management 38
Sign Action 41
single-phase COMMIT 27
SLM Policy 36
SLM Update Interval 36
SNMP 8, 12
SOAP 81
SOAP Validation 19
solution deployment package 14
SOMA
   best practice 50
source code control system 12
SQL 45
SQL injection 37
SQL Injection attack 10
SSL 58–59, 61
   negotiation 59
   proxy profile 61
   scenarios 58
Standby Control 7
standby group 7
status providers 83
stored procedure 46
streaming 51–55
   advantages 52
   attachments 54
   constraints 52
symmetric key 66
Sync 27
system log 76

T
testbed 14
threat protection 37
Time to Acknowledge 112
Time to Live 32
Tivoli Access Manager 10
Total Connections Limit 24–25
transform message content 11
troubleshooting 87
trusted Certificate Authority 58
U
UDDI 9, 29, 32
   repository 30
   subscription 30
unexpected restart 95
Units of Work 25–26, 28
upgrading 88
user accounts 8, 13
User Agent 20–21

V
Validation Credentials 60–61
View Status 83
virus checking 10

W
Web Application Firewall 36–38
   best practice 38
Web Service Proxy 9, 13, 18, 29, 32–34
   AAA 39
   Front-Side Handler 35
   streaming 54
   WSDL 30, 33, 36
   XML Manager 50
WebGUI 6
WebSphere Application Server 41
WebSphere MQ 18
WebSphere Transformation Extender 106, 125
WS-Addressing 30
WSDL 9, 18, 29–33, 35
   Cache Policy 32
   management 30
   replace 33
   retrieve 33
   update 33
WS-Policy 30
WS-ReliableMessaging 30
WSRR 9, 29–32
   Concept 30
   considerations 31
   governance 31
   Saved Search 32
   subscription 30–32

X
X.509 certificate 66
xform 53
xformbin 53
XML attachments
   streaming 54
XML Bytes Scanned 26
XML File Capture 82–83
XML Firewall 34
   streaming 54
XML Management Interface 8, 12, 50
XML Manager 20, 26, 50–51
   best practice 50
XML threat 37
XSL extension element 45
XSL extension function 45