server_customization_programmers_guide
Teamcenter 12.0
Server Customization
PLM00074 • 12.0
• Teamcenter Services
A server-side customization method that uses standard service-oriented architecture (SOA)
to provide the framework.
Application interfaces
There are two application interfaces you can use to integrate external applications with Teamcenter:
• Teamcenter Services
Teamcenter Services allow different applications to communicate with one another using
services. Teamcenter Services use service-oriented architecture (SOA), which is appropriate
for integrating separate applications with a loose linkage and network-friendly APIs that do not
need to be modified with every Teamcenter version. This is the preferred method of integrating
third-party and client applications. Future releases will extend this interface and provide tools
so you can easily extend it.
Note
The Services Reference is available only in the Teamcenter HTML Help Collection.
It is not available in the PDF collection.
Note
The Integration Toolkit Function Reference is not available in PDF format. It is available
only in the Teamcenter HTML Help Collection.
Siemens PLM Software intends to notify customers at the time of the release prior to the one that contains a published interface behavior change.
Siemens PLM Software does not support code extensions that use unpublished and undocumented
APIs or extension points. All APIs and other extension points are unpublished unless documented
in the official set of technical manuals and help files issued by Siemens PLM Software Technical
Communications.
The Teamcenter license agreements prohibit reverse engineering, including: decompiling Teamcenter
object code or bytecode to derive any form of the original source code; the inspection of header files;
and the examination of configuration files, database tables, or other artifacts of implementation.
Siemens PLM Software does not support code extensions made using source code created from
such reverse engineering.
If you have a comment or would like to request additional extensibility, contact the Siemens PLM
Software customer support representatives at GTAC for further assistance.
Bold Bold text represents words and symbols you must type exactly as shown.
In the preceding example, you type harvester_jt.pl exactly as shown.
Italic Italic text represents values that you supply.
In the preceding example, you supply values for bookmark-file-name and
directory-name.
text-text A hyphen separates two words that describe a single value.
In the preceding example, bookmark-file-name is a single value.
[] Brackets represent optional elements.
... An ellipsis indicates that you can repeat the preceding element.
Before performing customizations, it is important to understand the Teamcenter data model structure.
The Teamcenter data model is the collection of all data modeling objects that represent real-world
objects.
The Teamcenter persistent object manager (POM) defines the architecture (schema) using classes
and business objects:
• Classes
The persistent representations of the schema. Classes can be seen in the Classes view of the
Advanced perspective in the Business Modeler IDE.
• Business objects
The logical representations of the classes, also known as types. Business objects can be seen in
the Business Objects view of the Advanced perspective in the Business Modeler IDE.
Each class is mapped to a primary business object whose name is the same as the class name.
The core of the hierarchy, from most general to most specific, is POM_object, then POM_application_object, then WorkspaceObject, under which sit classes such as Dataset, Folder, Form, Item, and ItemRevision.
Term Definition
Class A class is the definition of an object implemented in the Teamcenter
data model. A class has a set of attributes, some of which are
inherited from parent classes and some of which are defined
directly for the class.
Business object A business object is the specialization of a Teamcenter class based
on properties and behavior.
Primary business object A primary business object corresponds to each POM class (that is,
the primary business object name is the same as the POM class
name). Teamcenter automatically creates the primary business
object when a new POM class is instantiated.
Secondary business object Secondary business objects are all business objects that belong to
the same POM class. Secondary business objects inherit all the
properties and behavior of the primary business object.
Attribute An attribute is a persistent piece of information that characterizes
all objects in the same class.
Property A property is a piece of information that characterizes all instances
of the same business object. At a minimum, all sub-business
objects have properties that correspond to the attributes for the
associated class. Properties can also represent other types of
information such as relationships and run-time data.
A primary business object corresponds to each POM class, and the primary business object name
is the same as the POM class name. The primary business object uses its corresponding POM
class to store its data. (Teamcenter automatically creates the primary business object when a new
POM class is instantiated.)
However, a secondary business object uses its parent POM class as its storage class (see the
following figure). Prior to Teamcenter 8, each POM class mapped to multiple business objects, and all
business objects were secondary. In this scenario:
• Custom properties were added to a master form.
• ITK extensions in prevalidation, postvalidation, and validation actions and user exits were the
only way to extend functionality.
• C++ classes were available only to Siemens PLM Software developers and were not exposed.
C++ classes mapped to one POM schema class and implemented the behavior for the primary
business objects. Business logic for custom business objects (for example, MyItem) was added
into the parent C++ class, or was written into separate ITK functions outside the C++ class
hierarchy.
Starting in Teamcenter 8:
• There are still ITK extensions used for prevalidation, postvalidation, and validation actions.
• C++ operations are available to everyone. The C++ class hierarchy is one-to-one with the
business object hierarchy.
• You can create new operations and services. You can put the business logic for custom business
objects in the C++ class (for example, MyItem) and use the inheritance model to add onto the
parent behavior or override custom behavior.
You can continue to use the master form to support legacy customizations or to improve loading
performance when using a large number of properties.
Note
Secondary business objects can be converted to primary business objects using the
Business Modeler IDE by right-clicking the secondary business object in the Business
Objects view of the Advanced perspective and choosing Convert to primary.
Introduction to properties
A property is a piece of information that characterizes all instances of the same business object. To
see all the properties on a business object, open the business object from the Business Objects
view in the Advanced perspective in the Business Modeler IDE and click the Properties tab.
Properties tab
Properties contain information such as name, number, description, and so on. A business object
derives its persistent properties from the attributes on its persistent storage class. Business objects
can also have additional properties such as run-time properties, compound properties, and relation
properties.
You can add the following kinds of properties:
• Persistent
Persistent properties are properties of business objects that remain constant on the object.
• Run-time
Run-time properties are derived each time the property is displayed. Their data is derived from
one or more pieces of system information (for example, date or time) that are not stored in
the Teamcenter database.
• Compound
Compound properties are properties on business objects that can be displayed as properties
of an object (the display object) although they are defined and reside on a different object (the
source object).
• Relation
Relation properties are properties that define the relationship between objects. For example,
a dataset can be attached to an item revision with a specification, requirement, or reference
relation, among many others.
After you add a property, you can change characteristics of the property by using property constants.
For example, by default, when you add a persistent property, it is read-only. To make it writable, you
must change the Modifiable constant on the property from Read to Write.
Libraries
A library is a collection of files that includes programs, helper code, and data. Keep the following
in mind when working with libraries:
• Create a library in the Advanced perspective of the Business Modeler IDE by opening the
Code Generation→Libraries folders in the Extensions view, right-clicking the Libraries folder,
and choosing New Library.
• After you write your C++ code, choose Project→Build Project in the Business Modeler IDE
to build the libraries containing server code.
• If you have codeful customizations, you cannot deploy from the Business Modeler IDE for testing
because codeful customizations have libraries. Instead, you must package your template so
that libraries are placed into a template-name_rtserver.zip file, and then install the packaged
template using Teamcenter Environment Manager (TEM).
o Place libraries for user exits, server exits, and custom exits (supplier custom hooks) in the
directory specified by the TC_USER_LIB environment variable (preferred), the TC_BIN
environment variable (for Windows), or the TC_LIBRARY environment variable (for UNIX).
o Place libraries for legacy operations in any directory as long as it is listed in the
BMF_CUSTOM_IMPLEMENTOR_PATH preference.
o Place libraries for the C++ operations in the directory specified by the TC_USER_LIB
environment variable (preferred) or the TC_LIBRARY environment variable (Windows and
UNIX).
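For example, on Windows you might point the preferred variable at the folder that holds your custom libraries before starting the server (the paths shown are illustrative only):

set TC_USER_LIB=D:\tc_custom\lib

On UNIX, the equivalent might be:

export TC_USER_LIB=/opt/tc_custom/lib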
Note
For ITK, as of Teamcenter 8.1, the mem.h header file moved from the libitk library to
the libbase_utils library. The mem.h header file contains low-level methods related to
memory allocation and release. These methods were exported from the libitk library and
had circular dependencies with some application libraries that affected delay loading of
libraries. To address this issue, a new low-level libbase_utils library is introduced. The
functions in the mem.h file are now exported from this new library.
The old header mem.h file still exists and includes the new base_utils/Mem.h header.
Existing customization code should compile without any changes. However, as the
functions are now exported from the libbase_utils library, the existing link dependency on
the libitk library does not work.
For any custom code using functions from the mem.h file, link dependency on the
libbase_utils library must be added. For code where usage of the libitk library is limited
to only functions from the mem.h file, the existing dependency on the libitk library can
be removed in its entirety.
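As an illustration, the following sketch shows ITK code whose only use of the libitk library was the memory routines from mem.h; after the move, such code links against the libbase_utils library instead. The helper function is made up for the example:

#include <mem.h>      /* still compiles; includes base_utils/Mem.h */
#include <string.h>

/* Duplicates a string using the Teamcenter memory manager so that the
   caller can later release it with MEM_free. */
static char* duplicate_string( const char* src )
{
    char* copy = (char*) MEM_alloc( (int) strlen( src ) + 1 );
    strcpy( copy, src );
    return copy;
}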
Caution
Use samples at your own risk. Samples are subject to removal or change between
releases.
4. In the Configuration panel, select the configuration from which the corporate server was
installed. Click Next.
5. In the Feature Maintenance panel, under the Teamcenter section, select Add/Remove
Features. Click Next.
7. Click Next.
Depending on the control_string parameter, the function may expect a sequence of additional
arguments, each containing one value to be inserted for each %-tag specified in the
control_string parameter, if any. There should be the same number of these arguments as the
number of %-tags that expect a value.
Parameters Description
control_string Message to be written to syslog file. It can contain
embedded format tags that are substituted by the values
specified in subsequent arguments and formatted as
requested. The number of arguments following the
control_string parameter should be at least as great
as the number of format tags.
• TC_report_serious_error API
Use this function to write error messages to Teamcenter syslog file, for example:
extern TC_API void TC_report_serious_error (
const char* file_name, /**< (I) */
int line_number, /**< (I) */
const char* control_string, /**< (I) */
... /**< (I) */
);
Parameters Description
file_name The name of the file in which the error occurred.
line_number The line number at which the error occurred.
control_string Message to be written to the syslog file. It can contain
embedded format tags that are substituted by the values
specified in subsequent arguments and formatted as
requested. The number of arguments following the
control_string parameter should be at least as great
as the number of format tags.
Depending on the control_string parameter, the function may expect a sequence of additional
arguments, each containing one value to be inserted for each %-tag specified in the
control_string parameter, if any. There should be the same number of these arguments as the
number of %-tags that expect a value.
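For example, a hypothetical call that reports a failed item lookup might look like the following; the message text and the item_id and ifail variables are illustrative only:

TC_report_serious_error( __FILE__, __LINE__,
    "Could not find item %s (ifail = %d)\n", item_id, ifail );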
• Extensions customization
Allows you to write a custom function or method for Teamcenter in C or C++ and attach the rules
to predefined hook points in Teamcenter (preconditions, preactions, and postactions). Also,
existing operations can be extended to these hook points.
The following figure shows the client-server interaction in Teamcenter. The rich client typically
invokes the Teamcenter Services interface. (All access and modification of data in the database is
done by the business object interface.)
Specify coding information when you install or configure the Business Modeler IDE
• When you install or configure the Business Modeler IDE, set up the JDK location and a compiler
application.
Example
The bmide.bat file starts the Business Modeler IDE. Set the location of the JDK in
the install-location\bmide\client\bmide.bat file, such as:
set JRE_HOME=C:\Program Files (x86)\Java\jre7
set JAVA_HOME=C:\Program Files (x86)\Java\jdk1.7.0
set JDK_HOME=C:\Program Files (x86)\Java\jdk1.7.0
On Windows when Microsoft Visual Studio is used for compiling, a call to the
vcvarsall.bat file in the bmide.bat file is required. The call should be before the
PATH statement, such as:
call "C:\apps\MVS12\VC\vcvarsall.bat"
set PATH=%JDK_HOME%\bin;%JRE_HOME%\bin;TC_ROOT\lib;%FMS_HOME%\bin;%FMS_HOME%\lib;%PATH%;
2. In the New Project dialog box, choose Business Modeler IDE→New Business Modeler IDE
Template Project.
3. Enter information in the pages of the New Business Modeler IDE Template Project dialog box.
On the Code Generation Information page, set up the location for generated source files.
On the Build Configuration Information page, set up the location of the Teamcenter installation
and the compiler information.
• To set up code generation for an existing project, modify the project's properties. Right-click the
project, choose Properties, and select Teamcenter→Code Generation.
Item
In Teamcenter, items are the fundamental objects used to manage information. Items provide an
identification for physical or conceptual entities about which an organization maintains information
and audit/change control. Typical uses of items are parts, documents, and equipment. In an
engineering/product development environment, parts are identified as items. An item has the
following basic information:
• ID
A unique identifier for an item.
• Name
A user-defined name to describe the item.
• Item type
A classification of items that allows different kinds of items to be treated separately. For example,
an item can be used to manage parts and documents. If you implement two item types, you can
enforce rules based on the type of items that are being manipulated.
Note
The system, as delivered, provides a generic business object called Item along with
several child business objects which all act as different item types. If you need
additional kinds of items, the system administrator can create children of the Item
business object at your site.
• Description
A text field of up to 240 characters used to describe the item.
Item revision
Item revisions are used to reflect modifications to an item. The original item revision is retained
for historical purposes.
Note
Each customer site defines its own procedures to determine how and when a new item
revision should be created.
• Revision
A unique identifier for an item revision.
• Name
A user-defined name to describe the item revision.
Relations
Organizations produce several documents that describe, are used by, or in some way relate to an
item. For the data to be understood, it has to be associated in a meaningful way to the item or item
revision. Teamcenter associates data to items and item revisions using relations.
Relations describe how the data is associated to an item and item revision. A dataset containing
the CAD model, which defines an item revision, is a specification of the item revision. Similarly, a
standards document describing the fit, form, and function of a particular drawing is a requirement
for the item.
An item or item revision can be related to several other objects, including datasets, forms, folders,
and other items and item revisions. Each object associated to the item represents various aspects of
that item. A dataset containing stress test information could be added by the engineer, while a form
containing size and weight information could be added by the specification engineer.
Teamcenter provides a basic set of relations. The relations between the object and the item and item
revision determine the rules that are enforced.
Note
Additional relations and their associated rules are defined by your system administrator.
• IMAN_master_form
The master form relation associates a form to an item or item revision. This form is a collection of
attributes that describe the item or item revision. Rules for this relation are as follows:
o An item or item revision must have write access to add or remove a master form relation
to a form.
o Only a form can have a master form relation to an item or item revision.
o An item can have only one master form. An item revision can have only one master form.
o A form can have a master form relation to only one item or item revision.
• IMAN_requirement
The requirement relation associates data to an item or item revision, such as standards, which
must be adhered to in a drawing, or technical requirements for a part. Rules for this relation
are as follows:
o An item or item revision must have write access to add or remove a requirement relation to
an object.
o Only version 0 (zero) of a dataset can have a requirement relation to an item or item revision.
o A folder, envelope, BOM view, or BOM view revision cannot have a requirement relation to
an item or item revision.
o An item revision can have a requirement relation to another item or item revision.
o An item revision cannot have a requirement relation to another item revision if they are
both revisions of the same item.
• IMAN_manifestation
The manifestation relation associates other related data to an item or item revision. This data
is not required to define the item or item revision, but it may be necessary for information, such
as analysis of the competing ideas from which
a part was originally conceived. The manifestation relation also associates data to the item
revision that contains data derived from the specification data (such as tool paths). Rules for
this relation are as follows:
o An item or item revision does not need to have write access to add or remove a manifestation
relation to an object. A manifestation relation can be added or removed from a released
item or item revision.
o Only version 0 (zero) of a dataset can have a manifestation relation to an item or item revision.
o A folder, envelope, BOM view, or BOM view revision cannot have a manifestation relation to
an item or item revision.
o An item revision can have a manifestation relation to another item or item revision.
o An item revision cannot have a manifestation relation to another item revision if they are
both revisions of the same item.
• IMAN_specification
The specification relation associates data which defines the item revision. Examples are CAD
models for parts or word processor files for documents. Rules for this relation are as follows:
o An item revision must have write access to add or remove a specification relation to an object.
o Only version 0 (zero) of a dataset can have a specification relation to an item or item revision.
o A folder, envelope, BOM view, or BOM view revision cannot have a specification relation to
an item or item revision.
o An item revision can have a specification relation to another item or item revision.
o An item revision cannot have a specification relation to another item revision if they are
both revisions of the same item.
• IMAN_reference
The reference relation associates any data to an item or item revision. Rules for this relation
are as follows:
o An item or item revision does not need to have write access to add or remove a reference
relation to an object. A reference relation can be added or removed from a released item or
item revision.
o Only version 0 (zero) of a dataset can have a reference relation to an item or item revision.
• IMAN_revision
The revision relation associates item revisions to the appropriate item. Rules for this relation
are as follows:
o An item must have write access to add or remove an item revision from the item.
o The revision relation in an item is established by creating a new revision of an item using the
provided functionality. An item revision cannot be cut or pasted into an item.
• BOMView
The BOMView relation associates a product structure to an item. Rules for this relation are
as follows:
o BOMView relations can represent different versions of a product structure. For example, one
can be the design structure and another can be the manufacturing structure.
• BOMView Revision
The BOMView Revision relation associates a product structure revision to an item revision.
Rules for this relation are as follows:
o BOMView Revision relations can represent different versions of a product structure. For
example, one can be the design structure and another can be the manufacturing structure.
Teamcenter is designed to organize the data created by other applications in a logical manner.
When an item is first created, an item ID is reserved. It is only by adding data to an item, however,
that the item becomes meaningful.
Datasets are used to store information created by other applications. These datasets can be
associated to an item and item revision.
In Teamcenter, both item revisions and dataset versions are intended to represent modifications to
information. Item revisions represent the item at significant milestones. Dataset versions represent
day-to-day changes. An item has revisions; a dataset has versions.
There are two ways to modify an item and item revision. One method is to simply add or remove
relations from the item and item revision by using cut and paste. Another method is to modify the
contents of an object that is currently related to the item and item revision.
Business objects are used to specialize the behavior of Teamcenter objects. Datasets implement the
most extensive use of business objects. The Dataset business object is used to categorize datasets.
Examples of dataset business objects are a document, an NX part, and a spreadsheet. When
a document Dataset business object is opened, the word processor application associated to this
Dataset business object is opened. When an NX part Dataset business object is opened, the NX
application associated to this Dataset business object is opened. For each Dataset business object,
the available operations (for example print and open) are defined, as well as the specific behavior of
each operation.
Item business objects extend the ability to specialize the behavior of items. You may need to
distinguish between a part used in production and equipment used to produce that part. Both may
need to be an item, yet the information and business rules surrounding these items are different.
Dataset model
A dataset contains a series of dataset objects, which reference the ImanFile objects that hold the
data, and a RevisionAnchor object, which holds the list of dataset versions and the dataset ID, if
one exists. A dataset may also reference Form objects. The following figure from the UML (Unified
Modeling Language) editor in the Business Modeler IDE shows the RevisionAnchor object that
creates a new logical dataset.
Dataset model
Dataset IDs are optional, but they must be unique. The unique ID enforcement in datasets is
performed at the application level. Each time a dataset is created using the New→Dataset menu
command, the Save As or Import commands are used, or the dataset is edited using the Properties
command, the validation function is called to ensure that the dataset ID (id) and revision (rev)
combination is unique within the specified dataset business object.
Also, for each dataset business object, if the dataset ID is created using the Assign button, a
separate counter generates the next unique ID that is available for a standalone dataset. This counter
ensures id and rev are unique within the same dataset business object.
When you create a dataset, two revisions are created: revision 0 and revision 1. Revision 0 always
points to the latest revision, so it initially points to revision 1. If you create revision 2, then revision 0
points to that, and so on. In your programming, if you want to point to the latest revision, point to
revision 0.
ITK functions use this convention with revisions as follows:
• AE_ask_dataset
Given any dataset, returns revision 0.
• AE_ask_dataset_first_rev
Given any dataset, returns the oldest revision.
• AE_ask_dataset_latest_rev
Given any dataset, returns the most recent revision (not the revision 0 copy).
• AE_ask_dataset_next_rev
Given any dataset, returns the next revision in the sequence.
• AE_ask_dataset_prev_rev
Given any dataset, returns the previous revision in the sequence.
• AOM_save (dataset)
Creates a new, later version of the dataset and saves a copy to revision 0.
• AOM_delete
For the given dataset, deletes all revisions and the associated revision anchor.
All revisions of a dataset have the same owning user and group and the same protection; updating
any of these on any one revision updates all other revisions.
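A minimal sketch of this convention follows. The parameter order of the AE call (input dataset tag, output revision tag) is an assumption to verify against the Integration Toolkit Function Reference, and error handling is omitted:

tag_t latest_rev = NULLTAG;

/* Returns the most recent revision (not the revision 0 copy). */
AE_ask_dataset_latest_rev( dataset_tag, &latest_rev );

/* Creates a new, later version of the dataset and refreshes revision 0. */
AOM_save( latest_rev );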
Note
The Integration Toolkit Function Reference is not available in PDF format. It is available
only in the Teamcenter HTML Help Collection.
Form model
Forms provide the ability to store customer defined attributes. Teamcenter provides standard forms.
Most of the forms used at a customer site, however, are modified to capture the information unique
to its business.
Forms are stored as POM objects. Attributes in the forms can be set and retrieved using Form ITK
functions and the POM query ITK functions. When a form is stored, the database schema must be
updated to recognize the new POM object. The following figure shows a diagram of the form model
using the UML (Unified Modeling Language) editor in the Business Modeler IDE.
Form model
to be. A CAD system may have a file in which it stores geometric data with a .gmd extension, finite
element modeling data in a file with a .fem extension, and drawing data in a file with a .drw extension.
For such a CAD system, you could find references named geometry, fem, and drawing, or
gmd/fem/drw as the shell chooses. If you use the extensions, users do not have to change the
reference names during import, since by default the import assumes the reference names correspond
to the file extensions.
The model shows that the named reference can refer to an ImanFile object or other descendant
class of the POM object. This means a shell can be implemented that does not use the ImanFile
object to store file data, but instead uses its own file object to store such information. You should
do this rarely, because you lose the integrity that the ImanFile object provides, although the
flexibility may be necessary in some cases.
The RevisionAnchor object is provided to support versioning of datasets. Whenever a dataset is
created, a RevisionAnchor object is also created. This object maintains a list of working revisions
of the dataset. However, it is up to the shell/integrated tool to use this facility. To use it, you must
make revision copies and then make changes to these copies.
Class hierarchy
The information in this section describes the hierarchy of classes (taxonomy). Understanding this
hierarchy, or returning to it, helps with understanding the rest of the Teamcenter model description,
which shows relationships and gives a more detailed description of individual classes.
Class objects have a meaningful hierarchy that relates to the Teamcenter layered software
architecture previously discussed. It is presented in an indented outline form.
The following legend defines the mnemonic descriptions shown in parentheses in the figure.
Mnemonic Class
AE Application Encapsulation
AOM Application Object Module
BOM Bill Of Materials
CR Cascade Release
EPM Enterprise Process Modeling
FL Folder Manager
FORM Forms
IMF ImanFile
ITEM Item
MAIL Teamcenter Mail
POM Persistent Object Manager
PS Product Structure
SA System Administration
UOM Unit of Measure
VM Teamcenter Volumes
WSOM Workspace Object
The following figure shows the more common classes in the hierarchy.
Class hierarchy
Warning
You may lose data if you use Multi-Site Collaboration with a system that has both
pre-Teamcenter 8 and Teamcenter 8 or later sites.
• If you transfer data from a Teamcenter 8 or later site to a pre-Teamcenter 8 site, any
attributes you added to the subclasses in Teamcenter 8 or later versions are dropped
at the pre-Teamcenter 8 site.
To avoid data loss when transferring data from a Teamcenter 8 or later site to a
pre-Teamcenter 8 site, create new attributes on the master form in Teamcenter 8 or later
versions.
Object-oriented data
What is an object?
An object is a data structure. It is basically the same as an entity or an instance of a class. A class
defines a type of object, its attributes, and methods that operate on it.
For example, folder is a class in Teamcenter. It is defined to have a name, a description, and a list of
entries. These are the attributes of the folder class. When a folder is instantiated, an actual folder
object is created. It has attributes as described here. There are also methods or functions defined for
the folder class, such as Insert and Remove.
Attributes in Teamcenter can be integer, float, boolean, string, date, tag, or arrays of any of these
types. A tag is a unique identifier of a reference (typed, untyped, or external) or a relation (typed or
untyped).
The attributes in Teamcenter can have some of the following characteristics:
• Unique
There can be no duplicate values of the attribute in all of the instances of the class.
• Protected
The attribute can only be modified by the application that created the object.
• NULL allowed
If this characteristic is not true, then the object cannot be saved until the attribute is set.
• Upper bound
The highest value that the attribute can have.
• Lower bound
The lowest value that the attribute can have.
• Default value
The attribute may have an initial value.
Inheritance
Object-oriented language
An object-oriented language is a programming language that has built-in keywords or constructs that
either force or facilitate object-oriented programming. For example, C++ is an object-oriented
language because it supports classes, objects, inheritance, and polymorphism; the C language is not.
However, you can do object-oriented programming with a non-object-oriented language; it is just
harder. You can certainly have an object model with inheritance of attributes and methods without
using an object-oriented language. This is what is presented through ITK.
Internally, much of Teamcenter is written in C++. This provides an easy way of handling
polymorphism. You can take advantage of this in the ITK as well. If you want to execute a method
on an object, you do not need to call a function specific to that action-class pair. You can call a
function for that action at any level up the tree of superclasses of the selected object. For example, if you want
to ask the name of an envelope, you do not need to call the EMAIL_ask_envelope_name function
(in fact, that function does not exist). You can call the WSOM_ask_name function two levels up in the
hierarchy instead. That function realizes that it was actually passed an envelope and invokes the
envelope's method to get the name.
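A small sketch of this polymorphism follows; the fixed-length buffer form of WSOM_ask_name and the WSO_name_size_c constant are assumptions to verify against the Integration Toolkit Function Reference:

char name[ WSO_name_size_c + 1 ];

/* envelope_tag references an Envelope instance, but this workspace-object
   call dispatches to the envelope's own method to return its name. */
WSOM_ask_name( envelope_tag, name );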
Some of the important concepts of object-oriented programming are:
• Encapsulation
Combining data types with valid operations to form a class. A class is a blueprint, or prototype,
that defines the variables and the methods common to all objects of a certain type.
• Inheritance
The ability to inherit class definitions and make modifications that specialize the class, thus
creating a new class.
• Polymorphism
Giving an action one name that is shared up and down an object hierarchy, with each object in
the hierarchy implementing the action in a way appropriate to itself.
Data-model-based customizations
Introduction to data-model-based customizations
The Teamcenter data model framework supports a single, coherent mechanism for developing new
functionality by defining business logic on business objects in the Impl class. This exposes the
business logic API in an object-oriented way (through interfaces).
Customization typically involves adding a new operation to a business object or overriding an
implementation of an existing operation on a business object.
Operations are Teamcenter functions, such as create, checkin, checkout, and so on. Operations can
be placed on business objects and properties. To see these operations, right-click a business object,
choose Open, and in the resulting editor, click the Operations tab.
Operations on a business object provide the interfaces for business logic. The implementation of
the business logic is done by functions or methods on the implementation class corresponding
to the business object. There are certain restrictions on operations. You cannot modify a COTS
operation (that is, a noncustom operation). You cannot modify or delete a released operation; you
can only deprecate it.
Coding process
Following is the general process for writing data-model-based customization code in the Business
Modeler IDE. The Code Generation folder under the Extensions folder is where you do most of
this work.
1. Set up the coding environment.
Set up code generation, and create the release, library, and data types you want to use.
Note
The Business Modeler IDE packages the built C++ library as part of its template
packaging. The complete solution, which includes the data model and the C++ run-time
libraries, is packaged together.
The C++ API Reference documents APIs in C++ signatures.
To access the C++ API Reference, install the Teamcenter developer references when
you install Teamcenter online help, or go to the Global Technical Access Center
(GTAC):
https://support.industrysoftware.automation.siemens.com/docs/teamcenter/
2. Create a release.
Define the software release that the generated code is used in.
3. Create a library.
Make a set of library files that will hold the generated code.
Create a release
Releases are distributions of software products. By default, Teamcenter releases are listed under
the Extensions\Code Generation\Releases folder so that you can create code against these
releases. You can specify the release against which code is created by right-clicking a release and
choosing Organize→Set as active release. The code you create after setting the release is then
assigned to that release. When you create some code objects, such as operations or services,
you must specify a release.
1. Choose one of these methods:
• On the menu bar, choose BMIDE→New Model Element, type Release in the Wizards
box, and click Next.
• In the Extensions folder, open the Code Generation folder, right-click the Releases folder,
and choose New Release.
b. In the Name box, type the name you want to assign to the new release in the database.
When you name a new data model object, a prefix from the template is automatically affixed
to the name to designate the object as belonging to your organization, for example, A4_.
c. In the Display Name box, type the name of the release as it will display in the Business
Modeler IDE.
e. Select Set as Current Release to select this new release as the active one to use for all
data model creation processes.
You can also set a release as active by right-clicking the release and choosing Organize→Set
as active Release. A green arrow in the release symbol indicates it is the active release.
f. Click the arrow in the Type box to choose whether this release is considered a major release,
a minor release (which is dependent on a major release), or a patch (to a minor or major
release).
g. In the Service Version box, type the version of the services you plan to create in the release
using format _YYYY_MM, for example, _2010_06. You must fill in this box if you plan
to create services for this release.
h. Click the Browse button to the right of the Prior Release box to choose the previous release.
If you are creating a minor release, choose the major release that this minor release follows.
If you are creating a patch, choose either the major or minor release that the patch is applied to.
i. Click Finish.
The new release appears under the Releases folder.
3. To save the changes to the data model, choose BMIDE→Save Data Model, or click the Save
Data Model button on the main toolbar.
Create a library
A library is a collection of files that includes programs, helper code, and data. Open the
Extensions\Code Generation\Libraries folders. By default, Teamcenter libraries are listed under
the Libraries folder. You can create your own library that is dependent on Teamcenter libraries.
You can specify the library against which code is created by right-clicking a library and choosing
Organize→Set as active library. The code you create after setting the library is then assigned to
that library. When you create some code objects, such as business object or property operations,
you must specify a library.
1. Set the release the library is to be used for. Open the Extensions\Code Generation\Releases
folders, right-click the release, and choose Organize→Set as active Release. A green arrow in
the release symbol indicates it is the active release.
• In the Extensions folder, open the Code Generation folder, right-click the Library folder,
and choose New Library.
b. Select the isThirdParty check box if the library is built outside of the Business Modeler IDE
or the library is provided by a third-party vendor.
c. In the Name box, type the name you want to assign to the new library, or if you selected the
isThirdParty check box, click the Browse button to specify the third-party library file.
e. Select the Set as Active Library check box to select this new library as the one to use for all
data model creation processes.
You can also set a library as active by right-clicking the library in the Libraries folders and
choosing Organize→Set as active Library. A green arrow in the library symbol indicates it
is the active library.
f. Click the Add button to the right of the Dependent On pane to select the libraries this new
library is dependent on. This box is disabled if you are using a third-party library file.
g. Click Finish.
The new library appears under the Library folder.
4. To save the changes to the data model, choose BMIDE→Save Data Model, or click the Save
Data Model button on the main toolbar.
Data types
In software programming, a data type is a definition for a specific type of data and the operations
that can be performed on that data. For example, if some code has the char data type applied
to it, it is character text and can contain only characters. Or if some code has the int data type
applied, it is integer type code and can contain only numbers. You use data types as the return type
when you create business object operations. To work with data types, open the Extensions\Code
Generation\Data Types folders.
You can work with the following kinds of data types:
• External data type
Custom data types that you can import into Teamcenter, such as date.
External data types are standard data types, as well as custom defined data types that can be
imported into the Business Modeler IDE. You can use external data types as the return type when you
create business object operations. To access the external data types, open the Extensions\Code
Generation\Data Types\External Data Type folders.
1. Choose one of these methods:
• On the menu bar, choose BMIDE→New Model Element, type External Data Type in the
Wizards box, and click Next.
• Open the Extensions\Code Generation\Data Types folders, right-click the External Data
Type folder, and choose New External Data Type.
2. Perform the following steps in the External Data Type dialog box:
a. The Project box displays the project to which this new data type is added.
b. In the Name box, type the name you want to assign to the new data type.
c. In the Namespace box, type a namespace for the type. Namespaces allow for grouping code
under a name to prevent name collisions.
d. In the Description box, type a description of the new external data type.
e. In the Declaration Header box, type the header file that declares the external data type. Enter
the path exactly as the header file should be included in code, for example:
path/custom-data-type.hxx
f. Click Finish.
The new data type appears under the External Data Type folder.
3. To save the changes to the data model, choose BMIDE→Save Data Model, or click the Save
Data Model button on the main toolbar.
4. Deploy your changes to the test server. Choose BMIDE→Deploy Template on the menu bar, or
select the project and click the Deploy Template button on the main toolbar.
• std::string
String from the standard namespace.
• std::vector<bool>
Boolean (true or false) vector.
• std::vector<char>
Character vector.
• std::vector<date_t>
Date vector.
• std::vector<double>
Double vector.
• std::vector<int>
Integer (number) vector.
• std::vector<long>
Long string vector.
• std::vector<PropertyDescriptor>
Property descriptor vector.
• std::vector<std::string>
String vector.
• std::vector<tag_t>
Tag vector.
• std::vector<Teamcenter::DeepCopyData*>
Deep copy data vector.
• tag_t
Tag to an object.
• Teamcenter::DateTime
Class that represents the date in the Teamcenter server.
• Teamcenter::OperationDispatcher
Helper methods for extension framework.
• Teamcenter::Soa::Common::ObjectPropertyPolicy
Service-oriented architecture (SOA) class for holding the object property policy.
• Teamcenter::Soa::Server::InvalidCredentialException
Service-oriented architecture (SOA) exception error for incorrect ID.
• Teamcenter::Soa::Server::ModelSchema
Service-oriented architecture (SOA) representation of schema.
• Teamcenter::Soa::Server::PartialErrors
Service-oriented architecture (SOA) class for holding partial errors.
• Teamcenter::Soa::Server::Preferences
Service-oriented architecture (SOA) class for holding preferences.
• Teamcenter::Soa::Server::ServiceData
Service-oriented architecture (SOA) class that holds model objects and partial errors.
• Teamcenter::Soa::Server::ServiceException
Service-oriented architecture (SOA) service exception class.
• Teamcenter::Soa::Server::TypeSchema
Service-oriented architecture (SOA) representation of schema information.
• char
A single character, such as A, B, Z.
• double
A double-precision floating point decimal number. (For Oracle, the limit for the double property
value is 1E-130 to 9E125. For SQL Server, the limit is 2.3E-308 to 1.7E308.)
• float
A 4-byte decimal number in the range 3.4E±38.
• int
An integer without decimals from 1 to 999999999.
• long
A string of unlimited length.
• void
Associated with no data type.
• std::set
Set from the standard namespace.
• std::vector
Vector from the standard namespace.
Operations
2. Set the active library for the operation. Open the Extensions\Code Generation\Libraries
folders, right-click the custom library, and choose Organize→Set as active Library. A green
arrow in the library symbol indicates it is the active library.
3. In the Business Objects folder, browse to the business object to which you want to add the
operation. To search for a business object, click the Find button at the top of the view.
4. Right-click the business object, choose Open, and click the Operations tab in the resulting view.
Note
The Add button is available only after you create a release.
• Constant?
Sets the return constant value to const.
• Published?
Declares the operation as published for use.
• PreCondition?
Places limits before an action.
• PreAction?
Executes code before an action.
• PostAction?
Executes code after an action.
c. Click the Browse button to the right of the Library box to select the library to hold the
source for the operation.
d. To set parameters on the operation, click the Add button to the right of the Parameters table.
These parameters are the arguments to the C++ functions.
The New Parameter wizard runs.
B. Click the Browse button to the right of the Type box to select a data type to hold the
parameter.
The Find Data Type wizard runs.
C. In the Choose a Data Type Object dialog box, you can select the following data types to
filter:
• Primitive
Generic data types. Only boolean, double, float, and int are available for services.
• External
Standard data types. Only those listed are available for services, and they are
defined by Teamcenter.
• Template
Data types that allow code to be written without consideration of the data type with
which it is eventually used. Only vector is available for services.
• Interface
Data types for returning business objects. Only business objects with their
own implementation are selectable. For business objects without their own
implementation, the closest parent can be selected instead.
• Generated
Service data types, such as structure, map, and enumeration data types. Only the
data types set up for this service are shown as being available for this service. You
can manually type the name of a type that is in another service as long as it is in
the same service library.
E. Click the arrow in the Qualifier box to select the qualifier for the return value.
• No value
Indicates that no parameter is passed.
• &
Indicates that the parameter is passed by reference. The actual parameter is passed
and not just the value of the parameter.
• *
Indicates that the address of the parameter is passed. The address is the pointer
to the parameter.
• **
Indicates the address of the pointer to the parameter is passed. The address is
a pointer to the pointer.
• []
Indicates that the parameter is an array. The address of the first element in the
array is passed.
• *&
Indicates a parameter instance is created within the routine and the pointer of the
instance is returned. The caller must manage the return pointer.
H. Select the Free Parameter Memory? check box if the caller has to free the memory.
This applies only to output and input/output parameters.
I. Click the arrow in the Usage box to select whether the parameter is an input, output, or
input/output parameter.
J. Click Finish.
The Parameters table displays the new parameter.
g. In the Return Description box, type a description of the return value for the operation.
h. Click Finish.
The Business Modeler IDE displays the new operation in the Operations editor.
7. To save the changes to the data model, choose BMIDE→Save Data Model, or click the Save
Data Model button on the main toolbar.
8. Generate code.
Right-click the Business Objects folder and choose Generate Code→C++ Classes.
Operations are actions you can perform on business objects and properties in Teamcenter. You can
create operations on either COTS or custom business objects and properties. To see business
object operations, right-click a business object, choose Open, and in the resulting editor, click the
Operations tab; to see property operations, click the Properties tab, select a property, and click the
Property Operations tab.
A property operation is a function on a property. You can publish getter and setter methods on
a custom operation. After you create an operation on a property, you must generate code and
write an implementation for the operation. You can also add a property operation when you create
a persistent property.
You can also create a business object operation or operations on services.
1. Ensure that you have set the proper active release for the operation. Open the Extensions\Code
Generation\Releases folders, right-click the release, and choose Organize→Set as active
Release. A green arrow in the release symbol indicates it is the active release.
2. Set the active library for the operation. Open the Extensions\Code Generation\Libraries
folders, right-click the custom library, and choose Organize→Set as active Library. A green
arrow in the library symbol indicates it is the active library.
3. Select the business object whose property you want to add an operation to. In the Business
Objects folder, browse to the business object. To search for a business object, click the Find
button at the top of the view.
4. Right-click the business object, choose Open, and click the Properties tab in the resulting view.
5. Select a property in the properties table and click the Property Operations tab.
Note
The Add button is enabled only after you have created a release.
The Business Modeler IDE runs the New Property Operation wizard.
7. In the Property Operation dialog box, select the following check boxes that apply:
• Getter
Enables getting the value of the property.
• Setter
Enables setting the value of the property.
getproperty-nameDisplayableValues
• Published?
Generates the getter or setter method.
• PreCondition?
Places limits before an action.
• PreAction?
Executes code before an action.
• PostAction?
Executes code after an action.
9. To save the changes to the data model, choose BMIDE→Save Data Model, or click the Save
Data Model button on the main toolbar.
Override an operation
You can make a business object or property operation overridable by other operations on child
business objects. An operation is overridable if it is inherited from a parent business object, is
marked as overridable, and is not already overridden. The Override button is available if these conditions are met.
To override a business object operation, in the Operations tab, select the operation and click the
Override button. To override a property operation, in the Property Operations tab, select the
operation and click the Override button. The button changes to Remove Override.
1. Select the business object or property whose operation you want to override.
2. For a business object, click the Operations tab. For a property, click the Property Operations
tab.
3. Select the operation you want to override in the Operations or the Property Operations tab.
The Override button is available if the operation is inherited from a parent business object, is
marked as overridable, and is not already overridden. The Remove Override button displays if
the operation is already overridden.
The following example shows selecting a business object operation to override.
5. Click Finish.
The button changes to Remove Override. To remove an override, click the Remove Override
button.
The following example shows an overridden business object operation.
6. To save the changes to the data model, choose BMIDE→Save Data Model, or click the Save
Data Model button on the main toolbar.
7. Right-click the Business Objects folder and choose Generate Code→C++ Classes.
8. In the generated Impl file, write the implementation code. Point to the new operation to use
instead of the overridden operation.
Deprecate an operation
You can deprecate custom business object operations and custom property operations. Deprecation
means that an object will be deleted in a future release.
To deprecate a business object operation, in the Operations tab, select the operation and click the
Deprecate button. To deprecate a property operation, in the Property Operations tab, select the
operation and click the Deprecate button. The button changes to Undeprecate.
b. Select the Enable Deprecation Policy check box to allow for removal of obsolete objects
from the project.
c. Click the arrow in the Number of Allowed Releases before Deletion box to select how
many releases before objects can be deleted from the project.
d. Click OK.
2. Select the business object or property whose operation you want to deprecate.
3. For a business object, click the Operations tab. For a property, click the Property Operations
tab.
4. Select the operation you want to deprecate in the Operations or the Property Operations tab.
The Deprecate button is displayed.
Note
The Deprecate button is available only if the deprecation policy is set for the project,
the operation is custom, and the operation was created in an earlier release. If not, a
Remove button is displayed instead. An Undeprecate button displays if the operation
has already been deprecated.
6. In the Deprecate Object dialog box, type a description in the Deprecated Description box
and click Finish.
When an operation is deprecated, the button changes to Undeprecate.
Following is an example of a deprecated business object operation.
b. In the Extension dialog box, click the Add button to the right of the Availability table.
c. In the New Extension Availability dialog box, click the Browse button to the right of the
Business Object Name box and select the business object with the custom operation you
want to use.
e. In the Extension Point box, select the extension point that was set up when the custom
operation was created (PreCondition, PreAction, and/or PostAction).
c. Choose the point at which you have made the operation available (Pre-Condition,
Pre-Action, Base-Action, or Post-Action).
The Add button is enabled only on the available points.
Now the operation is run when the extension is invoked at a precondition, preaction, or postaction.
For example, you can write an operation (such as the hasValidSAPPartNumber operation example)
that checks to make sure that items have valid SAP numbers, and invoke the extension as a
precondition for checkin.
• Add the following TcExtension code to define an extern function for the MyCustomObject
property. The name of the extern function is getMyCustomObjectBase, which must be unique.
• Add the following TcExtensionAttach code to attach the external function as a BaseAction
type of the getter operation:
<TcExtensionAttach extensionName="getMyCustomObjectBase"
operationName="PROP_ask_value_tag" isActive="true" propertyName="MyCustomObject"
extendableElementName="UserSession" extendableElementType="Type"
extensionPointType="BaseAction"
conditionName="isTrue" description=""/>
The getter base function for a property operation is declared in the following manner to avoid C++
name mangling. In this example, the library name is MyLib:
#ifdef __cplusplus
extern "C"
{
#endif
extern MYLIB_API int getMyCustomObjectBase(METHOD_message_t *, va_list args);
#ifdef __cplusplus
}
#endif
The following is sample code for the getMyCustomObjectBase base function, which is expected
to return the tag of the singleton instance of the custom business object. The instance is created
but not saved so that this custom business object is actually used as a run-time object providing the
operation codes, causing no impact on database data. The tag of the instance is cached in a static
variable to make sure only one instance is created.
int getMyCustomObjectBase( METHOD_message_t *, va_list args )
{
    int ifail = ITK_ok;
    static tag_t myCustomObjectTag = NULLTAG; /* cached singleton instance */

    va_list largs;
    va_copy( largs, args );
    va_arg( largs, tag_t ); /* Property object tag_t, not used */
    tag_t* customObjTag = va_arg( largs, tag_t* );
    va_end( largs );

    if ( myCustomObjectTag == NULLTAG )
    {
        /* creInput is the create input for the custom business object; its
           construction is omitted from this sample. The instance is created
           but not saved. */
        Teamcenter::BusinessObject* obj =
            Teamcenter::BusinessObjectRegistry::instance().createBusinessObject( creInput );
        myCustomObjectTag = obj->getTag();
    }

    *customObjTag = myCustomObjectTag;
    return ifail;
}
Boilerplate code
Note
Generating code for services is a different process. Right-click the Code
Generation→Services folder and choose Generate Code→Service Artifacts.
Most of the code pieces are autogenerated; you only need to write code in the implementation (Impl)
file, as shown in the following figure. First you create the business objects and operations, and their
definitions are deployed to the database without writing any code. Then you generate code, creating
boilerplate files that provide inheritance and overriding behavior to the business object. Finally, you
write the custom logic for the operation in the stub provided in the Impl file.
To generate code, right-click the Business Objects folder and choose Generate Code→C++ Classes.
This generates the C++ interface for the business object and other boilerplate code. A stub for the
implementation class (the Impl file) is also generated the first time. Add the business logic manually
in this class.
If you have any server code customizations from previous versions, you must regenerate the code
and rebuild your libraries.
Tip
If additional operations are added to the business object at a later time, the operations
must be added manually to this implementation class.
2. Ensure that the code generation environment is set up correctly. Right-click the project,
choose Properties, and choose Teamcenter→Build Configuration and Teamcenter→Code
Generation. Ensure that the project has the correct Teamcenter installation and compiler
home set.
3. To generate boilerplate code for business object and properties operations, right-click the
Business Objects folder and choose Generate Code→C++ Classes.
Caution
You must right-click the Business Objects folder and choose Generate Code→C++
Classes to generate code.
In addition to generating source code files for the customization, the generate process
also creates makefiles to compile the source code. There is a platform-specific
makefile created in the project’s root folder (makefile.wntx64) and there are also a
number of platform neutral makefiles created under the build folder for each library
that is defined in the template. All of these makefiles may be persisted and are only
updated by the Business Modeler IDE when something changes (a new library defined,
build options changed, and so on).
All generated files are placed in the output/generated folder, with a subfolder for each library. The
implementation stub files are placed under the src/server folder, with a subfolder for each library.
Note
The code generated from the Business Modeler IDE ignores the Published flag
set on the operation. In Teamcenter, every operation is a published operation. The
published/unpublished flag is provided so that users can tag their operations as
published or unpublished.
There is no workaround, but this does not cause any error in the Business Modeler IDE
or in the generated code.
To generate boilerplate code for service operations, right-click the service containing the service
operation and choose Generate Code→Service Artifacts. Check the Console view to see the
progress of the generation.
4. After you generate code, open the Advanced perspective by choosing Window→Open
Perspective→Other.
The project folder is now marked with a C symbol. This indicates that the project has been
converted to a CDT nature project. The C/C++ Development Toolkit (CDT) is a collection of
Eclipse-based features that provides the capability to work with projects that use C or C++ as
a programming language. To learn more about CDT, in the top menu bar, choose Help→Help
Contents and choose the C/C++ Development User Guide in the left pane of the Help dialog box.
Choose the Project menu in the top menu bar to see that build options are now added to the
menu. Right-click the project in the Navigator view to see options added to the bottom of the
menu for building and running the project in the context of a CDT.
5. For business object and property operations, write your operation implementation in the
generated business_objectImpl.cxx file. For services, write the implementation in the generated
serviceImpl.cxx files.
The C++ API Reference documents APIs in C++ signatures.
Note
To access the C++ API Reference, install the Teamcenter developer references when
you install Teamcenter online help, or go to the Global Technical Access Center
(GTAC):
https://support.industrysoftware.automation.siemens.com/docs/teamcenter/
When an operation is added to a business object and the code generation command is executed,
the following classes are generated:
• BusinessObject interface that defines the API for the business object. All callers work only
with the interface.
• Dispatch class that implements the interface. It provides the overriding and extensibility
capabilities.
• An abstract generated implementation class. This class provides getters and setters for the
attributes on the business object and any other helper methods needed by the implementation.
• Impl class that is the actual implementation where the business logic is manually implemented by
the developer.
• Delegate class that delegates invocations to the implementation class and breaks the direct
dependency between the dispatch and implementation classes.
Implementation code
Note
The C++ API Reference is included in the Developer References for Customization
collection, which is available two ways:
Local Network
Install the Siemens PLM Documentation Server on your local network, then install the
Developer References for Customization collection.
Doc Center
Find the C++ API Reference in the Developer References for Customization on
Siemens PLM Doc Center. You must have a WebKey account to access the document.
https://www.plm.automation.siemens.com/locale/docs/
1. Create the operation for which you need to write the implementation.
• Business object operation
Right-click the business object against which you want to write the operation, choose Open,
click the Operations tab in the resulting view, and click the Add button.
• Property operation
Right-click the business object against which you want to write the property operation, choose
Open, select the property against which you want to write the operation, click the Property
Operations tab, and click the Add button.
Note
To change the location where generated code is saved, open the project's properties.
Right-click the project, choose Properties, and select Teamcenter→Code
Generation.
Note
The Business Modeler IDE packages the built C++ library as part of its template
packaging. The complete solution, which includes the data model and the C++ run-time
libraries, is packaged together.
Note
To access the C++ API Reference, install the Teamcenter developer references when you
install Teamcenter online help, or go to the Global Technical Access Center (GTAC):
https://support.industrysoftware.automation.siemens.com/docs/teamcenter/
3. On the Operation Descriptor tab, configure the CreateInput attributes to make the new
property visible in the user interface during object creation (when users choose File→New in
the rich client).
4. Add a property operation to the SAPPartNumber property to set the getter and setter code
generation for the property.
Given this example, following is some sample implementation code that could be written for it. This
sample code is for demonstration purposes only.
• The following code block demonstrates how interface methods and ITKs can be called
when adding a new operation on the business object.
/**
* checks if the BusinessObject has a valid SAPPartNumber
* @param valid - true if the SAPPartNumber is valid.
* @return - returns 0 if executed successfully.
*/
int CommercialItemImpl::hasValidSAPPartNumberBase( bool &valid )
{
    int ifail = ITK_ok;
    std::string sapPartNumber;
    bool isNull = false;
    //Get the handle to the interface object first.
    CommercialItem *cItem = getCommercialItem();
    //Invoke the method on the interface ( CommercialItem ).
    ifail = cItem->getSAPPartNumber(sapPartNumber, isNull);
    //Write code here to validate the sapPartNumber value
    //against the business requirement and update the "valid" variable.
    //You can also invoke ITK using the tag.
    //Use getTag() to get the tag to be passed on to the ITK.
    if( ifail == ITK_ok )
    {
        tag_t cItemTag = cItem->getTag();
        //You may write custom code involving ITK calls with cItemTag as input here.
    }
    return ifail;
}
//----------------------------------------------------------------------------------
// CommercialItemGenImpl::getSAPPartNumberBase
// Base for Accessor of SAPPartNumber
//----------------------------------------------------------------------------------
int CommercialItemImpl::getSAPPartNumberBase( std::string &value, bool &isNull ) const
{
int ifail = ITK_ok;
//Add custom code here if needed.
ifail = CommercialItemGenImpl::getSAPPartNumberBase( value, isNull );
//Add custom code here if needed.
return ifail;
}
• The following code block demonstrates how a super method is called when overriding an
existing operation from a child business object, and how a getter operation is added to a
property. Normally the super method should be invoked at the beginning of the overridden
method. However, this does not apply to post methods. The following sample method is a
postaction of create and therefore requires that the super_createPostBase method be called
at the end of the sample.
/**
* desc for createPost
* @param creInput - Description for the Create Input
* @return - return desc for createPost
*/
int CommercialItemImpl::createPostBase( Teamcenter::CreateInput *creInput )
{
int ifail = ITK_ok;
// In addition to the implementation in parent BusinessObject write
// your specific Implementation
if(ifail == ITK_ok)
{
//Add custom code here if needed.
/*
//use getters methods available on OperationInput/CreateInput to access the
//values in CreateInput.
//for example below code gets the SAPPartNumber set by the user during
//the create CommercialItem Action on the create UI.
std::string sAPPartNumber ;
bool isNull = false ;
ifail = creInput->getString("SAPPartNumber",sAPPartNumber,isNull);
//Implement custom code to meet the Business Requirement
*/
}
//Add custom code here if needed.
/*
Custom code
*/
// Call the super createPost to invoke the parent implementation
ifail = CommercialItemGenImpl::super_createPostBase(creInput);
return ifail;
}
Server code
b. Choose Teamcenter→Build Configuration in the left pane of the Properties dialog box.
The Build Configuration dialog box is displayed.
A. Verify that the build flags in the lower pane are set properly for the platform you are
building on.
Add any compile or link flags required for your environment. By default, the libraries
are built in release mode. To build with debug information, add the debug option to
the Compiler Flags box:
Windows: -Zi
Linux: -ggdb
B. Click the Browse button to the right of the Compiler Home box to select the location
where your C++ compiler is located. For example, if your platform is Windows and you
are using Microsoft Visual Studio, browse to compiler-install-path\VC\bin.
For information about supported C++ compilers, see the Siemens PLM Software
Certification Database:
http://support.industrysoftware.automation.siemens.com/
certification/teamcenter.shtml
c. Click OK.
2. Generate code by right-clicking the Business Objects folder and choosing Generate
Code→C++ Classes.
After code is generated, a C symbol is placed on the project folder. This indicates that the project
has been converted to a CDT (C/C++ Development Tools) project. The C/C++ Development
Toolkit (CDT) is a collection of Eclipse-based features that provides the capability to work with
projects that use C or C++ as a programming language. You can work on the project in the CDT
perspective by choosing Window→Open Perspective→Other→C/C++.
After a project is converted to CDT, the Project menu in the top menu bar displays the following
build options.
Build All Builds all projects in the workspace. This is a full build;
all files are built.
Build Project Builds the currently selected project. This is a full build;
all files in the project are built.
To learn more about these menu options, choose Help→Help Contents in the top menu bar and
choose the C/C++ Development User Guide in the left pane of the Help dialog box.
4. While in the C/C++ perspective, choose Project→Build Project to build the libraries containing
server code.
If Project→Build Automatically is already selected, code is built automatically any time the
code is changed and saved.
The compiled artifacts are placed in the output/wntx64 folder. Each defined library has a
subfolder under the output/wntx64/objs folder for the intermediate object files (.obj) and all
compiled libraries (.dll and .lib) are placed in the output/wntx64/lib folder.
Caution
You must build on a Windows client if the code is to be used on a Windows server or
build on a Linux client for a Linux/UNIX server. If the packaged library files are installed
on the wrong platform, they do not work.
The build process in the Business Modeler IDE is designed to build server code on Windows
platforms only. If you intend to build the same custom server code on multiple platforms, you can use
the same project to generate the code and makefiles for each platform. You can use this process to
build on the platforms that appear on the Build Configuration dialog box for your project.
1. On Windows systems, create your new project in the Business Modeler IDE, and add new
business objects, properties, and operations. Generate code and build server code.
2. Right-click the project and choose Properties. On the left side of the dialog box, choose
Teamcenter→Build Configuration.
3. The dialog box includes a tab for each supported platform. For the platforms you want to build
libraries on, select the Create make/build files for the platform-name platform check box.
4. Fill in the Teamcenter Installation and Dependent Templates Directory boxes for the
appropriate locations relative to the target platform and host.
6. For the additional platforms, copy the new template directory and subdirectories, for example:
C:\apps\Teamcenter\bmide\workspace\release\MyTemplate
7. The makefile can be used on the target platform to generate code and build the libraries (for
example, make -f makefile.sol). This generates all code for the project and compiles the libraries.
There is a platform-specific makefile created for each selected platform (for example,
makefile.wnti32, makefile.sol, and so on). These files are created in the root folder of the project.
There are also a number of platform-neutral makefiles created under the build folder for each library
that is defined in the template. All of these makefiles may be persisted and are only updated by the
Business Modeler IDE when something changes (a new library defined, build options changed,
and the like). All platform-specific output files (such as .objs and .dll files) are created under the
platform-specific output folder (for example, output/wnti32, outputs/sol, and so on).
Extensions
Define an extension
An extension is a business rule that adds pre-defined behavior to a business object operation and
fires as a pre-condition, pre-action, or post-action. When you define an extension, you identify the
parameters passed to the extension when it is executed, and how the extension is made available to
other business objects.
1. Open the Extensions\Rules folders.
NameSpace::ClassName::MethodName
4. In the Description box, type a description of the extension and how to implement it.
5. Click the arrow in the Language box and choose the programming language of the function,
either C or C++.
6. Click the Browse button to the right of the Library box to select the library that contains the
function.
Note
You can place libraries in the TC_ROOT\bin directory, or you can set the path to valid
libraries with the BMF_CUSTOM_IMPLEMENTOR_PATH preference (BMF refers to
the Business Modeler Framework). To access preferences, in the My Teamcenter
application, choose Edit→Options and click Search at the bottom of the Options
dialog box.
7. If there are parameters to be passed to the extension, click the Add button to the right of the
Parameter List table.
The Extension Parameter dialog box is displayed.
b. Click the arrow in the Type box to select the parameter type as String, Integer, or Double.
c. Select the Mandatory check box if the parameter is mandatory (that is, the value for that
parameter cannot be null).
d. Select one of the following Suggested Value options to select the type of value for the
parameter:
• None if you type the argument value manually when assigning the extension.
• TcQuery if the values are derived from the results of a saved query.
e. If you selected TcLOV, click the Browse button to the right of the LOV Name box to locate
the list of values for the parameter.
f. If you selected TcQuery, click the Browse button to the right of the Query Name box
to locate the saved query.
The Teamcenter Repository Connection wizard prompts you to log on to a server to look up
its available saved queries.
g. Click Finish.
The parameter is added to the Parameter List table.
8. Click the Move Up or Move Down buttons to change the order in which the parameters are
passed.
9. To define availability of the extension to business objects, click the Add button to the right of
the Availability table.
The Extension Availability dialog box is displayed.
a. Click the Browse button to the right of the Business Object Name box to choose the
business object on which to place the extension. Availability defined for a business object
applies to its children.
Note
User exits also display in this list, so that you can make a user exit available for
use by the new extension.
b. To the right of Business Object Or Property, select Type if the rule is to be associated
with a business object, or select Property if the rule is to be associated with a property on
the business object.
c. If you selected Property, click the arrow in the Property Name box to select the property.
d. Click the arrow in the Operation Name box to select the operation to place on the business
object. The list includes both legacy and new operations for that business object. The new
operations have the C++ style signature.
e. Click the arrow in the Extension Point box to select the point as a PreCondition, PreAction,
or PostAction action.
Pre/post conditions for an operation are controlled during operation creation time. For
example, if a precondition is enabled for the XYZ(data-types) operation during its creation
time, then the availability wizard shows the precondition for that operation only.
f. Click Finish.
The validity item is added to the Availability table.
11. To save the changes to the data model, choose BMIDE→Save Data Model, or click the Save
Data Model button on the main toolbar.
12. Now that you have added the extension, you can assign it to business objects or business
object properties.
2. In the Extensions view, open the Rules\Extensions folders, right-click the new extension you
have created, and choose Generate extension code.
The extension boilerplate code is generated into an extension-name.cxx C++ file and an
extension-name.hxx header file. To see these files, open the project in the Navigator view
and browse to the src\server\library directory.
Note
You may need to right-click in the view and choose Refresh to see the files that were
generated.
3. Write your extension code in the new extension-name.cxx and extension-name.hxx files. You
can use the standard methods of writing code for Teamcenter.
5. Deploy the customization to a test server by choosing BMIDE→Deploy Template on the menu
bar. Check to make sure the extension works properly.
D. Click Add to the right of the Availability table and perform the following in the Extension
availability dialog box:
i. In the Business Object Name box, select Folder.
a. In the Business Objects view, right-click the business object you have made available on
the extension, choose Open, and click the Operations tab in the resulting editor and click
create(Teamcenter::createInput*).
c. In the extension attachment wizard, for the extension point select PostAction and for the
extension select the H2FolderPostAction extension.
d. Click Finish.
The library file is saved as libH2libext. XML entries store the exact name of the library
without the library extension (.dll, .so, and so on).
The extension boilerplate code is generated into a H2FolderPostAction.cxx C++ file and a
H2FolderPostAction.hxx header file. To see these files, open the project in the Navigator
view and browse to the src\server\H2libext\ directory.
Note
You may need to right-click in the view and choose Refresh to see the files that
were generated.
b. Open the H2FolderPostAction.cxx file in a C/C++ editor and add your custom business
logic. Following is a sample file:
#include <H2libext/H2FolderPostAction.hxx>
#include <stdio.h>
#include <stdarg.h>
#include <ug_va_copy.h>
#include <CreateFunctionInvoker.hxx>
using namespace Teamcenter;
int H2FolderPostAction (METHOD_message_t* msg, va_list args)
{
std::string name;
std::string description;
bool isNull;
// printf("\t +++++++++++ In H2FolderPostAction() ++++++++++\n");
va_list local_args;
va_copy( local_args, args);
CreateInput *pFoldCreInput = va_arg(local_args, CreateInput*);
pFoldCreInput->getString("object_name",name,isNull);
pFoldCreInput->getString("object_desc",description,isNull);
// Write custom business logic here
va_end( local_args );
return 0;
}
The symbol should not be mangled (C++); when you list the exported symbols of the built library,
H2FolderPostAction should appear without C++ name mangling.
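Following is a minimal sketch of the extern "C" guard in the generated H2FolderPostAction.hxx
header that keeps the symbol unmangled. The H2LIBEXT_API export macro is a hypothetical name
used here for illustration only; check the generated header for the actual macro.
#ifdef __cplusplus
extern "C"
{
#endif
/* H2LIBEXT_API is an assumed export macro; use the one defined for your library. */
extern H2LIBEXT_API int H2FolderPostAction( METHOD_message_t *msg, va_list args );
#ifdef __cplusplus
}
#endif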
b. Put the libH2libext.dll library file in the TC_ROOT\bin directory, or from the rich client, set
the preference BMF_CUSTOM_IMPLEMENTOR_PATH to the path of the library.
7. Test the H2FolderPostAction extension by creating a folder object from the rich client. The
extension should get executed.
Workflow extension example: Define the sample extension in the Business Modeler IDE
1. Open the Advanced perspective by choosing Window→Open Perspective→Other→Advanced.
2. Navigate to the Extensions view and open the Rules→Extensions folder. Right-click the
Extensions folder and choose New Extension Definition.
The New Extension Definition wizard runs.
5. Click the Browse button to the right of the Library box and select the libsampleExtension library.
D. Click Finish.
c. Click the Add button in the Parameter List table. Now create the Company ID parameter.
A. In the Name box, type Company ID.
D. Click Finish.
B. In the Business Object or Property area, select the Type check box.
E. Click Finish.
2. In the Business Object view, right-click the Item business object, choose Open, and click
the Operations tab.
A view displays the extension operations attached to the business object.
4. Click the Extension Attachments tab and click the Add button.
The Add Extension Rule wizard runs.
Workflow extension example: Develop the C custom extension code for the sample extension
This custom extension initiates a workflow process on an item revision (after it is created) when
creating an item. This is custom code developed using standard ITK development practices.
1. Open the Advanced perspective by choosing Window→Open Perspective→Other→Advanced.
Note
You may need to right-click in the view and choose Refresh to see the files that were
generated.
3. Using the generated boilerplate code files, create the custom code. Following is an example of the
prefixbmf_extension_workflow_sample.c source file, where prefix is your project's naming prefix.
Note
In the following example, change G4_ to your project's naming prefix.
#include <ug_va_copy.h>
#include <itk/bmf.h>
#include <epm/epm.h>
#include <epm/epm_task_template_itk.h>
int getArgByName(const BMF_extension_arguments_t* const p, const int arg_cnt, const char*
paramName)
{
int i = 0;
for(; i < arg_cnt; i++)
{
if(tc_strcmp(paramName, p[i].paramName) == 0)
return i;
}
return -1;
}
int G4_bmf_extension_workflow_sample(METHOD_message_t *msg, va_list args)
{
int ifail = ITK_ok;
tag_t* new_item = NULL;
tag_t* new_rev = NULL;
tag_t item_tag = NULLTAG;
tag_t rev_tag = NULLTAG;
tag_t job_tag = NULLTAG;
tag_t template_tag = NULLTAG;
char* item_id = NULL;
char* item_name = NULL;
char* type_name = NULL;
char* rev_id = NULL;
char relproc[BMF_EXTENSION_STRGVAL_size_c + 1] = {'\0'};
char job_desc[WSO_desc_size_c + 1] = {'\0'};
char job_name[WSO_name_size_c + 1] = {'\0'};
int index = 0;
int paramCount = 0;
int companyid = 0;
int attachment_type = EPM_target_attachment;
BMF_extension_arguments_t* input_args = NULL;
/********************/
/* Initialization */
/********************/
//Get the parameters from the ITEM_create_msg
va_list largs;
va_copy(largs, args);
item_id = va_arg( largs, char *);
item_name = va_arg(largs, char *);
type_name = va_arg(largs, char *);
rev_id = va_arg(largs, char *);
new_item = va_arg(largs, tag_t *);
new_rev = va_arg(largs, tag_t *);
item_tag = *new_item;
rev_tag = *new_rev;
va_end(largs);
// Extract the user arguments from the message
ifail = BMF_get_user_params(msg, &paramCount, &input_args);
if(ifail == ITK_ok && paramCount >= 2)
{
index = getArgByName(input_args, paramCount, "Release Process");
if(index != -1)
tc_strcpy(relproc, input_args[index].arg_val.str_value);
index = getArgByName(input_args, paramCount, "Company ID");
if(index != -1)
companyid = input_args[index].arg_val.int_value;
MEM_free(input_args);
if(relproc[0] != '\0' && rev_tag != NULLTAG)
{
sprintf( job_name, "%s/%s-%d Job", item_id, rev_id, companyid);
sprintf( job_desc, "Auto initiate job for Item/ItemRevision (%s/%s-%d)",
item_id, rev_id, companyid);
//Get the template tag.
ifail = EPM_find_template(relproc , 0, &template_tag);
if(ifail != ITK_ok)
return ifail;
// Create the job, initiate it and attach it to the item revision.
ifail = EPM_create_process(job_name, job_desc, template_tag, 1, &rev_tag,
&attachment_type, &job_tag);
if(ifail != ITK_ok)
return ifail;
}
else
{
// User supplied error processing.
}
}
return ifail;
}
Keep in mind:
• An extension should be expected to process one message.
• The message data and custom extension user data are independent of each other so it does not
matter which is processed first. However, they should be processed together as a set.
• You are expected to know the data that exists for the message. This includes data types and the
retrieval order so this data can be processed properly using va_arg.
• You are expected to know the user data that exists for the custom extension. This includes data
types and whether each data element is mandatory or optional so that this data can be processed
properly after the BMF_get_user_params function is used to retrieve this data.
o Use the compile scripts (discussed below) and specify multiple targets to be compiled.
o Execute the link script once to produce the library containing these extensions.
1. Deploy the customization to a test server by choosing BMIDE→Deploy Template on the menu
bar.
You see a TCM workflow process attached to the item revision when the item is created.
Note
User exit attachments, unlike operation extension attachments, cannot be attached at the
level of child business objects. User exit attached extensions get defined and executed as
callbacks only at a particular business object.
To work with user exits, right-click a user exit and choose Open Extension Rule. You can add actions
on the right side of the editor. The Base-Action section is only shown for user exits operations.
You can make a user exit available for use when you define an extension rule. Right-click the
Extensions folder, choose New Extension Definition, click the Add button to the right of the
Availability table, and click the Browse button on the Business Object Name box. The user exits
display on the list along with business objects.
The Integration Toolkit Function Reference documents user exits.
Note
To access the Integration Toolkit Function Reference, install the Teamcenter developer
references when you install Teamcenter online help, or go to the Global Technical Access
Center (GTAC):
https://support.industrysoftware.automation.siemens.com/docs/teamcenter/
• Provides a consistent approach that can be used for internal Siemens PLM Software development
as well as for external customizations.
Using the Business Modeler IDE, you can autogenerate the C++ plumbing code required to
implement the business logic. This:
• Increases productivity because you can spend more time in writing business logic.
• Dispatch class that implements the interface. It provides the overriding and extensibility
capabilities.
• An abstract generated implementation class. This class provides getters and setters for the
attributes on the business object and any other helper methods needed by the implementation.
• Impl class that is the actual implementation where the business logic is manually implemented by
the developer.
• Delegate class that delegates invocations to the implementation class and breaks the direct
dependency between the dispatch and implementation classes.
Framework interfaces

The framework consists of the following classes:

• Interface (autogenerated)
C++ interface class. Defines the published/unpublished API. All server caller code dependent
on this business object works with only this interface.

• Dispatch (autogenerated)
Implements the interface. Provides the overriding and extensibility (pre/post) capabilities to
the business object.

• Generated Implementation (GenImpl) (autogenerated)
C++ abstract class. This class provides methods to get/set attribute values in the database
and any other helper methods needed by the implementation.

• Implementation (Impl) (not autogenerated)
Implementation class. The developer implements the core business logic for the business
object in this class.

• Delegate (autogenerated)
C++ abstract class. This class breaks the direct dependency between the dispatch and
implementation classes.
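The following sketch, with hypothetical class and method names, illustrates how these framework
classes typically relate for a business object MyObject with an operation myOperation. The code the
Business Modeler IDE actually generates contains additional plumbing and namespaces.
// Interface (autogenerated): callers depend only on this class.
class MyObject
{
public:
    virtual int myOperation( int testParam ) = 0;
};

// Dispatch (autogenerated): implements the interface and runs any attached
// precondition, preaction, and postaction extensions around the base behavior.
class MyObjectDispatch : public MyObject
{
public:
    int myOperation( int testParam );
};

// GenImpl (autogenerated): getters/setters for persistent attributes plus
// super_* helpers used to invoke the parent implementation.
class MyObjectGenImpl
{
public:
    int super_myOperationBase( int testParam );
};

// Impl (written by the developer): the business logic goes in the *Base methods.
class MyObjectImpl : public MyObjectGenImpl
{
public:
    int myOperationBase( int testParam );
};

// Delegate (autogenerated): forwards dispatch calls to MyObjectImpl so that the
// dispatch class has no direct dependency on the implementation class.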
Note
Code is autogenerated in the Business Modeler IDE by opening the Advanced
perspective, right-clicking your project folder in the Business Objects view, and choosing
Generate Code→C++ classes.
Generated code
Teamcenter interaction
To enable server customization, configure the Business Modeler IDE to generate C++ boilerplate
code and write implementation code.
In the Solutions panel of the Teamcenter Environment Manager (TEM), select Business
Modeler IDE.
Choose File→New→Project, and in the New Project dialog box, choose Business Modeler
IDE→New Business Modeler IDE Template Project.
As you work through the wizard, ensure that you fill in the Code Generation Information dialog
box and the Build Configuration Information dialog box.
3. Set up the coding environment in the Advanced perspective of the Business Modeler IDE.
Create the following objects:
• Release
In the Extensions view, choose Code Generation→Releases, right-click the Releases
folder and choose New Release.
• Library
In the Extensions view, choose Code Generation→Libraries, right-click the Libraries
folder, and choose New Library.
After the environment is set up, you are ready to create C++ customizations.
• To add a business object operation, click the Operations folder and click the Add button.
• To add a property operation, click a custom property in the Property Operations folder
and click the Add button.
• To override an operation, click the operation you want to override in the Operations or
the Property Operations folder, and click the Override button. (The Override button is
available if the operation is inherited from a parent business object, is marked that it can be
overridden, and is not already overridden.)
5. Generate code.
In the Business Objects view, right-click your project folder and choose Generate Code→C++
classes.
Note
You cannot use live update to distribute codeful customizations to a server. The
Business Modeler IDE packages the built C++ library as part of its template packaging.
The complete solution, which includes the data model and the C++ run-time libraries,
is packaged together.
2. After writing a new operation or overriding an operation, generate the boilerplate code.
In the Business Objects view, right-click your project folder and choose Generate Code→C++
classes.
/**
* This is a test operation.
* @param testParam — Test Parameter
* @return — Status. 0 if successful
*/
int MyObjectImpl::myOperationBase( int testParam )
{
// Your Implementation
}
• Override an operation
For example, you override the myOperation operation in the MySubObject
business object. The business logic for the override needs to be implemented
in the MySubObjectImpl::myOperationBase method. It needs to call
MySubObjectGenImpl::super_myOperationBase to invoke the parent implementation:
/**
* This is a test operation.
* @param testParam — Test Parameter
* @return — Status. 0 if successful
*/
int MySubObjectImpl::myOperationBase( int testParam )
{
    // Your Implementation
    // Invoke the parent implementation
    int ifail = MySubObjectGenImpl::super_myOperationBase( testParam );
    // Your Implementation
    return ifail;
}
Note
You should not call MyObjectImpl::myOperationBase for invoking parent
implementation.
Note
You should not call MyObjectImpl::myOperation2Base.
The create operation in turn calls a protected method, createBase, that is implemented as an
orchestration method and is not overridable, as shown in the following example:
int POM_objectImpl::createBase( Teamcenter::CreateInput * creInput )
{
try
{
ResultStatus stat;
stat = finalizeCreateInput(creInput);
stat = validateCreateInput(creInput);
stat = createInstance(creInput);
stat = setPropertiesFromCreateInput(creInput);
stat = createPost(creInput);
}
The child operations that are marked in bold in the example can be overridden at a child business
object:
• finalizeCreateInput operation
Override this operation to implement any special business logic code for handling computed
properties. Use it to:
o Compute property values if not defined.
o Assign a computed value to the CreateInput object using generic get/set APIs.
For example, the item_id value can be computed using the existing USER_new_item_id user exit
in an override of this operation.
Note
Whereas the USER_new_item_id user exit is used to create the ID for a single item,
the USER_new_item_ids user exit is used for bulk creation of the IDs for all the
associated objects that are created at the same time as the item (revisions, forms,
and so on).
• validateCreateInput operation
Override this operation to implement any special business logic code for validating the
CreateInput. You can use it to perform uniqueness validation, for example, the item_id input
is unique.
• setPropertiesFromCreateInput operation
Override this operation to implement any special business logic code for populating the POM
instance. Use it to:
o Populate input values onto the corresponding attributes of the POM instance. (This is
implemented generically at the POM_object level.)
o Populate or initialize special attributes in terms of special business logic. (This should be
avoided.)
• createPost operation
Override this operation to implement any special business logic code for creating compounded
objects.
o Create compound objects or secondary objects.
o Assign compound or secondary objects to the primary object through references or GRM
relations.
For example, the master form and item revision on an Item business object are created in this operation.
Note
The create operation logically creates objects and then the save operation commits the
changes to the database. This pattern must be followed whenever the object creation is
initiated. Following this pattern requires that any postactions on the create operation
should not assume the existence of the objects in the database. If database interactions
like queries or saving to the database against the newly created objects are needed in
the postactions, they must be attached as a postaction to the save message and not the
create message. The save message is initiated following the create or modify operation.
To distinguish between the save types, use the following C API:
TCTYPE_ask_save_operation_context
2. Override the child operations on the create operation in the Operations tab for the new business
object.
3. Generate the C++ code in the Advanced perspective by right-clicking your project folder in the
Business Objects view and choosing Generate Code→C++ classes.
• Allows a business object to override the parent create operation to add additional validations
or to handle additional processing.
• Handles additional properties added to a COTS business object by an industry vertical solution
or a customer.
• Handles sub-business objects in a codeless manner (no code generation and writing code) when
they are added by the core model, an industry vertical solution, or a customer.
The metadata for the CreateInput API is different for different business objects and is configured on
the Operation Descriptor tab for the business object in the Business Modeler IDE. This metadata
is inherited and can be overridden down the business object hierarchy. You can add additional
attributes, and clients can query this metadata to provide the appropriate create user interface.
2. Populate the CreateInput object with the input data, using generic property set (C++) APIs.
3. Construct CreateInput objects for compounded objects and populate and assign them to the
CreateInput object of the primary object.
1. Construct a CreateInput object and populate the input into the appropriate property value map.
2. Construct a CreateInput object for the compounded objects and populate and assign them into
the compound object map.
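The following sketch illustrates, in simplified form, the flow described above: populate a CreateInput
with the input values and create the primary object from it. Only getString and
BusinessObjectRegistry::instance().createBusinessObject() appear elsewhere in this guide; the
setString setter and the way the CreateInput instance is obtained are assumptions made for
illustration only, so verify the exact creation and setter APIs in the generated headers.
// Obtain or construct a CreateInput for the business object to create
// (the construction API is not shown in this guide).
Teamcenter::CreateInput *creInput = /* ... */;

// Populate the input values using the generic property set APIs
// (setString is assumed to mirror the getString getter shown earlier).
creInput->setString( "object_name", "My new object" );
creInput->setString( "SAPPartNumber", "4711" );

// Create the business object from the populated CreateInput.
Teamcenter::BusinessObject *obj =
    Teamcenter::BusinessObjectRegistry::instance().createBusinessObject( creInput );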
Note
Whereas the createObjects service operation is used to create a single object, the
bulkCreateObjects service operation is used for bulk creation of all the associated objects
when you create an object with associated objects, such as an item business object.
• Services
Organize operations into groups having to do with a specific area.
• Libraries
Place similar services together.
Teamcenter ships with service libraries and interfaces to build applications in Java, C++, and C#
(.NET), allowing you to choose the method that best fits with your environment. Teamcenter also
ships with WS-I compliant WSDL files for all operations. To see these libraries, see the soa_client.zip
file on the installation source.
A Services Reference is available in the Teamcenter HTML Help Collection.
Note
It is not available in the PDF collection.
• Release
In the Extensions view of the Advanced perspective, choose Code Generation→Releases,
right-click the Releases folder, and choose New Release.
Ensure that you select the Set as Current Release check box to indicate that this release is
the active one to use for your new services. In the Service Version box, type the version
of the services you plan to create in the release using format _YYYY_MM, for example,
_2011_10. You must fill in the Service Version box if you plan to create services for this
release.
• Service library
In the Extensions view, choose Code Generation→Services, right-click the Services
folder and choose New Service Library.
After the environment is set up, you are ready to create services.
2. Add service data types (if needed). Service data types are used by service operations as return
data types and can also be used as parameters to operations.
Right-click the service, choose Open, click the Data Types tab, and click the Add button.
Note
You cannot use live update to distribute codeful customizations to a server.
Services
A service is a collection of service operations that all contribute to the same area of functionality. A
service operation is a function, such as create, checkin, checkout, and so on. Service operations are
used for all internal Teamcenter actions, and are called by external clients to connect to the server.
When you use the Business Modeler IDE to create your own services and service operations, follow
this process:
3. Add a service.
Teamcenter ships with service libraries and interfaces to build applications in Java, C++, and C#
(.NET), allowing you to choose the method that best fits your environment. Teamcenter services
also ship with WS-I compliant WSDL files for all operations. To see these libraries, see the
soa_client.zip file on the installation source.
A service library is a collection of files used by a service and its operations that includes programs,
helper code, and data. Services are grouped under service libraries. You must create a service
library before you create a service.
You can create your own XML file, such as services.xml. In the Project Files folder, right-click
the extensions folder and choose Organize→Add new extension file.
• On the menu bar, choose BMIDE→New Model Element, type Service Library in the
Wizards box, and click Next.
• Open the Extensions\Code Generation folders, right-click the Services folder, and choose
New Service Library.
c. If this library requires other libraries in order to function, click the Add button to the right of
the Dependent On box to select other libraries.
d. Click Finish.
The new library appears under the Services folder.
4. To save the changes to the data model, choose BMIDE→Save Data Model, or click the Save
Data Model button on the main toolbar.
5. Create services to go into the library. Right-click the library and choose New Service.
Add a service
A service is a collection of Teamcenter actions (service operations) that all contribute to the same
area of functionality. Before you can add a service, you must create a service library to place the
service into.
Teamcenter ships with its own set of services and service operations to which you can connect your
own client. See the soa_client.zip file on the installation source.
1. Expand the Extensions\Code Generation\Services folders.
2. Under the Services folder, select a service library to hold the new service, or create a new
service library to hold the new service.
3. Right-click the custom service library in which you want to create the new service, and choose
New Service.
The New Service wizard runs.
b. In the Name box, type the name you want to assign to the new service.
d. Click the Dependencies button to select services that the new service is dependent upon.
You can use the Ctrl or Shift keys to select multiple services.
e. Click Finish.
The new service appears under the service library in the Services folder.
5. To save the changes to the data model, choose BMIDE→Save Data Model, or click the Save
Data Model button on the main toolbar.
6. Create data types and operations for the service. Right-click the service, choose Open, and click
the Data Types tab or the Operations tab.
• Map
Links one type to another. Corresponds to a C++ typedef data type.
• Enumeration
Contains a list of possible values. Corresponds to a C++ enum data type.
A structure data type is a kind of service data type that contains a set of types. It corresponds to a
C++ struct data type. Service structures can be composed of many types: primitive types, string,
enumerations, maps, business objects, structures (for example, ServiceData), as well as vectors
of these (except for ServiceData).
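To make these correspondences concrete, the following sketch shows the C++ shapes the three
kinds of service data types map to. The type names are hypothetical examples; the artifacts actually
generated by the Business Modeler IDE include namespaces and additional serialization support.
#include <map>
#include <string>
#include <vector>

// Structure data type: corresponds to a C++ struct composed of other types.
struct PartInfo
{
    std::string partNumber;
    std::vector< std::string > revisionIds;
};

// Map data type: corresponds to a C++ typedef of the std::map form (key type to value type).
typedef std::map< std::string, PartInfo > PartInfoMap;

// Enumeration data type: corresponds to a C++ enum whose enumerators are string names.
enum ReleaseStatus
{
    InWork,
    Released,
    Obsolete
};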
Note
Service data types are used only by the service library for which they are created.
1. Ensure that you have set the proper active release for the data type. Open the Extensions\Code
Generation\Releases folders, right-click the release, and choose Organize→Set as active
Release. A green arrow in the release symbol indicates it is the active release.
3. Under the Services folder, open a service library and select the service to hold the new data
type. You must create a service library and service before you create a service data type.
4. Right-click the service in which you want to create the new data type, choose Open, and click
the Data Types tab.
b. In the Name box, type the name you want to assign to the new data type.
c. Select the Published check box to make the data type available for use by client applications.
Clear the box to unpublish the type, and to put it into an Internal namespace. Unpublished
types can only be used by unpublished operations (published can be used by either).
d. Click the Add button to the right of the Data Type Elements table to add elements to the
structure data type.
The Structure Element dialog box is displayed.
B. Click the Browse button on the Type box to select the data type object to use for the
element.
Note
When defining service types or operations, you can only reference structure
data types from a previous release or only map data types or enumeration
data types from the same release.
C. Click the Edit button to type a description for the new element.
D. Click Finish.
e. Click the Add button again to add more elements to the structure data type. The Structure
Elements table shows a preview of the new data type.
f. Click the Description button to type a description for the new structure.
g. Click Finish.
The new data type displays on the Data Types tab. To see the characteristics of the data
type, select it and click Data Type Definition on the right side of the editor.
8. To save the changes to the data model, choose BMIDE→Save Data Model, or click the Save
Data Model button on the main toolbar.
9. Write a service operation that uses the new data type. When you create a service operation,
service data types are shown as Generated data types in the Choose a Data Type Object
dialog box.
A map data type is a kind of service data type that links one type to another. It corresponds to
a C++ typedef data type of the std::map form. A map data type consists of two components, a
key type and a value type.
Note
Service data types are used only by the service for which they are created.
1. Ensure that you have set the proper active release for the data type. Open the Extensions\Code
Generation\Releases folders, right-click the release, and choose Organize→Set as active
Release. A green arrow in the release symbol indicates it is the active release.
3. Under the Services folder, open a service library and select the service to hold the new data
type. You must create a service library and service before you create a service data type.
4. Right-click the service in which you want to create the new data type, choose Open, and click
the Data Types tab.
b. Click Browse to the right of the Key Type box to select the key to use for this map.
Note
When defining service types or operations, you can only reference structure data
types from a previous release or only map data types or enumeration data types
from the same release.
c. Click the Browse button to the right of the Value Type box to select the value to use for
this map.
d. Select the Published check box to make the data type available for use by client applications.
Clear the box to unpublish the type, and to put it into an Internal namespace. Unpublished
types can only be used by unpublished operations (published can be used by either).
e. Click the Description button to type a description for the new data type.
f. Click Finish.
The new data type displays on the Data Types tab. To see the characteristics of the data
type, select it and click Data Type Definition on the right side of the editor.
8. To save the changes to the data model, choose BMIDE→Save Data Model, or click the Save
Data Model button on the main toolbar.
9. Write a service operation that uses the new data type. When you create a service operation,
service data types are shown as Generated data types in the Choose a Data Type Object
dialog box.
An enumeration data type is a kind of service data type that contains a list of possible values. It
corresponds to a C++ enum data type. An enumeration data type consists of string names known
as enumerators.
Note
Service data types are used only by the service for which they are created.
1. Ensure that you have set the proper active release for the data type. Open the Extensions\Code
Generation\Releases folders, right-click the release, and choose Organize→Set as active
Release. A green arrow in the release symbol indicates it is the active release.
3. Under the Services folder, open a service library and select the service to hold the new data
type. You must create a service library and service before you create a service data type.
4. Right-click the service in which you want to create the new data type, choose Open, and click
the Data Types tab.
b. Select the Published check box to make the data type available for use by client applications.
Clear the box to unpublish the type, and to put it into an Internal namespace. Unpublished
types can only be used by unpublished operations (published can be used by either).
c. Click the Add button to the right of the Enumerators box to add enumerators to the data type.
The New Enumeration Literal wizard runs. Type a value in the Value box and click Finish.
d. Click the Add button again to add more enumerators to the data type, or click the Remove
button to remove enumerators.
e. Click the Description button to type a description for the new data type.
f. Click Finish.
The new data type displays on the Data Types tab. To see the characteristics of the data
type, select it and click Data Type Definition on the right side of the editor.
8. To save the changes to the data model, choose BMIDE→Save Data Model, or click the Save
Data Model button on the main toolbar.
9. Write a service operation that uses the new data type. When you create a service operation,
service data types are shown as Generated data types in the Choose a Data Type Object
dialog box.
Note
Teamcenter ships with its own set of services and service operations to which you can
connect your own client. See the soa_client.zip file on the installation source.
1. Ensure that you have set the proper active release for the operation. Open the Extensions\Code
Generation\Releases folders, right-click the release, and choose Organize→Set as active
Release. A green arrow in the release symbol indicates it is the active release.
3. Under the Services folder, open a service library and select the service to hold the new service
operation. You must create a service library and service before you create a service operation.
4. Right-click the service in which you want to create the new service operation, choose Open, and
click the Operations tab.
b. Click the Browse button to the right of the Return Type box to select the data type to be
used to return data:
• bool
Returns a Boolean value (true or false). The bool data type is a primitive data type.
• std::string
String from the standard namespace. The std::string data type is an external data type.
• Teamcenter::Soa::Server::ServiceData
Returns model objects, partial errors, and events to the client application. If you select
the ServiceData type, include it only in the top-level structure that a service is returning.
The ServiceData data type is an external data type.
You can also create your own structure service data types and select them in this dialog box.
Note
When defining service types or operations, you can reference only structure data
types from a previous version, or only map data types or enumeration data types
from the same version.
c. Select the Published check box to make the operation available for use by client applications.
Clear the box to unpublish the operation and place it into an Internal namespace.
d. Select the Throws Exception check box if a return error causes an exception to be thrown.
This enables the Exception Condition button.
e. To set parameters on the service operation, click the Add button to the right of the
Parameters table.
The New Service Operation Parameter wizard runs.
B. Click the Browse button to the right of the Type box to select a data type to hold the
parameter.
In the Choose a Data Type Object dialog box, you can select the following data types to
filter out:
• Primitive
Generic data types. Only boolean, double, float, and int are available for services.
• External
Standard data types. Only those listed are available for services; they are defined by
Teamcenter.
• Template
Data types that allow code to be written without consideration of the data type with
which it is eventually used. Only vector is available for services.
• Interface
Data types for returning business objects. Only business objects with their
own implementation are selectable. For business objects without their own
implementation, select the closest parent instead.
• Generated
Service data types, such as structure, map, and enumeration data types. Only the
data types set up for this service are shown as being available for this service.
However, you can type in the name of a data type that is in another service as long
as it is in the same service library.
Note
When defining service types or operations, you can only reference structure
data types from a previous release or only map data types or enumeration
data types from the same release.
C. Click the arrow in the Qualifier box to select the qualifier for the return value. Only the
valid qualifiers display depending on the type of parameter selected.
• &
Reference variable.
• *
Pointer variable.
• **
Double pointer. (Pointer to a pointer.)
• []
Array container.
D. Click the Edit button to type a complete description of this input parameter.
Follow these best practices:
• Specify what this argument represents and how it is used by the operation.
• Specify if this parameter is optional. For an optional argument, describe the behavior
if the data is not passed for that argument.
• Describe how data is to be serialized or parsed if the parameter type hides multiple
property types behind a string.
• Avoid specifying just the type in the description. In case of hash maps, document the
type of the map key and values.
• Use correct formatting with fixed and bold text where appropriate.
E. Click Finish.
The Parameters table displays the new parameter.
f. Click the Operation Description box to type a complete description of the functionality
exposed through the service operation.
Follow these best practices:
• Describe what this operation does. Explain more than simply stating the method name.
• Make the description complete in its usefulness. Keep in mind the client application
developer while writing the content.
• Whenever appropriate, describe how each argument works with other arguments and
gets related to each other when this operation completes.
• Use correct formatting with fixed and bold text where appropriate.
g. Click the Return Description button to type a complete description of what the service
operation returns. Follow these best practices:
• Describe what the output represents and provide high-level details of the output data. Do
not specify only the type of service data returned.
• Specify returned objects that are created, updated, or deleted as part of service data.
• Use correct formatting with fixed and bold text where appropriate.
h. Click the Exception Condition button to type a complete description of conditions that must
be met for the operation to throw an exception.
This box is enabled when the Throws Exception check box is selected.
i. Click the Use Case button to describe how the user interacts with this operation to
accomplish the operation’s goal. Follow these best practices:
• Document when and why this operation can be consumed.
• Describe how operations interrelate to satisfy the use case (if there is interrelation
between operations).
• Use correct formatting with fixed and bold text where appropriate.
j. Click the Dependencies button to select the other operations that are used in conjunction
with this operation to accomplish the goal.
k. Click the Teamcenter Component button to select the component you want to tag this
operation as belonging to. Teamcenter Component objects identify Teamcenter modules
that have a specific business purpose.
Tip
Create your own custom components to use for tagging your service operations.
To create your own custom components, right-click Teamcenter Component
in the Extensions folder, choose New Teamcenter Component, and type the
details for the name, display name, and description. In the Name box, you can
either choose your template name or choose a name to help you easily organize
and identify your set of service operations. After the component is created, it
appears in the list of available components.
l. The preview pane shows how the operation information appears in the service API header. If
you choose to generate API documentation for services (for example, using Javadoc), this
preview shows you how the documentation appears when generated.
Click Finish.
The new service operation displays on the Operations tab. To see the characteristics of the
operation, select it and click Operation Definition on the right side of the editor.
7. To save the changes to the data model, choose BMIDE→Save Data Model, or click the Save
Data Model button on the main toolbar.
8. Generate service artifacts. Right-click the service containing the service operation and choose
Generate Code→Service Artifacts.
Click the buttons at the top of the editor to format the text:
• Bold
Applies a bold font to the selected text. You can also apply bold text by pressing the CTRL+B
keys.
• Italics
Applies italicized font to the selected text. You can also apply italicized font by pressing the
CTRL+I keys.
• Bullets
Applies bullets to the selected items. You can also apply bullets by pressing the CTRL+P keys.
• Variable
Applies a fixed-width font (such as Courier) to the selected text to designate that the text is a
variable. You can also apply the variable format by pressing the CTRL+O keys.
• Check Spelling
Performs a spell check for the text in the Description Editor dialog box. You can also perform a
spell check by pressing the CTRL+S keys.
2. Ensure that the code generation environment is set up in the project the way you want it.
a. Right-click the project folder and choose Properties.
3. To generate code for all services, open the Extensions\Code Generation folders, right-click the
Services folder, and choose Generate Code→Service Artifacts.
In addition to generating source code files for the customization, the generate process also
creates makefiles for C++ artifacts, Ant build.xml files for Java artifacts, and .csproj files for
.NET artifacts, which are used to compile the source code. There is a platform-specific makefile
created in the project’s root folder (for example, makefile.wntx64) and there are also a number
of platform-neutral build files created under the build folder for each library that is defined in the
template. All of these build files may be persisted and are only updated by the Business Modeler
IDE when something changes (such as a new library is defined or build options are changed).
The root makefile (for example, makefile.wntx64) is used to compile all library source files for the
template. This includes the autogenerated code and your hand-written implementation code.
4. All generated files are placed in the output/generated folder, with a subfolder for each library. The
implementation stub files are placed under the src/server folder, with a subfolder for each library.
5. Write your implementations in the generated serviceimpl.cxx files at the locations marked with
the text: TODO implement operation.
Note
To access the Services Reference, install the Teamcenter developer references when you
install Teamcenter online help, or go to the Global Technical Access Center (GTAC):
https://support.industrysoftware.automation.siemens.com/docs/teamcenter/
1. Create the operation for which you need to write the implementation.
In the Extensions\Code Generation\Services folder, open a service library. Right-click the
service to hold the new operation, choose Open, click the Operations tab, and click the Add
button.
{
const BusinessObjectRef< BOMWindow > &bomWindow = *iter;
try
{
// validate the input
if(bomWindow.tag() == NULLTAG)
{
ERROR_store( ERROR_severity_error, error-constant );
resp.serviceData.addErrorStack ();
continue;
}
// call tcserver function to process this
rStat = BOM_save_window( bomWindow.tag() );
// include modified objects
resp.serviceData.addUpdatedObject( bomWindow.tag());
}
// process errors as partial failures, linked to this input object
catch( IFail & )
{
ERROR_store( ERROR_severity_error, error-constant);
resp.serviceData.addErrorStack( bomWindow );
}
}
return resp;
}
In the sample, error-constant is replaced with a constant that maps to the correct error message in
the text server. Typically, you create your own error constants. Constants can be exposed through
ITK (for server code customization) by adding the header file to the kit_include.xml file.
Caution
This sample is for example only and is not expected to compile in your customization
environment. Do not copy and paste this sample code into your own code.
6. If you want the implementation files to be regenerated after making changes to the services, you
must manually remove or rename the files and then right-click the service and choose Generate
Code→Service Artifacts. The Business Modeler IDE does not regenerate the files if they exist,
to avoid overwriting the implementation files after you edit them.
ServiceData implementation
The ServiceData class is the primary means for returning Teamcenter data model objects. Objects
are sorted in created, deleted, updated or plain lists. These lists give the client application access to
the primary data returned by the service. The Teamcenter Services server-side framework provides
the service implementor access to the following ServiceData class with methods to add objects to the
different lists (tag_t can be passed to these in place of the BusinessObject tag):
class ServiceData
{
public:
void addCreatedObject ( const BusinessObjectRef<Teamcenter::BusinessObject> obj );
void addCreatedObjects( const std::vector< BusinessObjectRef<Teamcenter::BusinessObject> >& objs );
void addDeletedObject ( const std::string& uid );
void addDeletedObjects( const std::vector<std::string>& uids );
void addUpdatedObject ( const BusinessObjectRef<Teamcenter::BusinessObject> obj );
void addUpdatedObjects( const std::vector< BusinessObjectRef<Teamcenter::BusinessObject> >& objs );
void addUpdatedObject ( const BusinessObjectRef<Teamcenter::BusinessObject> obj,
const std::set<std::string>& propNames );
void addPlainObject ( const BusinessObjectRef<Teamcenter::BusinessObject> obj );
void addPlainObjects ( const std::vector< BusinessObjectRef<Teamcenter::BusinessObject> >& objs );
void addObject ( const BusinessObjectRef<Teamcenter::BusinessObject> obj );
...
};
Note
If the service is returning a plain object in a response structure, you must also add the
object to the ServiceData class using the addPlainObject method. Otherwise, only a UID is
sent and the client ModelManager process cannot instantiate a real object (which can
cause casting issues when unpacking the service response). The framework typically
automatically adds created, updated, and deleted objects to the ServiceData.
The nature of the Teamcenter data model is such that simply returning the primary requested
objects is not enough. In many instances you must also return the objects referenced by primary
objects. The ServiceData class allows this to be done with the addObject method. Objects added
to the ServiceData class through the addObject method are not directly accessible by the client
application; they are only accessible through the appropriate properties on the primary objects of
the ServiceData class:
class ServiceData
{
public:
...
void addObject ( const tag_t& obj );
void addObjects ( const std::vector<tag_t>& objs );
};
The different add object methods on the ServiceData class only add references to the objects to the
data structure. These object references are added without any of the associated properties. There
are two ways for adding object properties to the ServiceData class: explicitly added by the service
implementation, or automatically through the object property policy. To explicitly add properties use
the addProperty or addProperties method:
class ServiceData
{
public:
...
void addProperty ( const BusinessObjectRef<Teamcenter::BusinessObject> obj,
const std::string& propertyName );
void addProperties( const BusinessObjectRef<Teamcenter::BusinessObject> obj,
const std::set<std::string>& propNames );
};
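For illustration, the following minimal sketch (not taken from the product source) shows how a service
implementation might combine these calls. The resp response structure, its folder member, and the
property names object_name and object_desc are assumptions used only for this example; required
includes (such as <set> and <string>) are omitted for brevity.
// Sketch only: return a referenced folder as a plain object and explicitly add properties.
// "resp" and "folder" are assumed to have been declared earlier in the operation.
resp.folder = folder;                                     // returned in the response structure
resp.serviceData.addPlainObject( folder );                // add it so the client gets a real object
resp.serviceData.addProperty  ( folder, "object_name" );  // explicitly send one property

std::set<std::string> props;                              // or send several properties at once
props.insert( "object_name" );
props.insert( "object_desc" );
resp.serviceData.addProperties( folder, props );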
2. Select the project in the Advanced perspective and choose Project→Build All on the menu bar.
You can also perform this step in the C/C++ perspective. The C/C++ perspective is an
environment you can use to work with C and C++ files, as well as to perform build operations.
The output libraries are generated for all the client bindings, and the server side library is built.
Output library files are placed under the output folder in the client, server, and types subfolders.
3. After all services are built, you can package them into a template.
On the menu bar, choose BMIDE→Generate Software Package.
Note
The Business Modeler IDE packages the built C++ library as part of its template
packaging. The complete solution, including the data model and the C++ run-time
libraries, is packaged together.
Note
You define the bindings that are generated when you create a new project.
• Server libraries
A server-side library is created with the name libprefix-service-library-name.library-extension in
the output\server\lib folder.
For example, if a prefix is set to xyz3 and the service library name is MyQuery, the library file
name is libxyz3myquery.dll for Windows servers and libxyz3myquery.so for UNIX servers.
This library must be deployed into the Teamcenter server using Teamcenter Environment
Manager (TEM).
• Type library
Based on the binding options specified, the corresponding language type libraries
are also generated. For example, if the client application is a Java application, the
prefix-service-library-nameTypes.jar file must be added to the client application's classpath to
use the types defined in the service.
If a prefix is set to xyz3 and the service library name is MyQuery, the type library file name
for the Java client is xyz3MyQueryTypes.jar, and for the C# client, the library name is
xyz3MyQueryTypes.dll. The C++ client binding does not generate a types library.
• WSDL bindings
WSDL files are also generated depending on the option specified, and the corresponding
Axis bindings are created with a prefix-service-library-nameWsdl.jar naming convention. This
must be deployed onto the Web tier. After it is deployed, client applications can consume the
services using WSDL definitions.
Following is typical usage (for example, if the prefix is xyz3, the solution name is MyCustom, and the
service library name is MyQuery).
Deployment   Language/Technology   Libraries to use (Windows)       Libraries to use (UNIX)
Client       C#                    xyz3MyQueryStrong.dll
                                   xyz3MyQueryTypes.dll
                                   xyz3StrongModelMyCustom.dll
Client       WSDL                  Generate client bindings using   Generate client bindings using
                                   exposed WSDL files.              exposed WSDL files.
Note
Siemens PLM Software recommends that instead of using ITK for server customization,
you should use the data model customization method. The data model framework is
the structure used by Teamcenter itself.
Format
All ITK functions have a standard format that attempts to give the most information possible in a small
space. A template is shown below, followed by a more detailed description. All prototypes are located
in include files named classname.h for the class of objects that the functions operate on.
Additional information about specific functions can be found in the Integration Toolkit Function
Reference. (The Integration Toolkit Function Reference is not available in PDF format. It is available
only in the Teamcenter HTML Help Collection.)
The standard format is:
int module_verb_class_modifier ( const type variable-name[dimension]
/* [I/O/OF] */ );
int Nearly all ITK functions return an integer error code. This code can be
passed to the EMH_ask_error_text function.
module This is the module designator. Related classes are grouped together
in modules. For example, the Dataset, DatasetType, Tool, and
RevisionAnchor classes are all handled by the AE module. Other
examples of modules are PSM, POM, FL, and MAIL.
verb This is the first key word describing an action to be taken on an object
or set of objects. Common actions are create, add, remove, copy,
set, and ask.
class This is the class name or an abbreviation of the class name. The
exceptions are for modules that only deal with one class, like Workspace
or modules that deal with many classes, like WSOM and POM.
modifier This is a word or several words that give more description of how
the action of the verb applies to the class. For example, in the
RLM_update_af_status function, status indicates what is being
updated in the af (authorization folder).
const Input pointer variables which are not meant to be modified normally are
declared with a const to ensure that they are not accidentally modified.
type This is the data type of the argument (for example, int, tag_t*, or
char***).
variable-name This is a variable name that is descriptive enough so a programmer
does not need to look it up in the documentation.
dimension This value is normally specified for arrays where the calling program is
responsible for allocating space. Dimensions are normally specified with a
constant definition of the form module_description_c. Use these constants
when dimensioning arrays or looping through the values of an array to make
your programs more maintainable. You can also look up the values of these
constants in the include files, which is useful so that you do not establish
naming conventions that lead to name strings longer than can be passed to
the functions.
I/O/OF These characters indicate whether the particular argument is input,
output or output-free. Output-free means that the function allocates
space for the returned data and this space should be freed with the
MEM_free function.
Most typedefs and constants begin with the module designator like the function names to make it
clear where they are meant to be used.
Note
The following constants have a length of 128; therefore, class attributes can contain 128
bytes:
• ITEM_id_size_c
• ITEM_name_size_c
• WSO_name_size_c
Beginning in Teamcenter 10.1, these fixed-size buffer ITK APIs are deprecated.
Many of the Integration Toolkit (ITK) APIs in Teamcenter are based on fixed-size string
lengths. As long as Teamcenter is operating in a deployment scenario where each
character occupies only one byte (such as western European languages), there are not
any issues. However, when Teamcenter is deployed in a scenario where each character
can occupy multiple bytes (such as Japanese, Korean, or Chinese), data truncation can
occur. Therefore, the fixed-size buffer ITK APIs are deprecated, and wrapper APIs are
provided for each of the fixed-size buffer APIs.
Siemens PLM Software recommends that you refactor any customizations that use the
older fixed-size buffer APIs and replace them with the new APIs. The new APIs use the
char* type instead of fixed char array size in the input and output parameters. The new
APIs use dynamic buffering to replace the fixed-size buffering method.
For a list of the deprecated ITK APIs and their replacements, see the Deprecated tab in
the Integration Toolkit Function Reference.
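As a rough illustration of the difference between the two patterns, the following sketch contrasts a
fixed-size buffer call with its char*-based replacement. XXX_ask_name and XXX_ask_name2 are
placeholder names, not real APIs, and object_tag and ifail are assumed to exist; see the Deprecated
tab in the Integration Toolkit Function Reference for the actual function names.
/* Old pattern: caller supplies a fixed-size buffer sized with a *_size_c constant. */
char name[WSO_name_size_c + 1];
ifail = XXX_ask_name( object_tag, name );          /* data can truncate in multibyte locales */

/* New pattern: the API allocates the string; free it with MEM_free when done. */
char* name2 = NULL;
ifail = XXX_ask_name2( object_tag, &name2 );
if( ifail == ITK_ok && name2 != NULL )
{
    /* ... use name2 ... */
    MEM_free( name2 );
}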
Class hierarchy
Because Teamcenter is an object-oriented system, it is not necessary to have a function for every
action on every class. Usually, functions associated with a particular class also work with instances of
any of its subclasses. For example, the FL_ functions for folders work with authorizations and envelopes.
The AOM module and the WSOM module consist of functions that correspond to the methods of
the POM_application_object and WorkspaceObject abstract classes respectively. AOM functions
work for all application objects and WSOM functions work for all workspace objects.
Caution
When you create objects, an implicit lock is placed on the newly created object. After you
perform an AOM_save, it is important that you call AOM_unlock.
The same mechanism described here also enables standard Teamcenter functions to work on
subclasses that you may define. For example, if you define a subclass of the folder class, you can
pass instances of it to FL functions.
Include files
All ITK programs must include tc/tc.h. It has the prototype for ITK_init_module and the definition
of many standard datatypes, constants, and functions that almost every ITK program requires,
such as tag_t and ITK_ok.
The include files are located in subdirectories of the TC_ROOT/include directory. You must include
the subdirectory when calling the include file. For example, when calling the epm.h include file, which
is located in the epm subdirectory, you must call it as:
#include <epm/epm.h>
All of the other ITK functions have their prototypes and useful constants and types in a file called
classname.h. In addition there are header files for many of the major modules that contain constants
and include all of the header files for the classes in that module. Sometimes this header file is
required, sometimes it is just convenient. The Integration Toolkit Function Reference tells you which
header files are required.
Note
The Integration Toolkit Function Reference is not available in PDF format. It is available
only in the Teamcenter HTML Help Collection.
There is also an error include file for every module called module_errors.h or sometimes
classname_errors.h. These files all contain error offsets from a value assigned in the
tc/emh_const.h file. One exception is POM, which has its errors in the pom/pom/pom_errors.h file.
Teamcenter also comes with a set of include files that mimic standard C include files.
For example, instead of using:
#include "stdarg.h"
you use:
#include <fclasses/tc_stdarg.h>
These include files are used in Teamcenter to insulate it from the operating system. Sometimes the
files include the system files directly, but other times system dependencies are handled in the tc_
include files, thus enabling the Teamcenter source code and yours to remain system independent.
The files provided are:
fclasses/tc_ctype.h
fclasses/tc_errno.h
fclasses/tc_limits.h
fclasses/tc_math.h
fclasses/tc_stat.h
fclasses/tc_stdarg.h
fclasses/tc_stddef.h
fclasses/tc_stdio.h
fclasses/tc_stdlib.h
fclasses/tc_string.h
fclasses/tc_time.h
fclasses/tc_types.h
fclasses/tc_unixio.h
{
    tag_t dataset_tag;
    int error_code;
    /* login, etc. ... */
    error_code = AE_find_dataset ("my special dataset", &dataset_tag);
    if( error_code == ITK_ok )
    {
        if( dataset_tag == NULLTAG )
        {
            /*
             * Could not find my special dataset, so go create one.
             */
            error_code = AE_create_dataset_with_id (....
        }
    }
    else
    {
        /* report error .... */
Overview
The Teamcenter ITK build tools are provided in the form of script files such as compile.bat and
linkitk.bat. These script files do not take advantage of the features available in the Visual Studio IDE,
such as debugging, code completion, symbol browsing, and project management. However, you can
use the settings in these files to configure Visual Studio to build debug and release versions of an ITK
application and to configure the debugger to debug the application, as follows:
Prerequisites:
• A Teamcenter installation
• The Teamcenter sample files installed using Teamcenter Environment Manager (TEM)
Note
Consult the hardware and software certifications for Teamcenter 12 for the required version
levels of third-party software.
The various files you need are located in the Teamcenter sample files directory, TC_ROOT\sample.
1. Create a Visual Studio solution for your ITK code using one of the provided sample files.
For example: ...\utilities\find_recently_saved_item_rev_itk_main.c
2. Set up the project property sheets to compile and link the ITK sample code.
Use the compile and link script files to provide the necessary directories, dependencies,
preprocessor definitions, and libraries to your solution.
You may now build the solution. The resultant executable file must be run from within a Teamcenter
command-line environment.
2. Paste them into the Environment field on the debug property page for the debug configuration.
You can also add Command Arguments to save typing. For example:
-u=user -p=password
Debugging ITK
Journal files
Teamcenter optionally journals the calls to all of the ITK functions. The journal is written to a file
called journal_file after the program is run. This file can be very useful in finding out what your
program has done and where it went wrong. There are many journaling flags in the system for the
individual modules and for ITK in general, as well as flags for NONE and ALL. Another option is to
set the TC_JOURNAL environment variable to FULL, SUMMARY, or NONE.
For POM, use the POM_set_env_info function; otherwise, use the xxx_set_journaling(int flag)
function, where flag is equal to 0 (zero) for off or not equal to 0 for on. For example, you can turn ITK
and EPM journaling on if you are working in that area and leave POM and PS journaling off, ensuring
that your journal file does not get cluttered with irrelevant information. The following code shows how
the POM logs input and output arguments:
POM_start ( 'aali', 'aali', 'Information Manager', user, topmost_class_id,
pom_version)
@* --> 6s
@* FM_init_module
@* --> 8s
@* EIM_PM_id_of_process ( 103, process_id)
@* process_id = 558 returns 0
@* returns 0 [in 2 secs]
@* AM_init_module
@* FM_init_module
@* returns 0
@* FM_allocate_file_id ( file)
@* returns 0, file = 000127cc08b2
@* --> 9s
@* AM_identify_version ( identity)
@* returns 0
@* returns 0
@* EIM_PM_id_of_process ( 105, process_id)
@* process_id = 559 returns 0
@* --> 10s
@* FM_read_file ( 00650002)
@* returns 0
@* --> 11s
@* FM_ask_path ( 00650002, 750, path)
@* returns 0, path = '/home/demov2/user_data/d1/f000627cbd90f_pom_schema'
@* --> 12s
@* FM_unload_file ( 00650002)
@* returns 0
@* --> 14s
@* FM_allocate_file_id ( file)
@* returns 0, file = 000227cc08b2
The first few hundred lines of a journal file contain a lot of text like the example above. This is the
result of reading in the schema. If you define POM classes, you can also see their definitions in this
section. This is helpful for verifying that the database contains what you think it contains. Also, when
you see class IDs or attribute IDs elsewhere in the journal file, you can refer to this section to see
what they are.
The following code is an example of an application function being journaled:
RLM_ask_approver_roles ( 006500e1)
@* POM_length_of_attr ( 006500e1, 00650009, length)
@* length = 1 returns 0
@* POM_length_of_attr ( 006500e1, 00650009, length)
@* length = 1 returns 0
@* POM_ask_attr_tags ( 006500e1, 00650009, 0, 1, values, is_it_null,
is_it_empty)
@* values = { 006500dc <AAlccPghAAAAyD> }, is_it_null = { FALSE },
@* POM_free ( 0041cf50)
@* returns 0
@* POM_free ( 0041d390)
@* returns 0
@* role_count = 1, approver_roles = { 006500dc <AAlccPghAAAAyD> } returns 0
The journal information for lower-level routines is nested inside that of the higher-level routines.
System logs
The system log can be useful in diagnosing errors because when a function returns an error, it
often writes extra information to the system log. For example, if your program crashes you may
not have an error message, but the end of the system log may show what the program was trying
to do just before crashing.
Logging
Information on logging can be found in the Log Manager (LM) module reference section of the
Integration Toolkit Function Reference.
Note
The Integration Toolkit Function Reference is not available in PDF format. It is available
only in the Teamcenter HTML Help Collection.
Error handling
Teamcenter stores errors on a central error stack. An error may be posted at a low level and each
layer of the code handles the error reported by the level below, potentially adding a new error of its
own to the top of the stack to add more contextual information to the exception being reported. This
stack of errors is what you see displayed in the Teamcenter error window in the user interface. ITK
functions always return the top error from the stack if there is one. If the call is successful, ITK_ok
is returned.
The Error Message Handler (EMH) ITK functions enable you to access the full error stack and
decode individual Teamcenter error codes into the internationalized texts that are defined in the XML
code and displayed in the error windows at the user interface. EMH ITK functions are defined in the
tc/emh.h header file. They give the ability to store errors, access errors on the stack, and decode
error codes into internationalized text.
Additional information on EMH functions can be found in the EMH section of the Integration Toolkit
Function Reference. (The Integration Toolkit Function Reference is not available in PDF format. It is
available only in the Teamcenter HTML Help Collection.)
Note
It is good practice to acknowledge that you have received and handled an error reported
by an ITK call by clearing the error store with a call to the EMH_clear_errors function.
However, if you do not do this it normally does not cause a problem, as all initial errors
should start a new stack with a call to the EMH_store_initial_error function.
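The following minimal sketch shows this pattern: check the return code of an ITK call, decode it for
logging, and clear the error store. The exact EMH signatures, and whether the returned text must be
freed with MEM_free, are assumptions here and should be confirmed in the Integration Toolkit Function
Reference; the header for AOM_save and the object_tag variable are also assumed.
#include <tc/tc.h>
#include <tc/emh.h>
#include <fclasses/tc_stdio.h>

/* Sketch only: "object_tag" is assumed to reference a loaded, modified object. */
int ifail = AOM_save( object_tag );
if( ifail != ITK_ok )
{
    char* error_text = NULL;
    EMH_ask_error_text( ifail, &error_text );        /* decode the top error code (signature assumed) */
    printf( "AOM_save failed: %s\n", error_text ? error_text : "unknown error" );
    if( error_text != NULL )
    {
        MEM_free( error_text );                       /* assumed to be an output-free argument */
    }
    EMH_clear_errors();                               /* acknowledge that the error was handled */
}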
Caution
ITK extension rules cannot provide information or warning feedback to users.
For example, if you want a warning dialog box to appear after an operation is done, you
cannot define the warning in ITK extension rule code because of the code processing order:
1. Teamcenter code is executed.
For example, you put the following into the extension rule custom code:
EMH_store_error_s2(EMH_severity_warning, ITK_err,
"EMH_severity_warning", "ITK_ok");
return ITK_ok;
You place this code in step 2 because you want the Teamcenter framework to proceed with
step 3 so that when the whole execution is done, the user receives a warning message in
the client.
However, the only errors or warnings returned from a service operation are the ones
explicitly returned by the business logic of the service implementation. This is typically
done with code like this:
try
{
    ....
}
catch( IFail& ifail )
{
    // This adds the current IFail and whatever other errors/warnings
    // are currently on the error store.
    serviceData.addErrorStack();
}
The Teamcenter Services Framework does not look at the error store to see if there
are warnings or errors. Unless the exception moves up to the Teamcenter (service
implementation) code where it is caught, the error or warning is lost.
Memory management
Frequently, memory is allocated by a Teamcenter function and returned to the caller in a pointer
variable. This is indicated in the code by the /* <OF> */ following the argument definition. This memory
should be freed by passing the address to the MEM_free function. For example, to use the function:
int AE_tool_extent (
int* tool_count, /* <O> */
tag_t** tool_tags /* <OF> */
);
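For example, a call to this function might look like the following sketch (error handling abbreviated;
ifail is assumed to be declared earlier):
int    tool_count = 0;
tag_t* tool_tags  = NULL;

ifail = AE_tool_extent( &tool_count, &tool_tags );
if( ifail == ITK_ok )
{
    int i;
    for( i = 0; i < tool_count; i++ )
    {
        /* ... work with tool_tags[i] ... */
    }
    MEM_free( tool_tags );   /* the <OF> argument: the function allocated this array */
}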
Initializing modules
You must either call the initialization function for all modules, ITK_init_module, or the individual
module initialization function, such as FL_init_module, before using any of the functions in that
module. If you do not, you get an error like FL_module_not_initialized. The only reason that
a module initialization should ever fail is if the module requires a license and there is no license
available.
You should also remember to exit modules. In some cases this could cause significant memory to be
freed. It may also free up a concurrent license for use by another user.
Compiling
Compile your program using the ANSI option. The ITK functions can also be called from C++.
Sample compile scripts are available in the TC_ROOT/sample directory. Assuming the script name is
compile, it can be used by executing the following command.
TC_ROOT/sample/compile filename.c
The actual compile statements are as follows:
• HP
cc filename.c -c -Aa filename.o -D_HPUX_SOURCE -DHPP -I$TC_INCLUDE
• Solaris
cc -KPIC -DSUN -DUNX -D_SOLARIS_SOURCE -DSYSV +w -Xc -O -I.
-I${TC_INCLUDE} -c filename.c -o filename.o
• Linux
gcc -c -fPIC -m64 -DPOSIX -I${TC_INCLUDE} filename.c -o filename.o
• Windows
If you are using Windows, you must supply the -DIPLIB=none argument to tell the compile script
that a stand-alone program object is being compiled:
%TC_ROOT%\sample\compile -DIPLIB=none filename.c
Note
For information about system hardware and software requirements for Teamcenter 12, see
the Siemens PLM Software Certification Database:
http://support.industrysoftware.automation.siemens.com
/certification/teamcenter.shtml
User exits are places in the server where you can add additional behavior by attaching an extension.
Additional information about ITK user exits can be found in the Integration Toolkit Function Reference.
Note
The Integration Toolkit Function Reference is not available in PDF format. It is available
only in the Teamcenter HTML Help Collection.
4. Complete your customization. It is good practice to include a printf statement with a coding
version and compile date. The following is a minimal example:
$ cd ../code
a. Copy all of the files that you want to modify (user_gsshell.c in the following example):
$ cp $TC_ROOT/sample/examples/user_gsshell.c .
c. Create additional .c files as required, but see below for Windows considerations when
exporting new symbols from the library.
5. Compile your library objects and move them into the library directory:
$ $TC_ROOT/sample/compile *.c
$ cp *.o ../user_exits
7. You now have a new libuser_exits shared library file (for example, libuser_exits.so). Set the
TC_USER_LIB environment variable to get Teamcenter to use it:
$ export TC_USER_LIB=`pwd`
$ $TC_BIN/tc
You see your printf messages after the Teamcenter copyright messages.
User exits are places in the server where you can add additional behavior by attaching an extension.
Additional information about ITK user exits can be found in the Integration Toolkit Function Reference.
Note
The Integration Toolkit Function Reference is not available in PDF format. It is available
only in the Teamcenter HTML Help Collection.
4. Complete your customization. It is considered good practice to include a printf statement with a
coding version and compile date. The following is a minimal example:
$ cd ../code
a. Copy all of the files that you want to modify (the user_gsshell.c file in the following example):
$ cp $TC_ROOT/sample/examples/user_gsshell.c .
c. Create additional .c files as required, but see below for considerations when exporting new
symbols from the library.
5. Compile your library objects and move them into the library directory:
%TC_ROOT%\sample\compile -DIPLIB=libuser_exits *.c
copy *.obj ..\user_exits
6. Move into the library directory and link the library. You do not need .def files:
cd ../user_exits
%TC_ROOT%\sample\link_user_exits
7. You now have new libuser_exits.dll and libuser_exits.lib files. Set the TC_USER_LIB
environment variable to get Teamcenter to use it:
for /f "delims==" %i in ('chdir') do set TC_USER_LIB=%i
%TC_BIN%\tc
Note
To run stand-alone programs against this libuser_exits.dll library, you must link them
against the corresponding libuser_exits.lib file.
You see your printf messages after the Teamcenter copyright messages.
2. Enter USER_EXITS_API after the word extern to mark each function declaration for export.
This is called decoration.
3. Enter #include <user_exits/libuser_exits_undef.h> at the end of the header file after all
declarations (within any multiple include prevention preprocessing).
In each source file defining new functions for export from the .dll file, ensure that the .c file includes
its own corresponding .h file, so the definitions in the .c file know about the decorated declarations
in the .h file. If you do not do this, the compiler complains about missing symbols beginning with
__int__ when you try to link stand-alone code against the libuser_exits.lib file.
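As an illustration of the two steps above, a decorated header might look like the following sketch. The
file name my_user_exits.h and the function name my_new_dataset_exit are hypothetical; only
USER_EXITS_API and the libuser_exits_undef.h include are taken from the procedure.
/* my_user_exits.h (hypothetical file name) */
#ifndef MY_USER_EXITS_H
#define MY_USER_EXITS_H

#include <fclasses/tc_stdarg.h>
#include <user_exits/user_exits.h>

/* USER_EXITS_API after the word extern marks the declaration for export (decoration). */
extern USER_EXITS_API int my_new_dataset_exit( int* decision, va_list args );

/* Undo the decoration macro after all declarations, inside the include guard. */
#include <user_exits/libuser_exits_undef.h>

#endif /* MY_USER_EXITS_H */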
Now you can compile and link as normal (in other words, compile with -DIPLIB=libuser_exits).
The order of Teamcenter libraries is important or your code may not link. Also add any additional
object files you have created to the link script to include them in the main executable. A sample
linkitk script file exists in the TC_ROOT/sample directory.
If the script name is linkitk, you can execute the script with the following command:
$TC_ROOT/sample/linkitk -o executable-name filename.o
Note
• The linkitk.bat file contains only the core Teamcenter libraries. To call server APIs
from optional solutions, you must add the libraries from the optional solutions to the
linkitk.bat file. You must also ensure these libraries are available in the TC_LIBRARY
location.
For example, to add the libraries for the optional Content Management
solution, add the following lines near the end of the linkitk.bat file (before the
"-out:$EXECUTABLE.exe" line):
Be careful to execute this command in a directory where additional files can be created since session
and log files are created at run time.
Note
The Integration Toolkit Function Reference is not available in PDF format. It is available
only in the Teamcenter HTML Help Collection.
If you are upgrading from a pre-Teamcenter 2007 version and you used ITK functions to create data
model functions, you must remove those functions from your programs. They are obsolete and may
corrupt your data model if you continue to use them.
Batch ITK
ITK_exit_module( TRUE);
return status;
}
Caution
Make sure the TC_INCLUDE and TC_LIBRARY environment variables are set correctly.
The -DIPLIB argument specifies the target library; use none for batch programs.
2. Link your program with the system library with the linkitk script:
linkitk -o executable filename.obj file3.obj file2.obj
The compile and linkitk scripts are Teamcenter-specific scripts and are available in the
TC_ROOT/sample directory.
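To show how the pieces fit together, the following is a minimal batch ITK skeleton consistent with the
code fragment above. The login arguments passed to ITK_init_module and the ITK_user_main entry
point expected by the linkitk script are assumptions for this sketch; verify them in the Integration
Toolkit Function Reference before use.
#include <tc/tc.h>
#include <fclasses/tc_stdio.h>

/* Minimal batch ITK skeleton (sketch only). */
int ITK_user_main( int argc, char* argv[] )
{
    /* The user, password, and group arguments shown here are assumed for illustration. */
    int status = ITK_init_module( "infodba", "infodba", "dba" );
    if( status != ITK_ok )
    {
        printf( "Login failed with error %d\n", status );
        return status;
    }

    /* ... batch processing goes here ... */

    ITK_exit_module( TRUE );
    return status;
}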
Custom exits
Caution
You should use the Business Modeler IDE user exits to do this kind of customization
work. Only create your own custom exits if you cannot accomplish your goals with user
exits in the Business Modeler IDE.
All the user exit customizations of the site are built in the form of a single library, called
site-name.dll/so/sl.
For example, if the site name is cust1, the library is cust1.dll/so/sl. The custom library
site-name.dll/so/sl is linked with libuser_exits. Use the link_custom_exits.bat file from
the TC_ROOT/sample directory. All custom stand-alone ITK utilities are linked with the
site-name.dll/so/sl file instead of libuser_exits as the customization is in the site-name.dll/so/sl
file. To see the customization, define the custom library name in the TC_customization_libraries
preference. Make sure the preference value and the library file name prefix are the same. Set the
preference in the Options dialog box, accessed from the Edit menu in My Teamcenter.
2. Copy all your user exits and server exits customization code to this directory.
3. Compile your object files inside this directory using the TC_ROOT/sample/compile command
with the -DIPLIB=NONE option.
4. Build the dynamic linked/shared library with the link_custom_exits command. For example:
link_custom_exits cust1
Note
• Do not modify or rebuild the libuser_exits and libserver_exits libraries with the
custom code.
• Do not unarchive the libuser_exits.ar.lib file. These object files are not required
to be linked with the cust1 library.
• Any custom library that uses the USERARG_* ITK method requires
dependency over the COTS (standard) libict shared library. If you build
this custom library on the UNIX or Linux platforms, you must modify the
$TC_ROOT/sample/link_custom_exits script to register dependency of your
custom library on the libict library. For example, add the following lines to the
link_custom_exits script:
set optionalLibraries="-lict"
$LinkTC -o $customLibrary $objectFiles $optionalLibraries
This ensures that the custom library is loaded correctly even by those standalone
ITK programs that do not have explicit dependency over the libict library.
5. If you are using Windows, copy the .dll and .pdb files to the directory that contains the .dll
files. For example:
copy cust1.dll %TC_ROOT%\bin\.
copy cust1.pdb %TC_ROOT%\bin\.
If you are using UNIX, copy the shared library to the TC_USER_LIB directory or put
the cust1 directory in the shared library path (Hewlett-Packard: SHLIB_PATH, Solaris:
LD_LIBRARY_PATH).
#include <tccore/custom.h>
#include <user_exits/user_exits.h>
#include your-include-files
extern DLLAPI int cust1_register_callbacks()
{
    CUSTOM_register_exit ("site-name", "Base User Exit Function",
        (CUSTOM_EXIT_ftn_t) your-custom-function-name);
    return ITK_ok;
}
• site-name_register_callbacks
This is the entry point to the custom library. This function is executed after loading the
custom library. Ensure that the site-name name is the same as the value stored in the
TC_customization_libraries preference. For example, if the custom library is cust1, this
function should be called cust1_register_callbacks.
• your-custom-function-name
This is the function pointer registered against the above base user exit. The definition of this
function does not need to exist in this file, it could be in another file.
• your-include-files
Add any custom include files needed during the compilation and linking of the custom library.
The custom functions registered in the site-name_register_calls.c file should have an integer pointer
as the first argument. This integer pointer is used to return the decision, indicating whether to execute
only the current customization, all customizations, or only the default core functionality. The
following variables are defined in the custom.h file in the TC_ROOT/include/tccore directory:
#define ALL_CUSTOMIZATIONS 2
#define ONLY_CURRENT_CUSTOMIZATION 1
#define NO_CUSTOMIZATIONS 0
cust1_register_calls.c:
#include <tccore/custom.h>
#include <user_exits/user_exits.h>
#include <cust1_dataset.h>
#include <cust1_register_callbacks.h>
extern DLLAPI int cust1_register_callbacks()
{
    CUSTOM_register_exit ("cust1", "USER_new_dataset_name",
        (CUSTOM_EXIT_ftn_t) CUST1_new_dataset_name);
    return ITK_ok;
}
cust1_register_callbacks.h:
extern DLLAPI int CUST1_new_dataset_name (int *decision, va_list args);
cust1_dataset.c:
#include <cust1_register_callbacks.h>
#include <tccore/custom.h>
extern DLLAPI int CUST1_new_dataset_name (int *decision, va_list args)
{
    int ifail = ITK_ok;
    /*
     * Expand the va_list using va_arg.
     * The va_list args contains all the variables from USER_new_dataset_name.
     */
    tag_t owner = va_arg (args, tag_t);
    tag_t dataset_type = va_arg (args, tag_t);
    tag_t relation_type = va_arg (args, tag_t);
    const char* basis_name = va_arg (args, const char *);
    char** dataset_name = va_arg (args, char **);
    logical* modifiable = va_arg (args, logical *);
    /*
     * If cust1 wants all other customizations to be executed, set
     *     *decision = ALL_CUSTOMIZATIONS;
     * else if cust1 wants only its customization to be executed, set
     *     *decision = ONLY_CURRENT_CUSTOMIZATION;
     * else if cust1 wants only the base functionality to be executed, set
     *     *decision = NO_CUSTOMIZATIONS;
     */
    /* Write your custom code here. */
    return ifail;
}
Note
• The va_list argument in the CUSTOM_EXIT_ftn_t function should match the
arguments of the base user exit or server exit function in the CUSTOM_register_exit
function.
• To register pre- or post-actions against Teamcenter messages, you must use the
USER_init_module method, for example:
CUSTOM_register_exit("lib_myco_exits", "USER_init_module",
(CUSTOM_EXIT_ftn_t)register_init_module);
If your site wants to execute another site's customization in addition to your own, follow
these steps:
1. Both the cust1 and cust2 sites should follow the sections above to register their customization
for USER_ exits.
2. Both sites should build their cust1.dll/sl/so and cust2.dll/sl/so custom libraries to share them.
4. Add the custom library names in the TC_customization_libraries preference in the Options
dialog box, accessed from the Edit menu in My Teamcenter:
TC_customization_libraries=
cust1
cust2
5. The preference values should match the library names. For example, the cust1 site should have
the cust1.dll/sl/so file and the cust2 site should have the cust2.dll/sl/so file.
6. The custom libraries of the cust1 and cust2 sites (which would be cust1.dll/sl/so and
cust2.dll/sl/so, respectively) should be in the library path:
• LD_LIBRARY_PATH for Solaris or Linux
CUSTOM_register_callbacks
This function registers customizations for all customization contexts registered in the Options dialog
box (accessed from the Edit menu in My Teamcenter):
int CUSTOM_register_callbacks ( void )
This function loads the custom library and calls the entry point function pointer to register custom
exits. The entry point function contains the custom registrations for the required USER_ exits.
CUSTOM_register_exit
This ITK function should be called only in the USER_preint_module function in the user_init.c file.
It should not be called anywhere else. This function registers a custom exit (a custom function
pointer) for a given USER_ exit function:
int CUSTOM_register_exit ( const char* context, /* <I> */
const char* base_ftn_name, /* <I> */
CUSTOM_EXIT_ftn_t custom_ftn /* <I> */
);
• context
Specifies the context in which this custom exit has to be registered. It is the name of the
customization library (for example, GM, Ford, Suzuki).
• base_ftn_name
Specifies the name of the USER_ exit for which the custom exit must be registered (for example,
USER_new_dataset_name).
• custom_ftn
Specifies the name of the custom exit (a custom function pointer).
CUSTOM_execute_callbacks
This function executes the custom callbacks registered for a particular USER_ exit:
int CUSTOM_execute_callbacks ( int* decision, /* <O> */
const char* ftn_name, /* <I> */
variables /* <I> */
);
• decision
Returns one of the following options, indicating how the customization should be applied:
o ALL_CUSTOMIZATIONS
o ONLY_CURRENT_CUSTOMIZATION
o NO_CUSTOMIZATIONS
• ftn_name
Specifies the name of the USER_ exit.
• variables
The variables that need to be passed to the custom exit (a custom function pointer).
For each custom library registered in the Options dialog box (accessed from the Edit menu in
My Teamcenter), this function gets the corresponding custom function pointer and executes it.
This function is called in all the USER_ functions in user exits and server exits. The va_list must
be expanded in the custom function.
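As an illustration only (not the product source), a USER_ exit that delegates to registered custom exits
might look like the following sketch. The argument list mirrors the USER_new_dataset_name variables
shown in the earlier cust1 example and is assumed here.
#include <tccore/custom.h>
#include <user_exits/user_exits.h>

/* Sketch: a USER_ exit delegating to registered custom exits before any default behavior. */
int USER_new_dataset_name( tag_t owner, tag_t dataset_type, tag_t relation_type,
                           const char* basis_name, char** dataset_name, logical* modifiable )
{
    int decision = NO_CUSTOMIZATIONS;
    int ifail    = CUSTOM_execute_callbacks( &decision, "USER_new_dataset_name",
                                             owner, dataset_type, relation_type,
                                             basis_name, dataset_name, modifiable );
    if( decision == NO_CUSTOMIZATIONS )
    {
        /* No custom exit took over, so run the default (base) behavior here. */
    }
    return ifail;
}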
CUSTOM_execute_callbacks_from_library
This function executes the custom callbacks registered in a particular library for a particular USER_
exit by:
• Getting the corresponding custom function pointer and executing it.
• decision
Returns one of the following options, indicating how the customization should be applied:
o ALL_CUSTOMIZATIONS
o ONLY_CURRENT_CUSTOMIZATION
o NO_CUSTOMIZATIONS
• lib_name
Specifies the name of the customization context.
• ftn_name
Specifies the name of the USER_ exit.
• variables
Specifies the variables that need to be passed to the custom exit (a custom function pointer).
Example
The following code (using cust1 as the site) shows registering a custom handler in a custom exit:
extern int cust1_action_handler(EPM_action_message_t msg) {
    /* insert custom handler code here */
    return ITK_ok;
}
extern DLLAPI int cust1_register_custom_handlers(int *decision, va_list args) {
int rcode = ITK_ok;
*decision = ALL_CUSTOMIZATIONS;
rcode = EPM_register_action_handler("cust1-action-handler", "",
cust1_action_handler);
if (rcode == ITK_ok)
fprintf(stdout, "cust1-action-handler successfully registered!\n");
else
fprintf(stdout, "WARNING - cust1-action-handler NOT registered!\n");
return rcode;
}
extern DLLAPI int cust1_register_callbacks () {
CUSTOM_register_exit ( "cust1", "USER_gs_shell_init_module",
(CUSTOM_EXIT_ftn_t) cust1_register_custom_handlers);
return ( ITK_ok );
}
2. To register the property methods, follow this sample registration in the following code:
USER_prop_init_entry_t icsTypesMethods[] =
{
{ (char *)"WorkspaceObject", icsInitWorkspaceObjectProperties, NULL
}
};
int npTypes = sizeof ( icsTypesMethods )/sizeof( USER_prop_init_entry_t );
for ( i = 0; i < npTypes; i++ )
{
// First, find out if a base method is already registered for
// TCTYPE_init_user_props_msg.
ifail = METHOD_find_method( icsTypesMethods[i].type_name,
TCTYPE_init_user_props_msg, &initUserPropsMethod );
if( ifail == ITK_ok )
{
if ( initUserPropsMethod.id != 0 )
{
// A base method is already registered, so add this method
// as a post-action on the existing base method.
ifail = METHOD_add_action( initUserPropsMethod,
METHOD_post_action_type,
icsTypesMethods[i].user_method,
icsTypesMethods[i].user_args );
}
else
{
// No method is registered yet, so register this method
// as the base method.
ifail = METHOD_register_method ( icsTypesMethods[i].type_name,
TCTYPE_init_user_props_msg,
icsTypesMethods[i].user_method,
icsTypesMethods[i].user_args,
&methodId );
}
}
}
• Action
Note
This function should be called in the user_gsshell.c file.
4. Give the new root template a name and select another root template to base this template on.
Click OK.
6. Select the task action where you want the handler to execute, and then select the handler.
7. Select the arguments needed for your handler and type their values.
8. Click the plus button to add the handler to the task action. When you are done, close the
Handlers dialog box.
Note
These functions relocate only the files associated with the dataset. If the job is released
for an item, folder, form, and dataset, this function relocates only the files belonging to
the dataset.
• smp_cr_mv_hl.c
Contains the USER_cr_init_module function. This function should be called in the
USER_gs_shell_init_module function.
Once called, it registers a new SMP-auto-relocate-file action handler. When a job is released, if it
contains a dataset, the dataset is relocated to the FLRELDIR directory.
• smp_cr_mv_fl.c
Contains the relocate_released_dataset_file function. This function's purpose is to relocate files
associated with the datasets being released. Using the tag of the release job as input, it finds all
target objects. If any of the target objects is a dataset, all files referenced by it are copied to the
operating system directory specified by FLRELDIR. The directory is not created by this function.
• sample_cr_move_files_main.c
Works as a standalone ITK program. It logs into Teamcenter and initializes the AE, PSM and
SA modules. Next, it creates the FLRELDIR operating system directory. It then searches for all
released jobs containing datasets and relocates these dataset files to the FLRELDIR directory.
The user-id variable is the Teamcenter user name; password is the Teamcenter user password;
and group is the Teamcenter user group name.
Memory allocation
If you need to manually allocate memory when writing code for the Teamcenter server, use
MEM_alloc instead of malloc.
MEM_alloc is part of the BASE_UTILS_API group, one of several functions that parallel standard C
memory allocation functions. Use MEM_free to release memory back to the system.
More information about these and other functions can be found in the Integration Toolkit Function
Reference, which is part of the Developer References for Customization.
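A minimal sketch of the pattern follows; it assumes MEM_alloc takes a byte count like malloc and that
count is defined elsewhere. Confirm the exact signature in the Integration Toolkit Function Reference.
/* Allocate an array of tags on the server side and release it when done. */
tag_t* tags = (tag_t*)MEM_alloc( count * sizeof( tag_t ) );
if( tags != NULL )
{
    /* ... fill and use the array ... */
    MEM_free( tags );
}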
Method arguments
Method arguments extracted using va_arg must be extracted in the sequence indicated in the
Integration Toolkit Function Reference. For example:
tag_t prop_tag = va_arg(args, tag_t);
char **value = va_arg(args, char**);
Note
The Integration Toolkit Function Reference is not available in PDF format. It is available
only in the Teamcenter HTML Help Collection.
Caution
A method registered against this message does not exist for most objects. Therefore
the METHOD_find_method, METHOD_add_action, and some other functions cannot
be used unless another method is registered. The METHOD_find_method function
does not consider this an error and does not return a failure code. It merely returns
a null method_id.
• PROP_ask_value_type_msg
Returns the value of a property.
Note
This message is called by most ITK functions and internally by Teamcenter. However,
some ITK functions (for example, Form, FL, AE, and so on) may be bypassing this
code at this time.
Caution
Never call the PROP_ask_value_type function from inside a method registered
against the PROP_ask_value_type_msg function on the same property. This forces
Teamcenter into a recursive loop. Instead use the PROP_get_value_type function to
get the raw data value.
• PROP_set_value_type_msg
Is passed whenever the user chooses OK or Apply in the Attributes dialog box or sets a value
in the workspace, Structure Manager, or with ITK. The PROP_assign_value_type function
can be used.
Caution
Never call the PROP_set_value_type function from inside a method registered
against the PROP_set_value_type_msg function on the same property. This forces
Teamcenter into a recursive loop. Instead use the PROP_assign_value_type function
to store the raw data value.
Value caching
Value caching is a technique used with run-time properties to improve performance. This is valuable
when the value of a derived property is not likely to change during the current Teamcenter session.
To cache values, use the following technique:
1. The first time a run-time property is called using the PROP_ask_value_string method, cache the
string value with the PROP_assign_type method.
2. On subsequent PROP_ask_value_string method calls, use the cached value by calling the
PROP_get_type method.
Overview
Bulk property retrieval loads all the requested properties for a list of objects in one request,
significantly reducing the number of database trips. Bulk property retrieval is used by the Teamcenter
services framework to improve property retrieval performance for all service operations.
A run-time property computes its value when requested based upon other persistent properties.
It may run a database query to retrieve various persistent objects and their properties from the
database. A run-time property can register a bulk loader function with the bulk property retrieval
framework. Bulk property retrieval calls the bulk loader functionality with all of the objects upon
which the run-time property is requested, so that the bulk loader can perform bulk query and bulk
loading of persistent objects.
To implement a custom bulk loader:
1. Register the bulk loader.
• Locking
Support for many different applications accessing the same data concurrently.
• Referential integrity
Protection against the deletion of data used by more than one application.
• Access controls
Support for the access control lists attributed to objects. The access controls themselves are
manipulated by functions provided by the Access Manager.
To understand how the Persistent Object Manager (POM) works, you need to know about classes,
attributes, instances, indexes, and tags:
• Attribute
Attributes are the individual data fields within classes. Each attribute has a unique name within
the class that identifies it. It also has a definition that defines the type of data object and its
permitted values. For example, a Human has attributes of weight and eye color. Weight is a real
number. Eye color may be one of a set (for example, blue, grey, brown).
You can declare attributes to be class variables. This is an attribute that has the same value for all
instances of the class. When the attribute value is changed by modifying any instances, the value
changes for all instances of the same class (or subclass). This is also known as a static variable.
• Class
A class is a definition of a data structure containing one or more attributes. Classes are organized
in hierarchy, with superclasses and subclasses. A class inherits attributes from its parent
(superclass) and passes its attributes on to its children. POM allows only single inheritance; that
is, any class can have only one parent, one grandparent, and so on.
In the following figure, classes B and C can inherit from (in other words, be direct subclasses of)
only one immediate superclass, class A. Classes B and C cannot also inherit from class XX.
(Figure: Classes)
Each class has a unique name. For example, the Human class is a subclass of Primate, which is
itself a subclass of Mammal. Primate inherits attributes such as hairiness and warmbloodedness
from the definition of Mammal. In addition to the inherited attributes, it adds its own, such as
binocular vision. As Human inherits from Primate, it therefore has all the attributes of a Primate.
In addition, it can have specific attributes, such as I.Q.
• Index
Indexes are created on one or more of the attributes that exist within a class definition. Indexes
can only be defined across attributes that belong to the same class.
Indexes aid performance. For example, two attributes of the Human class are weight and
eye color. To list all Humans who have an eye color of brown, you would probably search the
database for all instances of the Human class that match the applied criteria.
Although an RDBMS can perform this type of search, it is slow. The alternative is to have
an index on the eye color attribute. Internally, the database maintains its index of where to find
objects with the required attribute values. Therefore, the search for Humans with brown eyes is
much faster if an index is put on that attribute.
To optimize performance when defining indexes, you have to anticipate what searches the
application is likely to require.
Note
Indexes improve query performance when searching on the indexed attribute, but at
a small time penalty for save, update, and delete operations, which must update both
the instance and the index.
An index can be defined as unique. This ensures that the POM checks for an existing value
for an index when a class with attributes containing an index is instantiated. The uniqueness
is tested only upon a save to the database. While an instance remains in a user's session,
duplicates may exist.
For example, the POM_user class has a name attribute that has a unique index defined on it.
This prevents two POM users with the same name from being registered with the POM.
When the unique index spans more than one attribute, the uniqueness must be for all the
attributes combined. For example, the POM_user class with first_name and second_name
attributes, both covered with a unique index, allows (John Smith and John Brown) and (John
Smith and Bill Smith), but not (John Smith and John Smith), to be registered with POM.
• Instance
Instances are the individual data items created from class definitions. Making an instance from a
class definition is instantiating that class. Each instance has the attributes defined in the class,
but has its own values for these attributes. For example, an individual Human might have a
weight value of 160 pounds and an eye color value of brown.
Some classes are defined to be uninstantiable, meaning they cannot be instantiated. An example
of such a class would be Mammal.
• Tag
A tag is a short-lived reference to an instance. The tag persists for the length of a session and
can be exchanged between processes in a session.
To exchange a reference to an instance between processes of different sessions, the tag must
be converted to a persistent identifier called a handle by the sender, and from handle to tag by
the receiver. The receiver's tag can be different in value to the sender's as they are in different
sessions.
The data manipulation services allow you to execute various actions on instances.
Note
Do not use these methods on WorkspaceObject business objects. Only use these
methods if a more specific function is not available.
• POM_class_of_instance
Finds the class to which an instance belongs. This function returns the class in which the
instance was instantiated.
Use the POM_loaded_class_of_instance function to find the class of an instance as it has
been loaded.
• POM_copy_instances
Creates copies of a set of existing instances that must be loaded in the current session. The new
instances are of the same class as the source. The attributes are maintained as follows:
o Application-maintained attribute values are copied.
The copy is created as an instance of the class in which the original instance was instantiated.
This applies even if the original instance is loaded as an instance of a superclass.
• POM_create_instance
Creates an instance of a specified class.
Caution
Only use this method when you cannot use the
Teamcenter::BusinessObject::create(Teamcenter::CreateInput*) method.
You can start the POM module without setting a current group. Because of this, it is also possible
that an attempt can be made to create an instance for which an owning group cannot be set.
When this happens, the function fails with no current group.
A function fails with an improperly defined class when the class is only partially defined. For
example, if you define an attribute to have a lower bound but have not yet set the value for
this lower bound.
When an instance is created, all attributes with an initial value defined are set to that value.
System-maintained attributes, such as creation date and owning user, are set by the system.
All other attributes are empty. These must be set to allowable values before the instance can
be saved.
• POM_delete_instances
Enables the deletion of a specified instance. The POM_delete_instances_by_enq function
enables the execution of a specified enquiry followed by the deletion of the resulting instances
from the database.
Note
The instances to be deleted must not be referenced or loaded by this or any other
POM session. This is how the POM maintains referential integrity. Newly created
instances (in other words, those that have not yet been saved) cannot be deleted. This
is because deletion is a database operation and these newly, unsaved instances do
not yet exist there. Unloading these instances effectively deletes them. If a class
is application-protected, then instances of that class can only be deleted by that
application.
• POM_instances_of_class
Returns all the instances of a specified class.
• POM_load_instances
Loads and creates complete instances of a class. To load instances that are of different classes,
use the POM_load_instances_any_class function. However, the class that they are loaded as
must be common to all loaded instances as in the following figure.
Instances can be loaded as instances of their own class.
(Figure: Class hierarchy)
When the modify_lock argument for this function is set, the instances are locked for modification;
therefore, no other session can lock them for modification at the same time. When they are
not locked for modification but are set to either read_lock or no_lock, other sessions can lock
them for modification or read as required.
The POM_load_instances_by_enq function executes a specified enquiry and loads the resulting
set of instances from the database, creating in-memory copies of the instances.
The POM_is_loaded function checks whether a specified instance is loaded in the caller's local
memory. Newly created instances are counted as loaded, as they are already present in the local
memory having been created during that session.
To check whether the given instance is loaded in the caller's local memory for modification,
use the POM_modifiable function.
• POM_order_instances
Orders an array of loaded POM instances on the values of attributes that are common to all
the instances.
For this operation to be successful, the following criteria must be applied:
• POM_refresh_instances
Refreshes a specified set of instances when the instances are all of the same class.
The refresh operation is equivalent to unloading and then reloading an instance without
having to complete the two separate operations. This operation causes the attribute values to be
refreshed to the corresponding values in the database.
Occasions when the refresh operation can be used are:
o When you need to update the instance that is currently being worked on to reflect changes
that may have occurred since the user's copy was made.
o When you decide to scrap all changes that have been made to the instance.
To refresh a specified set of instances that are not all of the same class, use the
POM_refresh_instances_any_class function. The instances are refreshed in their actual class.
These refresh functions allow the specified instance's locks to be changed.
The POM_refresh_required function checks whether a specified instance, which is loaded for
read in the caller's local memory, requires a refresh by testing if the database representation
of the instance has been altered.
• POM_save_instances
Saves newly created instances, or changes made to a set of instances, to the database. An
additional option of this function is the ability to discard the in-memory representation.
The POM_save_required function checks whether a specified instance was modified. The
instance may be newly created or loaded for modification in the caller's local memory storage.
The function determines modification by testing whether the local memory representation of the
instance was altered since the instance was loaded.
After an object instance is saved, it remains locked. The application must unlock the object by
calling the POM_refresh_instances function.
• POM_unload_instances
Unloads specified instances from the in-memory representation without saving any changes that
were made to the in-memory representation.
To save changes that were made to the in-memory representation, use the POM_save_instances
function.
• POM_references_of_instance
Enables you to find all instances and classes in the database which contain references to
a specified instance.
The level of the search through the database is determined by the n_levels argument. The
where_to_search argument determines which domain to search (in other words, local memory,
database or both).
The DS argument limits the search to the working session (things not yet committed to the
database). The DB argument limits the search to the database only. DS and DB together
extend the search to both domains.
When a class is found to be referencing an instance, there is a class variable which contains
that reference.
An inquiry to the Persistent Object Manager (POM) is a pattern that can be applied to saved instances
of a specified class (and its subclasses) to select a subset of those instances.
The inquiry takes the form of restrictions on attribute values and combinations of the same, using the
logical operators AND and OR. For example, if you have a furniture class with a color attribute and a
table subclass with a height attribute, it is possible to create an inquiry to list all instances of the table
class with the restriction that the color is red.
Once an inquiry is created, it can be combined with other inquiries (such as height less than three
feet). It can then either be executed (to return a list of tags of those instances that match the inquiry)
or it can be passed to the POM_load_instances_by_enq, POM_select_instances_by_enq, and
POM_delete_instances_by_enq functions which then load, select, or delete, as appropriate, those
instances which satisfy the inquiry.
The following functions enable the user to create, but not to execute, an inquiry of a specified
attribute or array-valued attributes:
• POM_create_enquiry_on_type
• POM_create_enquiry_on_types
To execute a specified inquiry and return a list of tags of instances as specified by the inquiry, use
the POM_execute_enquiry function.
To combine two inquiries use the POM_combine_enquiries function.
The POM_order_enquiry function specifies the order in which to produce the instances.
• Each attribute may be ordered in ascending or descending order. The instances are ordered on
the value of each attribute in turn, starting with the first attribute in the array.
• This function can be used to override a previously specified order. Therefore, for instance,
the same enquiry_id can be used more than once to produce different orderings of the same
instances.
Note
It is not possible to order on either variable length arrays (VLA) or arrays.
Note
Use these methods only if you must manage attributes outside the scope of a business
object.
• POM_modify_type
• POM_modify_types
• POM_modify_null
• POM_modify_nulls
• POM_modify_type_by_enq
• POM_modify_types_by_enq
• POM_modify_null_by_enq
• POM_modify_nulls_by_enq
• All or some of the specified array-valued attribute to the given array of values, or to NULL, for
specified instances in the same class.
• The specified attribute to the given value for all instances found by an enquiry.
• All or some of the specified array-valued attributes to the given array of values for all instances
found by an enquiry.
These functions modify data directly in the database and not on loaded instances.
The following functions enable the user to set the value of a specified attribute in an instance:
• The POM_set_attr_type and POM_set_attr_null functions change the specified attribute to the
specified value or to NULL in the local memory only, for instances when the specified instances
are all in the same class.
• The POM_set_attr_types and POM_set_attr_nulls functions change all or some of the specified
array-valued attributes to the specified array of values or to NULL, in the local memory only,
for instances all in the same class.
When changing the specified array-valued attribute to NULL, the attribute must exist for all the
instances, therefore there must be a class with that attribute and instances must be in that class or in
a subclass of it. This also applies to nonarray attributes.
You can use the following functions within the POM module to retrieve the value of a specified
attribute in an instance, for loaded instances only:
• POM_ask_attr_type
Returns the value of the attribute for a specified instance. The value returned is the value of that
instance in the local memory; this could differ from the corresponding value in the database.
• POM_ask_attr_types
Returns n-values of values of the array-valued attribute for a specified instance starting from the
position start. The values returned are those of the value of that instance in the local memory;
these could differ from the corresponding values in the database.
The POM provides the following functions for conversion and comparison operations:
• POM_tag_to_string
• POM_string_to_tag
• POM_compare_dates
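For illustration, the following is a minimal sketch of converting a tag to its string form (the session-independent handle described earlier) and back. The exact signatures of POM_tag_to_string and POM_string_to_tag, and whether the returned string should be freed with MEM_free, are assumptions to verify in the Integration Toolkit Function Reference.
tag_t original_tag = null_tag;   /* assume this is replaced by a valid tag from your session */
tag_t restored_tag = null_tag;
char *handle = NULL;
POM_tag_to_string (original_tag, &handle);   /* tag -> persistent string handle (assumed signature) */
POM_string_to_tag (handle, &restored_tag);   /* string handle -> tag in this session (assumed signature) */
MEM_free (handle);                           /* assumed to be POM-allocated; check the <OF> marking */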
Variable length arrays (VLA) are attributes that can have a variable number of elements of their
declared type. The data type of the VLA and class type (for typed references) or string length (for
notes and strings) are declared when the attribute is defined.
When an instance of a class with a VLA attribute is created, that VLA attribute is initialized to a
zero length.
The following functions are used to extend or remove elements from the VLA. All these functions, with
the exception of POM_reorder_attr because it can be applied to any array, are specific to VLAs:
• POM_insert_attr_types
Inserts the specified values into a specified VLA attribute at a specified position. The maximum
value for the specified position is the length of the VLA.
• POM_clear_attr
Clears all values from the specified VLA, effectively setting its length to 0 (zero).
• POM_remove_from_attr
Removes a specified number of elements from the VLA.
• POM_append_attr_types
Adds the specified values to the end of the specified VLAs.
• POM_length_of_attr
Returns the length of a specified VLA (in other words, the number of values in the VLA).
• POM_reorder_attr
Reorders elements within a VLA.
Check reference
To perform reference checking to find the type of a specified attribute of an instance, use the
POM_check_reference function.
The POM_check_reference function permits checking for consistency between the class definition
and the instance information.
Note
If you create a new member object for yourself, you cannot immediately set your current
group (using the POM_set_group function) to that specified. You must save the member
object first.
User, group and member objects are instances of application-protected classes, though some of the
attributes of these instances are declared to be public read so that they can be queried using the
normal calls. The effect of making these classes application protected is to restrict access to certain
attributes (for example, changing the password of a user is only possible by someone who knows the
old password or by a system administrator) and to prevent every user of the system from being able to
generate new users or delete bona fide users.
Create
The following functions create instances of users, groups, and members:
• POM_new_user
• POM_new_group
• POM_new_member
The group administrator of the group in which the member resides, as well as the system
administrator, can create members of a group.
Users must be associated with a group using the member functions before the user and group
become registered.
Initialize
The following functions initialize users, groups, and members:
• POM_init_user
• POM_init_group
• POM_init_member
Delete
The following functions delete users, groups and members:
• POM_delete_user
• POM_delete_group
• POM_delete_member
These functions are required because of the application protection. They take tags of instances
of groups, users or members as appropriate or any subclass of the same. If the subclass is also
application protected, then that application must also be identified.
Only a system administrator can use these functions.
Note
When a user logs into the POM, the appropriate user and group objects are locked so that
they cannot be deleted. This lock is updated whenever the user changes the group.
After a member object is deleted, the user specified cannot log onto the listed group.
Set group
To set the group to the named group, use the POM_set_group function. Alternatively, the
POM_set_group_name function sets a specified name for a specified group.
The POM_set_default_group function sets the default group for the current user. The user object
must be loaded for modification and then must be saved again afterwards for this change to be
permanent.
The POM_set_user_default_group function sets the tag of the specified user's default group. This
is the group which the user is logged into if the POM_start function is given an empty string for the
group_name argument. Only a system administrator or that individual user can use this function.
To enable a user that is logging into a group to have group system administrator privileges, use
the POM_set_group_privilege function with the privilege value of 1. A privilege value of 0 (zero)
ensures that a person logging into the specified group only has the privileges associated with an
ordinary user. The POM_set_member_is_ga function sets the group administration attribute for the
specified group to the specified value, either 0 (zero) or 1.
A member is a loaded form of the database link that gives the user authority to log onto the chosen
group (in other words, gives the user membership to the group).
The POM_set_member_user function sets the user (tag) attribute of the specified member instance
to the supplied user tag. The POM_set_member_group function sets the group (tag) attribute of the
specified member instance to the supplied group tag.
• POM_is_user_sa advises if the current user is logged into the POM under a privileged group.
• POM_ask_default_group returns the tag for the default group of the current user.
• The POM_ask_group function returns the name of the group, as handed to the POM_set_group
or POM_start functions, and the group_tag, as returned by the POM_set_group function.
• The POM_get_user function returns the name of the logged in user, as handed to the POM_start
function, and the user_tag, as returned by the POM_start function.
Application protection
Classes can be associated with a particular application, so that only that application can perform the
following operations on instances of these classes:
• Set
• Ask
• Delete
• Load
Ask is required to get at the information.
• Save
This fails if there are any empty attributes, and set is required to fill them in.
• Modify
Modify is the equivalent to load, set, save, and unload.
The application protection affects all attributes defined in the given class in all instances of it or its
subclasses. Attributes can be unprotected by declaring them as public read or public modify in the attribute
definition descriptor.
The association between a class and an application is not inherited by subclasses of the class.
Those subclasses may have an association with some applications. This could be the same one as
the class itself.
The POM_identify_application function is used by an application to identify itself to the POM so that
the POM allows it to perform protected operations on classes with which the application is associated.
On completion of the protected operations, the application becomes anonymous. POM does not
allow further operations without the application re-identifying itself.
For an application to be registered with POM and thereby obtain the application ID and code that is
required by the POM_identify_application function, use the POM_register_application function.
The code number returned by the POM_register_application function is a random number which,
in turn, is used as a level of security checking.
The POM_start function is the first call to the POM module and therefore it logs the user into
the POM. If you call any other function before this function, then that function fails except for
POM_explain_last_error which always works.
To log out of the POM, use the POM_stop function.
Password manipulation
To set the password for the specified user, use the POM_set_password function.
Note
Because of the severity of this operation, it can only work on one user at a time.
The POM_check_password function checks the password for a user by comparing the given
password with the stored password.
For security, the stored password is encrypted and never decrypted or returned through the interface.
The encryption algorithm is not public.
Note
When Security Services are installed and the TC_SSO_SERVICE environment variable
is set in the tc_profilevars file, passwords are managed by an external identity service
provider rather than Teamcenter. In this case, the POM_set_password function returns
POM_op_not_supported (the ability to change a password in the rich client and ITK is
disabled).
Rollback
The rollback facility enables you to restore a process to a previous state. The state to which a process
is restored depends on the position of the specified markpoint.
Markpoints are used at your discretion and are placed using the POM_place_markpoint function.
When a markpoint is placed, an argument is returned advising you of the identifying number of
that markpoint.
To roll back to an earlier state in the process, use the POM_roll_to_markpoint function, specifying the
markpoint to which you want to roll back. This function restores the local memory and the database to
the corresponding state they were in when the specified markpoint was set.
As an overhead saving, rollback can be turned off with the POM_set_env_info function.
The rollback removes the caller's changes to the database. Any changes to the database by other
users are unaffected. If other users have made changes which prevent the caller's changes being
reversed (for example, deleted or changed values in an instance in the database), then the rollback
fails.
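The following is a minimal sketch of the markpoint pattern described above. The exact argument lists of POM_place_markpoint and POM_roll_to_markpoint (an integer markpoint identifier returned and then passed back, plus a returned state flag) are assumptions to check in the Integration Toolkit Function Reference, and something_went_wrong is a hypothetical flag set by your own error handling.
int markpoint = 0;
logical state_changed = 0;
int something_went_wrong = 0;              /* hypothetical; set by your own error handling */
POM_place_markpoint (&markpoint);          /* remember the current state */
/* ... create, modify, or delete instances ... */
if (something_went_wrong)
{
    /* restore local memory and the database to the state at the markpoint */
    POM_roll_to_markpoint (markpoint, &state_changed);
}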
Time-outs
You can use the POM_set_timeout function to set a session wide time-out value to determine the
period during which a POM function repeatedly tries to lock an instance for change (for example,
when calling the POM_load_instances function).
After the time-out period, if the POM function that is to perform the change is unsuccessful at getting
the lock, it returns the appropriate failure argument as described in the relevant function.
The POM_ask_timeout function advises the user what time-out period has been set for the session
in seconds.
Errors
In the event of a failure, the POM_explain_last_error function explains why the last failing POM
function failed. The information that is returned by this function includes:
• The actual error code.
The POM_describe_error function returns a string describing the specified error code.
The description returned by this function is the same as that returned by POM_explain_last_error.
For convenience the POM_describe_error function can be called with any error code at any time.
Control of logons
The control of logons, which allows or prevents other users from accessing POM, is set by the
POM_set_logins function. When the system administrator needs to perform an operation that
requires only one POM user, the system administrator can use the POM_set_logins function to
prevent any other user from logging into the POM. In the unusual event of the process which
disabled logons terminating abnormally, the system administrator does have the authority to log on
using the appropriate group and password as identification, thus overriding the effect of the earlier
POM_set_logins function call.
However, all POM users can use the POM_ask_logins function to determine whether other users are
currently allowed to access the POM.
Environmental information
The POM_set_env_info function offers alternative methods of enabling and disabling operations as
explained in its description. This function sets the following environmental information:
• Enable or disable rollback.
• Enable or disable attribute value checking in the local memory (for example, for the duplication of
unique attributes or for checking against upper/lower bounds).
• Enable or disable journalling (for example, for writing a function's arguments to the system log
when debugging and bug reporting).
The POM_ask_env_info function returns the environmental information that was specified by the
POM_set_env_info function.
SQL
The application data stored by the POM module is accessible using direct SQL statements for read
and write purposes. The POM_sql_view_of_class function creates a database view of the selected
attributes in the given class. The view is given the name supplied and the columns are aliased with
the column names that are given in the argument for the column names.
The attributes can be from a specified class or any superclass, but they cannot include any array
types.
• The POM_type_of_att function returns the array type of the attribute identified by its class_id
and attr_id. The returned types of arrays are:
o Nonarray (for example, an integer)
o Small array
o Large array
• POM_attr_to_apid
Converts the external class and attribute identifiers into the internal integer attribute identifier.
• POM_class_to_cpid
Converts the external class identifier into the internal integer representation for a class.
• POM_get_char_ordering
Returns the ordering on the character set in use. This is useful when creating an enquiry which
selects on a string-valued attribute.
• POM_describe_token
Returns a string describing the specified token (for example, for the POM_int token, which is a
defined value, the returned string might be integer).
• POM_free
Calls MEM_free. Frees the space that the POM allocated internally when returning an array
of information.
Values that can be handed to this function are indicated by the letters <OF> after the argument
in the function syntax.
Note
POM enquiries are not the same as saved queries. They are completely different
mechanisms.
The newer enquiry system has several advantages over the old one:
• It supports more features present in SQL and allows you to perform more complex queries.
• You can ask for more than one attribute to be returned. The old system only returned the tags
of matching objects.
• You can call the POM_load_instances_possible_by_enquiry function using a new enquiry and
have all the matching instances loaded efficiently.
The definition of all the API calls for the new enquiry system are in the enq.h header file and the
Integration Toolkit Function Reference.
Note
The Integration Toolkit Function Reference is not available in PDF format. It is available
only in the Teamcenter HTML Help Collection.
If you have legacy code, you can continue to use the old enquiry system. However, only the newer
enquiry system is described here. To make the code more readable, the return values of the API
calls are ignored in the examples. In ITK, you should always check return values before proceeding
to the next call.
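One minimal pattern for checking return values, assuming the standard ITK_ok success code, is a small macro such as the one below; the macro name and the error handling inside it are illustrative only and should be adapted to your own conventions.
#define CHECK_ITK(call)                                  \
    {                                                    \
        int check_rc = (call);                           \
        if (check_rc != ITK_ok)                          \
        {                                                \
            /* report or log the failure here, then */   \
            /* stop instead of calling the next API */   \
            return check_rc;                             \
        }                                                \
    }
/* Usage inside a function that returns an int status code: */
CHECK_ITK (POM_enquiry_create ("a_unique_query_id"));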
There are a number of basic building blocks for an enquiry. These are described below and then the
examples explain what you actually do with these and what API calls to use.
Each enquiry has a unique name used to identify it. You specify the name when you create the
enquiry and must specify this name for anything you do with this enquiry. Once you create an enquiry,
it exists in memory until you delete it. Therefore, you can access it using the unique name even if the
tag for it has gone out of scope. It also means you must clean it up explicitly once you’ve finished with
it. An enquiry is made up of a number of smaller parts:
• SELECT attributes
The values you want to get back from the enquiry.
• ORDER BY clauses
If you want your results to come back in a particular order, you can use an ORDER BY. Note that
this adds a SELECT attribute for the thing you are ordering by.
• Attribute expressions
These allow you to control what data you want returned. These expressions form the where part
of the generated SQL statement. There are two main types of attributes:
o Fixed attributes
These are the same every time you run the enquiry. They can be simple values (for example,
name = ‘Bill’) or used to create joins between tables (described below).
o Bind variables
These allow different values to be passed to the database each time the enquiry is run.
These typically would be used either to find related objects or for values that you do not know
at compile time (for example, the value of tags).
When you add attribute expressions, you normally give both the name of the attribute and the
class it is from.
• Where clause
This combines a number of attribute expressions using different set expressions. The most
common one is AND.
These parts are the ones that you come across most often. There are a few more that can be used;
they are listed later in this document.
For example, compare the different building blocks with a very simple SQL statement:
SELECT attr1 from table1 where attr3='test' ORDER BY attr1
In outline, building and running an enquiry involves the following steps:
1. Create the enquiry and give it a unique name.
2. Add the SELECT attributes whose values you want returned.
3. Add any attributes you need for where clauses, joins, or other items.
4. Create expressions with the attributes from the previous step to make a where clause.
5. Set the where clause on the enquiry.
6. Execute the enquiry and read the results.
7. Clean up.
Below are a number of examples that illustrate these steps for different cases.
Note
In the example, expr1 is any string.
The program can then iterate through the results and copy the results. Once that is done, the results
array should be freed up using the MEM_free function.
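A minimal sketch of that pattern follows, assuming the enquiry selects a single string-valued attribute; the cast applied to each report cell is an assumption and depends on the type of the selected attribute.
int rows = 0, cols = 0, i;
void ***report = NULL;
POM_enquiry_execute ("a_unique_query_id", &rows, &cols, &report);
for (i = 0; i < rows; i++)
{
    const char *value = (const char *) report[i][0];   /* column 0 = first SELECT attribute */
    /* copy or use the value here before freeing the report */
}
MEM_free (report);   /* free the results array when finished */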
POM queries are largely built in terms of class and attribute names. Where a class name is required,
you can use the name of any POM class or any persistent Teamcenter type. (POM enquiry does
not support queries on run-time Teamcenter types.) On execution, the query returns results from
the instances of the class or type you use. You do not need to know the names of the underlying
database tables that POM uses to store class or type instances.
An attribute is always associated with a class. Where an attribute name is required, you can use the
name of any of the class’s attributes or any of its persistent compound properties. (There are some
cases where compound properties are not supported.) You do not need to know the names of the
underlying database columns that POM uses to store the attributes.
For example, if you want to get the object_desc attribute of an item, you would refer to it using the
Item class and object_desc attribute. It does not matter that the object_desc attribute is actually on
the workspaceobject parent class, or what the column in which the data is stored is named. The one
exception is the puid attribute. This is, in effect, the tag of the object and its unique identifier. If you
want to load an object or follow a reference from one object to another, you must use this attribute.
POM enquiry supports queries on both attributes and on certain compound properties. Whether
a compound property is supported depends on how it is defined. POM enquiry only supports
compound properties:
• Whose values are stored in the database (not those calculated at run time).
• That are defined in terms of traversing POM references or Teamcenter relations (not those
defined in terms of traversing from an item to its item revision).
• That are defined in terms of traversing to objects whose type is specified (rather than in terms
of traversing untyped references).
• If the compound property value comes from a form, where the class used to store the value is
the one specified in the current property definition (rather than some other class specified in a
previous version of the definition).
Reusing an enquiry is more efficient. However, be careful in case your code is called just before a
POM_roll_to_markpoint function call or something else that undoes any allocations in the POM.
To protect yourself against this, use a static variable and the POM_cache_for_session function.
For example:
static tag_t my_tag = null_tag;
if (my_tag == null_tag)
{
/* Create enquiry objects, set up bind values, etc */
…
POM_cache_for_session( &my_tag );
my_tag = 1;
}
else
{
/* just set up bind values */
…
}
/* now execute the enquiry and use the results */
…
The POM_cache_for_session function adds the variable passed to it to the data that is rolled
back during an UNDO operation or a POM_roll_to_markpoint function call. If one of these occurs,
your variable is set back to its previous value (in this case null_tag) and you know you have
to create the enquiry again.
Simple join
Each entry in the process_stage_list array points to an EPMTask using its puid field. The example
in the following code is more complicated. The information you want is the object_name attribute on
all the EPMTasks referenced by the workspace object:
/* Create enquiry and give it a name */
POM_enquiry_create ("find_referenced_task");
/* And now join the workspace object to the EPMTasks via the
process_stage_list attribute on the workspace object */
POM_enquiry_set_join_expr ("find_referenced_task" , "task_join" ,
"workspaceobject" , "process_stage_list" ,
POM_enquiry_equal , "EPMTask" , "puid" );
To fully understand and be able to create queries, you must understand the following important
terms and concepts:
Class alias
This is the equivalent of a table alias in the SQL-92 standard. The class alias is very useful if
you want to define a query that has a self-join. The classic example of this is employees and
managers (Find the list of managers and their employees).
Pseudo class
The pseudo class represents a compound attribute of a given Teamcenter class. The attribute
must be either a:
• Large array (LA) or
• Variable length array (VLA)
This is useful if you want to query the LA or VLA based on a condition on one of the array attributes.
Set expression
This is the way of combining the result of two or more queries into one using (UNION,...). The
queries must be union-compatible (in other words, they must have the same number of attributes
selected and they must respectively be of the same domain).
Expression
There are two types of expressions that can be built in an SQL statement. These are:
• Logical expressions
Any expression that results in a logical value (true or false). For example, A or B, ( A and B )
or C, A=B, A>=B, A IS NULL, A in [a,b,c].
• Function expressions
Any expression that uses a function to manipulate an attribute. For example, SUBSTR (
A,1,14), UPPER(A), LOWER(A), MAX(A), MIN(A), A+B, A/B.
Escape character
Teamcenter provides two special wildcard characters: POM_wildcard_character_one and
POM_wildcard_character_any. You can set these values by calling the POM_set_env_info
function. However, if the string contains any of these wildcard characters as part of the data, you
have no way of telling Teamcenter how to treat these values. For this reason, there is a token for
the escape character called POM_escape_character.
You can set this character in the same way as the POM_wildcard_character_any or
POM_wildcard_character_one characters, by calling the POM_set_env_info function with the
POM_escape_character token as its first argument. If any of the wildcard characters is part of the
data, then you must escape it with the chosen escape character, as in the sketch after the
following note.
Note
• If the escape character is found in the string and is not followed by either the
wildcard characters or itself, the POM removes it from the string.
• The PUID, PSEQ and PVAL keywords are reserved Teamcenter words.
PUID can be used as an attribute of any POM class. PSEQ and PVAL can only be
used as attributes of a pseudo-class.
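A short sketch of escaping wildcard data follows. It assumes an enquiry named a_unique_query_id already exists, and that '%' has been registered as POM_wildcard_character_any and '\' as POM_escape_character through POM_set_env_info (not shown); both character choices are assumptions.
/* Match object descriptions that literally start with "100%": the first '%' is part of
   the data and is escaped, the trailing '%' is the any-character wildcard. */
POM_enquiry_set_string_expr ( "a_unique_query_id", "like_expr", "item", "object_desc",
POM_enquiry_like, "100\\%%" );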
Query specification
The SQL-92 standard query specification consists of the following clauses:
• SELECT select-list
• FROM class-list
• [ WHERE search-condition ]
• [ ORDER BY attribute-list ]
• [ GROUP BY attribute-list ]
• [ HAVING search-condition ]
Building expressions
The POM enquiry system APIs contain a list of functions that can be used to build expressions.
• POM_enquiry_set_attr_expr
This is the recommended way of creating expressions on class.attr. It is used as follows:
1. Create a typed value: POM_enquiry_set_type_value (enqid, valid, n_values, values, property),
where property can be either POM_enquiry_const_value or POM_enquiry_bind_value.
2. Create the expression: POM_enquiry_set_attr_expr (enqid, exprid, class, attr, operator, valid).
Note
valid cannot be NULL. If the right-hand side of the expression is NULL, such as for the
unary operators UPPER and LOWER, you must create an empty value (char *valid="").
• POM_enquiry_set_expr
This function is used to combine two expressions into one. For example: expr1 OR expr2,
expr1 AND expr2, UPPER ( SUBSTR ( A, 1,14 ) ), or expr1 theta-op expr2, where theta-op
can be =, >, >=, <, <=, and so on, as in (a+b)*c.
Note
If you have an expression such as (class.attr1 + class.attr2)/class.attr3 in the
select clause, you need to add an expression to the where clause as follows:
( (class.attr3 > 0) AND (class.attr3 IS NOT NULL) ).
Practical example
For example, if you want to find the name and type of all objects created since March 8, 2005,
the SQL statement is:
SELECT object_name, object_type FROM WorkspaceObject WHERE creation_date >=
08-Mar-2005 00:00
The POM enquiry APIs required to create this query are shown in the following code:
void ***report;
int n_rows, n_cols, row, column;
const char *select_attr_list[] = { "object_name", "object_type"};
date_t aDate;
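The remaining calls are sketched below, following the same patterns as the later examples in this chapter; the identifier names, the zero-based month value, and the use of a single-element date list are assumptions.
POM_enquiry_create ("a_unique_query_id");
POM_enquiry_add_select_attrs ("a_unique_query_id", "WorkspaceObject", 2, select_attr_list);
/* Initialize aDate to 8 March 2005, 00:00:00 (months assumed to be zero-based) */
aDate.day = 8; aDate.month = 2; aDate.year = 2005;
aDate.hours = 0; aDate.minutes = 0; aDate.seconds = 0;
POM_enquiry_set_date_value ("a_unique_query_id", "aunique_value_id", 1, &aDate,
POM_enquiry_bind_value);
/* creation_date >= aDate */
POM_enquiry_set_attr_expr ("a_unique_query_id", "auniqueExprId_1", "WorkspaceObject",
"creation_date", POM_enquiry_greater_than_or_eq, "aunique_value_id");
POM_enquiry_set_where_expr ("a_unique_query_id", "auniqueExprId_1");
/* Execute, then delete the enquiry when finished */
POM_enquiry_execute ("a_unique_query_id", &n_rows, &n_cols, &report);
POM_enquiry_delete ("a_unique_query_id");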
Data in the report variable can be read as report[i][j] where i is the row and j is the column.
As can be seen from the previous example, the FROM clause was not defined. It is automatically
derived from the other clauses of the query.
SELECT clause
The SELECT list of an SQL statement consists of a list of attributes and expressions. The operator of
the expression must be a valid one supported by the RDBMS in use. The following are supported:
+, -, /, *, substr, upper, lower, concat, to_number, to_date, cpid_of, uid_of, max, min, avg,
count all, count distinct, and sum.
Refer to the Supported operators table for valid token names supported by the enquiry system.
For example, if you want to find all the objects created by the users Ahmed, Hewat, and James,
the SQL statement would be:
SELECT pa.object_name,pr.user_name FROM pom_application_object pa, user ur,
person pr WHERE ur.person = pr.puid and pr.puid = pa.puid and
pr.user_name in ('Ahmed','Hewat','James')
The POM enquiry APIs required to create this query are as follows:
As can be seen from the previous SQL statement, the object_name and user_name attributes do
not belong to the same class. Therefore you need to call the POM_enquiry_add_select_attrs
function twice, as shown in the following code:
/*create variables for POM_enquiry_execute*/
int rows,cols;
void *** report;
/* Create a query*/
POM_enquiry_create ("a_unique_query_id")
/*create two variables called select_attr_list1 and select_attr_list2 */
const char * select_attr_list1[] = { "object_name"};
const char * select_attr_list2[] = { "user_name" };
/*create a variable value_list */
const char * value_list[] = {"Ahmed","Hewat", "James"};
POM_enquiry_add_select_attrs ( "a_unique_query_id", "pom_application_object", 1,
select_attr_list1 );
POM_enquiry_add_select_attrs ( "a_unique_query_id", "person", 1, select_attr_list2
);
/*Create the join expr ur.rpersonu = pr.puid*/
POM_enquiry_set_join_expr ("a_unique_query_id","auniqueExprId_1","user","person",
POM_enquiry_equal,"person","puid"
);
/*Create the join expr pr.puid = pa.puid*/
POM_enquiry_set_join_expr ("a_unique_query_id","auniqueExprId_2","person","puid",
POM_enquiry_equal,"pom_application_object","puid"
);
/*Create a string value for the list of user names.*/
POM_enquiry_set_string_value ( "a_unique_query_id", "aunique_value_id", 3, value_list,
POM_enquiry_bind_value );
/*Create the expression user_name in ('Ahmed','Hewat','James').*/
POM_enquiry_create_attr_expr ( "a_unique_query_id","auniqueExprId_3", "person",
"user_name",
POM_enquiry_in, "aunique_value_id" );
/*Combine the expr_1 and expr_2 using AND.*/
POM_enquiry_set_expr ( "a_unique_query_id","auniqueExprId_4", "auniqueExprId_1",
POM_enquiry_and, "auniqueExprId_2" );
/*Combine the expr_3 and expr_4 using AND and set the where clause of the query.*/
POM_enquiry_set_expr ( "a_unique_query_id","auniqueExprId_5", "auniqueExprId_3",
POM_enquiry_and, "auniqueExprId_4" );
POM_enquiry_set_where_expr ( "a_unique_query_id","auniqueExprId_5" );
/*Execute the query. Delete it with POM_enquiry_delete when completely finished.*/
POM_enquiry_execute ( "a_unique_query_id",&rows,&cols,&report);
Notice that the POM_enquiry_bind_value token specifies that the query can be rerun with different
values without the need to re-evaluate the query. Also, by using bind variables, you are telling the
database server to cache the execution plan of the SQL statement, which saves time.
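Because the value was created as a bind value, a sketch of rerunning the same enquiry with a different set of user names (hypothetical names shown) could look like this; it assumes the value identifier can simply be set again before the next execution, as in the POM_cache_for_session example earlier.
const char * other_users[] = {"Smith", "Jones", "Lee"};   /* hypothetical user names */
POM_enquiry_set_string_value ( "a_unique_query_id", "aunique_value_id", 3, other_users,
POM_enquiry_bind_value );
POM_enquiry_execute ( "a_unique_query_id", &rows, &cols, &report );
/* Delete the enquiry only when it is no longer needed */
POM_enquiry_delete ( "a_unique_query_id" );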
FROM clause
The FROM clause of the query is automatically generated.
WHERE clause
The WHERE clause of a query restricts the result to only those rows that verify the condition supplied.
For example, if you want to find the name and type of all objects created between date1 and date2,
the SQL statement is:
SELECT object_name,object_type FROM pom_application_object WHERE creation_date
BETWEEN date1 and date2
The POM enquiry APIs required to create this query are shown in the following code:
/*create variables for POM_enquiry_execute*/
int rows,cols;
void *** report;
/*create a query*/
POM_enquiry_create ("a unique query id")
/*create a variable called select-attr_list*/
const char * select_attr_list[] = { "object_name","object_type"};
/*add the list to the select clause of the query*/
POM_enquiry_add_select_attrs ("a unique query id" , "pom_application_object", 2 ,
select_attr_list );
/*create two date_t variables.*/
date_t aDate1;
date_t aDate2;
date_t date_list[2];
/*initialize adate1. */
aDate1.day = day1; aDate1.month = month1; aDate1.year = year1; aDate1.hours = 0;
aDate1.minutes = 0;
aDate1.seconds = 0;
/*initialize adate2. */
aDate2.day = day2; aDate2.month = month2; aDate2.year = year2; aDate2.hours = 0;
aDate2.minutes = 0;
aDate2.seconds = 0;
date_list[0]=aDate1;
date_list[1]=aDate2;
/*create a date value object.*/
POM_enquiry_set_date_value ("a unique query id" , "auniquevalueId", 2 , date_list
, POM_enquiry_bind_value );
/*create an expression for pcreation_date between aDate1 and aDate2 using the value
"auniquevalueId"*/
POM_enquiry_set_attr_expr ("a unique query id","auniqueExprId","pom_application_object",
"creation_date",POM_enquiry_between,"auniquevalueId");
/*set the where clause search condition.*/
POM_enquiry_set_where_expr ( "a unique query id","auniqueExprId");
/*execute the query*/
POM_enquiry_execute ( "a unique query id",&rows,&cols,&report);
/*Delete the query*/
POM_enquiry_delete ( "a unique query id" );
ORDER BY clause
The ORDER BY list of an SQL statement consists of a list of attributes and expressions. The operator
of the expression must be a valid one supported by the RDBMS in use.
Refer to the Supported operators table for valid token names supported by the enquiry system.
For example, if you want to find all the objects created by the users Ahmed, Hewat and James
ordered by the user_name attribute, the SQL statement is:
SELECT pa.pobject_name,pr.user_name FROM ppom_application_object pa, puser ur,
pperson pr WHERE ur.rpersonu = pr.puid and pr.puid = pa.puid and pr.puser_name
in ('Ahmed','Hewat','James') ORDER BY user_name
As can be seen from the previous SQL statement, the object_name and user_name attributes
do not belong to the same class, therefore you need to call the POM_enquiry_add_select_attrs
function twice. The POM enquiry APIs required to create this query are shown in the following code:
/*create variables for POM_enquiry_execute*/
int rows,cols;
void *** report;
/* Create a query*/
POM_enquiry_create ("a_unique_query_id")
/*create two variables called select_attr_list1 and select_attr_list2 */
const char * select_attr_list1[] = { "object_name"};
const char * select_attr_list2[] = { "user_name" };
/*create a variable value_list */
const char * value_list[] = {"Ahmed","Hewat", "James"};
POM_enquiry_add_select_attrs ( "a_unique_query_id", "pom_application_object", 1,
select_attr_list1 );
POM_enquiry_add_select_attrs ( "a_unique_query_id", "person", 1, select_attr_list2
);
/*Create the join expr ur.rpersonu = pr.puid*/
POM_enquiry_set_join_expr ("a_unique_query_id","auniqueExprId_1","user","person",
POM_enquiry_equal,"person","puid"
);
/*Create the join expr pr.puid = pa.puid*/
POM_enquiry_set_join_expr ("a_unique_query_id","auniqueExprId_2","person","puid",
POM_enquiry_equal,"pom_application_object","puid"
);
/*Create a string value for the list of user names.*/
POM_enquiry_set_string_value ( "a_unique_query_id", "aunique_value_id", 3, value_list,
POM_enquiry_bind_value );
/*Create the expression user_name in ('Ahmed','Hewat','James').*/
POM_enquiry_create_attr_expr ( "a_unique_query_id","auniqueExprId_3", "person",
"user_name",
POM_enquiry_in, "aunique_value_id" );
/*Combine the expr_1 and expr_2 using AND.*/
POM_enquiry_set_expr ( "a_unique_query_id","auniqueExprId_4", "auniqueExprId_1",
POM_enquiry_and, "auniqueExprId_2" );
/*Combine the expr_3 and expr_4 using AND.*/
POM_enquiry_set_expr ( "a_unique_query_id","auniqueExprId_5", "auniqueExprId_3",
POM_enquiry_and, "auniqueExprId_4" );
/*Set the where clause of the query*/
POM_enquiry_set_where_expr ( "a_unique_query_id","auniqueExprId_5" );
/*set the order by clause.*/
POM_enquiry_add_order_attr ( "a_unique_query_id","Person","user_name",
POM_enquiry_asc_order
);
/*Execute the query.*/
POM_enquiry_execute ( "a_unique_query_id",&row,&cols,&report);
/*Delete the query*/
POM_enquiry_delete ( "a_unique_query_id" );
GROUP BY clause
This is the same as the ORDER BY clause except for the sorting order. Just replace the
POM_enquiry_add_order_attr function with the POM_enquiry_add_group_attr function and the
POM_enquiry_add_order_expr function with the POM_enquiry_add_group_expr function.
HAVING clause
This is the same as the WHERE clause. Replace the POM_enquiry_set_where_expr function with
the POM_enquiry_set_having_expr function.
Subqueries
For example, if you want to find the list of folders whose contents attribute contains a UGMASTER
type dataset, the SQL statement is:
SELECT f1.puid FROM folder f1 WHERE f1.contents in ( SELECT ds.puid FROM
dataset ds WHERE ds.object_type = 'UGMASTER' );
The POM enquiry APIs required to create this query are shown in the following code:
/*create variables for POM_enquiry_execute*/
int rows,cols;
void *** report;
/*create a variable select_attr_list */
const char * select_attr_list[] = {"puid"};
/* Create a query*/
POM_enquiry_create ("a_unique_query_id")
As can be seen from the above pseudo-SQL statement contains a subquery.
Hence the need for creating one.
/*Create a subquery.*/
POM_enquiry_set_sub_enquiry ( "a_unique_query_id", "subquery_unique_id" );
/* select the puid from Folder class.*/
POM_enquiry_add_select_attrs ( "a_unique_query_id", "Folder", 1, select_attr_list
);
/* select the puid from Dataset class.*/
POM_enquiry_add_select_attrs ( "subquery_unique_id", "Dataset", 1, select_attr_list
);
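A hedged continuation sketch follows; it assumes the subquery identifier can be used directly as the right-hand side of the IN expression, as indicated in the Supported operators table, and the expression and value identifiers are illustrative.
/* Restrict the subquery to UGMASTER datasets: ds.object_type = 'UGMASTER' */
POM_enquiry_set_string_expr ( "subquery_unique_id", "auniqueExprId_1", "Dataset",
"object_type", POM_enquiry_equal, "UGMASTER" );
POM_enquiry_set_where_expr ( "subquery_unique_id", "auniqueExprId_1" );
/* Outer query: f1.contents IN ( subquery ) */
POM_enquiry_set_attr_expr ( "a_unique_query_id", "auniqueExprId_2", "Folder",
"contents", POM_enquiry_in, "subquery_unique_id" );
POM_enquiry_set_where_expr ( "a_unique_query_id", "auniqueExprId_2" );
/* Execute and clean up */
POM_enquiry_execute ( "a_unique_query_id", &rows, &cols, &report );
POM_enquiry_delete ( "a_unique_query_id" );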
Note
The POM_enquiry_in function can be replaced by the POM_enquiry_not_in,
POM_enquiry_exists, or POM_enquiry_not_exists function.
For IN, NOT IN, EXISTS and NOT EXISTS operators, see your RDBMS reference manual.
Use of class_alias
For example, if you want to find the list of folders whose contents attribute contains a folder of type
'override' and its contents has an item revision X, the pseudo-SQL is:
select f1.puid from folder f1,folder f2, item_revision itr where f1.contents
= f2.puid and f2.type = 'override' and f2.contents = itr.puid and
itr.object_name = 'X';
The folder class appears twice in the from clause and therefore needs a class alias.
The POM enquiry APIs required to create this query are shown in the following code:
/*create variables for POM_enquiry_execute*/
int rows,cols;
void *** report;
/*create a variable select_attr_list */
const char * select_attr_list[] = {"puid"};
/* Create a query*/
POM_enquiry_create ("a_unique_query_id")
/*Create a class alias for the Folder class.*/
POM_enquiry_create_class_alias ( "a_unique_query_id", "Folder",1,"Folder_alias");
/* select the puid from Folder class.*/
POM_enquiry_add_select_attrs ( "a_unique_query_id", "Folder", 1, select_attr_list
);
/*Create the join expr f1.contents = f2.puid*/
POM_enquiry_set_join_expr ("a_unique_query_id","auniqueExprId_1","Folder","contents",
POM_enquiry_equal,"Folder_alias","puid"
);
/*Create the expr f2.type = 'override'.*/
POM_enquiry_set_string_expr ("a_unique_query_id","auniqueExprId_2","Folder_alias","type",
POM_enquiry_equal,"override"
);
/*Create the join expr f2.contents = itr.puid*/
POM_enquiry_set_join_expr ("a_unique_query_id","auniqueExprId_3","Folder_alias",
"contents",POM_enquiry_equal,"Item_revision","puid"
);
/*Create the expr itr.object_name = 'X'.*/
POM_enquiry_set_string_expr ("a_unique_query_id","auniqueExprId_4","Item_revision",
"object_name",POM_enquiry_equal,"X"
);
/*Combine the expr_1 and expr_2 using AND.*/
POM_enquiry_set_expr ( "a_unique_query_id","auniqueExprId_5", "auniqueExprId_1",
POM_enquiry_and, "auniqueExprId_2" );
/*Combine the expr_3 and expr_4 using AND.*/
POM_enquiry_set_expr ( "a_unique_query_id","auniqueExprId_6", "auniqueExprId_3",
POM_enquiry_and, "auniqueExprId_4" );
/*Combine the expr_5 and expr_6 using AND.*/
POM_enquiry_set_expr ( "a_unique_query_id","auniqueExprId_7", "auniqueExprId_5",
POM_enquiry_and, "auniqueExprId_6" );
/*Set the where clause of the query*/
POM_enquiry_set_where_expr ( "a_unique_query_id","auniqueExprId_7" );
/*Execute the query.*/
POM_enquiry_execute ( "a_unique_query_id",&row,&cols,&report);
/*Delete the query*/
POM_enquiry_delete ( "a_unique_query_id" );
Use of pseudo_class
For example, if you want to find all versions of the dataset 'X', the pseudo-SQL is:
select ds.puid from dataset ds, revisionAnchor rva where ds.rev_chain_anchor
= rva.puid and rva.revisions.PSEQ = 0 and ds.object_name = 'X'
The PSEQ attribute in the where clause is not a POM_attribute and the 'revisions' is not a
POM_class. Since the only classes you are allowed to query on are POM_classes, you have
to convert the revisions attribute of the revisionAnchor into a POM_class. To do this, create a
pseudo-class on the attribute revisions of the revisionAnchor.
Note
You can only create pseudo-classes for VLA or LA attributes.
The POM enquiry APIs required to create this query are shown in the following code:
/* Create variables for POM_enquiry_execute*/
int rows,cols;
void *** report;
/* Create a variable select_attr_list */
const char * select_attr_list[] = {"puid"};
/* Create a variable int_val=0*/
const int int_val = 0;
/* Create a query*/
POM_enquiry_create ("a_unique_query_id")
/* Create a pseudo-class for the revisionAnchor.revisions attribute.*/
POM_enquiry_set_pseudo_calias ( "a_unique_query_id", "RevisionAnchor","revisions",
"class_revisions");
/* Select the puid from Dataset class.*/
POM_enquiry_add_select_attrs ( "a_unique_query_id", "Dataset", 1, select_attr_list);
/* Create the expr rva.revisions.pseq = 0.*/
/* First create an int value.*/
POM_enquiry_set_int_value ( "a_unique_query_id","auniqueValueId", 1, &int_val,
POM_enquiry_bind_value
);
POM_enquiry_set_attr_expr ("a_unique_query_id","auniqueExprId_1","class_revisions",
"pseq",POM_enquiry_equal,"auniqueValueId"
);
/* NOTICE: the 'pseq' attribute is a reserved keyword in TC/POM; it can be used as an
attribute of the pseudo-class. The same
applies to pval and puid. */
/* Create the expr ds.object_name = 'X'.*/
POM_enquiry_set_string_expr ("a_unique_query_id","auniqueExprId_2","Dataset",
"object_name",POM_enquiry_equal,"X"
);
/* Create the join expr ds.rev_chain_anchor = rva.puid */
POM_enquiry_set_join_expr ("a_unique_query_id","auniqueExprId_3","Dataset",
"rev_chain_anchor",POM_enquiry_equal,"revisionAnchor","puid");
/* Combine the expr_1 and expr_2 using AND.*/
POM_enquiry_set_expr ( "a_unique_query_id","auniqueExprId_4", "auniqueExprId_1",
POM_enquiry_and, "auniqueExprId_2" );
/* Combine the expr_3 and expr_4 using AND.*/
POM_enquiry_set_expr ( "a_unique_query_id","auniqueExprId_5", "auniqueExprId_3",
POM_enquiry_and, "auniqueExprId_4" );
/* Set the where clause of the query*/
POM_enquiry_set_where_expr ( "a_unique_query_id","auniqueExprId_5" );
/* Execute the query.*/
POM_enquiry_execute ( "a_unique_query_id",&row,&cols,&report);
/* Delete the query*/
POM_enquiry_delete ( "a_unique_query_id" );
Use of Set-Expression
INTERSECTION
For example, if you want to find all the objects that are common to two folders, the pseudo SQL is
as follows:
select f1.contents from folder f1 where f1.object_name=’Compare’ INTERSECTION
select f2.contents from folder f2 where f2.object_name=’Reference’
The pseudo SQL consists of two queries that are combined by the INTERSECTION set operator.
In this case, one of the queries must be the outer query and the other one must be scoped to it.
Therefore, the non-outer query must be created as a subquery of the outer one.
The POM enquiry APIs required to create this query are shown in the following code:
/* Create variables for POM_enquiry_execute*/
int rows,cols;
void *** report;
/* Create a variable select_attr_list */
const char * select_attr_list[] = {"contents"};
/* Create a query*/
POM_enquiry_create ("a_unique_query_id")
/* Select the contents from Folder class.*/
POM_enquiry_add_select_attrs ( "a_unique_query_id", "Folder", 1, select_attr_list );
/* Now limit it to just the folder we want to see */
/* Folder names are not always unique so for a real example you should */
/* use the tag of the folder */
POM_enquiry_set_string_expr ("a_unique_query_id","auniqueExprId_1","Folder",
"object_name", POM_enquiry_equal, "Compare" );
/* Set the outer query where clause.*/
POM_enquiry_set_where_expr ( "a_unique_query_id","auniqueExprId_1" );
/* POM implements set queries as subqueries and so we need to create one */
POM_enquiry_set_sub_enquiry ( "a_unique_query_id", "subquery_unique_id" );
/* Create a class alias for the Folder class. The folder class appears twice - once */
/* in the outer query and the other time in the subquery. */
POM_enquiry_create_class_alias ( "subquery_unique_id", "Folder",1,"Folder_alias");
/* Select the contents from other folder.*/
POM_enquiry_add_select_attrs ( "subquery_unique_id", "Folder_alias", 1,
select_attr_list);
POM_enquiry_set_string_expr ("a_unique_query_id","auniqueExprId_2","Folder_alias",
"object_name", POM_enquiry_equal, "Reference" );
/* Set the subquery where clause.*/
POM_enquiry_set_where_expr ( "subquery_unique_id","auniqueExprId_2" );
/* Add the set expression */
POM_enquiry_set_setexpr ("a_unique_query_id”, POM_enquiry_intersection,
"subquery_unique_id");
/* Execute the outer query.*/
POM_enquiry_execute ( "a_unique_query_id",&row, &cols, &report);
/* Delete the query*/
POM_enquiry_delete ( "a_unique_query_id" );
DIFFERENCE
For example, if you want to find all the objects that are in a reference folder that do not exist in
a named folder, the pseudo SQL is:
select f1.contents from folder f1 where f1.object_name=’Reference’ DIFFERENCE
select f2.contents from folder f2 where f2.object_name=’Compare’
The pseudo SQL consists of two queries that are combined by the DIFFERENCE set operator. In this
case, one of the queries must be the outer query and the other one must be scoped to it. Therefore,
the non-outer query must be created as a subquery of the outer one.
The POM enquiry APIs required to create this query are shown in the following code:
/*create variables for POM_enquiry_execute*/
int rows,cols;
void *** report;
/*create a variable select_attr_list */
const char * select_attr_list[] = {"contents"};
/* Create a query*/
POM_enquiry_create ("a_unique_query_id")
/* select the contents from Folder class.*/
POM_enquiry_add_select_attrs ( "a_unique_query_id", "Folder", 1, select_attr_list );
/* Now limit it to just the folder we want to see */
/* Folder names are not always unique so for a real example you should */
/* use the tag of the folder */
POM_enquiry_set_string_expr ("a_unique_query_id","auniqueExprId_1","Folder",
UNION
For example, you have two forms: FORM1 and FORM2. FORM1 is an engineering type form that
has two attributes: weight and material. FORM2 is a production type form and has two attributes:
cost and effectivity_date.
If you want to find all engineering type objects that weigh more than wt1 and all production type
objects that cost less than cost1, the pseudo SQL is:
Select f1.puid From FORM1 Where f1.type = 'engineering' And f1.weight > wt1
UNION Select f2.puid From FORM2 Where f2.type = 'Production' And
f2.cost < cost1
The POM enquiry APIs required to create this query are shown in the following code:
/* create variables for POM_enquiry_execute*/
int rows,cols;
void *** report;
/* Create a variable select_attr_list */
const char * select_attr_list[] = {"puid"};
/* Create a query*/
POM_enquiry_create ("a_unique_query_id")
/* Select the puid from FORM1 class.*/
POM_enquiry_add_select_attrs ( "a_unique_query_id", "form1", 1, select_attr_list);
/* Create the expr f1.type = 'engineering'*/
POM_enquiry_set_string_expr ( "a_unique_query_id","auniqueExprId_1","form1","object_type"
POM_enquiry_equal,"engineering");
/* Create the expr f1.weight > wt1*/
POM_enquiry_set_double_expr ( "a_unique_query_id","auniqueExprId_2","form1","weight",
POM_enquiry_greater_than,wt1);
/* Combine expr1 and expr2 using AND operator.*/
POM_enquiry_set_expr ( "a_unique_query_id", "auniqueExprId_3", "auniqueExprId_1",
POM_enquiry_and,"auniqueExprId_2" );
/* Set the where clause of the outer query*/
POM_enquiry_set_where_expr ( "a_unique_query_id","auniqueExprId_3" );
/* Create a subquery*/
POM_enquiry_set_sub_enquiry ("a_unique_query_id", "subquery_unique_id" );
/* Select the puid from FORM2 class.*/
POM_enquiry_add_select_attrs ( "subquery_unique_id", "form2", 1, select_attr_list
);
/* Create the expr f2.type = 'Production'*/
POM_enquiry_set_string_expr ( "subquery_unique_id","auniqueExprId_4","form2","object_type",
POM_enquiry_equal,"Production");
/* Create the expr f2.cost <cost1*/
POM_enquiry_set_double_expr ( "subquery_unique_id","auniqueExprId_5","form2",
"cost",POM_enquiry_less_than,cost1 );
/* Combine expr4 and expr5 using AND operator.*/
POM_enquiry_set_expr ( "subquery_unique_id", "auniqueExprId_6", "auniqueExprId_4",
POM_enquiry_and, "auniqueExprId_5" );
/* Set the where clause of the subquery*/
POM_enquiry_set_where_expr ( "subquery_unique_id","auniqueExprId_6" );
/* Now we need to set the set-expression of the outer query. */
POM_enquiry_set_setexpr ( "a_unique_query_id" ,"setexpr",POM_enquiry_union ,
"subquery_unique_id" );
/* Execute the query.*/
POM_enquiry_execute ( "a_unique_query_id",&row,&cols,&report);
/* Delete the query*/
POM_enquiry_delete ( "a_unique_query_id" );
POM_enquiry_substr
If you want to create the expression SUBSTR (item.desc,1,5)='blob', you can use the enquiry
APIs as shown in the following code:
/*Create a variable int_list*/
const int int_list[]={1,5};
/*Create a string variable*/
const char *str_list[]={"blob"};
/* First we need to create an int value:*/
POM_enquiry_set_int_value ( "a_unique_query_id","auniqueValueId_1",1,int_list,
POM_enquiry_const_value);
/* Create the expression SUBSTR ( item.desc, 1,5) using the int value.*/
POM_enquiry_set_attr_expr ( "a_unique_query_id", "auniqueExprId_1","item","desc",
POM_enquiry_substr,"auniqueValueId_1")
/* Create a string value = 'blob'.*/
POM_enquiry_set_int_value ( "a_unique_query_id","auniqueValueId_2",1,str_list,
POM_enquiry_bind_value);
/* Create the expression SUBSTR ( item.desc, 1,5) = 'blob'.*/
POM_enquiry_set_expr ( "a_unique_query_id","auniqueExprId_2", "auniqueExprId_1",
POM_enquiry_equal, "auniqueValueId_2");
POM_enquiry_cpid_of
For example, if you want to find all the dataset objects of a folder, the pseudo SQL is:
Select contents From Folder Where cpid_of ( contents ) = dataset cpid
The way to implement this query in the old query system is to use a join between dataset and folder
contents. To avoid the performance penalty of using the join, you can use the POM_enquiry_cpid_of
special operator.
Note
POM_enquiry_cpid_of can only be used with typed or untyped_reference type attributes.
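A hedged sketch of using the cpid_of operator follows; obtaining the integer cpid of the dataset class (for example, with POM_class_to_cpid) is assumed and not shown, and the identifier names are illustrative. The pattern mirrors the other expression examples in this chapter.
int dataset_cpid[1];                      /* assume this holds the dataset class cpid */
/* Build cpid_of ( contents ) on the Folder class */
POM_enquiry_set_attr_expr ( "a_unique_query_id", "cpid_expr", "Folder", "contents",
POM_enquiry_cpid_of, "" );
POM_enquiry_set_int_value ( "a_unique_query_id", "cpid_value", 1, dataset_cpid,
POM_enquiry_bind_value );
/* cpid_of ( contents ) = dataset cpid */
POM_enquiry_set_expr ( "a_unique_query_id", "where_expr", "cpid_expr",
POM_enquiry_equal, "cpid_value" );
POM_enquiry_set_where_expr ( "a_unique_query_id", "where_expr" );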
POM_enquiry_upper
If you want to create the expression UPPER(item.desc) = UPPER('blob'), you can use the enquiry
APIs as shown in the following code:
/*Create a string variable*/
const char *str_list[]={"blob"};
/* Create the expression UPPER( item.desc).*/
POM_enquiry_set_attr_expr ( "a_unique_query_id", "auniqueExprId_1","item","desc",
POM_enquiry_upper,"");
/* Create a string value = 'blob'.*/
POM_enquiry_set_string_value ( "a_unique_query_id","auniqueValueId_2",1,str_list,
POM_enquiry_bind_value);
/* Create the expression UPPER( item.desc) = UPPER('blob').*/
POM_enquiry_set_expr ( "a_unique_query_id","auniqueExprId_2", "auniqueExprId_1",
POM_enquiry_equal, "auniqueValueId_2");
Note
When you apply the upper or lower operator to the left hand side of an expression, the
operator is applied automatically to the right hand side of the expression.
POM_enquiry_countdist
If you want to create the expression COUNT(DISTINCT item.puid ), you can use the enquiry APIs as
shown in the following code:
/* Create the expression COUNT(DISTINCT item.puid).*/
POM_enquiry_set_attr_expr ( "a_unique_query_id", "auniqueExprId_1","item","puid",
POM_enquiry_countdist,"");
Note
You cannot use count ( * ). Instead use COUNT ( ALL class.puid ) by using the
POM_enquiry_countall operator. To return only the number of distinct rows, use COUNT
(DISTINCT class.puid ) by using the POM_enquiry_countdist operator.
The same applies to MAX, MIN, AVG, and SUM, except that these apply to other attributes of
the class, not the puid.
• If you create a query with more than MAX_BIND_VALUE bind variables, the new query system
converts the bind values into constants and constructs a query that can be successfully executed.
However, the SQL statement is not cached and therefore risks flooding the System Global Area
(SGA).
Supported operators
The following table shows the operators that are supported in the POM enquiry module.
Operator                         Left        Right            Where it can be used
POM_enquiry_or                   expr        expr             WH
POM_enquiry_and                  expr        expr             WH
POM_enquiry_equal                attr/expr   expr/value       WH
POM_enquiry_not_equal            attr/expr   expr/value       WH
POM_enquiry_greater_than         attr/expr   expr/value       WH
POM_enquiry_greater_than_or_eq   attr/expr   expr/value       WH
POM_enquiry_less_than            attr/expr   expr/value       WH
POM_enquiry_less_than_or_eq      attr/expr   expr/value       WH
POM_enquiry_between              attr/expr   expr/value       WH
POM_enquiry_not_between          attr/expr   expr/value       WH
POM_enquiry_in                   attr/expr   value/subquery   WH
POM_enquiry_not_in               attr/expr   value/subquery   WH
POM_enquiry_exists               attr/expr   subquery         WH
POM_enquiry_not_exists           attr/expr   subquery         WH
POM_enquiry_like                 attr/expr   value            WH
POM_enquiry_not_like             attr/expr   value            WH
POM_enquiry_union                query       query            -
POM_enquiry_difference           query       query            -
POM_enquiry_intersection         query       query            -
POM_enquiry_countdist            attr        null             SH
POM_enquiry_countall             attr        null             SH
POM_enquiry_max                  attr        null             SH
POM_enquiry_min                  attr        null             SH
POM_enquiry_avg                  attr        null             SH
POM_enquiry_sum                  attr        null             SH
POM_enquiry_plus                 attr/expr   attr/expr/value  SW
POM_enquiry_minus                attr/expr   attr/expr/value  SW
POM_enquiry_divide               attr/expr   attr/expr/value  SW
POM_enquiry_multiply             attr/expr   attr/expr/value  SW
POM_enquiry_substr               attr/expr   value            SW
POM_enquiry_upper                attr/expr   null             SW
POM_enquiry_lower                attr/expr   null             SW
POM_enquiry_ascii                attr/expr   null             SW
POM_enquiry_concat               attr/expr                    S
POM_enquiry_ltrim                attr/expr   null             SW
POM_enquiry_rtrim                attr/expr   null             SW
POM_enquiry_length               attr/expr   null             SW
POM_enquiry_is_null              attr/expr   null             W
POM_enquiry_is_not_null          attr/expr   null             W
POM_enquiry_uid_of               attr        null             W
POM_enquiry_cpid_of              attr        null             W
POM_enquiry_tonumber             attr        null             W
POM_enquiry_todate               attr        null             W
POM_array_length_equals          attr        null             W
POM_array_length_not_equals      attr        null             W
Note
SWH means [S]elect [W]here [H]aving clauses and [-] means the operator is applied
to the outer query.
The POM can load the objects returned by a query. This has a number of benefits:
• If you want to load all the objects returned by a query, you can make just one call which both
executes the query and loads the objects.
• The loading of objects is more efficient than a normal POM_load_instances (or loadInstances
in CXPOM) and results in fewer SQL trips.
1. Make sure all the objects you want returned are of the same class. The POM does not support
loading objects of different classes using this method.
2. Create the enquiry as described above ensuring you have only one SELECT attribute, the puid
of the class you want to load. You can order the results as long as you are ordering by one
of the attributes of the class you are loading.
Examples
Below are some examples of how you might use custom, localizable text messages:
• Error messages for custom code using the following TC_API functions:
EMH_store_error_s1() through EMH_store_error_s7()
• Text messages with custom code using the following TEXTSRV_API function:
TXTSRV_get_substituted_text_resource()
• Business object type and property names using the Business Modeler IDE.
Note
Deploying a Business Modeler IDE package automatically updates the TextServer
XML files.
• TC_USER_MSG_DIR
o You must create a directory to contain your custom XML files. You must modify the
tc_profilevars script to add this environment variable and point to your new directory.
o Any text message resource contained in this location which duplicates an OOTB ID overrides
the OOTB message.
Note
Each time the tcserver process is started, the TextServer automatically loads the XML
files, picking up any changes made. There is no need to regenerate a cache; however, a
process detects changes made only when it starts, not while it is running.
Procedure
Using the Business Modeler IDE:
1. Create a new Extension for your custom code, available for:
Business Object: SearchCursor
Operation Name: BMF_SEARCHCURSOR_process_search_results
Extension Point: BaseAction
2. Write your custom extension code to filter the cacheless search results.
Your custom extension code will now be executed when a cacheless search is performed.
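The extension body itself is ordinary ITK code with the standard extension signature. The following
is only a minimal sketch: the library prefix and function name are hypothetical, and how the search
results are read from the message arguments depends on the operation's documented argument list,
which you should consult before unpacking args.
#include <stdarg.h>
#include <tc/tc.h>
#include <tccore/method.h>
/* Hypothetical BaseAction extension attached to
   BMF_SEARCHCURSOR_process_search_results. Replace the prefix and name
   with your own library's conventions and add its export macro if required. */
extern int mylib_filter_search_results ( METHOD_message_t *msg, va_list args )
{
    int ifail = ITK_ok;
    /* The operation arguments (for example, the list of result objects) are
       carried in args; unpack them according to the operation signature. */
    /* Apply your business rules here and drop the results you want filtered out. */
    return ifail;
}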
Classification
• include statements
• Main program
This program is used when creating the Business Modeler IDE extension definition so that
it becomes the entry point for the computation logic. This program registers all the attribute
dependencies and computing functions.
• Computing functions
These functions contain the logic for manipulating attributes. Each function is registered against
an autocomputed attribute, and once registered, is called directly from Teamcenter during normal
execution.
• Computing function
Value                 Description
class-id              Specifies the class ID for the attribute.
view-id               Specifies the view ID for the attribute. (If a view exists, it may be left blank.)
num-attrs             Specifies the number of attributes on which this attribute depends.
auto-compute-attrs    Specifies the dependency structure containing the dependency attribute IDs and
                      a pointer to the compute function. (The structure is explained below.)
override-flag         Specifies the flag to indicate overriding of the parent class registration.
An attribute dependency structure is a new data structure specifically developed for the
autocomputation of attributes:
typedef struct ICS_auto_compute_attr_s
{
    int auto_compute_attr_id;                          /* ID of the autocomputed attribute */
    ICS_auto_compute_function_t auto_compute_function; /* pointer to the computing function */
    int num_attr_deps;                                 /* number of dependency attributes */
    int attr_deps[ICS_MAX_CLASS_ATTR_SIZE];            /* IDs of the dependency attributes */
} ICS_auto_compute_attr_t;
c. Use the manipulated values and properties for the autocomputed attributes.
To allow this interaction with the attributes, use the following ITK APIs:
• ICS_auto_compute_get_attr_value
This API retrieves the values for the given attribute.
Parameters Description
attributeId Specifies the attribute ID to retrieve value for.
valueCount Specifies the number of values for this attribute.
values Specifies the array of values for this attribute.
formats Specifies the array of formats for values of this
attribute.
unit Specifies the array of units for values of this attribute.
• ICS_auto_compute_set_attr_value
This API sets the values for the given attribute.
Parameters Description
attributeId Specifies the attribute ID to set value for.
valueCount Specifies the number of values for this attribute.
values Specifies the values for this attribute.
unit Specifies the units for values of this attribute.
• ICS_auto_compute_get_attr_prop
This API retrieves a specific property for the given attribute.
Parameters Description
attributeId Specifies the attribute ID to retrieve property for.
property_name Specifies the property name to retrieve.
values Specifies the property value.
• ICS_auto_compute_set_attr_prop
This API sets a specific property for the given attribute.
Parameters Description
attributeId Specifies the attribute ID to set property for.
property_name Specifies the property name to set.
values Specifies the property value.
If you install sample files using the Teamcenter Environment Manager (TEM), a sample PLM XML
file is available for import in the TC_ROOT\sample\in-CLASS\auto_compute directory of your
installation. This file provides an example of creating custom logic for a Classification attribute.
A detailed description of APIs is available in the Integration Toolkit Function Reference.
Note
The Integration Toolkit Function Reference is not available in PDF format. It is available
only in the Teamcenter HTML Help Collection.
Teamcenter manages company proprietary information that is represented in metadata and in files.
Teamcenter provides protection of the metadata and access to the related files. However, the files
that are downloaded to the user’s client machine are not protected because they are no longer in
Teamcenter. Data loss prevention (DLP) solutions can control the flow of sensitive information from
the client machines. For example, DLP can control how the files are emailed, burned to a CD or
DVD, encrypted on external device, or uploaded to the Internet. Digital Guardian is an example of a
data loss prevention solution.
You identify the desired protection for the files managed in Teamcenter. When the file transfer is
completed, the DLP application is called to protect the file on the client machine. The DLP client
agent enforces the specified protection. A typical example of a classified document is a CAD
drawing; depending on the classification, this document may never be sent to a customer or may
be sent to an internal manufacturer.
Teamcenter File Management System (FMS) supports the transfer of files from the volumes to the
client machines. Support for a DLP solution is limited to the rich client, because it uses the FMS client
cache (FCC) for downloading files. Any client that does not use the FCC is not supported for
integration with a DLP solution. An extension to FMS is required to support the integration of DLP
solutions into the FMS environment.
In this implementation, you define a custom character attribute on dataset objects that represents
a classification code. If this attribute is set to any value (or any particular value of choice), then
it indicates a dataset that needs to be protected using DLP software. Setting the value of this
attribute typically happens when the dataset is created, but the mechanics of this are up to individual
customizations. The custom code that hooks into the user exit (BMF_IMF_dlp_classification_code)
needs to check whether the dataset associated with the file has this classification code set, and if
so, takes appropriate action. The BMF_IMF_dlp_classification_code user exit is called for all
file downloads.
Note
The following procedure describes how to customize your environment to enable and use
DLP. This procedure does not describe how to set up any specific DLP software, but rather
describes how to set up the environment so that DLP hooks are enabled. In addition to
performing the following steps, you must write code specific to your DLP solution, compile
it, and deploy it to invoke the actual DLP software APIs.
1. In a custom template, add a property to the Dataset business object or any subtype:

   Name                                    d4ClassCode
                                           (The prefix depends on the template. You can name the property
                                           anything you want. This name is provided to match the sample code.)
   Display name                            Classification Code
   Attribute type                          Character
   Modifiable property constant setting    Write
• Custom exit
Only use this approach when you cannot use the Business Modeler IDE to customize the
user exit.
Following is sample code for this approach:
//@<COPYRIGHT>@
//==================================================
//Copyright $2014.
//Siemens Product Lifecycle Management Software Inc.
{
CALL( POM_enquiry_set_tag_value ( qry_name, "filetag", 1, &file_tag,
POM_enquiry_bind_value ) );
}
CALL( POM_enquiry_execute ( qry_name, &n_rows, &n_cols, &values ) );
if ( n_rows >= 1 )
{
if( values[0][0] )
{
*dataset_tag = *((tag_t *)values[0][0]);
}
if( values[0][1] )
{
*class_code = *( (char *) values[0][1] );
}
}
MEM_free ( values );
return ifail;
}
3. Deploy the template changes and the binaries to the appropriate locations.
Component                 Objects
Electrical components     Functionality
                          FunctionalityRevision
Signals                   Signal
                          ProcessVariable
Electrical interfaces     GDE
                          GDEOccurrence
                          Network_Port
                          Connection_Terminal
Electrical connections    PSConnection
                          Network
                          Connection
                          GDELink
Routes                    RouteNode
                          RouteSegment
                          RouteCurve
                          RoutePath
                          RouteLocation
                          RouteLocationRev
Allocations               Allocation
                          AllocationMap
• Realized By
• Connected To
• Routed By
• Device To Connector
• Assigned Location
• Associated System
• Redundant Signal
• Process Variable
To control the behavior of objects and actions on those objects, you can set the following preferences:
• Connected_ToRules
• Implemented_ByRules
• Realized_ByRules
• APN_absolute_path_name_separator
• GDEOcc_display_instance_num_for_types
• TC_releasable_logical_types
• HRN_node_referenced_component_relation_secondary
• HRN_associated_part_relation_secondary
Object model
The Mechatronics Process Management object model provides capabilities for representing all
aspects of an electromechanical product. In the context of wire harness modeling, Mechatronics
Process Management specifically provides objects to work with the functional representation,
logical representation, and electrical representation of electromechanical products. Modeling an
electromechanical product in Teamcenter involves working with the objects highlighted in the
Teamcenter mechatronics process management object model. This model is depicted in the following
figure.
The classes implemented to support wire harness data include routing and topology related classes
such as Route, Node, Segment, and b_curve to capture the path information associated with
connections and devices implementing the connections.
Routes, segments and nodes contain a definition of the 2D or 3D routing topology associated with
connections, devices or other domain specific physical objects such as wires. A route is defined as a
course taken from a starting point to a destination. A route defines the course either by using one
or more segments or a set of nodes in combination with curve objects, which define the shape of
the route.
Route objects, such as segments, nodes, and curves, have no meaning outside the context
of the top-level assembly (in other words, a BOM view revision).
There are five ITK modules that you can use to customize objects to fit your needs:
• PSCONN
This module provides functions that allow you to manipulate connectivity data. The functions are
defined in the psconnection.h header file.
• GDE
This module provides functions that allow you to manipulate item elements. The functions are
defined in the gde.h header file.
• SIGNAL
This module provides functions that allow you to manipulate signals. The functions are defined in
the pssignal.h header file.
• ROUTE
This module provides functions that allow you to manipulate routes. The functions are defined
in the route.h header file.
• ALLOC
This module provides functions that allow you to manipulate allocations. The functions are
defined in the allocation.h header file.
There are examples using some of the functions in these modules in the Orientation to Mechatronics
Process Management API use examples section. You can also find sample programs using these
modules in the \samples\mechatronics directory. Details about the modules' functions can be found
in the Integration Toolkit Function Reference.
Note
The Integration Toolkit Function Reference is not available in PDF format. It is available
only in the Teamcenter HTML Help Collection.
You can use PLM XML to export and import data used by the Mechatronics Process Management
API modules. You can also use the modules in a Multi-Site Collaboration environment.
Consider the climate control system illustrated in the following figure. This system can be used in
several contexts, such as an automobile or home. The system and its contents have particular
meaning when used in specific contexts. Similarly, all Mechatronics Process Management objects
have real meaning when used in specific contexts.
Once interface ports are created, you can define ports for a functionality by attaching them to a
revision of a functionality for a given context (view). All ports under a given item revision are tracked
using an item element BOM view revision (BVR) associated with the item revision.
The following example creates the item element BVR and attaches the interface port to the
functionality revision:
int ifail = ITK_ok;
tag_t gde_bvr = NULLTAG;
tag_t gde_occ = NULLTAG;
/* Create GDEBVR under a functionality revision */
ifail = ITEM_rev_create_gde_bvr ( rev, view_type_tag, &gde_bvr);
if (ifail != ITK_ok)
return ifail;
/* Attach the interface port to functionality revision by creating GDEOccurrence
*/
ifail = GDE_create_occurrence ( gde_bvr,
gde_obj,
view_type_tag,
occurrence_type, /* can use NULLTAG */
quantity,
instance_no,
&gde_occ);
if (ifail != ITK_ok)
return ifail;
else
{
ifail = AOM_save (gde_bvr);
if (ifail != ITK_ok)
return ifail;
else
ifail = AOM_refresh(gde_bvr, false);
}
Creating connections
Assume that you have used the APIs described above to create Functionality_0 containing
Functionality_1 and Functionality_2. Functionality_1 has Port1 and Functionality_2 has Port2 (as
shown in the following figure). You must connect Port1 and Port2 in a given context.
Note
Connections are only valid for the given context. A separate connection call is required to
associate a connection object with ports for each usage of a connection occurrence.
Functionality connections
The following example creates connections for Port1 and Port2:
int ifail = ITK_ok;
tag_t window = NULLTAG;
tag_t topline = NULLTAG;
/* Create a BOM window to display the product structure of Functionality_0 */
ifail = BOM_create_window ( &window );
if (ifail != ITK_ok)
return ifail;
ifail = BOM_set_window_pack_all ( window, TRUE );
if (ifail != ITK_ok)
return ifail;
ifail = BOM_set_window_top_line ( window,
functionality_1_item,
functionality_1_rev,
NULLTAG,
&topline );
/* Each of the components used in the structure is represented as a BOMLine.
We need to get the BOMLines for Port1 and Port2 so that we can use them when making the connection.
*/
tag_t *childs = NULL;
int count = 0;
ifail = BOM_line_ask_all_child_lines(topline, &count, &childs);
if (ifail != ITK_ok)
return ifail;
/* Look for BOMLines for Port1 and Port2 */
tag_t gdelineType = NULLTAG;
ifail = TCTYPE_find_type ("GDELine", (const char*)0, &gdelineType);
if (ifail != ITK_ok)
    return ifail;
tag_t line_type_tag = NULLTAG;
tag_t line_for_port1_port2[2];
int line_found = 0;
char* obj_name = NULL;
for ( int inx =0; inx < count; inx++ )
{
ifail = TCTYPE_ask_object_type(childs[inx], &line_type_tag);
if (ifail != ITK_ok)
return ifail;
if (line_type_tag == gdelineType)
{
ifail = AOM_ask_value_string ( childs[inx], "bl_line_name", &obj_name );
/* Get the BOMLine for Port1 and Port2 */
if (! strcmp(obj_name, "Port1") || ! strcmp(obj_name, "Port2"))
{
line_for_port1_port2[line_found] = childs[inx];
line_found++;
if (line_found >= 2)
break;
}
}
}
/* Create connection */
if (line_found > 0)
{
ifail = PSCONN_connect (connection_id,
connection_name,
connection_type,
connection_rev,
line_found,
line_for_port1_port2,
&connection_line_tag);
}
Listing connections
The following example lists ports that are connected by a connection:
ifail = PSCONN_list_connected_gdes( connection_line_tag,
&gde_line_count,
&gde_line_tags);
if (ifail != ITK_ok)
return ifail;
else
{
ifail = AOM_save (process_variable_obj);
if (ifail != ITK_ok)
return ifail;
else
ifail = AOM_refresh(process_variable_obj, false);
}
Creating signals
Signal objects are created using the SIG_create_signal API as shown in the following example:
int ifail = ITK_ok;
tag_t rev = NULLTAG;
tag_t signal = NULLTAG;
ifail = SIG_create_signal( signal_id,
signal_name,
signal_type,
signal_revision_id,
&signal,
&rev );
if (ifail != ITK_ok)
return ifail;
else
{
ifail = ITEM_save_item (signal);
if (ifail != ITK_ok)
return ifail;
else
ifail = AOM_refresh(signal, false);
}
Signals are sometimes generated due to a variation of the system variable, such as temperature or
pressure. In the climate control example, the signal is generated due to a variation in car temperature
compared to a preset value.
To represent this variation, a signal can be associated with a process variable of the current context
using the SIG_set_signal_pvariable function, as shown in the following example:
ifail = ITK_ok;
ifail = SIG_set_signal_pvariable( signal_line_tag, process_variable_line_tag );
if( ifail != ITK_ok )
return ifail;
In this example, the signal_line_tag and process_variable_line_tag tags represent the signal line
and process variable line in the current context.
The SIG_set_signal_pvariable function requires that you set the SIG_pvariable_rules preference
to the object type that is being passed in as secondary.
• Target
• Transmitter
To set a source, target, or transmitter associated system programmatically, set the corresponding
preference for that associated system to include the type of the secondary object being passed in
to the function.
In the climate control example, the electrical signal is generated by the TempMeasure functionality
and consumed by the TempRegulator functionality.
Note
The association between signal and associated system is only valid in the specified context.
Note
Similar to signals, process variables also have associated systems that generate, consume,
or transmit the process variable. The association between process variable and associated
system in a given context can be set using the SIG_set_associated_system function.
is passed in to the function. If one or more of the secondary BOM lines passed in is invalid, or if
an association does not exist between the signal or process variable line and one or more of the
secondary BOM lines, that secondary line is added to a failedLines array output variable, and the
function processes the next secondary BOM line. You must pass a valid non-null role in the role
argument (for example, source, target, or transmitter).
The following example removes all associated system relations between a signal and associated
system:
int ifail = ITK_ok;
ifail = SIG_unset_associated_system( signal_line_tag, asystem_line_tag );
if( ifail != ITK_ok )
return ifail;
In this example, the signal_line_tag and asystem_line_tag arguments represent the tags of the
signal and associated system lines in a specified context.
The following example removes a specific associated system relation corresponding to the source
role between a signal and secondary lines:
int ifail = ITK_ok;
logical hasFailures = false;
int numFailedLines = 0;
tag_t *failedLines = 0;
ifail = SIG_unset_associated_systems( priLineTag, numSecondaries,
secondaries, "source", &hasFailures, &numFailedLines, &failedLines );
if( ( ifail != ITK_ok ) || ( hasFailures ) )
{
// process the failures
}
In this example, the priLineTag and secondaries arguments represent the tags of the primary signal
or process variable line and the secondary associated system lines, respectively. The failedLines
argument is an output array that holds the failed secondary lines. The hasFailures argument is a
logical argument that is set to true if failures exist. The numFailedLines argument is an integer that
holds the number of failed removals.
Note
There can be multiple redundant signals for a given signal based on the criticality of the
function being regulated.
In this example, redundant_signal_line_tag is the line tag of the redundant signal in the given
context.
In this example, signal_revision is the tag of the signal revision object and value is the double
value to be set on the signal revision.
Note
Teamcenter does not interpret signal characteristics on a signal revision. This is assumed
to be part of the Teamcenter implementation at a given site.
Values on signal revision objects represent a range of values that the signal is expected to
generate. The actual value of a signal exists within a specific context. For example, in the climate
control example, the value on the signal generated by the TempMeasure functionality depends on
the exact condition (temperature) inside a car. In addition, the value generated by the TempMeasure
functionality is different if the climate control system is used in a room or in a truck, rather than in a car.
The following example sets signal line values:
int ifail = ITK_ok;
ifail = SIG_set_signal_line_value( signal_line_tag, value );
if( ifail != ITK_ok )
return ifail;
In this example, signal_line_tag represents the signal line in the current context and value represents
the double value to be set on the signal line.
Limitations
Allocations
Introduction to allocations
Allocations represent a mapping from one view in a product structure to another view. You use
allocations to create separate views of a product, then map the views to each other. You can model a
purely functional structure that is independent of the parts that eventually realize the functions. With
careful analysis, you can use independent groups of parts to design different products with the same
functional and logical models. Allocations allow you to link the actual parameters on physical parts to
the requirements on the functional and logical models. This permits you to verify the components in
the product are valid for their proposed purpose.
Object model
There are two allocation object models. The first, shown in the following figure, is for persistent
allocations stored in the database.
Preferences
You can carry forward all allocations created in the context of an allocation map revision when it is
revised, copied, or saved as a different revision by setting the ALLOC_map_copy_allocations
preference.
You can use a preference to control the cardinality of source and targets to be associated. For
example, you can enforce a Network_Port object to participate in one-to-one relations only with the
ALLOC_source_target_cardinality preference.
The ALLOC_Product_Representation_Types preference limits the view subtypes that can be used
for creating allocations for a given allocation map type. The ALLOC_Group_Allocation_Types
preference limits the type of allocations that can be used for creating allocations for a given allocation
map type. For example, you can specify that the Allocation_One subtype of Allocation be created
only in association with the AllocationMap_one subtype of AllocationMap.
API functions
There are two kinds of allocation API functions: persistent and run-time. The allocation run-time
model provides a simplified run-time abstraction of the allocation-related persistent object model. The
run-time model includes an allocation window that contains allocation lines that associate source
BOM lines with target BOM lines. Each allocation line shows attributes derived from a persistent
allocation object.
Additional information about the available API functions can be found in the Integration Toolkit
Function Reference.
Note
The Integration Toolkit Function Reference is not available in PDF format. It is available
only in the Teamcenter HTML Help Collection.
Example
You can write a user exit to check if an allocation is complete and correct. The criteria for that are
dictated by the business logic of your enterprise. Through these user exits, you can perform the
validity checks from the user interface.
You can register a method against the following user exit messages:
• ALLOC_is_allocation_complete_msg
• ALLOC_is_allocation_correct_msg
For example, to register a custom method that defines the business logic against the user exit
messages:
METHOD_register_method("AllocationLine", ALLOCATION_is_allocation_complete_msg,
&my_complete_allocation, NULL, &my_method))
This custom method returns TRUE if the custom conditions are satisfied, otherwise it returns
FALSE. The conditions you implement are based on your requirements. For example, to define the
correctness of an allocation if and only if the source components are of a particular type:
static int my_configure_allocations (METHOD_message_t *message, va_list args)
{
    int iFail = ITK_ok;
    // Extract the object from the message arguments.
    // Customer-specific business logic goes here.
    return iFail;
}
manager itself, embedded software explorer, the signal manager, and the signal explorer. The ITK
functions corresponding to these four components are broadly divided into the ESM_ functions
and the SIG_ functions.
The ESM is installed as an additional component during the Teamcenter installation process and
requires a separate license.
Before using ITK to customize the ESM, ensure its related preferences are set to the correct software
categories, types, and dataset types.
The following table lists the ESM module functions you can use to customize the Embedded Software
Manager. These functions support the management of software, processor, and their dependencies.
Additional information about these functions can be found in the Integration Toolkit Function
Reference.
Note
The Integration Toolkit Function Reference is not available in PDF format. It is available
only in the Teamcenter HTML Help Collection.
Function                                        Definition
ESM_is_processor                                Given a BOM line tag, checks whether it is a processor BOM line.
ESM_is_gateway                                  Given a processor BOM line tag, checks whether it is a gateway
                                                processor BOM line.
ESM_is_software                                 Given a BOM line tag, checks whether it is a software BOM line.
ESM_associate_processor_to_software             Given a processor BOM line and an array of software BOM lines,
                                                associates processor and software using the Embeds relation.
                                                To save the associations, save the BOM window.
ESM_associate_processor_to_processor            Given a gateway processor BOM line and an array of other processor
                                                BOM lines that are accessed through the gateway processor,
                                                associates the processors in the array with the gateway processor
                                                using the GatewayOf relation. To save the associations, save the
                                                BOM window.
ESM_associate_software_to_software              Given a software BOM line and an array of software BOM lines,
                                                associates software using the Dependent On relation. To save the
                                                associations, save the BOM window.
ESM_remove_processor_to_software_association    Given a processor BOM line and an array of software BOM lines,
                                                removes the Embeds relation association of the processor and
                                                software lines. To save the associations, save the BOM window.
ESM_remove_processor_to_processor_association   Given a gateway processor BOM line and an array of associated
                                                processor BOM lines, removes the GatewayOf relation association
                                                between the processor lines. To save the associations, save the
                                                BOM window.
ESM_remove_software_to_software_association     Given a software BOM line and an array of software BOM lines,
                                                removes the Dependent On relation association of the software
                                                lines. To save the associations, save the BOM window.
ESM_ask_embedded_software_of_processor          Given a processor BOM line, gets an array of software BOM lines
                                                that are associated to the processor with the Embeds relation.
ESM_ask_gateway_of_processor                    Given a processor BOM line, gets an array of gateway processor
                                                BOM lines that are associated to it with the GatewayOf relation.
ESM_ask_processors_accessedby_processor         Given a gateway processor BOM line, gets an array of processor
                                                BOM lines that are associated to it with the GatewayOf relation.
ESM_ask_dependent_software_of_software          Given a primary software BOM line, gets an array of secondary
                                                software BOM lines that are dependent on the primary software.
ESM_ask_software_used_by_software               Given a secondary software BOM line, gets an array of primary
                                                software BOM lines that are used by the secondary software.
The following table lists the SIG module functions you can use to customize the Embedded Software
Manager. These functions support the management of signals and their dependencies.
Additional information about these functions can be found in the Integration Toolkit Function
Reference.
Note
The Integration Toolkit Function Reference is not available in PDF format. It is available
only in the Teamcenter HTML Help Collection.
Function                             Definition
SIG_set_associated_systems           Sets the secondary lines as an associated system based on the input role to the
                                     primary signal or process variable line. This function requires you to set the
                                     appropriate preference to the correct type for the association to be created for
                                     that type. If the preference exists, but does not have a value, the corresponding
                                     association is not created.
SIG_unset_associated_systems         Removes associations between a signal or process variable line tag and one or
                                     more secondary BOM lines for a specific role passed in to the function. If one or
                                     more of the secondary BOM lines passed in is invalid, or if an association does
                                     not exist between the signal or process variable line and one or more secondary
                                     BOM lines, that secondary line is added to a failedLines array output variable,
                                     and the function processes the next secondary BOM line. You must pass a valid
                                     non-null role in the role string argument.
SIG_ask_signal_sources               Finds all the sources of the signal or message line tag using the
                                     associated_system relation.
SIG_ask_signal_targets               Finds all the targets of the signal or message line tag using the
                                     associated_system relation.
SIG_ask_signal_transmitters          Finds all the transmitters of the signal or message line tag using the
                                     associated_system relation.
SIG_ask_device_sources               Finds all the source devices that transmit messages or signals to the target
                                     device, represented by linetag. This function uses the underlying
                                     associated_system relation between source and message or signal.
SIG_ask_device_targets               Finds all the target devices that the device, represented by linetag, transmits
                                     messages or signals to. This function uses the underlying associated_system
                                     relation between target and message or signal.
SIG_ask_device_transmitted_signals   Finds all the messages or signals that are transmitted by a source device,
                                     represented by linetag. This function uses the underlying associated_system
                                     relation between source and message or signal.
SIG_ask_device_received_signals      Finds all the messages or signals that are received by a target device,
                                     represented by linetag. This function uses the underlying associated_system
                                     relation between target and message or signal.
• Gets the processors that are accessed through this gateway processor.
Note
All other create, ask, and removal methods follow a similar coding pattern.
// Get the gateway and the processor lines from the list of lines passed in
tag_t gateway_processor_line = NULLTAG;
for( inx = 0; inx < numBOMLines; inx++ )
{
    if ( ESM_is_gateway( inputbomlines[inx] ) )
        gateway_processor_line = inputbomlines[inx];
    else
    {
        if ( ESM_is_processor( inputbomlines[inx] ) )
        {
            processorLines[num_processor_lines] = inputbomlines[inx];
            num_processor_lines++;
        }
    }
}
// Associate the gateway and the processor lines
int stat = ESM_associate_processor_to_processor(
gateway_processor_line, num_processor_lines,
processorLines, &hasFailures, &numFailedLines, &failedLines);
for( inx = 0; inx < numFailedLines; inx++)
{
//Perform any custom error reporting
}
….
// Next, we query for all processors accessed through this gateway -
// in this example, if numFailedLines was 0, the lines in
// "processorLines" above will match the lines in
// accessed_processor_lines. Note that the array is not a sorted list.
stat = ESM_ask_processors_accessedby_processor(
gateway_processor_line, &num_accessed_lines,
&accessed_processor_lines);
// Now we remove the associations between the processors and gateway
stat = ESM_remove_processor_to_processor_association(
gateway_processor_line, num_processor_lines, processorLines,
&hasFailures, &numFailedLines, &failedLines);
Producing a report
When producing a report, you must follow this procedure:
1. Create a BOM window.
The first thing you have to do is create a BOM window with a call to the BOM_create_window
function. The window has some default settings. For example, by default, lines are not
packed and the configuration rule is the user's default rule (if defined); otherwise, it is the latest
with any status.
Note
To change the configuration rule, you have to ask for the configuration rule and then
use the CFM ITK routines to modify it.
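For example, a minimal sketch of applying a different saved rule, using calls that also appear in
the sample program below (the rule name "Latest Working" is only an example; window is the
BOM window created in step 1):
tag_t rule = NULLTAG;
/* Find a saved revision rule by name and apply it to the BOM window. */
ifail = CFM_find ( "Latest Working", &rule );
if ( ifail == ITK_ok )
    ifail = BOM_set_window_config_rule ( window, rule );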
If the BOM view is non-null, it is used. Otherwise, the oldest one is used.
Note
Until multiple views are used, there is never more than one BOM view in an item.
Note
Just because an attribute is not read-only does not mean you can set that attribute on any
particular line. If the line shows released data, the set_attribute call fails.
The internal/external flag is intended to say whether the attribute was defined internally (and
might be expected to be available the next time you run the program) or externally (by virtue of
reflecting an occurrence note type; so it may not exist in some other run of the program).
If you are looking for data that is not available as an attribute (for example, a dataset in the item
revision of some BOM line), you can use the tag attributes (the item revision tag, in this case) to find
the underlying object and then use ITK queries on that object.
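A minimal sketch of that jump from a BOM line to its underlying objects follows; the
item_rev_attribute token and the IMAN_specification relation name are assumptions for
illustration, and line is the BOM line being processed:
tag_t item_rev = NULLTAG, rel_type = NULLTAG, *secondaries = NULL;
int n_secondaries = 0;
/* item_rev_attribute is assumed to have been looked up beforehand with
   BOM_line_look_up_attribute for the item revision tag attribute. */
ifail = BOM_line_ask_attribute_tag ( line, item_rev_attribute, &item_rev );
if ( ifail == ITK_ok )
    ifail = GRM_find_relation_type ( "IMAN_specification", &rel_type );
if ( ifail == ITK_ok )
    ifail = GRM_list_secondary_objects_only ( item_rev, rel_type,
                                              &n_secondaries, &secondaries );
/* Inspect the returned objects (for example, datasets), then free the list. */
MEM_free ( secondaries );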
Note
A simple program displaying all attributes should ignore tag attributes as being
meaningless to display.
Occurrence sequencing
The single-level structure of an item revision is made up of a collection of links from a BOM view
revision (the parent) to the items which are components of the assembly (the children). These links
are known as occurrences.
An occurrence represents the usage of an item or an item revision within the product structure
of a parent. You can distinguish between different kinds of occurrences in the BOM by referring
to occurrence sequencing.
Detailed information about the BOM and product structure (PS) functions is available in the Integration
Toolkit Function Reference.
Note
The Integration Toolkit Function Reference is not available in PDF format. It is available
only in the Teamcenter HTML Help Collection.
Note
The BOM_save_window function calls underlying PS routines to save whatever data
the BOM module knows needs saving. It is possible to mix BOM and PS calls within
the same program, but you must ensure that you call PS to save any change you make
that the BOM does not know of.
The other functions do not affect the stored data; they only affect the display (in this case, the results
of the BOM_line_ask_child_lines function).
BOM line, use the BOM_line_split_occurrence function. The new level initially copies all associated
data from the original level, including notes and variant conditions. The total quantity after the split
equals the original quantity. The BOM_line_move_to and BOM_line_copy functions move and copy
the BOM line to a new place in the structure, respectively. Absolute occurrences are preserved during
all restructuring operations.
• ITEM_add_related_global_alternates
Adds global alternates.
• ITEM_remove_related_global_alternates
Removes global alternates.
• ITEM_prefer_global_alternate
Makes the specified global alternate the preferred global alternate for the specified item.
• ITEM_ask_has_global_alternates
Checks if the specified item has any global alternates.
Note
The interactive Structure Manager assumes that all packed lines show the same find
number. If the user changes the find number, it sets all of the lines to that number. If you
change this function, Structure Manager might not work correctly.
Sort functions
The BOM_line_ask_child_lines function lists the lines in some order determined by a
sort function. The default function sorts by find number and then by item name. The
BOM_set_window_sort_compare_fn function can be used to change this function on a per window
basis.
The following code shows how the BOM ITK functions can be used:
/* An example program to show a list of a bill of materials using the BOM module */
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <unidefs.h>
#include <itk/mem.h>
#include <tc/tc.h>
#include <tccore/item.h>
#include <bom/bom.h>
#include <cfm/cfm.h>
#include <ps/ps_errors.h>
/* this sequence is very common */
#define CHECK_FAIL if (ifail != 0) { printf ("line %d (ifail %d)\n", __LINE__, ifail);
exit (0);}
static int name_attribute, seqno_attribute, parent_attribute, item_tag_attribute;
static void initialise (void);
static void initialise_attribute (char *name, int *attribute);
static void print_bom (tag_t line, int depth);
static void print_average (tag_t top_line);
static void find_revision_count (tag_t line, int *count, int *total);
static void double_find_nos (tag_t line);
static int my_compare_function (tag_t line_1, tag_t line_2, void *client_data);
/*-------------------------------------------------------------------------------*/
/* This program lists the bill under some given item (configuring it by latest+working).
It then doubles all the find numbers and re-lists it by descending find number
order. It then counts how many lines there are in the bill and the average number of
revisions of the items in that bill.
It never actually modifies the database.
Run this program by:
<program name> -u<user> -p<password>
*/
extern int ITK_user_main (int argc, char ** argv )
{
int ifail;
char *req_item;
tag_t window, window2, rule, item_tag, top_line;
initialise();
req_item = ITK_ask_cli_argument("-i=");
ifail = BOM_create_window (&window);
CHECK_FAIL;
ifail = CFM_find( "Latest Working", &rule );
CHECK_FAIL;
ifail = BOM_set_window_config_rule( window, rule );
CHECK_FAIL;
ifail = BOM_set_window_pack_all (window, true);
CHECK_FAIL;
ifail = ITEM_find_item (req_item, &item_tag);
CHECK_FAIL;
if (item_tag == null_tag)
{ printf ("ITEM_find_item returns success, but did not find %s\n", req_item);
exit (0);
}
ifail = BOM_set_window_top_line (window, item_tag, null_tag, null_tag, &top_line);
CHECK_FAIL;
print_bom (top_line, 0);
double_find_nos (top_line);
/* we will not use
ifail = BOM_save_window (window);
because that would modify the database
*/
/* now let's list the results sorted the other way */
ifail = BOM_create_window (&window2);
CHECK_FAIL;
ifail = BOM_set_window_config_rule( window2, rule );
CHECK_FAIL;
ifail = BOM_set_window_pack_all (window2, false);
/* the default, but let's be explicit */
CHECK_FAIL;
ifail = BOM_set_window_sort_compare_fn (window2, (void *)my_compare_function, NULL);
CHECK_FAIL;
ifail = BOM_set_window_top_line (window2, item_tag, null_tag, null_tag, &top_line);
CHECK_FAIL;
printf ("================================================\n");
print_bom (top_line, 0);
print_average (top_line);
ITK_exit_module(true);
return 0;
}
/*-------------------------------------------------------------------------------*/
static void print_bom (tag_t line, int depth)
{
int ifail;
char *name, *find_no;
int i, n;
tag_t *children;
depth ++;
ifail = BOM_line_ask_attribute_string (line, name_attribute, &name);
CHECK_FAIL;
/* note that I know name is always defined, but sometimes find number is unset.
If that happens it returns NULL, not an error.
*/
ifail = BOM_line_ask_attribute_string (line, seqno_attribute, &find_no);
CHECK_FAIL;
printf ("%3d", depth);
for (i = 0; i < depth; i++)
printf (" ");
printf ("%-20s %s\n", name, find_no == NULL ? "<null>” : find_no);
ifail = BOM_line_ask_child_lines (line, &n, &children);
CHECK_FAIL;
for (i = 0; i < n; i++)
print_bom (children[i], depth);
MEM_free (children);
MEM_free (name);
MEM_free (find_no);
}
/*-------------------------------------------------------------------------------*/
static void double_find_nos (tag_t line)
{
/* just to demonstrate modifying the bill */
int ifail;
char *find_no;
int i, n;
tag_t *children;
ifail = BOM_line_ask_attribute_string (line, seqno_attribute, &find_no);
CHECK_FAIL;
/* Top bom lines have no find number, and others may also not */
if (find_no[0] != '\0')
{
char buffer[100];
sprintf (buffer, "%d", 2 * atoi (find_no));
ifail = BOM_line_set_attribute_string (line, seqno_attribute, buffer);
if (ifail != ITK_ok)
{
char *child_name;
ifail = BOM_line_ask_attribute_string (line, name_attribute, &child_name);
CHECK_FAIL;
printf ("==> No write access to %s\n", child_name);
MEM_free (child_name);
}
else
{
CHECK_FAIL;
}
MEM_free (find_no);
}
ifail = BOM_line_ask_child_lines (line, &n, &children);
CHECK_FAIL;
for (i = 0; i < n; i++)
double_find_nos (children[i]);
MEM_free (children);
}
/*-------------------------------------------------------------------------------*/
static void print_average (tag_t top_line)
{
int count = 0, total = 0;
find_revision_count (top_line, &count, &total);
if (count <= 0)
{
printf ("impossible error has happened!\n");
}
else
{
printf ("lines in bill : %d\naverage revisions of each item : %d\n", count, total
/ count);
}
}
/*-------------------------------------------------------------------------------*/
static void find_revision_count (tag_t line, int *count, int *total)
{
/* A function to demonstrate going from BOM tags to other classes */
/* In this case, we get the Item tag for the BOM line, and then use it for an */
/* ITEM call. For a complete list of standard attributes see bom_attr.h, or */
/* use a BOM_list_attributes call to find all current attributes (new note */
/* types define new attributes) */
int ifail;
tag_t item_tag, *revisions, *children;
int i, n, revision_count;
char* item_id;
ifail = BOM_line_ask_attribute_tag(line, item_tag_attribute, &item_tag );
CHECK_FAIL;
/* the simplest example call I can think of: */
/* count how many revisions of this Item there are */
ifail = ITEM_list_all_revs (item_tag, &revision_count, &revisions);
CHECK_FAIL;
MEM_free (revisions);
(*count)++;
(*total) += revision_count;
ifail = BOM_line_ask_child_lines (line, &n, &children);
CHECK_FAIL;
for (i = 0; i < n; i++)
find_revision_count (children[i], count, total);
MEM_free (children);
}
/*-------------------------------------------------------------------------------*/
static int my_compare_function (tag_t line_1, tag_t line_2, void *client_data)
{
/* returns strcmp style -1/0/+1 according to whether line_1 and line_2 sort <, = or > */
char *seq1, *seq2;
int ifail, result;
ifail = BOM_line_ask_attribute_string (line_1, seqno_attribute, &seq1);
CHECK_FAIL;
ifail = BOM_line_ask_attribute_string (line_2, seqno_attribute, &seq2);
CHECK_FAIL;
if (seq1 == NULL || seq2 == NULL)
result = 0;
else
result = strcmp (seq2, seq1); /* Doing a reverse sort */
/* note: the default sort function compares Item names if the find numbers sort
equal but we will not show that here
*/
MEM_free (seq1);
MEM_free (seq2);
return result;
}
/*-------------------------------------------------------------------------------*/
static void initialise (void)
{
int ifail;
/* exit if auto login fails */
if ((ifail = ITK_auto_login()) != ITK_ok)
    fprintf(stderr, "Login failed: error code = %d\n\n", ifail);
CHECK_FAIL;
/* these tokens come from bom_attr.h */
initialise_attribute (bomAttr_lineName, &name_attribute);
initialise_attribute (bomAttr_occSeqNo, &seqno_attribute);
ifail = BOM_line_look_up_attribute (bomAttr_lineParentTag, &parent_attribute);
CHECK_FAIL;
ifail = BOM_line_look_up_attribute (bomAttr_lineItemTag, &item_tag_attribute);
CHECK_FAIL;
}
/*-------------------------------------------------------------------------------*/
static void initialise_attribute (char *name, int *attribute)
{
int ifail, mode;
ifail = BOM_line_look_up_attribute (name, attribute);
CHECK_FAIL;
ifail = BOM_line_ask_attribute_mode (*attribute, &mode);
CHECK_FAIL;
if (mode != BOM_attribute_mode_string)
{ printf ("Help, attribute %s has mode %d, I want a string\n", name, mode);
exit(0);
}
}
Occurrences
An occurrence represents the usage of an item or an item revision within the product structure of a
parent. You can distinguish between different kinds of occurrences in the BOM and product structure.
Additional information about occurrence functions can be found in the Integration Toolkit Function
Reference.
Note
The Integration Toolkit Function Reference is not available in PDF format. It is available
only in the Teamcenter HTML Help Collection.
The basic concept behind options and variants is that options are used to customize the BOM
through variant expressions that reference those options at run time.
• Expressions
Expressions are built as a parse-tree of variant expressions. A collection of these expressions is
held in a variant expression block. The same mechanism is used to attach variant conditions
to an occurrence and attach variant declarations to an item revision. The restriction is that only
appropriate types of expressions may be attached to each parent. With an occurrence, the
variant expression block may only contain one expression and must be a LOAD-IF type of
expression. With an item revision, many types of expressions may be used to declare an option,
set a value or apply an error check.
• Restrictions
Options are only enumerated values. Expression trees only evaluate to logical or integer values.
If you want to print a report showing variant conditions on a BOM or ignore BOM lines that are not
selected by the current option values, use ITK enquiries for BOM line values rather than trying to
evaluate variant expression trees. The ITK routines documented here are very low-level routines that
are intended to be used to create variant conditions when building a BOM.
Variant operator values are defined in the bom_tokens.h file. The BOM_variant_op_rhs_is_string
value can be added to other operators to allow a string to be used as the right operand. In most cases
the right operand should be an integer value that is stored as a text string containing decimal digits.
Operators
The following table lists each variant expression operator, the type of left and right operands allowed
in that expression, and a brief description.
Note
The operator names in the following table are truncated for convenience. The full name
is the operator name in the following table prefixed with BOM_variant_operator_. For
example, declare in the table is BOM_variant_operator_declare.
Run-time options
Once the assembly has been built to include variations data, option values can be used to determine
what parts of the assembly to display at run time. Options can be listed and set from a specified
window using the BOM_window_ask_options and BOM_window_set_option_value functions,
respectively. If the desired option is known, but it is not known whether it has been previously
referenced, use the BOM_window_find_option function instead of the BOM_window_ask_options
function. The following code shows an example of listing and setting option values:
#include <string.h>
#include <unidefs.h>
#include <tc/tc.h>
#include <bom/bom.h>
void panic();
void error(const char* message);
void set_option(tag_t w, const char *option, const char *value)
{
/* Sets an option to the specified value in Structure Manager window w. */
tag_t *options, *option_revs;
tag_t found_option, found_option_rev;
int *indexes;
int i, n;
if (BOM_window_ask_options(w, &n, &options, &option_revs) != ITK_ok) panic();
for (i = 0; i < n; i++)
{
char *name, *description;
tag_t owning_item;
logical found;
if (BOM_ask_option_data(options[i], &owning_item, &name, &description)!= ITK_ok)
panic();
found = (strcmp(name, option) == 0);
MEM_free(name);
MEM_free(description);
if (found) break;
}
if (i == n) error("option not used in this window");
found_option = options[i];
found_option_rev = option_revs[i];
MEM_free(options);
MEM_free(option_revs);
if (BOM_list_option_rev_values(found_option_rev, &n, &indexes) != ITK_ok) panic();
for (i = 0; i < n; i++)
{
char *opt_value;
logical found;
if (BOM_ask_option_rev_value(found_option_rev, indexes[i], &opt_value)!= ITK_ok)
panic();
found = (strcmp(opt_value, value) == 0);
MEM_free(opt_value);
if (found) break;
}
if (i == n) error("option not allowed to have that value");
/* now do the work */
if (BOM_window_set_option_value(w, found_option, indexes[i]) != ITK_ok) panic();
MEM_free(indexes);
}
When creating a new option with the BOM_create_option function, it is necessary to declare the
option revision against an item revision. You can both create and declare an option with a single call
to the BOM_new_option function. To declare an existing option against a different item revision, call
the BOM_declare_option function.
The BOM variant rule is a run-time object that represents the list of options and their values in a BOM
window. In general, you can ignore this object and use the BOM_window_ variant functions to
get and set option values.
However, you can use this object to create multiple BOM variant rules against a BOM window, although
only one is actually applied to the BOM window at any one time. This can be used to set multiple option
values without reconfiguring the entire BOM after each set operation, although option defaults and
rule checks are still triggered. Then, once all options have been set, the BOM variant rule can be
applied to the BOM window.
The BOM variant rule supports an ITK interface that is almost identical to the BOM window variant
functions for getting and setting options and their values. A BOM window's current BOM variant rule
can be obtained by a call to the BOM_window_ask_variant_rule function.
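For example, a minimal sketch (the exact signature is assumed; the rule is treated here as a tag
returned through an output argument):
tag_t variant_rule = NULLTAG;
int ifail = ITK_ok;
/* Obtain the BOM variant rule currently applied to this BOM window. */
ifail = BOM_window_ask_variant_rule ( window, &variant_rule );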
In an attempt to simplify Variant Expression interaction at the ITK level, some high-level concepts
have been introduced. A Variant Clause List can be used to create and edit conditional variant
expressions (for example, Engine = 1.6 AND Gearbox = manual). These conditional expressions
can then be used to create IF type Variant Expressions (for derived defaults, rule checks, and
occurrence configuration) using the simple BOM_variant_expr_ ITK calls.
Variant clause lists are used to store conditional variant expressions in a more easily understood
(and edited) format. For example, the condition:
"Engine = 1.6 OR Engine = 1.8 AND Gearbox = manual"
is stored as a list of clauses, one per simple condition, each holding the condition and the operator
that joins it to the previous clause. In other words, each clause is a simple condition and a join
operator (AND or OR).
Clauses may be added (anywhere in the list), deleted, moved, or queried. Individual clauses within the
list are identified by their position. The first clause is at position 0, the second at position 1, and so on.
Brackets are supported by variant clause lists. To add brackets around a section of a clause list, call
the BOM_variant_clause_toggle_brackets function and pass it the array of clause positions to be
contained within the brackets. For example, to place brackets around the Engine = 1.8 AND Gearbox
= manual section of the previous example, pass in an array containing the positions 1 and 2.
Note
To add brackets around a clause list section that contains more than two clauses, pass in
the positions of the first and last clauses in the section. The other positions are ignored.
The array mechanism is used, in preference to only supplying first and last position
arguments, to simplify user interface implementations by allowing you to simply pass in the
list of currently selected clauses, rather than expecting you to filter the selection list down
to just the first and last.
Modular variants
If you work with modular variants, you must take a different approach in your ITK code. For example,
if you want to create a BOM selected option set, set the option values, and then apply the set, you:
1. Ask for the selected option set from a BOM line:
BOM_line_ask_sos( bl, &sos )
Note
There are two kinds of option sets: a run-time selected option set and a persistent
stored option set kept in the database.
3. Set the value for each option. In this call, the function sets option1 with the value of Economy:
BOM_sos_set_entry_string( sos, options[ 0 ], "", "Economy", 4 )
If you want to create a stored option set in the database and then read it, you:
1. Ask for the selected option set from the BOM line:
BOM_line_ask_sos ( bomline1, bomsos )
2. Create a variant configuration object. The first argument is provided if you have a classic variant rule:
BOM_create_variant_config ( NULLTAG, 1, { bomsos }, bom_variant_config )
Then create a stored option set in the database from the BOM variant configuration:
4. Read the saved stored option set contents from the database:
BOM_sos_db_contents(sos1, count, items, options, optionTypes, valueTypes,
howSet, values)
If you want to read and load the contents of the stored option set saved in the database, you:
1. Ask for the selected option set from the BOM line:
BOM_line_ask_sos ( bomline1, bomsos2 )
Note
The Integration Toolkit Function Reference is not available in PDF format. It is available
only in the Teamcenter HTML Help Collection.
Scope of compare
BOM Compare operates on the contents of two BOM windows. This means that it picks up the
configuration rules of the BOM windows from which the compared BOM lines were taken. If you
want to compare different configurations of the same BOM, first set up two BOM windows with
the required configurations.
By default, the BOM_compare function compares everything below the supplied BOM lines right
down to the bottom of the BOM. It automatically loads any required structures.
If you want to prevent a compare from loading the components of a specific structure, set the stop
flag of the structure's BOM line using the BOM_line_set_stop function. The stop flag can be
cleared using the BOM_line_clear_stop function. In Structure Manager-based compares, the stop
flags are set on all visible leaf nodes and collapsed structure nodes.
Compare descriptor
The compare descriptor is a list of attributes (referred to as compare elements to differentiate them
from POM attributes) that are used as the basis of the compare. Although most compare elements
are simply properties (and are handled automatically by the compare engine), you can compare
based on nonproperty elements by defining methods. You can also use methods to customize the
compare behavior for awkward properties (for example, quantity, which is described in this section).
Compare elements can be generally divided into two distinct groups: primary elements and
aggregate elements (there's a third group: display elements, which is described in this section).
The key difference between these two groups is that primary elements define different entities,
while aggregate elements are combined for multiple instances of the same entity. For example, the
compare functionality before version 8 (when expressed in version 8 terms) defined item ID as a
primary element and quantity as an aggregate element. This meant that BOM lines that had the same
item ID were bundled together for the purposes of compare and their quantities were added together.
In other words, 1 BOMLine with qty 2 was equivalent to 2 BOMLines with qty 1.
In the following code example, the compare descriptor uses the bl_occurrence (occurrence thread
tag) property, the bl_item property, the bl_quantity property, and the Torque (an occurrence
note) property.
For simple examples, defining a compare mode is relatively easy. You create your descriptor and
create a number of property elements against it. Then define your mode, combining the descriptor
with a traversal mode.
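The following sketch illustrates that pattern. It is not a definitive implementation: the descriptor tag
(desc), the BOM line type tag (bomline_type), and the traversal mode argument are assumed to have
been obtained earlier; the CMP_primary_element and CMP_aggregate_element tokens and the leading
arguments of BOM_compare_define_mode are assumed names and ordering (only CMP_display_element
and the fourth through sixth arguments of BOM_compare_define_mode are documented in this section),
so check the Integration Toolkit Function Reference for the exact signatures:
tag_t item_elem, qty_elem;
/* Primary element: BOM lines with the same bl_item value are grouped together.
   The argument pattern mirrors the bl_item_object_name example later in this section. */
CMP_create_prop_element( desc, CMP_primary_element, CMP_cache_sync,
                         NULL, false, bomline_type, "bl_item", NULL, &item_elem );
/* Aggregate element: quantities of grouped lines are combined before comparing.
   CMP_cache_auto lets the engine pick a cache type suitable for aggregation. */
CMP_create_prop_element( desc, CMP_aggregate_element, CMP_cache_auto,
                         "Qty", false, bomline_type, "bl_quantity", NULL, &qty_elem );
/* Combine the descriptor with a traversal mode. The fourth, fifth, and sixth
   arguments are the UI display, auto pack, and virtual unpack flags described below. */
BOM_compare_define_mode( "example mode", desc, traversal_mode,
                         false, false, true );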
There are a number of features that are described as follows:
• UI suppression
In the call to the BOM_compare_define_mode function, the fourth argument is the mode UI
display flag. This controls whether the UI should present this compare mode in the Structure
Manager BOM Compare dialog box. This is a legacy setting and is now irrelevant because even
if this was set to true, the UI still would not show this mode in the dialog box. The UI dialog box is
currently hard coded to our standard pre-version 8 compare modes.
• Auto pack
In the call to the BOM_compare_define_mode function, the fifth argument defines whether
multilevel compares should pack BOM lines as it goes along. Auto pack is an attempt to allow the
simplification of the presentation of the results of multilevel compares where the primary compare
elements match the pack rule attributes. By default, the BOM line pack attributes are item ID and
find number. These are the primary attributes for the standard multilevel compare as currently
supported by the Structure Manager compare dialog box.
If you set this argument, set it to false.
• Virtual unpack
In the call to the BOM_compare_define_mode function, the sixth argument defines whether
BOM lines should be temporarily unpacked during the compare. If you are using bl_occurrence
as one of your compare elements, set this flag to true because packed BOM lines represent
multiple occurrences. To gain access to the individual occurrences, the compare must unpack the
BOM line, pull out the required data, and then repack, which is transparent to the end user. Why
is this not always switched on? The main limitation of virtual unpack lies in the BOM line property
compare output. Normally, BOM compare sets a BOM line property called Changes to display the
actual changes that have happened to the BOM line (for example, Qty 2→3 or Rev A→B). If you
are performing an occurrence-based compare, you might have two occurrences behind a packed
BOM line. One occurrence might have a change (for example, Qty 2→3). The other occurrence
might not have any change. In this situation, what should compare put in the Changes property
of the BOMLine? Should it say Qty 2→3? Should it be empty? In reality, it says PACKED
CHANGES, and expects the user to manually unpack the BOM line to see the individual changes.
For non-UI compares, Siemens PLM Software recommends that the BOM windows being
compared have packing switched off.
• Caching
In the calls to the CMP_create_prop_element function, the third argument tells BOM Compare
whether and how to cache property values. The CMP_cache_sync setting means that the type of
the cache should be synchronized to the property value type. Setting this to CMP_cache_none
would disable caching of that property. The CMP_cache_auto setting is similar to
CMP_cache_sync, but allows BOM Compare to make the final decision on the cache type. This
is important if you plan to use certain properties as aggregate elements. For example, multiple
tag elements cannot be aggregated into a tag cache. Instead, they need a tag array cache type.
The CMP_cache_auto setting makes that decision for you. You can also manually specify the
type of the cache, but it is up to you to make sure that it is suitable for storing the property.
• Element naming
In the calls to the CMP_create_prop_element function, the fourth argument allows a user to
specify the (internationalized) name by which this element is known. If this is set to NULL, then
the property name is used. Element naming is primarily used for nonproperty compare elements,
but can still be useful for property elements where the property name is considered to be too
verbose for display in compare output. For example, with the bl_quantity property you might
want compare to output the more compact Qty rather than property display name Quantity.
While you could use bl_item_item_id as your compare element, the use of bl_item is much more
efficient for two reasons: it is an integer compare rather than a string compare, and bl_item is a
directly defined BOM line property while bl_item_item_id is indirected through the item.
If you want the changes written to the BOM line Changes property, just add the
BOM_compare_output_bomline argument to the BOM_compare_output_report argument in the
call to the BOM_compare_execute function.
Display elements
In addition to primary and aggregate elements, BOM Compare can also support display elements.
These are elements that have no impact on the actual comparison. Display elements are extra
context data that is written as part of compare output. This is just an advanced version of the element
display aliasing concept. Aliasing is limited in that it only works with property elements, and only
a single alias property can be defined. Display elements can be defined with methods, allowing
nonproperty data to be used. You can also define as many display properties as you want.
You can use the bl_item_item_id alias as a display alias for bl_item. If you want to display the
item name as well as the item ID, define a property display element for the bl_item_object_name
property in the descriptor, as shown in the following code:
CMP_create_prop_element(new_desc,
CMP_display_element,
CMP_cache_sync,
NULL,
false,
bomline_type,
"bl_item_object_name",
NULL,
&new_element);
Note that display elements are not aggregated. Teamcenter assumes that the display element
property value is constant across all BOM lines in a set. As such, the display element value is simply
pulled out of the first BOM line that the compare engine finds.
• BOM_compare_stop_if_diff
• BOM_compare_dont_report_adds
• item=widgetX, seqno=20
• item=widgetX, seqno=30
If you define your elements in the order (Item Id, Seq No), compare outputs the lines in the order 2,
3, 1. This order is clearly not ideal. Normally you would like the output to be in the same order
in which it appears in Structure Manager (in other words, 1, 2, then 3). To do this, simply reverse
the order of element definition to be (Seq No, Item Id). However, this affects the column output
order, described in the next section.
The use of find numbers introduces another feature. Consider this pair of BOM lines:
• item=widgetX, seqno=20
• item=widgetX, seqno=100
The order output should be 1 first and 2 second. Because find numbers are strings and not
integers, a string compare is performed and alphabetically 100 comes before 20. To get around
this, mark the find number element (and any others that store numbers as strings) as special with
the CMP_set_prop_element_order function:
CMP_create_prop_element(new_desc, ..., &seqno_element);
CMP_set_prop_element_order(seqno_element, CMP_order_by_length_and_value);
This call tells the compare engine to sort by length, before sorting by value.
Methods
When using property-based compare elements, the system provides default behavior for comparing
the property values. There are also some simple modifiers that can be applied to handle certain
special cases. If none of the modifiers do what you need, you can write code that does exactly what
you want and then register it against the element.
There are six method types supported by the generic compare engine:
• Compare object
This method must compare a given object against a given compare set and return its opinion
on whether that object belongs in that set. Note that this is just one primary compare element's
opinion. Other primary elements may disagree. All primary elements must say yes for the object
to be allowed into the set.
• Compare aggregate
This method must compare the left and right sides of the given compare set and return its opinion
on whether the two sides are equivalent. Other aggregate elements may disagree.
• Cache
This method must update the element's cache when the supplied object is added to the given
side of the compare set.
• UIF name
This method returns the display name for this element.
• UIF value
This method returns the display value for this element.
• Free cache
One of the supported cache types is CMP_cache_pointer. This cache type means that the
user is taking responsibility for creating and managing the cache. The generic compare engine
simply maintains a void * pointer to the cache. When using this cache type, you must supply
a Cache method (to create and update the cache), a UIF Value method (to return the display
value for the cache), and a Free Cache method. This Free Cache method is used to release
the memory allocated by the Cache method.
Nonproperty elements
This is the logical conclusion of the use of compare methods. A nonproperty element is an element
that is entirely driven through methods. It has no standard behavior. This requires more effort, but
allows maximum flexibility. For example, you could perform a compare using elements that live
entirely outside Teamcenter. It could pull in data from a different database or from the Internet,
which may take a significant amount of time. If you plan to do this, Siemens PLM Software highly
recommends caching.
For an example of a nonproperty element, see the quantity elements in the smp_user_bom_cmp.c
file. While quantity is a BOM line property, Compare cannot handle it directly because of the need to
cope with special concepts like as required and undefined (Structure Manager's default quantity, in
other words, one unit).
In single and lowest-level compare traversals, the BOM Compare object has a single BOM Compare
engine object. In multi-level compare traversals, the BOM Compare has a tree of nested BOM
Compare engines.
A BOM Compare engine contains a list of compare sets. Each compare set contains two lists of BOM
lines (one list for each side of the compare). Every BOM line in a compare set has the same values
for their primary elements. Compare sets are capable of caching compared values. Primary and
display elements are stored in a single cache (per element) on the compare set. Aggregate elements
require a cache for each side of the compare.
To traverse this data structure, use the BOM_compare_visit_engine function, as shown in the
following code. This function traverses the internal data structure invoking callbacks as it goes:
{
tag_t compare;
BOM_compare_create(&compare);
BOM_compare_execute(compare,
bomline1, bomline2,
"example mode",
0 /* No output - we're going to do it ourselves */ );
BOM_compare_visit_engine(compare,
enter_engine,
leave_engine,
visit_set,
NULL /* User data pointer */ );
}
int enter_engine(tag_t bomcompareengine, tag_t compareset, int depth,
void *user_data)
{
tag_t root1, root2;
BOM_compare_ask_engine_root_bomlines(bomcompareengine, &root1, &root2);
printf("Entering compare engine %x, comparing BOMs %x and %x\n",
bomcompareengine, root1, root2);
/* You could use the CMP ITK to query the contents of the engine */
return ITK_ok;
}
int leave_engine(tag_t bomcompareengine, tag_t compareset, int depth,
void *user_data)
{
printf("Leaving compare engine %x\n", bomcompareengine);
/* You could use the CMP ITK to query the contents of the engine */
return ITK_ok;
}
int visit_set(tag_t compareset, int depth, void *user_data)
{
/* You can use the CMP ITK to query the contents of the set */
printf("Compare set %x\n", compareset);
/* List which compare elements changed in this set */
int n;
tag_t *elements;
tag_t *bomlines;
int i;
CMP_ask_diff_elements(compareset, &n, &elements);
printf("Changed elements: ");
for (i=0; i<n; i++)
{
int element_type;
int priority;
int cache_type;
char *uif_name;
logical suppress_value;
char *lvalue;
char *rvalue;
CMP_ask_element_info(elements[i],
&element_type, &priority, &cache_type,
&uif_name, &suppress_value);
CMP_ask_element_value(compareset, CMP_LHS, elements[i], &lvalue);
CMP_ask_element_value(compareset, CMP_RHS, elements[i], &rvalue);
printf("%s (%s -> %s) ", uif_name, lvalue, rvalue);
MEM_free(uif_name);
MEM_free(lvalue);
MEM_free(rvalue);
}
printf("\n");
MEM_free(elements);
return ITK_ok;
}
Performing a compare
The BOM Compare function BOM_compare_execute supersedes the older BOM_compare call.
The BOM_compare_execute function takes the tags of the two BOM lines to be compared, along
with the compare mode, and the output type, and may optionally be given a compare context.
A compare context (gained from the BOM_compare_create function) is a slot into which the
results of a BOM Compare are put. Each one can only hold the results of one compare. The
BOM_compare_execute function can be told to use the default compare context by passing in
a null tag.
The default compare context is the one used by the user interface when running BOM Compare from
Structure Manager. If you use the default context while the user interface might be using it, you clear
any BOM Compare-related BOM line highlighting and remove the ability to click a report line and have
Structure Manager select the correlate BOM lines. Merely using the default context causes this,
even if you do not ask for BOM line highlighting or text reports.
The standard mode names (BOM_std_compare_*_name) are available as manifest constants from
the bom_tokens.h file; the output selectors (BOM_compare_output_*) are also available from the
bom_tokens.h file. These identifiers are used thus:
BOM_compare_execute( bomcomparecontext,
bomline1,
bomline2,
BOM_std_compare_single_level_name,
BOM_compare_output_bomline | BOM_compare_output_report );
For your own modes, you should establish and use your own manifest constants. The standard
compare modes are described in Structure Manager, but you can also declare new ones using
the BOM_compare_define_mode function. To define your own modes, you need to understand
compare descriptors, which are part of the generic compare engine.
The generic compare engine provides a way to compare two sets of objects, and obtain the
differences between the sets. Each set is divided into subsets containing objects that match each
other according to some user-defined criteria (primary keys). If both sets each have a subset whose
objects match by their primary keys, then the objects within each subset are compared according to
further user-defined criteria (aggregate keys), and the results for this second compare between each
correlate pair of subsets are made available. A subset with no correlate subset is an addition (or
a deletion). The keys are specified with a compare descriptor.
For example, the functionality of BOM Compare has been re-implemented on top of this generic
engine. The two sets are the two sets of BOM lines for comparison; the BOM lines are gathered into
subsets according to item ID and optionally find number, the primary keys, and where both sides of
the compare have correlate subsets, they are compared for quantity and revisions, the aggregate keys.
User exits
BOM Compare can call a number of user exit functions for both ITK- and Structure Manager
window-invoked compares. The user exits are:
• USER_bom_cmp_start_report
• USER_bom_cmp_enter_report
• USER_bom_cmp_item_report
• USER_bom_cmp_parent_report
• USER_bom_cmp_exit_report
• USER_bom_cmp_end_report
These user exit functions are always called by the Structure Manager window-invoked compare.
User exit output is optional for the ITK compare.
A number of ITK functions are available to allow the user exit functions to query the compare results:
• BOM_compare_list_bomlines
• BOM_compare_ask_differences
• BOM_compare_ask_qty
• BOM_compare_ask_rev
• BOM_compare_ask_seqno
This list of functions has been augmented by the generic compare functions that can be applied
to compare engines.
The following two properties are additionally supported when the BOM_compare_legacy_properties
preference is true:
• bl_quantity_change
• bl_revision_change
If no differences are found for a particular BOM line, these properties are all blank.
If the BOM_compare_legacy_properties preference is false or unset:
• bl_compare_change
Contains the type of change for this line, either Added or one or more of the Aggregate keys and
key values for the BOM Compare Mode specified. For the standard compare modes, the keys
are Qty or Rev for quantity changes and revision changes, respectively, so this property might
look like, Qty:1→2 or Rev:A→B. This is a comma-separated list if more than one change occurs
(for example, both quantity and revision changes might look like Qty:1→2,Rev:A→B).
• bl_quantity_change
Contains details of a quantity change (in other words, if Qty is a substring of the
bl_compare_change attribute). The change is stored as the two quantities separated by an
arrow (for example, 1→2).
• bl_revision_change
Contains details of the revision change (in other words, if Rev is a substring of the
bl_compare_change attribute). The change is stored as the two revisions separated by an
arrow (for example, A→B). If more than one revision of an item exists in one of the BOMs then all
the revisions are listed (for example, A,B→C).
Note
If another compare is started (in a particular compare context) before the previous one has
been cleared, the old properties are automatically cleared.
Report output
Report output can be retrieved with the BOM_compare_report function. Note that you must request
report output when you execute the compare, otherwise no report is generated.
The BOM_compare_report function returns the report as an array of formatted character strings. The
first string contains the column headers for the report. The function also returns an array of compare
items that match the report lines. These compare items can be queried with the following functions:
• BOM_compare_list_bomlines
• BOM_compare_ask_differences
• BOM_compare_ask_seqno
These functions have been augmented by the generic compare functions, as for the user exits.
These functions are available only if the BOM_compare_legacy_report preference is true:
• BOM_compare_ask_qty
• BOM_compare_ask_rev
• BOM_compare_ask_seqno
Report lines that do not have matching compare items (for example, the column header line) have
their compare item listed as NULLTAG.
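A minimal retrieval sketch follows. The output parameters of BOM_compare_report shown here are
assumptions based on the description above; consult the Integration Toolkit Function Reference for
the exact signature:
int n_lines, i;
char **report_lines;
tag_t *compare_items;
/* Retrieve the formatted report; the first string holds the column headers */
BOM_compare_report( compare_context, &n_lines, &report_lines, &compare_items );
for ( i = 0; i < n_lines; i++ )
{
    /* compare_items[i] is NULLTAG for lines such as the column header */
    printf( "%s\n", report_lines[i] );
}
MEM_free( report_lines );
MEM_free( compare_items );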
BOM Compare output suppression allows the BOM Compare user exit functions to switch off certain
forms of compare output. For example, the default user exit functions do not perform any useful
actions. To prevent the Structure Manager window-based compare from calling unnecessary user
exit functions, the very first user exit function (USER_bom_cmp_start_report) suppresses all other
user exit output, using the following line of code:
BOM_compare_suppress (line1, BOM_compare_output_userexit);
This suppression lasts for the duration of the current compare. The primary intended use for this
facility is if you wish to replace the BOM Compare's internal report window with your own report. To
achieve this, you must write the user exit functions to generate the report and display it in a window.
Once the report generation code has been written, the problem is how to control which report window
is displayed and when. You must first suppress the user exits when report output was not selected in
the BOM Compare dialog box:
if ( (output & BOM_compare_output_report) == 0)
{
BOM_compare_suppress (line1, BOM_compare_output_userexit );
return ITK_ok;
}
There are many other potential uses for output suppression. Some further examples are given in the
USER_bom_cmp_start_report user exit source code.
• The leave_engine routine is called when all descendants of the latest active parent announced
through the enter_engine function have been visited. All ordering is depth-first recursive, so
each parent node is only reported once immediately before all of its descendants, and once
immediately after all of its descendants. The following code counts the number of BOM lines (on
the left-hand or right-hand side) determined to be different by the compare:
/* Count BOM Lines in a compare, declares
extern int count_bomlines_in_compare ( tag_t compare_context, int cmp_side );
*/
#include <bom/bom.h>
#include <fclasses/cmp.h>
#include <tc/tc.h>
/* Structure for passing nominated side and current count around */
typedef struct
{
int count;
int side;
} count_bomlines_t;
/* Callback to add the number of objects for each set to user_data->count. */
/* Note, it would be trivial to put a loop in here to examine/modify
the BOM Lines in bomlines[], instead of merely counting them */
static int count_bomlines_visit_set ( tag_t compareset, int depth, void * user_data )
{
count_bomlines_t *data = (count_bomlines_t*) user_data;
int count;
tag_t *bomlines;
{
/* Ask compareset for the bomlines on the nominated side. (I'm
only interested in how many there are, not what they are.) */
int itk_ret;
if ( ( itk_ret = CMP_ask_objects_in_set( compareset, data->side, &count,
&bomlines ) ) != ITK_ok )
return itk_ret;
}
data->count += count;
MEM_free( bomlines );
return ITK_ok;
}
/* Run over the BOM compare results passed in, and return the number of
BOM lines, or -1 on error. */
extern int count_bomlines_in_compare ( tag_t compare_context, int cmp_side )
{
count_bomlines_t results = { 0, cmp_side };
/* Use NULL for parent (engine) entry/exit callbacks, because
they're not interesting here */
if ( BOM_compare_visit_engine( compare_context, NULL, NULL, count_bomlines_visit_set,
&results ) != ITK_ok )
return -1;
return results.count;
}
The Product Structure (PS) ITK module manages the creation, manipulation and storage of product
structure data within Teamcenter.
The PS module handles product structure operations at a lower level. It is also responsible for the
creation and manipulation of the basic structure objects (in other words, BOM views and occurrences).
This module is most useful for large scale import of structure data into Teamcenter or batch operations
to update specific structure attributes where the overhead of building up the full presentation of
BOM-related information is not required.
Product structure in Teamcenter is represented by assembly items containing links to component
items which make up the assembly. The structure of items is represented in BOM view objects. The
links to component items are known as occurrences.
The PS ITK module covers the creation of BOM views and BOM view revisions for items and item
revisions, the creation and modification of occurrences to reference the component parts of an
assembly, and the attachment of structure-related attributes to occurrences.
BOM view
When an item is an assembly of other items, its assembly structure is represented by a BOM
view. A BOM view is a workspace object. It is an object distinct from the item in order to support
multiple views functionality, when a separate BOM view is required to represent each different view
of an item's structure.
The class attributes and methods for BOM view in the PS module are shown in the following figure:
• BOM view revisions are all of the revisions of this BOM view. They can be listed using the
PS_list_bvrs_of_bom_view function.
• Viewtype specifies the type of this BOM view with the tag of a View Type object. Use the
PS_ask/set_bom_view_type functions to inquire or modify its value, respectively.
• BOM view also inherits name and description from the workspace object, which can be inquired
and changed using the appropriate WSOM functions.
A BOM view is created by the PS_create_bom_view function. The name of the BOM view must
be unique for all BOM views of the parent item. Be aware that no initial revision is created; this is
done by a separate call to the PS_create_bvr function.
Note
A new or modified BOM view is not saved to the database until you make a call to
the AOM_save function. The item it is placed in is also modified and must also be
saved with a separate call to AOM_save.
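A minimal creation sketch follows. The argument lists of PS_create_bom_view and PS_create_bvr are
illustrative only (see the Integration Toolkit Function Reference for the exact parameters), and
view_type, item, and item_rev are assumed to be valid tags obtained earlier:
tag_t bom_view, bvr;
/* Create the BOM view; the name must be unique among the item's BOM views */
PS_create_bom_view( view_type, "view", "example view", item, &bom_view );
AOM_save( bom_view );
AOM_save( item );      /* the owning item is also modified */
/* Create the initial BOM view revision and place it in the item revision */
PS_create_bvr( bom_view, "view", "example view revision", false, item_rev, &bvr );
AOM_save( bvr );
AOM_save( item_rev );  /* the owning item revision is also modified */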
The structure of an assembly item may change between successive revisions of the item. Therefore
the actual structure information of an item revision is stored in a BOM view revision referenced by that
item revision. A revision of BOM view "A" must specify the structure of a revision of the item that
owns BOM view "A".
There is no one-to-one correspondence between item revisions and BOM view revisions, because
where two or more revisions of an item have the same structure they may share references to the
same BOM view revision. A new working item revision based on a previous released revision retains
a reference to the released revision's BOM view revision until such time as the structure of the
working item revision needs to be modified. An item revision may reference only one revision of a
BOM view at a time.
BOM view revisions are workspace objects. They are presented in the workspace separately from
item revisions for three reasons:
• To allow access controls to be placed on the structure of an item revision independently of access
controls on the rest of the item revision's data. For example, to protect the structure from further
modification while detailed changes are still being made to a drawing.
• To allow the structure of an item revision to be put through a release procedure independently of
the rest of the data for an item revision.
• To allow for multiple views functionality, when an item revision is able to reference BOM view
revisions of different BOM views, each of which represents a distinct view of the structure of the
item.
By default a BOM view revision is shown in the workspace as a specification of an item revision.
The class attributes and methods for BOM view revision in the PS module are as shown in the
following figure.
Note
The PS_set_bvr_precise function does not set view pseudo folder contents to
ItemRevisions. You need to use the BOM_line_set_precise function instead. If you
have the following code in your ITK program:
PS_set_bvr_precise(bvr);
AOM_save(bvr);
The BOM view of which it is a revision is obtained by calling the PS_ask_bom_view_of_bvr function.
A new BOM view revision is created using the PS_create_bvr function, giving the tag of the parent
BOM view and the item revision in which it is to be placed.
The PS_revise_bvr function creates a new BOM view revision based on an existing revision of
the same BOM view. The PS_copy_bvr function creates a new BOM view revision based on an
existing revision of a different BOM view.
Note
A new or modified BOM view revision is not saved to the database until you make a call to
the AOM_save function. This also applies when occurrences of a BOM view revision are
modified. Additionally, when a new BOM view revision is created, the item revision it is
placed in is also modified and must also be saved with a separate call to the AOM_save
function.
Occurrences
The single-level structure of an item revision comprises a collection of links from a BOM view
revision (the parent) to the items that are components of the assembly (the children). These links are
known as occurrences. An occurrence represents the usage of an item or an item revision within
the product structure of a parent. You can distinguish between different kinds of occurrences in the
product structure by referring to occurrence types and occurrence sequencing in the Integration
Toolkit Function Reference. (The Integration Toolkit Function Reference is not available in PDF
format. It is available only in the Teamcenter HTML Help Collection.)
Note
Occurrence is not presented as a distinct class within Teamcenter, rather as an attribute of
BOM view revision. In the PS ITK module, occurrences are always addressed through
their parent BOM view revisions.
• Precise/imprecise
If the child (component part) of an occurrence is a specific item revision, the occurrence is
precise. If the occurrence is a generic reference to an item, it is imprecise. If you have an
imprecise occurrence, the item it references may be resolved at run time to an exact item revision
by the application of a configuration rule.
• Substitutes
Rather than referencing a single component item, an occurrence can contain links to several
substitute items. Substitutes are interchangeable with one another in the assembly. However, one
of these substitutes is specified as the preferred substitute. All substitutes of a single occurrence
share the same attributes: the attributes specified for the preferred substitute.
Any modification to the occurrence substitutes is a modification to the BOM view revision.
Permanent modifications require write access to the BOM view revision. Temporary modifications
may be made during a Teamcenter session but cannot be saved to the database.
The class attributes and methods for occurrence in the PS module are shown in the following figure.
Note
An occurrence quantity must be expressed in the unit of measure attached to the child
(component) item referenced by the occurrence. Where a component item has no unit of
measure, the quantity value is interpreted as the number of component items referenced
by the occurrence. An occurrence which specifies more than one usage of a component
item is known as an aggregate occurrence.
Additional information about defining units of measure can be found in the Integration
Toolkit Function Reference. (The Integration Toolkit Function Reference is not available in
PDF format. It is available only in the Teamcenter HTML Help Collection).
An occurrence also carries a set of flags for storing additional data. Only one flag attribute
is implemented, PS_qty_as_required. When set, it indicates the child item is to be used as
required. Flags are inquired using the PS_ask_occurrence_flag function and changed using the
PS_set/clear_occurrence_flag functions.
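For example, a short sketch along these lines (the parameter order is assumed; bvr and occurrence
are tags obtained from the structure):
logical as_required;
/* Check whether the child item is currently flagged as "as required" */
PS_ask_occurrence_flag( bvr, occurrence, PS_qty_as_required, &as_required );
if ( !as_required )
{
    /* Set the flag and persist the change to the parent BOM view revision */
    PS_set_occurrence_flag( bvr, occurrence, PS_qty_as_required );
    AOM_save( bvr );
}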
A 4x4 transformation matrix can be stored on an occurrence for passing positional information to a
CAD system and is accessed using the PS_ask/set_plmxml_transform functions.
By default, the occurrences of a BOM view revision are listed in the order in which they were created.
This order can be altered using the PS_move_occurrence_to/up/down functions.
Note
A change of ordering is treated as a modification to the parent BOM view revision. Call
the AOM_save function to store the change in the database. Additionally, this ordering
functionality is not available through the Structure Manager user interface. Occurrences
are sorted by find number unless a different custom sort algorithm is supplied for a
Teamcenter installation.
You can create a substitute with the PS_add_substitute function. It takes an existing item and
makes it a substitute to the existing child of an occurrence. If the occurrence previously had no
substitutes, the existing child is used as the preferred substitute.
You can delete a nonpreferred substitute with the PS_delete_substitute function. The preferred
substitute cannot be deleted.
You can make a nonpreferred substitute the preferred substitute using the PS_prefer_substitute
function. If the BOM view revision is write-protected or frozen, the change to the preferred
substitute succeeds, but the change cannot be saved to the database. This case is flagged by the
is_temporary return value.
You can find the occurrence substitutes using the PS_ask_has_substitutes and PS_list_substitutes
functions. The PS_ask_has_substitutes function returns true if the occurrence has two or more
substitutes defined. The PS_list_substitutes function returns a list of the occurrence's substitutes.
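The following sketch puts these functions together. The argument lists are illustrative only, and
substitute_item is assumed to be the tag of an existing item:
int n_substitutes;
tag_t *substitutes;
logical has_subs, is_temporary;
/* Add an existing item as a substitute for the occurrence's child */
PS_add_substitute( bvr, occurrence, substitute_item );
/* Returns true only when two or more substitutes are defined */
PS_ask_has_substitutes( bvr, occurrence, &has_subs );
if ( has_subs )
{
    PS_list_substitutes( bvr, occurrence, &n_substitutes, &substitutes );
    /* Make the first listed substitute the preferred one */
    PS_prefer_substitute( bvr, occurrence, substitutes[0], &is_temporary );
    MEM_free( substitutes );
}
AOM_save( bvr );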
View type
Each BOM view in Teamcenter has a type attribute. A View type is an object that defines an
identifying string for a type of view (for example, design view or manufacturing). The system
administrator defines View types for a Teamcenter installation.
Teamcenter provides a default View type. All BOM views created via the Teamcenter user interface
are of this type. The name of this View type is defined by the PS_default_view_type_name token in
the ps_tokens.h header file.
The class attributes and methods for the View type in the PS module are shown in the following figure.
A View type is a site-defined classification of BOM views. You can get a list of all View types defined
for a Teamcenter installation from the PS_view_type_extent function. You can search for a
particular View type by name using the PS_find_view_type function. You can find the name of a
View type object from its tag by calling the PS_ask_view_type_name function.
Note
Only the system administrator can alter the list of available View types, using the
PS_create_view_type and PS_delete_view_type functions.
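For example, a minimal lookup sketch (the view type name "design" is hypothetical, and the
two-argument forms are assumed):
tag_t view_type;
char *name;
/* Find a View type by its identifying string */
PS_find_view_type( "design", &view_type );
/* Recover the name from a View type tag */
PS_ask_view_type_name( view_type, &name );
printf( "View type: %s\n", name );
MEM_free( name );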
Note type
You can attach text notes to occurrences. Each note has a type (for example, usage or adjustment),
which defines the purpose of the note. The possible types of notes are determined by the set of Note
type objects defined for a Teamcenter installation by the system administrator.
The class attributes and methods for Note Type in the PS module are shown in the following figure:
Note type is a list of types of occurrence note, as defined for a Teamcenter installation by the
system administrator. Each Note type has a short name used as an identifier (for example, at the
head of an attribute column in Structure Manager) and a description which can be used for a longer
explanation of the purpose of the Note type.
Call the PS_note_type_extent function to get a list of available Note types. To find individual types
by name, use the PS_find_note_type function.
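A minimal sketch (the Note type name "Usage" is hypothetical, and the argument lists are assumed):
int n_types;
tag_t *note_types, note_type;
/* List every Note type defined for this installation */
PS_note_type_extent( &n_types, &note_types );
/* Or look one up directly by its short name */
PS_find_note_type( "Usage", &note_type );
MEM_free( note_types );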
Loading of PS objects
When a call to a PS ITK function needs to access the attributes of an object, that object (if not already
loaded) is loaded automatically. When a call to a PS ITK function modifies the attributes of an object,
that object is locked for modification automatically if it is not already locked.
Adding attributes
You can add attributes to PS objects using the PS ITK functions. Each class within the PS module can
have a number of client data attributes defined for it. Each instance of such a class may then have
each of these client data attributes set to reference a POM object. This POM object must be of the
class specified in the definition of the appropriate client data attribute.
When Teamcenter displays the product structure of an assembly, you see a tree of item revisions.
The following figure shows an example:
1. Teamcenter does not store references to exact revisions of items, because the assembly would then
have to be modified every time you wanted to use a new revision of a component. Instead
Teamcenter stores a reference to an item and turns this into an exact item revision at run time by
application of a configuration rule.
The PS module itself does not deal with the use of configuration rules, but configuration impacts
the PS module with the requirement to store references to imprecise items as well as precise
item revisions.
2. PS has been designed to facilitate multiple views functionality. This is done by using BOM
view objects, each of which represents one view of the structure of the item.
In an imprecise case, a structure references an item as well as a BOM view of that item. At run
time, a configuration rule is applied to find the appropriate revision of the item. If this component
item revision references a revision of the specified BOM view, then this component is a subassembly
whose structure is defined by the BOM view revision found. Otherwise, this component is treated as
a piece part (it is not further divisible into a substructure). In the precise case, the item revision is
referenced directly by the occurrence, therefore the configuration step is eliminated.
Where used
The PS ITK module supplies three functions to produce where used reports on item revisions. Each
reports parent item revisions having a BOM view revision with occurrences referencing the given
item revision up to a specified number of levels. (A usage sketch follows this list.)
• PS_where_used_all
This function reports all usages regardless of precise/imprecise status and configuration rules.
• PS_where_used_configured
This function applies a configuration rule (from the CFM module) to report usage in context.
• PS_where_used_precise
This function looks at precise occurrences.
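The following sketch shows the general call shape for a single-level report. The output arguments are
assumptions; the exact signatures of the where-used functions are documented in the Integration
Toolkit Function Reference:
int n_parents, i;
tag_t *parent_revs;
/* Report every parent item revision that uses item_rev, one level up */
PS_where_used_all( item_rev, 1, &n_parents, &parent_revs );
for ( i = 0; i < n_parents; i++ )
{
    /* parent_revs[i] is an item revision tag that can be queried with AOM/ITEM calls */
}
MEM_free( parent_revs );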
• Release assemblies
The Traversal Engine manages the traversal, while you are only required to develop simple ITK
programs. The Traversal Engine allows you to register these functions at different stages of the
traversal. These functions are executed at each node and contain the information of the nodes, which
can be processed. The core module is programmed in object-oriented programming (OOP), so the
framework can be expanded by adding new classes at a later date, and can also be extended to
traverse any Teamcenter structure such as product structure or folders. Presently the core module
relates to product structure only. The ps_traverse utility, located in the TC_ROOT/bin directory, has
also been developed to demonstrate the use of TE.
The features of the Traversal Engine are:
• Represents the Teamcenter structure as a tree data structure with each node representing the
Teamcenter object.
• Provides user exits at different stages and allows the registration of user-defined functions known
as handlers and selectors.
• Allows the handlers and selectors to return statuses that determine the creation of the tree
and the traversal direction.
• Allows product structure configuration through revision rules and saved variant rules during the
creation of the tree.
• Allows you to store and retrieve a float value with a key as a string, typically used for summing
up functionality.
User exits
Teamcenter provides user exits to register selectors and handlers that are executed at different
stages of the traversal activity, such as creation of the tree, traversing in the forward direction,
traversing in the reverse direction, and so on.
Selectors and handlers are user-defined functions that need to be developed following a template.
These functions return the decisions that determine the traversal direction. These are registered at
different stages of the traversal and are executed during the creation and the traversal of the tree.
The selectors are run during the creation of the tree nodes. These determine the values stored in
each node and decide whether a particular branch needs to be traversed or not. The handlers are
run during the tree traversal. The handlers have the information of the node and the information
pertaining to the Teamcenter object that the node represents.
Note
The Integration Toolkit Function Reference is not available in PDF format. It is available
only in the Teamcenter HTML Help Collection.
Minimum requirements
You need the following minimum requirements to run the Traversal Engine:
Installation
To confirm that you have installed TE successfully, check if the following have been added to their
respective directories:
• The libitk.libraryextension library in the TC_ROOT/lib directory.
Once you custom develop your selectors and handlers by using the ITK functions and the TE API
calls, you can compile them into an executable by using the compile and linkitk scripts in the
TC_ROOT/sample directory.
API calls are classified into:
• Initializing and terminating calls. TE modules have to be initialized to use any TE call.
• Configure PS. These configure the product structure using a revision rule or a saved variant rule.
The view type to traverse can also be specified.
• Register handlers and selectors. These register the provided functions as selectors or handlers.
• User interface calls. These write messages to the log file and console and receive input from the
configuration file and the command line arguments.
Sample implementation
You use the import/export object functionality to move data that is associated with Teamcenter objects
between Teamcenter sites with either read-only or read/write privileges.
Note
Use PLM XML import and export when possible.
This functionality is in addition to the IMF_export_file and IMF_import_file routines which
are used to export and import files between the Teamcenter and the operating system.
The IMF_import_file and IMF_fmsfile_import ITK APIs do not validate whether the file
is a text file or a binary file. When using these APIs, FMS does not attempt to
validate the type of file against the type specified on the function call. The caller must
ensure that the file for transfer is of the specified type.
o If you select a (collapsed) folder, all objects inside that folder that can be exported or imported
are included in the operation.
o If you select many objects at one time, they can be exported or imported in a single operation.
• There are command line interfaces to execute the export or import process in batch mode for
items only. They are item_export and item_import.
Note
Imported objects are in the same folder structure as they were when exported.
Object types
The object types, including all their internal supporting data, that can be imported/exported between
Teamcenter sites are:
• Folders
You can choose to export any general folder. You cannot export pseudo folders. Pseudo folders
are folders displayed in your workspace that display objects with a specialized relationship
to an item or item revision.
• Datasets
When exporting, you can choose to export either all versions, the latest version, or a specific
version.
• Forms
When exporting, you can choose to export a form. The definition of the form must be identical at
both sites.
• Item
When exporting, if you choose an item, the item and all its related data (such as the item
revisions, BOM view and BOM view revisions, associated item master and item revision master
forms, and any exportable requirement, specification, manifestation or reference objects) are
exported. Additionally, if the selected item has an assembly (structure), then all the items that
make up the assembly are exported.
You cannot choose part of an item to be exported. For example, you cannot choose an item
revision alone to be exported. You need to select the item which contains this item revision,
in order to export the item. Similarly, the same would be true with BOM view and BOM view
revision objects.
All Teamcenter files that the dataset represents are exported, including the operating system files that
these encapsulate and the dataset's revision anchor.
When you export, there is an option to transfer the ownership to another site. If the ownership is
transferred to a particular site, then when the same objects are imported at that particular site, they
are imported with a read/write privilege. If the ownership is not transferred, then the ownership of the
objects still belongs to the export site. When these objects are imported at another site, they are
imported with a read-only privilege. Any modifications attempted on them are not allowed.
Object ownership
If the owning-user name at the exporting site also exists at the importing site, that user is set as the
owner at the importing site. Only if that lookup fails does the import resort to the simple rule that the
user who runs the import owns the object.
When an object is imported, if the user who exported the object exists at the importing site, the
object's owner is switched from the user at the exporting site to the one at the importing site. If the
user does not exist at the importing site, the owner is the user who performed the import. The group
is the group in which the owner logged into to perform the import.
In general with import, object site ownership takes precedence in determining who has the right
to update the object.
• If such an object is imported at another site, it cannot be used in a release process at that site
if it was exported with read-only privileges. If it was exported with read/write privileges, then it
can be used in a release process.
• If an object is locked (in other words, it has been exported with read-only privileges and is
therefore locked at the receiving site) or it has been exported with read/write privileges but not
re-imported (therefore locked at the sending site), it cannot be used in a release process.
Note
All released objects are exported with their status (for example, Released to Manufacture).
Export route
• It is the responsibility of the user who is exporting objects to inform the system manager which
directories need to be copied and to which site.
• It is the responsibility of the system manager to set up the list of other sites which are known
to the local site.
• It is the responsibility of the system manager to send directories of the exported objects to the
receiving sites, (for example, using tape, ftp, and so on).
Import semantics
The importing of an object is straightforward unless the same object already exists at the importing
site. In this case, the behavior of the objects is described as follows:
• Revisioned objects – datasets
Although datasets do not need to have unique names, they do have unique internal identifiers.
Therefore, when a dataset is imported, the system knows whether it already exists. If the dataset
already exists with the same revision, no import takes place.
o If an item with the same ID does not exist in the database, then an item with that ID is created
and owned by the user doing the import.
o If an item with the same ID exists in the database, then the requirement, manifestation,
reference and specification objects of the importing item are added on to the existing item.
Import route
o Existing manifestations and references can be cut, and new manifestations and references
can be added.
o The item can be deleted from the database only if the item itself is selected. You cannot
delete any of the item revisions alone.
• When an item is imported with read/write privileges (with site ownership), then the following
rules apply:
o The item cannot be deleted from the database and none of its requirements, specifications,
manifestations and references can be removed. This is because another site may have a
reference to it.
o If the item has been previously imported, the subsequent import does not delete any
references, but may add new ones. This occurs so that the import does not wipe out any
local references you may have added. It is important to note that references deleted at the
originating site are not removed from the importing site. You may want to delete the item
before re-importing it to ensure that you get an exact copy of the original.
• Objects, when imported, are given new ACLs. For instance, the owner (as well as their group)
is updated to that of the importing user. However, if the object was exported as read-only, the
importing user is not able to modify the object.
You can add user exits to customize filter rules, actions rules, import methods, export methods, and
schema mapping. Develop your code and then use the Business Modeler IDE to define the extension
and assign it to an extension point.
The following list shows the six PLM XML extension points available for customization in the
Business Modeler IDE, with the user exit name and purpose of each. Each of these user exits or
extension points has one extension, in other words, the base extension.
• USER Filter Registrations (user exit: USER_register_plmxml_filters)
Allows you to register custom filter rules used during import/export.
• USER Register actions (user exit: USER_register_plmxml_actions)
Allows you to register custom action rules used during import/export.
• USER Register Export Methods (user exit: USER_register_plmxml_export_methods)
Allows you to register custom export methods. The export method is registered for exporting an
object out of the Teamcenter database.
• USER Register Import Methods (user exit: USER_register_plmxml_import_methods)
Allows you to register custom import methods. The import method is registered for importing an
object into the Teamcenter database.
• USER Schema Mapping Regs (user exit: USER_register_plmxml_schema_mapping)
Allows you to map Teamcenter and PLM XML objects.
*n_tags = local_num_tags;
*tags = local_tags;
return ITK_ok;
}
/* Displays help message */
static void display_help_message(void)
{
printf( "\n\nsample_plmxml_itk_export: PLMXML export of a simple BVR" );
printf( "\n\nUSAGE: sample_plmxml_itk_export -u=username -p=password -g=groupname
-item=itemname -rev=itemrevision -file=filename");
printf( "\n -h displays this message");
}
To register and use a custom action rule for PLM XML export:
1. Register your custom action handler by defining and assigning the USER Register actions
extension in the Business Modeler IDE.
2. Once you register the custom action handler through the PIE_register_user_action function, the
name that was used to register the action handler will appear in the Action Handler list in the
PLM XML/TC XML Export Import Administration application.
Your main goal in writing an action rule is to create a C++ function for that rule. This function must
adhere to the following constraints:
• The action rule must return an int (ITK_ok on success).
• Your function must use a parameter that is the tag of the session. The function signature must
adhere to the following:
int your-function-name (METHOD_message_t* msg, va_list args)
• Your function must perform ITK-based calls to the session using the public interface exposed
to the session
Your function can then perform calls to the exposed objects from within the session.
The Teamcenter PLM XML/PIE framework allows for the pre, during, and post functional processing
of action rules on behalf of the user. The typical functional processing that occurs for each of these
types of action rules could be:
• Pre-processing
Functions that need to verify or prepare the application, data, or Teamcenter to prepare for the
translation. This rule is executed as follows:
o For export
Before the translation occurs.
o For import
Before the XML document is loaded.
• During processing
If the translation engine verifies that you have a during action rule, it is executed as follows:
o For export
After the translation and before saving the XML document.
o For import
After the document is loaded and before the translation.
• Post-processing
Post-processing is done after the translation is complete, but the session still has control over
the translation.
The following code shows an example of ITK code for a pre-action rule. This example adds user
information to the session prior to translation:
#include <itk/mem.h>
#include <tc/tc.h>
#include <tc/envelope.h>
#include <property/prop_errors.h>
#include <pie/pie.h>
#include <itk/bmf.h>
#include <tccore/custom.h>
#include <user_exits/user_exit_msg.h>
#include <tc/tc_macros.h>
#include <bmf/libuserext_exports.h>
extern USER_EXT_DLL_API int plmxml_user_exit_for_plmxml_actions
(METHOD_message_t* msg, va_list args);
#include <bmf/libuserext_undef.h>
extern int custom_preaction_method(tag_t pie_session_tag)
{
int ifail = ITK_ok;
int max_char_size = 80;
char *title = (char *) MEM_alloc(max_char_size * sizeof(char));
char *value = (char *) MEM_alloc(max_char_size * sizeof(char));
printf("Executing custom_preaction_method() ....\n");
strcpy(title, "PLM XML custom pre-action session title");
strcpy(value, "PLM XML custom pre-action session value");
ITKCALL( PIE_session_add_info(pie_session_tag, 1, &title, &value) );
MEM_free(title);
MEM_free(value);
return (ifail);
}
int plmxml_user_exit_for_plmxml_actions(METHOD_message_t* msg, va_list args)
{
int ifail = ITK_ok;
printf("Executing plmxml_user_exit_for_plmxml_actions() ....\n");
/* Register the custom pre-action rule. The registration name is hypothetical and
   the callback typedef is assumed; it mirrors the PIE_register_user_filter pattern
   used in the filter rule example below. */
ITKCALL( PIE_register_user_action( "customPreActionName",
(PIE_user_action_func_t)custom_preaction_method ) );
return ifail;
}
Filter rules allow a finer level of control over the data that gets translated along with the primary
objects by specifying that a user-written function is called to determine the operation applied against a
given object. For example, you want the following to occur: if the item revision has a UGMASTER
dataset, you want that translated and all other datasets skipped; if there is no UGMASTER dataset,
you want to export the JT dataset instead. This requires the closure rule to base its traversal on which
child elements exist, which the closure rule is not designed to do. For this kind of control,
you need a filter rule.
To register and use a custom filter rule for PLM XML:
1. Register your custom filter rule by defining and assigning the USER Filter Registrations
extension in the Business Modeler IDE.
2. Once you register the custom filter through the PIE_register_user_filter function, the name
that was used to register the filter rule appears in the Filter Rule list in the PLM XML/TC XML
Export Import Administration application.
The following code shows an example of ITK code for a filter rule. This rule says if the object in the
current call path is not released, do not process the object:
#include <itk/mem.h>
#include <tc/tc.h>
#include <tc/envelope.h>
#include <property/prop_errors.h>
#include <pie/pie.h>
#include <itk/bmf.h>
#include <tccore/custom.h>
#include <user_exits/user_exit_msg.h>
#include <tc/tc_macros.h>
#include <bmf/libuserext_exports.h>
extern USER_EXT_DLL_API int plmxml_user_exit_for_filter_actions
(METHOD_message_t* msg, va_list args);
#include <bmf/libuserext_undef.h>
/* Forward declaration of the filter rule callback registered below */
extern int custom_filter_rule_method (void *msg);
extern int plmxml_user_exit_for_filter_actions(METHOD_message_t* msg, va_list args)
{
int stat = ITK_ok;
printf("Executing plmxml_user_exit_for_filter_actions() ....\n");
// register custom filter rule here
ITKCALL( PIE_register_user_filter( "customFilterRuleName",
(PIE_user_filter_func_t)custom_filter_rule_method ) );
return stat;
}
extern int custom_filter_rule_method (void *msg)
{
tag_t objTag = NULLTAG;
WSO_status_t* stats = NULL;
PIE_rule_type_t result = PIE_travers_no_process;
int cnt = 0;
Change your code to remove the PIE_session_set_target_pathname function and add the file
path to the file name as shown in this example:
PIE_session_set_transfer_mode(tSession, transfer_mode_tags[0]);
PIE_session_set_revision_rule(tSession,tRevRule);
PIE_session_set_file (tSession, "filepath\\file-name.xml");
PIE_session_export_objects (tSession,1,&children[k]);
If you are not exporting bulk data files, you can use the PIE_session_set_target_pathname function.
2. Site 2 passes the objects through its scoper, which applies a transfer option set. The transfer
option set contains a transfer mode with closure rules that defines the data to be passed along
with the objects.
3. The site 2 scoper then passes the data to its exporter, which exports the data in the native XML
format of site 2 to the data mapper.
4. The data mapper maps the data to the TC XML format and passes it to the site 1 importer.
5. The site 1 importer receives the data, passes it through its scoper, and the data is available for
site 1 use.
When you create ITK programs to execute an exchange, you must follow this process in the code.
There are several include files that contain the functions required for Data Exchange:
• tie/tie.h
Contains functions needed for export and import of TC XML files: TIE_export_objects,
TIE_confirm_export, and TIE_import_objects.
• pie/pie.h
Contains functions needed for handling transfer option sets:
PIE_get_available_transfer_option_sets and PIE_describe_option_set.
• gms/gms.h
Contains functions that ensure that previously replicated data is synchronized between sites:
GMS_synchronize_site and GMS_identify_outdated_replica_objects.
It also contains functions that import and export objects even if Global Services is not enabled:
GMS_import_objects_from_offline_package and GMS_export_objects_to_offline_package.
These functions can also be used when Global Services is enabled.
Briefcase
Use Briefcase to exchange data with other Teamcenter sites, either online through a network
connection or offline using an archive file. In addition to the include files used for Data Exchange
between Teamcenter and Teamcenter Enterprise, there are additional functions required specifically
for Briefcase:
• publication/gms_publication.h
Contains functions required to move objects in and out of the archive file:
GMS_request_remote_export and GMS_request_remote_import.
All export and import calls use Global Services. The GMS_request_remote_export function
requests that the selected object be exported to a given site. This function is used for both online
and offline exporting. For offline exporting, set the session_option_names parameter to offline
and its corresponding session_option_value parameter to true.
• gms/gms_res.h
Contains the functions required to check files in and out: GMS_RES_site_checkout,
GMS_RES_site_checkin, GMS_RES_cancel_site_checkout, and GMS_RES_is_site_reserved. Use
these functions to lock the object at the remote site that you want to modify and unlock it when
you are done.
TIE framework
A set of wrappers is provided that allows ITK programmers to extend the Teamcenter Import Export
(TIE) framework. Following are the extension tasks you can perform:
• Register actions. The input to an action is the GenericSession object. The wrapper is provided so
the ITK programmer can register actions.
• Register filters. The ITK programmer can supply C functions to examine each processed object
based on object type. The input to a filter is TIE_message_t.
int TIE_register_user_action
(char* actionName, TIE_user_action_func_t user_action )
If your ITK program transfers Multi-Site site ownership over a network, it must call the
OBJIO_set_site_based_attribute function to use the Synchronous Site Transfer (SST) protocol.
Additional information about this function can be found in the Integration Toolkit Function Reference.
Note
The Integration Toolkit Function Reference is not available in PDF format. It is available
only in the Teamcenter HTML Help Collection.
You can use the Application Reference (APPREF) ITK module to map application-specific unique
IDs to a Teamcenter unique ID. The APPREF functions populate and query the Publication Record
class managed by the Object Directory Services (ODS) with application-specific data. More than
one application-specific ID can be associated with one Teamcenter item ID. The APPREF functions
are called by the application that needs its IDs mapped to Teamcenter. For example, Teamcenter
Integration for NX I-deas can call APPREF functions to map its GUIDs to a Teamcenter item ID or UID.
Additional information about the APPREF functions can be found in the Integration Toolkit Function
Reference.
Note
The Integration Toolkit Function Reference is not available in PDF format. It is available
only in the Teamcenter HTML Help Collection.
The functions are called at ODS client sites, which are sites configured to communicate
with an ODS site using publication and remote search operations. The TC_appref_registry_site
preference defines the ODS site that is used for all APPREF operations.
The following table shows the basic process and data flow for the APPREF module.
2. In the Options dialog box, accessed from the Edit menu in My Teamcenter, define the
TC_default_export_sites preference with a list of the desired target sites. For example:
TC_default_export_sites=
SITE1
SITE2
Caveats
The TC_default_export_sites preference is not used during interactive export operations.
This procedure is only applicable when not transferring site ownership. If the ITK program supports
transfer of site ownership by calling the OBJIO_set_site function, this call is re-directed to call the
OBJIO_set_owning_site function, thereby making the ITK program compliant with the Teamcenter
requirements.
The OBJIO_set_site function is obsolete. Therefore, Siemens PLM Software highly recommends
that you modify any existing ITK programs to call the new OBJIO_set_target_sites (when not
transferring site ownership) or OBJIO_set_owning_site (when transferring site ownership) functions.
The USER_is_dataset_exportable user exit helps refine Multi-Site dataset exporting. The standard
Multi-Site framework lets you export all datasets that have a particular relationship, but it does not
let you further filter which of those datasets are exported.
For example, assume you have this structure:
Item1
ItemRevision1
Dataset1 (attached to ItemRevision1 with the Reference relation)
Dataset2 (attached to ItemRevision1 with the Reference relation)
If you attempt to export Item1 to a remote site, you can select which relationships to be exported, and
by selecting the Reference relation, you can export both Dataset1 and Dataset2. However, you
cannot export Dataset1 by itself without also exporting Dataset2.
You can use the USER_is_dataset_exportable user exit to overcome this limitation. This user
exit can be used to check if a dataset should be allowed to be exported based on its classification
type. Following is the code for this user exit:
USER_EXITS_API int USER_is_dataset_exportable ( tag_t dataset_tag,
int n_target_sites,
tag_t * target_sites,
logical is_transferring_ownership,
logical modified_objects_only,
logical * isExportable
)
Parameter                     Input or output   Description
dataset_tag                   Input             Specifies the tag of the dataset object.
n_target_sites                Input             Specifies the number of target sites.
target_sites                  Input             Defines the target sites list.
is_transferring_ownership     Input             Indicates whether the Multi-Site transfer also transfers ownership.
modified_objects_only         Input             Indicates whether the Multi-Site transfer includes only modified objects.
isExportable                  Output            Indicates whether this dataset should be exported by Multi-Site.
Note
The Integration Toolkit Function Reference is not available in PDF format. It is available
only in the Teamcenter HTML Help Collection.
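The following is a minimal sketch of how this user exit might be implemented. It blocks export of
datasets of one particular type; the type name MyDatasetType is hypothetical, and reading the type
name through the object_type property with AOM_ask_value_string is an illustrative assumption,
not the only way to obtain it.
/* Sketch only: block Multi-Site export of a hypothetical dataset type.
   Requires <string.h> plus the usual ITK headers for AOM, MEM, and user exits. */
USER_EXITS_API int USER_is_dataset_exportable ( tag_t dataset_tag,
                                                int n_target_sites,
                                                tag_t * target_sites,
                                                logical is_transferring_ownership,
                                                logical modified_objects_only,
                                                logical * isExportable )
{
    int stat = ITK_ok;
    char *type_name = NULL;
    /* Default: allow the dataset to be exported. */
    *isExportable = TRUE;
    /* Reading the type through the object_type property is an assumption. */
    stat = AOM_ask_value_string( dataset_tag, "object_type", &type_name );
    if ( stat == ITK_ok && type_name != NULL )
    {
        /* MyDatasetType is a hypothetical type that should never leave this site. */
        if ( strcmp( type_name, "MyDatasetType" ) == 0 )
        {
            *isExportable = FALSE;
        }
        MEM_free( type_name );
    }
    return stat;
}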
5. Modify the audit definition objects. You can modify the TC_DATA\auditdefinition.dat file to
add the new handler to the audit definition object you want to change. Then either run the
TC_BIN\define_auditdefs -f=auditdefinition.dat command or interactively log on to the Audit
Manager application in the rich client and modify the audit definition objects to add the new
handler.
• Create a bid package to provide to vendors so they can develop their quotes.
Use the VMS_create_bidpackage and VMS_create_bidpackage_lineitem functions. You can
also revise your bid package with bid package revisions, similar to items in Teamcenter.
• Create vendor manufacturer parts and associated quotes that fulfill the requirements of your
commercial parts. The manufacturer part and quote information is provided by your vendors. You
can use different manufacturer parts for the commercial part in your product.
Use the VMS_create_manufacturer_part function for the vendor part and connect it to your part
information with the VMS_add_mfg_part_to_comm_part function. Use the VMS_create_quote
function to create the quote and use the VMS_add_quote function to add the quote information
to your bid package line item.
Note
In the user interface, manufacturing parts are called vendor parts.
Include the vm\vms.h file in your program and call the VMS_init_module function first to initialize
the vendor management classes. For an example that uses the vendor management functions,
open the TC_ROOT\sample\vms\vms_util.c file.
Additional information about vendor management ITK functions can be found in the Integration
Toolkit Function Reference.
Note
The Integration Toolkit Function Reference is not available in PDF format. It is available
only in the Teamcenter HTML Help Collection.
The following example code shows how to retrieve item report definitions:
static void list_reports()
{
    tag_t *tags = NULL;
    int n_tags = 0;
    const char* itms = "itm";
    /* status is assumed to be a report definition status value declared elsewhere. */
    ITK_CALL( CRF_get_report_definitions(itms, NULL, status, NULLTAG, &n_tags, &tags) );
    printf("----BEGIN ITEM REPORTS----\n");
    for (int loopFound = 0; loopFound < n_tags; loopFound++)
    {
        /* The original sample converted each tag to its UID here; printing the
           tag value keeps the loop self-contained. */
        printf("Report definition tag: %u\n", tags[loopFound]);
    }
    MEM_free(tags);
    printf("----END ITEM REPORTS----\n");
}
The following example code shows how to find and generate a summary report:
static void find_and_generate_summary_report()
{
    tag_t rd_tag = NULLTAG;
    tag_t qs_tag = NULLTAG;
    const char* id = "TC_2007_00_SUM_RPT_0004";
    ITK_CALL( CRF_find_report_definition(id, &rd_tag) );
    printf("summary report rd tag: %u\n", rd_tag);
    int count = 1;
    char **entryNames  = (char **)MEM_alloc(sizeof(char *) * count);
    char **entryValues = (char **)MEM_alloc(sizeof(char *) * count);
    int idx = 0;
    char *report_path = NULL;
    char *param1 = "User Id";
    char *value1 = "*";
    entryNames[idx]  = param1;
    entryValues[idx] = value1;
    tag_t dataset = NULLTAG;
    char *data_set_name = NULL;
    CRF_generate_report(rd_tag, dataset, 0, NULL,
                        count, entryNames, entryValues,   /* report criteria */
                        data_set_name,                    /* dataset name */
                        &report_path);
    if ( report_path != NULL )
        printf("Summary report generated at %s\n", report_path);
    MEM_free(entryNames);
    MEM_free(entryValues);
}
Additional information about the Report Builder functions can be found in the Integration Toolkit
Function Reference.
Note
The Integration Toolkit Function Reference is not available in PDF format. It is available
only in the Teamcenter HTML Help Collection.
Customize subscriptions
You can implement a custom subscription event handler using the TcSubscriptionActionMsg ITK
method. To subscribe using a custom event and custom handler:
1. Modify the libuser_exits.dll file to include the CUSTOM_event_handler action handler.
a. Shut down all Teamcenter services.
Note
The install_event_types utility is deprecated, and therefore you can no longer
create event types using this utility. Instead, you must create custom event types
using the Business Modeler IDE.
b. Map the event to a business object type using the Business Modeler IDE.
3. Subscribe to the object and post the event using the ITK utility.
a. Create the subscribe_to_custom_event.cxx file.
/*HEAD SUBSCRIBE_TO_CUSTOM_EVENT.CXX CCC ITK */
/*
%TC_ROOT%\sample\compile -DIPLIB=none subscribe_to_custom_event.cxx
%TC_ROOT%\sample\linkitk -o
subscribe_to_custom_event subscribe_to_custom_event.obj
*/
#ifdef __cplusplus
extern "C" {
#endif
#include <stdlib.h>
#include <stdarg.h>
#include <emh.h>
#include <item.h>
#include <standardtceventtypes.h>
#include <subscription.h>
#include <tc.h>
#include <tcactionhandler.h>
#include <tceventmgr.h>
#include <tceventtype.h>
#include <tctype.h>
static void ECHO(char *format, ...)
{
    char msg[1000];
    va_list args;
    va_start(args, format);
    vsnprintf(msg, sizeof(msg), format, args);
    va_end(args);
    printf("%s", msg);
    TC_write_syslog(msg);
}
#define IFERR_ABORT(X) (report_error( __FILE__, __LINE__, #X, X, TRUE))
#define IFERR_REPORT(X) (report_error( __FILE__, __LINE__, #X, X, FALSE))
static int report_error(char *file, int line, char *call, int status,
logical exit_on_error)
{
if (status != ITK_ok)
{
int
n_errors = 0;
const int
*severities = NULL,
*statuses = NULL;
const char
**messages;
EMH_ask_errors(&n_errors, &severities, &statuses, &messages);
if (n_errors > 0)
{
ECHO("\n%s\n", messages[n_errors-1]);
EMH_clear_errors();
}
else
{
char *error_message_string;
EMH_get_error_string (NULLTAG, status, &error_message_string);
ECHO("\n%s\n", error_message_string);
}
ECHO("error %d at line %d in %s\n", status, line, file);
ECHO("%s\n", call);
if (exit_on_error)
{
ECHO("%s", "Exiting program!\n");
exit (status);
}
}
return status;
}
static void do_it(void)
{
tag_t event_type = NULLTAG;
IFERR_ABORT(TCEVENTTYPE_find("CustomEventName", &event_type));
if (event_type == NULLTAG)
{
ECHO("CustomEventName not found!\n Exiting Program\n");
return;
}
tag_t handler = NULLTAG;
IFERR_ABORT(TCACTIONHANDLER_find_handler("CustomHanderId", &handler));
if (handler == NULLTAG)
{
ECHO("CustomHanderId not found!\n Exiting Program\n");
return;
}
Workflow
The EPM module is a tool designed to facilitate the specification of processes (procedures),
resources, and data constrained by a company's business rules. A typical product information
management (PIM) system can be viewed as a process model consisting of procedures to be
performed in a well-defined sequence; a resource model, which addresses the users, groups, roles,
and applications a business uses; and a data model, which addresses the different types of data
used to run the business's computer-aided applications. The company's business rules determine
how the three models interact with one another. EPM addresses the procedures portion of this model.
A procedure can be viewed as a task or group of tasks which, when performed in a certain sequence
with defined resources and rules, produces a defined output. The fundamental element of a
procedure is a task. The following figure shows a model of this.
Task model
EPM is designed to support this model. EPM contains analogs to all the elements of the task model
shown. The relationships are shown in the following table.
A task is always in some state and it can change from one state to another by successfully performing
an action. An action may have pre-conditions and business rules imposed on it. Any of the imposed
business rules must be satisfied before the proposed action can be performed.
A business rule is a statement of some constraint on the performance of an action. It may be derived
from empirical data and knowledge or other procedures. Tasks consume resources to produce
outputs which, in EPM, are associated to the task by attachments. Each of the EPM elements
is discussed in more detail in the next section.
• Task
• Action
• Attachment
• Business rule
Jobs
A job is the run-time instantiation of a procedure. It is a collection of tasks just as a procedure is a
collection of task definitions. A procedure is the system definition of a business process. Procedures
are defined interactively through the Procedures dialog box.
Tasks
You should configure the process template so there is a one-to-one relationship between the number
of valid result values and the number of paths, unless you want to overload values onto a single
path so the process traverses the same path for a number of results. For example, if you want five
different valid values that can be used to set result, there must be five separate flow lines—one line
for each of the five valid result values. Each line has a value associated with it, and this value must
match one of the potential valid result values. If the value used to set the result does not match any
of the path values, the process does not advance. By default, the result value is displayed as
a label on the flow line associated with a workflow branch path. You can customize text labels
by registering the text string.
Dynamic participants
If you use dynamic participants in your workflow and want to customize the participants, you can
create participants with the EPM_create_participants function. To add or remove participants for
an item revision, use the ITEM_rev_add_participants or ITEM_rev_remove_participant function,
respectively.
Action handlers
Action handlers are routines that implement the action with which they are associated. The
association between an action and handlers is defined interactively in the Task Definition dialog box.
Actions can be either displayed during run time or triggered programmatically. To display the action
string of the current action applied to a task during run time, use the EPM_task_action_string
function. To trigger actions programmatically, use the EPM_trigger_action function to trigger an
action in a specific task. This is useful in triggering a parallel task to start.
After writing your own handlers, you must register them with the system using the
EPM_register_action_handler function. This makes the handler available to the system administrator
for implementing EPM procedures. The system administrator can then configure the system to
find your custom handler and execute it whenever the action with which the handler is associated
is invoked on a task.
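For example, a module initialization function might register a custom action handler the same way
the rule handler registration example later in this section does. The handler name EPM-demo-handler
and the function demo_action_handler are hypothetical, and the exact parameter list of
EPM_register_action_handler is an assumption to verify in the Integration Toolkit Function Reference.
/* Sketch only: register a hypothetical action handler from a module initialization function. */
extern int demo_action_handler(EPM_action_message_t msg);   /* hypothetical handler */
int USER_epm_init_module()
{
    int s;
    /* Assumed to take a registered name, a description, and the handler function,
       mirroring EPM_register_rule_handler. */
    s = EPM_register_action_handler("EPM-demo-handler", "", demo_action_handler);
    return s;
}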
Predefined actions
EPM comes with predefined actions that can occur on any task.
• Remove Attachment
• Approve
• Reject
• Promote
• Demote
• Refuse
• Assign Approver
• Notify
Tip
Some Perform sub-actions can execute a handler multiple
times. Before you place a handler on the Perform action,
be sure that you want the handler to be executed more than
once. For example, if the handler is Notify (EPM-notify),
reviewers receive the same notification at each execution.
Note
These do not change the state of the task.
State transitions
Some actions cause state transitions. For example, applying the Assign action to a task that is in
the Unassigned state causes a state transition to the Pending state. Applying the Start action
causes a state transition from Pending to Started. Applying the Complete action can cause a state
transition from Started to Completed.
As the action handlers are executed, the task keeps track of its progress by setting an attribute
called state. You can inquire about the state of a task using the EPM_ask_state function. Next use
EPM_ask_state_string to get the string value of the state. Possible states that a task can take
are as follows:
• Unassigned
• Pending
• Started
• Completed
• Skipped
• Suspended
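As a rough sketch, querying and displaying a task state might look like the following. The exact
signatures of EPM_ask_state and EPM_ask_state_string are assumptions here (an EPM_state_t
output argument and a string conversion, respectively); confirm them in the Integration Toolkit
Function Reference.
/* Sketch only: query a task's state and print it. Signatures are assumptions. */
static int print_task_state(tag_t task)
{
    EPM_state_t state;
    char *state_string = NULL;
    int stat = EPM_ask_state(task, &state);
    if (stat == ITK_ok)
    {
        stat = EPM_ask_state_string(state, &state_string);
        if (stat == ITK_ok && state_string != NULL)
        {
            printf("Task state: %s\n", state_string);
            MEM_free(state_string);
        }
    }
    return stat;
}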
Attachments
Attachments are other objects that are associated with the task during run time. These objects are
typically items that need approval, supporting data, forms that need to be completed for the task, and
so on. There are several built-in types of attachments.
A complete list of attachments can be found in the Enterprise Process Modeling section of the
Integration Toolkit Function Reference. (The Integration Toolkit Function Reference is not available in
PDF format. It is available only in the Teamcenter HTML Help Collection.)
For ease of maintenance, define the ECN macro in your local header file. For example:
#define ECN (EPM_user_attachment +1)
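Additional custom attachment types can be defined the same way, each offset from
EPM_user_attachment. The DEVIATION name below is purely illustrative:
#define ECN       (EPM_user_attachment + 1)   /* custom attachment type from the example above */
#define DEVIATION (EPM_user_attachment + 2)   /* hypothetical second custom attachment type */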
Business rule handlers are programs that check compliance with business rules before any action
handlers can be executed within an action. Like action handlers, the association between actions
and rule handlers is defined interactively in the Task Definition dialog box. There are business
rule handlers available in the system that you can readily use, or you can write your own. Use the
EPM_register_rule_handler function to register the rule handlers you have written.
A procedure can be customized with EPM in several ways. The simplest method is to associate
available action or rule handlers from the toolkit to specific actions to obtain the desired result. With
this method, you do not need to write a program or function to customize a procedure. If none of the
available handlers meet your specific needs, then you can write a custom handler using the ITK.
Such a custom handler can then be used along with the Teamcenter-provided handlers to specialize
the behavior of a task.
A handler is a small ITK function designed to accomplish a specific job. There are two kinds of
handlers: rule handlers and action handlers. Rule handlers enforce business rules. An action handler
causes something to happen and usually is responsible for a state transition.
If the rule handler returns EPM_go, the action handlers are executed and a state transition
occurs. If the rule handler does not return EPM_go, the action handlers do not execute and the state
transition does not occur.
Since action handler programs are essentially ITK functions, they should be written based on the
same guidelines followed in ITK programs. The epm.h header file must be included in your source
code. The standard interface for an action handler is as follows:
int sample_handler(EPM_action_message_t message)
• data
System data.
• action
Action that was triggered.
• proposed_state
State that the task changes to after actions are completed without errors.
• arguments
Argument specified in the procedure.
Note
The action handler must return an int status code that can be compared to ITK_ok to
determine if the handler was executed successfully.
2. Compile your program using the compile script in the TC_ROOT/sample directory. The sample
compile script writes the resulting object file in the current directory. Modify the compile script if
the default behavior is not sufficient.
3. Register your new handler to the system by writing a module initialization function (or modifying it
if one already exists) to call the EPM_register_action_handler function for the action handler
you have written. Compile the module initialization function.
Note
Complete step 4 only if this is the first handler you are registering. You can skip this
step for the subsequent handlers that you are going to write.
You can also register handlers using the Business Modeler IDE.
4. Modify the user_gsshell.c file in the TC_ROOT/sample directory to add a call to your module
initialization function in the USER_gs_shell_init_module function. Compile the user_gsshell.c
file using the TC_ROOT/sample/compile script.
5. Create the user exits library with the TC_ROOT/sample/link_user_exits script. This script
creates the libuser_exits sharable library from all of the object files in your current directory.
The first call to the TC_next_argument function, given the arguments field from the message
structure, returns the first argument specified in the task definition window for this handler. Each
successive call to TC_next_argument returns the next argument in the list. Use the
TC_init_argument_list function to start over at the beginning of the list and get the first argument
again. All action handlers should return ITK_ok if no error occurred or the appropriate ITK error
code if one did.
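The following is a minimal sketch of an action handler that walks its argument list. The return value
of TC_next_argument (the next argument as a string, NULL when the list is exhausted) is an
assumption to verify in the Integration Toolkit Function Reference.
/* Sketch only: an action handler that echoes the arguments configured for it
   in the task definition. */
int sample_handler(EPM_action_message_t message)
{
    char *argument = NULL;
    TC_init_argument_list(message.arguments);   /* start at the first argument */
    while ((argument = TC_next_argument(message.arguments)) != NULL)
    {
        printf("Handler argument: %s\n", argument);
    }
    return ITK_ok;
}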
2. Compile your program using the compile script in the TC_ROOT/sample directory. The sample
compile script writes the resulting object file in the current directory. Modify the compile script if
the default behavior is not sufficient.
3. Register your new handler to the system by writing a module initialization function (or modifying
it if one already exists) to call the EPM_register_action_handler function for the rule handler
you have written. Compile the module initialization function.
Note
Complete step 4 only if this is the first handler you are registering. You can skip this
step for the subsequent handlers that you are going to write.
4. Modify the user_gsshell.c file in the TC_ROOT/sample directory to add a call to your module
initialization function in the USER_gs_shell_init_module function. Compile the user_gsshell.c
file using the TC_ROOT/sample/compile script.
5. Create the user exits library with the TC_ROOT/sample/link_user_exits script. This script
creates the libuser_exits sharable library from all of the object files in your current directory.
EPM_decision_t
The EPM_decision_t is a user-defined type defined as:
typedef enum EPM_decision_e
{
EPM_nogo,
EPM_undecided,
EPM_go
} EPM_decision_t;
• EPM_nogo
Rule or handler failed; constraints are not satisfied, so the action and state transition do not occur
• EPM_undecided
Still unknown whether the decision is a go or no go because of insufficient or pending data;
action and state transitions should not occur
• EPM_go
Rule or handler passed; constraints satisfied
EPM_rule_message_t
The EPM_rule_message_t is a typedef defined as:
typedef struct EPM_rule_message_s
{
tag_t task;
EPM_action_t proposed_action;
TC_argument_list_t* arguments;
tag_t tag;
} EPM_rule_message_t;
• task
Tag of the task on which the action is proposed
• proposed_action
Action that is about to be performed
• arguments
Arguments specified for the handler in the procedure
• tag
System data
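A rule handler body generally follows the pattern below. This is a minimal sketch that only inspects
the proposed action; a real handler would evaluate an actual business condition (for example, the
responsible party or the attachments) before returning EPM_go or EPM_nogo.
/* Sketch only: allow the Perform action to proceed and leave any other action undecided. */
EPM_decision_t demo_rule_handler(EPM_rule_message_t msg)
{
    EPM_decision_t decision = EPM_undecided;
    if (msg.proposed_action == EPM_perform_action)
    {
        decision = EPM_go;   /* constraint satisfied; the action may proceed */
    }
    return decision;
}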
All rule handlers must be registered. The following example illustrates how you register a rule handler.
Note
You can also register handlers using the Business Modeler IDE.
The following example uses the USER_epm_init_module function that registers a function
called EPM_check_responsible_party through the EPM_register_rule_handler function as the
EPM-check-responsible-party handler.
When defining tasks interactively, use the registered name of the EPM-check-responsible-party
handler to add the handler to an action instead of the full name of the function,
EPM_check_responsible_party:
int USER_epm_init_module()
{
int s;
s = EPM_register_rule_handler("EPM-check-responsible-party", "",
EPM_check_responsible_party);
return s;
}
Register the USER_epm_init_module() function through USER_gs_shell_init_module().
The USER_gs_shell_init_module() function exists in user_gsshell.c located in the
user_exits directory. The registration is shown as follows:
USER_gs_shell_init_module()
{
...
USER_epm_init_module();
}
Compile all of the above mentioned code and create a user exits library. Information about creating a
user exits library can be found in the Integration Toolkit Function Reference.
Note
The Integration Toolkit Function Reference is not available in PDF format. It is available
only in the Teamcenter HTML Help Collection.
Validation rules
• Whether the check is mandatory or optional. A mandatory check must both run and pass. An
optional check must run but does not have to pass.
• Event names where the check is applicable. When applicable event names are specified for a
validation agent (checker), the check result is verified only when these events happen. If no event
name is specified, the checker applies to all events. The event name can contain an asterisk as a
wildcard. Event names can be marked as exclusive.
• A list of parameter and value pairs. The parameter values need to be verified from the validation
result log file.
Validation rules are defined based on your business model. In a production environment, you can
define multiple validation rule files to suit different business processes.
o Mandated tag
This tag is mandatory. If a check is mandatory (the check must run and pass), type yes
between the opening and closing tag. If not, type no. When a check is not mandatory, the
check must run, but the result status is ignored.
o Datasets tag
This tag is mandatory. You must define the list of applicable dataset types.
o Events tag
This tag is optional. The events contain the event values where the check is applicable. If
you do not define an event value list, then the rule applies at all times. The event values can
contain an asterisk as a wildcard. The values listed can be exclusive when the exclude
attribute is set to yes.
If you define the class attribute for the Events tag, you set which target revision attribute
value is tested against the event values list.
o Parameter tag
Only parameters that are logged at the profile level (for a profile check result log file) are supported.
Child checker parameters must be promoted to the profile level to be verified by the
validation rule APIs.
• You can define rules in any order in the validation rule file.
• The same checker can be instanced in the definition of more than one rule in the same rule file.
<Rule_Set
name="Test_rule_set_1"
description="Power Train Event 0 at Milestone 3"
user="your_user_id" date="2005-12-25"
xmlns="http://www.plmxml.org/Schemas/PLMXMLSchemaValidationRule"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.plmxml.org/Schemas/
PLMXMLSchemaValidationRule validation_rule.xsd" >
<Rule>
<Checker name="mqc_check_dim_attach_solid"></Checker>
<Mandated>no</Mandated>
<Datasets>
<type>UGMASTER</type>
<type>UGPART</type>
</Datasets>
<Events class="ITEMREVISION:owning_group" exclude="yes">
<value>chassis.pe.company</value>
<value>powertrain.pe.company</value>
</Events>
</Rule>
<Rule>
<Checker name="mqc_examine_geometry"></Checker>
<Mandated>yes</Mandated>
<Datasets>
<type>UGPART</type>
<type>UGMASTER</type>
</Datasets>
</Rule>
<Rule>
<Checker name="mqc_check_dim_attach_solid"></Checker>
<Mandated>no</Mandated>
<Datasets>
<type>UGPART</type>
</Datasets>
</Rule>
<Rule>
<Checker name="mqc_exam_geom_combo"></Checker>
<Mandated>no</Mandated>
<Datasets>
<type>UGMASTER</type>
<type>UGPART</type>
<type>UGALTREP</type>
</Datasets>
<Events exclude="no">
<value>Status*3</value>
<value>Status_4</value>
</Events>
<Parameters>
<Parameter operation="=" title="Check Tiny objects"
value="TRUE"></Parameter>
<Parameter operation="=" title="Check Misaligned Objects"
value="TRUE"></Parameter>
</Parameters>
</Rule>
<Rule>
<Checker name="mqc_examine_geometry"></Checker>
<Mandated>no</Mandated>
<Datasets>
<type>UGMASTER</type>
<type>UGPART</type>
</Datasets>
<Events exclude="no">
<value>S*</value>
</Events>
<Parameters>
<Parameter operation="=" title="Save Log in Part"
value="TRUE"></Parameter>
</Parameters>
</Rule>
</Rule_Set>
The following sample shows how to call validation rule APIs to perform a validation result check
with the validation rules:
validation_rule_t *validation_rules = NULL;
int num_rule = 0;
candidate_validation_result_t *candidates = NULL;
int num_candidates = 0;
/**********************************************
* Get Validation Rule list
**********************************************/
ifail =
VALIDATIONRULE_get_rules (
rule_item_revision,
NULL,
current_event,
&validation_rules,
&num_rule
);
/**********************************************
* Before calling VALIDATIONRULE_get_candidate_results() to get the candidate
* result list, you may want to add code here to resolve all unresolved
* rules.
**********************************************/
ifail =
VALIDATIONRULE_get_candidate_results(
target_objects,
count,
validation_rules,
num_rule,
&candidates,
&num_candidates
);
/**********************************************
* check validation result for each candidate
*/
for( int inx=0; inx<num_candidates; inx++)
{
ifail = VALIDATIONRULE_verify_result(
&(candidates[inx])
);
}
// Before exiting, free the allocated memory
VALIDATIONRULE_free_rules(validation_rules, num_rule);
VALIDATIONRULE_free_results(candidates,num_candidates);
EPM hints
The rule and action handlers associated with the Perform action get called every time an action with
an action code greater than or equal to EPM_perform_action is triggered. Clicking the Perform
button calls the EPM_trigger_action function with the action equal to EPM_perform_action.
Many existing ITK functions also call the EPM_trigger_action function to allow you to customize
Teamcenter. The following table lists actions that are triggered inside each ITK function.
You can branch logically in your handler by using message.action in an if/else block. For example:
if (message.action == EPM_add_attachment_action)
{
/* do something */
}
else
{
/* do nothing */
}
Infinite loops can occur when performing actions if logic is not included to test the values of the
actions being triggered. For example, the following action handler, my_action_handler, is executed
when a Perform action is triggered:
int my_action_handler(EPM_action_message_t msg)
{
if(msg.action > EPM_perform_action)
return ITK_ok;
......... /* initial lines of code for action */
......... /* handler initiation */
err = EPM_add_attachment (.................................);
......... /* final lines */
......... /* of code */
return err;
}
The Business Modeler IDE provides all these capabilities in a seamless manner.
For a walkthrough of how to use the Business Modeler IDE to develop an application, use the
following example:
• Business object: Comment
o Capture additional information on any WorkspaceObject
o Operation: copyToObject
• Service: getCommentsForObjects
3. Define services to expose the business logic in the enterprise tier to clients.
Active release helps you associate operations with a release, and deprecate and later remove an
operation when it is no longer needed.
b. Right-click the extensions folder and choose Organize→Add new extension file.
Following are some examples of extension files you can create to organize your work.
project-name_rule.xml Rules
b. Open the Project Files\extensions folders and select the active extension file, for example,
default.xml. A green arrow symbol appears on the active extension file.
c. Define a relation.
1. Create operations.
Add a new operation with parameters. As you enter operation information, the code preview
is updated.
• Specify parameters.
• Make overridable.
Each service library may contain one or more service interfaces. Each service interface contains
one or more service operations and data types used by those operations.
Service operations require input and output. Create data types to represent the input and output.
Data types must be defined before defining the operations. Common data types are provided.
Clients call the service operations to execute the necessary business logic. Service operations
can be identified as published or internal. Making them set-based helps reduce client-server
communication when needed.
When you set code generation properties for your template, set the following:
c. Unzip the soa_client.zip file (from the same location as the tem.bat file) to a directory. Type
this location in the Teamcenter Services client kit home box.
d. Select the required bindings. (It depends on the client code needed.)
3. Generate code.
Generate code after business object operations and services are defined. Regenerate if the
definition changes. This generates interfaces and implementation stubs for business objects and
generates the transport layer and implementation stubs for services.
The following example shows generating C++ code.
4. Add preferences.
Preferences control the application behavior. New preferences may be needed for the solution.
• solution-name_preference_merge.xml
• solution-name_preference_override.xml
• solution-name_preference_skip.xml
PREF_set_search_scope( old_scope );
The error number is equal to error_base + id (for example, 5064) and must be given an
associated C++ symbol, for example:
#define PREF_cannot_delete_preference_definition (EMH_SS_error_base + 64)
The same code area symbols are usually defined in the same file.
b. Choose your error base within the 919000-920000 range. Replace the required
information in the line:
<errors module="<enter the solution name>" error_base="<enter error base number>">
Declare errors in this file, ensuring that error numbers do not exceed 920000.
c. Provide the translation for all supported languages. Ensure packaging of the files.
6. Implement utilities.
Utilities are C/C++ programs that use ITK calls. They can be used to perform lengthy tasks in
a single invocation. Utilities are sometimes used in installation and upgrade scripts. They must
be run on the Teamcenter server host. They can also be used for testing C and C++ code.
Use the Business Modeler IDE to implement the utility. Ensure that the compiled executable is
packaged if the installer or upgrader needs it.
• Metamodel definitions
• Built libraries
o Enterprise tier libraries
2. Install the packaged template files using Teamcenter Environment Manager (TEM).
The packaged template can be deployed through TEM or Deployment Center to a production
database.