Java Persistence
Contents
1. Preface
2. What is Java persistence?
   1. What is Java?
   2. What is a database?
   3. What is JPA?
   4. What is new in JPA 2.0?
   5. Other Persistence Specs
   6. Why use JPA or ORM?
3. Persistence Products, Which to Use?
   1. EclipseLink (Eclipse)
   2. TopLink (Oracle)
   3. Hibernate (RedHat)
   4. TopLink Essentials (Glassfish)
   5. Kodo (Oracle)
   6. Open JPA (Apache)
   7. Ebean (SourceForge)
4. Mapping, Round Pegs into Square Holes
   1. Tables
   2. Identity, Primary Keys and Sequencing
   3. Inheritance
   4. Embeddables (Aggregates, Composite or Component Objects)
   5. Locking and Concurrency
   6. Basic Attributes
   7. Relationships
      1. OneToOne
      2. ManyToOne
      3. OneToMany
      4. ManyToMany
      5. Embedded
   8. Advanced Mappings
      1. ElementCollection (Embeddable Collections, Basic Collections)
      2. Variable Relationships
   9. Advanced Topics
      1. Views
      2. Stored Procedures
      3. Structured Object-Relational Data Types
      4. XML Data Types
      5. Filters
      6. History
      7. Logical Deletes
      8. Auditing
      9. Replication
5. Runtime, Doing the Hokey Pokey (EntityManager)
   1. Querying
      1. JPQL
   2. Persisting (Inserting, Updating, Merging)
   3. Transactions
   4. Caching
   5. EJB
   6. Security (User Authentication, Proxy Connections, VPD)
   7. Servlets and JSPs
   8. Spring
   9. WebServices
6. Packaging and Deploying
   1. Java EE
      1. Oracle Weblogic
      2. IBM Websphere
      3. Redhat JBoss
   2. Spring
   3. Tomcat
7. Clustering
8. Databases
   1. Oracle
   2. PostgreSQL
   3. MySQL
   4. DB2
   5. SQL Server
9. Debugging
10. Performance
11. Tools
   1. Eclipse JPA (Dali)
   2. TopLink Mapping Workbench
12. Testing
Preface
What is this book about?
This book is meant to cover Java persistence, that is, storing data from the Java programming language in a persistent store. Specifically it covers using the Java Persistence API (JPA) to store Java objects to relational databases, but I would like it to have a somewhat wider scope than just JPA and concentrate more on general persistence patterns and use cases. After all, JPA is just the newest of many failed Java persistence standards, and this book should be able to evolve beyond JPA when it is replaced by the next persistence standard. I do not want this to be just a regurgitation of the JPA spec, nor a user manual for one of the JPA products, but something more focused on the real-world use cases of users and applications trying to make use of JPA (or another Java persistence solution), the patterns they evolved, and the pitfalls they encountered.
Intended Audience
This book is intended to be useful to anyone learning or developing Java applications that require persisting data to a database. It is mainly intended for Java developers persisting Java objects through the Java Persistence API (JPA) standard to a relational database. Please don't just read this book; if you're learning or developing with JPA, please contribute your experiences to this book.
Style
This book is meant to be written in a casual manner. The goal is to avoid sounding dry, overly technical, or impersonal. The book should sound casual, like a co-worker explaining to you how to use something, or a fellow consultant relating their latest engagement to another. Please refrain from being overly critical of any product, ranting about bugs, or marketing your own product or services.
Authors
Everyone is encouraged to participate in the ongoing development of this book. You do not need to be a Java persistence superstar to contribute; many times the best advice/information for other users comes from first time users who have not yet been conditioned to think something that may be confusing is obvious.

List of authors: (please contribute and sign your name)
James Sutherland : Currently working on Oracle TopLink and Eclipse EclipseLink, over 12 years of experience in object persistence and ORM.
Doug Clarke : Oracle TopLink and Eclipse EclipseLink, over 10 years of experience in the object persistence industry.
What is Java persistence?

Relational databases are the standard persistence store for most corporations, from banking to industrial. There are many things that can be stored in databases from Java. Java data includes strings, numbers, dates, byte arrays, images, XML and Java objects. Many Java applications use Java objects to model their application data; because Java is an object-oriented language, storing Java objects is a natural and common approach to persisting data from Java. There are many ways to access a relational database from Java. JPA is just the latest of many different specifications, but it seems to be the direction that Java persistence is heading.
What is Java?
Java is an object oriented programming language first released by Sun Microsystems in 1995. It blended concepts from existing languages such as C++ and Smalltalk into a new programming language. It achieved its success over the many rival languages of the day because it was associated with this newish thing called the "Internet", allowing Java applets to be embedded in web pages and run using Netscape. Its other main reason for success was that, unlike many of its competitors, it was open, "free", and not tied to an integrated development environment (IDE). Java also included the source code to its class library. This enabled Java to be adopted by many different companies producing their own Java development environments but sharing the same language; this open model fostered the growth of the Java language and continues today with the open sourcing of Java.

Java quickly moved from allowing developers to build dinky applets to being the standard server-side language running much of the Internet today. The Enterprise Edition (JEE) of Java was defined to provide an open model for server applications to be written and be portable across any compliant JEE platform provider. The JEE standard is basically a basket of other Java specifications brought together under one umbrella, and has major providers including IBM WebSphere, RedHat JBoss, Sun Glassfish, BEA WebLogic, Oracle AS and many others.
Google Trends
Programming Languages [1][2] JEE Servers [3][4]
[1] http://www.google.com/trends?q=c%23%2C+php%2C+java%2C+c%2B%2B%2C+perl&ctab=0&geo=all&date=all&sort=0
[2] Unfortunately it is hard to separate the Java island from the Java language.
[3] http://www.google.com/trends?q=websphere%2C+weblogic%2C+jboss%2C+glassfish%2C+geronimo+apache&ctab=0&geo=all&date=all&sort=0
[4] Could not include Oracle because of Oracle database hits.
See also
Java Programming
What is a database?
A database is a program that stores data. There are many types of databases: flat-file, hierarchical, relational, object-relational, object-oriented, XML, and others. The original databases were mainly proprietary and non-standardized. Relational databases were the first databases to achieve great success and standardization. Relational databases are characterized by the SQL (Structured Query Language) standard used to query and modify the database, their client/server architecture, and their relational table storage structure. Relational databases achieved great success because their standardization allowed many different vendors such as Oracle, IBM, and Sybase to produce interoperable products, giving users the flexibility to switch vendors and avoid lock-in to a proprietary solution. Their client/server architecture allows the client programming language to be decoupled from the server, allowing the database server to support interface APIs for multiple different programming languages and clients.

Although relational databases are relatively old technology, they still dominate the industry. There have been many attempts to replace the relational model, first with object-oriented databases, then with object-relational databases, and finally with XML databases, but none of the new database models achieved much success and relational databases remain the overwhelmingly dominant database model. The main relational databases used today are Oracle, MySQL (Oracle), PostgreSQL, DB2 (IBM), and SQL Server (Microsoft).

Google trend for databases (http://www.google.com/trends?q=oracle,+sybase,+sql+server,+mysql,+db2&ctab=0&geo=all&date=all&sort=0)
What is JPA?
The Java Persistence API (JPA) is a Java specification for accessing, persisting, and managing data between Java objects/classes and a relational database. JPA was defined as part of the EJB 3.0 specification as a replacement for the EJB 2 CMP Entity Beans specification. JPA is now considered the standard industry approach for Object-Relational Mapping (ORM) in the Java industry.

JPA itself is just a specification, not a product; it cannot perform persistence or anything else by itself. JPA is just a set of interfaces, and requires an implementation. There are open-source and commercial JPA implementations to choose from, and any Java EE 5 application server should provide support for its use. JPA also requires a database to persist to.

JPA allows POJOs (Plain Old Java Objects) to be easily persisted without requiring the classes to implement any interfaces or methods as the EJB 2 CMP specification required. JPA allows an object's object-relational mappings to be defined through standard annotations or XML defining how the Java class maps to a relational database table. JPA also defines a runtime EntityManager API for processing queries and transactions on the objects against the database. JPA defines an object-level query language, JPQL, to allow querying of the objects from the database.

JPA is the latest of several Java persistence specifications. The first was the OMG persistence service Java binding, which was never very successful; I'm not sure of any commercial products that supported it. Next came EJB 1.0 CMP Entity Beans, which was very successful in being adopted by the big Java EE providers (BEA, IBM), but there was a backlash against the spec by some users who thought its requirements on Entity Beans were overly complex, and its overhead and performance poor. EJB 2.0 CMP tried to reduce some of the complexity of Entity Beans through introducing local interfaces, but the majority of the complexity remained. EJB 2.0 also lacked portability, in that the deployment descriptors defining the object-relational mapping were not specified and were all proprietary. This backlash in part led to the creation of another Java persistence specification, JDO (Java Data Objects). JDO obtained somewhat of a "cult" following of several independent vendors such as Kodo JDO, and several open-source implementations, but never had much success with the big Java EE vendors.

Despite the two competing Java persistence standards of EJB CMP and JDO, the majority of users continued to prefer proprietary API solutions, mainly TopLink (which had been around for some time and had its own POJO API) and Hibernate (which was a relatively new open-source product that also had its own POJO API and was quickly becoming the open-source industry standard). The TopLink product formerly owned by WebGain was also acquired by Oracle, increasing its influence on the Java EE community. The EJB CMP backlash was only part of a backlash against all of Java EE, which was seen as too complex in general and prompted such products as the Spring container. This led the EJB 3.0 specification to have a main goal of reducing the complexity, which led the spec committee down the path of JPA. JPA was meant to unify the EJB 2 CMP, JDO, Hibernate, and TopLink APIs and products, and seems to have been very successful in doing so.
Currently most of the persistence vendors have released implementations of JPA, confirming its adoption by the industry and users. These include Hibernate (acquired by JBoss, acquired by Red Hat), TopLink (acquired by Oracle), and Kodo JDO (acquired by BEA, acquired by Oracle). Other products that have added support for JPA include Cocobase (owned by Thought Inc.) and JPOX.

EJB JPA Spec (http://jcp.org/aboutJava/communityprocess/final/jsr220/index.html)
JPA ORM XML Schema (http://java.sun.com/xml/ns/persistence/orm_1_0.xsd)
JPA Persistence XML Schema (http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd)
JPA JavaDoc (https://java.sun.com/javaee/5/docs/api/javax/persistence/package-summary.html)
JPQL BNF
Resources
JPA 2.0 Spec (http://jcp.org/en/jsr/detail?id=317)
JPA 2.0 Reference Implementation (EclipseLink) (http://www.eclipse.org/eclipselink)
Eclipse EclipseLink to be JPA 2.0 Reference Implementation (http://www.eclipse.org/org/press-release/20080317_Eclipselink.php)
JPA 2.0 Examples (http://wiki.eclipse.org/EclipseLink/Examples/JPA#JPA_2.0)
Java persistence specifications include:
JDBC (Java DataBase Connectivity)
JDO (Java Data Objects)
JPA (Java Persistence API)
JCA (Java EE Connector Architecture)
SDO (Service Data Objects)
EJB CMP (Enterprise Java Beans, Container Managed Persistence) : 2.1 (EJB), 2003
Reasons for JPA:
It is a standard and part of EJB3 and JEE.
Many free and open source products with enterprise level support.
Portability across application servers and persistence products (avoids vendor lock-in).
A usable and functional specification.
Supports both JEE and JSE.
ORM can be a hot topic for some people, and there are many ORM camps. There are those that endorse a particular standard or product. There are those that don't believe in ORM, or even objects in general, and prefer JDBC. There are those that still believe that object databases are the way to go. Personally I would recommend you use whatever technology you are most comfortable with, but if you have never used ORM or JPA, perhaps give it a try and see if you like it. The list below provides several discussions on why or why not to use JPA and ORM.

Discussions on JPA and ORM Usage
Why do we need anything other than JDBC? (Java Ranch) (http://saloon.javaranch.com/cgi-bin/ubb/ultimatebb.cgi?ubb=get_topic&f=78&t=003738)
Why JPA? (BEA) (http://edocs.bea.com/kodo/docs41/full/html/ejb3_overview_why.html)
JPA Explained (The Server Side) (http://www.theserverside.com/news/thread.tss?thread_id=44526)
JPA vs JDO (The Server Side) (http://www.theserverside.com/news/thread.tss?thread_id=40965)
Persistence Products
There are many persistence products to choose from. Most persistence products now support a JPA interface, although there still are some exceptions. Which product you use depends on your preference, but most people would recommend you use the JPA standard whichever product you choose. This gives you the flexibility to switch persistence providers, or port your application to another server platform which may use a different persistence provider.

Determining which persistence product to use involves many criteria. Valid things to consider include:
Which persistence product does your server platform support and integrate with?
What is the cost of the product, is it free and open source, can you purchase enterprise level support and services?
Do you have an existing relationship with the company producing the product?
Is the product active and does it have a large user base?
How does the product perform and scale?
Does the product integrate with your database platform?
Does the product have active and open forums, do questions receive useful responses?
Is the product JPA compliant, what functionality does the product offer beyond the JPA specification?
Comparison of JPA providers: EclipseLink (Eclipse), TopLink (Oracle), Hibernate (Red Hat), OpenJPA (Apache), DataNucleus (http://www.datanucleus.org/), TopLink Essentials (java.net), and Kodo (Oracle), comparing open source status, JPA version supported, latest release and year, supported application servers (such as Oracle WebLogic, OracleAS, Glassfish and Geronimo), and user forums. See the product sections below for details on each product.
EclipseLink
EclipseLink is the open source Eclipse Persistence Services Project from the Eclipse Foundation. The product provides an extensible framework that allows Java developers to interact with various data services, including databases, XML, and Enterprise Information Systems (EIS). EclipseLink supports a number of persistence standards including the Java Persistence API (JPA), Java API for XML Binding (JAXB), Java Connector Architecture (JCA), and Service Data Objects (SDO).

EclipseLink is based on the TopLink product, from which Oracle contributed the source code to create the EclipseLink project. The original contribution was from TopLink's 11g code base, and the entire code-base/feature set was contributed, with only EJB 2 CMP and some minor Oracle AS specific integration removed. This differs from the TopLink Essentials Glassfish contribution, which did not include some key enterprise features. The package names were changed and some of the code was moved around. The TopLink Mapping Workbench UI has also been contributed to the project.

EclipseLink is the intended path forward for persistence for Oracle and TopLink. It is intended that the next major release of Oracle TopLink will include EclipseLink, as will the next major release of Oracle AS. EclipseLink supports usage in an OSGi environment. EclipseLink was announced to be the JPA 2.0 reference implementation, and to be the JPA provider for Glassfish v3.

EclipseLink Home (http://www.eclipse.org/eclipselink/)
EclipseLink Newsgroup (http://www.eclipse.org/newsportal/thread.php?group=eclipse.technology.eclipselink)
EclipseLink Wiki (http://wiki.eclipse.org/EclipseLink)
TopLink
TopLink is one of the leading Java persistence products and JPA implementations. TopLink is produced by Oracle and is part of Oracle's OracleAS, WebLogic, and OC4J servers. As of TopLink 11g, TopLink bundles the open source project EclipseLink for most of its functionality. The TopLink 11g release supports the JPA 1.0 specification. TopLink 10.1.3 also supports EJB CMP and is the persistence provider for OracleAS OC4J 10.1.3 for both JPA and EJB CMP.

TopLink provides advanced object-relational mapping functionality beyond the JPA specification, as well as providing persistence for object-relational data-types and Enterprise Information Systems (EIS/mainframes). TopLink includes sophisticated object caching and performance features. TopLink provides a Grid extension that integrates with Oracle Coherence. TopLink provides object-XML mapping support, a JAXB implementation and web service integration. TopLink provides a Service Data Object (SDO) implementation.

TopLink provides a rich user interface through the TopLink Mapping Workbench. The Mapping Workbench allows for graphical mapping of an object model to a data model, as well as generation of a data model from an object model, generation of an object model from a data model, and auto-mapping of an existing object and data model. The TopLink Mapping Workbench functionality is also integrated with Oracle's JDeveloper IDE.
TopLink contributed part of its source code to become the JPA 1.0 reference implementation under the Sun java.net Glassfish project. This open-source product is called TopLink Essentials, and despite a different package name (oracle.toplink.essentials) it is basically a branch of the source code of the TopLink product with some advanced functionality stripped out. TopLink contributed practically its entire source code to the Eclipse Foundation EclipseLink product. This is an open source product currently in incubation that represents the path forward for TopLink. The package name is different (org.eclipse.persistence) but the source code is basically a branch of the TopLink 11g release. Oracle also contributed its Mapping Workbench source code to the project. The TopLink Mapping Workbench developers were also major contributors to the Eclipse Dali project for JPA support.

TopLink was first developed in Smalltalk and ported to Java in the 90's, and has over 15 years worth of object persistence solutions. TopLink originally provided a proprietary POJO persistence API; when EJB was first released TopLink provided one of the most popular EJB CMP implementations, although it continued to recommend its POJO solution. TopLink also provided a JDO 1.0 implementation for a few releases, but this was eventually deprecated and removed once the JPA specification had been formed. Oracle and TopLink have been involved in each of the EJB, JDO and EJB3/JPA expert groups, and Oracle was the co-lead for the EJB3/JPA specification.

Oracle TopLink Home (http://www.oracle.com/technology/products/ias/toplink/index.html)
Oracle TopLink Forum (http://forums.oracle.com/forums/forum.jspa?forumID=48)
Oracle TopLink Wiki (http://wiki.oracle.com/page/TopLink)

TopLink Resources
TopLink Automatic Schema Generation Options (http://docs.sun.com/app/docs/doc/819-3672/gbwmk?a=view)
Hibernate
Hibernate was an open source project developed by a team of Java software developers around the world led by Gavin King. JBoss, Inc. (now part of Red Hat) later hired the lead Hibernate developers and worked with them in supporting Hibernate. The current version of Hibernate is Version 3.x. Hibernate provides both a proprietary POJO API, and JPA support.
TopLink Essentials
TopLink Essentials is an open source project from the Sun java.net Glassfish community. It is the EJB3 JPA 1.0 reference implementation, and is the JPA provider for the Sun Glassfish v1 application server. TopLink Essentials was based on the TopLink product, from which Oracle contributed some of the source code to create the TopLink Essentials project. The original contribution was from TopLink's 10.1.3 code base; only some of the TopLink product source code was contributed, and it did not include some key enterprise features. The package names were changed and some of the code was moved around.

TopLink Essentials has been replaced by the EclipseLink project. EclipseLink will be the JPA 2.0 reference implementation and be part of Sun Glassfish v3.

TopLink Essentials Home (https://glassfish.dev.java.net/javaee5/persistence/index.html)
TopLink Essentials Forum (http://www.nabble.com/java.net---glassfish-persistence-f13455.html)
TopLink Essentials Wiki (http://wiki.glassfish.java.net/Wiki.jsp?page=TopLinkEssentials)
Kodo
Kodo (http://www.bea.com/kodo/) was originally developed as a Java Data Objects (JDO) implementation by SolarMetric. BEA Systems acquired SolarMetric in 2005, and Kodo was expanded to be an implementation of both the JDO and JPA specifications. In 2006, BEA donated a large part of the Kodo source code to the Apache Software Foundation under the name OpenJPA. BEA (and Kodo) were later acquired by Oracle. Kodo development has now stopped and users are referred to EclipseLink.
Open JPA
OpenJPA is an Apache project for supporting the JPA specification. Its source code was originally donated from (part of) BEA's Kodo product.

Ebean

Ebean (http://www.avaje.org) is a Java ORM based on the JPA specification. The goal of Ebean is to provide mapping compatible with JPA while providing a simpler API to use and learn.
1. Why Ebean
2. Example Model
3. Quick Start - examples
4. Mapping via JPA Annotations
5. Query
   1. Basics
   2. Future - Asynchronous query execution
   3. PagingList
   4. Aggregation - Group By
6. Save and Delete
7. Transactions
8. Caching
9. Using raw SQL
Mapping
The first thing that you need to do to persist something in Java is define how it is to be persisted. This is called the mapping process (details (http://en.wikipedia.org/wiki/Object-Relational_impedance_mismatch)). There have been many different solutions to the mapping process over the years, including object-databases that didn't require you to map anything, but just let you persist anything directly; object-relational mapping tools that would generate an object model for a data model, with the mapping and persistence logic included in it; and ORM products that provided mapping tools to allow the mapping of an existing object model to an existing data model, storing this mapping meta-data in flat files, database tables, XML and finally annotations. In JPA mappings can either be stored through Java annotations, or in XML files.

One significant aspect of JPA is that only the minimal amount of mapping is required. JPA implementations are required to provide defaults for almost all aspects of mapping an object. The minimum requirement for mapping an object in JPA is to define which objects can be persisted. This is done through either marking the class with the @Entity (https://java.sun.com/javaee/5/docs/api/javax/persistence/Entity.html) annotation, or adding an <entity> tag for the class in the persistence unit's ORM XML file. Also the primary key, or unique identifier attribute(s), must be defined for the class. This is done through marking one of the class's fields or properties (get method) with the @Id annotation, or adding an <id> tag for the class's attribute in the ORM XML (http://java.sun.com/xml/ns/persistence/orm_1_0.xsd) file.
The JPA implementation will default all other mapping information, including defaulting the table name, column names for all defined fields or properties, cardinality and mapping of relationships, and all SQL and persistence logic for accessing the objects. Most JPA implementations also provide the option of generating the database tables at runtime, so very little work is required by the developer to rapidly develop a persistent JPA application.
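As a minimal sketch of this (the class and field names here are only illustrative), the following is a complete JPA mapping; everything not specified, including the table and column names, is defaulted by the provider:

import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class Employee {
    // The only required mapping: mark the class as an entity and one field as its id.
    @Id
    private long id;

    // Defaulted to a basic mapping to a column named after the field
    // (exact default column naming can vary slightly by provider).
    private String firstName;
}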
Common Problems
My annotations are ignored

This typically occurs when you annotate both the fields and methods (properties) of the class. You must choose either field or property access, and be consistent. Also, when annotating properties you must put the annotation on the get method, not the set method. Also ensure that you have not defined the same mappings in XML, which may be overriding the annotations. You may also have a classpath issue, such as having an old version of the class on the classpath.

Odd behavior

There are many reasons that odd behavior can occur with persistence. One common issue that can cause odd behavior is using property access and putting side effects in your get or set methods. For this reason it is generally recommended to use field access in mapping, i.e. putting your annotations on your variables not your get methods. For example consider:

public void setPhones(List<Phone> phones) {
    for (Phone phone : phones) {
        phone.setOwner(this);
    }
    this.phones = phones;
}

This may look innocent, but these side effects can have unexpected consequences. For example, if the relationship was lazy this would have the effect of always instantiating the collection when set from the database. It could also have consequences with certain JPA implementations for persisting, merging and other operations, causing duplicate inserts, missed updates, or a corrupt object model. I have also seen simply incorrect property methods, such as a get method that always returns a new object, or a copy, or set methods that don't actually set the value. In general, if you are going to use property access, ensure your property methods are free of side effects. Perhaps even use different property methods than your application uses.
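A sketch of the recommended style (using the same Employee/Phone model as the example above; the mappedBy value is an assumption): the annotations go on the fields, the set method only assigns, and relationship maintenance is kept in a separate helper the application calls explicitly.

import java.util.ArrayList;
import java.util.List;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.OneToMany;

@Entity
public class Employee {
    @Id
    private long id;

    // Field access: the mapping annotation is on the variable, not on getPhones().
    @OneToMany(mappedBy="owner")
    private List<Phone> phones = new ArrayList<Phone>();

    // Side-effect free property methods.
    public List<Phone> getPhones() {
        return phones;
    }

    public void setPhones(List<Phone> phones) {
        this.phones = phones;
    }

    // Relationship maintenance lives in a helper, not in the setter.
    public void addPhone(Phone phone) {
        phone.setOwner(this);
        this.phones.add(phone);
    }
}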
Tables
A table is the basic persistence structure of a relational database. A table contains a list of columns which define the table's structure, and a list of rows that contain the table's data. Each column has a specific type and generally a size. The standard set of relational types is limited to basic types including numeric, character, date-time, and binary (although most modern databases have additional types and typing systems). Tables can also have constraints that define the rules which restrict the row data, such as primary key, foreign key, and unique constraints. Tables also have other artifacts such as indexes, partitions and triggers.

A typical mapping of a persistent class will map the class to a single table. In JPA this is defined through the @Table (https://java.sun.com/javaee/5/docs/api/javax/persistence/Table.html) annotation or <table> XML element. If no table annotation is present, the JPA implementation will auto assign a table for the class; the JPA default table name is the name of the class in uppercase (minus the package). Each attribute of the class will be stored in a column in the table.
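For example, a minimal sketch of overriding the defaulted table name (the EMP table name and ACME schema are illustrative):

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name="EMP", schema="ACME")
public class Employee {
    @Id
    private long id;
    // Each attribute is stored in a column of the EMP table.
    private String name;
}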
Advanced
Although in the ideal case each class would map to a single table, this is not always possible. Other scenarios include:
Multiple tables : One class maps to two or more tables.
Sharing tables : Two or more classes are stored in the same table.
Inheritance : A class is involved in inheritance and has an inherited and a local table.
Views : A class maps to a view.
Stored procedures : A class maps to a set of stored procedures.
Partitioning : Some instances of a class map to one table, and other instances to another table.
Replication : A class's data is replicated to multiple tables.
History : A class has historical data.
These are all advanced cases; some are handled by the JPA spec and many are not. The following sections investigate each of these scenarios further and include what is supported by the JPA spec, what can be done to work around the issue within the spec, and how to use some JPA implementations' extensions to handle the scenario.
Multiple tables
Sometimes a class maps to multiple tables. This typically occurs on legacy or existing data models where the object model and data model do not match. It can also occur in inheritance when subclass data is stored in additional tables. Multiple tables may also be used for performance, partitioning or security reasons.

JPA allows multiple tables to be assigned to a single class. The @SecondaryTable (https://java.sun.com/javaee/5/docs/api/javax/persistence/SecondaryTable.html) and @SecondaryTables annotations or <secondary-table> elements can be used. By default the @Id column(s) are assumed to be in both tables, such that the secondary table's @Id column(s) are the primary key of that table and a foreign key to the first table. If the first table's @Id column(s) are not named the same, the @PrimaryKeyJoinColumn (https://java.sun.com/javaee/5/docs/api/javax/persistence/PrimaryKeyJoinColumn.html) annotation or <primary-key-join-column> element can be used to define the foreign key join condition.

In a multiple table entity, each mapping must define which table the mapping's columns are from. This is done using the table attribute of the @Column or @JoinColumn annotations or XML elements. By default the primary table of the class is used, so you only need to set the table for secondary tables. For inheritance the default table is the primary table of the subclass being mapped.
With the @PrimaryKeyJoinColumn the name refers to the foreign key column in the secondary table and the referencedColumnName refers to the primary key column in the first table. If you have multiple secondary tables, they must always refer to the first table. When defining the table's schema you will typically define the join columns in the secondary table as the primary key of that table, and a foreign key to the first table. Depending on how you have defined your foreign key constraints, the order of the tables can be important; the order will typically match the order that the JPA implementation will insert into the tables, so ensure the table order matches your constraint dependencies.

For relationships to a class that has multiple tables, the foreign key (join column) always maps to the primary table of the target. JPA does not allow having a foreign key map to a table other than the target object's primary table. Normally this is not an issue, as foreign keys almost always map to the id/primary key of the primary table, but in some advanced scenarios this may be an issue. Some JPA products allow the column or join column to use the qualified name of the column (i.e. @JoinColumn(referencedColumnName="EMP_DATA.EMP_NUM")) to allow this type of relationship. Some JPA products may also support this through their own API, annotations or XML.
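A hedged sketch of a basic two-table mapping (the table and column names are illustrative): the EMP_DATA secondary table's EMP_ID column is both its primary key and a foreign key to EMPLOYEE.ID, and the salary field is placed in the secondary table through the table attribute of @Column.

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.PrimaryKeyJoinColumn;
import javax.persistence.SecondaryTable;
import javax.persistence.Table;

@Entity
@Table(name="EMPLOYEE")
@SecondaryTable(name="EMP_DATA",
    pkJoinColumns=@PrimaryKeyJoinColumn(name="EMP_ID", referencedColumnName="ID"))
public class Employee {
    @Id
    @Column(name="ID")
    private long id;

    // Defaults to the primary EMPLOYEE table.
    @Column(name="NAME")
    private String name;

    // Explicitly placed in the secondary EMP_DATA table.
    @Column(name="SALARY", table="EMP_DATA")
    private long salary;
}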
allow writes, or allow triggers to be used to handle writes. Some JPA implementations provide extensions to handle these scenarios.
TopLink, EclipseLink : Provides a proprietary API for its mapping model, ClassDescriptor.addForeignKeyFieldNameForMultipleTable(), that allows arbitrarily complex foreign key relationships to be defined among the secondary tables. This can be configured through using a @DescriptorCustomizer annotation and DescriptorCustomizer class.
Example mapping XML for default (entire persistence unit) table qualifier
<entity-mappings>
    <persistence-unit-metadata>
        <persistence-unit-defaults>
            <schema>ACME</schema>
        </persistence-unit-defaults>
    </persistence-unit-metadata>
    ....
</entity-mappings>
Identity
An object id (OID) is something that uniquely identifies an object. Within a VM this is typically the object's pointer. In a relational database table a row is uniquely identified in its table by its primary key. When persisting objects to a database you need a unique identifier for the objects; this allows you to query the object, define relationships to the object, and update and delete the object. In JPA the object id is defined through the @Id (https://java.sun.com/javaee/5/docs/api/javax/persistence/Id.html) annotation or <id> element and should correspond to the primary key of the object's table.
Example id annotation
...
@Entity
public class Employee {
    @Id
    private long id;
    ...
}
Example id XML
<entity name="Employee" class="org.acme.Employee" access="FIELD"> <attributes> <id name="id"/> </attributes> <entity/> Common Problems Strange behavior, unique constraint violation. You must never change the id of an object. Doing so will cause errors, or strange behavior depending on your JPA provider. Also do not create two objects with the same id, or try persisting an object with the same id as an existing object. If you have an object that may be existing use the EntityManager merge() API, do not use persist() for an existing object, and avoid relating an un-managed existing object to other managed objects. No primary key. See No Primary Key.
Sequencing
An object id can either be a natural id or a generated id. A natural id is one that occurs in the object and has some meaning in the application. Examples of natural ids include user ids, email addresses, phone numbers, and social insurance numbers. A generated id is one that is generated by the system. A sequence number in JPA is a sequential id generated by the JPA implementation and automatically assigned to new objects. The benefits of using sequence numbers are that they are guaranteed to be unique, allow all other data of the object to change, are efficient values for querying and indexes, and can be efficiently assigned. The main issue with natural ids is that everything always changes at some point; even a person's social insurance number can change. Natural ids can also make querying, foreign keys and indexing less efficient in the database. In JPA an @Id can be easily assigned a generated sequence number through the @GeneratedValue (https://java.sun.com/javaee/5/docs/api/javax/persistence/GeneratedValue.html) annotation, or <generated-value> element.
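For example, a minimal sketch of a generated id where the provider is left to pick the strategy (GenerationType.AUTO):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

@Entity
public class Employee {
    // AUTO lets the JPA provider choose table, sequence or identity generation
    // based on what the database supports.
    @Id
    @GeneratedValue(strategy=GenerationType.AUTO)
    private long id;
}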
Sequence Strategies
There are several strategies for generating unique ids. Some strategies are database agnostic and others make use of built-in database support. JPA provides support for several strategies for id generation defined through the GenerationType (https://java.sun.com/javaee/5/docs/api/javax/persistence/GenerationType.html) enum: TABLE, SEQUENCE and IDENTITY. The choice of which sequence strategy to use is important as it affects performance, concurrency and portability.
Table sequencing
Table sequencing uses a table in the database to generate unique ids. The table has two columns: one stores the name of the sequence, the other stores the last id value that was assigned. There is a row in the sequence table for each sequence object. Each time a new id is required, the row for that sequence is incremented and the new id value is passed back to the application to be assigned to an object. This is just one example of a sequence table schema; for other table sequencing schemas see Customizing.
Table sequencing is the most portable solution because it just uses a regular database table, so unlike sequence and identity sequencing it can be used on any database. Table sequencing also provides good performance because it allows for sequence pre-allocation, which is extremely important to insert performance, but it can have potential concurrency issues.

In JPA the @TableGenerator (https://java.sun.com/javaee/5/docs/api/javax/persistence/TableGenerator.html) annotation or <table-generator> element is used to define a sequence table. The TableGenerator defines a pkColumnName for the column used to store the name of the sequence, a valueColumnName for the column used to store the last id allocated, and a pkColumnValue for the value to store in the name column (normally the sequence name).

Example sequence table
SEQUENCE_TABLE
SEQ_NAME | SEQ_COUNT
EMP_SEQ  | 123
PROJ_SEQ | 550
Example table generator annotation

...
@Entity
public class Employee {
    @Id
    @GeneratedValue(strategy=GenerationType.TABLE, generator="EMP_SEQ")
    @TableGenerator(name="EMP_SEQ", table="SEQUENCE_TABLE", pkColumnName="SEQ_NAME",
        valueColumnName="SEQ_COUNT", pkColumnValue="EMP_SEQ")
    private long id;
    ...
}

Example table generator XML
<entity name="Employee" class="org.acme.Employee" access="FIELD"> <attributes> <id name="id"> <generated-value strategy="TABLE" generator="EMP_SEQ"/> <table-generator name="EMP_SEQ" table="SEQUENCE_TABLE" pk-column-name="SEQ_NAME", value-column-name="SEQ_COUNT", pk-column-value="EMP_SEQ"/> </id> </attributes> <entity/>
Common Problems

Error when allocating a sequence number. Errors such as "table not found" or "invalid column" can occur if you do not have a SEQUENCE table defined in your database, or its schema does not match what you configured, or what your JPA provider is expecting by default. Ensure you create the sequence table correctly, or configure your @TableGenerator to match the table that you created, or let your JPA provider create your tables for you (most JPA providers support schema creation). You may also get an error such as "sequence not found"; this means you did not create a row in the table for your sequence. You must insert an initial row in the sequence table for your sequence with the initial id (i.e. INSERT INTO SEQUENCE_TABLE (SEQ_NAME, SEQ_COUNT) VALUES ("EMP_SEQ", 0)), or let your JPA provider create your schema for you.

Deadlock or poor concurrency in the sequence table. See concurrency issues.
Sequence objects
Sequence objects use special database objects to generate ids. Sequence objects are only supported in some databases, such as Oracle, DB2, and Postgres. Usually, a SEQUENCE object has a name, an INCREMENT, and other database object settings. Each time the <sequence>.NEXTVAL is selected the sequence is incremented by the INCREMENT.

Sequence objects provide the optimal sequencing option, as they are the most efficient and have the best concurrency; however they are the least portable, as most databases do not support them. Sequence objects support sequence preallocation through setting the INCREMENT on the database sequence object to the sequence preallocation size.

In JPA the @SequenceGenerator (https://java.sun.com/javaee/5/docs/api/javax/persistence/SequenceGenerator.html) annotation or <sequence-generator> element is used to define a sequence object. The SequenceGenerator defines a sequenceName for the name of the database sequence object, and an allocationSize for the sequence preallocation size or sequence object INCREMENT.

Example sequence generator annotation

...
@Entity
public class Employee {
    @Id
    @GeneratedValue(strategy=GenerationType.SEQUENCE, generator="EMP_SEQ")
    @SequenceGenerator(name="EMP_SEQ", sequenceName="EMP_SEQ", allocationSize=100)
    private long id;
    ...
}
Common Problems

Error when allocating a sequence number. Errors such as "sequence not found" can occur if you do not have a SEQUENCE object defined in your database. Ensure you create the sequence object, or let your JPA provider create your schema for you (most JPA providers support schema creation). When creating your sequence object, ensure the sequence's INCREMENT matches your SequenceGenerator's allocationSize. The DDL to create a sequence object depends on the database; for Oracle it is CREATE SEQUENCE EMP_SEQ INCREMENT BY 100 START WITH 100.

Invalid, duplicate or negative sequence numbers. This can occur if your sequence object's INCREMENT does not match your allocationSize. This results in the JPA provider thinking it got back more sequences than it really did, and it ends up duplicating values, or producing negative numbers. This can also occur on some JPA providers if your sequence object's START WITH is 0 instead of a value equal to or greater than the allocationSize.
Identity sequencing
Identity sequencing uses special IDENTITY columns in the database to allow the database to automatically assign an id to the object when its row is inserted. Identity columns are supported in many databases, such as MySQL, DB2, SQL Server, Sybase and Postgres. Oracle does not support IDENTITY columns but they can be simulated through using sequence objects and triggers.

Although identity sequencing seems like the easiest method to assign an id, it has several issues. One is that since the id is not assigned by the database until the row is inserted, the id cannot be obtained in the object until after commit or after a flush call. Identity sequencing also does not allow for sequence preallocation, so it can require a select for each object that is inserted, potentially causing a major performance problem, and so is in general not recommended.

In JPA there is no annotation or element for identity sequencing as there is no additional information to specify. Only the GeneratedValue's strategy needs to be set to IDENTITY.

Example identity annotation

...
@Entity
public class Employee {
    @Id
    @GeneratedValue(strategy=GenerationType.IDENTITY)
    private long id;
    ...
}

Example identity XML

<entity name="Employee" class="org.acme.Employee" access="FIELD">
    <attributes>
        <id name="id">
            <generated-value strategy="IDENTITY"/>
        </id>
    </attributes>
</entity>

Common Problems

null is inserted into the database, or error on insert. This typically occurs because the @Id was not configured to use @GeneratedValue(strategy=GenerationType.IDENTITY). Ensure it is configured correctly. It could also be that your JPA provider does not support identity sequencing on the database platform that you are using, or you have not configured your database platform. Most providers require that you set the database platform through a persistence.xml property; most providers also allow you to customize your own platform if it is not directly supported. It may also be that you did not set your primary key column in your table to be an identity type.

Object's id is not assigned after persist. Identity sequencing requires the insert to occur before the id can be assigned, so it is not assigned on persist like other types of sequencing. You must either call commit() on the current transaction, or call flush() on the EntityManager. It may also be that you did not set your primary key column in your table to be an identity type.

Child's id is not assigned from parent on persist. A common issue is that the generated Id is part of a child object's Id through a OneToOne or ManyToOne mapping. In this case, because JPA requires that the child define a duplicate Basic mapping for the Id, its Id will be inserted as null. One solution is to mark the Column on the Id mapping in the child as insertable=false, updatable=false, and define the OneToOne or ManyToOne using a normal JoinColumn; this will ensure the foreign key field is populated by the OneToOne or ManyToOne, not the Basic. Another option is to first persist the parent, then call flush() before persisting the child, as in the sketch after this list.

Poor insert performance. Identity sequencing does not support sequence preallocation, so it requires a select after each insert, in some cases doubling the insert cost. Consider using a sequence table or sequence object to allow sequence preallocation.
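A minimal sketch of the flush() workaround mentioned above (em is assumed to be an EntityManager inside an active transaction; the Employee/Phone parent/child model and their accessors are illustrative):

Employee employee = new Employee();
em.persist(employee);
// Force the INSERT so the IDENTITY id is assigned before it is copied to the child.
em.flush();

Phone phone = new Phone();
phone.setOwner(employee);
phone.setOwnerId(employee.getId());  // keep the duplicate Basic id mapping in synch
em.persist(phone);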
Advanced
Composite Primary Keys
A composite primary key is one that is made up of several columns in the table. A composite primary key can be used if no single column in the table is unique. It is generally more efficient and simpler to have a single-column primary key, such as a generated sequence number, but sometimes a composite primary key is desirable or required.
Composite primary keys are common in legacy database schemas, where cascaded keys can sometimes be used. This refers to a model where dependent objects' key definitions include their parents' primary key; for example, COMPANY's primary key is COMPANY_ID, DEPARTMENT's primary key is composed of a COMPANY_ID and a DEP_ID, EMPLOYEE's primary key is composed of COMPANY_ID, DEP_ID, and EMP_ID, and so on. Although this generally does not match object-oriented design principles, some DBAs prefer this model. Difficulties with the model include the restriction that employees cannot switch departments, that foreign-key relationships become more complex, and that all primary-key operations (including queries, updates, and deletes) are less efficient. However, each department has control over its own employee IDs, and if needed the database EMPLOYEE table can be partitioned based on the COMPANY_ID or DEP_ID, as these are included in every query.

Other common usages of composite primary keys include many-to-many relationships where the join table has additional columns (so the table itself is mapped to an object whose primary key consists of the pair of foreign-key columns), and dependent or aggregate one-to-many relationships where the child object's primary key consists of its parent's primary key plus a locally unique field.

There are two methods of declaring a composite primary key in JPA: IdClass and EmbeddedId.
Id Class
An IdClass defines a separate Java class to represent the primary key. It is defined through the @IdClass (https://java.sun.com/javaee/5/docs/api/javax/persistence/IdClass.html) annotation or <id-class> XML element. The IdClass must define an attribute (field/property) that mirrors each Id attribute in the entity; it must have the same attribute name and type. When using an IdClass you are still required to mark each Id attribute in the entity with @Id. The main purpose of the IdClass is to be used as the structure passed to the EntityManager find() and getReference() API. Some JPA products also use the IdClass as a cache key to track an object's identity. Because of this, it is required (depending on the JPA product) to implement an equals() and hashCode() method on the IdClass. Ensure that the equals() method checks each part of the primary key, and correctly uses equals for objects and == for primitives. Ensure that the hashCode() method will return the same value for two equal objects.

TopLink / EclipseLink : Do not require the implementation of equals() or hashCode() in the id class.
Example id class annotation

...
@Entity
@IdClass(EmployeePK.class)
public class Employee {
    @Id
    private long employeeId;
    @Id
    private long companyId;
    @Id
    private long departmentId;
    ...
}

Example id class XML

<entity name="Employee" class="org.acme.Employee" access="FIELD">
    <id-class class="org.acme.EmployeePK"/>
    <attributes>
        <id name="employeeId"/>
        <id name="companyId"/>
        <id name="departmentId"/>
    </attributes>
</entity>

Example id class

...
public class EmployeePK {
    private long employeeId;
    private long companyId;
    private long departmentId;

    public EmployeePK() {
    }

    public EmployeePK(long employeeId, long companyId, long departmentId) {
        this.employeeId = employeeId;
        this.companyId = companyId;
        this.departmentId = departmentId;
    }

    public boolean equals(Object object) {
        if (object instanceof EmployeePK) {
            EmployeePK pk = (EmployeePK)object;
            return employeeId == pk.employeeId
                && companyId == pk.companyId
                && departmentId == pk.departmentId;
        } else {
            return false;
        }
    }

    public int hashCode() {
        return (int) (employeeId + companyId + departmentId);
    }
}
Embedded Id
An EmbeddedId defines a separate Embeddable Java class to contain the entity's primary key. It is defined through the @EmbeddedId (https://java.sun.com/javaee/5/docs/api/javax/persistence/EmbeddedId.html) annotation or <embedded-id> XML element. The EmbeddedId's Embeddable class must define each id attribute for the entity using Basic mappings. All attributes in the EmbeddedId's Embeddable are assumed to be part of the primary key.

The EmbeddedId is also used as the structure passed to the EntityManager find() and getReference() API. Some JPA products also use the EmbeddedId as a cache key to track an object's identity. Because of this, it is required (depending on the JPA product) to implement an equals() and hashCode() method on the EmbeddedId. Ensure that the equals() method checks each part of the primary key, and correctly uses equals for objects and == for primitives. Ensure that the hashCode() method will return the same value for two equal objects.

TopLink / EclipseLink : Do not require the implementation of equals() or hashCode() in the id class.

Example embedded id annotation

...
@Entity
public class Employee {
    @EmbeddedId
    private EmployeePK id;
    ...
}

Example embedded id XML

<entity name="Employee" class="org.acme.Employee" access="FIELD">
    <attributes>
        <embedded-id name="id"/>
    </attributes>
</entity>

<embeddable class="org.acme.EmployeePK" access="FIELD">
    <attributes>
        <basic name="employeeId"/>
        <basic name="companyId"/>
        <basic name="departmentId"/>
    </attributes>
</embeddable>
Example embedded id class

...
@Embeddable
public class EmployeePK {
    @Basic
    private long employeeId;
    @Basic
    private long companyId;
    @Basic
    private long departmentId;

    public EmployeePK() {
    }

    public EmployeePK(long employeeId, long companyId, long departmentId) {
        this.employeeId = employeeId;
        this.companyId = companyId;
        this.departmentId = departmentId;
    }

    public boolean equals(Object object) {
        if (object instanceof EmployeePK) {
            EmployeePK pk = (EmployeePK)object;
            return employeeId == pk.employeeId
                && companyId == pk.companyId
                && departmentId == pk.departmentId;
        } else {
            return false;
        }
    }

    public int hashCode() {
        return (int) (employeeId + companyId + departmentId);
    }
}
JPA 1.0
Unfortunately JPA 1.0 does not handle this model well, and things become complicated, so to make your life a little easier you may consider defining a generated unique id for the child. JPA 1.0 requires that all @Id mappings be Basic mappings, so if your Id comes from a foreign key column through a OneToOne or ManyToOne mapping, you must also define a Basic @Id mapping for the foreign key column. The reason for this is in part that the Id must be a simple object for identity and caching purposes, and for use in the IdClass or the EntityManager find() API.

Because you now have two mappings for the same foreign key column, you must define which one will be written to the database (it must be the Basic one), so the OneToOne or ManyToOne foreign key must be defined to be read-only. This is done through setting the JoinColumn attributes insertable and updatable to false, or by using the @PrimaryKeyJoinColumn instead of the @JoinColumn. A side effect of having two mappings for the same column is that you now have to keep the two in synch. This is typically done through having the set method for the OneToOne attribute also set the Basic attribute value to the target object's id. This can become very complicated if the target object's primary key is a GeneratedValue; in this case you must ensure that the target object's id has been assigned before relating the two objects.

Sometimes I think that JPA primary keys would be much simpler if they were just defined on the entity using a collection of Columns instead of mixing them up with the attribute mapping. This would leave you free to map the primary key field in any manner you desired. A generic List could be used to pass the primary key to find() methods, and it would be the JPA provider's responsibility to hash and compare the primary key correctly instead of the user's IdClass. But perhaps for simple singleton primary key models the JPA model is more straightforward.

TopLink / EclipseLink : Allow the primary key to be specified as a list of columns instead of using Id mappings. This allows OneToOne and ManyToOne mapping foreign keys to be used as the primary key without requiring a duplicate mapping. It also allows the primary key to be defined through any other mapping type. This is set through using a DescriptorCustomizer and the ClassDescriptor addPrimaryKeyFieldName API.

Hibernate / Open JPA / EclipseLink (as of 1.2) : Allow the @Id annotation to be used on a OneToOne or ManyToOne mapping.

Example OneToOne id annotation

...
@Entity
public class Address {
    @Id
    @Column(name="OWNER_ID")
    private long ownerId;

    @OneToOne
    @PrimaryKeyJoinColumn(name="OWNER_ID", referencedColumnName="EMP_ID")
    private Employee owner;
    ...
    public void setOwner(Employee owner) {
        this.owner = owner;
        this.ownerId = owner.getId();
    }
    ...
}
Example ManyToOne id annotation

...
@Entity
@IdClass(PhonePK.class)
public class Phone {
    @Id
    @Column(name="OWNER_ID")
    private long ownerId;
    @Id
    private String type;

    @ManyToOne
    @PrimaryKeyJoinColumn(name="OWNER_ID", referencedColumnName="EMP_ID")
    private Employee owner;
    ...
    public void setOwner(Employee owner) {
        this.owner = owner;
        this.ownerId = owner.getId();
    }
    ...
}

Example ManyToOne id XML
<entity name="Address" class="org.acme.Address" access="FIELD"> <id-class class="org.acme.PhonePK"/> <attributes> <id name="ownerId"> <column name="OWNER_ID"/> </id> <id name="type"/> <many-to-one name="owner"> <primary-key-join-column name="OWNER_ID" referencedColumnName="EMP_ID"/>
JPA 2.0
Defining an Id for a OneToOne or ManyToOne in JPA 2.0 is much simpler. The @Id annotation or id XML attribute can be added to a OneToOne or ManyToOne mapping. The Id used for the object will be derived from the target object's Id. If the target Id is a single value, then the source object's Id value is the target object's Id value. If it is a composite Id, then the IdClass will contain the Basic Id attributes, and the target object's Id as the relationship value. If the target object also has a composite Id, then the source object's IdClass will contain the target object's IdClass.

Example JPA 2.0 ManyToOne id annotation

...
@Entity
@IdClass(PhonePK.class)
public class Phone {
    @Id
    private String type;

    @ManyToOne
    @Id
    @JoinColumn(name="OWNER_ID", referencedColumnName="EMP_ID")
    private Employee owner;
    ...
}

Example JPA 2.0 ManyToOne id XML

<entity name="Phone" class="org.acme.Phone" access="FIELD">
    <id-class class="org.acme.PhonePK"/>
    <attributes>
        <id name="type"/>
        <many-to-one name="owner" id="true">
            <join-column name="OWNER_ID" referenced-column-name="EMP_ID"/>
        </many-to-one>
    </attributes>
</entity>

Example JPA 2.0 id class

...
public class PhonePK {
    private String type;
    private long owner;

    public PhonePK() {
    }

    public PhonePK(String type, long owner) {
        this.type = type;
        this.owner = owner;
    }

    public boolean equals(Object object) {
        if (object instanceof PhonePK) {
            PhonePK pk = (PhonePK)object;
            return type.equals(pk.type) && owner == pk.owner;
        } else {
            return false;
        }
    }

    public int hashCode() {
        return type.hashCode() + (int) owner;
    }
}
Advanced Sequencing
Concurrency and Deadlocks

One issue with table sequencing is that the sequence table can become a concurrency bottleneck, even causing deadlocks. If the sequence ids are allocated in the same transaction as the insert, this can cause poor concurrency, as the sequence row will be locked for the duration of the transaction, preventing any other transaction that needs to allocate a sequence id from proceeding. In some cases the entire sequence table or the table page could be locked, causing even transactions allocating other sequences to wait or even deadlock. If a large sequence pre-allocation size is used this becomes less of an issue, because the sequence table is rarely accessed. Some JPA providers use a separate (non-JTA) connection to allocate the sequence ids in, avoiding or limiting this issue. In this case, if you use a JTA data-source connection, it is important to also include a non-JTA data-source connection in your persistence.xml.

Guaranteeing Sequential Ids

Table sequencing also allows for truly sequential ids to be allocated. Sequence and identity sequencing are non-transactional and typically cache values on the database, leading to large gaps in the ids that are allocated. Typically this is not an issue, and is desired in order to have good performance; however if performance and concurrency are less of a concern, and true sequential ids are desired, then a table sequence can be used. By setting the allocationSize of the sequence to 1 and ensuring the sequence ids are allocated in the same transaction as the insert, you can guarantee sequence ids without gaps (but generally it is much better to live with the gaps and have good performance), as in the sketch below.
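A hedged sketch of the trade-off (the Invoice entity and generator names are illustrative):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.TableGenerator;

@Entity
public class Invoice {
    // allocationSize=1 gives gap-free, truly sequential ids, at the cost of a database
    // round trip (and a lock on the sequence row) for every insert. A larger
    // allocationSize (e.g. 50 or 100) preallocates ids and performs far better.
    @Id
    @GeneratedValue(strategy=GenerationType.TABLE, generator="INVOICE_SEQ")
    @TableGenerator(name="INVOICE_SEQ", table="SEQUENCE_TABLE",
        pkColumnName="SEQ_NAME", valueColumnName="SEQ_COUNT",
        pkColumnValue="INVOICE_SEQ", allocationSize=1)
    private long id;
}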
this would be 99,999,999,999,999,999,999 ids, or one id each millisecond for about 3,000,000,000 years, which is pretty safe. But you also need to store this id in Java. If you store the id in a Java int, this would be a 32 bit number, which is 4,294,967,296 different ids, or one id each second for about 136 years. If you instead use a long, this would be a 64 bit number, which is 18,446,744,073,709,551,616 different ids, or one id each millisecond for about 600,000,000 years, which is pretty safe.
Customizing
JPA supports three different strategies for generating ids, however there are many other methods. Normally the JPA strategies are sufficient, so you would only use a different method in a legacy situation. Sometimes the application has an application specific strategy for generating ids, such as prefixing ids with the country code, or branch number. There are several ways to integrate a custom id generation strategy; the simplest is to just define the id as a normal id and have the application assign the id value when the object is created. Some JPA products provide additional sequencing and id generation options, and configuration hooks.
TopLink, EclipseLink : Several additional sequencing options are provided. A UnaryTableSequence allows a single column table to be used. A QuerySequence allows for custom SQL or stored procedures to be used. An API also exists to allow a user to supply their own code for allocating ids.
Hibernate : A GUID id generation option is provided through the @GenericGenerator annotation.
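A minimal sketch of the application-assigned approach described above (the PurchaseOrder class and the country-code prefix are hypothetical, just to illustrate the idea):

@Entity
public class PurchaseOrder {
    // The application builds the id itself, e.g. "CA-1000042",
    // instead of using a JPA generation strategy.
    @Id
    private String id;

    public PurchaseOrder() {
    }

    public PurchaseOrder(String countryCode, long number) {
        this.id = countryCode + "-" + number;
    }
    ...
}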
No Primary Key
Sometimes your object or table has no primary key. The best solution in this case is normally to add a generated id to the object and table. If you do not have this option, sometimes there is a column or set of columns in the table that make up a unique value. You can use this unique set of columns as your id in JPA. The JPA Id does not always have to match the database table primary key constraint, nor is a primary key or a unique constraint required. If your table truly has no unique columns, then use all of the columns as the id. Typically when this occurs the data is read-only, so even if the table allows duplicate rows with the same values, the objects will be the same anyway, so it does not matter that JPA thinks they are the same object. The issue with allowing updates and deletes is that there is no way to uniquely identify the object's row, so all of the matching rows will be updated or deleted. If your object does not have an id, but its table does, this is fine. Make the object an Embeddable object; embeddable objects do not have ids. You will need an Entity that contains this Embeddable to persist and query it.

Inheritance

Inheritance is a fundamental concept of object-oriented programming and Java. Relational databases have no concept of inheritance, so persisting inheritance in a database can be tricky. Because relational databases have no concept of inheritance, there is no standard way of implementing inheritance in the database, so the hardest part of persisting inheritance is choosing how to represent the inheritance in the database. JPA defines several inheritance mechanisms, mainly defined through the @Inheritance (https://java.sun.com/javaee/5/docs/api/javax/persistence/Inheritance.html) annotation or the <inheritance> element. There are three inheritance strategies defined in the InheritanceType (https://java.sun.com/javaee/5/docs/api/javax/persistence/InheritanceType.html) enum: SINGLE_TABLE, TABLE_PER_CLASS and JOINED.
An example of inheritance. SmallProject and LargeProject inherit the properties of their common parent, Project.
Single table inheritance is the default, and table per class is an optional feature of the JPA spec, so not all providers may support it. JPA also defines a mapped superclass concept, defined through the @MappedSuperclass (https://java.sun.com/javaee/5/docs/api/javax/persistence/MappedSuperclass.html) annotation or the <mapped-superclass> element. A mapped superclass is not a persistent class, but allows common mappings to be defined for its subclasses.
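A sketch of a single table mapping of the Project hierarchy might look like the following (the PROJ_TYPE column name is an assumption; the discriminator values match the XML example below):

@Entity
@Inheritance(strategy=InheritanceType.SINGLE_TABLE)
@DiscriminatorColumn(name="PROJ_TYPE")
public class Project {
    @Id
    private long id;
    private String name;
    ...
}

@Entity
@DiscriminatorValue("L")
public class LargeProject extends Project {
    private double budget;
    ...
}

@Entity
@DiscriminatorValue("S")
public class SmallProject extends Project {
    ...
}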
<entity name="LargeProject" class="org.acme.LargeProject" access="FIELD">
    <discriminator-value>L</discriminator-value>
    ...
</entity>
<entity name="SmallProject" class="org.acme.SmallProject" access="FIELD">
    <discriminator-value>S</discriminator-value>
</entity>
Common Problems
No class discriminator column

If you are mapping to an existing database schema, your table may not have a class discriminator column. Some JPA providers do not require a class discriminator when using a joined inheritance strategy, so this may be one solution. Otherwise you need some way to determine the class for a row. Sometimes the inherited value can be computed from several columns, or there is a discriminator but not a one to one mapping from value to class. Some JPA providers provide extended support for this. Another option is to create a database view that manufactures the discriminator column, and then map your hierarchy to this view instead of the table. In general the best solution is just to add a discriminator column to the table (truth be told, ALTER TABLE is your best friend in ORM).
TopLink / EclipseLink : Support computing the inheritance discriminator through Java code. This can be done through using a DescriptorCustomizer and the ClassDescriptor's InheritancePolicy's setClassExtractor() method.
Hibernate : This can be accomplished through using the Hibernate @DiscriminatorFormula annotation. This allows database specific SQL or functions to be used to compute the discriminator value.

Non nullable attributes

Subclasses cannot define attributes as not allowing null, as the other subclasses must insert null into those columns. A workaround to this issue is, instead of defining a not null constraint on the column, to define a table constraint that checks the discriminator value and the not nullable value. In general the best solution is to just live without the constraint (odds are you have enough constraints in your life to deal with as it is).
SMALLPROJECT (table)

ID
2

LARGEPROJECT (table)

ID  BUDGET
1   50000
        ...
    </attributes>
</entity>
<entity name="LargeProject" class="org.acme.LargeProject" access="FIELD">
    <table name="LARGEPROJECT"/>
    <discriminator-value>L</discriminator-value>
    ...
</entity>
<entity name="SmallProject" class="org.acme.SmallProject" access="FIELD">
    <table name="SMALLPROJECT"/>
    <discriminator-value>S</discriminator-value>
</entity>
Common Problems
Poor query performance

The main disadvantage to the joined model is that to query any class join queries are required. Querying the root or branch classes is even more difficult as either multiple queries are required, or outer joins or unions are required. One solution is to use single table inheritance instead; this is good if the classes have a lot in common, but if it is a big hierarchy and the subclasses have little in common this may not be desirable. Another solution is to remove the inheritance and instead use a MappedSuperclass, but this means that you can no longer query or have relationships to the class.
The poorest performing queries will be those to the root or branch classes. Avoiding queries and relationships to the root and branch classes will help to alleviate this burden. If you must query the root or branch classes there are two methods that JPA providers use: one is to outer join all of the subclass tables, the second is to first query the root table, then query only the required subclass table directly. The first method has the advantage of only requiring one query; the second has the advantage of avoiding outer joins, which typically have poor performance in databases. You may wish to experiment with each to determine which mechanism is more efficient in your application and see if your JPA provider supports that mechanism. Typically the multiple query mechanism is more efficient, but this generally depends on the speed of your database connection.
TopLink / EclipseLink : Support both querying mechanisms. The multiple query mechanism is used by default. Outer joins can be used instead through using a DescriptorCustomizer and the ClassDescriptor's InheritancePolicy's setShouldOuterJoinSubclasses() method.

Do not have/want a table for every subclass

Most inheritance hierarchies do not fit with either the joined or the single table inheritance strategy. Typically the desired strategy is somewhere in between, having joined tables in some subclasses and not in others. Unfortunately JPA does not directly support this. One workaround is to map your inheritance hierarchy as single table, but then add the additional tables in the subclasses, either through defining a Table or SecondaryTable in each subclass as required. Depending on your JPA provider, this may work (don't forget to sacrifice the chicken). If it does not work, then you may need to use a JPA provider specific solution if one exists for your provider, otherwise live within the constraints of having either a single table or one per subclass. You could also change your inheritance hierarchy so it matches your data model, so if the subclass does not have a table, then collapse its class into its superclass.
No class discriminator column

If you are mapping to an existing database schema, your table may not have a class discriminator column. Some JPA providers do not require a class discriminator when using a joined inheritance strategy, so this may be one solution. Otherwise you need some way to determine the class for a row. Sometimes the inherited value can be computed from several columns, or there is a discriminator but not a one to one mapping from value to class. Some JPA providers provide extended support for this. Another option is to create a database view that manufactures the discriminator column, and then map your hierarchy to this view instead of the table.
TopLink / EclipseLink : Support computing the inheritance discriminator through Java code. This can be done through using a DescriptorCustomizer and the ClassDescriptor's InheritancePolicy's setClassExtractor() method.
Hibernate : This can be accomplished through using the Hibernate @DiscriminatorFormula annotation. This allows database specific SQL or functions to be used to compute the discriminator value.
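As a hedged sketch of the Hibernate @DiscriminatorFormula mentioned above (shown here on a single table hierarchy; the BUDGET-based formula is just an assumed example, not from this book's schema):

@Entity
// Hibernate-specific: compute the discriminator from existing data
// instead of reading a dedicated discriminator column.
@org.hibernate.annotations.DiscriminatorFormula(
    "CASE WHEN BUDGET IS NULL THEN 'S' ELSE 'L' END")
public class Project {
    @Id
    private long id;
    ...
}

@Entity
@DiscriminatorValue("L")
public class LargeProject extends Project {
    private double budget;
    ...
}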
Advanced
Table Per Class Inheritance
Table per class inheritance allows inheritance to be used in the object model, when it does not exist in the data model. In table per class inheritance a table is defined for each concrete class in the inheritance hierarchy to store all the attributes of that class and all of its superclasses. Be cautious using this strategy as it is optional in the JPA spec, and querying root or branch classes can be very difficult and inefficient.
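A sketch of how the root might be mapped follows (attribute names are assumptions; the SmallProject subclass shown in the example below simply adds its own table):

@Entity
@Inheritance(strategy=InheritanceType.TABLE_PER_CLASS)
public class Project {
    @Id
    private long id;
    private String name;
    ...
}

@Entity
@Table(name="LARGEPROJECT")
public class LargeProject extends Project {
    private double budget;
    ...
}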
LARGEPROJECT (table)

ID  NAME        BUDGET
1   Accounting  50000
@Entity
@Table(name="SMALLPROJECT")
public class SmallProject extends Project {
}
Common Problems
Poor query performance

The main disadvantage to the table per class model is that queries or relationships to the root or branch classes become expensive. Querying the root or branch classes requires multiple queries, or unions. One solution is to use single table inheritance instead; this is good if the classes have a lot in common, but if it is a big hierarchy and the subclasses have little in common this may not be desirable. Another solution is to remove the table per class inheritance and instead use a MappedSuperclass, but this means that you can no longer query or have relationships to the class.

Issues with ordering and joins

Because table per class inheritance requires multiple queries, or unions, you cannot join to, fetch join, or traverse them in queries. Also when ordering is used the results will be ordered by class, then by the ordering. These limitations depend on your JPA provider; some JPA providers may have other limitations, or not support table per class at all as it is optional in the JPA spec.
Mapped Superclasses
Mapped superclass inheritance allows inheritance to be used in the object model, when it does not exist in the data model. It is similar to table per class inheritance, but does not allow querying, persisting, or relationships to the superclass. Its main purpose is to allow mapping information to be inherited by its subclasses. The subclasses are responsible for defining the table, id and other information, and can modify any of the inherited mappings. A common usage of a mapped superclass is to define a common PersistentObject for your application to define common behavior and mappings such as the id and version. A mapped superclass normally should be an abstract class. A mapped superclass is not an Entity, but is instead defined through the @MappedSuperclass (https://java.sun.com/javaee/5/docs/api/javax/persistence/MappedSuperclass.html) annotation or the <mapped-superclass> element.
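A sketch of the common PersistentObject idea described above (class and attribute names are assumptions):

@MappedSuperclass
public abstract class PersistentObject {
    @Id
    @GeneratedValue
    private long id;
    @Version
    private long version;

    public long getId() {
        return id;
    }
}

@Entity
public class Project extends PersistentObject {
    // Inherits the id and version mappings from PersistentObject.
    private String name;
    ...
}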
LARGEPROJECT (table)

ID  PROJECT_NAME  BUDGET
1   Accounting    50000
<entity name="LargeProject" class="org.acme.LargeProject" access="FIELD">
    <table name="LARGEPROJECT"/>
    <attribute-override name="name">
        <column name="PROJECT_NAME"/>
    </attribute-override>
    ...
</entity>
<entity name="SmallProject" class="org.acme.SmallProject" access="FIELD">
    <table name="SMALLPROJECT"/>
</entity>
Common Problems
Cannot query, persist, or have relationships

The main disadvantage of mapped superclasses is that they cannot be queried or persisted. You also cannot have a relationship to a mapped superclass. If you require any of these then you must use another inheritance model, such as table per class, which is virtually identical to a mapped superclass except it may not have these limitations. Another alternative is to change your model such that your classes do not have relationships to the superclass, such as changing the relationship to a subclass, or removing the relationship and instead querying for its value by querying each possible subclass and collecting the results in Java.

Subclass does not want to inherit mappings

Sometimes you have a subclass that needs to be mapped differently than its parent, or is similar to its parent but does not have one of the fields, or uses it very differently. Unfortunately it is very difficult not to inherit everything from your parent in JPA; you can override a mapping, but you cannot remove one, or change the type of mapping, or the target class. If you define your mappings as properties (get methods), or through XML, you may be able to attempt to override or mark the inherited mapping as Transient; this may work depending on your JPA provider (don't forget to sacrifice a chicken). Another solution is to actually fix your inheritance in your object model. If you inherit foo from Bar but don't want to inherit it, then remove it from Bar; if the other subclasses need it, either add it to each, or create a FooBar subclass of Bar that has the foo and have the other subclasses extend this. Some JPA providers may provide ways to be less stringent on inheritance.
TopLink / EclipseLink : Allow a subclass to remove a mapping, redefine a mapping, or be entirely independent of its superclass. This can be done through using a DescriptorCustomizer and removing the ClassDescriptor's mapping, or adding a mapping with the same attribute name, or removing the InheritancePolicy.
Embeddables
In an application object model some objects are considered independent, and others are considered dependent parts of other objects. In UML a relationship to a dependent object is considered an aggregate or composite association. In a relational database this kind of relationship could be modeled in two ways: the dependent object could have its own table, or its data could be embedded in the independent object's table. In JPA a relationship where the target object's data is embedded in the source object's table is considered an embedded relationship, and the target object is considered an Embeddable object. Embeddable objects have different requirements and restrictions than Entity objects and are defined by the @Embeddable (https://java.sun.com/javaee/5/docs/api/javax/persistence/Embeddable.html) annotation or <embeddable> element. An embeddable object cannot be directly persisted or queried; it can only be persisted or queried in the context of its parent. An embeddable object does not have an id or table. The JPA spec does not support embeddable objects having relationships or inheritance, although some JPA providers may allow this. Relationships to embeddable objects are defined through the @Embedded (https://java.sun.com/javaee/5/docs/api/javax/persistence/Embedded.html) annotation or <embedded> element. The JPA spec only allows references to embeddable objects, and does not support collection relationships to embeddable objects, although some JPA providers may allow this.
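A minimal sketch of an embeddable and its parent, using the EmploymentPeriod example that appears elsewhere in this chapter (attribute names assumed):

@Embeddable
public class EmploymentPeriod {
    private java.util.Date startDate;
    private java.util.Date endDate;
    ...
}

@Entity
public class Employee {
    @Id
    private long id;

    // Stored in the EMPLOYEE table, not in a table of its own.
    @Embedded
    private EmploymentPeriod period;
    ...
}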
Advanced
Sharing
An embeddable object can be shared between multiple classes. Consider a Name object that both an Employee and a User contain. Both Employee and User have their own tables, with different column names in which they desire to store their name. Embeddables support this by allowing each embedded mapping to override the columns used in the embeddable. This is done through the @AttributeOverride (https://java.sun.com/javaee/5/docs/api/javax/persistence/AttributeOverride.html) annotation or <attribute-override> element. Note that an embeddable object instance cannot be shared between multiple entity instances. If you desire to share an embeddable object instance, then you must make it an independent object with its own table.
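A sketch of the Name case described above (the F_NAME/L_NAME and USER_FIRST/USER_LAST column names are assumptions):

@Embeddable
public class Name {
    private String firstName;
    private String lastName;
    ...
}

@Entity
public class Employee {
    @Embedded
    @AttributeOverrides({
        @AttributeOverride(name="firstName", column=@Column(name="F_NAME")),
        @AttributeOverride(name="lastName", column=@Column(name="L_NAME"))})
    private Name name;
    ...
}

@Entity
public class User {
    @Embedded
    @AttributeOverrides({
        @AttributeOverride(name="firstName", column=@Column(name="USER_FIRST")),
        @AttributeOverride(name="lastName", column=@Column(name="USER_LAST"))})
    private Name name;
    ...
}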
        <embedded name="name">
            <attribute-override name="startDate">
                <column name="SDATE"/>
            </attribute-override>
            <attribute-override name="endDate">
                <column name="EDATE"/>
            </attribute-override>
        </embedded>
    </attributes>
</entity>
Embedded Ids
An EmbeddedId is an embeddable object that contains the Id for an entity. See: Embedded Id
Nulls
An embeddable object's data is contained in several columns in its parent's table. Since there is no single field value, there is no way to know if a parent's reference to the embeddable is null. One could assume that if every field value of the embeddable is null, then the reference should be null, but then there is no way to represent an embeddable with all null values. JPA does not allow embeddables to be null, but some JPA providers may support this.
TopLink / EclipseLink : Support an embedded reference being null. This is set through using a DescriptorCustomizer and the AggregateObjectMapping setIsNullAllowed API.
Nesting
A nested embeddable is a relationship to an embeddable object from another embeddable. The JPA 1.0 spec only allows Basic relationships in an embeddable object, so nested embeddables are not supported, however some JPA products may support them. Technically there is nothing preventing the @Embedded annotation being used in an embeddable object, so this may just work depending on your JPA provider (cross your fingers). JPA 2.0 supports nested embeddable objects.
TopLink / EclipseLink : Support embedded mappings from embeddables. The existing @Embedded annotation or <embedded> element can be used.
A workaround to having a nested embeddable, and for embeddables in general, is to use property access, and add get/set methods for all of the attributes of the nested embeddable object.

Example of using properties to define a nested embeddable

@Embeddable
public class EmploymentDetails {
    private EmploymentPeriod period;
    private int yearsOfService;
    private boolean fullTime;
    ....

    public EmploymentDetails() {
        this.period = new EmploymentPeriod();
    }

    @Transient
    public EmploymentPeriod getEmploymentPeriod() {
        return period;
    }

    @Basic
    public Date getStartDate() {
        return getEmploymentPeriod().getStartDate();
    }

    public void setStartDate(Date startDate) {
        getEmploymentPeriod().setStartDate(startDate);
    }

    @Basic
    public Date getEndDate() {
        return getEmploymentPeriod().getEndDate();
    }

    public void setEndDate(Date endDate) {
        getEmploymentPeriod().setEndDate(endDate);
    }
    ....
}
Inheritance
Embeddable inheritance is when one embeddable class subclasses another embeddable class. The JPA spec does not allow inheritance in embeddable objects, however some JPA products may support this. Technically there is nothing preventing the @DiscriminatorColumn annotation being used in an embeddable object, so this may just work depending on your JPA provider (cross your fingers). Inheritance in embeddables is always single table, as an embeddable must live within its parent's table. Generally attempting to mix inheritance between embeddables and entities is not a good idea, but it may work in some cases.
TopLink / EclipseLink : Support inheritance with embeddables. This is set through using a DescriptorCustomizer and the InheritancePolicy.
Relationships
A relationship is when an embeddable has a OneToOne or other such mapping to an entity. The JPA 1.0 spec only allows Basic mappings in an embeddable object, so relationships from embeddables are not supported, however some JPA products may support them. Technically there is nothing preventing the @OneToOne annotation or other relationships from being used in an embeddable object, so this may just work depending on your JPA provider (cross your fingers). JPA 2.0 supports all relationship types from an embeddable object.
TopLink / EclipseLink : Support relationship mappings from embeddables. The existing relationship annotations or XML elements can be used.
Relationships to embeddable objects from entities other than the embeddable's parent are typically not a good idea, as an embeddable is a private dependent part of its parent. Generally relationships should be to the embeddable's parent, not the embeddable. Otherwise, it would normally be a good idea to make the embeddable an independent entity with its own table. If an embeddable has a bi-directional relationship, such as a OneToMany that requires an inverse ManyToOne, the inverse relationship should be to the embeddable's parent.
A workaround to having a relationship from an embeddable is to define the relationship in the embeddable's parent, and define property get/set methods for the relationship that set the relationship into the embeddable.

Example of setting a relationship in an embeddable from its parent

@Entity
public class Employee {
    ....
    private EmploymentDetails details;
    ....
    @Embedded
    public EmploymentDetails getEmploymentDetails() {
        return details;
    }

    @OneToOne
    public Address getEmploymentAddress() {
        return getEmploymentDetails().getAddress();
    }

    public void setEmploymentAddress(Address address) {
        getEmploymentDetails().setAddress(address);
    }
}

One special relationship that is sometimes desired in an embeddable is a relationship to its parent. JPA does not support this, but some JPA providers may.
Hibernate : Supports a @Parent annotation in embeddables to define a relationship to its parent.
A workaround to having a parent relationship from an embeddable is to set the parent in the property set method.

Example of setting a relationship in an embeddable to its parent

@Entity
public class Employee {
    ....
    private EmploymentDetails details;
    ....
    @Embedded
    public EmploymentDetails getEmploymentDetails() {
        return details;
    }

    public void setEmploymentDetails(EmploymentDetails details) {
        this.details = details;
        details.setParent(this);
    }
}
Collections
A collection of embeddable objects is similar to a OneToMany except the target objects are embeddables and have no Id. This allows for a OneToMany to be defined without an inverse ManyToOne, as the parent is responsible for storing the foreign key in the target object's table. JPA 1.0 did not support collections of embeddable objects, but some JPA providers support this. JPA 2.0 does support collections of embeddable objects through the ElementCollection mapping. See ElementCollection.
EclipseLink (as of 1.2) : Supports the JPA 2.0 ElementCollection mapping.
TopLink / EclipseLink : Support collections of embeddables. This is set through using a DescriptorCustomizer and the AggregateCollectionMapping.
Hibernate : Supports collections of embeddables through the @CollectionOfElements annotation.
Typically the primary key of the target table will be composed of the parent's primary key, and some unique field in the embeddable object. The embeddable should have a unique field within its parent's collection, but does not need to be unique for the entire class. It could still have a unique id and still use sequencing, or if it has no unique fields, its id could be composed of all of its fields. The embeddable collection object will be different than a typical embeddable object as it will not be stored in the parent's table, but in its own table. Embeddables are strictly privately owned objects: deletion of the parent will cause deletion of the embeddables, and removal from the embeddable collection should cause the embeddable to be deleted. Embeddables cannot be queried directly, and are not independent objects as they have no Id.
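A sketch of the JPA 2.0 ElementCollection form (the PHONE collection table, its columns, and the Phone embeddable are assumptions):

@Entity
public class Employee {
    @Id
    private long id;

    // Stored in its own PHONE table, keyed back to the employee.
    @ElementCollection
    @CollectionTable(name="PHONE", joinColumns=@JoinColumn(name="EMP_ID"))
    private java.util.List<Phone> phones;
    ...
}

@Embeddable
public class Phone {
    private String type;
    private String number;
    ...
}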
Querying
Embeddable objects cannot be queried directly, but they can be queried in the context of their parent. Typically it is best to select the parent, and access the embeddable from the parent. This will ensure the embeddable is registered with the persistence context. If the embeddable is selected in a query, the resulting objects will be detached, and changes will not be tracked.

Example of querying an embeddable

SELECT employee.period from Employee employee where employee.period.endDate = :param
Locking
Locking is perhaps the most ignored persistence consideration. Most applications tend to ignore thinking about concurrency issues during development, and then smush in a locking mechanism before going into production. Considering the large percentage of software projects that fail or are canceled, or never achieve a large user base, perhaps this is logical. However, locking and concurrency is a critical, or at least a very important, issue for most applications, so it should probably be considered earlier in the development cycle. If the application will have concurrent writers to the same objects, then a locking strategy is critical so that data corruption can be prevented. There are two strategies for preventing concurrent modification of the same object/row: optimistic and pessimistic locking. Technically there is a third strategy, ostrich locking, or no locking, which means put your head in the sand and ignore the issue. There are various ways to implement both optimistic and pessimistic locking. JPA has support for version optimistic locking, but some JPA providers support other methods of optimistic locking, as well as pessimistic locking. Locking and concurrency can be a confusing thing to consider, and there are a lot of misconceptions out there. Correctly implementing locking in your application typically involves more than setting some JPA or database configuration
option (although that is all many applications that think they are using locking do). Locking may also involve application level changes, and ensuring other applications accessing the database also do so correctly for the locking policy being used.
Optimistic Locking
Optimistic locking assumes that the data will not be modified between when you read the data and when you write the data. This is the most common style of locking used and recommended in today's persistence solutions. The strategy involves checking that one or more values from the original object read are still the same when updating it. This verifies that the object has not been changed by another user in between the read and the write.
JPA supports using an optimistic locking version field that gets updated on each update. The field can either be numeric or a timestamp value. A numeric value is recommended as a numeric value is more precise, portable, performant and easier to deal with than a timestamp.
The @Version (https://java.sun.com/javaee/5/docs/api/javax/persistence/Version.html) annotation or <version> element is used to define the optimistic lock version field. The annotation is defined on the version field or property for the object, similar to an Id mapping. The object must contain an attribute to store the version field.
The object's version attribute is automatically updated by the JPA provider, and should not normally be modified by the application. The one exception is if the application reads the object in one transaction, sends the object to a client, and updates/merges the object in another transaction. In this case the application must ensure that the original object version is used, otherwise any changes in between the read and write will not be detected. The EntityManager merge() API will always merge the version, so the application is only responsible for this if manually merging.
When a locking conflict is detected an OptimisticLockException (https://java.sun.com/javaee/5/docs/api/javax/persistence/OptimisticLockException.html) will be thrown. This could be wrapped inside a RollbackException, or other exceptions if using JTA, but it should be set as the cause of the exception. The application can handle the exception, but should normally report the error to the user, and let them determine what to do.

Example of Version annotation

@Entity
public abstract class Employee {
    @Id
    private long id;
    @Version
    private long version;
    ...
}

Example of Version XML

<entity name="Employee" class="org.acme.Employee" access="FIELD">
    <attributes>
        <id name="id"/>
        <version name="version"/>
        ...
    </attributes>
</entity>
Paranoid Delusionment

Locking can prevent most concurrency issues, but be careful of going overboard and analyzing to death every possible hypothetical occurrence. Sometimes in a concurrent application (or any software application) bad stuff can happen. Users are pretty used to this by now, and I don't think anyone out there thinks computers are perfect. A good example is a source code control system. Allowing users to overwrite each other's changes is a bad thing, so most systems avoid this through versioning the source files. If a user submits changes to a file that originated from an older version than the current version, the source code control system will raise a conflict and make the user merge the two files. This is essentially optimistic locking. But what if one user removes or renames a method in one file, then another user adds a new method or call in another file to that old method? No source code control system that I know of will detect this issue; it is a conflict and will cause the build to break. The solution to this is to start locking, or checking the lock on, every file in the system (or at least every possibly related file). This is similar to using optimistic read locking on every possibly related object, or pessimistically locking every possibly related object. This could be done, but would probably be very expensive, and more importantly would now raise possible conflicts every time a user checked in, so would be entirely useless. So, in general, be careful of being too paranoid, such that you sacrifice the usability of your system.

Other applications accessing the same data

Any form of locking that is going to work requires that all applications accessing the same data follow the same rules. If you use optimistic locking in one application, but no locking in another accessing the same data, they will still conflict. One fake solution is to configure an update trigger to always increment the version value (unless incremented in the update). This will allow the new application to avoid overwriting the old application's changes, but the old application will still be able to overwrite the new application's changes. This still may be better than no locking at all, and perhaps the old application will eventually go away.
One common misconception is that if you use pessimistic locking, instead of adding a version field, you will be ok. Again, pessimistic locking requires that all applications accessing the same data use the same form of locking. The old application can still read data (without locking), then update the data after the new application reads, locks, and updates the same data, overwriting its changes.

Isn't database transaction isolation all I need?

Possibly, but most likely not. Most databases default to read committed transaction isolation. This means that you will never see uncommitted data, but this does not prevent concurrent transactions from overwriting the same data.

1. Transaction A reads row x.
2. Transaction B reads row x.
3. Transaction A writes row x.
4. Transaction B writes row x (and overwrites A's changes).
5. Both commit successfully.

This is the case with read committed, but with serializable this conflict would not occur. With serializable, Transaction B would lock on the select and wait (perhaps a long time) until Transaction A commits. In some databases Transaction A may not wait, but would fail on commit. However, even with serializable isolation the typical web application would still have a conflict.
This is because each server request operates in a different database transaction. The web client reads the data in one transaction, then updates it in another transaction. So optimistic locking is really the only viable locking option for the typical web application. Even if the read and write occurs in the same transaction, serializable is normally not the solution
because of concurrency implications and deadlock potential. See Serializable Transaction Isolation.

What happens if I merge an object that was deleted by another user?

What should happen is the merge should trigger an OptimisticLockException, because the object has a version that is not null and greater than 0, and the object does not exist. But this is probably JPA provider specific; some may re-insert the object (this would occur without locking), or throw a different exception. If you called persist instead of merge, then the object would be re-inserted.

What if my table doesn't have a version column?

The best solution is probably just to add one. Field locking is another solution, as well as pessimistic locking in some cases. See Field Locking.

What about relationships?

See Cascaded Locking.

Can I use a timestamp?

See Timestamp Locking.

Do I need a version in each table for inheritance or multiple tables?

The short answer is no, only in the root table. See Multiple Versions.
Advanced
Timestamp Locking
Timestamp version locking is supported by JPA and is configured the same as numeric version locking, except the attribute type will be a java.sql.Timestamp or other date/time type. Be cautious in using timestamp locking, as timestamps have different levels of precision in different databases, and some databases do not store a timestamp's milliseconds, or do not store them precisely. In general timestamp locking is less efficient than numeric version locking, so numeric version locking is recommended. Timestamp locking is frequently used if the table already has a last-updated timestamp column, and is also a convenient way to auto update a last-updated column. The timestamp version value can be more useful than a numeric version, as it includes the relevant information on when the object was last updated. The timestamp value in timestamp version locking can either come from the database, or from Java (mid-tier). JPA does not allow this to be configured, however some JPA providers may provide this option. Using the database's current timestamp can be very expensive, as it requires a database call to the server.
Multiple Versions
An object can only have one version in JPA. Even if the object maps to multiple tables, only the primary table will have the version. If any fields in any of the tables change, the version will be updated. If you desire multiple versions, you may need to map multiple version attributes in your object and manually maintain the duplicate versions, perhaps through events. Technically there is nothing preventing you from annotating multiple attributes with @Version, and potentially some JPA providers may support this (don't forget to sacrifice a chicken).
Cascaded Locking
Locking objects is different than locking rows in the database. An object can be more complex than a simple row; an object can span multiple tables, have inheritance, have relationships, and have dependent objects. So determining when an object has changed and needs to update its version can be more difficult than determining when a row has changed. JPA does define that when any of the object's tables changes the version is updated. However it is less clear on relationships. If a Basic, Embedded, or a foreign key relationship (OneToOne, ManyToOne) changes, the version will be updated. But what about OneToMany, ManyToMany, and a target foreign key OneToOne? For changes to these relationships the update to the version may depend on the JPA provider. What about changes made to dependent objects? JPA does not have a cascade option for locking, and has no direct concept of dependent objects, so this is not an option. Some JPA providers may support this. One way to simulate this is to use write locking. JPA defines the EntityManager lock() (https://java.sun.com/javaee/5/docs/api/javax/persistence/EntityManager.html#lock(java.lang.Object, javax.persistence.LockModeType)) API. You can define a version only in your root parent objects, and when a child (or relationship) is changed, you can call the lock API with the parent to cause a WRITE lock. This will cause the parent version to be updated. You may also be able to automate this through persistence events. Usage of cascaded locking depends on your application. If in your application you consider one user updating one dependent part of an object, and another user updating another part of the object, to be a locking conflict, then this is what you want. If your application does not consider this to be a problem, then you do not want cascaded locking. One of the advantages of cascaded locking is you have fewer version fields to maintain, and only the update to the root object needs to check the version. This can make a difference in optimizations such as batch writing, as the dependent objects may not be able to be batched if they have their own version that must be checked.
TopLink / EclipseLink : Support cascaded locking through their @OptimisticLocking and @PrivateOwned annotations and XML.
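A sketch of simulating cascaded locking with the lock() API described above (the Employee parent with a version and its dependent Address without one are assumptions):

entityManager.getTransaction().begin();
Employee employee = entityManager.find(Employee.class, employeeId);
// Change a dependent part of the object, which has no version of its own.
employee.getAddress().setCity("Ottawa");
// Force the parent's version to be checked and incremented on commit.
entityManager.lock(employee, LockModeType.WRITE);
entityManager.getTransaction().commit();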
Field Locking
If you do not have a version field in your table, optimistic field locking is another solution. Field locking involves comparing certain fields in the object when updating. If those fields have changed, then the update will fail. JPA does not support field locking, but some JPA providers do support it. Field locking can also be used when a finer level of locking is desired. For example, if one user changes the object's name and another changes the object's address, you may desire for these updates to not conflict, and only desire optimistic lock errors when users change the same fields. You may also only be concerned about conflicts in changes to certain fields, and not desire lock errors from conflicts in the other fields. Field locking can also be used on legacy schemas, where you cannot add a version column, or to integrate with other applications accessing the same data which are not using optimistic locking (note that if the other applications are not also using field locking, you can only detect conflicts in one direction). There are several types of field locking:
All fields compared in the update - This can lead to a very big where clause, but will detect any conflicts.
Selected fields compared in the update - This is useful if conflicts in only certain fields are desired.
Changed fields compared in the update - This is useful if only changes to the same fields are considered to be conflicts.
If your JPA provider does not support field locking, it is difficult to simulate, as it requires changes to the update SQL. Your JPA provider may allow overriding the update SQL, in which case All or Selected field locking may be possible (if you have access to the original values), but Changed field locking is more difficult because the update must be dynamic. Another way to simulate field locking is to flush your changes, then refresh the object using a
separate EntityManager and connection and compare the current values with your original object. When using field locking it is important to keep the original object that was read. If you read the object in one transaction and send it to a client, then update in another, you are not really locking. Any changes made between the read and write will not be detected. You must keep the original object read managed in an EntityManager for your locking to have any effect.
TopLink / EclipseLink : Support field locking through their @OptimisticLocking annotation and XML.
update all fields in the object. So in one case the first user's changes would be overridden, but in the second they would not.
Pessimistic Locking
Pessimistic locking means acquiring a lock on the object before you begin to edit the object, to ensure that no other users are editing the object. Pessimistic locking is typically implemented through using database row locks, such as through the SELECT ... FOR UPDATE SQL syntax. The data is read and locked, the changes are made and the transaction is committed, releasing the locks.
JPA 1.0 did not support pessimistic locking, but some JPA 1.0 providers do. JPA 2.0 supports pessimistic locking. It is also possible to use JPA native SQL queries to issue SELECT ... FOR UPDATE and use pessimistic locking. When using pessimistic locking you must ensure that the object is refreshed when it is locked; locking a potentially stale object is of no use. The SQL syntax for pessimistic locking is database specific, and different databases have different syntax and levels of support, so ensure your database properly supports your locking requirements.
EclipseLink (as of 1.2) : Supports JPA 2.0 pessimistic locking.
TopLink / EclipseLink : Support pessimistic locking through the "eclipselink.pessimistic-lock" query hint.
The main issue with pessimistic locking is that it uses database resources, requiring a database transaction and connection to be held open for the duration of the edit. This is typically not desirable for interactive web applications. Pessimistic locking can also have concurrency issues and cause deadlocks. The main advantage of pessimistic locking is that once the lock is obtained, it is fairly certain that the edit will be successful. This can be desirable in highly concurrent applications, where optimistic locking may cause too many optimistic locking errors.
There are other ways to implement pessimistic locking; it could be implemented at the application level, or through serializable transaction isolation. Application level pessimistic locking can be implemented through adding a locked field to your object. Before an edit you must update the field to locked (and commit the change). Then you can edit the object, and set the locked field back to false. To avoid conflicts in acquiring the lock, you should also use optimistic locking, to ensure the lock field is not updated to true by another user at the same time.
an application would only use one locking model.
NONE - No lock is acquired; this is the default for any find, refresh or query operation.
JPA 2.0 also adds two new standard query hints. These can be passed to any Query, NamedQuery, or find(), lock() or refresh() operation.
"javax.persistence.lock.timeout" - Number of milliseconds to wait on the lock before giving up and throwing a PessimisticLockException.
"javax.persistence.lock.scope" - The valid scopes are defined in PessimisticLockScope (https://java.sun.com/javaee/6/docs/api/javax/persistence/PessimisticLockScope.html), either NORMAL or EXTENDED. EXTENDED will also lock the object's owned join tables and element collection tables.
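A sketch of acquiring a JPA 2.0 pessimistic lock with a timeout hint (the Employee entity, id, and the 5 second timeout are assumed values):

// java.util.Map, java.util.HashMap, javax.persistence.LockModeType
Map<String, Object> hints = new HashMap<String, Object>();
// Wait at most 5 seconds for the database row lock.
hints.put("javax.persistence.lock.timeout", 5000);

entityManager.getTransaction().begin();
Employee employee = entityManager.find(
    Employee.class, employeeId, LockModeType.PESSIMISTIC_WRITE, hints);
employee.setSalary(employee.getSalary() + 1000);
entityManager.getTransaction().commit();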
Basics
A basic attribute is one where the attribute class is a simple type such as String, Number, Date or a primitive. A basic attribute's value can map directly to the column value in the database. The following table summarizes the basic types and the database types they map to.
Java type                                                                    Database type
String (char, char[])                                                        VARCHAR (CHAR, VARCHAR2, CLOB, TEXT)
Number (BigDecimal, BigInteger, Integer, Double, Long, Float, Short, Byte)   NUMERIC (NUMBER, INT, LONG, FLOAT, DOUBLE)
int, long, float, double, short, byte                                        NUMERIC (NUMBER, INT, LONG, FLOAT, DOUBLE)
byte[]                                                                       VARBINARY (BINARY, BLOB)
boolean (Boolean)                                                            BOOLEAN (BIT, SMALLINT, INT, NUMBER)
java.util.Date                                                               TIMESTAMP (DATE, DATETIME)
java.sql.Date                                                                DATE (TIMESTAMP, DATETIME)
java.sql.Time                                                                TIME (TIMESTAMP, DATETIME)
java.sql.Timestamp                                                           TIMESTAMP (DATETIME, DATE)
java.util.Calendar                                                           TIMESTAMP (DATETIME, DATE)
java.lang.Enum                                                               NUMERIC (VARCHAR, CHAR)
java.util.Serializable                                                       VARBINARY (BINARY, BLOB)
In JPA a basic attribute is mapped through the @Basic (https://java.sun.com/javaee/5/docs/api/javax/persistence/Basic.html) annotation or the <basic> element. The types and conversions supported depend on the JPA implementation and database platform. Some JPA implementations may support conversion between many different data-types or additional types, or have extended type conversion support, see the advanced section for more details. Any basic attribute using a type that does not map directly to a database type can be serialized to a binary database type. The easiest way to map a basic attribute in JPA is to do nothing. Any attributes that have no other annotations and do not reference other entities will be automatically mapped as basic, and even serialized if not a basic type. The column name for the attribute will be defaulted, named the same as the attribute name, as uppercase. Sometimes auto-mapping can be unexpected if you have an attribute in your class that you did not intend to have persisted. You must mark any such non-persistent fields using the @Transient (https://java.sun.com/javaee/5/docs/api/javax/persistence/Transient.html) annotation or <transient> element. Although auto-mapping makes rapid prototyping easy, you typically reach a point where you want control over your database schema. To specify the column name for a basic attribute the @Column (https://java.sun.com/javaee/5/docs/api/javax/persistence/Column.html) annotation or <column> element is used. The column annotation also allows for other information to be specified such as the database type, size, and some constraints.
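A sketch of controlling the column for a basic attribute and excluding a non-persistent field (the column name and length are assumptions):

@Entity
public class Employee {
    @Id
    private long id;

    @Basic
    @Column(name="F_NAME", length=100)
    private String firstName;

    // Not stored in the database.
    @Transient
    private String displayName;
    ...
}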
Common Problems
Translating Values
See Conversion
Truncated Data
A common issue is that data, such as Strings, written from the object are truncated when read back from the database. This is normally caused by the column length not being large enough to handle the object's data. In Java there is no maximum size for a String, but in a database VARCHAR field, there is a maximum size. You must ensure that the length you set in your column when you create the table is large enough to handle any object value. For very large Strings CLOBs can be used, but in general CLOBs should not be over used, as they are less efficient than a VARCHAR. If you use JPA to generate your database schema, you can set the column length through the Column annotation or element, see Column Definition and Schema Generation.
How to exclude fields from INSERT or UPDATE statements, or handle default values set in triggers?
See Insertable, Updatable
Advanced
Temporal, Dates, Times, Timestamps and Calendars
Dates, times, and timestamps are common types both in the database and in Java, so in theory mapping these types should be simple, right? Well, sometimes this is the case and just a normal Basic mapping can be used, however sometimes it becomes more complex. Some databases do not have DATE and TIME types, only TIMESTAMP fields, however some do have separate types, and some just have DATE and TIMESTAMP. Originally in Java 1.0, Java only had a java.util.Date type, which was a date, time and milliseconds. In Java 1.1 this was expanded to support the common database types with java.sql.Date, java.sql.Time, and java.sql.Timestamp, then to support internationalization Java created the java.util.Calendar type and virtually deprecated the old date types (almost all of their methods), which JDBC still uses. If you map a Java java.sql.Date type to a database DATE, this is just a basic mapping and you should not have any issues (ignore Oracle's DATE type that is/was a timestamp for now). You can also map java.sql.Time to TIME, and java.sql.Timestamp to TIMESTAMP. However if you have a java.util.Date or java.util.Calendar in Java and wish to map it to a DATE or TIME, you may need to indicate that the JPA provider should perform some sort of conversion for this. In JPA the @Temporal (https://java.sun.com/javaee/5/docs/api/javax/persistence/Temporal.html) annotation or <temporal> element is used to map this. You can indicate that just the DATE or TIME portion of the date/time value be stored to the database. You could also use Temporal to map a java.sql.Date to a TIMESTAMP field, or any other such conversion.
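A sketch of using Temporal to control which portion of a date/time value is stored (attribute names assumed):

// Store only the date portion of the Calendar in a DATE column.
@Temporal(TemporalType.DATE)
private java.util.Calendar startDate;

// Store the full date and time of the java.util.Date in a TIMESTAMP column.
@Temporal(TemporalType.TIMESTAMP)
private java.util.Date lastUpdated;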
Milliseconds
The precision of milliseconds is different for different temporal classes and database types, and on different databases. The java.util.Date and Calendar classes support milliseconds. The java.sql.Date and java.sql.Time classes do not support milliseconds. The java.sql.Timestamp class supports nanoseconds. On many databases the TIMESTAMP type supports milliseconds. On Oracle prior to Oracle 9, there was only a DATE type, which was a date and a time, but had no milliseconds. Oracle 9 added a TIMESTAMP type that has milliseconds (and nanoseconds), and now treats the old DATE type as only a date, so be careful using it as a timestamp. MySQL has DATE, TIME and DATETIME types. DB2 has DATE, TIME and TIMESTAMP types; the TIMESTAMP supports microseconds. Sybase and SQL Server just have a DATETIME type which has milliseconds, but at least on some versions has precision issues; it seems to store an estimate of the milliseconds, not the exact value. If you use timestamp version locking you need to be very careful of your milliseconds precision. Ensure your database supports milliseconds precisely, otherwise you may have issues, especially if the value is assigned in Java and then differs from what gets stored on the database, which will cause the next update to fail for the same object. In general I would not recommend using a timestamp as a primary key or for version locking. There are too many database compatibility issues, as well as the obvious issue of not supporting two operations in the same millisecond.
Timezones
Temporals become a lot more complex when you start to consider time zones, internationalization, eras, locales, daylight savings time, etc. In Java only Calendar supports time zones. Normally a Calendar is assumed to be in the local time zone, and is stored and retrieved from the database with that assumption. If you then read that same Calendar on another computer in another time zone, the question is whether you will have the same Calendar, or the Calendar of what the original time would have been in the new time zone. It depends on whether the Calendar is stored as the GMT time or the local time, and whether the time zone was stored in the database. Some databases support time zones, but most database types do not store the time zone. Oracle has two special types for timestamps with time zones, TIMESTAMPTZ (time zone is stored) and TIMESTAMPLTZ (local time zone is used). Some JPA providers may have extended support for storing Calendar objects and time zones.
TopLink, EclipseLink : Support the Oracle TIMESTAMPTZ and TIMESTAMPLTZ types using the @TypeConverter annotation and XML.
Forum Posts
Investigation of storing timezones in MySQL (http://www.nabble.com/MySQL's-datetime-and-time-zones--td21006801.html)
Enums
Java Enums (https://java.sun.com/java/5/docs/api/java/lang/Enum.html) are typically used as constants in an object model. For example an Employee may have a gender of enum type Gender (MALE, FEMALE). By default in JPA an attribute of type Enum will be stored as a Basic to the database, using the integer Enum values as codes (i.e. 0, 1). JPA also defines an @Enumerated (https://java.sun.com/javaee/5/docs/api/javax/persistence/Enumerated.html) annotation and <enumerated> element (on a <basic>) to define an Enum attribute. This can be used to store the Enum as the STRING value of its name (i.e. "MALE", "FEMALE"). For translating Enum types to values other than the integer or String name, such as character constants, see Translating Values.
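A sketch of the Gender example, storing the name instead of the ordinal:

public enum Gender { MALE, FEMALE }

@Entity
public class Employee {
    ...
    // Stored as "MALE" or "FEMALE" rather than 0 or 1.
    @Enumerated(EnumType.STRING)
    private Gender gender;
    ...
}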
the LOB in a separate table and class, and define a OneToOne to the LOB object instead of a Basic. If the entire LOB is never desired to be read, then it should not be mapped. It is best to use direct JDBC to access and stream the LOB in this case. It may be possible to map the LOB to a java.sql.Blob/java.sql.Clob in your object to avoid reading the entire LOB, but these require a live connection, so may have issues with detached objects.
Lazy Fetching
The fetch attribute can be set on a Basic mapping to use LAZY fetching. By default all Basic mappings are EAGER, which means the column is selected whenever the object is selected. By setting the fetch to LAZY, the column will not be selected with the object. If the attribute is accessed, then the attribute value will be selected in a separate database select. Support for LAZY is an optional feature of JPA, so some JPA providers may not support it. Typically support for lazy on basics will require some form of byte code weaving, or dynamic byte code generation, which may have issues in certain environments or JVMs, or may require preprocessing your application's persistence unit jar. Only attributes that are rarely accessed should be marked lazy, as accessing the attribute causes a separate database select, which can hurt performance. This is especially true if a large number of objects is queried. The original query will require one database select, but if each object's lazy attribute is accessed, this will require n database selects, which can be a major performance issue. Using lazy fetching on basics is similar to the concept of fetch groups. Lazy basics is basically support for a single default fetch group. Some JPA providers support fetch groups in general, which allow more sophisticated control over what attributes are fetched per query. TopLink, EclipseLink : Support lazy basics and fetch groups. Fetch groups can be configured through the EclipseLink API using the FetchGroup class.
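A sketch of marking a rarely accessed basic attribute as lazy (the attribute name is assumed; remember LAZY is only a hint):

// Only selected from the database if the attribute is actually accessed.
@Basic(fetch=FetchType.LAZY)
@Lob
private String resume;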
Optional
A Basic attribute can be optional if its value is allowed to be null. By default everything is assumed to be optional, except for an Id, which can not be optional. Optional is basically only a hint that applies to database schema generation, if the persistence provider is configured to generate the schema. It adds a NOT NULL constraint to the column if false. Some JPA providers also perform validation of the object for optional attributes, and will throw a validation error before writing to the database, but this is not required by the JPA specification. Optional is defined through the optional attribute of the Basic annotation or element.
Conversion
A common problem in storing values to the database is that the value desired in Java differs from the value used in the database. Common examples include using a boolean in Java and a 0, 1 or a 'T', 'F' in the database. Other examples are using a String in Java and a DATE in the database. One way to accomplish this is to translate the data through property get/set methods.

@Entity
public class Employee {
    ...
    private boolean isActive;
    ...
    @Transient
    public boolean getIsActive() {
        return isActive;
    }

    public void setIsActive(boolean isActive) {
        this.isActive = isActive;
    }

    @Basic
    private String getIsActiveValue() {
        if (isActive) {
            return "T";
        } else {
            return "F";
        }
    }

    private void setIsActiveValue(String isActive) {
        this.isActive = "T".equals(isActive);
    }
}

Also, for translating date/times, see Temporals. As well, some JPA providers have support for this.
TopLink, EclipseLink : Support translation using the @Convert, @Converter, @ObjectTypeConverter and @TypeConverter annotations and XML.
Custom Types
JPA defines support for most common database types, however some databases and JDBC drivers have additional types that may require additional support. Some custom database types include:
TIMESTAMPTZ, TIMESTAMPLTZ (Oracle)
TIMESTAMP WITH TIMEZONE (Postgres)
XMLTYPE (Oracle)
XML (DB2)
Array (VARRAY, Oracle)
BINARY_INTEGER, DEC, INT, NATURAL, NATURALN, BOOLEAN (Oracle)
POSITIVE, POSITIVEN, SIGNTYPE, PLS_INTEGER (Oracle)
RECORD, TABLE (Oracle)
SDO_GEOMETRY (Oracle)
LOBs (Oracle thin driver)
To handle persistence to custom database types either custom hooks are required in your JPA provider, or you need to mix raw JDBC code with your JPA objects. Some JPA providers provide custom support for many custom database types, and some also provide custom hooks for adding your own JDBC code to support a custom database type. TopLink, EclipseLink : Support several custom database types including TIMESTAMPTZ, TIMESTAMPLTZ, XMLTYPE, NCHAR, NVARCHAR, NCLOB, object-relational Struct and Array types, PLSQL types, SDO_GEOMETRY and LOBs.
Relationships
A relationship is a reference from one object to another. In Java, relationships are defined through object references (pointers) from a source object to the target object. Technically, in Java there is no difference between a relationship to another object and a "relationship" to a data attribute such as a String or Date (primitives are different), as both are pointers; however, logically and for the sake of persistence, data attributes are considered part of the object, and references to other persistent objects are considered relationships. In a relational database relationships are defined through foreign keys. The source row contains the primary key of the target row to define the relationship (and sometimes the inverse). A query must be performed to read the target objects of the relationship using the foreign key and primary key information. In Java, if a relationship is to a collection of other objects, a Collection or array type is used in Java to hold the contents of the relationship. In a relational database, collection relations are either defined by the target objects having a foreign key back to the source object's primary key, or by having an intermediate join table to store the relationship (both objects' primary keys). All relationships in Java and JPA are unidirectional, in that if a source object references a target object there is no guarantee that the target object also has a relationship to the source object. This is different than a relational database, in which relationships are defined through foreign keys and querying such that the inverse query always exists.
JPA defines four relationship mappings: OneToOne, ManyToOne, OneToMany and ManyToMany. These cover the majority of the types of relationships that exist in most object models. Each type of relationship also covers multiple different implementations; for example, a OneToMany allows either a join table or a foreign key in the target, and collection mappings allow both Collection types and Map types. There are also other possible complex relationship types, see Advanced Relationships.
Lazy Fetching
The cost of retrieving and building an object's relationships far exceeds the cost of selecting the object. This is especially true for relationships such as manager or managedEmployees, where selecting any employee could trigger the loading of every employee through the relationship hierarchy. Obviously this is a bad thing, and yet having relationships in objects is very desirable. The solution to this issue is lazy fetching (lazy loading). Lazy fetching allows the fetching of a relationship to be deferred until it is accessed. This is important not only to avoid the database access, but also to avoid the cost of building the objects if they are not needed. In JPA lazy fetching can be set on any relationship using the fetch attribute. The fetch can be set to either LAZY or EAGER as defined in the FetchType (https://java.sun.com/javaee/5/docs/api/javax/persistence/FetchType.html) enum. The default fetch type is LAZY for all relationships except for OneToOne and ManyToOne, but in general it is a good idea to make every relationship LAZY. The EAGER default for OneToOne and ManyToOne is for implementation reasons (lazy is more difficult to implement for them), not because it is a good idea. Technically in JPA LAZY is just a hint, and a JPA provider is not required to support it; however, in reality all of the main JPA providers support it, and they would be pretty useless if they did not.
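A sketch of making the manager and managedEmployees relationships lazy (the column name is illustrative):

@Entity
public class Employee {
    @Id
    private long id;

    // LAZY defers loading the manager until the attribute is accessed.
    @ManyToOne(fetch=FetchType.LAZY)
    @JoinColumn(name="MANAGER_ID")
    private Employee manager;

    // OneToMany is LAZY by default, but stating it makes the intent explicit.
    @OneToMany(mappedBy="manager", fetch=FetchType.LAZY)
    private List<Employee> managedEmployees;
}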
Magic
Lazy fetching normally involves some sort of magic in the JPA provider to transparently fault in the relationships as they are accessed. The typical magic for collection relationships is for the JPA provider to set the relationships to its own Collection, List, Set or Map implementation. When any (or most) method is accessed on this collection proxy, it loads the real collection and forwards the method. This is why JPA requires that all collection relationships use one of the collection interfaces (although some JPA providers support collection implementations too).
For OneToOne and ManyToOne relationships the magic normally involves some sort of byte code manipulation of the entity class, or creation of a subclass. This allows the access to the field or get/set methods to be intercepted, and for the relationship to be retrieved before allowing access to the value. Some JPA providers use different methods, such as wrapping the reference in a proxy object, although this can have issues with null values and primitive methods. To perform the byte code magic normally an agent or post-processor is required. Ensure that you correctly use your provider's agent or post-processor, otherwise lazy may not work. You may also notice additional variables when in a debugger, but in general debugging will still work as normal.
Basics
A Basic attribute can also be made LAZY, but this is normally a different mechanism than lazy relationships, and should normally be avoided unless the attribute is rarely accessed. See Basic Attributes : Lazy Fetching.
the cache or persistence context. Also, just because you want two collection relationships loaded does not mean you want them join fetched, which would result in a very inefficient join that returns n^2 data. Join fetching is something that JPA currently only provides through JPQL, which is normally the correct place for it, as each use case has different relationship requirements. Some JPA providers also provide a join fetch option at the mapping level to always join fetch a relationship, but this is normally not the same thing as EAGER. Join fetching is not normally the most efficient way to load a relationship anyway; batch reading a relationship is normally much more efficient when supported by your JPA provider. See Join Fetching and Batch Reading.
Cascading
Relationship mappings have a cascade option that allows the relationship to be cascaded for common operations. Cascade is normally used to model dependent relationships, such as Order -> OrderLine. Cascading the orderLines relationship allows the Order's OrderLines to be persisted, removed and merged along with their parent. The following operations can be cascaded, as defined in the CascadeType (https://java.sun.com/javaee/5/docs/api/javax/persistence/CascadeType.html) enum:
PERSIST - Cascades the EntityManager.persist() operation. If persist() is called on the parent, and the child is also new, it will also be persisted. If it is existing, nothing will occur, although calling persist() on an existing object will still cascade the persist operation to its dependents. If you persist an object that is related to a new object, and the relationship does not cascade persist, then an exception will occur. This may require that you first call persist on the related object before relating it to the parent. Although it may seem odd, it can be desirable to always cascade the persist operation: if a new object is related to another object, then it should probably be persisted. There is most likely not a major issue with always cascading persist on every relationship, although it may have an impact on performance. Calling persist on a related object is not required; on commit any related object whose relationship is cascade persist will automatically be persisted. The advantage of calling persist up front is that any generated ids will be assigned (unless using identity), and the prePersist event will be raised.
REMOVE - Cascades the EntityManager.remove() operation. If remove() is called on the parent then the child will also be removed. This should only be used for dependent relationships. Note that only the remove() operation is cascaded; if you remove a dependent object from a OneToMany collection it will not be deleted, JPA requires that you explicitly call remove() on it. Some JPA providers may support an option to have objects removed from a dependent collection deleted, and JPA 2.0 also defines an option for this.
MERGE - Cascades the EntityManager.merge() operation. If merge() is called on the parent, then the child will also be merged. This should normally be used for dependent relationships. Note that this only affects the cascading of the merge; the relationship reference itself will always be merged. This can be a major issue if you use transient variables to limit serialization; you may need to manually merge, or reset transient relationships in this case. Some JPA providers provide additional merge operations.
REFRESH - Cascades the EntityManager.refresh() operation. If refresh() is called on the parent then the child will also be refreshed. This should normally be used for dependent relationships. Be careful enabling this for all relationships, as it could cause changes made to other objects to be reset.
ALL - Cascades all of the above operations.
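A minimal sketch of the Order -> OrderLine example above, cascading all operations to the dependent order lines (class and column names are illustrative):

@Entity
@Table(name="ORDERS") // ORDER is a reserved word in SQL, so an explicit table name is used.
public class Order {
    @Id
    private long id;

    // persist, merge, remove and refresh on the Order cascade to its OrderLines.
    @OneToMany(mappedBy="order", cascade=CascadeType.ALL)
    private List<OrderLine> orderLines;
}

@Entity
public class OrderLine {
    @Id
    private long id;

    @ManyToOne
    private Order order;
}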
Target Entity
Relationship mappings have a targetEntity attribute that allows the reference class (target) of the relationship to be specified. This is normally not required, as it defaults from the field type, the get method return type, or the collection's generic type. It can be used if your field is of a public interface type, for example the field is of the interface type Address but the mapping needs to reference the implementation class AddressImpl. Another usage is if your field is of a superclass type, but you want to map the relationship to a subclass.
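For instance, a sketch of mapping an interface-typed field to an assumed implementation class AddressImpl (both types are illustrative):

@Entity
public class Employee {
    @Id
    private long id;

    // The field type is the Address interface, so the target entity is given explicitly.
    @OneToOne(targetEntity=AddressImpl.class)
    @JoinColumn(name="ADDRESS_ID")
    private Address address;
}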
Collections
Collection mappings include OneToMany, ManyToMany, and in JPA 2.0 ElementCollection. JPA requires that the type of the collection field or get/set methods be one of the Java collection interfaces, Collection, List, Set, or Map.
Collection Implementations
Your field should not be of a collection implementation type, such as ArrayList. Some JPA providers may support using collection implementation types, and many allow EAGER collection relationships to keep the implementation class. You can set any implementation as the instance value of the collection, but when an object is read from the database, if the relationship is LAZY the JPA provider will normally set the attribute to its own special lazy collection implementation.
Duplicates
A List in Java supports duplicate entries, and a Set does not. In the database, duplicates are generally not supported. Technically it could be possible if a JoinTable is used, but JPA does not require duplicates to be supported, and most providers do not. If you require duplicate support, you may need to create an object that represents and maps to the join table. This object would still require a unique Id, such as a GeneratedValue. See Mapping a Join Table with Additional Columns.
Ordering
JPA allows the collection values to be ordered by the database when retrieved. This is done through the @OrderBy (https://java.sun.com/javaee/5/docs/api/javax/persistence/OrderBy.html) annotation or <order-by> XML element. The value of the OrderBy is a JPQL ORDER BY string. It can be an attribute name, optionally followed by ASC or DESC for ascending or descending ordering. You can also use a path or nested attribute, or a "," to order by multiple attributes. If no OrderBy value is given it is assumed to be the Id of the target object. The OrderBy value must be a mapped attribute of the target object. If you want to have an ordered List you need to add an index attribute to your target object and an index column to its table. You will also have to ensure you set the index values. JPA 2.0 will have extended support for an ordered List using an OrderColumn. Note that using an OrderBy does not ensure the collection is ordered in memory; you are responsible for adding to the collection in the correct order. Java does define a SortedSet interface and TreeSet collection implementation that maintain an order. JPA does not specifically support SortedSet, but some JPA providers may allow you to use a SortedSet or TreeSet for your collection type, and maintain the correct ordering. By default these require your target object to implement the Comparable interface, or a Comparator to be set. You can also use the Collections.sort() method to sort a List when required. One option to sort in memory is to use property access and call Collections.sort() in your set and add methods.
Example of a collection order by annotation

@Entity
public class Employee {
    @Id
    private long id;
    ...
    @OneToMany
    @OrderBy("areaCode")
    private List<Phone> phones;
    ...
}

Example of a collection order by XML

<entity name="Employee" class="org.acme.Employee" access="FIELD">
    <attributes>
        <id name="id"/>
        <one-to-many name="phones">
            <order-by>areaCode</order-by>
        </one-to-many>
    </attributes>
</entity>
EMPLOYEE_PHONE (table)
EMPLOYEE_ID  PHONE_ID  INDEX
1            1         0
1            3         1
2            2         0
2            4         1
PHONE(table)
ID  AREACODE  NUMBER
1   613       792-7777
2   416       798-6666
3   613       792-9999
4   416       798-5555
Example of a collection order column annotation

@Entity
public class Employee {
    @Id
    private long id;
    ...
    @OneToMany
    @OrderColumn(name="INDEX")
    private List<Phone> phones;
    ...
}

Example of a collection order column XML

<entity name="Employee" class="org.acme.Employee" access="FIELD">
    <attributes>
        <id name="id"/>
        <one-to-many name="phones">
            <order-column name="INDEX"/>
        </one-to-many>
    </attributes>
</entity>
Common Problems
Object corruption, one side of the relationship is not updated after updating the other side
A common problem with bi-directional relationships is that the application updates one side of the relationship, but the other side does not get updated, and becomes out of synch. In JPA, as in Java in general, it is the responsibility of the application, or the object model, to maintain relationships. If your application adds to one side of a relationship, then it must add to the other side. This is commonly resolved through add or set methods in the object model that handle both sides of the relationships, so the application code does not need to worry about it. There are two ways to go about this: you can either add the relationship maintenance code to only one side of the relationship and only use the setter from that side (such as making the other side protected), or add it to both sides and ensure you avoid an infinite loop. For example:

public class Employee {
    private List phones;
    ...
    public void addPhone(Phone phone) {
        this.phones.add(phone);
        if (phone.getOwner() != this) {
            phone.setOwner(this);
        }
    }
    ...
}

public class Phone {
    private Employee owner;
    ...
    public void setOwner(Employee employee) {
        this.owner = employee;
        if (!employee.getPhones().contains(this)) {
            employee.getPhones().add(this);
        }
    }
    ...
}

The code is similar for bi-directional OneToOne and ManyToMany relationships. Some expect the JPA provider to have magic that automatically maintains relationships. This was actually part of the EJB CMP 2 specification. However, if the objects are detached or serialized to another VM, or new objects are related before being managed, or the object model is used outside the scope of JPA, then the magic is gone and the application is left figuring things out, so in general it may be better to add the code to the object model. However, some JPA providers do have support for automatically maintaining relationships. In some cases it is undesirable to instantiate a large collection when adding a child object. One solution is to not map the bi-directional relationship, and instead query for it as required. Also, some JPA providers optimize their lazy collection objects to handle this case, so you can still add to the collection without instantiating it.
Poor performance, excessive queries
The most common issue leading to poor performance is the usage of EAGER relationships. This requires the related objects to be read when the source objects are read. So, for example, reading the president of the company with an EAGER managedEmployees will cause every Employee in the company to be read. The solution is to always make all relationships LAZY. By default OneToMany and ManyToMany are LAZY but OneToOne and ManyToOne are not, so make sure you configure them to be. See Lazy Fetching. Sometimes you have LAZY configured but it does not work, see Lazy is not working.
Another common problem is the n+1 issue. For example, consider that you read all Employee objects then access their Address. Since each Address is accessed separately this will cause n+1 queries, which can be a major performance problem. This can be solved through Join Fetching and Batch Reading.
Lazy is not working
Lazy OneToOne and ManyToOne relationships typically require some form of weaving or byte-code generation. Normally when running in JSE an agent option is required to allow the byte-code weaving, so ensure you have the agent configured correctly. Some JPA providers perform dynamic subclass generation, so do not require an agent.
Example agent
java -javaagent:eclipselink.jar ...
Some JPA providers also provide static weaving instead of, or in addition to, dynamic weaving. For static weaving a preprocessor must be run on your JPA classes. When running in JEE lazy should normally work, as the class loader hook is required by the EJB specification. However, some JEE providers may not support this, so static weaving may be required. Also ensure that you are not accessing the relationship when you shouldn't be. For example, if you use property access and your set method accesses the related lazy value, this will cause it to be loaded. Either remove the set method side-effects, or use field access.
Broken relationships after serialization
If your relationship is marked as lazy and it has not been instantiated before the object is serialized, then it may not get serialized. This may cause an error, or return null, if it is accessed after deserialization. See Serialization, and Detaching.
Dependent object removed from OneToMany collection is not deleted
When you remove an object from a collection, if you also want the object deleted from the database you must call remove() on the object. In JPA 1.0 even if your relationship is cascade REMOVE, you still must call remove(); only the remove of the parent object is cascaded, not removal from the collection. JPA 2.0 will provide an option for having removes from the collection trigger deletion. Some JPA providers support an option for this in JPA 1.0. See Cascading.
My relationship target is an interface
If your relationship field's type is a public interface of your class, and it only has a single implementer, then this is simple to solve: you just need to set a targetEntity on your mapping. See Target Entity. If your interface has multiple implementers, then this is more complex. JPA does not directly support mapping interfaces. One solution is to convert the interface to an abstract class and use inheritance to map it. You could also keep the interface, create the abstract class, make sure each implementer extends it, and set the targetEntity to be the abstract class. Another solution is to define virtual attributes using get/set methods for each possible implementer, map these separately, and mark the interface get/set as transient. You could also not map the attribute, and instead query for it as required. See Variable and Heterogeneous Relationships. Some JPA providers have support for interfaces and variable relationships. TopLink, EclipseLink : Support variable relationships through their @VariableOneToOne annotation and XML. Mapping to and querying interfaces are also supported through their ClassDescriptor's InterfacePolicy API.
Advanced
Advanced Relationships
JPA 2.0 Relationship Enhancements
ElementCollection - A Collection or Map of Embeddable or Basic values.
Map Columns - A OneToMany, ManyToMany or ElementCollection can have a Basic, Embeddable or Entity key that is not part of the target object.
Order Columns - A OneToMany, ManyToMany or ElementCollection can now have an OrderColumn that defines the order of the collection when a List is used.
Unidirectional OneToMany - A OneToMany no longer requires the ManyToOne inverse relationship to be defined.
Maps
Java defines the Map interface to represent collections whose values are indexed by a key. There are several Map implementations, the most common being HashMap, but also Hashtable and TreeMap. JPA allows a Map to be used for any collection mapping, including OneToMany, ManyToMany and ElementCollection. JPA requires that the Map interface be used as the attribute type, although some JPA providers may also support using Map implementations. In JPA 1.0 the map key must be a mapped attribute of the collection values. The @MapKey (https://java.sun.com/javaee/5/docs/api/javax/persistence/MapKey.html) annotation or <map-key> XML element is used to define a map relationship. If the MapKey is not specified it defaults to the target object's Id.
Example of a map key relationship annotation

@Entity
public class Employee {
    @Id
    private long id;
    ...
    @OneToMany(mappedBy="owner")
    @MapKey(name="type")
    private Map<String, PhoneNumber> phoneNumbers;
    ...
}
@Entity
public class PhoneNumber {
    @Id
    private long id;
    @Basic
    private String type; // Either "home", "work", or "fax".
    ...
    @ManyToOne
    private Employee owner;
    ...
}

Example of a map key relationship XML

<entity name="Employee" class="org.acme.Employee" access="FIELD">
    <attributes>
        <id name="id"/>
        <one-to-many name="phoneNumbers" mapped-by="owner">
            <map-key name="type"/>
        </one-to-many>
    </attributes>
</entity>
<entity name="PhoneNumber" class="org.acme.PhoneNumber" access="FIELD">
    <attributes>
        <id name="id"/>
        <basic name="type"/>
        <many-to-one name="owner"/>
    </attributes>
</entity>
the element's table. The @MapKeyColumn (https://java.sun.com/javaee/6/docs/api/javax/persistence/MapKeyColumn.html) annotation or <map-key-column> XML element is used to define a map relationship where the key is a Basic value; the @MapKeyEnumerated (https://java.sun.com/javaee/6/docs/api/javax/persistence/MapKeyEnumerated.html) and @MapKeyTemporal (https://java.sun.com/javaee/6/docs/api/javax/persistence/MapKeyTemporal.html) annotations can also be used with this for Enum or Calendar types. The @MapKeyJoinColumn (https://java.sun.com/javaee/6/docs/api/javax/persistence/MapKeyJoinColumn.html) annotation or <map-key-join-column> XML element is used to define a map relationship where the key is an Entity value; the @MapKeyJoinColumns (https://java.sun.com/javaee/6/docs/api/javax/persistence/MapKeyJoinColumns.html) annotation can also be used with this for composite foreign keys. The @MapKeyClass (https://java.sun.com/javaee/6/docs/api/javax/persistence/MapKeyClass.html) annotation or <map-key-class> XML element can be used when the key is an Embeddable, or to specify the target class or type if generics are not used.
Example of a map key column relationship database
EMPLOYEE (table)
ID  FIRSTNAME  LASTNAME  SALARY
1   Bob        Way       50000
2   Sarah      Smith     60000
PHONE(table)
ID  OWNER_ID  PHONE_TYPE  AREACODE  NUMBER
1   1         home        613       792-7777
2   1         cell        613       798-6666
3   2         home        416       792-9999
4   2         fax         416       798-5555
Example of a map key column relationship annotation

@Entity
public class Employee {
    @Id
    private long id;
    ...
    @OneToMany(mappedBy="owner")
    @MapKeyColumn(name="PHONE_TYPE")
    private Map<String, Phone> phones;
    ...
}

@Entity
public class Phone {
    @Id
    private long id;
    ...
    @ManyToOne
    private Employee owner;
    ...
}

Example of a map key column relationship XML

<entity name="Employee" class="org.acme.Employee" access="FIELD">
    <attributes>
        <id name="id"/>
        <one-to-many name="phones" mapped-by="owner">
            <map-key-column name="PHONE_TYPE"/>
        </one-to-many>
    </attributes>
</entity>
<entity name="Phone" class="org.acme.Phone" access="FIELD">
    <attributes>
        <id name="id"/>
        <many-to-one name="owner"/>
    </attributes>
</entity>

Example of a map key join column relationship database
EMPLOYEE (table)
ID  FIRSTNAME  LASTNAME  SALARY
1   Bob        Way       50000
2   Sarah      Smith     60000
PHONE(table)
ID  OWNER_ID  PHONE_TYPE_ID  AREACODE  NUMBER
1   1         1              613       792-7777
2   1         2              613       798-6666
3   2         1              416       792-9999
4   2         3              416       798-5555
PHONETYPE(table)
Example of a map key join column relationship annotation

@Entity
public class Employee {
    @Id
    private long id;
    ...
    @OneToMany(mappedBy="owner")
    @MapKeyJoinColumn(name="PHONE_TYPE_ID")
    private Map<PhoneType, Phone> phones;
    ...
}

@Entity
public class Phone {
    @Id
    private long id;
    ...
    @ManyToOne
    private Employee owner;
    ...
}

@Entity
public class PhoneType {
    @Id
    private long id;
    ...
    @Basic
    private String type;
    ...
}

Example of a map key join column relationship XML

<entity name="Employee" class="org.acme.Employee" access="FIELD">
    <attributes>
        <id name="id"/>
        <one-to-many name="phones" mapped-by="owner">
            <map-key-join-column name="PHONE_TYPE_ID"/>
        </one-to-many>
    </attributes>
</entity>
<entity name="Phone" class="org.acme.Phone" access="FIELD">
    <attributes>
        <id name="id"/>
        <many-to-one name="owner"/>
    </attributes>
</entity>
<entity name="PhoneType" class="org.acme.PhoneType" access="FIELD">
    <attributes>
        <id name="id"/>
        <basic name="type"/>
    </attributes>
</entity>

Example of a map key class embedded relationship database
EMPLOYEE (table)
ID  FIRSTNAME  LASTNAME  SALARY
1   Bob        Way       50000
2   Sarah      Smith     60000
EMPLOYEE_PHONE (table)
EMPLOYEE_ID  PHONE_ID  TYPE
1            1         home
1            2         cell
2            3         home
2            4         fax
PHONE (table)
ID  AREACODE  NUMBER
1   613       792-7777
2   613       798-6666
3   416       792-9999
4   416       798-5555
Example of a map key class embedded relationship annotation

@Entity
public class Employee {
    @Id
    private long id;
    ...
    @OneToMany
    @MapKeyClass(PhoneType.class)
    private Map<PhoneType, Phone> phones;
    ...
}

@Entity
public class Phone {
    @Id
    private long id;
    ...
}

@Embeddable
public class PhoneType {
    @Basic
    private String type;
    ...
}

Example of a map key class embedded relationship XML

<entity name="Employee" class="org.acme.Employee" access="FIELD">
    <attributes>
        <id name="id"/>
        <one-to-many name="phones">
            <map-key-class>PhoneType</map-key-class>
        </one-to-many>
    </attributes>
</entity>
<entity name="Phone" class="org.acme.Phone" access="FIELD">
    <attributes>
        <id name="id"/>
        <many-to-one name="owner"/>
    </attributes>
</entity>
<embeddable name="PhoneType" class="org.acme.PhoneType" access="FIELD">
    <attributes>
        <basic name="type"/>
    </attributes>
</embeddable>
Join Fetching
Join fetching is a query optimization technique for reading multiple objects in a single database query. It involves joining the two objects' tables in SQL and selecting both objects' data. Join fetching is commonly used for OneToOne relationships, but can be used for any relationship including OneToMany and ManyToMany. Join fetching is one solution to the classic ORM n+1 performance problem. The issue is that if you select n Employee objects and access each of their addresses, in basic ORM (including JPA) you will get 1 database select for the Employee objects, and then n database selects, one for each Address object. Join fetching solves this issue by requiring only one select, selecting both the Employee and its Address. JPA supports join fetching through JPQL using the JOIN FETCH syntax.
Example of JPQL Join Fetch

SELECT emp FROM Employee emp JOIN FETCH emp.address

This causes both the Employee and Address data to be selected in a single query.
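A sketch of executing this JPQL through an EntityManager em, using the JPA 2.0 typed createQuery (variable names are illustrative):

// Reads each Employee and its Address in a single SQL query.
List<Employee> employees = em.createQuery(
        "SELECT emp FROM Employee emp JOIN FETCH emp.address", Employee.class)
    .getResultList();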
Outer Joins
Using the JPQL JOIN FETCH syntax a normal INNER join is performed. This has the side effect of filtering from the result set any Employee that does not have an address. An OUTER join in SQL is one that does not filter absent rows on the join, but instead joins a row of all null values. If your relationship allows null, or an empty collection for collection relationships, then you can use an OUTER join fetch; this is done in JPQL using the LEFT syntax. Note that OUTER joins can be less efficient on some databases, so avoid using an OUTER join if it is not required.
Example of JPQL Outer Join Fetch

SELECT emp FROM Employee emp LEFT JOIN FETCH emp.address
Nested Joins
JPA 1.0 does not allow nested join fetches in JPQL, although this may be supported by some JPA providers. You can join fetch multiple relationships, but not nested relationships.
Example of Multiple JPQL Join Fetch

SELECT emp FROM Employee emp LEFT JOIN FETCH emp.address LEFT JOIN FETCH emp.phoneNumbers
Batch Reading
Batch Reading is a query optimization technique for reading multiple related objects in a finite set of database queries. It involves executing the query for the root objects as normal. But for the related objects the original query is joined with the query for the related objects, allowing all of the related objects to be read in a single database query. Batch Reading can be used for any type of relationship. Batch Reading is one solution to the classic ORM n+1 performance problem. The issue is if you select n Employee objects, and access each of their addresses, in basic ORM (including JPA) you will get 1 database select for the Employee objects, and then n database selects, one for each Address object. Batch Reading solves this issue by requiring one select for the Employee objects and one select for the Address objects. Batch reading is more optimal for reading collection relationships and multiple relationships as it does not require selecting duplicate data as in join fetching. JPA does not support batch reading, but some JPA providers do. TopLink, EclipseLink : Support a "eclipselink.batch" query hint to enable batch reading. Batch reading can also be configured on a relationship mapping using the API.
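A sketch of the EclipseLink hint mentioned above; the exact hint value syntax is provider specific, so check your provider's documentation before relying on it:

// One query for the employees, plus one batched query for all of their addresses.
List<Employee> employees = em.createQuery(
        "SELECT emp FROM Employee emp", Employee.class)
    .setHint("eclipselink.batch", "emp.address")
    .getResultList();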
Example nested collection model (original)

public class Employee {
    private long id;
    private Map<String, List<Project>> projects;
}

Example nested collection model (modified)

public class Employee {
    @Id
    @GeneratedValue
    private long id;
    ...
    @OneToMany(mappedBy="employee")
    @MapKey(name="type")
    private Map<String, ProjectType> projects;
}

public class ProjectType {
    @Id
    @GeneratedValue
    private long id;
    @ManyToOne
    private Employee employee;
    @Column(name="PROJ_TYPE")
    private String type;
    @ManyToMany
    private List<Project> projects;
}

Example nested collection database
EMPLOYEE (table)
ID  FIRSTNAME  LASTNAME  SALARY
1   Bob        Way       50000
2   Sarah      Smith     60000
PROJECTTYPE (table)
PROJECTTYPE_PROJECT (table)
PROJECTTYPE_ID  PROJECT_ID
1               1
1               2
2               3
3               4
OneToOne
A OneToOne relationship in Java is where the source object has an attribute that references another target object and, if that target object had the inverse relationship back to the source object, it would also be a OneToOne relationship. All relationships in Java and JPA are unidirectional, in that if a source object references a target object there is no guarantee that the target object also has a relationship to the source object. This is different from a relational database, in which relationships are defined through foreign keys and querying such that the inverse query always exists. JPA also defines a ManyToOne relationship, which is similar to a OneToOne relationship except that the inverse relationship (if it were defined) is a OneToMany relationship. The main difference between a OneToOne and a ManyToOne relationship in JPA is that a ManyToOne always contains a foreign key from the source object's table to the target object's table, whereas in a OneToOne relationship the foreign key may either be in the source object's table or the target object's table. If the foreign key is in the target object's table JPA requires that the relationship be bi-directional (it must be defined in both objects), and the source object must use the mappedBy attribute to define the mapping. In JPA a OneToOne relationship is defined through the @OneToOne (https://java.sun.com/javaee/5/docs/api/javax/persistence/OneToOne.html) annotation or the <one-to-one> element. A OneToOne relationship typically requires a @JoinColumn (https://java.sun.com/javaee/5/docs/api/javax/persistence/JoinColumn.html), or @JoinColumns (https://java.sun.com/javaee/5/docs/api/javax/persistence/JoinColumns.html) if the target has a composite primary key.
Example of a OneToOne relationship database
EMPLOYEE (table)
EMP_ID  FIRSTNAME  LASTNAME  SALARY  ADDRESS_ID
1       Bob        Way       50000   6
2       Sarah      Smith     60000   7
ADDRESS (table)
ADDRESS_ID  STREET      CITY     PROVINCE  COUNTRY  P_CODE
6           17 Bank St  Ottawa   ON        Canada   K2H7Z5
7           22 Main St  Toronto  ON        Canada   L5H2D5
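A sketch of how the ADDRESS_ID foreign key in the EMPLOYEE table above might be mapped (field names are illustrative):

@Entity
public class Employee {
    @Id
    @Column(name="EMP_ID")
    private long id;

    // The foreign key is in the EMPLOYEE (source) table.
    @OneToOne(fetch=FetchType.LAZY)
    @JoinColumn(name="ADDRESS_ID")
    private Address address;
}

@Entity
public class Address {
    @Id
    @Column(name="ADDRESS_ID")
    private long id;
}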
See Also
Relationships, Cascading, Lazy Fetching, Target Entity, Join Fetching, Batch Reading, Common Problems, ManyToOne
Common Problems
Foreign key is also part of the primary key. See Primary Keys through OneToOne Relationships.
Foreign key is also mapped as a basic. If you use the same field in two different mappings, you typically need to make one of them read-only using insertable, updatable = false. See Target Foreign Keys, Primary Key Join Columns, Cascade Primary Keys.
Constraint error on insert. This typically occurs because you have incorrectly mapped the foreign key in a OneToOne relationship. See Target Foreign Keys, Primary Key Join Columns, Cascade Primary Keys. It can also occur if your JPA provider does not support referential integrity, or does not resolve bi-directional constraints. In this case you may either need to remove the constraint, or use EntityManager flush() to control the order your objects are written in.
Foreign key value is null. Ensure you set the value of the object's OneToOne. If the OneToOne is part of a bi-directional OneToOne relationship, ensure you set the OneToOne in both objects; JPA does not maintain bi-directional relationships for you. Also check that you defined the JoinColumn correctly, and ensure you did not set insertable, updatable = false or use a PrimaryKeyJoinColumn or mappedBy.
Advanced
Target Foreign Keys, Primary Key Join Columns, Cascade Primary Keys
If a OneToOne relationship uses a target foreign key (the foreign key is in the target table, not the source table), then JPA requires that you define a OneToOne mapping in both directions, and that the target foreign key mapping use the mappedBy attribute. The reason for this is that the mapping in the source object only affects the row JPA writes to the source table; if the foreign key is in the target table, JPA has no easy way to write this field. There are other ways around this problem, however. In JPA the JoinColumn defines insertable and updatable attributes; these can be used to instruct the JPA provider that the foreign key is actually in the target object's table. With these set JPA will not write anything to the source table, and most JPA providers will also infer that the foreign key constraint is in the target table to preserve referential integrity on insertion. JPA also defines the @PrimaryKeyJoinColumn (https://java.sun.com/javaee/5/docs/api/javax/persistence/PrimaryKeyJoinColumn.html) annotation that can be used to define the same thing. You still must map the foreign key in the target object in some fashion, but you could just use a Basic mapping to do this. Some JPA providers may support an option for a unidirectional OneToOne mapping for target foreign keys. Target foreign keys can be tricky to understand, so you might want to read this section twice. They can get even more complex though. If you have a data model that cascades primary keys then you can end up with a single OneToOne that has a logical foreign key, but with some fields in it that are logically target foreign keys. For example, consider Company, Department, Employee. Company's id is COM_ID, Department's id is a composite primary key of COM_ID and DEP_ID, and Employee's id is a composite primary key of COM_ID, DEP_ID, and EMP_ID. So for an Employee its relationship to company uses a normal ManyToOne with a foreign key, but its relationship to department uses a ManyToOne with a foreign key where the COM_ID uses insertable, updatable = false or PrimaryKeyJoinColumn, because it is actually mapped through the company relationship. The Employee's relationship to its address then uses a normal foreign key for ADD_ID but a target foreign key for COM_ID, DEP_ID, and EMP_ID. This may work in some JPA providers; others may require different configuration, or may not support this type of data model.
Example of cascaded primary keys database
COMPANY (table)
COM_ID  NAME
1       ACME
2       Wikimedia
DEPARTMENT(table)
COM_ID  DEP_ID  NAME
1       1       Billing
1       2       Research
2       1       Accounting
2       2       Research
EMPLOYEE (table)
COM_ID  DEP_ID  EMP_ID  NAME       MNG_ID  ADD_ID
1       1       1       Bob Way    null    1
1       1       2       Joe Smith  1       2
1       2       1       Sarah Way  null    1
1       2       2       John Doe   1       2
2       1       1       Jane Doe   null    1
2       2       1                          1
ADDRESS(table)
COM_ID  DEP_ID  ADD_ID  ADDRESS
1       1       1       17 Bank, Ottawa, ONT
1       1       2       22 Main, Ottawa, ONT
1       2       1       255 Main, Toronto, ONT
1       2       2       12 Main, Winnipeg, MAN
2       1       1       72 Riverside, Winnipeg, MAN
2       2       1       82 Riverside, Winnipeg, MAN
Example of cascaded primary keys and mixed OneToOne and ManyToOne mapping annotations
@Entity
@IdClass(EmployeeId.class)
public class Employee {
    @Id
    @Column(name="EMP_ID")
    private long employeeId;
    @Id
    @Column(name="DEP_ID", insertable=false, updatable=false)
    private long departmentId;
    @Id
    @Column(name="COM_ID", insertable=false, updatable=false)
    private long companyId;
    ...
    @ManyToOne(fetch=FetchType.LAZY)
    @JoinColumn(name="COM_ID")
    private Company company;

    @ManyToOne(fetch=FetchType.LAZY)
    @JoinColumns({
        @JoinColumn(name="DEP_ID"),
        @JoinColumn(name="COM_ID", insertable=false, updatable=false)
    })
    private Department department;

    @ManyToOne(fetch=FetchType.LAZY)
    @JoinColumns({
        @JoinColumn(name="MNG_ID"),
        @JoinColumn(name="DEP_ID", insertable=false, updatable=false),
        @JoinColumn(name="COM_ID", insertable=false, updatable=false)
    })
    private Employee manager;

    @OneToOne(fetch=FetchType.LAZY)
    @JoinColumns({
        @JoinColumn(name="ADD_ID"),
        @JoinColumn(name="DEP_ID", insertable=false, updatable=false),
        @JoinColumn(name="COM_ID", insertable=false, updatable=false)
    })
    private Address address;
    ...
}
Example of cascaded primary keys and mixed OneToOne and ManyToOne mapping annotations using PrimaryKeyJoinColumn
@Entity
@IdClass(EmployeeId.class)
public class Employee {
    @Id
    @Column(name="EMP_ID")
    private long employeeId;
    @Id
    @Column(name="DEP_ID", insertable=false, updatable=false)
    private long departmentId;
    @Id
    @Column(name="COM_ID", insertable=false, updatable=false)
    private long companyId;
    ...
    @ManyToOne(fetch=FetchType.LAZY)
    @JoinColumn(name="COM_ID")
    private Company company;

    @ManyToOne(fetch=FetchType.LAZY)
    @JoinColumn(name="DEP_ID")
    @PrimaryKeyJoinColumn(name="COM_ID")
    private Department department;

    @ManyToOne(fetch=FetchType.LAZY)
    @JoinColumn(name="MNG_ID")
    @PrimaryKeyJoinColumns({
        @PrimaryKeyJoinColumn(name="DEP_ID"),
        @PrimaryKeyJoinColumn(name="COM_ID")
    })
    private Employee manager;

    @OneToOne(fetch=FetchType.LAZY)
    @JoinColumn(name="ADD_ID")
    @PrimaryKeyJoinColumns({
        @PrimaryKeyJoinColumn(name="DEP_ID"),
        @PrimaryKeyJoinColumn(name="COM_ID")
    })
    private Address address;
    ...
}
JPA defines a join table using the @JoinTable (https://java.sun.com/javaee/5/docs/api/javax/persistence/JoinTable.html) annotation and <join-table> XML element. A JoinTable can be used on ManyToMany or OneToMany mappings, but the JPA 1.0 specification is vague on whether it can be used on a OneToOne. The JoinTable documentation does not state that it can be used in a OneToOne, but the XML schema for <one-to-one> does allow a nested <join-table> element. Some JPA providers may support this, and others may not. If your JPA provider does not support this, you can work around the issue by instead defining a OneToMany or ManyToMany relationship and just defining a get/set method that returns/sets the first element of the collection.
Example of a OneToOne using a JoinTable database
EMPLOYEE (table)
EMP_ID  FIRSTNAME  LASTNAME  SALARY
1       Bob        Way       50000
2       Sarah      Smith     60000
EMP_ADD(table)
EMP_ID  ADDR_ID
1       6
2       7
ADDRESS (table)
ADDRESS_ID  STREET      CITY     PROVINCE  COUNTRY  P_CODE
6           17 Bank St  Ottawa   ON        Canada   K2H7Z5
7           22 Main St  Toronto  ON        Canada   L5H2D5
    private List<Address> addresses;
    ...
    public Address getAddress() {
        if (this.addresses.isEmpty()) {
            return null;
        }
        return this.addresses.get(0);
    }

    public void setAddress(Address address) {
        if (this.addresses.isEmpty()) {
            this.addresses.add(address);
        } else {
            this.addresses.set(0, address);
        }
    }
    ...
ManyToOne
A ManyToOne relationship in Java is where the source object has an attribute that references another target object and, if that target object had the inverse relationship back to the source object, it would be a OneToMany relationship. All relationships in Java and JPA are unidirectional, in that if a source object references a target object there is no guarantee that the target object also has a relationship to the source object. This is different from a relational database, in which relationships are defined through foreign keys and querying such that the inverse query always exists.
JPA also defines a OneToOne relationship, which is similar to a ManyToOne relationship except that the inverse relationship (if it were defined) is a OneToOne relationship. The main difference between a OneToOne and a ManyToOne relationship in JPA is that a ManyToOne always contains a foreign key from the source object's table to the target object's table, whereas in a OneToOne relationship the foreign key may either be in the source object's table or the target object's table. In JPA a ManyToOne relationship is defined through the @ManyToOne (https://java.sun.com/javaee/5/docs/api/javax/persistence/ManyToOne.html) annotation or the <many-to-one> element. In JPA a ManyToOne is (almost) always required in order to define a OneToMany relationship: the ManyToOne always defines the foreign key (JoinColumn), and the OneToMany must use mappedBy to refer to its inverse ManyToOne.
Example of a ManyToOne relationship database
EMPLOYEE (table)
EMP_ID  FIRSTNAME  LASTNAME  SALARY  MANAGER_ID
1       Bob        Way       50000   2
2       Sarah      Smith     75000   null
PHONE (table)
ID  TYPE  AREA_CODE  P_NUMBER  OWNER_ID
1   home  613        792-0000  1
2   work  613        896-1234  1
3   work  416        123-4444  2
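A sketch of how the OWNER_ID and MANAGER_ID foreign keys above might be mapped (field names are illustrative):

@Entity
public class Phone {
    @Id
    private long id;

    // OWNER_ID in the PHONE table references the owning Employee.
    @ManyToOne(fetch=FetchType.LAZY)
    @JoinColumn(name="OWNER_ID")
    private Employee owner;
}

@Entity
public class Employee {
    @Id
    @Column(name="EMP_ID")
    private long id;

    // A self-referencing ManyToOne for the MANAGER_ID column.
    @ManyToOne(fetch=FetchType.LAZY)
    @JoinColumn(name="MANAGER_ID")
    private Employee manager;
}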
See Also
Relationships, Cascading, Lazy Fetching, Target Entity, Join Fetching, Batch Reading, Common Problems, OneToOne, Mapping a OneToOne Using a Join Table, OneToMany
Common Problems
Foreign key is also part of the primary key. See Primary Keys through OneToOne Relationships.
Foreign key is also mapped as a basic. If you use the same field in two different mappings, you typically need to make one of them read-only using insertable, updatable = false. See Target Foreign Keys, Primary Key Join Columns, Cascade Primary Keys.
Constraint error on insert. This typically occurs because you have incorrectly mapped the foreign key. See Target Foreign Keys, Primary Key Join Columns, Cascade Primary Keys. It can also occur if your JPA provider does not support referential integrity, or does not resolve bi-directional constraints. In this case you may either need to remove the constraint, or use EntityManager flush() to control the order your objects are written in.
Foreign key value is null. Ensure you set the value of the object's ManyToOne. If the ManyToOne is part of a bi-directional OneToMany relationship, ensure you set the object's ManyToOne when adding the object to the OneToMany; JPA does not maintain bi-directional relationships for you. Also check that you defined the JoinColumn correctly, and ensure you did not set insertable, updatable = false or use a PrimaryKeyJoinColumn.
Advanced
Target Foreign Keys, Primary Key Join Columns, Cascade Primary Keys
In complex data models it may be required to use a target foreign key, or read-only JoinColumn in mapping a ManyToOne if the foreign key/JoinColumn is shared with other ManyToOne or Basic mappings. See, Target Foreign Keys, Primary Key Join Columns, Cascade Primary Keys
OneToMany
A OneToMany relationship in Java is where the source object has an attribute that stores a collection of target objects and, if those target objects had the inverse relationship back to the source object, it would be a ManyToOne relationship. All relationships in Java and JPA are unidirectional, in that if a source object references a target object there is no guarantee that the target object also has a relationship to the source object. This is different from a relational database, in which relationships are defined through foreign keys and querying such that the inverse query always exists. JPA also defines a ManyToMany relationship, which is similar to a OneToMany relationship except that the inverse relationship (if it were defined) is a ManyToMany relationship. The main difference between a OneToMany and a ManyToMany relationship in JPA is that a ManyToMany always makes use of an intermediate relational join table to store the relationship, whereas a OneToMany can either use a join table, or a foreign key in the target object's table referencing the source object table's primary key. If the OneToMany uses a foreign key in the target object's table JPA requires that the relationship be bi-directional (the inverse ManyToOne relationship must be defined in the target object), and the source object must use the mappedBy attribute to define the mapping. In JPA a OneToMany relationship is defined through the @OneToMany (https://java.sun.com/javaee/5/docs/api/javax/persistence/OneToMany.html) annotation or the <one-to-many> element.
PHONE (table)
ID  TYPE  AREA_CODE  P_NUMBER  OWNER_ID
1   home  613        792-0000  1
2   work  613        896-1234  1
3   work  416        123-4444  2
        <id name="id"/>
        <many-to-one name="owner" fetch="LAZY">
            <join-column name="OWNER_ID"/>
        </many-to-one>
    </attributes>
</entity>

Note this @OneToMany mapping requires an inverse @ManyToOne mapping to be complete, see ManyToOne.
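For reference, a sketch of the equivalent annotation form of this bi-directional pair (field names are illustrative):

@Entity
public class Employee {
    @Id
    private long id;

    // The OneToMany side must use mappedBy and owns no columns of its own.
    @OneToMany(mappedBy="owner")
    private List<Phone> phones;
}

@Entity
public class Phone {
    @Id
    private long id;

    // The ManyToOne side owns the OWNER_ID foreign key.
    @ManyToOne(fetch=FetchType.LAZY)
    @JoinColumn(name="OWNER_ID")
    private Employee owner;
}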
However, some JPA providers do have support for automatically maintaining relationships. In some cases it is undesirable to instantiate a large collection when adding a child object. One solution is to not map the bi-directional relationship, and instead query for it as required. Also, some JPA providers optimize their lazy collection objects to handle this case, so you can still add to the collection without instantiating it.
Join Table
A common mismatch between objects and relational tables is that a OneToMany does not require a back reference in Java, but requires a back reference foreign key in the database. Normally it is best to define the ManyToOne back reference in Java; if you cannot or don't want to do this, then you can use an intermediate join table to store the relationship. This is similar to a ManyToMany relationship, but if you add a unique constraint to the target foreign key you can enforce that it is a OneToMany. JPA defines a join table using the @JoinTable (https://java.sun.com/javaee/5/docs/api/javax/persistence/JoinTable.html) annotation and <join-table> XML element. A JoinTable can be used on ManyToMany or OneToMany mappings. See also, Unidirectional OneToMany.
EMP_PHONE (table)
EMP_ID  PHONE_ID
1       1
1       2
2       3
PHONE (table)
ID  TYPE  AREA_CODE  P_NUMBER
1   home  613        792-0000
2   work  613        896-1234
3   work  416        123-4444
    @JoinTable(
        name="EMP_PHONE",
        joinColumns={ @JoinColumn(name="EMP_ID", referencedColumnName="EMP_ID") },
        inverseJoinColumns={ @JoinColumn(name="PHONE_ID", referencedColumnName="PHONE_ID") }
    )
    private List<Phone> phones;
    ...
}
See Also
Relationships, Cascading, Lazy Fetching, Target Entity, Collections, Maps, Join Fetching, Batch Reading, Common Problems, ManyToOne, ManyToMany
Common Problems
Object not in collection after refresh. See Object corruption.
Advanced
Unidirectional OneToMany, No Inverse ManyToOne, No Join Table (JPA 2.0)
JPA 1.0 does not support a unidirectional OneToMany relationship without a JoinTable. JPA 2.0 will have support for a unidirectional OneToMany: in JPA 2.0 a @JoinColumn (https://java.sun.com/javaee/5/docs/api/javax/persistence/JoinColumn.html) can be used on a OneToMany to define the foreign key, and some JPA providers may support this already. The main issue with a unidirectional OneToMany is that the foreign key is owned by the target object's table, so if the target object has no knowledge of this foreign key, inserting and updating the value is difficult. In a unidirectional OneToMany the source object takes ownership of the foreign key field and is responsible for updating its value. The target object in a unidirectional OneToMany is an independent object, so it should not rely on the foreign key in any way, i.e. the foreign key cannot be part of its primary key, nor generally have a not null constraint on it. You can model a collection of objects where the target has no foreign key mapped, but uses it as its primary key, or has no primary key, using an Embeddable collection mapping, see Embeddable Collections. If your JPA provider does not support unidirectional OneToMany relationships, then you will need to either add a back reference ManyToOne or a JoinTable. In general it is best to use a JoinTable if you truly want to model a unidirectional OneToMany on the database. There are some creative workarounds to defining a unidirectional OneToMany. One is to map it using a JoinTable, but make the target table the JoinTable. This will cause an extra join, but works for the most part for reads; writes of course will not work correctly, so this is only a read-only solution, and a hacky one at that.
PHONE (table)
ID  TYPE  AREA_CODE  P_NUMBER  OWNER_ID
1   home  613        792-0000  1
2   work  613        896-1234  1
3   work  416        123-4444  2
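A sketch of the JPA 2.0 unidirectional form for this table, where the Employee owns the OWNER_ID foreign key in the PHONE table and no owner attribute is mapped in Phone (field names are illustrative):

@Entity
public class Employee {
    @Id
    private long id;

    // JPA 2.0: the JoinColumn names a column in the PHONE (target) table.
    @OneToMany
    @JoinColumn(name="OWNER_ID")
    private List<Phone> phones;
}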
ManyToMany
A ManyToMany relationship in Java is where the source object has an attribute that stores a collection of target objects and, if those target objects had the inverse relationship back to the source object, it would also be a ManyToMany relationship. All relationships in Java and JPA are unidirectional, in that if a source object references a target object there is no guarantee that the target object also has a relationship to the source object. This is different from a relational database, in which relationships are defined through foreign keys and querying such that the inverse query always exists. JPA also defines a OneToMany relationship, which is similar to a ManyToMany relationship except that the inverse relationship (if it were defined) is a ManyToOne relationship. The main difference between a OneToMany and a ManyToMany relationship in JPA is that a ManyToMany always makes use of an intermediate relational join table to store the relationship, whereas a OneToMany can either use a join table, or a foreign key in the target object's table referencing the source object table's primary key. In JPA a ManyToMany relationship is defined through the @ManyToMany (https://java.sun.com/javaee/5/docs/api/javax/persistence/ManyToMany.html) annotation or the <many-to-many> element. All ManyToMany relationships require a JoinTable. The JoinTable is defined using the @JoinTable (https://java.sun.com/javaee/5/docs/api/javax/persistence/JoinTable.html) annotation and <join-table> XML element. The JoinTable defines a foreign key to the source object's primary key (joinColumns), and a foreign key to the target object's primary key (inverseJoinColumns). Normally the primary key of the JoinTable is the combination of both foreign keys.
EMP_PROJ (table)
EMP_ID  PROJ_ID
1       1
1       2
2       1
PROJECT (table)
PROJ_ID  NAME
1        GIS
2        SIG
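A sketch of mapping the EMP_PROJ join table above (column names follow the tables; mapping the inverse side is optional):

@Entity
public class Employee {
    @Id
    @Column(name="EMP_ID")
    private long id;

    @ManyToMany
    @JoinTable(
        name="EMP_PROJ",
        joinColumns=@JoinColumn(name="EMP_ID"),
        inverseJoinColumns=@JoinColumn(name="PROJ_ID"))
    private List<Project> projects;
}

@Entity
public class Project {
    @Id
    @Column(name="PROJ_ID")
    private long id;

    // The inverse side reuses the same join table through mappedBy.
    @ManyToMany(mappedBy="projects")
    private List<Employee> employees;
}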
See Also
Relationships, Cascading, Lazy Fetching, Target Entity, Collections, Maps, Join Fetching, Batch Reading
Common Problems
Object not in collection after refresh. If you have a bi-directional ManyToMany relationship, ensure that you add to both sides of the relationship. See Object corruption.
Additional columns in join table. See Mapping a Join Table with Additional Columns.
Duplicate rows inserted into the join table. If you have a bi-directional ManyToMany relationship, you must use mappedBy on one side of the relationship, otherwise it will be assumed to be two different relationships and you will get duplicate rows inserted into the join table.
Advanced
Mapping a Join Table with Additional Columns
A frequent problem is that two classes have a ManyToMany relationship, but the relational join table has additional data. For example, if Employee has a ManyToMany with Project but the PROJ_EMP join table also has an IS_PROJECT_LEAD column. In this case the best solution is to create a class that models the join table. So a ProjectAssociation class would be created. It would have a ManyToOne to Employee and Project, and attributes for the additional data. Employee and Project would have a OneToMany to the ProjectAssociation. Some JPA providers also provide additional support for mapping to join tables with additional data. Unfortunately mapping this type of model becomes more complicated in JPA because it requires a composite primary key. The association object's Id is composed of the Employee and Project ids. The JPA spec does not allow an Id to be used on a ManyToOne, so the association class must have two duplicate attributes to also store the ids, and use an IdClass; these duplicate attributes must be kept in synch with the ManyToOne attributes. Some JPA providers may allow a ManyToOne to be part of an Id, so this may be simpler with some JPA providers. To make your life simpler, I would recommend adding a generated Id attribute to the association class. This will give the object a simpler Id and not require duplicating the Employee and Project ids. This same pattern can be used no matter what the additional data in the join table is. Another usage is if you have a Map relationship between two objects, with a third unrelated object or data representing the Map key. The JPA spec requires that the Map key be an attribute of the Map value, so the association object pattern can be used to model the relationship. If the additional data in the join table is only required on the database and not used in Java, such as auditing information, it may also be possible to use database triggers to automatically set the data.
PROJ_EMP (table)
EMPLOYEEID  PROJECTID  IS_PROJECT_LEAD
1           1          true
1           2          false
2           1          false
PROJECT (table)
ID  NAME
1   GIS
2   SIG
        this.employees.add(association);
        // Also add the association object to the employee.
        employee.getProjects().add(association);
    }
}

@Entity
@Table(name="PROJ_EMP")
@IdClass(ProjectAssociationId.class)
public class ProjectAssociation {
    @Id
    private long employeeId;
    @Id
    private long projectId;
    @Column(name="IS_PROJECT_LEAD")
    private boolean isProjectLead;

    @ManyToOne
    @PrimaryKeyJoinColumn(name="EMPLOYEEID", referencedColumnName="ID")
    private Employee employee;

    @ManyToOne
    @PrimaryKeyJoinColumn(name="PROJECTID", referencedColumnName="ID")
    private Project project;
    ...
}

public class ProjectAssociationId {
    private long employeeId;
    private long projectId;
    ...
}
ElementCollection
JPA 2.0 defines an ElementCollection mapping. It is meant to handle several non-standard relationship mappings. An ElementCollection can be used to define a one-to-many relationship to an Embeddable object, or a Basic value (such as a collection of Strings). An ElementCollection can also be used in combination with a Map to define relationships where the key can be any type of object, and the value is an Embeddable object or a Basic value. In JPA an ElementCollection relationship is defined through the @ElementCollection (https://java.sun.com/javaee/6/docs/api/javax/persistence/ElementCollection.html) annotation or the <element-collection> element. The ElementCollection values are always stored in a separate table. The table is defined through the @CollectionTable (https://java.sun.com/javaee/6/docs/api/javax/persistence/CollectionTable.html) annotation or the <collection-table> element. The CollectionTable defines the table's name and its @JoinColumn (https://java.sun.com/javaee/5/docs/api/javax/persistence/JoinColumn.html), or @JoinColumns (https://java.sun.com/javaee/5/docs/api/javax/persistence/JoinColumns.html) if the source has a composite primary key.
Embedded Collections
An ElementCollection mapping can be used to define a collection of Embeddable objects. This is not a typical usage of Embeddable objects, as the objects are not embedded in the source object's table, but stored in a separate collection table. This is similar to a OneToMany, except the target object is an Embeddable instead of an Entity. This allows collections of simple objects to be easily defined, without requiring the simple objects to define an Id or a ManyToOne inverse mapping. An ElementCollection can also override the mappings, or table, for its collection, so you can have multiple entities reference the same Embeddable class, but have each store their dependent objects in a separate table. The limitations of using an ElementCollection instead of a OneToMany are that the target objects cannot be queried, persisted or merged independently of their parent object. They are strictly privately-owned (dependent) objects, the same as an Embedded mapping. There is no cascade option on an ElementCollection; the target objects are always persisted, merged and removed with their parent. An ElementCollection can still use a fetch type, and defaults to LAZY, the same as other collection mappings.
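A sketch of an embedded collection mapping matching the PHONE table below, assuming an Employee entity and a Phone embeddable (the class and attribute names are illustrative):

import java.util.ArrayList;
import java.util.List;
import javax.persistence.*;

// Phone.java
@Embeddable
public class Phone {
    @Column(name="TYPE")
    private String type;
    @Column(name="AREA_CODE")
    private String areaCode;
    @Column(name="P_NUMBER")
    private String number;
    // getters and setters omitted
}

// Employee.java
@Entity
public class Employee {
    @Id
    @GeneratedValue
    private long id;

    // The collection of Embeddables is stored in the separate PHONE collection table.
    @ElementCollection
    @CollectionTable(name="PHONE", joinColumns=@JoinColumn(name="OWNER_ID"))
    private List<Phone> phones = new ArrayList<Phone>();
    // getters and setters omitted
}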
PHONE (table)
OWNER_ID  TYPE  AREA_CODE  P_NUMBER
1         home  613        792-0001
1         work  613        494-1234
2         work  416        892-0005
Basic Collections
An ElementCollection mapping can be used to define a collection of Basic objects. The Basic values are stored in a separate collection table. This is similar to a OneToMany, except the target is a Basic value instead of an Entity. This allows collections of simple values to be easily defined, without requiring a class to be defined for the value. There is no cascade option on an ElementCollection; the target objects are always persisted, merged and removed with their parent. An ElementCollection can still use a fetch type, and defaults to LAZY, the same as other collection mappings.
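A sketch of a basic collection of Strings, assuming an Employee entity whose phone numbers are stored in a PHONE collection table (the table and column names are assumptions):

import java.util.HashSet;
import java.util.Set;
import javax.persistence.*;

@Entity
public class Employee {
    @Id
    @GeneratedValue
    private long id;

    // Each String is stored as a row in the PHONE table, keyed by the owner's id.
    @ElementCollection
    @Column(name="PHONE_NUMBER")
    @CollectionTable(name="PHONE", joinColumns=@JoinColumn(name="OWNER_ID"))
    private Set<String> phoneNumbers = new HashSet<String>();
    // getters and setters omitted
}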
See Also
Relationships
Lazy Fetching
Target Entity
Collections
Maps
Join Fetching
Batch Reading
Common Problems
OneToMany
ManyToMany
Embeddables
Common Problems
Primary keys in CollectionTable

The JPA 2.0 specification does not provide a way to define the Id in the Embeddable. However, to delete or update an element of the ElementCollection mapping, some unique key is normally required. Otherwise, on every update the JPA provider would need to delete everything from the CollectionTable for the Entity, and then insert the values back. So, the JPA provider will most likely assume that the combination of all of the fields in the Embeddable is unique, in combination with the foreign key (JoinColumn(s)). This however could be inefficient, or just not feasible if the Embeddable is big or complex. Some JPA providers may allow the Id to be specified in the Embeddable, to resolve this issue. Note that in this case the Id only needs to be unique for the collection, not the table, as the foreign key is included. Some may also allow the unique option on the CollectionTable to be used for this. Otherwise, if your Embeddable is complex, you may consider making it an Entity and using a OneToMany instead.
Advanced Topics
Events
An event is a hook into a system that allows the execution of some code when the event occurs. Events can be used to extend, integrate, debug, audit or monitor a system. JPA defines several events for the persistent life-cycle of Entity objects.

JPA events are defined through annotations or in the orm.xml. Any method of a persistent class can be annotated with an event annotation to be called for all instances of that class. An event listener can also be configured for a class using the EntityListeners (http://download.oracle.com/javaee/6/api/javax/persistence/EntityListeners.html) annotation or <entity-listeners> XML element. The specified listener class does not need to implement any interface (JPA does not use the Java event model); it only needs to annotate its methods with the desired event annotation.

JPA defines the following events:

PostLoad (http://download.oracle.com/javaee/6/api/javax/persistence/PostLoad.html) - Invoked after an Entity is loaded into the persistence context (EntityManager), or after a refresh operation.
PrePersist (http://download.oracle.com/javaee/6/api/javax/persistence/PrePersist.html) - Invoked before the persist operation is invoked on an Entity. Also invoked on merge for new instances, and on cascade of a persist operation. The Id of the object may not have been assigned yet, and could be assigned by the event.
PostPersist (http://download.oracle.com/javaee/6/api/javax/persistence/PostPersist.html) - Invoked after a new instance is persisted to the database. This occurs during a flush or commit operation after the database INSERT has occurred, but before the transaction is committed. It does not occur during the persist operation. The Id of the object should be assigned.
PreUpdate (http://download.oracle.com/javaee/6/api/javax/persistence/PreUpdate.html) - Invoked before an instance is updated in the database. This occurs during a flush or commit operation after the changes have been computed, but before the database UPDATE. It does not occur during the merge operation.
PostUpdate (http://download.oracle.com/javaee/6/api/javax/persistence/PostUpdate.html) - Invoked after an instance is updated in the database. This occurs during a flush or commit operation after the database UPDATE has occurred, but before the transaction is committed. It does not occur during the merge operation.
PreRemove (http://download.oracle.com/javaee/6/api/javax/persistence/PreRemove.html) - Invoked before the remove operation is invoked on an Entity. Also invoked for cascade of a remove operation. It is also invoked during a flush or commit for orphanRemoval in JPA 2.0.
PostRemove (http://download.oracle.com/javaee/6/api/javax/persistence/PostRemove.html) - Invoked after an instance is deleted from the database. This occurs during a flush or commit operation after the database DELETE has occurred, but before the transaction is committed. It does not occur during the remove operation.
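For illustration, a sketch of registering the listener shown below on an Entity, combined with a callback method annotated directly on the class (the uid and lastUpdated attributes are assumptions):

import java.util.Calendar;
import javax.persistence.*;

@Entity
@EntityListeners(EmployeeEventListener.class)
public class Employee {
    @Id
    @GeneratedValue
    private long id;
    private String uid;
    private Calendar lastUpdated;

    // A callback can also be defined directly on the Entity class.
    @PostLoad
    public void postLoad() {
        // Initialize any non-persistent state after the object is loaded.
    }
    // getters and setters omitted
}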
public class EmployeeEventListener {
    @PrePersist
    public void prePersist(Object object) {
        Employee employee = (Employee)object;
        employee.setUID(UIDGenerator.newUUI());
        employee.setLastUpdated(Calendar.getInstance());
    }

    @PreUpdate
    public void preUpdate(Object object) {
        Employee employee = (Employee)object;
        employee.setLastUpdated(Calendar.getInstance());
    }
}
Example default Entity listener xml

<?xml version="1.0" encoding="UTF-8"?>
<entity-mappings version="2.0" xmlns="http://java.sun.com/xml/ns/persistence/orm"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://java.sun.com/xml/ns/persistence/orm orm_2_0.xsd">
    <persistence-unit-metadata>
        <persistence-unit-defaults>
            <entity-listeners>
                <entity-listener class="org.acme.ACMEEventListener">
                    <pre-persist method-name="prePersist"/>
                    <pre-update method-name="preUpdate"/>
                </entity-listener>
            </entity-listeners>
        </persistence-unit-defaults>
    </persistence-unit-metadata>
</entity-mappings>
Extended Events
The JPA events are only defined for the Entity life-cycle. There are no EntityManager events, or system level events. Some JPA providers may provide additional events.

TopLink / EclipseLink : Provide an extended event mechanism. Additional Entity level events are defined through the DescriptorEventListener API. A session level event mechanism is also provided through the SessionEventListener API. The event objects also provide additional information, including the database row and the set of object changes.
Views
A database VIEW is a virtual view of a table or query. Views are useful for hiding the complexity of a table, set of tables, or data-set. In JPA you can map to a VIEW the same as to a table, using the @Table annotation. You can then map each column in the view to your object's attributes. Views are normally read-only, so objects mapping to views are normally also read-only. In most databases views can also be updatable, depending on how complex the query is that they encapsulate. Even for complex queries, database triggers can normally be used to allow updates through the view.
Views can often be used in JPA to work around mapping limitations. For example, if a table join is not supported by JPA, or a database function needs to be called to transform data, this can normally be done inside a view, so JPA can just map to the simplified data. Using views does require database expertise, and the definition of views can be database dependent.
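A minimal sketch of mapping a read-only Entity to a database view; the view name and columns are assumptions:

import javax.persistence.*;

@Entity
// A VIEW is mapped the same as a table.
@Table(name="EMPLOYEE_SUMMARY_VIEW")
public class EmployeeSummary {
    @Id
    @Column(name="EMP_ID")
    private long id;
    @Column(name="FULL_NAME")
    private String fullName;
    @Column(name="SALARY")
    private double salary;

    // Only getters are exposed, as the view is treated as read-only.
    public long getId() { return id; }
    public String getFullName() { return fullName; }
    public double getSalary() { return salary; }
}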
Interfaces
Interfaces can be used for two main purposes in a model. The first is as a public interface that defines the public API for the class. The second is as a common type that allows multiple distinct implementers.
Public Interfaces
If you have a public interface in JPA, you just need to map the implementation class and are fine for the most part. One issue is that you need to use the implementation class for queries, as JPA does not know about the interface. For JPQL the default alias is also the implementation class, but you could redefine this to be the public interface by setting the name of the Entity to be the public interface. Some JPA providers allow interfaces to be defined.

TopLink / EclipseLink : Support defining and querying on public interfaces using a DescriptorCustomizer and the InterfacePolicy.

Example public interface alias

@Entity(name="Employee")
public class EmployeeImpl {
    ...
}
Interface Types
If you have a common interface with multiple distinct implementers, this can have some issues. If you use the interface to define a variable relationship, then this is difficult to map. JPA has no direct support for interfaces or variable relationships. You could change the interface to an abstract class, then use TABLE_PER_CLASS inheritance to map it. Otherwise, you could split the relationship into multiple relationships, one per implementer, or you could just remove the relationship and query for the related objects instead. Querying on an interface is also difficult: you would need to query on each implementer of the interface, and then union the results in memory. Some JPA providers have support for mapping interfaces, and for variable relationships.

TopLink / EclipseLink : Support mapping interfaces using a SessionCustomizer and a RelationalDescriptor and InterfacePolicy. Variable relationships can be defined using the @VariableOneToOne annotation or XML.
Stored Procedures
A stored procedure is a procedure or function that resides on the database. Stored procedures are typically written in some database specific language that is similar to SQL, such as PL/SQL on Oracle. Some databases, such as Oracle, also support stored procedures written in Java.

Stored procedures can be useful for batch data processing tasks. By performing the task in the database, they avoid the cost of sending the data to and from the database client, so can operate much more efficiently. Stored procedures can also be used to access database specific functionality that can only be accessed on the server. Stored procedures can also be used when there are strict security requirements, to avoid giving users access to the raw tables or to unverified SQL operations. Some legacy applications have also been written in database procedural languages, and need to be integrated with.

The disadvantages of using stored procedures are that they are less flexible than SQL, and require developing and maintaining functionality that is often written in a different language than the application developers may be used to, is difficult to develop and debug, and typically uses a limited procedural programming language. There is also a general misconception that using stored procedures will improve performance, in that if you put the same SQL the application is executing inside a stored procedure it will somehow become faster. This is false, and normally the opposite is true, as stored procedures restrict the dynamic ability of the persistence layer to optimize data retrieval. Stored procedures only improve performance when they use more optimal SQL than the application, typically when they perform an entire task on the database.

JPA does not have any direct support for stored procedures. Some types of stored procedures can be executed in JPA through native queries. Native queries in JPA allow any SQL that returns nothing, or returns a database result set, to be executed. The syntax to execute a stored procedure depends on the database. JPA does not support stored procedures that use OUTPUT or INOUT parameters. Some databases such as DB2, Sybase and SQL Server allow stored procedures to return result sets. Oracle does not allow result sets to be returned, only OUTPUT parameters, but does define a CURSOR type that can be returned as an OUTPUT parameter. Oracle also supports stored functions, which can return a single value. A stored function can normally be executed using a native SQL query by selecting the function value from the Oracle DUAL table.

Some JPA providers have extended support for stored procedures; some also support overriding any CRUD operation for an Entity with a stored procedure or custom SQL. Some JPA providers have support for CURSOR OUTPUT parameters.

TopLink / EclipseLink : Support stored procedures using the @NamedStoredProcedureQuery annotation or XML, or the StoredProcedureCall class. Overriding any CRUD operation for a class or relationship is also supported using a DescriptorCustomizer and the DescriptorQueryManager class. CURSOR OUTPUT parameters are supported.

Example executing a stored procedure on Oracle

EntityManager em = getEntityManager();
Query query = em.createNativeQuery("BEGIN VALIDATE_EMP(P_EMP_ID=>?); END;");
query.setParameter(1, empId);
query.executeUpdate();
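As mentioned above, a stored function on Oracle can be called by selecting its value from DUAL with a native query. A sketch, assuming a hypothetical CALC_BONUS function and that the driver returns the value as a BigDecimal:

// Call a stored function and read its single return value.
EntityManager em = getEntityManager();
Query query = em.createNativeQuery("SELECT CALC_BONUS(?) FROM DUAL");
query.setParameter(1, empId);
BigDecimal bonus = (BigDecimal)query.getSingleResult();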
Structured Object-Relational Data Types

Databases that support object-relational data include:

Oracle
DB2
PostgreSQL

The basic model allows you to define Structs or Object-types to represent your data. The structures can have nested structures, arrays of basic data or other structures, and refs to other structures. You can then store a structure in a normal relational table column, or create a special table to store the structures directly. Querying is basic SQL, with a few extensions to handle traversing the special types. JPA does not support object-relational data-types, but some JPA providers may offer some support.

TopLink / EclipseLink : Support object-relational data-types through their ObjectRelationalDataTypeDescriptor and mapping classes. Custom support is also offered for Oracle spatial database JGeometry structures.
XML Data Types

Databases that support XML data-types include:

Oracle (XDB)
DB2
PostgreSQL

JPA has no extended support for XML data, although it is possible to store an XML String in the database, just mapped as a Basic. Some JPA providers may offer extended XML data support, such as query extensions, or allow mapping an XML DOM. If you wish to map the XML data into objects, you could make use of the JAXB specification. You may even be able to integrate this with your JPA objects.

TopLink / EclipseLink : Support Oracle XDB XMLType columns using their DirectToXMLTypeMapping. XMLTypes can be mapped either as a String or as an XML DOM (Document). Query extensions are provided for XPath queries within Expression queries. EclipseLink also includes a JAXB implementation for object-XML mapping.
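If no extended XML support is needed, an XML document can simply be stored as a String Basic, typically in a CLOB column. A sketch, with illustrative class and column names:

import javax.persistence.*;

@Entity
public class PurchaseOrder {
    @Id
    @GeneratedValue
    private long id;

    // The raw XML document is stored as a large character column.
    @Lob
    @Basic
    @Column(name="ORDER_XML")
    private String orderXml;
    // getters and setters omitted
}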
History
A common desire in database applications is to maintain a record and history of the database changes. This can be used for tracking and auditing purposes, to allow undoing of changes, or to keep a record of the system's data over time. Many databases have auditing functionality that allows some level of tracking of the changes made to the data. Some databases, such as Oracle with its Flashback feature, allow the automatic tracking of history at the row level, and even allow querying on past versions of the data.

It is also possible for an application to maintain its own history through its data model. All that is required is to add a START and END timestamp column to the table. The current row is then the one in which the END timestamp is null. When a row is inserted, its START timestamp is set to the current time. When a row is updated, instead of updating the row a new row is inserted with the same id and data, but a different START timestamp, and the old row is updated to set its END timestamp to the current time. The primary key of the table will need to have the START timestamp added to it. History can also be used to avoid deletion. Instead of deleting a row, the END timestamp can just be set to the current time. Another solution is to add a DELETED boolean column to the table, and record when a row is deleted.

The history data could either be stored in-place in the altered table, or the table could be left to only contain the current version of the data, and a mirror history table could be added to store the history. In the mirror case, database triggers could be used to write to the history table. For the in-place case, a database view could be used to give a view of the table as of the current time. To query the current data from a history table, any query must include a clause checking that END is NULL. To query as of a point in time, the where clause must check that the point in time is between the START and END timestamps.

JPA does not define any specific history support. Oracle Flashback can be used with JPA, but any queries for historical data will need to use native SQL. If a mirror history table is used with triggers, JPA can still be used to query the current data. A subclass or sibling class could also be mapped to the history table to allow querying of history data. If a database view is used, JPA can be used by mapping the Entity to the view. If a history table is used, JPA can still be used to map to the table, and start and end attributes can be added to the object. Queries for the current data can append the current time to the query. Relationships are more difficult, as JPA requires relationships to be by primary key, and historical relationships would not be. Some JPA providers have support for history.
TopLink / EclipseLink : Support Oracle Flashback querying as well as application specific history. Historical queries can be defined using the query hint "eclipselink.history.as-of" or Expression queries. Automatic tracking of history is also supported using the HistoryPolicy API, which supports maintaining and querying a mirror history table.
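A sketch of the history pattern described above, with START/END timestamp attributes on the Entity and a query for the current rows. The class, table and column names are assumptions, and maintaining the rows on update is left to the application or database triggers:

import java.sql.Timestamp;
import javax.persistence.*;

@Entity
@Table(name="EMPLOYEE_HIST")
@IdClass(EmployeeHistoryId.class)
public class EmployeeHistory {
    @Id
    private long id;
    // The START timestamp is part of the primary key, as described above.
    @Id
    @Column(name="START_DATE")
    private Timestamp startDate;
    @Column(name="END_DATE")
    private Timestamp endDate;
    private String name;
    // getters and setters omitted
}

public class EmployeeHistoryId {
    private long id;
    private Timestamp startDate;
    // equals() and hashCode() omitted
}

// Query the current version of the data: the row whose END timestamp is null.
Query query = em.createQuery(
    "Select e from EmployeeHistory e where e.id = :id and e.endDate is null");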
Auditing
See Auditing and Security.
Runtime
Once you have mapped your object model, the second step in persistence development is to access and process your objects from your application; this is referred to as the runtime usage of persistence. Various persistence specifications have had various runtime models. The most common model is to have a runtime API; a runtime API typically defines API for connecting to a data-source, querying and transactions.
Entity Manager
JPA provides a runtime API defined by the javax.persistence (https://java.sun.com/javaee/5/docs/api/javax/persistence/package-summary.html) package. The main runtime class is the EntityManager (https://java.sun.com/javaee/5/docs/api/javax/persistence/EntityManager.html) class. The EntityManager provides API for creating queries, accessing transactions, and finding, persisting, merging and deleting objects. The JPA API can be used in any Java environment including JSE and JEE. An EntityManager can be created through an EntityManagerFactory (https://java.sun.com/javaee/5/docs/api/javax/persistence/EntityManagerFactory.html), can be injected into an instance variable in an EJB SessionBean, or can be looked up in JNDI in a JEE server. JPA is used differently in Java Standard Edition (JSE) versus Java Enterprise Edition (JEE).
Example persistence.xml

<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://java.sun.com/xml/ns/persistence"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://java.sun.com/xml/ns/persistence persistence_2_0.xsd"
        version="2.0">
    <persistence-unit name="acme" transaction-type="RESOURCE_LOCAL">
        <!-- EclipseLink -->
        <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
        <exclude-unlisted-classes>false</exclude-unlisted-classes>
        <properties>
            <property name="javax.persistence.jdbc.driver" value="org.acme.db.Driver"/>
            <property name="javax.persistence.jdbc.url" value="jdbc:acmedb://localhost/acme"/>
            <property name="javax.persistence.jdbc.user" value="wile"/>
            <property name="javax.persistence.jdbc.password" value="elenberry"/>
        </properties>
    </persistence-unit>
</persistence>
Example of looking up an EntityManager in JNDI from a SessionBean

InitialContext context = new InitialContext();
EntityManager entityManager = (EntityManager)context.lookup("java:comp/env/persistence/acme/entity-manager");
...

Example of looking up an EntityManagerFactory in JNDI from a SessionBean

InitialContext context = new InitialContext();
EntityManagerFactory factory = (EntityManagerFactory)context.lookup("java:comp/env/persistence/acme/factory");
...

Example of injecting an EntityManager and EntityManagerFactory in a SessionBean

@Stateless(name="EmployeeService", mappedName="acme/EmployeeService")
@Remote(EmployeeService.class)
public class EmployeeServiceBean implements EmployeeService {
    @PersistenceContext(unitName="acme")
    private EntityManager entityManager;
    @PersistenceUnit(unitName="acme")
    private EntityManagerFactory factory;
    ...
}

Example of looking up an EJBContext in an Entity (useful for audit)

protected EJBContext getContext() {
    try {
        InitialContext context = new InitialContext();
        return (EJBContext)context.lookup("java:comp/EJBContext");
    } catch (NamingException e) {
        throw new EJBException(e);
    }
}
Querying
Querying is a fundamental part of persistence. Being able to persist something is not very useful without being able to query it back. There are many querying languages and frameworks; the most common query language is SQL, used in relational databases. JPA uses the Java Persistence Query Language (JPQL), which is based on SQL and evolved from the EJB Query Language (EJBQL). It basically provides the SQL syntax at the object level instead of at the data level.

Other querying languages and frameworks include:

SQL
EJBQL
JDOQL
Query By Example (QBE)
TopLink Expressions
Hibernate Criteria
Object Query Language (OQL)
JPQL is similar in syntax to SQL and can be defined through its BNF definition. JPA provides querying through the Query interface, the @NamedQuery and @NamedNativeQuery annotations, and the <named-query> and <named-native-query> XML elements.

JPA provides several querying mechanisms:

JPQL
Criteria API
Native SQL Queries
Named Queries
There are two main types of queries in JPA: named queries and dynamic queries. A named query is used for a static query that will be used many times in the application. The advantage of a named query is that it can be defined once, in one place, and reused in the application. Most JPA providers also pre-parse/compile named queries, so they are more optimized than dynamic queries, which typically must be parsed/compiled every time they are executed. Since named queries are part of the persistence meta-data they can also be optimized or overridden in the orm.xml without changing the application code.

Named queries are defined through the @NamedQuery (https://java.sun.com/javaee/5/docs/api/javax/persistence/NamedQuery.html) and @NamedQueries (https://java.sun.com/javaee/5/docs/api/javax/persistence/NamedQueries.html) annotations, or the <named-query> XML element. Named queries are accessed through the EntityManager.createNamedQuery (https://java.sun.com/javaee/5/docs/api/javax/persistence/EntityManager.html#createNamedQuery(java.lang.String)) API, and executed through the Query (https://java.sun.com/javaee/5/docs/api/javax/persistence/Query.html) interface. Named queries can be defined on any annotated class, but are typically defined on the Entity that they query for. The name of the named query must be unique for the entire persistence unit; the name is not local to the Entity. In the orm.xml named queries can be defined either on the <entity-mappings> or on any <entity>. Named queries are typically parametrized, so they can be executed with different parameter values. Parameters are defined in JPQL using the :<name> syntax for named parameters, or the ? syntax for positional parameters. A collection of query hints can also be provided to a named query. Query hints can be used to optimize or to provide special configuration to a query. Query hints are specific to the JPA provider. Query hints are defined through the @QueryHint (https://java.sun.com/javaee/5/docs/api/javax/persistence/QueryHint.html) annotation or the <query-hint> XML element.
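A sketch of defining and executing a named query; the query name, hint and classes are illustrative:

// Defined on the Entity, typically the one being queried.
@Entity
@NamedQuery(
    name="findEmployeesByCity",
    query="Select e from Employee e where e.address.city = :city",
    hints={@QueryHint(name="acme.jpa.batch", value="e.address")})
public class Employee {
    ...
}

// Executed through the EntityManager.
EntityManager em = getEntityManager();
Query query = em.createNamedQuery("findEmployeesByCity");
query.setParameter("city", "Ottawa");
List<Employee> employees = query.getResultList();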
Dynamic Queries
Dynamic queries are normally used when the query depends on the context. For example, depending on which items in the query form were filled in, the query may have different parameters. Dynamic queries are also useful for uncommon queries, or prototyping. Because JPQL is a string based language, dynamic queries using JPQL typically involve string concatenation. Some JPA providers provide more dynamic query languages, and in JPA 2.0 a Criteria API will be provided to make dynamic queries easier. Dynamic queries can use parameters and query hints the same as named queries. Dynamic queries are accessed through the EntityManager.createQuery (https://java.sun.com/javaee/5/docs/api/javax/persistence/EntityManager.html#createQuery(java.lang.String)) API, and executed through the Query (https://java.sun.com/javaee/5/docs/api/javax/persistence/Query.html) interface.

Example dynamic query execution

EntityManager em = getEntityManager();
Query query = em.createQuery("Select emp from Employee emp where emp.address.city = :city");
query.setParameter("city", "Ottawa");
query.setHint("acme.jpa.batch", "emp.address");
List<Employee> employees = query.getResultList();
...
Parameters
Parameters are defined in JPQL using the :<param> syntax, i.e. "Select e from Employee e where e.id = :id". The parameter values are set on the Query using the Query.setParameter (https://java.sun.com/javaee/5/docs/api/javax/persistence/Query.html#setParameter(java.lang.String, java.lang.Object)) API. Parameters can also be defined using ?, mainly for native SQL queries. You can also use ?<int>. These are positional parameters, not named parameters, and are set using the Query.setParameter (https://java.sun.com/javaee/5/docs/api/javax/persistence/Query.html#setParameter(int, java.lang.Object)) API; the int is the index of the parameter in the SQL. Some JPA providers also allow the :<param> syntax for native queries. For temporal parameters (Date, Calendar) you can also pass the temporal type, depending on whether you want the Date, Time or Timestamp from the value. Parameters are normally basic values, but you can also reference objects if comparing on their Id, i.e. "Select e from Employee e where e.address = :address" can take the Address object as a parameter. The parameter values are always at the object level when comparing to a mapped attribute; for example, if comparing a mapped enum, the enum value is used, not the database value.
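A short sketch of binding named, positional and temporal parameters (the query strings are illustrative):

EntityManager em = getEntityManager();

// Named parameter.
Query query = em.createQuery("Select e from Employee e where e.id = :id");
query.setParameter("id", 1234L);

// Positional parameter, typically used with native SQL.
Query nativeQuery = em.createNativeQuery("SELECT * FROM EMP WHERE F_NAME = ?");
nativeQuery.setParameter(1, "Bob");

// Temporal parameter, passing only the DATE portion of the Calendar value.
Query dateQuery = em.createQuery("Select e from Employee e where e.startDate = :start");
dateQuery.setParameter("start", Calendar.getInstance(), TemporalType.DATE);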
Query Results
Normally JPA queries return your persistent Entity objects. The returned objects will be managed by the persistence context (EntityManager) and changes made to the objects will be tracked as part of the current transaction. In some cases more complex queries can be built that just return data instead of Entity objects, or even perform update or deletion operations.

There are three methods to execute a Query, each returning different results:

Query.getResultList (https://java.sun.com/javaee/5/docs/api/javax/persistence/Query.html#getResultList())
Query.getSingleResult (https://java.sun.com/javaee/5/docs/api/javax/persistence/Query.html#getSingleResult())
Query.executeUpdate (https://java.sun.com/javaee/5/docs/api/javax/persistence/Query.html#executeUpdate())

getResultList returns a List of the results. This is normally a List of Entity objects, but could also be a list of data, or arrays.
SELECT e FROM Employee e
    Returns a List<Employee> (List of Employee objects). The objects will be managed.
SELECT e.firstName FROM Employee e
    Returns a List<String> (List of String values). The data is not managed.
SELECT e.firstName, e.lastName FROM Employee e
    Returns a List<Object[]> (List of object arrays, each with two String values). The data is not managed.
SELECT e, e.address FROM Employee e
    Returns a List<Object[]> (List of object arrays, each with an Employee and an Address object). The objects will be managed.
SELECT EMP_ID, F_NAME, L_NAME FROM EMP
    Returns a List<Object[]> (List of object arrays, each with the raw row data). The data is not managed.
getSingleResult returns a single result. This is normally an Entity object, but could also be data, or an object array. If the query returns nothing, an exception is thrown. This is unfortunate, as typically just returning null would be desired. Some JPA providers may have an option to return null instead of throwing an exception if nothing is returned. Also, if the query returns more than a single row, an exception is thrown. This is also unfortunate, as typically just returning the first result is desired. Some JPA providers may have an option to return the first result instead of throwing an exception; otherwise you need to call getResultList and get the first element.
SELECT e FROM Employee e
    Returns an Employee. The object will be managed.
SELECT e.firstName FROM Employee e
    Returns a String. The data is not managed.
SELECT e.firstName, e.lastName FROM Employee e
    Returns an Object[] (object array with two String values). The data is not managed.
SELECT e, e.address FROM Employee e
    Returns an Object[] (object array with an Employee and an Address object). The objects will be managed.
SELECT EMP_ID, F_NAME, L_NAME FROM EMP
    Returns an Object[] (object array with the raw row data). The data is not managed.
executeUpdate returns the database row count. This can be used for UPDATE or DELETE JPQL queries, or any native SQL (DML or DDL) query that does not return a result.
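A sketch of the three execution methods; note the exception handling needed for getSingleResult when no result is found (the query strings are illustrative):

EntityManager em = getEntityManager();

List<Employee> employees =
    em.createQuery("Select e from Employee e").getResultList();

Employee employee = null;
try {
    employee = (Employee)em.createQuery(
            "Select e from Employee e where e.id = :id")
        .setParameter("id", 1234L)
        .getSingleResult();
} catch (NoResultException notFound) {
    // No matching row; treat as null rather than an error.
}

em.getTransaction().begin();
int updated = em.createQuery(
        "Update Employee e set e.salary = e.salary * 2 where e.address.city = :city")
    .setParameter("city", "Ottawa")
    .executeUpdate();
em.getTransaction().commit();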
Common Queries
Join fetch, read both employee and address in same query
To query all employees and their addresses, a join fetch is used. This selects both the employee and address data in the same query. If the join fetch were not used, the employee address would still be available, but could cause a query for each employee for its address. This reduces n+1 queries to 1 query.

Join fetch:

SELECT e FROM Employee e JOIN FETCH e.address

Join fetch can also be used on collection relationships:

SELECT e FROM Employee e JOIN FETCH e.address JOIN FETCH e.phones

Outer joins can be used to avoid null and empty relationships from filtering the results:

SELECT e FROM Employee e LEFT OUTER JOIN FETCH e.address LEFT OUTER JOIN FETCH e.phones

You can also select multiple objects in a query, but note that this does not instantiate the relationship, so accessing the relationship could still trigger another query:

SELECT e, a FROM Employee e, Address a WHERE e.address = a
JPA 2.0:

Select p from Employee e join e.projects p where e.id = :id and INDEX(p) = 1
Advanced
Join Fetch and Query Optimization
There are several ways to optimize queries in JPA. The typical query performance issue is that an object is read, then its related objects are read one by one. This can be optimized using JOIN FETCH in JPQL, or through JPA provider specific query hints. See Join Fetching and Batch Reading.
the query, or to execute the query in a new EntityManager or transaction.
Flush Mode
Within a transaction context in JPA, changes made to the managed objects are normally not flushed (written) to the database until commit. So if a query were executed against the database directly, it would not see the changes made within the transaction, as these changes are only made in memory within Java. This can cause issues if new objects have been persisted, or objects have been removed or changed, as the application may expect the query to return these results. Because of this, JPA requires that the JPA provider perform a flush of all changes to the database before any query operation. This however can cause issues if the application is not expecting a flush as a side effect of a query operation. If the application's changes are not yet in a state to be flushed, a flush may not be desired. Flushing can also be expensive, and causes the database transaction, database locks and other resources to be held for the duration of the transaction, which can affect performance and concurrency.

JPA allows the flush mode for a query to be configured using the FlushModeType (https://java.sun.com/javaee/5/docs/api/javax/persistence/FlushModeType.html) enum and the Query.setFlushMode() (https://java.sun.com/javaee/5/docs/api/javax/persistence/Query.html#setFlushMode(javax.persistence.FlushModeType)) API. The flush mode is either AUTO, the default, which means flush before every query execution, or COMMIT, which means only flush on commit. The flush mode can also be set on an EntityManager using the EntityManager.setFlushMode() (https://java.sun.com/javaee/5/docs/api/javax/persistence/EntityManager.html#setFlushMode(javax.persistence.FlushModeType)) API, to affect all queries executed with the EntityManager. The EntityManager.flush() (https://java.sun.com/javaee/5/docs/api/javax/persistence/EntityManager.html#flush()) API can be called directly on the EntityManager anytime a flush is desired. Some JPA providers also let the flush mode be configured through persistence unit properties, or offer alternatives to flushing, such as performing the query against the in-memory objects.
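A short sketch of configuring the flush mode, on a single query and on the EntityManager:

EntityManager em = getEntityManager();

// Only flush on commit for this query; in-memory changes will not be written first.
Query query = em.createQuery("Select e from Employee e where e.address.city = :city");
query.setFlushMode(FlushModeType.COMMIT);

// Or change the default for all queries executed through this EntityManager.
em.setFlushMode(FlushModeType.COMMIT);

// A flush can still be forced explicitly when desired (within a transaction).
em.flush();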
How these query properties are implemented depends on the JPA provider and database. JDBC allows the maxResults to be set, and most JDBC drivers support this, so it will normally work for most JPA providers and most databases. Support for firstResult is less guaranteed to be efficient, as it normally requires database specific SQL. There is no standard SQL for pagination, so whether this is supported efficiently depends on your database and your JPA provider's support. When performing pagination, it is also important to order the result. If the query does not order the result, then each subsequent query could potentially return the results in a different order, and give a different page. Also, if rows are inserted or deleted in between the queries, the results can be slightly different.
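A sketch of paging through results with firstResult and maxResults; note the ORDER BY used to keep the pages consistent:

EntityManager em = getEntityManager();
Query query = em.createQuery(
    "Select e from Employee e order by e.lastName, e.id");
// Fetch the third page of 50 results.
query.setFirstResult(100);
query.setMaxResults(50);
List<Employee> page = query.getResultList();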
Example native named query execution

EntityManager em = getEntityManager();
Query query = em.createNamedQuery("findAllEmployeesInCity");
query.setParameter(0, "Ottawa");
List<Employee> employees = query.getResultList();
...

Example dynamic native query execution

EntityManager em = getEntityManager();
Query query = em.createNativeQuery(
    "SELECT E.* from EMP E, ADDRESS A WHERE E.EMP_ID = A.EMP_ID AND A.CITY = ?", Employee.class);
query.setParameter(0, "Ottawa");
List<Employee> employees = query.getResultList();
...
Stored Procedures
See Stored Procedures
Raw JDBC
It can sometimes be required to mix JDBC code with JPA code. This may be to access certain JDBC driver specific features, or to integrate with another application that uses JDBC instead of JPA. If you just require a JDBC connection, you could access one from your JEE server's DataSource, or connect directly through the DriverManager or a third party connection pool. If you need a JDBC connection in the same transaction context as your JPA application, you could use a JTA DataSource for both JPA and your JDBC access, to have them share the same global transaction. If you are not using JEE, or not using JTA, then you may be able to access the JDBC connection directly from your JPA provider. Some JPA providers provide an API to access a raw JDBC connection from their internal connection pool, or from their transaction context. In JPA 2.0 this API is somewhat standardized by the unwrap API on EntityManager.

To access a JDBC connection from an EntityManager, some JPA 2.0 providers may support:

java.sql.Connection connection = entityManager.unwrap(java.sql.Connection.class);

This connection could then be used for raw JDBC access. It normally should not be closed when finished, as the connection is being used by the EntityManager and will be released when the EntityManager is closed or the transaction is committed.
JPQL BNF
The following defines the structure of the JPQL query language. For further examples and usage see Querying.

| = or
[] = optional
* = repeatable (zero or more times)
{} = mandatory

QL_statement ::= select_statement | update_statement | delete_statement
Select
Select employee from Employee employee join employee.address address
where address.city = :city and employee.firstName like :name
order by employee.firstName

select_statement ::= select_clause from_clause [where_clause] [groupby_clause] [having_clause] [orderby_clause]
From
FROM Employee employee
JOIN FETCH employee.address
LEFT OUTER JOIN FETCH employee.phones
JOIN employee.manager manager, Employee ceo

from_clause ::= FROM identification_variable_declaration {, {identification_variable_declaration | collection_member_declaration}}*
identification_variable_declaration ::= range_variable_declaration { join | fetch_join }*
range_variable_declaration ::= abstract_schema_name [AS] identification_variable
join ::= join_spec join_association_path_expression [AS] identification_variable
fetch_join ::= join_spec FETCH join_association_path_expression
association_path_expression ::= collection_valued_path_expression | single_valued_association_path_expression
join_spec ::= [ LEFT [OUTER] | INNER ] JOIN
join_association_path_expression ::= join_collection_valued_path_expression | join_single_valued_association_path_expression
join_collection_valued_path_expression ::= identification_variable.collection_valued_association_field
join_single_valued_association_path_expression ::= identification_variable.single_valued_association_field
collection_member_declaration ::= IN (collection_valued_path_expression) [AS] identification_variable
single_valued_path_expression ::= state_field_path_expression | single_valued_association_path_expression
state_field_path_expression ::= {identification_variable | single_valued_association_path_expression}.state_field
single_valued_association_path_expression ::= identification_variable.{single_valued_association_field.}*single_valued_association_field
Select Clause
SELECT employee.id, employee.phones
SELECT DISTINCT employee.address.city, NEW com.acme.EmployeeInfo(AVG(employee.salary), MAX(employee.salary))

select_clause ::= SELECT [DISTINCT] select_expression {, select_expression}*
select_expression ::= single_valued_path_expression | aggregate_expression | identification_variable | OBJECT(identification_variable) | constructor_expression
constructor_expression ::= NEW constructor_name ( constructor_item {, constructor_item}* )
constructor_item ::= single_valued_path_expression | aggregate_expression
aggregate_expression ::= { AVG | MAX | MIN | SUM } ([DISTINCT] state_field_path_expression) | COUNT ([DISTINCT] identification_variable | state_field_path_expression | single_valued_association_path_expression)
Where
WHERE employee.firstName = :name
AND employee.address.city LIKE 'Ott%' ESCAPE '/'
OR employee.id IN (1, 2, 3)
AND (employee.salary * 2) > 40000

where_clause ::= WHERE conditional_expression
conditional_expression ::= conditional_term | conditional_expression OR conditional_term
conditional_term ::= conditional_factor | conditional_term AND conditional_factor
conditional_factor ::= [ NOT ] conditional_primary
conditional_primary ::= simple_cond_expression | (conditional_expression)
simple_cond_expression ::= comparison_expression | between_expression | like_expression | in_expression | null_comparison_expression | empty_collection_comparison_expression | collection_member_expression | exists_expression
between_expression ::= arithmetic_expression [NOT] BETWEEN arithmetic_expression AND arithmetic_expression | string_expression [NOT] BETWEEN string_expression AND string_expression | datetime_expression [NOT] BETWEEN datetime_expression AND datetime_expression
in_expression ::= state_field_path_expression [NOT] IN ( in_item {, in_item}* | subquery)
in_item ::= literal | input_parameter
like_expression ::= string_expression [NOT] LIKE pattern_value [ESCAPE escape_character]
null_comparison_expression ::= {single_valued_path_expression | input_parameter} IS [NOT] NULL
empty_collection_comparison_expression ::= collection_valued_path_expression IS [NOT] EMPTY
collection_member_expression ::= entity_expression [NOT] MEMBER [OF] collection_valued_path_expression
exists_expression ::= [NOT] EXISTS (subquery)
all_or_any_expression ::= { ALL | ANY | SOME } (subquery)
comparison_expression ::= string_expression comparison_operator {string_expression | all_or_any_expression} | boolean_expression { = | <> } {boolean_expression | all_or_any_expression} | enum_expression { = | <> } {enum_expression | all_or_any_expression} | datetime_expression comparison_operator {datetime_expression | all_or_any_expression} | entity_expression { = | <> } {entity_expression | all_or_any_expression} | arithmetic_expression comparison_operator {arithmetic_expression | all_or_any_expression}
comparison_operator ::= = | > | >= | < | <= | <>
arithmetic_expression ::= simple_arithmetic_expression | (subquery)
simple_arithmetic_expression ::= arithmetic_term | simple_arithmetic_expression { + | - } arithmetic_term
arithmetic_term ::= arithmetic_factor | arithmetic_term { * | / } arithmetic_factor
arithmetic_factor ::= [{ + | - }] arithmetic_primary
arithmetic_primary ::= state_field_path_expression | numeric_literal | (simple_arithmetic_expression) | input_parameter | functions_returning_numerics | aggregate_expression
string_expression ::= string_primary | (subquery)
string_primary ::= state_field_path_expression | string_literal | input_parameter | functions_returning_strings | aggregate_expression
datetime_expression ::= datetime_primary | (subquery)
boolean_expression ::= boolean_primary | (subquery)
boolean_primary ::= state_field_path_expression | boolean_literal | input_parameter
enum_expression ::= enum_primary | (subquery)
enum_primary ::= state_field_path_expression | enum_literal | input_parameter
entity_expression ::= single_valued_association_path_expression | simple_entity_expression
simple_entity_expression ::= identification_variable | input_parameter
Functions
LENGTH(SUBSTRING(UPPER(CONCAT('FOO', :bar)), 1, 5))

functions_returning_numerics ::= LENGTH(string_primary) | LOCATE(string_primary, string_primary [, simple_arithmetic_expression]) | ABS(simple_arithmetic_expression) | SQRT(simple_arithmetic_expression) | MOD(simple_arithmetic_expression, simple_arithmetic_expression) | SIZE(collection_valued_path_expression)
functions_returning_datetime ::= CURRENT_DATE | CURRENT_TIME | CURRENT_TIMESTAMP
functions_returning_strings ::= CONCAT(string_primary, string_primary) | SUBSTRING(string_primary, simple_arithmetic_expression, simple_arithmetic_expression) | TRIM([[trim_specification] [trim_character] FROM] string_primary) | LOWER(string_primary) | UPPER(string_primary)
trim_specification ::= LEADING | TRAILING | BOTH
Group By
GROUP BY employee.address.country, employee.address.city HAVING COUNT(employee.id) > 500

groupby_clause ::= GROUP BY groupby_item {, groupby_item}*
groupby_item ::= single_valued_path_expression | identification_variable
having_clause ::= HAVING conditional_expression
Order By
ORDER BY employee.address.country, employee.address.city DESC

orderby_clause ::= ORDER BY orderby_item {, orderby_item}*
orderby_item ::= state_field_path_expression [ ASC | DESC ]
Subquery
WHERE employee.salary = (SELECT MAX(wellPaid.salary) FROM Employee wellPaid)

subquery ::= simple_select_clause subquery_from_clause [where_clause] [groupby_clause] [having_clause]
subquery_from_clause ::= FROM subselect_identification_variable_declaration {, subselect_identification_variable_declaration}*
subselect_identification_variable_declaration ::= identification_variable_declaration | association_path_expression [AS] identification_variable | collection_member_declaration
simple_select_clause ::= SELECT [DISTINCT] simple_select_expression
Update
UPDATE Employee e SET e.salary = e.salary * 2 WHERE e.address.city = :city

update_statement ::= update_clause [where_clause]
update_clause ::= UPDATE abstract_schema_name [[AS] identification_variable] SET update_item {, update_item}*
update_item ::= [identification_variable.]{state_field | single_valued_association_field} = new_value
new_value ::= simple_arithmetic_expression | string_primary | datetime_primary | boolean_primary | enum_primary | simple_entity_expression | NULL
Delete
DELETE FROM Employee e WHERE e.address.city = :city

delete_statement ::= delete_clause [where_clause]
delete_clause ::= DELETE FROM abstract_schema_name [[AS] identification_variable]
general_identification_variable ::= identification_variable | KEY(identification_variable) | VALUE(identification_variable)

Allows selecting on Map keys, values and Map Entry.

SELECT ENTRY(e.contactInfo) from Employee e
Allows querying on Map keys and values.

SELECT e from Employee e join e.contactInfo c where KEY(c) = 'Email' and VALUE(c) = 'joe@gmail.com'

in_item ::= literal | single_valued_input_parameter

Allows collection parameters for IN.

SELECT e from Employee e where e.id in :param

functions_returning_strings ::= CONCAT(string_primary, string_primary {, string_primary}*)

Allows CONCAT with multiple arguments.

SELECT e from Employee e where CONCAT(e.address.street, e.address.city, e.address.province) = :address
SUBSTRING(string_primary, simple_arithmetic_expression [, simple_arithmetic_expression])

Allows SUBSTRING with a single argument.

SELECT e from Employee e where SUBSTRING(e.name, 3) = 'Mac'

case_expression ::= general_case_expression | simple_case_expression | coalesce_expression | nullif_expression
general_case_expression ::= CASE when_clause {when_clause}* ELSE scalar_expression END
when_clause ::= WHEN conditional_expression THEN scalar_expression
simple_case_expression ::= CASE case_operand simple_when_clause {simple_when_clause}* ELSE scalar_expression END
case_operand ::= state_field_path_expression | type_discriminator
simple_when_clause ::= WHEN scalar_expression THEN scalar_expression
coalesce_expression ::= COALESCE(scalar_expression {, scalar_expression}+)
nullif_expression ::= NULLIF(scalar_expression, scalar_expression)

Allows CASE, COALESCE and NULLIF functions.
SELECT e.name, CASE WHEN (e.salary >= 100000) THEN 1 WHEN (e.salary < 100000) THEN 2 ELSE 0 END from Employee e
functions_returning_numerics ::= ... | INDEX(identification_variable)

Allows querying on an indexed List mapping's index.

SELECT e from Employee e join e.phones p where INDEX(p) = 1 and p.areaCode = '613'

join_collection_valued_path_expression ::= identification_variable.{single_valued_embeddable_object_field.}*collection_valued_field
join_single_valued_path_expression ::= identification_variable.{single_valued_embeddable_object_field.}*single_valued_object_field

Allows nested dot notation on joins.

SELECT p from Employee e join e.employeeDetails.phones p where e.id = :id

select_item ::= select_expression [[AS] result_variable]

Allows the AS option in the select clause.

SELECT AVG(e.salary) AS s, e.address.city from Employee e group by e.address.city order by s

literalTemporal ::= DATE_LITERAL | TIME_LITERAL | TIMESTAMP_LITERAL

Allows JDBC date/time escape syntax.

SELECT e from Employee e where e.startDate > {d'1990-01-01'}
Persisting
JPA uses the EntityManager (https://java.sun.com/javaee/5/docs/api/javax/persistence/EntityManager.html) API for runtime usage. The EntityManager represents the application's session or dialog with the database. Each request, or each client, will use its own EntityManager to access the database. The EntityManager also represents a transaction context; in a typical stateless model a new EntityManager is created for each transaction. In a stateful model, an EntityManager may match the lifecycle of a client's session.

The EntityManager provides API for all required persistence operations. This includes the CRUD operations:

persist (https://java.sun.com/javaee/5/docs/api/javax/persistence/EntityManager.html#persist(java.lang.Object)) (INSERT)
merge (https://java.sun.com/javaee/5/docs/api/javax/persistence/EntityManager.html#merge(java.lang.Object)) (UPDATE)
remove (https://java.sun.com/javaee/5/docs/api/javax/persistence/EntityManager.html#remove(java.lang.Object)) (DELETE)
find (https://java.sun.com/javaee/5/docs/api/javax/persistence/EntityManager.html#find(java.lang.Class, java.lang.Object)) (SELECT)

The EntityManager is an object-oriented API, so it does not map directly onto database SQL or DML operations. For example, to update an object you just need to read the object, change its state through its set methods, and then call commit on the transaction. The EntityManager figures out which objects you changed and performs the correct updates to the database; there is no explicit update operation in JPA.
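A short sketch of the point above: an update is performed simply by changing a managed object inside a transaction, with no explicit update call (the salary attribute is an assumption):

EntityManager em = getEntityManager();
em.getTransaction().begin();
Employee employee = em.find(Employee.class, id);
// The change is tracked by the persistence context and written on commit.
employee.setSalary(employee.getSalary() + 1000);
em.getTransaction().commit();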
Persist
The EntityManager.persist() (https://java.sun.com/javaee/5/docs/api/javax/persistence/EntityManager.html#persist(java.lang.Object)) operation is used to insert a new object into the database. persist does not directly insert the object into the database; it just registers it as new in the persistence context (transaction). When the transaction is committed, or if the persistence context is flushed, then the object will be inserted into the database.

If the object uses a generated Id, the Id will normally be assigned to the object when persist is called, so persist can also be used to have an object's Id assigned. The one exception is if IDENTITY sequencing is used; in this case the Id is only assigned on commit or flush, because the database will only assign the Id on INSERT. If the object does not use a generated Id, you should normally assign its Id before calling persist.

The persist operation can only be called within a transaction; an exception will be thrown outside of a transaction. The persist operation is in-place, in that the object being persisted becomes part of the persistence context. The state of the object at the point of the commit of the transaction will be persisted, not its state at the point of the persist call.

persist should normally only be called on new objects. It is allowed to be called on existing objects if they are part of the persistence context, but this is only for the purpose of cascading persist to any possible related new objects. If persist is called on an existing object that is not part of the persistence context, then an exception may be thrown, or an insert may be attempted and a database constraint error may occur, or if no constraints are defined, it may be possible to have duplicate data inserted. persist can only be called on Entity objects, not on Embeddable objects, collections, or non-persistent objects. Embeddable objects are automatically persisted as part of their owning Entity.

Calling persist is not always required. If you relate a new object to an existing object that is part of the persistence context, and the relationship is cascade persist, then it will be automatically inserted when the transaction is committed, or when the persistence context is flushed.

Example persist

EntityManager em = getEntityManager();
em.getTransaction().begin();
Employee employee = new Employee();
employee.setFirstName("Bob");
Address address = new Address();
address.setCity("Ottawa");
employee.setAddress(address);
em.persist(employee);
em.getTransaction().commit();
Cascading Persist
Calling persist on an object will also cascade the persist operation across any relationship that is marked as cascade persist. If a relationship is not cascade persist, and a related object is new, then an exception may be thrown if you do not first call persist on the related object. Intuitively you may consider marking every relationship as cascade persist to avoid having to worry about calling persist on every object, but this can also lead to issues.

One issue with marking all relationships cascade persist is performance. On each persist call, all of the related objects will need to be traversed and checked for references to any new objects. This can actually lead to n^2 performance issues if you mark all relationships cascade persist and persist a large new graph of objects. If you just call persist on the root object, this is OK. However, if you call persist on each object in the graph, then you will traverse the entire graph for each object in the graph, and this can lead to a major performance issue. The JPA spec should probably define persist to only apply to new objects not already part of the persistence context, but it requires persist to apply to all objects, whether new, existing, or already persisted, so it can have this issue.

A second issue is that if you remove an object to have it deleted, and then call persist on the object, it will resurrect the object, and it will become persistent again. This may be desired if it is intentional, but the JPA spec also requires this behavior for cascade persist. So if you remove an object, but forget to remove a reference to it from a cascade persist relationship, the remove will be ignored.

I would recommend only marking relationships that are composite or privately owned as cascade persist.
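A sketch of marking only a privately owned relationship as cascade persist, as recommended above (the classes and mappedBy attribute are illustrative):

import java.util.List;
import javax.persistence.*;

@Entity
public class Employee {
    @Id
    @GeneratedValue
    private long id;

    // Phones are privately owned by the Employee, so cascading persist is safe.
    @OneToMany(mappedBy="owner", cascade=CascadeType.PERSIST)
    private List<Phone> phones;

    // The employee does not own its manager, so persist is not cascaded here.
    @ManyToOne
    private Employee manager;
    // getters and setters omitted
}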
Merge
The EntityManager.merge() (https://java.sun.com/javaee/5/docs/api/javax/persistence/EntityManager.html#merge(java.lang.Object)) operation is used to merge the changes made to a detached object into the persistence context. merge does not directly update the object in the database; it merges the changes into the persistence context (transaction). When the transaction is committed, or if the persistence context is flushed, then the object will be updated in the database.

Normally merge is not required, although it is frequently misused. To update an object you simply need to read it, change its state through its set methods, then commit the transaction. The EntityManager will figure out everything that has been changed and update the database. merge is only required when you have a detached copy of a persistent object. A detached object is one that was read through a different EntityManager (or in a different transaction in a JEE managed EntityManager), or one that was cloned or serialized. A common case is a stateless SessionBean where the object is read in one transaction, then updated in another transaction. Since the update is processed in a different transaction, with a different EntityManager, it must first be merged. The merge operation will look up/find the managed object for the detached object, and copy each of the detached object's attributes that changed into the managed object, as well as cascading any related objects marked as cascade merge.

The merge operation can only be called within a transaction; an exception will be thrown outside of a transaction. The merge operation is not in-place, in that the object being merged will never become part of the persistence context. Any further changes must be made to the managed object returned by the merge, not to the detached object. merge is normally called on existing objects, but can also be called on new objects. If the object is new, a new copy of the object will be made and registered with the persistence context; the detached object itself will not be persisted. merge can only be called on Entity objects, not on Embeddable objects, collections, or non-persistent objects. Embeddable objects are automatically merged as part of their owning Entity.
Example merge

EntityManager em = createEntityManager();
Employee detached = em.find(Employee.class, id);
em.close();
...
em = createEntityManager();
em.getTransaction().begin();
Employee managed = em.merge(detached);
em.getTransaction().commit();
Cascading Merge
Calling merge on an object will also cascade the merge operation across any relationship that is marked as cascade merge. Even if the relationship is not cascade merge, the reference itself will still be merged; if the relationship is cascade merge, the relationship and each related object will be merged. Intuitively you may consider marking every relationship as cascade merge to avoid having to worry about calling merge on every object, but this is normally a bad idea.

One issue with marking all relationships cascade merge is performance. If an object has a lot of relationships, each merge call may have to traverse a large graph of objects. Another issue arises when the detached graph is inconsistent in some way: for example, an Employee whose manager holds a different detached copy of that same Employee as one of its managedEmployees. This may cause the same object to be merged twice, or at least make it unpredictable which copy gets merged, so you may not get the changes you expect. The same is true if you did not change an object at all but another user did: if merge cascades to this unchanged object, it will revert the other user's changes, or throw an OptimisticLockException (depending on your locking policy). This is normally not desirable. I would recommend only marking relationships that are composite or privately owned as cascade merge.
Transient Variables
Another issue with merge is transient variables. Since merge is normally used with object serialization, if a relationship was marked as transient (Java transient, not JPA transient), then the detached object will contain null, and null will be merged into the managed object, even though this is not desired. This occurs even if the relationship is not cascade merge, as merge always merges the references to related objects. Normally transient is used with serialization to avoid serializing the entire database when only a single object, or a small set of objects, is required. One solution is to avoid marking anything transient and instead use LAZY relationships in JPA to limit what is serialized (lazy relationships that have not been accessed will normally not be serialized). Another solution is to merge manually in your own code. Some JPA providers offer extended merge operations, such as a shallow merge or deep merge, or merging without merging references.
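A minimal sketch of the LAZY alternative suggested above, in place of marking the field with the Java transient keyword (the Employee/manager names are illustrative):

import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.Id;
import javax.persistence.ManyToOne;

@Entity
public class Employee {
    @Id
    private long id;

    // Not marked with the Java 'transient' keyword: an unaccessed LAZY relationship
    // is normally not serialized, so merging the detached copy will not null it out.
    @ManyToOne(fetch = FetchType.LAZY)
    private Employee manager;
}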
Remove
The EntityManager.remove() (https://java.sun.com/javaee/5/docs/api/javax/persistence/EntityManager.html#remove(java.lang.Object)) operation is used to delete an object from the database. remove does not directly delete the object from the database; it marks the object to be deleted in the persistence context (transaction). When the transaction is committed, or if the persistence context is flushed, the object will be deleted from the database.

The remove operation can only be called within a transaction; an exception will be thrown outside of a transaction. The remove operation must be called on a managed object, not on a detached object. Generally you must first find the object before removing it, although it is possible to call EntityManager.getReference() with the object's Id and call remove on the reference. Depending on how your JPA provider optimizes getReference and remove, this may not require reading the object from the database. remove can only be called on Entity objects, not on Embeddable objects, collections, or non-persistent objects. Embeddable objects are automatically removed as part of their owning Entity.

Example remove

EntityManager em = getEntityManager();
em.getTransaction().begin();
Employee employee = em.find(Employee.class, id);
em.remove(employee);
em.getTransaction().commit();
Cascading Remove
Calling remove on an object will also cascade the remove operation across any relationship that is marked as cascade remove. Note that cascade remove only affects the remove call itself. If you have a relationship that is cascade remove and you remove an object from the collection, or dereference an object, it will not be deleted; you must explicitly call remove on the object to have it deleted. Some JPA providers offer an extension for this behavior, and in JPA 2.0 there is an orphanRemoval option on OneToMany and OneToOne mappings to provide it.
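A hedged sketch of the JPA 2.0 orphanRemoval option mentioned above (the Employee/Phone model and owner field name are illustrative assumptions):

import java.util.List;
import javax.persistence.CascadeType;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.OneToMany;

@Entity
public class Employee {
    @Id
    private long id;

    // JPA 2.0: removing a Phone from this collection, or dereferencing it,
    // deletes its row, in addition to the normal cascade on remove().
    @OneToMany(mappedBy = "owner", cascade = CascadeType.REMOVE, orphanRemoval = true)
    private List<Phone> phones;
}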
Reincarnation
Normally an object that has been removed stays removed, but in some cases you may need to bring the object back to life. This normally occurs with natural Ids, not generated ones, where a new object would always get a new Id. Generally the desire to reincarnate an object comes from a questionable object model design, typically the desire to change the class type of an object (which cannot be done in Java, so a new object must be created). Normally the best solution is to change your object model so the object holds a type object that defines its type, instead of using inheritance, but sometimes reincarnation is desirable.

When done in two separate transactions this is normally fine: first you remove the object, then you persist it back. It is more complex if you wish to remove and persist an object with the same Id in the same transaction. If you call remove on an object, then call persist on the same object, it will simply no longer be removed. If you call remove on an object, then call persist on a different object with the same Id, the behavior may depend on your JPA provider and probably will not work. If you call flush after calling remove, then call persist, the object should be successfully reincarnated. Note that it will be a different row: the existing row will have been deleted and a new row inserted. If you wish the same row to be updated, you may need to resort to a native SQL update query.
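A sketch of the flush-then-persist approach to reincarnation in a single transaction (assuming an Employee with a natural, application-assigned Id and a setId method, both illustrative):

EntityManager em = getEntityManager();
em.getTransaction().begin();
Employee old = em.find(Employee.class, id);
em.remove(old);
// Flush so the DELETE is issued before the new row is inserted.
em.flush();
Employee replacement = new Employee();
replacement.setId(id);
em.persist(replacement);
em.getTransaction().commit();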
Advanced
Refresh
The EntityManager.refresh() (https://java.sun.com/javaee/5/docs/api/javax/persistence/EntityManager.html#refresh(java.lang.Object)) operation is used to refresh an object's state from the database. This will revert any non-flushed changes made to the object in the current transaction and refresh its state to what is currently defined on the database. If a flush has occurred, it will refresh to what was flushed. Refresh must be called on a managed object, so you may first need to find the object with the active EntityManager if you have a non-managed instance. Refresh will cascade to any relationships marked cascade refresh, although this may be done lazily depending on your fetch type, so you may need to access the relationship to trigger the refresh. refresh can only be called on Entity objects, not on Embeddable objects, collections, or non-persistent objects. Embeddable objects are automatically refreshed as part of their owning Entity.

Refresh can be used to revert changes, or if your JPA provider supports caching, it can be used to refresh stale cached data. Sometimes it is desirable to have a Query or find operation refresh its results. Unfortunately JPA 1.0 does not define how this can be done; some JPA providers offer query hints to allow refreshing to be enabled on a query.

TopLink / EclipseLink : Define a query hint "eclipselink.refresh" to allow refreshing to be enabled on a query.

JPA 2.0 defines a set of standard query hints for refreshing, see JPA 2.0 Cache APIs.

Example refresh

EntityManager em = getEntityManager();
em.refresh(employee);
Lock
See Read and Write Locking.
Get Reference
The EntityManager.getReference() (https://java.sun.com/javaee/5/docs/api/javax/persistence/EntityManager.html#getReference(java.lang.Class, java.lang.Object)) operation is used to obtain a handle to an object without requiring it to be loaded. It is similar to the find operation, but may return a proxy or unfetched object. JPA does not require that getReference avoid loading the object, so some JPA providers may not support it and just perform a normal find operation. The object returned by getReference should appear to be a normal object; if you access any method or attribute other than its Id, it will trigger itself to be fetched from the database. The intention of getReference is that it can be used on an insert or update as a stand-in for a related object, if you only have its Id and want to avoid loading the object. Note that getReference does not verify the existence of the object as find does. If the object does not exist and you use the unfetched object in an insert or update, you may get a foreign key constraint violation, or if you access the object it may trigger an exception.

Example getReference

EntityManager em = getEntityManager();
Employee manager = em.getReference(Employee.class, managerId);
Employee employee = new Employee();
...
em.persist(employee);
Flush
The EntityManager.flush() (https://java.sun.com/javaee/5/docs/api/javax/persistence/EntityManager.html#flush()) operation can be used to write all changes to the database before the transaction is committed. By default JPA does not normally write changes to the database until the transaction is committed. This is normally desirable as it avoids database access, resources and locks until required. It also allows database writes to be ordered and batched for optimal database access, and to maintain integrity constraints and avoid deadlocks. This means that when you call persist, merge, or remove, the database DML (INSERT, UPDATE, DELETE) is not executed until commit, or until a flush is triggered.

Flush has several usages:
- Flush changes before a query execution to enable the query to return new objects and changes made in the persistence unit.
- Insert persisted objects to ensure their Ids are assigned and accessible to the application, if using IDENTITY sequencing.
- Write all changes to the database to allow error handling of any database errors (useful when using JTA or SessionBeans).
- Flush and clear a batch of objects for batch processing in a single transaction.
- Avoid constraint errors, or reincarnate an object.

Example flush

public long createOrder(Order order) throws ACMEException {
    EntityManager em = getEntityManager();
    em.persist(order);
    try {
        em.flush();
    } catch (PersistenceException exception) {
        throw new ACMEException(exception);
    }
    return order.getId();
}
Clear
The EntityManager.clear() (https://java.sun.com/javaee/5/docs/api/javax/persistence/EntityManager.html#clear()) operation can be used to clear the persistence context. This will clear all objects read, changed, persisted, or removed from the current EntityManager or transaction. Changes that have already been written to the database through flush, or any changes made directly to the database, will not be cleared. Any object that was read or persisted through the EntityManager becomes detached, meaning any changes made to it will no longer be tracked, and it should no longer be used unless merged into the new persistence context.

clear can be used similarly to a rollback, to abandon changes and restart a persistence context. If a transaction commit fails, or a rollback is performed, the persistence context is automatically cleared. clear is similar to closing the EntityManager and creating a new one, the main difference being that clear can be called while a transaction is in progress.

clear can also be used to free the objects and memory consumed by the EntityManager. It is important to note that an EntityManager is responsible for tracking and managing all objects read within its persistence context. In an application managed EntityManager this includes every object read since the EntityManager was created, including every transaction the EntityManager was used for. If a long lived EntityManager is used, this is an intrinsic memory leak, so calling clear, or closing the EntityManager and creating a new one, is an important application design consideration. For JTA managed EntityManagers the persistence context is automatically cleared across each JTA transaction boundary.

Clearing is also important in large batch jobs, even if they occur in a single transaction. The batch job can be split into smaller batches within the same transaction, and clear can be called in between each batch to keep the persistence context from getting too big.

Example clear
public void processAllOpenOrders() {
    EntityManager em = getEntityManager();
    List<Long> openOrderIds = em.createQuery("SELECT o.id FROM Order o WHERE o.isOpen = true").getResultList();
    em.getTransaction().begin();
    try {
        for (int batch = 0; batch < openOrderIds.size(); batch += 100) {
            for (int index = 0; index < 100 && (batch + index) < openOrderIds.size(); index++) {
                Long id = openOrderIds.get(batch + index);
                Order order = em.find(Order.class, id);
                order.process(em);
            }
            em.flush();
            em.clear();
        }
        em.getTransaction().commit();
    } catch (RuntimeException error) {
        if (em.getTransaction().isActive()) {
            em.getTransaction().rollback();
        }
    }
}
Close
The EntityManager.close() (https://java.sun.com/javaee/5/docs/api/javax/persistence/EntityManager.html#close()) operation is used to release an application managed EntityManager's resources. JEE JTA managed EntityManagers cannot be closed, as they are managed by the JTA transaction and the JEE server.

The life-cycle of an EntityManager can last a transaction, a request, or a user's session. Typically the life-cycle is per request, and the EntityManager is closed at the end of the request. The objects obtained from an EntityManager become detached when the EntityManager is closed, and any LAZY relationships may no longer be accessible if they were not accessed before the EntityManager was closed. Some JPA providers allow LAZY relationships to be accessed after close.
Example close

public Order findOrder(long id) {
    EntityManager em = factory.createEntityManager();
    Order order = em.find(Order.class, id);
    // Access the LAZY relationship before closing so it remains available on the detached object.
    order.getOrderLines().size();
    em.close();
    return order;
}
Get Delegate
The EntityManager.getDelegate() (https://java.sun.com/javaee/5/docs/api/javax/persistence/EntityManager.html#getDelegate()) operation is used to access the JPA provider's EntityManager implementation class from a JEE managed EntityManager. A JEE managed EntityManager is wrapped by a proxy EntityManager by the JEE server, which forwards requests to the EntityManager active for the current JTA transaction. If a JPA provider specific API is desired, the getDelegate() API allows the JPA implementation to be accessed to call the API.

In JEE a managed EntityManager will typically create a new EntityManager per JTA transaction. The behavior is somewhat undefined outside of a JTA transaction context: a JEE managed EntityManager may create a new EntityManager per method, so getDelegate() may return a temporary EntityManager or even null. Another way to access the JPA implementation is through the EntityManagerFactory, which is typically not wrapped with a proxy, but may be in some servers. In JPA 2.0 the getDelegate() API has been replaced by the unwrap() API, which is more generic.

Example getDelegate
public void clearCache() {
    EntityManager em = getEntityManager();
    ((JpaEntityManager)em.getDelegate()).getServerSession().getIdentityMapAccessor().initializeAllIdentityMaps();
}
Example unwrap (JPA 2.0)

public void clearCache() {
    EntityManager em = getEntityManager();
    em.unwrap(JpaEntityManager.class).getServerSession().getIdentityMapAccessor().initializeAllIdentityMaps();
}
Transactions
A transaction is a set of operations that either fail or succeed as a unit. Transactions are a fundamental part of persistence. A database transaction consists of a set of SQL DML (Data Manipulation Language) operations that are committed or rolled back as a single unit. An object level transaction is one in which a set of changes made to a set of objects are committed to the database as a single unit.

JPA provides two mechanisms for transactions. When used in JEE, JPA provides integration with JTA (Java Transaction API). JPA also provides its own EntityTransaction implementation for JSE, and for use in a non-managed mode in JEE (the RESOURCE_LOCAL transaction-type in the persistence.xml). Transactions in JPA are always at the object level: all changes made to all persistent objects in the persistence context are part of the transaction.
Example RESOURCE_LOCAL persistence.xml

<persistence>
    <persistence-unit name="acme" transaction-type="RESOURCE_LOCAL">
        <non-jta-data-source>amce</non-jta-data-source>
    </persistence-unit>
</persistence>
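As a sketch of how a RESOURCE_LOCAL persistence unit is typically used through EntityTransaction (the unit name, and the Employee getSalary/setSalary methods, are assumptions for illustration):

EntityManagerFactory factory = Persistence.createEntityManagerFactory("acme");
EntityManager em = factory.createEntityManager();
try {
    em.getTransaction().begin();
    Employee employee = em.find(Employee.class, id);
    employee.setSalary(employee.getSalary() + 1000);
    // All changes tracked by the persistence context are written on commit.
    em.getTransaction().commit();
} finally {
    em.close();
}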
JTA Transactions
JTA transactions are used in JEE managed mode (EJB). To use JTA transactions the transaction-type attribute in the persistence.xml is set to JTA. If JTA transactions are used with a DataSource, the <jta-data-source> element should be used to reference a server DataSource that has been configured to be JTA managed. JTA transactions are defined through the JTA UserTransaction (https://java.sun.com/javaee/5/docs/api/javax/transaction/UserTransaction.html) class, or more likely implicitly defined through SessionBean usage/methods. In a SessionBean, normally each SessionBean method invocation defines a JTA transaction. UserTransaction can be obtained through a JNDI lookup in most application servers, or from the EJBContext in EJB 2.0 style SessionBeans.

JTA transactions can be used in two modes in JEE. In JEE managed mode, such as an EntityManager injected into a SessionBean, the EntityManager reference represents a new persistence context for each transaction. This means objects read in one transaction become detached after the end of the transaction and should no longer be used, or need to be merged into the next transaction. In managed mode, you never create or close an EntityManager.

The second mode allows the EntityManager to be application managed (normally obtained from an injected EntityManagerFactory, or directly from JPA Persistence). This allows the persistence context to survive transaction boundaries and follow the normal EntityManager life-cycle, similar to resource local. If the EntityManager is created in the context of an active JTA transaction, it will automatically be part of the JTA transaction and commit/rollback with it. Otherwise it must join a JTA transaction to commit/rollback, using EntityManager.joinTransaction() (https://java.sun.com/javaee/5/docs/api/javax/persistence/EntityManager.html#joinTransaction()).

Example JTA persistence.xml
<persistence>
    <persistence-unit name="acme" transaction-type="JTA">
        <jta-data-source>amce</jta-data-source>
    </persistence-unit>
</persistence>
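A hedged sketch of the JEE managed usage described above, where the EntityManager is injected into a SessionBean and each business method runs in its own JTA transaction (the bean name, method and Employee setter are illustrative assumptions):

import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class EmployeeService {
    @PersistenceContext(unitName = "acme")
    private EntityManager em;

    // The container starts a JTA transaction for the method call; the persistence
    // context flushes and commits with that transaction, no explicit begin/commit needed.
    public void giveRaise(long employeeId, double amount) {
        Employee employee = em.find(Employee.class, employeeId);
        employee.setSalary(employee.getSalary() + amount);
    }
}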
Advanced
Join Transaction
The EntityManager.joinTransaction() (https://java.sun.com/javaee/5/docs/api/javax/persistence/EntityManager.html#joinTransaction()) API allows an application managed EntityManager to join the active JTA transaction context. This allows an EntityManager to be created outside the JTA transaction scope and commit its changes as part of the current transaction. This is normally used with a stateful SessionBean, or with a JSP or Servlet, where an EXTENDED EntityManager is used in a stateful architecture.

A stateful architecture is one where the server stores information on a client connection until the client's session is over; it differs from a stateless architecture, where nothing is stored on the server in between client requests (each request is processed on its own). There are pros and cons to both stateful and stateless architectures. One of the advantages of using a stateful architecture and an EXTENDED EntityManager is that you do not have to worry about merging objects. You can read your objects in one request, let the client modify them, and then commit them as part of a new transaction. This is where joinTransaction would be used. One issue is that you normally want to create the EntityManager when there is no active JTA transaction, otherwise it will commit as part of that transaction. However, even if it does commit, you can still continue to use it and join a future transaction. You do have to avoid using transactional API such as merge or remove until you are ready to commit the transaction.

joinTransaction is only used with JTA managed EntityManagers (JTA transaction-type in the persistence.xml). For RESOURCE_LOCAL EntityManagers you can simply commit the JPA transaction whenever you desire.
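A minimal sketch of joinTransaction with an application managed (EXTENDED style) EntityManager; the UserTransaction is assumed to have been looked up from JNDI or obtained from the EJBContext as described earlier:

// Created earlier, outside of any active JTA transaction, and kept for the client's session.
EntityManager em = factory.createEntityManager();
Employee employee = em.find(Employee.class, id);
// ... the client edits the employee across one or more requests ...

// When ready to commit the accumulated changes:
userTransaction.begin();
em.joinTransaction(); // bind the existing persistence context to the active JTA transaction
userTransaction.commit(); // the EntityManager's changes commit with the JTA transaction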
Nested Transactions
JPA and JTA do not support nested transactions. A nested transaction is used to provide a transactional guarantee for a subset of operations performed within the scope of a larger transaction. Doing this allows you to commit and abort the subset of operations independently of the larger transaction. The rules for the usage of a nested transaction are as follows:
- While the nested (child) transaction is active, the parent transaction may not perform any operations other than to commit or abort, or to create more child transactions.
- Committing a nested transaction has no effect on the state of the parent transaction. The parent transaction is still uncommitted. However, the parent transaction can now see any modifications made by the child transaction. Those modifications, of course, are still hidden to all other transactions until the parent also commits.
- Likewise, aborting the nested transaction has no effect on the state of the parent transaction. The only result of the abort is that neither the parent nor any other transactions will see any of the database modifications performed under the protection of the nested transaction.
- If the parent transaction commits or aborts while it has active children, the child transactions are resolved in the same way as the parent. That is, if the parent aborts, then the child transactions abort as well. If the parent commits, then whatever modifications have been performed by the child transactions are also committed.
- The locks held by a nested transaction are not released when that transaction commits. Rather, they are now held by the parent transaction until such a time as that parent commits.
- Any database modifications performed by the nested transaction are not visible outside of the larger encompassing transaction until such a time as that parent transaction is committed.
- The depth of the nesting that you can achieve with nested transactions is limited only by memory.
Caching
Caching is the most important performance optimization technique. There are many things that can be cached in persistence: objects, data, database connections, database statements, query results, meta-data, relationships, to name a few. Caching in object persistence normally refers to the caching of objects or their data. Caching also influences object identity: if you read an object, then read the same object again, you should get the identical object back (same reference).

JPA 1.0 does not define a shared (server) object cache; JPA providers can support a shared cache or not, however most do. Caching in JPA is required within a transaction or within an extended persistence context to preserve object identity, but JPA does not require that caching be supported across transactions or persistence contexts. JPA 2.0 defines the concept of a shared cache. The @Cacheable (https://java.sun.com/javaee/6/docs/api/javax/persistence/Cacheable.html) annotation can be used to enable or disable caching on a class.

There are two types of caching. You can cache the objects themselves, including all of their structure and relationships, or you can cache their database row data. Both provide a benefit; however, just caching the row data misses a large part of the caching benefit, as the retrieval of each relationship typically involves a database query, and the bulk of the cost of reading an object is spent in retrieving its relationships.
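A brief sketch of the JPA 2.0 @Cacheable annotation mentioned above (the Country class is illustrative; whether the annotation has any effect also depends on the persistence unit's shared-cache-mode and the provider):

import javax.persistence.Cacheable;
import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
@Cacheable(true) // allow instances of this class in the shared (2nd level) cache
public class Country {
    @Id
    private String code;
    private String name;
}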
Object Identity
Object identity in Java means that if two variables (x, y) refer to the same logical object, then x == y returns true, meaning that both reference the same thing (both are pointers to the same memory location). In JPA object identity is maintained within a transaction, and (normally) within the same EntityManager. The exception is a JEE managed EntityManager, where object identity is only maintained inside of a transaction.

So the following is true in JPA:

Employee employee1 = entityManager.find(Employee.class, 123);
Employee employee2 = entityManager.find(Employee.class, 123);
assert (employee1 == employee2);

This holds true no matter how the object is accessed:

Employee employee1 = entityManager.find(Employee.class, 123);
Employee employee2 = employee1.getManagedEmployees().get(0).getManager();
assert (employee1 == employee2);

In JPA object identity is not maintained across EntityManagers. Each EntityManager maintains its own persistence context, and its own transactional state of its objects. So the following is true in JPA:

EntityManager entityManager1 = factory.createEntityManager();
EntityManager entityManager2 = factory.createEntityManager();
Employee employee1 = entityManager1.find(Employee.class, 123);
Employee employee2 = entityManager2.find(Employee.class, 123);
assert (employee1 != employee2);

Object identity is normally a good thing, as it avoids having your application manage multiple copies of objects, and avoids the application changing one copy but not the other. The reason different EntityManagers or transactions (in JEE) don't maintain object identity is that each transaction must isolate its changes from other users of the system. This is also normally a good thing; however, it does require the application to be aware of copies, detached objects and merging. Some JPA products may have a concept of read-only objects, in which object identity may be maintained across EntityManagers through a shared object cache.
Object Cache
An object cache is one in which the Java objects (entities) are themselves cached. The advantage of an object cache is that the data is cached in the same format in which it is used in Java. Everything is stored at the object level and no conversion is required when obtaining a cache hit. With JPA the EntityManager must still copy the objects to and from the cache, as it must maintain its transaction isolation, but that is all that is required: the objects do not need to be re-built, and the relationships are already available.

With an object cache, transient data may also be cached. This may occur automatically, or may require some effort. If caching transient data is not desired, you may need to clear the data when the object gets cached. Some JPA products allow read-only queries to access the object cache directly; some products only allow object caching of read-only data. Obtaining a cache hit on read-only data is extremely efficient, as the object does not need to be copied: other than the look-up, no work is required. It is possible to create your own object cache for your read-only data by loading objects from JPA into your own object cache or JCache implementation. The main issue, which is always the main issue in caching in general, is how to handle updates and stale cached data, but if the data is read-only this may not be an issue.

TopLink / EclipseLink : Support an object cache. The object cache is on by default, but can be globally or selectively enabled or configured per class. The persistence unit property "eclipselink.cache.shared.default" can be set to "false" to disable the cache. Read-only queries are supported through the "eclipselink.read-only" query hint; entities can also be marked always read-only using the @ReadOnly annotation.
Data Cache
A data cache caches the object's data, not the objects themselves. The data is normally a representation of the object's database row. The advantage of a data cache is that it is easier to implement, as you do not have to worry about relationships, object identity, or complex memory management. The disadvantage of a data cache is that it does not store the data as it is used in the application, and does not store relationships. This means that on a cache hit the object must still be built from the data, and the relationships fetched from the database. Some products that support a data cache also support a relationship cache, or query cache, to allow caching of relationships.

Hibernate : Supports integration with a third party data cache. Caching is not enabled by default, and a third party caching product such as Ehcache must be used to enable caching.
Caching Relationships
Some products support a separate cache for caching relationships. This is normally required for OneToMany and ManyToMany relationships. OneToOne and ManyToOne relationships normally do not need to be cached, as they reference the object's Id; however, an inverse OneToOne will require the relationship to be cached, as it references the foreign key, not the primary key. For a relationship cache, the results normally only store the related objects' Ids, not the objects or their data (to avoid duplicate and stale data). The key of the relationship cache is the source object's Id and the relationship name. Sometimes the relationship is cached as part of the data cache, if the data cache stores a structure instead of a database row. When a cache hit occurs on a relationship, the related objects are looked up in the data cache one by one. A potential issue with this is that if a related object is not in the data cache, it will need to be selected from the database. This can result in very poor database performance, as the objects are loaded one by one. Some products that support caching relationships also support batching the selects to alleviate this issue.
Cache Types
There are many different caching types. The most common is an LRU cache, one that ejects the Least Recently Used objects and maintains a fixed number of MRU (Most Recently Used) objects. Some cache types include:
- LRU - Keeps X number of recently used objects in the cache.
- Full - Caches everything read, forever. (Not always the best idea if the database is large.)
- Soft - Uses Java garbage collection hints to release objects from the cache when memory is low.
- Weak - Normally relevant with object caches; keeps any objects currently in use in the cache.
- L1 - The transactional cache that is part of every EntityManager; this is not a shared cache.
- L2 - A shared cache, conceptually stored in the EntityManagerFactory, so accessible to all EntityManagers.
- Data cache - The data representing the objects is cached (database rows).
- Object cache - The objects are cached directly.
- Relationship cache - The object's relationships are cached.
- Query cache - The result set from a query is cached.
- Read-only - A cache that only stores, or only allows, read-only objects.
- Read-write - A cache that can handle inserts, updates and deletes (non read-only).
- Transactional - A cache that can handle inserts, updates and deletes (non read-only), and obeys transactional ACID properties.
- Clustered - Typically refers to a cache that uses JMS, JGroups or some other mechanism to broadcast invalidation messages to other servers in the cluster when an object is updated or deleted.
- Replicated - Typically refers to a cache that uses JMS, JGroups or some other mechanism to broadcast objects to all servers when read into any of the servers' caches.
- Distributed - Typically refers to a cache that spreads the cached objects across several servers in a cluster, and can look up an object in another server's cache.

TopLink / EclipseLink : Support an L1 and L2 object cache. LRU, Soft, Full and Weak cache types are supported. A query cache is supported. The object cache is read-write and always transactional. Support for cache coordination through RMI and JMS is provided for clustering. The TopLink product includes a Grid component that integrates with Oracle Coherence to provide a distributed cache.
Query Cache
A query cache caches query results instead of objects. Object caches cache the object by its Id, so they are generally not very useful for queries that are not by Id. Some object caches support secondary indexes, but even indexed caches are not very useful for queries that can return multiple objects, as you always need to access the database to ensure you have all of the objects. This is where query caches are useful: instead of storing objects by Id, the query results are cached. The cache key is based on the query name and parameters. So if you have a NamedQuery that is commonly executed, you can cache its results and only need to execute the query the first time.

The main issue with query caches, as with caching in general, is stale data. Query caches normally interact with an object cache to ensure the objects are at least as up to date as in the object cache. Query caches also typically have invalidation options similar to object caches.

TopLink / EclipseLink : Support a query cache, enabled through the query hint "eclipselink.query-results-cache". Several configuration options, including invalidation, are supported.
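A sketch of enabling a query result cache through the EclipseLink hint named above (the named query is an assumption; hint support and accepted values vary by provider and version):

Query query = em.createNamedQuery("findAllOpenOrders");
query.setHint("eclipselink.query-results-cache", "true"); // cache the result set keyed by query name and parameters
List results = query.getResultList();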
Stale Data
The main issue with caching anything is the possibility of the cached version getting out of synch with the original. This is referred to as stale or out of synch data. For read-only data this is not an issue, but for data that changes, whether infrequently or frequently, it can be a major issue. There are many techniques for dealing with stale and out of synch data.
The simplest way to avoid stale data in the persistence context (1st level cache) is to use a new EntityManager per request, or per transaction. The 1st level cache can also be cleared using the EntityManager.clear() method, or an object can be refreshed using the EntityManager.refresh() method.
Refreshing
Refreshing is the most common solution to stale data. Most application users are familiar with the concept of a cache, know when they need fresh data, and are willing to click a refresh button. This is very common in an Internet browser: most browsers have a cache of web pages that have been accessed, and will avoid loading the same page twice unless the user clicks the refresh button. The same concept can be used in building JPA applications. JPA provides several refreshing options, see refreshing. Some JPA providers also support refreshing options in their 2nd level cache.

One option is to always refresh on any query to the database. This means that find() operations will still access the cache, but if a query accesses the database and brings back data, the 2nd level cache will be refreshed with that data. This avoids queries returning stale data, but means there will be less benefit from caching. The cost is not just in refreshing the objects, but in refreshing their relationships. Some JPA providers support this option in combination with optimistic locking: if the version value in the row from the database is newer than the version value of the object in the cache, the object is refreshed, as it is stale; otherwise the cached value is returned. This option provides optimal caching and avoids stale data on queries. However, objects returned through find() or through relationships can still be stale. Some JPA providers also allow find() operations to be configured to first check the database, but this generally defeats the purpose of caching, so you are better off not using a 2nd level cache at all. If you want to use a 2nd level cache, then you must have some level of tolerance to stale data.
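A sketch of enabling refreshing on a single query using the EclipseLink hint named earlier (the JPQL and parameter are illustrative; hint values may vary by version):

Query query = em.createQuery("SELECT e FROM Employee e WHERE e.lastName = :name");
query.setParameter("name", "Smith");
query.setHint("eclipselink.refresh", "true"); // refresh cached objects with the data returned by this query
List results = query.getResultList();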
Cache Invalidation
A common way to deal with stale cached data is to use cache invalidation. Cache invalidation removes or invalidates data or objects in the cache after a certain amount of time, or at a certain time of day. Time-to-live invalidation guarantees that the application will never read cached data that is older than a certain amount of time; the amount of time can be configured with respect to the application's requirements. Time-of-day invalidation allows the cache to be invalidated at a certain time of day, typically at night, which ensures data is never more than a day old. This can also be used if it is known that a batch job updates the database at night; the invalidation time can be set after this batch job is scheduled to run. Data can also be invalidated manually, such as using the JPA 2.0 evict() API.

Most cache implementations support some form of invalidation. JPA does not define any configurable invalidation options, so this depends on the JPA and cache provider.

TopLink / EclipseLink : Provide support for time-to-live and time-of-day cache invalidation using the @Cache annotation and <cache> orm.xml element. Cache invalidation is also supported through API, and can be used in a cluster to invalidate objects changed on other machines.
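A sketch of manual invalidation through the JPA 2.0 Cache API mentioned above:

Cache cache = em.getEntityManagerFactory().getCache();
cache.evict(Employee.class, employeeId); // invalidate a single cached instance
cache.evict(Employee.class);             // invalidate all cached instances of the class
cache.evictAll();                        // invalidate the entire shared cache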
Caching in a Cluster
Caching in a clustered environment is difficult because each machine will update the database directly, but will not update the other machines' caches, so each machine's cache can become out of date. This does not mean that caching cannot be used in a cluster, but you must be careful in how it is configured.

For read-only objects caching can still be used. For read-mostly objects, caching can be used, but some mechanism should be used to avoid stale data. If stale data is only an issue for writes, then using optimistic locking will avoid writes occurring on stale data. When an optimistic lock exception occurs, some JPA providers will automatically refresh or invalidate the object in the cache, so if the user or application retries the transaction the next write will succeed. Your application could also catch the lock exception and refresh or invalidate the object, and potentially retry the transaction if the user does not need to be notified of the lock error (be careful doing this though, as normally the user should be aware of the lock error). Cache invalidation can also be used to decrease the likelihood of stale data by setting a time to live on the cache. The size of the cache can also affect the occurrence of stale data.

Although returning stale data to a user may be an issue, returning stale data to the user who just updated the data is normally a bigger issue. This can normally be solved through session affinity, ensuring the user interacts with the same machine in the cluster for the duration of their session. This can also improve cache usage, as the same user will typically access the same data. It is normally also useful to add a refresh button to the UI; this allows users to refresh their data if they think it is stale, or wish to ensure it is up to date. The application can also choose to refresh the objects in places where up-to-date data is important, such as using the cache for read-only queries but refreshing when entering a transaction to update an object.

For write-mostly objects, the best solution may be to disable the cache for those objects. Caching provides no benefit to inserts, and the cost of avoiding stale data on updates may mean there is no benefit to caching objects that are always updated. Caching adds some overhead to writes, as the cache must be updated, and having a large cache also affects garbage collection, so if the cache is not providing any benefit it should be turned off to avoid this overhead. This can depend on the complexity of the object though: if the object has a lot of complex relationships, and only part of the object is updated, then caching may still be worth it.

Cache Coordination

One solution to caching in a clustered environment is to use a messaging framework to coordinate the caches between the machines in the cluster. JMS or JGroups can be used in combination with JPA or application events to broadcast messages to invalidate the caches on other machines when an update occurs. Some JPA and cache providers support cache coordination in a clustered environment.

TopLink / EclipseLink : Support cache coordination in a clustered environment using JMS or RMI. Cache coordination is configured through the @Cache annotation or <cache> orm.xml element, and using the persistence unit property eclipselink.cache.coordination.protocol.

Distributed Caching

A distributed cache is one where the cache is distributed across each of the machines in the cluster. Each object will only live on one, or a set number, of the machines. This avoids stale data, because when the cache is accessed or updated the object is always retrieved from the same location, so it is always up to date. The drawback to this solution is that cache access now potentially requires a network access. This solution works best when the machines in the cluster are connected together on the same high speed network, and when the database machine is not as well connected, or is under load. A distributed cache reduces database access, so it allows the application to be scaled to a larger cluster without the database becoming a bottleneck. Some distributed cache providers also provide a local cache, and offer cache coordination between the caches.

TopLink : Supports integration with the Oracle Coherence distributed cache.
Common Problems
I can't see changes made directly on the database, or from another server

This means you have either enabled caching in your JPA configuration, or your JPA provider caches by default. You can either disable the 2nd level cache in your JPA configuration, or refresh the object or invalidate the cache after changing the database directly. See Stale Data.

TopLink / EclipseLink : Caching is enabled by default. To disable caching set the persistence property "eclipselink.cache.shared.default" to false in your persistence.xml or persistence properties. You can also configure this on a per class basis if you want to allow caching in some classes and not in others. See the EclipseLink FAQ (http://wiki.eclipse.org/EclipseLink/FAQ/How_to_disable_the_shared_cache?).
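As a sketch of the EclipseLink persistence property mentioned above (the persistence unit name is an assumption):

<persistence-unit name="acme">
    <properties>
        <!-- Disable the shared (2nd level) cache for the whole persistence unit. -->
        <property name="eclipselink.cache.shared.default" value="false"/>
    </properties>
</persistence-unit>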
Spring
Spring (http://en.wikipedia.org/wiki/Spring_Framework) is an application framework for Java. Spring is an IoC (http://en.wikipedia.org/wiki/Inversion_of_Control) container that allows for a different programming model. Spring is similar to a JEE server, in that it provides a transaction service, XML deployment, annotation processing, byte-code weaving, and JPA integration.
Persistence
Persistence in Spring is normally done through a DAO (Data Access Object) layer. The Spring DAO layer is meant to encapsulate the persistence mechanism, so the same application data access API is exposed regardless of whether JDBC, JPA, or a native API is used. Spring also defines a transaction manager implementation that is similar to JTA, and supports transactional annotations and beans similar to SessionBeans.
JPA
Spring has specific support for JPA and can emulate some of the functionality of a JEE container with respect to JPA. Spring allows a JPA persistence unit to be deployed in container managed mode. If the spring-agent is used to start the JVM, Spring can deploy a JPA persistence unit with weaving similar to a JEE server. Spring can also pass a Spring DataSource to JPA and integrate its transaction service with JPA.

Spring allows the @PersistenceUnit and @PersistenceContext annotations to be used in any Spring bean class, to have an EntityManagerFactory or EntityManager injected. Spring supports a managed transactional EntityManager similar to JEE, where the EntityManager binds itself as a new persistence context to each new transaction and commits as part of the transaction. Spring supports both JTA integration and its own transaction manager.

Example Spring XML configuration
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans-2.0.xsd">

    <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
        <property name="persistenceUnitName" value="acme" />
        <property name="persistenceUnitManager" ref="persistenceUnitManager" />
    </bean>

    <bean id="persistenceUnitManager" class="org.springframework.orm.jpa.persistenceunit.DefaultPersistenceUnitManager">
        <property name="defaultDataSource" ref="dataSource" />
        <property name="dataSources">
            <map>
                <entry>
                    <key>
                        <value>jdbc/__default</value>
                    </key>
                    <ref bean="dataSource" />
                </entry>
                <entry>
                    <key>
                        <value>jdbc/jta</value>
                    </key>
                    <ref bean="dataSource" />
                </entry>
            </map>
        </property>
        <property name="loadTimeWeaver">
            <bean class="org.springframework.instrument.classloading.InstrumentationLoadTimeWeaver" />
        </property>
    </bean>
</beans>
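A hedged sketch of a Spring bean that uses the injected EntityManager with a configuration like the one above (the DAO class, the Order entity, and the transaction manager setup are assumptions):

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Repository;
import org.springframework.transaction.annotation.Transactional;

@Repository
public class OrderDao {
    // Spring injects a transactional, container managed EntityManager proxy.
    @PersistenceContext(unitName = "acme")
    private EntityManager em;

    @Transactional
    public Order findOrder(long id) {
        return em.find(Order.class, id);
    }

    @Transactional
    public void createOrder(Order order) {
        em.persist(order); // flushed and committed when the Spring transaction completes
    }
}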
MySQL
Using MySQL with the Java Persistence API is quite straightforward, as the JDBC driver is available directly from the MySQL web site (http://dev.mysql.com/). MySQL Connector/J is the official JDBC driver for MySQL and has good documentation available directly on the web site. In this page we will see some aspects of managing MySQL using the Persistence API.
Installing
Installation is straightforward and consists of making the .jar file downloaded from the MySQL web site visible to the JVM. It may already be installed if you are using a server such as Apache or JBoss.
Configuration tips
You can learn a lot from the documentation on the MySQL web site (http://dev.mysql.com/doc/refman/5.0/en/connector-j-reference-configuration-properties.html).
Note that the database (schema) referenced in the JDBC connection URL must already exist in MySQL; if not, the persistence API will complain that the database does not exist.
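A sketch of a persistence.xml configured for MySQL using the standard JPA 2.0 connection properties (the unit name, URL, user and password are placeholders; a JPA 1.0 provider would use its own equivalent property names):

<persistence-unit name="acme" transaction-type="RESOURCE_LOCAL">
    <properties>
        <property name="javax.persistence.jdbc.driver" value="com.mysql.jdbc.Driver"/>
        <property name="javax.persistence.jdbc.url" value="jdbc:mysql://localhost:3306/acme"/>
        <property name="javax.persistence.jdbc.user" value="user"/>
        <property name="javax.persistence.jdbc.password" value="password"/>
    </properties>
</persistence-unit>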
References
Resources
EJB 3.0 JPA 1.0 Spec (http://jcp.org/aboutJava/communityprocess/final/jsr220/index.html)
JPA 2.0 Spec (http://jcp.org/en/jsr/detail?id=317)
JPA 1.0 ORM XML Schema (http://java.sun.com/xml/ns/persistence/orm_1_0.xsd)
JPA 1.0 Persistence XML Schema (http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd)
JPA 1.0 JavaDoc (https://java.sun.com/javaee/5/docs/api/javax/persistence/package-summary.html)
JPQL BNF
JPA 2.0 Reference Implementation Development (http://wiki.eclipse.org/EclipseLink/Development/JPA)
Java Programming
Wikis
EclipseLink Wiki (http://wiki.eclipse.org/EclipseLink)
Oracle TopLink Wiki (http://wiki.oracle.com/page/TopLink)
Glassfish TopLink Essentials Wiki (http://wiki.glassfish.java.net/Wiki.jsp?page=TopLinkEssentials)
Hibernate Wiki (http://www.hibernate.org/37.html)
JPA on Wikipedia (http://en.wikipedia.org/wiki/Java_Persistence_API)
JPA on Javapedia (http://wiki.java.net/bin/view/Javapedia/JPA)
JPA on freebase (http://www.freebase.com/view/guid/9202a8c04000641f8000000004666d33)
JPA on DMOZ (Open Directory) (http://www.dmoz.org/Computers/Programming/Languages/Java/Databases_and_Persistence/Object_Persistence/JPA/)
Forums
Sun EJB Forum (http://forum.java.sun.com/forum.jspa?forumID=13)
JavaRanch ORM Forum (http://saloon.javaranch.com/cgi-bin/ubb/ultimatebb.cgi?ubb=forum&f=78)
Nabble JPA Forum (http://www.nabble.com/JPA-f27109.html)
EclipseLink Forum (http://www.nabble.com/EclipseLink-f26430.html)
EclipseLink Newsgroup (http://www.eclipse.org/newsportal/thread.php?group=eclipse.rt.eclipselink)
Oracle TopLink Forum (http://forums.oracle.com/forums/forum.jspa?forumID=48)
Hibernate Forum (http://forum.hibernate.org/)
TopLink Essentials Mailing List (Glassfish persistence) (http://www.nabble.com/java.net---glassfish-persistence-f13455.html)
Products
Oracle TopLink Home (http://www.oracle.com/technology/products/ias/toplink/index.html)
EclipseLink Home (http://www.eclipse.org/eclipselink/)
TopLink Essentials Home (https://glassfish.dev.java.net/javaee5/persistence/)
Hibernate Home (http://www.hibernate.org/)
Open JPA Home (http://openjpa.apache.org/)
HiberObjects (http://objectgeneration.com/eclipse/)
Blogs
Java Persistence (http://java-persistence.blogspot.com/) (Doug Clarke)
System.out (http://jroller.com/mkeith/) (Mike Keith)
On TopLink (http://ontoplink.blogspot.com/) (Shaun Smith)
EclipseLink (http://eclipselink.blogspot.com/)
Hibernate Blog (http://blog.hibernate.org/)
Books
Pro EJB 3 (http://www.amazon.com/gp/product/1590596455/102-2412923-9620152?v=glance&n=283155)
License
Creative Commons Attribution-Share Alike 3.0 Unported
http://creativecommons.org/licenses/by-sa/3.0/