SQL Workbench Manual
Table of Contents

1. General Information
   1.1. Software license
   1.2. Program version
   1.3. Feedback and support
   1.4. Credits and thanks
   1.5. Third party components
2. Change log
3. Installing and starting SQL Workbench/J
   3.1. Pre-requisites
   3.2. First time installation
   3.3. Upgrade installation
   3.4. Starting the program from the commandline
   3.5. Starting the program using the shell script
   3.6. Starting the program using the Windows launcher
   3.7. Configuration directory
   3.8. Increasing the memory available to the application
   3.9. Command line parameters
4. JDBC Drivers
   4.1. Configuring JDBC drivers
   4.2. Connecting through ODBC
   4.3. Specifying a library directory
   4.4. Popular JDBC drivers
5. Connecting to the database
   5.1. Connection profiles
   5.2. Managing profile groups
   5.3. JDBC related profile settings
   5.4. Extended properties for the JDBC driver
   5.5. SQL Workbench/J specific settings
   5.6. Connect to Oracle with SYSDBA privilege
   5.7. ODBC connections without a data source
6. Editing SQL Statements
   6.1. Editing files
   6.2. Command completion
   6.3. JOIN completion
   6.4. Customizing keyword highlighting
   6.5. Reformat SQL
   6.6. Create SQL value lists
   6.7. Programming related editor functions
7. Using SQL Workbench/J
   7.1. Displaying help
   7.2. Resizing windows
   7.3. Executing SQL statements
   7.4. Displaying results
   7.5. Creating stored procedures and triggers
   7.6. Dealing with BLOB and CLOB columns
   7.7. Performance tuning when executing SQL
   7.8. SQL Macros
   7.9. Using workspaces
   7.10. Saving and loading SQL scripts
   7.11. Viewing server messages
   7.12. Editing data
   7.13. Deleting rows from the result
   7.14. Deleting rows with foreign keys
   7.15. Navigating referenced rows
   7.16. Sorting the result
   7.17. Filtering the result
   7.18. Running stored procedures
   7.19. Export result data
   7.20. Copy data to the clipboard
   7.21. Import data into the result set
8. Variable substitution in SQL statements
   8.1. Defining variables
   8.2. Editing variables
   8.3. Using variables in SQL statements
   8.4. Prompting for values during execution
9. Using SQL Workbench/J in batch files
   9.1. Specifying the connection
   9.2. Specifying the script file(s)
   9.3. Specifying a SQL command directly
   9.4. Specifying a delimiter
   9.5. Specifying an encoding for the file(s)
   9.6. Specifying a logfile
   9.7. Handling errors
   9.8. Specify a script to be executed on successful completion
   9.9. Specify a script to be executed after an error
   9.10. Ignoring errors from DROP statements
   9.11. Changing the connection
   9.12. Controlling console output during batch execution
   9.13. Running batch scripts interactively
   9.14. Setting configuration properties
   9.15. Examples
10. Using SQL Workbench/J in console mode
   10.1. Entering statements
   10.2. Exiting console mode
   10.3. Setting or changing the connection
   10.4. Displaying result sets
   10.5. Running SQL scripts that produce a result
   10.6. Controlling the number of rows displayed
   10.7. Controlling the query timeout
   10.8. Managing connection profiles
11. Export data using WbExport
   11.1. Memory usage and WbExport
   11.2. Exporting Excel files
   11.3. General WbExport parameters
   11.4. Parameters for text export
   11.5. Parameters for XML export
   11.6. Parameters for type SQLUPDATE, SQLINSERT or SQLDELETEINSERT
   11.7. Parameters for Spreadsheet types (ods, xlsm, xls, xlsx)
   11.8. Parameters for HTML export
   11.9. Compressing export files
   11.10. Examples
12. Import data using WbImport
   12.1. General parameters
   12.2. Parameters for the type TEXT
   12.3. Text Import Examples
   12.4. Parameters for the type XML
   12.5. Update mode
13. Copy data across databases
   13.1. General parameters for the WbCopy command
   13.2. Copying data from one or more tables
   13.3. Copying data based on a SQL query
   13.4. Update mode
   13.5. Synchronizing tables
   13.6. Examples
14. Other SQL Workbench/J specific commands
   14.1. Create a report of the database objects - WbSchemaReport
   14.2. Compare two database schemas - WbSchemaDiff
   14.3. Compare data across databases - WbDataDiff
   14.4. Search source of database objects - WbGrepSource
   14.5. Search data in multiple tables - WbGrepData
   14.6. Define a script variable - WbVarDef
   14.7. Delete a script variable - WbVarDelete
   14.8. Show defined script variables - WbVarList
   14.9. Confirm script execution - WbConfirm
   14.10. Run a stored procedure with OUT parameters - WbCall
   14.11. Execute a SQL script - WbInclude (@)
   14.12. Extract and run SQL from a Liquibase ChangeLog - WbRunLB
   14.13. Handling tables or updateable views without primary keys
   14.14. Change the default fetch size - WbFetchSize
   14.15. Run statements as a single batch - WbStartBatch, WbEndBatch
   14.16. Extracting BLOB content - WbSelectBlob
   14.17. Control feedback messages - WbFeedback
   14.18. Setting connection properties - SET
   14.19. Changing read only mode - WbMode
   14.20. Show table structure - DESCRIBE
   14.21. List tables - WbList
   14.22. List stored procedures - WbListProcs
   14.23. List triggers - WbListTriggers
   14.24. Show the source of a stored procedure - WbProcSource
   14.25. List catalogs - WbListCat
   14.26. List schemas - WbListSchemas
   14.27. Change the connection for a script - WbConnect
   14.28. Run an XSLT transformation - WbXslt
   14.29. Using Oracle's DBMS_OUTPUT package
15. DataPumper
   15.1. Overview
   15.2. Selecting source and target connection
   15.3. Copying a complete table
   15.4. Advanced copy tasks
16. Database Object Explorer
   16.1. Objects tab
   16.2. Table details
   16.3. Modifying the definition of database objects
   16.4. Table data
   16.5. Changing the display order of table columns
   16.6. Customize data retrieval
   16.7. Customizing the generation of the table source
   16.8. View details
   16.9. Procedure tab
   16.10. Search table data
17. Common problems
   17.1. The driver class was not found
   17.2. Syntax error when creating stored procedures
   17.3. Timestamps with timezone information are not displayed correctly
   17.4. Excel export not available
   17.5. Out of memory errors
   17.6. Display problems when running under Windows
   17.7. High CPU usage when executing statements
   17.8. Oracle Problems
   17.9. MySQL Problems
   17.10. Microsoft SQL Server Problems
   17.11. DB2 Problems
   17.12. PostgreSQL Problems
   17.13. Sybase SQL Anywhere Problems
18. Options dialog
   18.1. General options
   18.2. Editor options
   18.3. Editor colors
   18.4. Font settings
   18.5. Auto-completion options
   18.6. Workspace options
   18.7. Options for displaying data
   18.8. Options for formatting data
   18.9. Options for data editing
   18.10. DbExplorer options
   18.11. Window Title
   18.12. SQL Formatting
   18.13. SQL Generation
   18.14. External tools
   18.15. Look and Feel
19. Configuring keyboard shortcuts
   19.1. Assign a shortcut to an action
   19.2. Removing a shortcut from an action
   19.3. Reset to defaults
20. Advanced configuration options
   20.1. Database Identifier
   20.2. DBID
   20.3. GUI related settings
   20.4. Editor related settings
   20.5. DbExplorer Settings
   20.6. Database related settings
   20.7. SQL Execution related settings
   20.8. Default settings for Export/Import
   20.9. Controlling the log file
   20.10. Configure Log4J logging
   20.11. Settings related to SQL statement generation
   20.12. Customize table source retrieval
   20.13. Filter settings
Index
1. General Information
1.1. Software license
Copyright (c) 2002-2010, Thomas Kellerer

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, publish, distribute, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

The source code or parts of the source code may only be reused with the permission of the author.

In order to ensure that this software stays free, selling, licensing or charging for the use of this software is prohibited. The right to include this software in a commercial product (bundling) is still granted, as long as this software is not the major functionality delivered.
Disclaimer
The software is provided "as is", without warranty of any kind, express or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose and noninfringement. In no event shall the author (Thomas Kellerer) be liable for any direct, indirect, incidental, special, exemplary, or consequential damages (including, but not limited to, procurement of substitute goods or services; loss of use, data, or profits; or business interruption) however caused and on any theory of liability, whether in contract, strict liability, or tort (including negligence or otherwise) arising in any way out of the use of this software, even if advised of the possibility of such damage. In other words: use it at your own risk, and don't blame me if you accidentally delete your database!
1.5.2. Icons
Some icons are taken from the Tango project: http://tango.freedesktop.org/Tango_Icon_Library
Some icons are taken from the KDE Crystal project: http://www.everaldo.com/crystal/
The DbExplorer icon is from the icon set "Mantra" by Umar Irshad: http://umar123.deviantart.com/
2. Change log
Changes from build 109 to build 110
Enhancements
- The (self-written) Windows launcher has been dropped and replaced with executables from WinRun4J, including support for 64bit executables (thanks to Markus for testing this).
- EXPLAIN is now supported for auto-completion (PostgreSQL, Oracle, MySQL).
- The DbExplorer will now show the status of Oracle stored procedures.
- The DbExplorer now includes information about partitioned tables and indexes for Oracle in the generated source.
- The possibility to influence the generated INSERT statement for WbImport is now available as a commandline parameter (previously this was only possible through a property in workbench.settings).
- Two new options for the SQL formatter are available that control the placement of the comma when line breaks are inserted. Thanks to Andreas for this patch.
- The option "Allow empty line as statement delimiter" has been moved into the settings dialog and is now also used when detecting the current statement in the editor.
- The font size of the editor can now be changed dynamically using Ctrl-Numpad Plus/Ctrl-Numpad Minus or via the scroll wheel of the mouse (holding down the Ctrl key). The font size of a result table can be changed using the scroll wheel or through the context menu in the table header.
- It is now (optionally) possible to modify the text in the editor while a statement is running. Statement and error highlighting will not be available if the editor contents were modified during execution.
- When the option "Show Max. Rows warning" is enabled, the warning is now shown using an icon in the tab header (instead of changing the color).
- The join condition inside a JOIN part of a SELECT statement can now be generated automatically (if the tables involved are using foreign keys) using SQL -> JOIN completion.
- Primary key columns are now shown in bold face in the auto-completion popup window. If comments are defined for columns, tables or views, these will be shown as a tooltip for the entry in the popup.
- A new parameter -skipTargetCheck is available for WbCopy to disable checking of the target table. This is useful if the target table is not visible to the JDBC driver (e.g. temporary tables for Informix).
- A new parameter (-tableType) is available for WbCopy to control what kind of table is created when using -createTarget. The parameter value selects a SQL "template" that is defined in workbench.settings.
- When using WbExport and -quoteAlways=true, null values are no longer escaped. The parameter can now be used with WbImport to distinguish between null values and empty strings.
- The escaping of embedded quotes can now also be selected when using "Save Data as" or "Import file".
- When connected to an Excel spreadsheet, [] are now also recognized as quote characters (to allow a semicolon in a "table" name).
- The JDBC driver templates can now be loaded from an external file named "DriverTemplates.xml" that has to be stored in the directory where the jar file is located.
- When starting the console mode without specifying a password, or when using a profile without a password, the application now prompts to enter a password.
- A new option "Connection Timeout" has been added to the connection profiles.
- A new option to log all executed SQL statements to the logfile has been added (Tools -> Options -> General -> Log all SQL statements).
- The DataPumper now remembers the settings for importing text or XML files.
- A new SQL formatter option has been added to insert a space after a comma when processing IN (...) lists.
- The "Delete data" dialog in the DbExplorer now correctly commits for DBMS where TRUNCATE is transaction safe. The CASCADE option for PostgreSQL is also supported.
Bug fixes
- Changing the font size in the editor would corrupt the display of the caret.
- When exporting data, two tabs could not be defined as a separator (only a single tab).
- Oracle procedures with the status "INVALID" were not shown in the DbExplorer.
- The JOIN completion did not always detect the FKs correctly.
- Cancelling an import did not roll back the changes.
- When reloading the table list in the DbExplorer while "editing" mode was active, the table list was not retrieved correctly.
- Using WbCopy with schema qualified table names did not work properly when the target connection was using a different default schema.
- The rowcount in the statusbar was not always showing the correct values when more than one result was present.
- JOIN completion did not work.
- WbCall did not work for functions that were using OUT parameters.
- When using a query as the source for WbCopy that used column aliases between databases that store object names in different case, a wrong INSERT or UPDATE statement was created.
- When changing the current database, the object cache was not updated correctly.
- "Filter by selected value" did not work for boolean columns.
- For PostgreSQL, columns defined as bit(x) with x > 1 were not displayed correctly.
- When running a statement there was a built-in limit of 15 warnings that would be displayed by SQL Workbench/J. For PostgreSQL, that also limited the output of messages from RAISE NOTICE to 15.
- The datatype for parameters in Oracle stored procedures was not always displayed correctly.
- When selecting the JdbcOdbc bridge as the driver class, SQL Workbench/J incorrectly showed an error message that the driver could not be found.
- Copying a table's column definitions into the clipboard did not work.
- When using WbCopy and -createTarget with a fully qualified table name (otherschema.sometable), the table was not created in the specified schema.
- PostgreSQL DOMAINs were not displayed if the DOMAIN was created in a schema that is not in the schema search_path.
- A selection of '*' in the DbExplorer's schema selector was not restored if "Remember DbExplorer schema" was enabled for the workspace.
- -quoteCharEscaping=duplicate was not working for WbImport if a quote character other than " was used.
- Procedures were no longer displayed in the DbExplorer for DB2.
- A workaround for an Oracle driver bug was implemented, where the datatype for TIMESTAMP(3) was reported incorrectly.
- The generated XML content was not valid for ODS or OpenXML export if the generating SQL contained characters that needed escaping in XML.
- Statements were not closed properly when retrieving Oracle object type information.
- Oracle RAW columns were not displayed correctly if the automatic data conversion was turned on.
- Using WbDataDiff (and WbSchemaDiff) with the -referenceSchema parameter or selecting specific tables for comparison did not work.
- "Optimize column width" calculated a width that was too wide (and increased with the total number of columns).
- When using the DataPumper to import XML files, non-standard column names were not quoted properly in the UPDATE statement.
- When saving and loading the same file, an empty line was appended each time the file was loaded.
- The console interface was no longer working.
- Importing multiple zipped text files into tables with BLOB columns did not work when using a batch size > 1.
- When copying data to the clipboard, the data was always copied in the column order as retrieved from the database. If the column order was changed, this was not reflected in the copied data.
- If a stored procedure in Oracle existed with the same name as a standalone procedure and as a packaged procedure, the procedure columns for the standalone procedure were not displayed correctly in the DbExplorer.
- When using the new lobsPerDirectory parameter for WbExport, the directory numbering did not restart with a new table.
- WbSchemaReport created invalid SQL if column names contained characters that needed a replacement with an XML entity.

The full release history is available at the SQL Workbench/J homepage.
If WORKBENCH_JDK is not defined, the shell script will check for the environment variable JAVA_HOME. If that is defined, the script will use $JAVA_HOME/bin/java to run the application. If neither WORKBENCH_JDK nor JAVA_HOME is defined, the shell script will simply use java to start the application, assuming that a valid Java runtime is available on the path. All parameters that are passed to the shell scripts are passed to the application, not to the Java runtime. If you want to change the memory or other system settings for the JVM, you need to edit the shell script.
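The lookup order described above can be summarized with the following sketch. This is a simplified outline for illustration only; the script actually shipped with SQL Workbench/J may differ in detail:

# simplified sketch of the JVM lookup order described above
if [ -n "$WORKBENCH_JDK" ]
then
  JAVACMD="$WORKBENCH_JDK/bin/java"
elif [ -n "$JAVA_HOME" ]
then
  JAVACMD="$JAVA_HOME/bin/java"
else
  # fall back to whatever java is on the PATH
  JAVACMD="java"
fi
# all commandline parameters are passed to the application, not the JVM
exec "$JAVACMD" -jar sqlworkbench.jar "$@"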
Note that before build 98, the default configuration directory was the program's directory and not a directory in the user's home directory.

The following files are stored in the configuration directory:

- General configuration settings (workbench.settings)
- Connection profiles (WbProfiles.xml)
- JDBC driver definitions (WbDrivers.xml)
- Customized shortcut definitions (WbShortcuts.xml). If you did not customize any of the shortcuts, this file does not exist.
- Macro definitions (WbMacros.xml)
- Log file (workbench.log)
- Workspace files (*.wksp)

If you want to use a different file for the connection profiles than WbProfiles.xml, you can specify the location of the profiles with the -profilestorage parameter on the commandline. Thus you can create different shortcuts on your desktop pointing to different sets of profiles. The different shortcuts can still use the same main configuration file.
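For example, a desktop shortcut could point to a separate profile file like this (the file name is hypothetical):

java -jar sqlworkbench.jar -profilestorage=c:\ConfigData\oracle-profiles.xml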
If you want to use a different configuration directory, specify it with the -configDir parameter, e.g.:

java -jar sqlworkbench.jar -configDir=c:\ConfigData\SQLWorkbench

or if you are using the Windows launcher:

SQLWorkbench -configDir=c:\ConfigData\SQLWorkbench

The placeholder ${user.home} will be replaced with the current user's home directory (as returned by the operating system), e.g.:

java -jar sqlworkbench.jar -configDir=${user.home}/.sqlworkbench

If the specified directory does not exist, it will be created.

To copy an installation to a different computer, simply copy all the above files to the other computer (the log file does not need to be copied). When a profile is connected to a workspace, the filename of the workspace file is usually stored with a placeholder for the configuration directory (%configDir%), so that the profiles don't need to be adjusted. You will need to edit the driver definitions (stored in WbDrivers.xml), as the full path to the driver's jar file(s) is stored in the file (unless you define the location of the drivers using the libdir variable).
SQL Workbench/J reads the data that is returned by a SELECT statement into main memory. When retrieving large result sets, you might get an error message indicating that not enough memory is available. In this case you need to increase the memory that the JVM requests from the operating system (or change your statement to return fewer rows).

When you use the Windows launcher to start SQL Workbench/J, you need to create a configuration file named SQLWorkbench.ini (or SQLWorkbench64.ini when using the 64bit launcher) with the following content:

vm.heapsize.preferred=512

This will increase the memory for the application to 512MB. For more options to configure the JVM, please refer to the documentation of WinRun4J.

If you are running SQL Workbench/J on a non-Windows operating system, or do not want to use the launcher, then you need to pass this parameter directly to the JVM:

java -Xmx512m -jar sqlworkbench.jar

If you are using the supplied shell scripts to start SQL Workbench/J, you can edit the scripts and change the value for the -Xmx parameter in there.
java -jar sqlworkbench.jar -configDir=${user.home}/wbconfig

SQLWorkbench -configDir='c:\Configurations\SQLWorkbench'

On the Windows platform you can use a forward slash to separate directory names in the parameter.
If the value of the parameter does not contain a path, the file will be expected (and stored) in the configuration directory.
-rollbackOnDisconnect
  If this parameter is set to true, a ROLLBACK will be sent to the DBMS before the connection is closed. This setting is also available in the connection profile.

-trimCharData
  Turns on right-trimming of values retrieved from CHAR columns. See the description of the profile properties for details.

-removeComments
  This parameter corresponds to the Remove comments setting of the connection profile.

-fetchSize
  This parameter corresponds to the Fetch size setting of the connection profile.

-ignoreDropError
  This parameter corresponds to the Ignore DROP errors setting of the connection profile.

-emptyStringIsNull
  This parameter corresponds to the Empty String is NULL setting of the connection profile. This will only be needed when editing a result set in GUI mode.

-connectionProperties
  This parameter can be used to pass extended connection properties if the driver does not support them e.g. in the JDBC URL. The values are passed as key=value pairs, e.g. -connectionProperties=someProp=42

  If either a comma or an equal sign occurs in a parameter's value, it must be quoted. This means, when passing multiple properties, the whole expression needs to be quoted: -connectionProperties='someProp=42,otherProp=24'.

  As an alternative, a colon can be used instead of the equals sign, e.g. -connectionProperties=someProp:42,otherProp:24. In this case no quoting is needed (because no delimiter is part of the parameter's value).

  If any of the property values contain a comma or an equal sign, then the whole parameter value needs to be quoted again, even when using a colon: -connectionProperties='someProp:"answer=42",otherProp:"2,4"' will define the value answer=42 for the property someProp and the value 2,4 for the property otherProp.

-altDelimiter
  The alternate delimiter to be used for this connection. To define a single line delimiter, append the characters :nl to the parameter value, e.g. -altDelimiter=GO:nl to define a SQL Server like GO as the alternate delimiter. Note that when running in batchmode you can also override the default delimiter by specifying the -delimiter parameter.
-separateConnection
  If this parameter is set to true, and SQL Workbench/J is run in GUI mode, each SQL tab will use its own connection to the database server. This setting is also available in the connection profile. The default is true.

-workspace
  The workspace file to be loaded. If the file specification does not include a directory, the workspace will be loaded from the configuration directory. If this parameter is not specified, the default workspace (Default.wksp) will be loaded.

-readOnly
  Puts the connection into read-only mode.
If a value for one of the parameters contains a dash or a space, you will need to quote the parameter value. A disadvantage of this method is that the password is displayed in plain text on the command line. If this is used in a batch file, the password will be stored in plain text in the batch file. If you don't want to expose the password, you can use a connection profile and enable password encryption for connection profiles.
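For example, a hypothetical invocation using some of the parameters described above (note the quoted workspace name containing a space):

java -jar sqlworkbench.jar -workspace='my workspace.wksp' -fetchSize=5000 -connectionProperties='someProp=42,otherProp=24'

With the connection data stored in a profile, the password stays off the commandline. Assuming a profile named "Production" and the -profile parameter described in the chapter on batch files:

java -jar sqlworkbench.jar -profile='Production'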
4. JDBC Drivers
4.1. Configuring JDBC drivers
Before you can connect to a DBMS you have to configure the JDBC driver to be used. The driver configuration is available in the connection dialog or through File -> Manage Drivers.

The configuration of a specific driver requires at least two properties:

- the driver's class name
- the library ("JAR file") where the driver class is located

After you have selected the .jar file for a driver, SQL Workbench/J will scan the jar file looking for a JDBC driver. If only a single driver is found, the classname is automatically put into the entry field. If more than one class is found that is a driver implementation, you will be prompted to select one. In that case, please refer to the manual of your driver to choose the correct one.

If you enter the class name of the driver manually, remember that it's case-sensitive: org.postgresql.driver is different from org.postgresql.Driver (note the capital D for Driver).

The name of the library has to contain the full path to the driver's jar file, so that SQL Workbench/J can find it. Some drivers are distributed in several jar files. In that case, select all necessary files in the file open dialog, or enter all the filenames separated by a semicolon (or a colon on Unix style operating systems). This is also true for drivers that require a license file that is contained in a jar file. In this case you have to include the license jar in the list of files.

Basically this list defines the classpath for the classloader that is used to load and instantiate the driver. If the driver accesses files through its classpath definition that are not contained in a jar library, you have to include that directory as part of the library definition (e.g.: "c:\etc\TheDriver\jdbcDriver.jar;c:\etc\TheDriver"). The file selection dialog will not let you select a directory, so you have to add it manually to the library definition.

SQL Workbench/J is not using the system CLASSPATH definition (i.e. the environment variable) to load the driver classes. Changing the CLASSPATH environment variable to include your driver's library will not work. Using the -cp switch to add a driver to the classpath when starting the application through a batch file will also not work.

You do not need to specify a library for the JDBC-ODBC bridge, as the necessary drivers are already part of the Java runtime.

You can assign a sample URL to each driver, which will be put into the URL property of the profile when the driver class is selected. SQL Workbench/J comes with some sample URLs pre-configured. Some of these sample URLs use brackets to indicate parameters that need to be replaced with the actual value for your connection: (servername). In this case the entire sequence, including the brackets, needs to be replaced with the actual value.
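For example, a sample URL for PostgreSQL might look like this (the actual pre-configured template may differ):

jdbc:postgresql://(servername)/(dbname)

With hypothetical values dbserver and sales filled in, the URL in the profile would then read:

jdbc:postgresql://dbserver/sales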
Driver class names for some popular DBMS:

Firebird: org.firebirdsql.jdbc.FBDriver
Oracle:   oracle.jdbc.OracleDriver
Sybase:   com.sybase.jdbc3.jdbc.SybDriver
MySQL:    com.mysql.jdbc.Driver
Some drivers require those properties to be so-called "system properties" (see the manual of your driver for details). If this is the case for your driver, check the option Copy to system properties before connecting.
As an ANSI compliant SQL Lexer is used for detecting comments, this does not work for non-standard MySQL comments using the # character.
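To illustrate (the statement itself is hypothetical): the first comment below is recognized by the lexer, the second one is not:

-- this ANSI style comment is detected by the SQL lexer
SELECT 42 FROM person;

# this MySQL specific comment is not recognized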
5.5.15. Workspace
For each connection profile, a workspace file can (and should) be assigned. When you create a new connection profile, you can either leave this field empty or supply a name for a new workspace file.

If the workspace file that you specify does not exist, you will be prompted whether you want to create a new file, load a different workspace or ignore the missing file. If you choose to ignore, the association with the workspace file will be cleared and the default workspace will be loaded.

If you choose to leave the workspace file empty, or ignore the missing file, you can later save your workspace to a new file. When you do that, you will be prompted whether you want to assign the new workspace to the current connection profile.
To save your current workspace, choose Workspace -> Save Workspace as to create a new workspace file. When specifying the location of the workspace file, you can use the placeholder %ConfigDir% as part of the filename. The file will then be stored in the same directory as SQL Workbench/J's configuration files, e.g.: %ConfigDir%/oracle.wksp

When you use the %ConfigDir% placeholder, you can move the profiles and workspaces to a different computer without changing the location of the workspace files. The placeholder will be put automatically into the filename when you select the location of the workspace using the file dialog. The file dialog will be opened when you click the button with ... to the right of the input field.

As the workspace stores several settings that are related to the connection (e.g. the selected schema in the DbExplorer), it is recommended to create one workspace for each connection profile.
If you want to filter all schemas that start with a certain value, the regular expression would be: ^pg_toast.*. Note the dot followed by a * at the end: in a regular expression the dot matches any character, and the * allows any number of characters to follow. The ^ specifies that the match must occur at the beginning of the value. The regular expression must match completely in order to exclude the value from the dropdown.

If you want to learn more about regular expressions, please have a look at http://www.regular-expressions.info/
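As another (hypothetical) example, to exclude exactly the two schemas pg_catalog and information_schema, the expression would be:

^(pg_catalog|information_schema)$

The alternation inside the group matches either name, and because the expression must match completely, no other schema is filtered.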
SELECT ord.amount, ord.order_date, prod.name
FROM orders ord
  JOIN product prod ON

When the cursor is located behind the ON keyword and you select SQL -> JOIN completion, SQL Workbench/J will retrieve the foreign key and corresponding primary key definitions between the tables orders and product. If such constraints exist, the corresponding condition will be generated and written into the editor. After executing JOIN completion, the SQL statement will look like this:

SELECT ord.amount, ord.order_date, prod.name
FROM orders ord
  JOIN product prod ON prod.id = ord.product_id

This feature depends on the usage of the JOIN keyword. Joining tables in the WHERE clause is not supported!
select user.* from user, user_profile, user_data where user.user_id = user_profile.user_id and user_profile.user_id = uprof.user_id and user_data.user_role = 1 and user_data.delete_flag = 'F' and not exists (select 1 from data_detail where data_detail.id = user_data.id and data_detail.flag = 'X' and data_detail.value > 42)

this will be reformatted to look like this:

SELECT user.*
FROM user,
     user_profile,
     user_data
WHERE user.user_id = user_profile.user_id
AND   user_profile.user_id = uprof.user_id
AND   user_data.user_role = 1
AND   user_data.delete_flag = 'F'
AND   NOT EXISTS (SELECT 1 FROM data_detail WHERE data_detail.id = user_data.id AND data_detail.flag = 'X' AND data_detail.value > 42)

You can configure a threshold up to which sub-SELECTs will not be reformatted but put into one single line. The default for this threshold is 80 characters, meaning that any sub-SELECT shorter than that will not be reformatted, as with the sub-SELECT in the above example. Please refer to Formatting options for details.
For example, a selection containing the values 42, 43, 44 and 45, each on its own line, will be converted to:

(42, 43, 44, 45)

These two functions will only be available when text is selected which spans more than one line.
String sql="SELECT p.name, \n" +
"       p.firstname, \n" +
"       a.street, \n" +
//"       a.county, \n" +
"       a.zipcode, \n" +
"       a.phone \n" +
"FROM person p, \n" +
"     address a \n" +
"WHERE p.person_id = a.person_id; \n"

will be converted to:

SELECT p.name,
       p.firstname,
       a.street,
--"       a.county, " +
       a.zipcode,
       a.phone
FROM person p,
     address a
WHERE p.person_id = a.person_id;
This feature requires that the getParameterCount() and getParameterType() methods of the ParameterMetaData class are implemented by the JDBC driver and return the correct information about the available parameters.

The following drivers have been found to support (at least partially) this feature:

- PostgreSQL, driver version 8.1-build 405
- H2 Database Engine, version 1.0.73
- Apache Derby, version 10.2
- Firebird SQL, Jaybird 2.0 driver
- HSQLDB, version 1.8.0

Drivers known to not support this feature:

- Oracle 10g driver (ojdbc14.jar)
- Microsoft SQL Server 2000/2005 driver (sqljdbc.jar)
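Whether your driver supports this can be checked outside of SQL Workbench/J with a small standalone program. The following is a minimal sketch; the JDBC URL, the credentials and the person table are hypothetical and need to be adjusted for your DBMS:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ParameterMetaData;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class CheckParameterMetaData
{
  public static void main(String[] args) throws SQLException
  {
    // hypothetical connection data - adjust URL, user and password
    try (Connection con = DriverManager.getConnection("jdbc:postgresql://localhost/test", "user", "secret");
         PreparedStatement pstmt = con.prepareStatement("SELECT * FROM person WHERE id = ?"))
    {
      ParameterMetaData meta = pstmt.getParameterMetaData();
      // a driver supporting this feature reports one parameter here
      System.out.println("parameter count: " + meta.getParameterCount());
      for (int i = 1; i <= meta.getParameterCount(); i++)
      {
        // getParameterType() returns a constant from java.sql.Types
        System.out.println("parameter " + i + " type: " + meta.getParameterType(i));
      }
    }
  }
}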
DELETE FROM person|
WHERE lastname = 'Dent';

COMMIT;

When pressing Ctrl-Enter, the DELETE statement will be executed (the | marks the position of the cursor).

You can configure SQL Workbench/J to automatically jump to the next statement after executing the current statement. Simply select SQL -> Auto advance to next statement. The check mark next to the menu item indicates if this option is enabled. This option can also be changed through the Options dialog.

Execute All

If you want to execute the complete text in the editor regardless of the current selection, use the Execute all command, either by pressing Ctrl-Shift-E or selecting SQL -> Execute All.

When executing all statements in the editor you have to delimit each statement, so that SQL Workbench/J can identify each statement. If your statements are not delimited using a semicolon, the whole editor text is sent as a single statement to the database. Some DBMS support this (e.g. Microsoft SQL Server), but most DBMS will throw an error in that case. A script with two statements could look like this:

UPDATE person
   SET numheads = 2
 WHERE name='Beeblebrox';

COMMIT;

or:

DELETE FROM person;
DELETE FROM address;
COMMIT;

INSERT INTO person (id, firstname, lastname) VALUES (1, 'Arthur', 'Dent');
INSERT INTO person (id, firstname, lastname) VALUES (4, 'Mary', 'Moviestar');
INSERT INTO person (id, firstname, lastname) VALUES (2, 'Zaphod', 'Beeblebrox');
INSERT INTO person (id, firstname, lastname) VALUES (3, 'Tricia', 'McMillian');
COMMIT;

You can specify an alternate delimiter that can be used instead of the semicolon. See the description of the alternate delimiter for details. This is also needed when running DDL scripts (e.g. for stored procedures) that contain semicolons that should not delimit the statements.

As long as at least one statement is running, the title of the main window will be prefixed with the » sign. Even if the main window is minimized you can still see if a statement is running by looking at the window title.
You can use variables in your SQL statements that are replaced when the statement is executed. Details on how to use variables can be found in the chapter Variable substitution.

JDBC drivers do not support multi-threaded execution of statements on the same physical connection. If you want to run two statements at the same time, you will need to enable the Separate connection per tab option in your connection profile. In this case SQL Workbench/J will open a physical connection for each SQL tab, so that statements in the different tabs can run concurrently.
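As a quick preview, a minimal sketch (the variable name is made up; the exact syntax for defining and referencing variables is described in that chapter):

WbVarDef search_name='Dent';

SELECT *
FROM person
WHERE lastname = '$[search_name]';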
Statement history
When executing a statement, the contents of the editor is put into an internal buffer together with the information about the text selection and the cursor position. Even when you select a part of the current text and execute that statement, the whole text is stored in the history buffer together with the selection information. When you select and execute different parts of the text and then move through the history, you will see the selection change for each history entry.

The previous statement can be recalled by pressing Alt-Left or choosing SQL -> Previous Statement from the menu. Once the previous statement(s) have been recalled, the next statement can be shown using Alt-Right or choosing SQL -> Next Statement from the menu. This is similar to browsing through the history of a web browser.

You can clear the statement history for the current tab by selecting SQL -> Clear history.

When you clear the content of the editor (e.g. by selecting the whole text and then pressing the Del key), this will not clear the statement history. When you load the associated workspace the next time, the editor will automatically display the last statement from the history. You need to manually clear the statement history if you want an empty editor the next time you load the workspace.
The following example executes two statements. The result for the first will be labelled "List of contacts" and the result for the second will be labelled "List of companies":

-- @wbresult List of contacts
SELECT * FROM person;

/* @wbresult List of companies
   this will retrieve all companies from the database */
SELECT * FROM company;

As you can see, you can put the @wbresult keyword into a single-line or multi-line comment. The name that is used will be everything after the keyword until the end of the line. For the second select (with the multi-line comment), the name of the result tab will be "List of companies"; the comment on the second line will not be considered.
END;
/

Note the trailing forward slash (/) at the end in order to "turn on" the use of the alternate delimiter. If you run scripts with embedded semicolons and you get an error, please verify the setting for your alternate delimiter.

When is the alternate delimiter used?

As soon as the statement (or script) that you execute is terminated with the alternate delimiter, the alternate delimiter is used to separate the individual SQL statements. When you execute selected text from the editor, be sure to select the alternate delimiter as well, otherwise it will not be recognized (if the alternate delimiter is not selected, the statement to be executed does not end with the alternate delimiter).

You cannot mix the standard semicolon and the alternate delimiter inside one script. If you use the alternate delimiter (by terminating the whole script with it), then all statements have to be delimited with it. The following script (when executed completely) would therefore produce an error message:

SELECT sysdate FROM DUAL;

CREATE OR REPLACE FUNCTION proc_sample RETURN INTEGER
IS
  result INTEGER;
BEGIN
  SELECT max(col1) INTO result FROM sometable;
  RETURN result;
END;
/

Because the script is terminated with the alternate delimiter, SQL Workbench/J will use the alternate delimiter to separate the statements; the SELECT statement at the beginning will therefore be sent to the database together with the CREATE statement. This of course is an invalid statement. You will need to either select and run each statement individually or change the delimiter after the SELECT to the alternate delimiter.
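A corrected version of the script, as a sketch: every statement, including the SELECT, is terminated with the alternate delimiter (assuming the forward slash is configured as the alternate delimiter):

SELECT sysdate FROM DUAL
/

CREATE OR REPLACE FUNCTION proc_sample RETURN INTEGER
IS
  result INTEGER;
BEGIN
  SELECT max(col1) INTO result FROM sometable;
  RETURN result;
END;
/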
WHERE id = 24;

SQL Workbench/J will rewrite the UPDATE statement and send the contents of the file located in c:/data/image.bmp to the database.

The syntax for inserting BLOB data is similar. Note that some DBMS might not allow you to supply a value for the blob column during an INSERT. In this case you need to first insert the row without the blob column, then use an UPDATE to send the blob data. You should make sure to update only one row by specifying an appropriate WHERE clause.

INSERT INTO theTable (id, blob_col)
VALUES (42, {$blobfile=c:/data/image.bmp});

This will create a new record with id=42 and the content of c:/data/image.bmp in the column blob_col.
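For a DBMS that does not allow supplying the BLOB value during the INSERT, the two-step approach mentioned above could look like this (a sketch re-using the example table and file):

-- create the row without the BLOB column first
INSERT INTO theTable (id) VALUES (42);

-- then send the BLOB data with an UPDATE, making sure the
-- WHERE clause identifies exactly one row
UPDATE theTable
   SET blob_col = {$blobfile=c:/data/image.bmp}
 WHERE id = 42;
COMMIT;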
From within this information dialog, you can also upload a file to be stored in that BLOB column. The file contents will not be sent to the database server until you actually save the changes to your result set (this is the same for all changes you make directly in the result set; for details please refer to Editing the data). When using the upload function in the BLOB info dialog, SQL Workbench/J will use the file content for any subsequent display of the binary data or the size information in the information dialog. You will need to re-retrieve the data in order to use the blob data from the server.
${current_statement}$
${text}$
The SQL statement that is eventually executed will be logged into the message panel when invoking the macro from the menu. Macros that use the above parameters cannot be executed correctly by entering the macro alias in the SQL editor (and then executing the "statement"). The parameter keywords are case sensitive, i.e. the text ${SELECTION}$ will not be replaced!

This feature can be used to create SQL scripts that work only with an additional statement, e.g. for Oracle you could define a macro to run an explain plan for the current statement:

EXPLAIN PLAN FOR
${current_statement}$
;

COMMIT;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

When you run this macro, it will run an EXPLAIN PLAN for the statement in which the cursor is currently located, and will immediately display the results for the explain. Note that the ${current_statement}$ keyword is terminated with a semicolon, as the replacement for ${current_statement}$ will never add the semicolon. If you use ${selection}$ instead, you have to pay attention to not select the semicolon in the editor before running this macro.

For PostgreSQL you can define a similar macro that will automatically run the EXPLAIN command for a statement:

explain ${current_statement}$

Another usage of the parameter replacement could be a SQL statement that retrieves the rowcount that would be returned by the current statement:

SELECT count(*)
FROM (
  ${current_statement}$
)
AS $body$
BEGIN
  RAISE NOTICE 'Thinking hard...';
  RETURN 42;
END;
$body$
/
7.11.2. Oracle
For Oracle, the DBMS_OUTPUT package is supported. Support for this package can be turned on with the ENABLEOUT command; if this support is not turned on, the messages will not be displayed. This is the same as using the SET SERVEROUTPUT ON command in SQL*Plus. If you want to turn on support for DBMS_OUTPUT automatically when connecting to an Oracle database, you can put the ENABLEOUT command into the pre-connect script.

Any message "printed" with DBMS_OUTPUT.put_line() will be displayed in the message part after the SQL command has finished. Please refer to the Oracle documentation if you want to learn more about the DBMS_OUTPUT package.

dbms_output.put_line('The answer is 42');

Once the command has finished, the following will be displayed in the Messages tab:

The answer is 42
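Putting both pieces together, a minimal session sketch (the PL/SQL block is terminated with the alternate delimiter as described earlier):

-- turn on support for DBMS_OUTPUT
-- (same effect as SET SERVEROUTPUT ON in SQL*Plus)
ENABLEOUT;

BEGIN
  dbms_output.put_line('The answer is 42');
END;
/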
If you have primary keys defined for the underlying tables, those primary key columns will be used for the WHERE clauses of the generated UPDATE and DELETE statements. If no primary key columns are found, the JDBC driver is asked for a best row identifier. If that doesn't return any information, your defined PK Mapping will be queried. If still no PK columns can be found, you will be prompted to select the key columns based on the current result set.

The changes (modified, new or deleted rows) will not be saved to the database until you choose Data → Save Changes to Database. If the update is successful (no database errors) a COMMIT will be sent to the database automatically. If your SELECT was based on more than one table, you will be prompted to specify which table should be updated. Only columns of the chosen table will be included in the UPDATE or INSERT statements. If no primary key can be found for the update table, you will be prompted to select the columns that should be used to uniquely identify a row in the update table. If an error is reported during the update, a ROLLBACK will be sent to the database. The COMMIT or ROLLBACK will only be sent if autocommit is turned off.

Columns containing BLOB data will be displayed with a ... button. By clicking on that button, you can view the blob data, save it to a file or upload the content of a file to the DBMS. Please refer to BLOB support for details.

When editing, SQL Workbench/J will highlight columns that are defined as NOT NULL in the database. You can turn this feature off, or change the color that is used, in the options dialog.

When editing date, timestamp or time fields, the format specified in the options dialog is used for parsing the entered value and converting that into the internal representation of a date. The value entered must match the format defined there. If you want to input the current date and time you can use now, today, sysdate, current_timestamp or current_date instead. This will then use the current date & time and convert it to the appropriate data type for that column, e.g. now will be converted to the current time for a TIME column, the current date for a DATE column and the current date/time for a TIMESTAMP column. These keywords also work when importing text files using WbImport or importing a text file into the result set. The exact keywords that are recognized can be configured in the settings file.

If the option Empty String is NULL is disabled for the current connection profile, you can still set a column's value to null when editing it. To do this, double click the current value so that you can edit it. In the context menu (right mouse button) the option "Set to NULL" is available. This will clear the value and set it to NULL. You can assign a shortcut to this action, but the shortcut will only be active when editing a value inside a column.
Note that the generated SQL statements to delete the dependent rows will only be shown if you have enabled the preview of generated DML statements in the options dialog.

You can also generate a script to delete the selected and all depending rows through Data → Generate delete script. This will not remove any rows from the current result set, but instead create and display a script that you can run at a later time.
If you want to sort by more than one column, hold down the Ctrl key while clicking on the (second) header. The initial sort order is ascending for that additional column. To switch the sort order, hold down the Ctrl key and click on the column header again. The sort order for all "secondary" sort columns will be indicated with a slightly smaller triangle than the one for the primary sort column.

To define a different secondary sort column, you first have to remove the current secondary column. This can be done by holding down the Shift key and clicking on the secondary column again. Note that the data will not be resorted. Once you have removed the secondary column, you can define a different secondary sort column.

By default SQL Workbench/J uses "ASCII" sorting, which is case-sensitive and will not sort special characters according to your language. You can change the locale that is used for sorting data in the options dialog under the category "Data Display". Sorting using a locale is a bit slower than "ASCII" sorting.
Using the Alt key you can select individual columns of one or more rows. Together with the Ctrl key you can select e.g. the first, third and fourth column. You can also select e.g. the second column of the first, second and fifth row.

Whether the quick filter is available depends on the selected rows and columns. It will be enabled when:

- you have selected one or more columns in a single row
- you have selected one column in multiple rows

If only a single row is selected, the quick filter will use the values of the selected columns combined with AND to define the filter (e.g. username = 'Bob' AND job = 'Clerk'). Which columns are used depends on the way you select the row and columns. If the whole row in the result is selected, the quick filter will use the value of the focused column (the one with the yellow rectangle), otherwise the individually selected columns will be used.

If you select a single column in multiple rows, this will create a filter for that column, but the values will be combined with OR (e.g. name = 'Dent' OR name = 'Prefect'). The quick filter will not be available if you select more than one column in multiple rows.

Once you have applied a quick filter, you can use the regular filter definition dialog to check the definition of the filter or to further modify it.
All format specific options that are available in the lower part are also available when using the WbExport command. For a detailed discussion of the individual options please refer to that section.

The options SQL UPDATE and SQL DELETE/INSERT are only available when the current result has a single table that can be updated, and the primary key columns for that table could be retrieved. If the current result does not have key columns defined, you can select the key columns that should be used when creating the file. If the current result is retrieved from multiple tables, you have to supply a table name to be used for the SQL statements.

Please keep in mind that exporting the data from the result set requires you to load everything into memory. If you need to export data sets which are too big to fit into memory, you should use the WbExport command to either create SQL scripts or to save the data as text or XML files that can be imported into the database using the WbImport command. You can also use SQL → Export query result to export the result of the currently selected SQL statement.
When selecting the file, you can change some parameters for the import:

Header        If this option is checked, the first line of the import file will be ignored.
Delimiter     The delimiter used to separate column values. Enter \t for the tab character.
Date Format   The format in which date fields are specified.
Decimal char  The character that is used to indicate the decimals in numeric values (typically a dot or a comma).
Quote char    The character used to quote values with special characters. Make sure that each opening quote is followed by a closing quote in your text file.
You can also import text and XML files using the WbImport command. Using the WbImport command is the recommended way to import data, as it is much more flexible and - more importantly - it does not read the data into memory.
SELECT id FROM person WHERE name LIKE '$[&search_name]%'

The first time you execute this statement (and no value has been assigned to search_name before, using WbVarDef or on the commandline) you will be prompted for a value for search_name. Any subsequent execution of the statement (or any other statement referencing $[&search_name]) will re-use the value you entered.
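To avoid the prompt, the variable can be defined before the statement is executed, e.g. using WbVarDef (a sketch; the variable name and value are taken from the example above):

-- define the variable up front, so no prompt is shown
WbVarDef search_name=Dent;

SELECT id FROM person WHERE name LIKE '$[&search_name]%';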
<java classname="workbench.WbStarter" classpath="sqlworkbench.jar" fork="true">
  <arg value="-profile='my profile'"/>
  <arg value="-script=load_data.sql"/>
</java>

The parameters to specify the connection and the SQL script to be executed have to be passed on the commandline.
If a script has been specified using the -script parameter, the -command parameter is ignored.
If you update data in the database, this script usually contains a COMMIT command to make all changes permanent. The abort script usually contains a ROLLBACK command.
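As a sketch, the two scripts could be as simple as the following (the file names commit.sql and rollback.sql match the example in section 9.15 below):

-- commit.sql: executed when the script finished successfully
COMMIT;

-- rollback.sql: executed when the script failed
ROLLBACK;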
9.15. Examples
For readability the examples in this section are displayed on several lines. If you enter them manually on the commandline you will need to put everything in one line, or use the escape character for your operating system to extend a single command over more than one input line.

Connect to the database without specifying a connection profile:

java -jar sqlworkbench.jar
     -url=jdbc:postgresql://dbserver/mydb
     -driver=org.postgresql.Driver
     -username=zaphod
     -password=vogsphere
     -driverjar=C:/Programme/pgsql/pg73jdbc3.jar
     -script='test-script.sql'

This will start SQL Workbench/J, connect to the database server as specified in the connection parameters and execute the script test-script.sql. As the script's filename contains a dash, it has to be quoted. This is also necessary when the filename contains spaces.

Executing several scripts with a cleanup and failure script:

java -jar sqlworkbench.jar
     -script='c:/scripts/script-1.sql','c:/scripts/script-2.sql',c:/scripts/script-3.sql
     -profile=PostgreSQL
     -abortOnError=false
     -cleanupSuccess=commit.sql
     -cleanupError=rollback.sql

Note that you need to quote each file individually (where it's needed), not the value of the -script parameter.

Run a SQL command in batch mode without using a script file

The following example exports the table "person" without using the -script parameter:
java -jar sqlworkbench.jar
     -profile='TestData'
     -command='WbExport -file=person.txt -type=text -sourceTable=person'

The following example shows how to run two different SQL statements without using the -script parameter:

java -jar sqlworkbench.jar
     -profile='TestData'
     -command='delete from person; commit;'
SQL> WbDisplay record;
Display changed to single record format
Execution time: 0.0s

SQL> select id, firstname, lastname, comment from person;

---- [Row 1] -------------------------------
id        : 1
firstname : Arthur
lastname  : Dent
comment   : this is a very long comment that would not fit onto the screen when printed as

---- [Row 2] -------------------------------
id        : 2
firstname : Zaphod
lastname  : Beeblebrox
comment   :

---- [Row 3] -------------------------------
id        : 4
firstname : Mary
lastname  : Moviestar
comment   :
If the current connection references a JDBC driver that is not already defined, a new entry for the driver definitions is created, referencing the library that was passed on the commandline. All profiles are automatically saved after executing WbStoreProfile.
Simply unzip the archive into the directory where sqlworkbench.jar is located.
If you want to export tables from a different user or schema you can use a schema name combined with a wildcard, e.g. -sourceTable=otheruser.*. In this case the generated output files will contain the schema name as part of the filename (e.g. otheruser.person.txt). When importing these files, SQL Workbench/J will try to import the tables into the schema/user specified in the filename. If you want to import them into a different user/schema, then you have to use the -schema switch for the import command.

-types
Selects the object types to be exported. By default only TABLEs are exported. If you want to export the content of VIEWs or SYNONYMs as well, you have to specify all types with this parameter: -sourceTable=* -types=VIEW,SYNONYM or -sourceTable=T% -types=TABLE,VIEW,SYNONYM

-excludeTables
The tables listed in this parameter will not be exported. This can be used when all but a few tables should be exported from a schema. First all tables specified through -sourceTable are evaluated; the tables specified by -excludeTables can include wildcards in the same way -sourceTable allows wildcards. -sourceTable=* -excludeTables=TEMP* will export all tables, but not those starting with TEMP.

-sourceTablePrefix
Defines a common prefix for all tables listed with -sourceTable. When this parameter is specified, the existence of each table is not tested any longer (as it is normally done), and the generated statement for exporting the table is changed to SELECT * FROM [prefix]tableName instead of listing all columns individually. This can be used when exporting views on tables, when for each table a view with a certain prefix exists (e.g. table PERSON has the view V_PERSON and the view does some filtering of the data). See the example in section 11.10 below.

-outputDir
When using the -sourceTable switch with multiple tables, this parameter is mandatory and defines the directory where the generated files should be stored.

-continueOnError
When exporting more than one table, this parameter controls whether the whole export will be terminated if an error occurs during export of one of the tables.

-encoding
Defines the encoding in which the file should be written. Common encodings are ISO-8859-1, ISO-8859-15, UTF-8 (or UTF8). To get a list of available encodings, execute WbExport with the parameter -showEncodings. This parameter is ignored for XLS, XLSX and ODS exports.

-showEncodings
Displays the encodings supported by your Java version and operating system. If this parameter is present, all other parameters are ignored.

-lineEnding
Possible values: crlf, lf
Defines the line ending to be used for XML or text files. crlf puts the ASCII characters #13 and #10 after each line; this is the standard format on Windows based systems (dos and win are synonyms for crlf). lf puts only the ASCII character #10 at the end of each line; this is the standard format on Unix based systems (unix is a synonym for lf). The default line ending depends on the platform where SQL Workbench/J is running.
-header
If this parameter is set to true, the header (i.e. the column names) is placed into the first line of the output file. The default is to not create a header line. You can define the default value for this parameter in the file workbench.settings. This parameter is valid for text and spreadsheet (OpenDocument, Excel) exports.

-compress
Selects whether the output file should be compressed and put into a ZIP archive. An archive will be created with the name of the specified output file but with the extension zip. The archive will then contain the specified file (e.g. if you specify data.txt, an archive data.zip will be created containing exactly one entry with the name data.txt). If the exported result set contains BLOBs, they will be stored in a separate archive named data_lobs.zip. When exporting multiple tables using the -sourceTable parameter, SQL Workbench/J will create one ZIP archive for each table in the specified output directory with the filename "tablename".zip. For any table containing BLOB data, one additional ZIP archive is created.
-tableWhere
Defines an additional WHERE clause that is appended to all SELECT queries to retrieve the rows from the database. No validation check will be done for the syntax or the columns in the WHERE clause. If the specified condition is not valid for all exported tables, the export will fail.

-clobAsFile
Possible values: true, false
For SQL, XML and text exports this controls how the contents of CLOB fields are exported. Usually the CLOB content is put directly into the output file. When generating SQL scripts with WbExport this can be a problem, as not all DBMS can cope with long character literals (e.g. Oracle has a limit of 4000 bytes). When this parameter is set to true, SQL Workbench/J will create one file for each CLOB column value. This is the same behaviour as with BLOB columns.
Text files that are created with this parameter set to true will contain the filename of the generated output file instead of the actual column value. When importing such a file using WbImport you have to specify the -clobIsFilename=true parameter, otherwise the filenames will be stored in the database and not the CLOB data. This parameter is not necessary when importing XML exports, as WbImport will automatically recognize the external files. Note that SQL exports (-type=sqlinsert) generated with -clobAsFile=true can only be run with SQL Workbench/J!
All CLOB files are written using the encoding specified with the -encoding switch. If the -encoding parameter is not specified, the default file encoding will be used.
-lobIdCols
When exporting CLOB or BLOB columns as external files, the filename with the LOB content is generated using the row and column number for the currently exported LOB column (e.g. data_r15_c4.data). If you prefer to have the value of a unique column combination as part of the file name, you can specify those columns using the -lobIdCols parameter. The filename for the LOB will then be generated using the base name of the export file, the column name of the LOB column and the values of the specified columns. If you export your data into a file called user_info and specify -lobIdCols=id, and your result contains a column called img, the LOB files will be named e.g. user_info_img_344.data
-lobsPerDirectory
When exporting CLOB or BLOB columns as external files, the generated files can be distributed over several directories to avoid an excessive number of files in a single directory. The parameter -lobsPerDirectory defines how many LOB files are written into a single directory. When the specified number of files have been written, a new directory is created. The directories are always created as a sub-directory of the target directory. The name for each directory is the base export filename plus "_lobs" plus a running number. So if you export the data into a file "the_big_table.txt", the LOB files will be stored in "the_big_table_lobs_1", "the_big_table_lobs_2", "the_big_table_lobs_3" and so on. The directories will be created if needed, but if the directories already exist (e.g. because of a previous export) their contents will not be deleted!
-extensionColumn
When exporting CLOB or BLOB columns as external files, the extension of the generated filenames can be defined based on a column of the result set. If the exported table contains more than one type of BLOBs (e.g. JPEG, GIF, PDF) and your table stores the information to define the extension based on the contents, this can be used to re-generate proper filenames. This parameter only makes sense if exactly one BLOB column of a table is exported.
-filenameColumn
When exporting CLOB or BLOB columns as external files, the complete filename can be taken from a column of the result set (instead of dynamically creating a new file name based on the row and column numbers). This parameter only makes sense if exactly one BLOB column of a table is exported.
-append
Possible values: true, false
Controls whether results are appended to an existing file or overwrite an existing file. This parameter is only supported for text or SQL export types.
-dateFormat
The date format to be used when writing date columns into the output file. This parameter is ignored for SQL exports.

-timestampFormat
The format to be used when writing datetime (or timestamp) columns into the output file. This parameter is ignored for SQL exports.

-blobType
Possible values: file, dbms, ansi, base64
This parameter controls how BLOB data will be put into the generated SQL statements. By default no conversion will be done, so the actual value that is written to the output file depends on the JDBC driver's implementation of the Blob interface. This parameter is only valid for text, SQL and XML exports, although not all parameter values make sense for all export types. The type base64 is primarily intended for text exports (e.g. to be used with PostgreSQL's COPY command). The types dbms and ansi are intended for SQL exports and generate a representation of the binary data as part of the SQL statement: dbms will use a format that is understood by the DBMS you are exporting from, while ansi will generate a standard hex based representation of the binary data. The syntax generated by the ansi format is not understood by all DBMS!
Two additional SQL literal formats are available that can be used together with PostgreSQL: pgDecode and pgEscape. pgDecode will generate a hex representation using PostgreSQL's decode() function; using decode is a very compact format. pgEscape will use PostgreSQL's escaped octets, and generates much bigger statements (due to the increased escaping overhead).
When using file, base64 or ansi, the file can be imported using WbImport.
The parameter value file will cause SQL Workbench/J to write the contents of each blob column into a separate file. The SQL statement will contain the SQL Workbench/J specific extension to read the blob data from the file. For details please refer to BLOB support. If you are planning to run the generated SQL scripts using SQL Workbench/J this is the recommended format. Note that SQL scripts generated with -blobType=file can only be run with SQL Workbench/J.
The parameter value ansi will generate "binary strings" that are compatible with the ANSI definition for binary data. MySQL and Microsoft SQL Server support these kinds of literals.
The parameter value dbms will create a DBMS specific "binary string". MySQL, HSQLDB, H2 and PostgreSQL are known to support literals for binary data. For other DBMS using this option will still create an ansi literal, but this might result in an invalid SQL statement.
-replaceExpression -replaceWith
Using these parameters, arbitrary text can be replaced during the export. -replaceExpression defines the regular expression that is to be replaced, -replaceWith defines the replacement value. -replaceExpression='(\n|\r\n)' -replaceWith=' ' will replace all newline characters with a blank.
The search and replace is done on the "raw" data retrieved from the database, before the values are converted to the corresponding output format. In particular this means replacing is done before any character escaping takes place. Because the search and replace is done before the data is converted to the output format, it can be used for all export types. Only character columns (CHAR, VARCHAR, CLOB, LONGVARCHAR) are taken into account.
-showProgress
Valid values: true, false, <numeric value>
Controls the update frequency in the status bar (when running in GUI mode). By default every 10th row is reported. To disable the display of the progress, specify a value of 0 (zero) or the value false. true will set the progress interval to 1 (one).
-quoteChar
The character (or sequence of characters) to be used to enclose text (character) data if the delimiter is contained in the data. By default quoting is disabled until a quote character is defined. To set the double quote as the quote character you have to enclose it in single quotes: -quoteChar='"'

-quoteCharEscaping
Possible values: none, escape, duplicate
Defines how quote characters that appear in the actual data are written to the output file. If no quote character has been defined using the -quoteChar switch, this option is ignored.
If escape is specified, a quote character (defined through -quoteChar) that is embedded in the exported (character) data is written as e.g. here is a \" quote character. If duplicate is specified, a quote character that is embedded in the exported (character) data is written as two quotes, e.g. here is a "" quote character.
-quoteAlways
Possible values: true, false
If quoting is enabled (via -quoteChar), character data will normally only be quoted if the delimiter is found inside the actual value that is written to the output file. If -quoteAlways=true is specified, character data will always be enclosed in the specified quote character. This parameter is ignored if no quote character is specified. If you expect the quote character to be contained in the values, you should enable character escaping, otherwise the quote character that is part of the exported value will break the quoting during import.
NULL values will not be quoted even if this parameter is set to true. This is useful to distinguish between NULL values and empty strings.
-decimal
The decimal symbol to be used for numbers. The default is a dot (e.g. 3.14152).

-escapeText
This parameter controls the escaping of non-printable or non-ASCII characters. Valid options are ctrl, which will escape everything below ASCII 32 (newline, tab, etc.); 7bit, which will escape everything below ASCII 32 and above 126; 8bit, which will escape everything below ASCII 32 and above 255; and extended, which will escape everything outside the ranges [32-126] and [161-255].
This will write a Unicode representation of the character into the text file, e.g. \n for a newline or \u00F6 for ö. Such a file can only be imported using SQL Workbench/J (at least I don't know of any DBMS specific loader that will decode this properly).
If character escaping is enabled, then the quote character will be escaped inside quoted values and the delimiter will be escaped inside non-quoted values. The delimiter could also be escaped inside a quoted value if the delimiter falls into the selected escape range (e.g. a tab character).
-formatFile
Possible values: postgres, oracle, sqlserver, db2
This parameter controls the creation of control files for the bulk load utilities of several DBMS. oracle will create a control file for Oracle's SQL*Loader utility, sqlserver will create a format file for Microsoft's bcp utility. The format file has the same filename as the output file, but with the ending .ctl for Oracle and .fmt for SQL Server. For PostgreSQL, this will create the necessary COPY syntax to import the generated text file. For DB2 this will create an IMPORT command to import the exported data.
You can specify several formats at the same time. In that case one control file for each specified format will be created. The generated format file(s) are intended as a starting point for your own adjustments; don't expect them to be complete or to specify all possible options.
-xsltOutput
-verboseXML
INSERT INTO ... VALUES ('First line'||chr(13)||'Second line' ...)
This setting will affect ASCII values from 0 to 31.
-concat
If the parameter -charFunc is used, SQL Workbench/J will concatenate the individual pieces using the ANSI SQL operator for string concatenation. In case your DBMS does not support the ANSI standard (e.g. MS ACCESS) you can specify the operator to be used: -concat=+ defines the plus sign as the concatenation operator.

-sqlDateLiterals
Possible values: jdbc, ansi, dbms, default
This parameter controls the generation of date or timestamp literals. By default literals that are specific for the current DBMS are created. You can also choose to create literals that comply with the JDBC specification or ANSI SQL literals for dates and timestamps.
jdbc selects the creation of JDBC compliant literals. These should be usable with every JDBC based tool, including your own Java code: {d '2004-04-28'} or {ts '2002-04-02 12:02:00.042'}. This is the recommended format if you plan to use SQL Workbench/J (or any other JDBC based tool) to run the generated statements.
ansi selects the creation of ANSI SQL compliant date literals: DATE '2004-04-28' or TIMESTAMP '2002-04-02 12:04:00'. Please consult the manual of the target DBMS to find out whether it supports ANSI compliant date literals.
default selects the creation of quoted date and timestamp literals in ISO format (e.g. '2004-04-28'). Several DBMS support this format (e.g. PostgreSQL, Microsoft SQL Server).
dbms selects the creation of specific literals to be used with the current DBMS (using e.g. the to_date() function for Oracle). The format of these literals can be customized if necessary in workbench.settings using the keys workbench.sql.literals.[type].[datatype].pattern where [type] is the type specified with this parameter and [datatype] is one of time, date, timestamp. If you add new literal types, please also adjust the key workbench.sql.literals.types, which is used to show the possible values in the GUI (auto-completion, "Save As" dialog, Options dialog). If no type is specified (or dbms), SQL Workbench/J first looks for an entry where [type] is the current dbid. If no value is found, default is used.
You can define the default literal format to be used for the WbExport command in the options dialog.
-commitEvery
A numeric value which defines the number of INSERT or UPDATE statements after which a COMMIT is put into the generated SQL script. -commitEvery=100 will create a COMMIT; after every 100th statement. If this is not specified, one COMMIT; will be added at the end of the script. To suppress the final COMMIT, you can use -commitEvery=none. Passing -commitEvery=atEnd is equivalent to -commitEvery=0.
-createTable
Possible values: true, false If this parameter is set to true, the necessary CREATE TABLE command is put into the output file. This parameter is ignored when creating UPDATE statements.
-useSchema
If this parameter is set to true, all table names are prefixed with the appropriate schema. The default is taken from the global option "Include owner in export".

-keyColumns
A comma separated list of column names that occur in the table or result set that should be used as the key columns for UPDATE or DELETE. If the table does not have key columns, or the source SELECT statement uses a join over several tables, or you do not want to use the key columns defined in the database, this parameter can be used to define the key columns for the UPDATE statements. It overrides any key columns defined on the base table of the SELECT statement.
-title
The title for the HTML page (put into the <title> tag of the generated output).

-preDataHtml
With this parameter you can specify a HTML chunk that will be added before the export data is written to the output file. This can be used to e.g. create a heading for the data: -preDataHtml='<h1>List of products</h1>'. The value will be written to the output file "as is". Any escaping of the HTML must be provided in the parameter value.
-postDataHtml
With this parameter you can specify a HTML chunk that will be added after the data has been written to the output file.
11.10. Examples
11.10.1. Simple plain text export
WbExport -type=text
         -file='c:/data/data.txt'
         -delimiter='|'
         -decimal=','
         -sourcetable=data_table;

This will create a text file with the data from data_table. Each column will be separated with the character |. Each fractional number will be written with a comma as the decimal separator.
This will export each specified table into a text file in the specified directory. The files are named "table_1.txt", "table_2.txt" and so on.

Limiting the export data when using a table based export can be done using the -tableWhere argument. This requires that the specified WHERE condition is valid for all tables, e.g. when every table has a column called MODIFIED_DATE:

WbExport -type=text
         -outputDir='c:/data'
         -delimiter=';'
         -header=true
         -tableWhere="WHERE modified_date > DATE '2009-04-02'"
         -sourcetable=table_1,table_2,table_3,table_4;

This will add the specified WHERE clause to each SELECT, so that only rows are exported that were changed after April 2nd, 2009.
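The -sourceTablePrefix parameter described earlier can be combined with such a table based export to read the data from views instead of the tables. The following is only a sketch and assumes that views V_TABLE_1 and V_TABLE_2 exist for the corresponding tables:

WbExport -type=text
         -outputDir='c:/data'
         -header=true
         -sourceTablePrefix=V_
         -sourcetable=table_1,table_2;

This generates SELECT * FROM V_table_1 and SELECT * FROM V_table_2 to retrieve the data, as described for -sourceTablePrefix above.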
A WbExport using -type=sqlinsert and -table=newtable will create a SQL script which contains statements like INSERT INTO newtable (...) VALUES (...); the list of columns are all columns that are defined by the SELECT statement. If the parameter -table is omitted, the creation of SQL INSERT statements is only possible if the SELECT is based on a single table (or view).
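A sketch of such an export (the file name is illustrative; WbExport is applied to the SELECT statement that follows it):

WbExport -type=sqlinsert
         -file='c:/data/newtable.sql'
         -table=newtable;
SELECT * FROM person;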
-file -table
-extension
-ignoreOwner
If the file names imported from the directory specified with -sourceDir contain the owner (schema) information, this owner (schema) information can be ignored using this parameter. Otherwise the files might be imported into a wrong schema, or the target tables will not be found.

-excludeFiles
Using -excludeFiles, files from the source directory (when using -sourceDir) can be excluded from the import. The value for this parameter is a comma separated list of partial names. Each file that contains at least one of the values supplied in this parameter is ignored. -excludeFiles=back,data will exclude any file that contains the value back or data in it, e.g. backup, to_back, log_data_store etc.

-checkDependencies
When importing more than one file (using the -sourceDir switch) into tables with foreign key constraints, this switch can be used to import the files in the correct order (child tables first). When -checkDependencies=true is passed, SQL Workbench/J will check the foreign key dependencies for all tables. Note that this will not check dependencies in the data. This means that e.g. the data for a self-referencing table (parent/child) will not be ordered so that it can be imported. To import self-referencing tables, the foreign key constraint should be set to "initially deferred" in order to postpone evaluation of the constraint until commit time.

-commitEvery
If your DBMS needs frequent commits to improve performance and reduce locking on the import table, you can control the number of rows after which a COMMIT is sent to the server. -commitEvery is a numeric value that defines the number of rows after which a COMMIT is sent to the DBMS. If this parameter is not passed (or a value of zero or lower), the import is run as a single transaction that is committed at the end. When using batch import and your DBMS requires frequent commits to improve import performance, the -commitBatch option should be used instead. You can turn off the use of a commit or rollback during import completely by using the option -transactionControl=false. Using -commitEvery means that in case of an error the already imported rows cannot be rolled back, leaving the data in a potentially invalid state.
-transactionControl
Possible values: true, false Controls if SQL Workbench/J handles the transaction for the import, or if the import must be committed (or rolled back) manually. If -transactionControl=false is specified, SQL Workbench/J will neither send a COMMIT nor a ROLLBACK at the end. This can be used when multiple files need to be imported in a single transaction. This can be combined with the cleanup and error scripts in batch mode.
-continueOnError
Possible values: true, false
This parameter controls the behaviour when errors occur during the import. The default is true, meaning that the import will continue even if an error occurs during file parsing or updating the database. Set this parameter to false if you want to stop the import as soon as an error occurs. The default value for this parameter can be controlled in the settings file, and it will be displayed if you run WbImport without any parameters.
With PostgreSQL, -continueOnError will only work if the use of savepoints is enabled using -useSavepoint=true.
-useSavepoint
Controls if SQL Workbench/J guards every INSERT or UPDATE statement with a savepoint, to recover from individual errors during the import when -continueOnError is set to true. Using a savepoint for each DML statement can drastically reduce the performance of the import.
-keyColumns
Defines the key columns for the target table. This parameter is only necessary if import is running in UPDATE mode. This parameter is ignored if files are imported using the -sourcedir parameter
-schema
Defines the schema into which the data should be imported. This is necessary for DBMS that support schemas if you want to import the data into a different schema than the current one.

-encoding
Defines the encoding of the input file (and possible CLOB files).

-deleteTarget
Possible values: true, false
If this parameter is set to true, data from the target table will be deleted (using DELETE FROM ...) before the import is started. This parameter will only be used if -mode=insert is specified.
-truncateTable
Possible values: true, false This is essentially the same as -deleteTarget, but will use the command TRUNCATE to delete the contents of the table. For those DBMS that support this command, deleting rows is usually faster compared to the DELETE command, but it cannot be rolled back. This parameter will only be used if -mode=insert is specified.
-batchSize
A numeric value that defines the size of the batch queue. Any value greater than 1 will enable batch mode. If the JDBC driver supports this, the INSERT (or UPDATE) performance can be increased drastically. This parameter will be ignored if the driver does not support batch updates or if the mode is not UPDATE or INSERT (i.e. if -mode=update,insert or -mode=insert,update is used).
-commitBatch
Possible values: true, false
If using batch execution (by specifying a batch size using the -batchSize parameter), each batch will be committed when this parameter is set to true. This is slightly different from using -commitEvery with the value of the -batchSize parameter: the latter will add a COMMIT statement to the batch queue, rather than calling the JDBC commit() method. Some drivers do not allow mixing different statements in a batch queue, so if a frequent COMMIT is needed, this parameter should be used.
When you specify -commitBatch, the parameter -commitEvery will be ignored. If no batch size is given (using -batchSize), -commitBatch will also be ignored.
-updateWhere
When using update mode an additional WHERE clause can be specified to limit the rows that are updated. The value of the -updateWhere parameter will be added to the generated UPDATE statement. If the value starts with the keyword AND or OR, the value will be added without further changes, otherwise the value will be added as an AND clause enclosed in brackets. This parameter will be ignored if update mode is not active.

-startRow
A numeric value to define the first row to be imported. Any row before the specified row will be ignored. The header row is not counted to determine the row number: for a text file with a header row, the physical line 2 is row 1 (one) for this parameter.
When importing text files, empty lines in the input file are silently ignored and do not add to the count of rows for this parameter. So if your input file has two lines to be ignored, then one empty line and then another line to be ignored, -startRow must be set to 4.

-endRow
A numeric value to define the last row to be imported. The import will be stopped after this row has been imported. When you specify -startRow=10 and -endRow=20, 11 rows will be imported (i.e. rows 10 to 20). If this is a text file import with a header row, this corresponds to the physical lines 11 to 21 in the input file, as the header row is not counted.

-badFile
If -continueOnError=true is used, you can specify a file to which rejected rows are written. If the provided filename denotes a directory, a file with the name of the import table will be created in that directory. When doing multi-table inserts you have to specify a directory name. If a file with that name exists it will be deleted when the import for the table is started. The file will not be created unless at least one record is rejected during the import. The file will be created with the same encoding as indicated for the input file(s).
-maxLength
With the parameter -maxLength you can truncate data for character columns (VARCHAR, CHAR) during import. This can be used to import data into columns that are not big enough (e.g. VARCHAR columns) to hold all values from the input file, and to ensure the import can finish without errors. The parameter defines the maximum length for certain columns using the following format: -maxLength='firstname=30,lastname=20' where firstname and lastname are columns from the target table. The above example will limit the values for the column firstname to 30 characters and the values for the column lastname to 20 characters. If a non-character column is specified, this is ignored. Note that you have to quote the parameter's value in order to be able to use the "embedded" equals sign.
-booleanToNumber
Possible values: true, false
When exporting data from a DBMS that supports the BOOLEAN datatype, the export file will contain the literals "true" or "false" for the values of the boolean columns. When importing this file into a DBMS that does not support the BOOLEAN datatype, the import would fail. In case you are importing the boolean column into a numeric column in the target DBMS, SQL Workbench/J will automatically convert the literal true to the numeric value 1 (one) and the literal false to the numeric value 0 (zero). If you do not want this automatic conversion, you have to specify -booleanToNumber=false for the import. The default values for the true/false literals can be overwritten with the -literalsFalse and -literalsTrue switches.
-literalsFalse -literalsTrue
When dealing with boolean values in the input file, these two switches define the literals that represent the value false and the value true when parsing the input data. The value for these switches is a comma separated list of literals that should be treated as the specified value, e.g.: -literalsFalse='false,0' -literalsTrue='true,1' will define the most commonly used values for true/false. Please note: the definition of the literals is case sensitive! You always have to specify both switches, otherwise the definition will be ignored.
-constantValues
With this parameter you can supply constant values for one or more columns that will be used when inserting new rows into the database. The constant values will only be used when inserting rows (e.g. using -mode=insert).
The format of the values is -constantValues="column1=value1,column2=value2". The parameter can be repeated multiple times to make quoting easier: -constantValues="column1=value1" -constantValues="column2=value2". The values will be converted by the same rules as the input values from the input file. If the value for a character column is enclosed in single quotes, these will be removed from the value before sending it to the database. To include single quotes at the start or end of the input value you need to use two single quotes, e.g. -constantValues="name=''Quoted'',title='with space'". For the field name the value 'Quoted' will be sent to the database; for the field title the value with space will be sent to the database.
To specify a function call to be executed, enclose the function call in ${...}, e.g. ${mysequence.nextval} or ${myfunc()}. The supplied function will be put into the VALUES part of the INSERT statement without further checking (after removing the ${ and } characters, of course), so make sure that the syntax is valid for your DBMS. If you do need to store a literal like ${some.value} into the database, you need to quote it: -constantValues="varname='${some.value}'".
You can also specify a SELECT statement that retrieves information from the database based on values from the input file. This is useful when the input file contains e.g. values from a lookup table (but not the primary key from the lookup table). The syntax to specify a SELECT statement is similar to a function call: -constantValues="$@{SELECT type_id FROM type_definition WHERE type_name = $4}" where $4 references the fourth column from the input file. The first column is $1 (not $0). The parameters for the SELECT statement do not need to be quoted, as internally a prepared statement is used. However, the values in the input file must be convertible by the JDBC driver. Please refer to the examples for more details on the usage.
-insertSQL
Defines the statement to be used for inserting rows. This can be used to pass hints to the database or to otherwise customize the generated INSERT statement. The parameter may only contain the INSERT INTO part of the statement (INSERT INTO is the default if nothing is specified). This can e.g. be used to specify an append hint for Oracle: -insertSQL='INSERT /*+ append */ INTO'
You have to quote the parameter value using single quotes, otherwise comments will be removed from the SQL statement!
-preTableStatement -postTableStatement
This parameter defines a SQL statement that should be executed before the import process starts inserting data into the target table. The name of the current table (when e.g. importing a whole directory) can be referenced using ${table.name}.
To define a statement that should be executed after all rows have been inserted and committed, you can use the -postTableStatement parameter.
These parameters can e.g. be used to enable identity insert for MS SQL Server: -preTableStatement="set identity_insert ${table.name} on" -postTableStatement="set identity_insert ${table.name} off"
Errors resulting from executing these statements will be ignored. If you want to abort the import in that case, you can specify -ignorePrePostErrors=false and -continueOnError=false.
-ignorePrePostErrors
Controls the handling of errors for the -preTableStatement and -postTableStatement parameters. If this is set to true (the default), errors resulting from executing the supplied statements are ignored. If set to false, error handling depends on the parameter -continueOnError.

-showProgress
Valid values: true, false, <numeric value>
Controls the update frequency in the status bar (when running in GUI mode). By default every 10th row is reported. To disable the display of the progress, specify a value of 0 (zero) or the value false. true will set the progress interval to 1 (one).
-columnWidths
To import files that do not have a delimiter but a fixed width for each column, this parameter defines the width of each column in the input file. The value for this parameter is a comma separated list, where each element defines the width of a single column, e.g. -columnWidths='name=10,lastname=20,street=50,flag=1'. Note that the whole list must be enclosed in quotes, as the parameter value contains the equals sign. If this parameter is given, the -delimiter parameter is ignored.
If you want to import only certain columns you have to use -fileColumns and -importColumns to select the columns to import. You cannot use $wb_skip$ in the -fileColumns parameter with a fixed column width import.
-dateFormat
The format for date columns.

-timestampFormat
The format for datetime (or timestamp) columns in the input file.

-quoteChar
The character which was used to quote values where the delimiter is contained. This parameter has no default value; if it is not specified, no quote checking will take place. If you use -multiLine=true you have to specify a quote character in order for this to work properly.

-quoteAlways
Possible values: true, false
WbImport will always handle quoted values correctly if a quote character is defined through -quoteChar. Using -quoteAlways=true enables the distinction between NULL values and empty strings in the import file, but only if -quoteAlways=true has also been used when running WbExport. Remember to also use -emptyStringIsNull=false, as by default empty string values are treated as NULLs.
-quoteCharEscaping
Possible values: none, escape, duplicate
Defines how quote characters that appear in the actual data are stored in the input file. You have to define a quote character in order for this option to have an effect. The character defined with the -quoteChar switch will then be imported according to the setting defined by this switch.
If escape is specified, it is expected that a quote that is part of the data is preceded with a backslash, e.g. the input value here is a \" quote character will be imported as here is a " quote character.
If duplicate is specified, it is expected that the quote character is duplicated in the input data. This is similar to the handling of single quotes in SQL literals. The input value here is a "" quote character will be imported as here is a " quote character.
-multiLine
Possible values: true, false
Enables support for records spanning more than one line in the input file. These records have to be quoted, otherwise they will not be recognized. If you create your exports with the WbExport command, it is recommended to encode special characters using the -escapeText switch rather than using multi-line records.
The default value for this parameter can be controlled in the settings file, and it will be displayed if you run WbImport without any parameters.

-decimal
The decimal symbol to be used for numbers. The default is a dot.

-header
Possible values: true, false
If set to true, indicates that the file contains a header line with the column names for the target table. This will also ignore the data from the first line of the file. If the column names to be imported are defined using the -fileColumns or the -importColumns switch, this parameter nevertheless has to be set to true, otherwise the first row would be treated as a regular data row. This parameter is always set to true when the -sourceDir parameter is used. The default value for this option can be changed in the settings file, and it will be displayed if you run WbImport without any parameters. It defaults to true.
-decode
Possible values: true, false This controls the decoding of escaped characters. If the export file was e.g. written with escaping enabled then you need to set -decode=true in order to interpret string sequences like \t, \n or escaped Unicode characters properly. This is not enabled by default because applying the necessary checks has an impact on the performance.
-columnFilter
This defines a filter on column level that selects only certain rows from the input file to be sent to the database. The filter has to be defined as column1="regex",column2="regex". Only rows matching all of the supplied regular expressions will be included in the import. This parameter is ignored when the -sourceDir parameter is used.
-lineFilter
This defines a filter on the level of the whole input row (rather than for each column individually). Only rows matching this regular expression will be included in the import. The complete content of the row from the input file will be used to check the regular expression. When defining the expression, remember that the (column) delimiter will be part of the input string of the expression.
-emptyStringIsNull
Possible values: true, false
Controls whether input values for character type columns with a length of zero are treated as NULL (value true) or as an empty string. The default value for this parameter is true.
Note that input values for non character columns (such as numbers or date columns) that are empty or consist only of whitespace will always be treated as NULL.
-trimValues

Possible values: true, false

Controls whether leading and trailing whitespace is removed from the input values before they are stored in the database. When used in combination with -emptyStringIsNull=true this means that a column value that contains only whitespace will be stored as NULL in the database. The default value for this parameter can be controlled in the settings file and it will be displayed if you run WbImport without any parameters.

Note that input values for non-character columns (such as numbers or date columns) are always trimmed before they are converted to their target datatype.

-blobIsFilename

Possible values: true, false

This is a deprecated parameter. Please use -blobType instead.

When exporting tables that have BLOB columns using WbExport into text files, each BLOB will be written into a separate file. The actual column data of the text file will contain the file name of the external file. When importing text files that do not reference external files into tables with BLOB columns, setting this parameter to false will send the content of the BLOB column "as is" to the DBMS. This will of course only work if the JDBC driver can handle the data in the BLOB columns of the text file. The default for this parameter is true. This parameter is ignored if -blobType is also specified.
-blobType
Possible values: file, ansi, base64

Specifies how BLOB data is stored in the input file. If file is specified, it is assumed that the column value contains a filename that in turn contains the real blob data. This is the default format when using WbExport. For the other two types, WbImport assumes that the blob data is stored as encoded character data in the column. If this parameter is specified, -blobIsFilename is ignored.
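A hypothetical import of base64-encoded BLOB data (file and table names are assumptions for illustration):

WbImport -file=c:/temp/product_images.txt
         -table=product_images
         -blobType=base64;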
-clobIsFilename
Possible values: true, false

When exporting tables that have CLOB columns using WbExport and the parameter -clobAsFile=true, the generated text file will not contain the actual CLOB contents, but a filename indicating the file in which the CLOB content is stored. In this case -clobIsFilename=true has to be specified in order to read the CLOB contents from the external files. The CLOB files will be read using the encoding specified with the -encoding parameter.
-table=person -filecolumns=lastname,firstname,$wb_skip$,birthday -dateformat="yyyy-MM-dd";

This will import a file with four columns. The third column in the file does not have a corresponding column in the table person, so it is specified as $wb_skip$ and will not be imported.

WbImport -file=c:/temp/contacts.txt
         -table=person
         -filecolumns=lastname,firstname,phone,birthday
         -importcolumns=lastname,firstname;

This will import a file with four columns where all columns exist in the target table. Only lastname and firstname will be imported. The same effect could be achieved by specifying $wb_skip$ for the last two columns and leaving out the -importcolumns switch. Using -importcolumns is a bit more readable because you can still see the structure of the input file. The version with $wb_skip$ is mandatory if the input file contains columns that do not exist in the target table.
This will import all files with the extension txt located in the directory c:/data/backup into the database. This assumes that each filename indicates the name of the target table.

WbImport -sourceDir=c:/data/backup
         -extension=txt
         -table=person
         -header=true

This will import all files with the extension txt located in the directory c:/data/backup into the table person regardless of the name of the input file. In this mode, the parameter -deleteTarget will be ignored.
-constantValues="type_id=$@{SELECT type_id FROM contact_type WHERE type_name = $4}" As the ID column is now populated through a constant expression, it may not appear in the -importColumns list. Again you could alternatively use -fileColumns=$wb_skip$, first_name, last_name, $wb_skip$ to make sure the columns that are populated through the -constantValue parameter are not taken from the input file.
-createTarget
The key columns defined with the -keyColumns parameter don't have to match the real primary key, but they should identify one row uniquely. You cannot use the update mode if the tables in question consist only of key columns (or if only key columns are specified). The values from the source are used to build up the WHERE clause for the UPDATE statement. If you specify a combined mode (e.g. update,insert) and one of the tables involved consists only of key columns, the import will revert to insert mode. In this case database errors during an INSERT are not considered as real errors and are silently ignored.

For maximum performance, choose the update strategy that will result in a successful first statement more often. As a rule of thumb:

Use -mode=insert,update, if you expect more rows to be inserted than updated.
Use -mode=update,insert, if you expect more rows to be updated than inserted.

To use insert/update or update/insert with PostgreSQL, make sure you have enabled savepoints for the import (which is enabled by default).
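As a sketch, an import where most rows are expected to be new (file, table and key column are illustrative assumptions):

WbImport -file=c:/temp/person.txt
         -table=person
         -keyColumns=person_id
         -mode=insert,update;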
-syncDelete
-keyColumns

Defines the key columns for the target table. This parameter is only necessary if the import is running in UPDATE mode. It is ignored when more than one table is specified with the -sourceTable argument. In that case each table must have a primary key.

-batchSize

Enable the use of the JDBC batch update feature by setting the size of the batch queue. Any value greater than 1 will enable batch mode. If the JDBC driver supports this, the INSERT (or UPDATE) performance can be increased. This parameter will be ignored if the driver does not support batch updates or if the mode is not a plain UPDATE or INSERT (i.e. if -mode=update,insert or -mode=insert,update is used).
-commitBatch
Valid values: true, false

When using the -batchSize parameter, the -commitEvery parameter is ignored (as not all JDBC drivers support a COMMIT inside a JDBC batch operation). When using -commitBatch=true, SQL Workbench/J will send a COMMIT to the database server after each JDBC batch is sent to the server.
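An illustrative batched import (the batch size, file and table names are assumptions):

WbImport -file=c:/temp/orders.txt
         -table=orders
         -batchSize=100
         -commitBatch=true;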
-continueOnError
Defines the behaviour if an error occurs in one of the statements. If this is set to true, the import will continue even if one statement fails. If set to false, the import will be halted on the first error. The default value is false.

With PostgreSQL, -continueOnError will only work if the use of savepoints is enabled using -useSavepoint=true.
-useSavepoint
Possible values: true, false

Controls whether SQL Workbench/J guards every insert or update statement with a savepoint, to recover from individual errors during the import when -continueOnError is set to true. Using a savepoint for each DML statement can drastically reduce the performance of the import.
-showProgress
Valid values: true, false, <numeric value>

Controls the update frequency in the statusbar (when running in GUI mode). By default, every 10th row is reported. To disable the display of the progress, specify a value of 0 (zero) or the value false. true will set the progress interval to 1 (one).
-checkDependencies
-sourceWhere
-targetTable
-createTarget

If this parameter is set to true, the target table will be created if it doesn't exist. Valid values are true or false.

When using this option with different source and target DBMS, the information about the datatypes to be used in the target database is retrieved from the JDBC driver. In some cases this information might not be accurate or complete. You can enhance the information from the driver by configuring your own mappings in workbench.settings. Please see the section Customizing data type mapping for details.
-tableType
When -createTarget is set to true, this parameter can be used to control the SQL statement that is generated to create the target table. This is useful if the target table should e.g. be a temporary table.

When using the auto-completion for this parameter, all defined "create types" that are configured in workbench.settings (or are part of the default settings) are displayed together with the name of the DBMS they are used for. The list is not limited to definitions for the target database! The specified type must nonetheless match a type defined for the target connection. If you specify a type that does not exist, the default CREATE TABLE will be used.

For details on how to configure a CREATE TABLE template for this parameter, please refer to the chapter Settings related to SQL statement generation.
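A sketch of copying into a newly created temporary table; this assumes a "create type" named temp has been configured for the target DBMS, and the profile and table names are illustrative:

WbCopy -sourceProfile=ProfileA
       -targetProfile=ProfileB
       -sourceTable=person
       -targetTable=person_tmp
       -createTarget=true
       -tableType=temp;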
-skipTargetCheck

Normally WbCopy will check if the specified target table exists. However, some JDBC drivers do not always return all table information correctly (e.g. for temporary tables). If you know that the target table exists, the parameter -skipTargetCheck=true can be used to tell WbCopy that the (column) definition of the source table should be assumed for the target table, and no further test for the target table will be done.

-dropTarget

If this parameter is set to true, the target table will be dropped before it is created.

-columns

Defines the columns to be copied. If this parameter is not specified, then all matching columns are copied from source to target. Matching is done on name and data type.

You can either specify a list of columns or a column mapping. When supplying a list of columns, the data from each column in the source table will be copied into the corresponding column (i.e. one with the same name) in the target table. If -createTarget=true is specified then this list also defines the columns of the target table to be created. The names have to be separated by comma: -columns=firstname, lastname, zipcode

A column mapping defines which column from the source table maps to which column of the target table (if the column names do not match). If -createTarget=true then the target table will be created from the specified target names: -columns=firstname/surname, lastname/name, zipcode/zip will copy the column firstname from the source table to a column named surname in the target table, and so on.

This parameter is ignored if more than one table is copied. When using a SQL query as the data source a mapping cannot be specified. Please check Copying data based on a SQL query for details.
-preTableStatement
-postTableStatement
This parameter defines a SQL statement that should be executed before the import process starts inserting data into the target table. The name of the current table (when e.g. importing a whole directory) can be referenced using ${table.name}. To define a statement that should be executed after all rows have been inserted and have been committed, you can use the -postTableStatement parameter.
These parameters can e.g. be used to enable identity insert for MS SQL Server:

-preTableStatement="set identity_insert ${table.name} on"
-postTableStatement="set identity_insert ${table.name} off"

Errors resulting from executing these statements will be ignored. If you want to abort the import in that case you can specify -ignorePrePostErrors=false and -continueOnError=false.
-ignorePrePostErrors

Controls handling of errors for the -preTableStatement and -postTableStatement parameters. If this is set to true (the default), errors resulting from executing the supplied statements are ignored. If set to false, error handling depends on the parameter -continueOnError.
The automatic fallback from update,insert or insert,update mode to insert mode applies when synchronizing tables using WbCopy as well.
13.6. Examples
13.6.1. Copy one table to another where all column names match
WbCopy -sourceProfile=ProfileA -targetProfile=ProfileB -sourceTable=the_table -targetTable=the_other_table;
-targetProfile=ProfileB -sourceQuery="SELECT firstname, lastname, birthday FROM person" -targetTable=contacts -deleteTarget=true -columns=surname, name, dob; This copies the data based on the SELECT statement into the table CONTACTS of the target database. The -columns parameter defines that the first column of the SELECT (firstname) is copied into the target column with the name surname, the second result column (lastname) is copied into the target column name and the last source column (birthday) is copied into the target column dob. This example could also be written as:
WbCopy -sourceProfile=ProfileA
       -targetProfile=ProfileB
       -sourceQuery="SELECT firstname AS surname, lastname AS name, birthday AS dob FROM person"
       -targetTable=contacts
       -deleteTarget=true
-excludeTableNames
-schemas
-xsltOutput

The name of the generated output file when applying the XSLT transformation.
Select whether table triggers are compared as well. The default value is true.

Select whether table and column (check) constraints should be compared as well. SQL Workbench/J compares the constraint definition (SQL) as stored in the database. The default is to compare table constraints (true). Valid values are true or false.
-useConstraintNames
When including check constraints, this parameter controls whether constraints should be matched by name, or only by their expression. If comparing by names is enabled, the diff output will contain elements for constraint modification, otherwise only drop and add entries will be available. The default is to compare by names (true). Valid values are true or false.
-includeViews
Select whether views should also be compared. When comparing views, the source as it is stored in the DBMS is compared. This comparison is case-sensitive, which means SELECT * FROM foo; will be reported as a difference to select * from foo; even if they are logically the same. A comparison across different DBMS will also not work properly! The default is true. Valid values are true or false.
-includeProcedures
Select whether stored procedures should also be compared. When comparing procedures, the source as it is stored in the DBMS is compared. This comparison is case-sensitive. A comparison across different DBMS will also not work! The default is false. Valid values are true or false.
Select whether indexes should be compared as well. The default is to not compare index definitions. Valid values are true or false.

Select whether sequences should be compared as well. The default is to not compare sequences. Valid values are true, false.

Define whether to compare the DBMS specific data types, or the JDBC data type returned by the driver. When comparing tables from two different DBMS it is recommended to use -useJdbcType=true as this will make the comparison a bit more DBMS-independent. When comparing e.g. Oracle vs. PostgreSQL, a column defined as VARCHAR2(100) in Oracle would be reported as being different to a VARCHAR(100) column in PostgreSQL, which is not really true. As both drivers report the column as java.sql.Types.VARCHAR, they would be considered identical when using -useJdbcType=true. Valid values are true or false.

-stylesheet

Apply a XSLT transformation to the generated XML file.

-xsltOutput

The name of the generated output file when applying the XSLT transformation.
WbDataDiff requires that all involved tables have a primary key defined. If a table does not have a primary key, WbDataDiff will stop the processing.

To improve performance (a bit), the rows are retrieved in chunks from the target table by dynamically constructing a WHERE clause for the rows that were retrieved from the reference table. The chunk size can be controlled using the property workbench.sql.sync.chunksize. The chunk size defaults to 25. This is a conservative setting to avoid problems with long SQL statements when processing tables that have a PK with multiple columns. If you know that your primary keys consist only of a single column and the values won't be too long, you can increase the chunk size, possibly increasing the performance when generating the SQL statements. As most DBMS have a limit on the length of a single SQL statement, be careful when setting the chunk size too high. The same chunk size is applied when generating DELETE statements by the WbCopy command, when syncDelete mode is enabled.

The command supports the following parameters:

-referenceProfile

The name of the connection profile for the reference connection. If this is not specified, then the current connection is used.

-referenceGroup

If the name of your reference profile is not unique across all profiles, you will need to specify the group in which the profile is located with this parameter. If the profile's name is unique you can omit this parameter.

-targetProfile

The name of the connection profile for the target connection (the one that needs to be migrated). If this is not specified, then the current connection is used. If you use the current connection for reference and target, then you should prefix the table names with schema/user or use the -referenceSchema and -targetSchema parameters.

-targetGroup

If the name of your target profile is not unique across all profiles, you will need to specify the group in which the profile is located with this parameter.

-file

The filename of the main script file. The command creates two scripts per table: one script named update_<tablename>.sql that contains all needed UPDATE or INSERT statements, and a second script named delete_<tablename>.sql that contains all DELETE statements for the target table. The main script merely calls (using WbInclude) the generated scripts for each table.

-referenceTables

A (comma separated) list of tables that are the reference tables to be checked. You can specify the tables with wildcards, e.g. -referenceTables=P% to compare all tables that start with the letter P.

-targetTables

A (comma separated) list of tables in the target connection to be compared to the source tables. The tables are "matched" by their position in the list: the first table in the -referenceTables parameter is compared to the first table in the -targetTables parameter, and so on. Using this parameter you can compare tables that do not have the same name. If you omit this parameter, then all tables from the target connection with the same names as those listed in -referenceTables are compared. If you omit both parameters, then all tables that the user can access are retrieved from the source connection and compared to the tables with the same name in the target connection.

-referenceSchema

Compare all tables from the specified schema (user).

-targetSchema

A schema in the target connection to be compared to the tables from the reference schema.

-checkDependencies

Valid values are true, false. Sorts the generated scripts in order to respect foreign key dependencies for deleting and inserting rows. The default is true.

-includeDelete

Valid values are true, false. Generates DELETE statements for rows that are present in the target table, but not in the reference table. The default is false.
-type
Valid values are sql, xml. Defines the type of the generated files.
-encoding
The encoding to be used for the SQL scripts. The default depends on your operating system. It will be displayed when you run WbDataDiff without any parameters. You can overwrite the platform default with the property workbench.encoding in the file workbench.settings. XML files are always stored in UTF-8.
-sqlDateLiterals
Valid values: jdbc, ansi, dbms, default

Controls the format in which the values of DATE, TIME and TIMESTAMP columns are written into the generated SQL statements. For a detailed description of the possible values, please refer to the WbExport command.
-ignoreColumns

With this parameter you can define a list of column names that should not be considered when comparing data. You can e.g. exclude columns that store the last access time of a row, or the last update time if that should not be taken into account when checking for changes.

-showProgress

Valid values: true, false, <numeric value>

Controls the update frequency in the statusbar (when running in GUI mode). By default, every 10th row is reported. To disable the display of the progress, specify a value of 0 (zero) or the value false. true will set the progress interval to 1 (one).
WbDataDiff Examples
Compare all tables between two connections, and write the output to the file migrate_staging.sql, but do not generate DELETE statements:

WbDataDiff -referenceProfile="Production"
           -targetProfile="Staging"
           -file=migrate_staging.sql
           -includeDelete=false

Compare a list of matching tables between two databases and write the output to the file migrate_staging.sql including DELETE statements:

WbDataDiff -referenceProfile="Production"
           -targetProfile="Staging"
           -referenceTables=person,address,person_address
           -file=migrate_staging.sql
           -includeDelete=true

Compare three tables that are differently named in the target database and ignore all columns (regardless of the table in which they appear) that are named LAST_ACCESS or LAST_UPDATE:

WbDataDiff -referenceProfile="Production"
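A hedged sketch only of how such a call could look in full; the target table names and parameter values below are illustrative assumptions, not the original example:

WbDataDiff -referenceProfile="Production"
           -targetProfile="Staging"
           -referenceTables=person,address,person_address
           -targetTables=t_person,t_address,t_person_address
           -ignoreColumns=last_access,last_update
           -file=migrate_staging.sql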
The functionality of the WbGrepSource command is also available through a GUI at Tools > Search in object source.
The value to be searched for.

Valid values are true, false. When set to true, the comparison is done case-insensitive ("ARTHUR" will match "Arthur" or "arthur"). The default for this parameter is true.
-compareType
Valid values are contains, equals, matches, startsWith

When specifying matches, the search value is used as a regular expression. A column is included in the search result if the regular expression is contained in the column value (not only when the column value matches the regular expression entirely). The default for this parameter is contains.
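For illustration, a hypothetical regular-expression search (the table name and the pattern are assumptions, and -searchValue refers to the search value parameter described above):

WbGrepData -searchValue="[0-9]{5}"
           -compareType=matches
           -tables=person;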
-tables

A list of table names to be searched. These names may contain SQL wildcards, e.g. -tables=PER%,NO%. If you want to search in different schemas, you need to prefix the table names, e.g. -tables=schema1.p%,schema2.n%.

-types

By default WbGrepData will search all tables and views (including materialized views). If you want to search only one of those types, this can be specified with the -types parameter. Using -types=table will only search table data and skip views in the database.

-excludeTables

A list of table names to be excluded from the search. If e.g. the wildcard for -tables would select too many tables, you can exclude individual tables with this parameter. The parameter values may include SQL wildcards. -tables=p% -excludeTables=product_details,product_images would process all tables starting with P but not the product_details and the product_images tables.
-excludeLobs
If this parameter is set to true, CLOB and BLOB columns will not be retrieved at all. This is useful to reduce the memory that is needed if you retrieve a lot of rows from tables with columns of those types. If this switch is set to true, the content of CLOB columns will not be searched.
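A short illustrative call that skips LOB columns while searching (the search value is an assumption):

WbGrepData -searchValue=Arthur
           -excludeLobs=true;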
ANSWER
------
42

(1 Row)

Converted procedure call to JDBC syntax: {call return_answer(?)}

Execution time: 0.453s
SQL>
CREATE PROCEDURE ref_cursor_example(pid number, person_result out sys_refcursor, addr_result out sys_refcursor)
BEGIN
  OPEN person_result FOR
    SELECT * FROM person WHERE person_id = pid;
  OPEN addr_result FOR
    SELECT a.* FROM address a JOIN person p ON a.address_id = p.address_id
    WHERE p.person_id = pid;
END;
/

To call this procedure you use the same syntax as with a regular OUT parameter:

WbCall ref_cursor_example(42, ?, ?);

SQL Workbench/J will display two result tabs, one for each cursor returned by the procedure. If you use WbCall ref_cursor_example(?, ?, ?) you will be prompted to enter a value for the first parameter (because that is an IN parameter).
You can call this function using:

WbCall refcursorfunc();

This will then display the result from the SELECT inside the function.
-delimiter
-encoding
-verbose
-useSavepoint
-ignoreDropErrors
Execute my_script.sql
@my_script.sql;

Execute my_script.sql but abort on the first error:

WbInclude -file="my_script.sql" -continueOnError=false;
-changeSet
-author
-continueOnError
-encoding
Assuming you have an updateable view called v_person where the primary key is the column person_id: when you simply do a SELECT * FROM v_person, SQL Workbench/J will prompt you for the primary key when you try to save changes to the data. If you run

WbDefinePk v_person=person_id

before retrieving the result, SQL Workbench/J will automatically use the person_id as the primary key (just as if this information had been retrieved from the database).

To delete a definition simply call the command with an empty column list:

WbDefinePk v_person=

If you want to define certain mappings permanently, this can be done using a mapping file that is specified in the configuration file. The file specified has to be a text file with each line containing one primary key definition in the same format as passed to this command. For example:

v_person=person_id
v_data=id1,id2

will define a primary key for the view v_person and one for the view v_data.

The global mapping will automatically be saved when you exit the application if a filename has been defined. If no file is defined, then all PK mappings that you define are lost when exiting the application (unless you explicitly save them using WbSavePkMap).

The definitions stored in that file can be overwritten using the WbDefinePk command, but those changes won't be saved to the file. This file will be read for all database connections and is not profile specific. If you have conflicting primary key definitions for different databases, you'll need to execute the WbDefinePk command each time, rather than specifying the keys in the mapping file.

When you define the key columns for a table through the GUI, you have the option to remember the defined mapping. If this option is checked, then that mapping will be added to the global map (just as if you had executed WbDefinePk manually). The mappings will be stored with lowercase table names internally, regardless of how you specify them.
The following script changes the default fetch size to 2500 rows and then runs a WbExport command:

WbFetchSize 2500;
WbExport -sourceTable=person
         -type=text
         -file=/temp/person.txt;

WbFetchSize will not change the current connection profile.
14.18.1. FEEDBACK
SET feedback ON/OFF is equivalent to the WbFeedback command, but mimics the syntax of Oracle's SQL*Plus utility.
14.18.2. SERVEROUTPUT
SET serveroutput on is equivalent to the ENABLEOUT command and SET serveroutput off is equivalent to the DISABLEOUT command.
14.18.3. AUTOCOMMIT
With the command SET autocommit ON/OFF autocommit can be turned on or off for the current connection. This is equivalent to setting the autocommit property in the connection profile or toggling the state of the SQL Autocommit menu item.
14.18.4. MAXROWS
Limits the number of rows returned by the next statement. The behaviour of this command is a bit different between console mode and GUI mode. In console mode, the maxrows value stays in effect until you explicitly change it back using SET maxrows again. In GUI mode, the maxrows setting is only in effect for the script currently being executed and will only temporarily overwrite any value entered in the "Max. Rows" field.
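A small sketch (the table name is illustrative):

SET maxrows 10;
SELECT * FROM person;

The SELECT above would return at most 10 rows.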
Parameters for the WbMode command are:

reset - Resets the flags to the profile's definition
normal - Makes all changes possible (turns off read only and confirmations)
confirm - Enables confirmation for all updating commands
readonly - Turns on the read only mode
The following example will turn on read only mode for the current connection, so that any subsequent statement that updates the database will be ignored:

WbMode readonly;

To change the current connection back to the settings from the profile use:

WbMode reset;
You can limit the list by supplying a wildcard search for the name, e.g.:

WbListProcs public.p%
When this command is entered directly in the commandline of the console mode, the current connection is closed and the new connection is kept open until the application ends, or a new connection is established using WbConnect on the commandline again.

The command supports the following parameters:

-profile

Defines the profile to connect to. If this parameter is specified all other parameters are ignored.
Alternatively, instead of -profile, the connection can be specified with the following parameters:

-url

The JDBC connection URL.

-username

Specify the username for the DBMS.

-password

Specify the password for the user.

-driver

Specify the full class name of the JDBC driver.

-driverJar

Specify the full pathname to the .jar file containing the JDBC driver.

-autocommit

Set the autocommit property for this connection. You can also control the autocommit mode from within your script by using the SET AUTOCOMMIT command.

-rollbackOnDisconnect

If this parameter is set to true, a ROLLBACK will be sent to the DBMS before the connection is closed. This setting is also available in the connection profile.

-trimCharData

Turns on right-trimming of values retrieved from CHAR columns. See the description of the profile properties for details.

-removeComments

This parameter corresponds to the Remove comments setting of the connection profile.

-fetchSize

This parameter corresponds to the Fetch size setting of the connection profile.

-ignoreDropError

This parameter corresponds to the Ignore DROP errors setting of the connection profile.
If none of the parameters is supplied when running the command, it is assumed that any value after WbConnect is the name of a connection profile, e.g.:

WbConnect production

will connect using the profile named production, and is equivalent to:

WbConnect -profile=production
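A hedged example of a URL-based connection; the URL, driver class, file path and credentials are placeholders, not values from this manual:

WbConnect -url=jdbc:postgresql://localhost/mydb
          -username=scott
          -password=secret
          -driver=org.postgresql.Driver
          -driverJar=c:/drivers/postgresql.jar;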
-xsltParameters

A list of parameters (key/value pairs) that should be passed to the XSLT processor. When using e.g. the wbreport2liquibase.xslt stylesheet, the value of the author attribute can be set using -xsltParameters="authorName=42".
15. DataPumper
15.1. Overview
The export and import features are useful if you cannot connect to the source and the target database at once. If your source and target are both reachable at the same time, it is more efficient to use the DataPumper to copy data between two systems. With the DataPumper no intermediate files are necessary, which can be an advantage especially with large tables.

To open the DataPumper, select Tools > DataPumper.

The DataPumper lets you copy data from a single table (or SELECT query) to a table in the target database. The mapping between source columns and target columns can be specified as well.

Everything that can be done with the DataPumper can also be accomplished with the WbCopy command. The DataPumper can also generate a script which executes the WbCopy command with the correct parameters according to the current settings in the window. This can be used to create scripts which copy several tables.

The DataPumper can also be started as a stand-alone application - without the main window - by specifying -datapumper=true in the command line when starting SQL Workbench/J. You can also use the supplied Windows executable DataPumper.exe or the Linux/Unix shell script datapumper.

When opening the DataPumper from the main window, the main window's current connection will be used as the initial source connection. You can disable the automatic connection upon startup with the property workbench.datapumper.autoconnect in the workbench.settings file.
After both tables are selected, the middle part of the window will display the available columns from the source and target table. This grid display represents the column mapping between source and target table.
For maximum performance, choose the update strategy that will result in a successful first statement more often. As a rule of thumb:

Use -mode=insert,update, if you expect more rows to be inserted than updated.
Use -mode=update,insert, if you expect more rows to be updated than inserted.
Export data
This will execute a WbExport command for the currently selected table(s). Choosing this option is equivalent to doing a SELECT * FROM table; and then executing SQL > Export query result from the SQL editor in the main window. See the description of the WbExport command for details. When using this function, the customization for datatypes is not applied to the generated SELECT statement.
Drop
Drops the selected objects. If at least one object is a table, and the currently used DBMS supports cascaded dropping of constraints, you can enable cascaded delete of constraints. If this option is enabled SQL Workbench/J would generate e.g. for Oracle a DROP TABLE mytable CASCADE CONSTRAINTS. This is necessary if you want to drop several tables at the same time that have foreign key constraints defined. If the current DBMS does not support a cascading drop, you can order the tables so that foreign keys are detected and the tables are dropped in the right order by clicking on the Check foreign keys button. If the checkbox "Add missing tables" is selected, any table that should be dropped before any of the selected tables (because of foreign key constraints) will be added to the list of tables to be dropped.
Delete data
Deletes all rows from the selected table(s) by sending a DELETE FROM table_name; to the server for each selected table. If the DBMS supports TRUNCATE then this can be done with TRUNCATE as well. Using TRUNCATE is usually faster, as no transaction state is maintained.

The list of tables is sorted according to the sort order in the table list. If the tables have foreign key constraints, you can re-order them to be processed in the correct order by clicking on the Check foreign keys button.

If the checkbox "Add missing tables" is selected, any table that should be deleted before any of the selected tables (because of foreign key constraints) will be added to the list of tables.
ALTER script
After you have changed the name of a table in the list of objects, you can generate and run a SQL script that will apply that change to the database. For details please refer to the section Changing table definitions
If editing the name or comment is rejected, the necessary SQL statements have not been configured for your DBMS. If your DBMS does support changing the object type in question, please send a mail with the necessary information to the support email address.

Once you have changed a name (or several), the menu item "ALTER script" in the context menu of the object list will display a window with the necessary SQL statements to apply your changes. You can save the generated script to a file or run the statements directly from that window.
The column order will be stored using the fully qualified table name and the current connection's JDBC URL as the lookup key. To reset the column order use the menu item Reset column order from the popup menu. This will revert the column order to the order in which the columns appear in the source table. The saved order will be deleted as well.
workbench.db.oracle.selectexpression.xmltype=extract(${column}, '/').getClobVal() AS ${column}

In order for SQL Workbench/J to parse the SQL statement correctly, the AS keyword must be used. You can check the generated SELECT statement by using the Put SELECT into feature. The statement that is generated and put into the editor is the same as the one used for the data retrieval.

The defined expression will also be used for the Search table data feature, when using the server side search. If you want to search inside the data that is returned by the defined expression, you have to make sure that your DBMS supports the result of that expression as part of a LIKE expression. E.g. for the above Oracle example, SQL Workbench/J will generate the following WHERE condition:

WHERE to_clob(my_clob_col) LIKE '%searchvalue%'
The results displayed here are not editable. If you want to modify the results after a search, you have to use the WbGrepData command.

Two different implementations of the search are available: server side and client side.
The character representation that is used is based on the default formatting options from the Options window. This means that e.g. a DATE column will be formatted according to the standard formatting options before the comparison is done.

The client side search is also available through the WbGrepData command.
The memory that is available to the application is limited by the Java virtual machine to ensure that applications don't use all available memory, which could potentially make a system unusable. If you retrieve large resultsets from the database, you may receive an error message indicating that the application does not have enough memory to store the data. Please refer to Increasing the memory for details on how to increase the memory that is available to SQL Workbench/J.
By default Oracle's JDBC driver does not return comments made on columns or tables (COMMENT ON ..). Thus your comments will not be shown in the database explorer.

To enable the display of column comments, you need to pass the property remarksReporting to the driver. In the profile dialog, click on the Extended Properties button. Add a new property in the following window with the name remarksReporting and the value true. Now close the dialog by clicking on the OK button.

Turning on this feature slows down the retrieval of table information, e.g. in the Database Explorer.

When you have comments defined in your Oracle database and use the WbSchemaReport command, then you have to enable the remarks reporting, otherwise the comments will not show up in the report.
Unfortunately there is no real solution to blocking transactions, e.g. between a SQL tab and the DbExplorer. One (highly discouraged) solution is to run in autocommit mode, the other to have only one connection for all tabs (thus all of them share the same transaction and the DbExplorer cannot be blocked by a different SQL tab).

The Microsoft JDBC driver supports a connection property called lockTimeout. It is recommended to set that to 0 (zero) or a similarly low value. If that is done, calls to the driver's API will throw an error if they encounter a lock rather than waiting until the lock is released.

The jTDS driver does not support such a property. If you are using the jTDS driver, you can define a post-connect script that runs SET LOCK_TIMEOUT 0.
The connection properties for the DB2 JDBC driver are documented here: http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/index.jsp?topic=/com.ibm.db2.luw.apdv.java.doc/doc/ r0052038.html http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/topic/com.ibm.db2.luw.apdv.java.doc/doc/r0052607.html The example claims that this property is only needed for z/OS, but it works as described for LUW as well.
workbench.db.postgresql.sql.usesavepoint=true

in the file workbench.settings. If this is enabled, SQL Workbench/J will set a savepoint before running each statement and will release the savepoint after the statement. If the statement failed, a rollback to the savepoint will be issued that will mark the transaction as "clean" again. So in the above example (with sql.usesavepoint set to true), the last statement would be rolled back automatically, but the first two INSERTs can be committed (this also requires that the "Ignore errors" option is enabled).

If you want to use the modes update/insert or insert/update for WbImport, you should also add the property:

workbench.db.postgresql.import.usesavepoint=true

to enable the usage of savepoints during imports. This setting also affects the WbCopy command.

You can also use the parameter -useSavepoint for the WbImport and WbCopy commands to control the use of savepoints for each import. Using savepoints can slow down the import substantially.
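An illustrative import that enables savepoints for a single run (file and table names are assumptions):

WbImport -file=/tmp/person.txt
         -table=person
         -useSavepoint=true
         -continueOnError=true;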
The editor always uses "unix" line endings internally. If you select a different value for this property, SQL Workbench/J will convert the SQL statements to use the desired line ending before sending them to the DBMS. As this can slow down the execution of statements, it is highly recommended to leave the default setting of Unix line endings. You should only change this if your DBMS does not understand the single linefeed character (ASCII value 10) properly.
Content only
Nothing
Letter - Description
M - Month in year (Number)
w - Week in year (Number)
W - Week in month (Number)
D - Day in year (Number)
d - Day in month (Number)
F - Day of week in month (Number)
E - Day in week (Text)
a - AM/PM marker
H - Hour in day (0-23)
k - Hour in day (1-24)
K - Hour in am/pm (0-11)
h - Hour in am/pm (1-12)
m - Minute in hour
s - Second in minute
S - Milliseconds
z - General time zone (e.g. Pacific Standard Time; PST; GMT-08:00)
Z - RFC 822 time zone (e.g. -0800)
18.11.5. Separator
If you select to display the current profile's name and group, you can select the character that separates the two names.
SELECT p.name,
       p.firstname,
       a.city,
       a.zip
FROM person p
  JOIN address a ON (p.person_id = a.person_id);

The above example would list all columns in a single line, if this option is set to 4 (or a higher value):
SELECT p.name, p.firstname, a.city, a.zip FROM person p JOIN address a ON (p.person_id = a.person_id);
INSERT INTO PERSON
(
  id,
  firstname,
  lastname
)
VALUES
(
  42,
  'Arthur',
  'Dent'
);

When setting this value to 2, the above example would be formatted as follows:

INSERT INTO PERSON
  (id, firstname, lastname)
VALUES
  (42, 'Arthur', 'Dent');
UPDATE person
   SET firstname = 'Arthur',
       lastname = 'Dent'
WHERE id = 42;

With a value of 2, the above example would be formatted as follows:
Column threshold
If the number of columns in the statement exceeds this value, the columns will be spread over several lines. The number of columns that are put into each line is controlled using the option "Columns per line".
If the description for a property in this chapter refers to a "Database Identifier", the text between (but not including) the square brackets has to be used.
20.2. DBID
For some settings, where the ID is part of the property's key, a "clean" version of the Database Identifier, called the DBID, is used. This DBID is displayed in the connection info dialog (right click on the connection URL in the main window, then choose "Connection Info").

The DBID is also reported in the log file:

INFO 15.08.2004 10:24:42 Using DBID=hsql_database_engine
If the description for a property in this chapter refers to the "DBID", then this value has to be used. If the DBID is part of a property key this will be referred to as [dbid] in this chapter.
Possible values: true, false

When printing the contents of a table, this setting controls the type of print dialog to be used. The default setting will open the native print dialog of the operating system. If you experience problems when trying to print, set this property to false. SQL Workbench/J will then open a cross-platform print dialog.

Default value: true
DbExplorer's "Search table data" feature only includes columns with the datatypes CHAR and VARCHAR in the WHERE clause for searching. Some database systems allow CLOB columns to be searched using a LIKE expression as well. This property can be used to list all datatypes that can be used in a LIKE condition.

Default values:
For PostgreSQL: text
For MySQL: longtext,tinytext,mediumtext
Default: true
COMMIT/ROLLBACK behaviour
Property: workbench.db.[dbid].usejdbccommit Possible values: true, false Some DBMS return an error when COMMIT or ROLLBACK is sent as a regular command through the JDBC interface. If the DBMS is listed here, the JDBC functions commit() or rollback() will be used instead. Default: false
No default
Filtering tables
Property: workbench.db.[dbid].exclude.tables

Whenever SQL Workbench/J retrieves a list of tables (e.g. for the DbExplorer, auto completion, WbSchemaReport), certain tables can be filtered out by supplying a regular expression in this property. The default setting will filter Oracle tables that reside in the "Recycle bin". This setting can be applied on a per DBMS basis.

Default value: workbench.db.oracle.exclude.tables=^BIN\\$.*

Note that you need to use two backslashes in the regular expression.
Filtering synonyms
Property: workbench.db.[dbid].exclude.synonyms

The database explorer and the auto completion can display (Oracle public) synonyms. Some of these are usually not of interest to the end user. Therefore the list of displayed synonyms can be controlled. This property defines a regular expression. Each synonym that matches this regular expression will be excluded from the list presented in the GUI.

Default value (for Oracle): ^AQ\\$.*|^MGMT\\$.*|^GV\\$.*|^EXF\\$.*|^KU\\$_.*|^WM\\$.*|^MRV_.*|^CWM_.*|^CWM2_.*|^WK\\$_.*|^CTX_.*

Note that you need to use two backslashes in the regular expression.
Possible values: true, false

Some DBMS (such as PostgreSQL) cannot continue inside a transaction when an error occurs. A script with multiple DML statements can therefore not run completely if one statement fails, even if you choose to ignore the error. If this property is set to true, SQL Workbench/J will set a savepoint before executing a DML statement (SELECT, INSERT, UPDATE, DELETE). In case of an error the savepoint will be rolled back and the transaction can continue.

Default value: false
When using the -createTarget parameter for WbCopy, the type mapping from the JDBC driver might not be sufficient or correct. With this setting you can define your own type mapping for a specific DBMS. The entry is a list of mappings that map the numeric value of a JDBC datatype (as defined in java.sql.Types) to a real data type name for the DBMS. The numeric JDBC datatype value and the DBMS specific datatype name are separated with a colon. Each pair is separated by a semicolon.

The following entry maps the JDBC datatype with the value 3 (DECIMAL) to the target datatype DOUBLE, and the value 2 (NUMERIC) to the target type NUMBER. The NUMBER datatype uses the two parameter placeholders $size and $digits. The last mapping maps the JDBC value -1 (LONGVARCHAR) to the DBMS type VARCHAR, using only the $size parameter:

workbench.db.[some_db].typemap=3:DOUBLE;2:NUMBER($size,$digits);-1:VARCHAR($size)

JDBC 4.0 defines the following constants:

BIGINT = -5
BINARY = -2
BIT = -7
BLOB = 2004
BOOLEAN = 16
CHAR = 1
NCHAR = -15
CLOB = 2005
NCLOB = 2011
DATE = 91
DECIMAL = 3
DOUBLE = 8
FLOAT = 6
INTEGER = 4
LONGVARBINARY = -4
LONGVARCHAR = -1
LONGNVARCHAR = -16
NUMERIC = 2
REAL = 7
SMALLINT = 5
TIME = 92
TIMESTAMP = 93
TINYINT = -6
VARBINARY = -3
VARCHAR = 12
NVARCHAR = -9
ROWID = -8
SQLXML = 2009
Default: 1048576
Possible values: true, false

This property controls whether XML exports are done using verbose XML or short tags and only basic formatting. This property sets the default value of the -verbosexml parameter for the WbExport command.

Default: true
Log level
Property: workbench.log.level

Set the log level for the log file. Valid values are: DEBUG, INFO, WARN, ERROR
Default: INFO
Log format
Property: workbench.log.format

Define the elements that are included in log messages. The following placeholders are supported:

{type}
{timestamp}
{message}
{error}
{source}
{stacktrace}
This property does not define the layout of the message, only the elements that are logged. If the log level is set to DEBUG, the stacktrace will always be displayed even if it is not included in the format string. If you want more control over the log file and the format of the message, please switch the logging to use Log4J. Default: {type} {timestamp} {message} {error}
Default: false
Define a list of schemas that should be ignored for the given DBID. When SQL Workbench/J creates DML statements and the current table is reported to belong to any of the schemas listed in this property, the schema will not be used to qualify the table. To ignore all schemas use a *, e.g. workbench.sql.ignoreschema.rdb=*. In this case, table names will never be prefixed with the schema name reported by the JDBC driver. The values specified in this property are case sensitive. Note that for Oracle, tables that are owned by the current user will never be prefixed with the owner.

Default values:
.oracle=PUBLIC
.postgresql=public
.rdb=*
mysql: PRIMARY
Default value: 15
Index
B
Batch files
  connecting, 52
  setting SQL Workbench/J configuration properties, 55
  specify SQL script, 52
  starting SQL Workbench/J, 52
C
Command line
  connection profile, 14
  JDBC connection, 15
  parameters, 13

Configuration
  JDBC driver, 17

Connection profile, 20
  autocommit, 21
  connection URL, 21
  create, 20
  default fetch size, 21
  delete, 20
  extended properties, 21
  separate connection, 22
  separate session, 22
  timeout, 21
D
DB2
  Problems, 127

DbExplorer
  show all triggers, 139

DDL
  Execute DDL statements, 36
E
Excel export
  installation, 62, 123

Export
  clipboard, 47
  compress, 72
  Excel, 71
  HTML, 71
  memory problems, 62
  OpenOffice, 71
  parameters, 63
  result set, 46
  Spreadsheet, 71
  SQL INSERT script, 69
  SQL query result, 62
  SQL UPDATE script, 69
  table, 62
  text files, 67
  XML files, 69
I
Import
  clipboard, 48
  csv, 75
  flat files, 75
  parameters, 75
  tab separated, 75
  XML, 75
J
JDBC driver
  class name, 17
  jar file, 17
  library, 17
  sample URL, 17
L
Liquibase
  Run SQL from Liquibase file, 104
M
Microsoft SQL Server
  Problems, 126

MySQL
  display table comments in DbExplorer, 149
  problems, 125
O
ODBC
  datasource, 17
  driver, 17
  jdbc url, 17

Oracle
  database comments, 124
  DATE datatype, 138
  dbms_output, 111
  Problems, 124
P
PostgreSQL
  Problems, 128

Problems
  create stored procedure, 123
  create trigger, 123
  driver not found, 123
  Excel export not possible, 123
  IBM DB2, 127
  memory usage during export, 62
  Microsoft SQL Server, 126
  MySQL, 126
  Oracle, 124
  out of memory, 123
  PostgreSQL, 128
  Sybase SQL Anywhere, 129
  timestamp with timezone, 123
  timezone, 123
S
Stored procedures
  create stored procedure, 36
T
Triggers
  create trigger, 36
  show all triggers in DbExplorer, 139
W
Windows
  32bit, 11
  64bit, 11
  using the launcher, 11