Talend Data Integration Studio
User Guide 6.3.1
Notices
Talend is a trademark of Talend, Inc.
All brands, product names, company names, trademarks and service marks are the properties of their respective
owners.
1. General information
1.1. Purpose
This User Guide explains how to manage Talend Data Integration Studio functions in a normal
operational context.
Information presented in this document applies to Talend Data Integration Studio 6.3.1.
1.2. Audience
This guide is for users and administrators of Talend Data Integration Studio.
The layout of GUI screens provided in this document may vary slightly from your actual GUI.
• text in bold: window and dialog box buttons and fields, keyboard keys, menus, and menu options,
• The icon indicates an item that provides additional information about an important point. It is also used to add comments related to a table or a figure,
• The icon indicates a message that gives information about the execution requirements or recommendation type. It is also used to refer to situations or information the end-user needs to be aware of or pay special attention to.
http://talendforge.org/forum
A third reason is the multiplication of data storage formats (XML files, positional flat files, delimited flat files,
multi-valued files and so on), protocols (FTP, HTTP, SOAP, SCP and so on) and database technologies.
A question arises from these statements: how can this data, scattered throughout the company's information systems, be properly integrated? Various functions lie behind the data integration principle: business intelligence
or analytics integration (data warehousing) and operational integration (data capture and migration, database
synchronization, inter-application data exchange and so on).
Both ETL for analytics and ETL for operational integration needs are addressed by Talend Studio.
Furthermore, industrialization features and extended monitoring capabilities are also offered in Talend Studio.
• Packaged applications (ERP, CRM, etc.), databases, mainframes, files, Web Services, and so on to address the
growing disparity of sources.
• Data warehouses, data marts, OLAP applications - for analysis, reporting, dashboarding, scorecarding, and so
on.
• Built-in advanced components for ETL, including string manipulations, Slowly Changing Dimensions,
automatic lookup handling, bulk loads support, and so on.
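As an illustration of the string manipulations such components perform, a transformation can also be written as a user routine in Java, the language Talend routines are written in. The following is a minimal sketch; the class and method names are hypothetical examples, not routines shipped with the Studio.

  // Hypothetical user routine: the class and method names are
  // illustrative only, not Talend-provided system routines.
  public class MyStringRoutines {

      // Normalizes a raw value: trims leading/trailing whitespace,
      // collapses internal runs of spaces and upper-cases the result.
      // Null is returned unchanged so downstream components can handle it.
      public static String normalizeName(String raw) {
          if (raw == null) {
              return null;
          }
          return raw.trim().replaceAll("\\s+", " ").toUpperCase();
      }
  }

In a Job, such a routine would typically be called from a component expression, for example MyStringRoutines.normalizeName(row1.name) in a tMap output.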
Most connectors addressing each of the above needs are detailed in Talend Components Reference Guide. For
information about their orchestration in Talend Studio, see Designing a Job. For high-level business-oriented
modeling, see Designing a Business Model.
Data migration/loading and data synchronization/replication are the most common applications of operational data
integration, and often require:
• Complex mappings and transformations with aggregations, calculations, and so on due to variation in data
structure,
• Conflicts of data to be managed and resolved taking into account record update precedence or "record owner",
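As a minimal sketch of the "record owner" precedence rule mentioned above, the following plain Java illustrates the resolution logic; the Record type and its fields are assumptions made for this example, not part of any Talend API.

  import java.util.Date;

  // Illustrative only: Record and its fields are assumed types.
  class Record {
      String owner;        // the system that owns the record
      Date lastUpdated;    // timestamp of the last modification
  }

  class ConflictResolver {
      // The record owner always wins; otherwise the most recent
      // update takes precedence.
      static Record resolve(Record mine, Record theirs, String ownerSystem) {
          if (ownerSystem.equals(mine.owner)) {
              return mine;
          }
          if (ownerSystem.equals(theirs.owner)) {
              return theirs;
          }
          return mine.lastUpdated.after(theirs.lastUpdated) ? mine : theirs;
      }
  }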
Most connectors addressing each of the above needs are detailed in Talend Components Reference Guide. For
information about their orchestration in Talend Studio, see Designing a Job. For high-level business-oriented
modeling, see Designing a Business Model. For information about designing a detailed data integration Job using
the output stream feature, see Using the output stream feature.
The following chart illustrates the main architectural functional blocks used to handle your data integration tasks.
The building blocks you have available for use depend on your license.
• The Clients block includes one or more Talend Studio(s) and Web browsers that could be on the same or on
different machines.
From the Studio, you can carry out data integration processes regardless of the level of data volumes and process
complexity. Talend Studio allows you to work on any project for which you have authorization.
From the Web browser, you connect to the remotely based Talend Administration Center through a secured
HTTP protocol.
• The Talend Servers block includes a web-based Administration Center (application server) connected to:
• two shared repositories: one based on an SVN or Git server and one based on a Nexus repository,
• databases: one for administration metadata, one for audit information, and one for Activity monitoring,
Talend Administration Center enables the management and administration of all projects. Administration
metadata (user accounts, access rights and project authorization for example) is stored in the Administration
database. Project metadata (Jobs, Business Models and Routines for example) is stored in the SVN or Git server.
For detailed information about the Administration Center, see Talend Administration Center User Guide.
• The Repositories block includes the SVN or Git server and the Nexus repository. The SVN or Git server is
used to centralize all project metadata like Jobs and Business Models shared between different end-users, and
accessible from the Talend Studio to develop them and from Talend Administration Center to publish, deploy
and monitor them.
• Jobs that are published from the Talend Studio and are ready to be deployed and executed.
• The Talend Execution Servers block includes one or more execution servers, deployed inside your information
system. Talend Jobs are deployed to the Job servers through the Administration Center's Job Conductor to be
executed on scheduled time, date, or event.
For detailed information about execution servers, see Talend Administration Center User Guide.
• The Databases block includes the Administration, the Audit and the Monitoring databases. The Administration
database is used to manage user accounts, access rights and project authorization, and so on. The Audit database
is used to evaluate different aspects of the Jobs implemented in projects realized in Talend Studio with the aim
of providing solid quantitative and qualitative factors for process-oriented decision support. The Monitoring
databases include the Talend Activity Monitoring Console database and the Service Activity Monitoring
database.
The Talend Activity Monitoring Console allows you to monitor the execution of technical processes. It provides
detailed monitoring capabilities that can be used to consolidate collected log information, understand the
underlying data flows interaction, prevent faults that could be unexpectedly generated and support the system
management decisions.
The Service Activity Monitoring allows you to monitor service calls. It provides monitoring and consolidated
event information that the end-user can use to understand the underlying requests and replies that compose the
event, monitor faults and support the system management decisions.
• Consolidation of all project information and enterprise metadata in a centralized repository so that all
stakeholders can access the same, single version of the truth. For more information regarding the shared
repository in Talend Studio, see Working collaboratively on project items.
• Coordination and scheduling of the execution of data integration Jobs, with a centralized execution interface.
For more information regarding the Job Conductor, see How to schedule Job executions via the Job Conductor
and check Talend Administration Center User Guide.
• Optimization of the use of the execution grid to ensure optimal scalability and availability of the integration
processes. For more information about virtual execution servers, check Talend Administration Center User
Guide.
• Remote execution of Jobs on specified systems, for testing and running Jobs upon request on specific systems.
For more information regarding the distant run, see How to run a Job remotely.
Furthermore, beyond on-error notification, it is often critical to monitor the overall health of the integration
processes and to watch for any degradation in their performance.
The Talend Activity Monitoring Console monitors Job events (successes, failures, warnings, etc.), execution times
and data volumes through a single console fully integrated in Talend Studio: the AMC perspective.
For more information regarding Talend Activity Monitoring Console operation, see Talend Activity Monitoring
Console User Guide.
The Talend Activity Monitoring Console is also available as one of the Monitoring modules of Talend
Administration Center.
Another powerful functionality integrated in Talend Studio is monitoring task executions via Talend
Administration Center. Monitoring task execution automatically tracks task completion: it tracks in real time the
status of all triggered tasks or those waiting to be triggered. This way, monitoring supports rapid identification
of bugs.
For more information about tracking task completion, see Talend Administration Center User Guide.
This chapter deals with how to create, import, export, delete, and work in projects in Talend Studio. For how to
launch and get started with Talend Studio, see the Getting Started Guide.
When you launch the Studio using a locally loaded license for the first time, there are no default projects listed.
You need to create a project that will hold all data integration Jobs and business models you design in the current
instance of the Studio.
You can create as many projects as you need to store your data across the different instances of your Studio.
When creating a new project, a tree folder is automatically created in the workspace directory on your repository
server. This will correspond to the Repository tree view displayed on the main window of the Studio.
• import the Demo project to discover the features of Talend Studio based on samples of different ready-to-use
Jobs. When you import the Demo project, it is automatically installed in the workspace directory of the current
session of the Studio.
• import projects you have already created with previous releases of Talend Studio into your current Talend Studio
workspace directory.
• open a project you created or imported in the Studio. You can also open a project stored on the remote
repository.
For more information on how to open a local project, see How to open a local project. For more information
on how to open a remote project, see How to open a remote project.
• delete local projects that you already created or imported and that you do not need any longer.
Once you launch Talend Studio, you can export the resources of one or more of the created projects in the current
instance of the Studio. For more information, see How to export a project.
Talend Studio also enables you to work on projects collaboratively. For more information about how to share
a project, see Working collaboratively on project items.
To create a new local project after the initial startup of the Studio, do the following:
1. On the login window, select the Create a new project option and enter a project name in the field.
2. Click Create to create the project. The newly created project is displayed on the list of existing projects.
3. Select the project on the list and click Finish to open the project in the Studio.
Later, if you want to switch between projects, on the Studio menu bar, use the combination File > Switch Project
or Workspace.
2. On the login screen, select Create a Sandbox Project and click Select. The [Create Sandbox project] dialog
box opens.
5. In the Login and Password fields, type in the email address and password that will be used to connect to
your remote project with Talend Studio and to connect to Talend Administration Center if you want to change
your password, for example.
Be aware that the email entered is never used for any purpose other than logging in.
If your account already exists in Talend Administration Center, you will not be able to create a sandbox project.
6. In the First name and Last name fields, type in your first and last name.
7. Click OK to validate.
A popup window prompts you to indicate that your sandbox project and its corresponding connection
have been successfully created. They are named Sandbox_username_project and username_Connection,
respectively.
You might receive an email notifying you of your account creation on Talend Administration Center, if the
administrator activated this functionality.
The Connection, Email and Password fields are automatically filled in with the connection information you
provided and the Project list is automatically filled in with your newly created sandbox project.
To open the newly created sandbox project in Talend Studio, select your Sandbox project connection from the
connection list, select the project from the Project list, and click Finish.
1. When launching your Talend Studio, select the Import a demo project option on the Studio login window
and click Select, or click the Demos link on the welcome window, to open the [Import demo project] dialog
box.
After launching the Studio, click the relevant button on the toolbar, or select Help > Welcome from the Studio menu
bar to open the welcome window and then click the Demos link, to open the [Import demo project] dialog
box.
2. In the [Import Demo Project] dialog box, select the demo project you want to import and view the description
on the right panel.
The demo projects available in the dialog box may vary depending on the license you are using.
4. In the new dialog box that opens, type in a new project name and description information if needed.
All the samples of the demo project are imported into the newly created project, and the name of the new
project is displayed in the Project list on the login screen.
6. To open the imported demo project in Talend Studio, back on the login window, select it from the Project
list and then click Finish.
The Job samples in the open demo project are automatically imported into your workspace directory and
made available in the Repository tree view under the Job Designs folder.
1. From the Studio login window, select Import an existing project then click Select to open the [Import]
wizard.
2. Click the Import project as button and enter a name for your new project in the Project Name field.
3. Click Select root directory or Select archive file depending on the source you want to import from.
4. Click Browse... to select the workspace directory/archive file of the specific project folder. By default,
the selected workspace is that of the current release. Browse up to reach the previous release's workspace
directory or the archive file containing the projects to import.
5. Click Finish to validate the operation and return to the login window.
1. From the Studio login window, select Import an existing project then click Select to open the [Import]
wizard.
3. Click Select root directory or Select archive file depending on the source you want to import from.
4. Click Browse... to select the workspace directory/archive file of the specific project folder. By default,
the selected workspace is that of the current release. Browse up to reach the previous release's workspace
directory or the archive file containing the projects to import.
5. Select the Copy projects into workspace check box to make a copy of the imported project instead of moving
it. This option is available only when you import several projects from a root directory.
If you want to remove the original project folders from the Talend Studio workspace directory you import from, clear
this check box. However, we strongly recommend keeping it selected for backup purposes.
6. Select the Hide projects that already exist in the workspace check box to hide existing projects from the
Projects list. This option is available only when you import several projects.
7. From the Projects list, select the projects to import and click Finish to validate the operation.
Make sure that the name of the imported project is not already used for a remote project. Otherwise, an error message
will appear when you try to import the project unless you store the local and remote projects in two different workspace
directories.
Upon successful project import, the names of the imported projects are displayed on the Project list of the login
window.
You can now select the imported project you want to open in Talend Studio and click Finish to launch the Studio.
A generation initialization window might come up when launching the application. Wait until the initialization is complete.
On the Studio login screen, select the connection to the local repository that holds your project from the connection
list, select the project of interest from the project list and click Finish.
A progress bar appears. Wait until the task is complete and the Talend Studio main window opens.
When you open a project imported from a previous version of the Studio, an information window pops up to list a short
description of the successful migration tasks.
1. On the Connection area of the Studio login window, select the connection to the repository in which the
project is stored from the Connection list.
As soon as you are connected with Talend Administration Center and if an update for your Studio is found, an update
button appears at the bottom of the login window and the Open button becomes inoperable. Click update to download
and install the update. When the installation completes, click the restart button that appears next to the update button
to restart your Studio so that the newly installed update takes effect. For more information on the software update
process, see the Talend Installation Guide.
2. Click the Refresh button to update the list of existing projects, which are the projects allocated to you in
Talend Administration Center.
Note that, if an administrator edits your access rights on a project while you are already connected to this
project in the Studio, you have to relaunch the Studio to take these rights into account.
3. From the project list, select the project you want to open.
4. From the Branch list, select the trunk (SVN only) or master (Git only), a branch, or a tag, whichever is desired.
A tag is a read-only copy of an SVN or Git managed project. If you choose to open a tag, you can make changes to
your project items but you will be unable to permanently save your changes to a Job unless you copy the Job to a
branch or the trunk. For how to copy a Job to a branch, see How to copy a Job to a branch.
A progress bar appears, and the Talend Studio main window opens. A generation engine initialization dialog box
displays. Wait until the initialization is complete.
Upon opening a remote project, Talend Studio periodically checks its connection with Talend Administration
Center.
When Talend Studio detects a loss of connection, it automatically tries to reconnect to Talend Administration Center.
You can view the connection progress on the Progress tab by double-clicking Check Administrator connection
at the lower right corner of the Talend Studio main window. If you click the button at this phase, the project
will enter the read-only mode.
Once Talend Studio detects that you have been logged out by an administrator in Talend Administration Center,
a confirmation dialog box appears asking you whether to reconnect to Talend Administration Center.
Click Yes to reconnect to Talend Administration Center. Talend Studio will perform an authorization check when
attempting to reconnect. A warning will be displayed and the project will enter the read-only mode if:
• you no longer have access to any reference project of the project you have opened, or
• the number of reference projects of the project you have opened has changed.
If your access right to the project you have opened has changed from read-write to read-only, or if you click No
in the confirmation dialog box, the project directly goes into the read-only mode.
When the project is in the read-only mode, you can still edit the Job or Jobs currently open in the design workspace,
and changes you make will be committed to the SVN or Git the next time you log in to Talend Administration
Center with read-write access to the project.
1. On the login screen, click Manage Connections, then on the dialog box that opens click Delete Existing
Project(s) to open the [Select Project] dialog box.
Be careful, this action is irreversible. When you click OK, there is no way to recover the deleted project(s).
If you select the Do not delete projects physically check box, you can delete the selected project(s) only from the
project list and still have it/them in the workspace directory of Talend Studio. Thus, you can recover the deleted
project(s) any time using the Import existing project(s) as local option on the Project list from the login window.
1. On the toolbar of the Studio main window, click the icon to open the [Export Talend projects in archive file]
dialog box.
2. Select the check boxes of the projects you want to export. You can select only parts of the project through
the Filter Types... link, if need be (for advanced users).
3. In the To archive file field, type in the name of or browse to the archive file where you want to export the
selected projects.
4. In the Option area, select the compression format and the structure type you prefer.
The archived file that holds the exported projects is created in the defined place.
If you click the Refresh button in the upper right corner of the Repository tree view, the items that have been
locked by other users will have a red lock docked on them. You will not be able to make changes to these items.
By default, upon each action you make in your Talend Studio, the lock status of all items is automatically refreshed
too. If you find communications with Talend Administration Center slow or if the project contains a large number
of locked items, you can disable the automatic retrieval of lock status in the Talend Studio preferences settings to
gain performance. For more information, see Performance preferences (Talend > Performance).
Items stored in the Repository tree view that are subject to the lock/unlock system include:
• Business Models,
• Jobs,
• Routines,
Items at project level are also subject to the lock/unlock system. These items include all Project Settings.
Talend Studio provides several lock modes that allow you to grant the "read and write" rights to one of the simultaneous
users of the repository item.
Until you release the lock by closing the item you are editing, other users will not be able to make any changes to
it. The item will show with a red lock in their Repository tree views.
All other users will have a read-only access for locked items until they are unlocked.
To intentionally lock/unlock an item, simply right-click it in the Repository tree view and select Lock/Unlock.
But you can still open and view the locked item in the design workspace in a read-only mode. Right-click the item
in the Repository tree view, and then click Open to view the item content.
Alternatively, you can get read-write access to locked items by opening the project in offline mode. For more
information, see How to access items of a remote project in offline mode.
Also, you have the possibility to log information about the changes you made on any item, on the condition that
the relevant option is selected in Talend Administration Center. Check Talend Administration Center User Guide
for further details and read How to log information on edited items.
If you want to edit the item you are opening, click OK to put a lock on it. The item becomes read-only for other
users like in the default mode.
When closing or saving the item, you get prompted again to unlock it. If you are done with your changes, then
click OK to remove the lock and allow other users to lock it if needed.
If you do not need to open the item in edit mode (locked), then click No when prompted, to open it in read-only
mode.
To intentionally lock an item (for editing purposes, for example), simply right-click it and select the Lock option
while the item is in the closed state.
In the same way, a locked item can only be unlocked through the same procedure by the lock owner (or through
the Talend Administration Center web application by the administrator).
By default, items can only be opened in read-only mode in this manual lock mode.
Prerequisite: You have already logged on to the remote project successfully via a remote connection so that the
project information already exists in the workspace directory of your Talend Studio.
1. Launch your Talend Studio, or if you have already opened the project using a remote connection, restart your
Studio by selecting File > Switch Project or Workspace from the menu.
2. Create a local connection by following the steps described in the Getting Started Guide, without modifying
the workspace directory that contains the information of the remote project in the Workspace field.
3. On the login screen, select the local connection you just created from the Connection list, and select the
remote project from the Project field, and then click Finish.
Now you can continue working locally on the project branch that you previously worked on.
• When you work in offline mode on an SVN project, items with uncommitted changes are preceded by a >
symbol in the Repository tree view, until you commit them to the SVN when you reopen the project using a
remote connection.
• When you work in offline mode on a Git project, you are working on the local branch associated with the branch
you last worked on. Your changes are automatically committed to your local Git repository, and the top bar of
the Repository tree view indicates the number of local commits.
You can revert the current local branch, switch between local branches and delete local branches you are not
currently working on.
When you reopen the project using a remote connection, if you select any branch on which you made changes
while you worked in offline mode, you will be presented with the corresponding local branch and you need to push
your commits manually to the remote Git repository.
For more information about working with project branches, see Working with project branches and tags.
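Conceptually, working offline on a Git project amounts to ordinary local commits that are pushed later. The following JGit sketch illustrates that idea only; it does not reproduce what the Studio executes internally, and the repository path is a placeholder.

  import java.io.File;
  import org.eclipse.jgit.api.Git;

  // Conceptual illustration only: Talend Studio manages this internally.
  public class OfflineWorkSketch {
      public static void main(String[] args) throws Exception {
          // Open the local repository (placeholder path).
          try (Git git = Git.open(new File("/path/to/workspace/myProject"))) {
              // While offline, saved changes accumulate as commits
              // on the local branch.
              git.add().addFilepattern(".").call();
              git.commit().setMessage("changes made while offline").call();
              // Once a remote connection is available again, the local
              // commits are pushed to the remote repository.
              git.push().setRemote("origin").call();
          }
      }
  }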
When the commit mode is set to Unlocked Items in Talend Administration Center, the changes you make to an
item are committed to the SVN or Git only after the item is unlocked.
In the Repository tree view of Talend Studio, an item with uncommitted changes is preceded by a > symbol.
In this case, if you are the only owner of a changed item, you can get your changes committed to the SVN or Git by:
• closing the item if the lock mode is set to Automatic in Talend Administration Center.
• closing the item and unlocking it when prompted if the lock mode is set to Ask user in Talend Administration
Center.
• closing the item and manually unlocking it before quitting the Studio if the lock mode is set to Manual in
Talend Administration Center.
In certain situations, a dialog box opens when uncommitted items are found, providing you options to handle those
items. For details, see Handling uncommitted items when prompted (SVN only) and Handling uncommitted items
when prompted (Git only) respectively.
For more information about the commit mode and lock mode options, see Talend Administration Center User
Guide.
• you have created, modified, or deleted any item when working on the project in offline mode.
For how to open a remote project in offline mode, see How to access items of a remote project in offline mode.
• any item has been modified but it is currently locked by another user.
Click OK to commit the changes to the selected items to the SVN and revert those items that are not selected,
which are either currently locked by other users or are in conflict state, and continue opening the remote project.
Presently no merge action is supported.
If you click OK, changes to items that are not selected will be permanently lost.
To save your changes to items that are locked or in conflict, click Cancel, open the SVN project offline, and then
export the items to your local file system so that you can import them into your Studio when needed.
• you have made changes to your project items while the Commit mode is set to Unlocked Items and the Lock
mode is set to Manual in Talend Administration Center.
• any files have been manually added to the project folder of your Talend Studio, regardless of the Commit mode
and Lock mode settings in Talend Administration Center.
• any items are found to have changes not committed to your local Git repository for any reason.
You can:
• click Commit to commit the files to your local Git repository and continue your current operation.
• click Reset to abort your changes and continue your current operation.
• click Cancel to cancel your current operation without committing or removing the uncommitted files.
When you log on to a project for the first time after the project is created or modified in Talend Administration
Center, and each time you are about to commit changes made to any item belonging to that project, the [Commit
changes on server] window appears to prompt you to log the changes you have made.
• a text box where you can enter your comment on the commit,
• an Additional logs tab, which contains the log information that will be appended to your comment,
• (SVN only) a Revision properties tab, which allows you to add revision properties and the corresponding
values,
• a Change list tab, which lists the actual studio files that will be committed.
To log information on edited items, complete the following in the [Commit changes on server] window:
Note that the read-only appended log, at the bottom of the window, lists a summary of all changes made to the current
Project, in case you have not committed them to SVN or Git at each individual change.
1. Type in your comment conforming to your SVN or Git commit log pattern convention in the text box.
2. (SVN only) In the Revision properties tab, click the [+] button to add any Revision_property and the
corresponding value if needed. It can be, for example, the author name or the bug ID.
3. Click Finish when you have completed the form, or click Cancel if you want to postpone the change log to a
later stage.
Note that the log prompt will append all changes that you have not committed to the SVN or Git yet. So you can choose to
log information after several actions, rather than on every action.
For more information on managing projects stored on SVN or Git and setting the commit log pattern, see Talend
Administration Center User Guide.
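Assuming the configured commit log pattern is a regular expression, a comment check could look like the following Java sketch; the pattern shown, which requires a "[PROJECT-123]"-style ticket reference, is purely hypothetical.

  import java.util.regex.Pattern;

  public class CommitLogPatternCheck {
      // Hypothetical pattern; the real one is whatever the
      // administrator configured in Talend Administration Center.
      private static final Pattern LOG_PATTERN =
              Pattern.compile("^\\[[A-Z]+-\\d+\\] .+");

      public static boolean isValid(String comment) {
          return LOG_PATTERN.matcher(comment).matches();
      }

      public static void main(String[] args) {
          System.out.println(isValid("[TDI-42] Renamed customer lookup Job")); // true
          System.out.println(isValid("quick fix"));                            // false
      }
  }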
This section addresses topics related to project branches and tags, including:
• How to push changes on a local branch to the remote end (Git only)
• How to revert a local branch to the previous update state (Git only)
• How to view the project history specific to a branch or tag (Git only)
Your Talend Studio provides two options for you to create a local branch:
2. In the [New Branch] dialog box, select the source, which can be a remote or local branch your local branch
will be based on, enter a name for your new branch, and click OK.
3. When asked whether to switch to the newly created branch directly, click OK to switch or click Cancel to
stay on the current branch.
1. From the Talend Studio, click the top bar of the Repository tree view, select the source from the drop-down
menu, and then select check out as local branch from the sub-menu.
2. In the [Checkout as local branch] dialog box, enter a name for your new branch, and click OK to create
the branch and switch to it.
Now the branch is created. You can work on it and manage it using tools provided by your Talend Studio.
Related topics:
• How to push changes on a local branch to the remote end (Git only)
• How to revert a local branch to the previous update state (Git only)
Once a remote branch associated with your local branch is created on the Git server upon your first push, the
number of new commits not yet pushed to the associated remote branch will be indicated on the top bar of the
Repository tree view. You can choose to push such commits to the Git server or abort them by reverting your
local branch to the previous update state.
This section describes how to push local changes to the Git server. For information on reverting a local branch,
see How to revert a local branch to the previous update state (Git only).
1. Save your changes so that they are committed to your local repository.
2. Click the top bar of the Repository tree view and select Push from the drop-down menu.
3. If any editor windows are open, you will see a warning message. Click OK to close the editor windows and
proceed with the push action.
4. When the push operation completes, a dialog box opens informing you that your changes have been pushed
to the Git server successfully. Click OK to close the dialog box.
Your changes have now been pushed to the Git server. If this is the first push from your local branch, a remote
branch with the same name is automatically created as the associated branch to hold the commits you push from
your local branch.
Related topics:
• How to revert a local branch to the previous update state (Git only)
1. Save your changes so that they are automatically committed to your local repository.
2. Click the top bar of the Repository tree view and select Pull And Merge Branch from the drop-down menu.
3. If any editor window is open, you will see a warning message. Click OK to close the editor and proceed
with the pull action.
4. In the [Select a branch to merge or update] dialog box, select the source branch to pull from, which can be
a remote or local branch, and then click OK to complete the pull and merge operation.
Your local branch is now up to date. Depending on the source branch you selected, a dialog box opens to show
the pull or merge result:
• If the source branch is the default branch, which is the remote counterpart of the local branch, the dialog box
shows the pull result.
• If the source branch is another one, the dialog box shows the merge result.
Related topics:
• How to push changes on a local branch to the remote end (Git only)
• How to revert a local branch to the previous update state (Git only)
To revert your local branch to the previous update state, do the following:
1. Click the top bar of the Repository tree view, select More... from the drop-down menu and then select Reset
from the sub-menu.
2. If any editor window is open, you will see a warning message. Click OK to close the editor and proceed.
Related topics:
• How to push changes on a local branch to the remote end (Git only)
1. If you are currently on the local branch you want to delete, switch to another branch first. For more
information, see How to switch between branches or tags.
2. Click the top bar of the Repository tree view, select the local branch you want to delete from the drop-down
menu, and then select Delete Branch from the sub-menu.
Related topics:
• How to push changes on a local branch to the remote end (Git only)
• How to revert a local branch to the previous update state (Git only)
When working in a tag, you can make changes to your project items, but you will be unable to permanently save your
changes to a Job unless you copy the Job to a branch or the trunk/master. For how to copy a Job to a branch, see How
to copy a Job to a branch.
You can create a tag for a project either in Talend Studio or in Talend Administration Center.
Close all open editors in the design workspace before trying to create a tag. Otherwise, a warning message will appear
prompting you to close all open editors.
1. Open the remote project for which you want to create a tag.
2. Click the icon at the upper right corner of the Repository tree view to open the [Branch management] dialog box.
3. Click the Create a tag button, and in the next dialog box select the source from the tag's source list, trunk
in this example, enter a tag name, and click OK.
Creating a tag may take some time. When done, the tag you created will appear on the tags list in the [Branch
management] dialog box and you can switch to it by following the steps in How to switch between branches
or tags.
1. Open the remote project for which you want to create a tag.
2. Click the top bar of the Repository tree view to open a drop-down menu.
3. Select More ... from the drop-down menu and then select Add Tag from the sub-menu.
4. In the [New Tag] dialog box, select the source, which can be the master or a branch based on which your tag
will be created, enter a name for your new tag, and click OK.
You cannot create tags based on a local branch you have created in the Studio.
Creating a tag may take some time. When done, the tag you created will be listed on the drop-down menu
when clicking the top bar of the Repository tree view, and you can switch to it by following the steps in
How to switch between branches or tags.
For how to create a tag in Talend Administration Center, see Talend Administration Center User Guide.
Once you open a project having different branches or tags, you can switch between the trunk/master and any of
the existing branches or tags and between different branches or tags.
Close all open editors in the design workspace before trying to switch to another branch or tag. Otherwise, a warning message
will appear prompting you to close all open editors.
1. From Talend Studio, click the icon at the upper right corner of the Repository tree view to open the [Branch management] dialog box.
2. Expand the branches or tags node, and select the branch or tag you want to switch to and then click Switch.
2. Click the top bar of the Repository tree view to open a drop-down menu.
3. Select the target branch or tag from the drop-down menu, and then select switch from the sub-menu.
When you switch to a local branch and changes are found on the associated remote branch, those changes are automatically
synchronized to the local branch.
The switch operation may take some time. Wait until the operation is complete. Then, the Repository tree view
switches to show the project items of the selected branch. You can read the directory of the active branch on the
top bar of the Repository view.
1. Switch to the branch or tag. For more information, see How to switch between branches or tags.
2. Click the top bar of the Repository tree view, select More... from the drop-down menu, and then select Show
History from the sub-menu to open the Git History View.
Alternatively, you can open the Git History View using the Studio menu: select Window > Show View,
then in the [Show View] dialog box select Git History View and click OK.
3. In the Git History View, select any commit record to view its details.
The Git Merge perspective opens, and the Conflicts Navigator panel on the left displays the project items
on which conflicts have been found.
2. In the Conflicts Navigator panel, right-click a conflicted item and select from the context menu:
• Resolve in editor: to open a conflict editor in the right-hand panel of the Git Merge perspective. For more
information, see Resolving conflicts in conflict editors.
Note that this option is available only for project items mentioned in Resolving conflicts in conflict editors.
• Accept mine: to accept all the changes on the working branch to fix conflicts on the item without opening
a conflict editor.
• Accept theirs: to accept all changes on the other branch to fix conflicts on the item without opening a
conflict editor.
• Mark as resolved: to mark all conflicts on the item as resolved, leaving discrepancies between the
branches.
3. When all conflicts are fixed and marked as resolved, click Yes in the confirmation dialog box that opens, or click
the icon at the top of the Conflicts Navigator panel, to continue your previous action.
• Job Compare editor, if the conflicted project item is a standard Job, a Joblet, or a test case.
• EMF Compare (Eclipse Modeling Framework) editor, if the conflicted item is a context group, or a database
connection.
• Text Compare editor, if the conflicted item is a general text file, a routine, a Job script, or a SQL script.
To open a conflict editor, right-click a conflicted project item in the Conflicts Navigator tree view and select
Resolve in editor from the contextual menu.
After fixing conflicts in an editor, mark the conflicts as resolved and close the editor to continue your previous
branch operation.
The upper part of the Job Compare editor displays a tree view that shows all the design and parameter items of
the Job on which conflicts have occurred. A dark red conflict indicator is seen on the icon of each conflicted item.
In this tree view, you can expand each node and select the conflicted items to explore the details of the conflicts.
The lower part displays a comparison view that shows the details of the different versions of the selected item.
In this comparison view:
• When a design item, such as a component or a connection, is selected in the upper tree view, the item is
highlighted graphically.
• When a parameter item, such as the schema of a component, is selected in the upper tree view, a yellow warning
sign indicates each conflicted parameter, such as a schema column, of that item.
In the Job Compare editor, you can resolve conflicts on the entire Job, all the design items, or all the parameter
items in one go, or resolve conflicts on individual items, parameters, or parameter properties separately.
• To accept the version of the working branch, either right-click Job Designs Unit in the upper tree view and
select Accept mine from the contextual menu, or select Job Designs Unit in the upper tree view and click
the icon in the comparison view.
• To accept the version of the other branch, either right-click Job Designs Unit in the upper tree view and select
Accept theirs from the contextual menu, or select Job Designs Unit in the upper tree view and click the icon
in the comparison view.
• To accept the version of the working branch, either right-click the node in the upper tree view and select Accept
mine from the contextual menu, or select the node in the upper tree view and click the icon in the comparison
view.
• To accept the version of the other branch, either right-click the node in the upper tree view and select Accept
theirs from the contextual menu, or select the node in the upper tree view and click the icon in the comparison
view.
• To accept the version of the working branch, either right-click the conflicted item in the upper tree view and
select Accept mine from the contextual menu, or select the item in the upper tree view and then click the
icon in the comparison view.
• To accept the version of the other branch, either right-click the conflicted item in the upper tree view and select
Accept theirs from the contextual menu, or select the item in the upper tree view and then click the icon
in the comparison view.
When you try to accept the version of a connection from the other branch:
• If both the input and output components across the connection differ between the working branch and the other branch,
you will be prompted to accept the whole Job design from the other branch.
This, however, is not mandatory - you can try to accept the components and the connection parameters individually first.
• If the input or output component of the connection cannot be redirected to the new input or output component on the
working branch, you will be prompted to accept the whole Job design from the other branch.
This, however, is not mandatory - you can try to accept the component and the connection parameters individually first.
• If the input or output component of the connection does not exist on the working branch, you will be prompted to accept
the component first.
• To accept the version of the working branch, click the icon for that parameter or parameter property in the comparison view.
• To accept the version of the other branch, click the icon for that parameter or parameter property in the comparison view.
You can edit parameters and parameter properties manually for the working branch in the comparison view. To
do so:
1. Select the concerned parameter item in the upper tree view to show the parameters under that item in the
comparison view.
2. To edit a parameter, click its value in the comparison view so that a […] button appears next to it.
To edit a property of a parameter, expand the parameter and click the property you want to edit to show a
[…] button next to it.
4. Make your changes and then click OK to close the dialog box.
With the conflicts on an item fixed, the conflict indicator on the icon of the conflicted item in the upper view and
the conflict signs in the comparison view become green.
Note that if a centralized Repository item - a context group or a file or database connection defined in the Repository
for example - is called in a Job, fixing conflicts for the Job in the Job Compare editor does not automatically
update the corresponding Repository item. When you open the Job in the Integration perspective, you will be
asked whether to update your Job.
The upper part of the EMF Compare editor gives an overview of the differences detected between the two
branches. The lower part is a comparison view that shows the different versions of the selected item between both
branches.
• Click the navigation buttons to move through the detected differences. The details about the selected item are shown in the comparison view.
• Click the corresponding button to accept the single, currently selected change, or to reject it.
• Click the corresponding button to accept all non-conflicting changes at once, or to reject all non-conflicting changes at once.
• For a text feature, click the button at the top of the comparison view to copy all the shown changes, or the corresponding button to copy the selected change.
Accepted changes and rejected changes are each indicated by an icon.
Note that if a centralized Repository item - a context group or a file or database connection defined in the Repository
for example - is called in a Job, fixing conflicts for the Repository item in the EMF Compare editor does not
automatically update the corresponding Job. When you open the Job in the Integration perspective, you will be
asked whether to update your Job.
The Text Compare editor is a two-pane comparison view that displays the different versions of a text-based
project item between both branches.
• Click the button to show the ancestor pane, which shows the ancestor version of the compared versions if detected. This button is operable only for three-way comparison.
• Click the button to toggle between two-way (ignoring the ancestor version) and three-way comparison.
• Click the button to copy all the shown changes, or the button to copy the selected change, from right to left.
• Click the navigation buttons to move through the differences.
• Click the navigation buttons to move through the changes.
• You can also edit text directly in the left pane to make changes to the version of the current branch.
Note that if a centralized Repository item - a routine, a Job script, or an SQL script defined in the Repository
for example - is called in a Job, fixing conflicts for the Repository item in the Text Compare editor does not
automatically update the corresponding Job. When you open the Job in the Integration perspective, you will be
asked whether to update your Job.
This chapter is intended for business managers, decision makers and developers who want to model their flow
management needs at a macro level.
Designing Business Models is one of the best practices that organizations should adopt at a very early
stage of a data integration project in order to ensure its success. Because Business Models usually help to quickly detect
and resolve project bottlenecks and weak points, they help limit budget overruns and/or reduce
the upfront investment. Then, during and after the project implementation, Business Models can be reviewed and
corrected to reflect any required change.
Generally, a typical Business Model will include the strategic systems or processes already up and running in your
company as well as new needs. You can symbolize these systems, processes and needs using multiple shapes and
create the connections among them. Likewise, all of them can be easily described using repository attributes and
formatting tools.
In the design workspace of the Integration perspective of Talend Studio, you can use multiple tools in order to:
In the Repository tree view of the Integration perspective, right-click the Documentation > Business Models
nodes.
1. Right-click the Business Models node and select Create Business Model.
The creation wizard guides you through the steps to create a new Business Model.
• Name: the name of the new Business Model. A message comes up if you enter prohibited characters.
• Purpose: Business Model purpose or any useful information regarding the Business Model use.
• Description: Business Model description.
• Author: a read-only field that shows by default the current user login.
• Locker: a read-only field that shows by default the login of the user who owns the lock on the current Business Model. This field is empty when you are creating a Business Model and has data only when you are editing the properties of an existing Business Model.
• Version: a read-only field. You can manually increment the version using the M and m buttons.
• Status: a list from which you select the status of the Business Model you are creating.
• Path: a list from which you select the folder in which the Business Model will be created.
You can create as many models as you want and open them all.
• the Business Model panel showing specific information about all or part of the model.
In the Business Model view, you can see information relative to the active model.
Use the Palette to drop the relevant shapes on the design workspace and connect them together with branches,
and arrange or improve the model's visual aspect by zooming in or out.
This Palette offers graphical representations for objects interacting within a Business Model.
The objects can be of different types, from strategic system to output document or decision step. Each one has
a specific role in your Business Model according to the description, definition and assignment you give it.
All objects are represented in the Palette as shapes, and can be included in the model.
Note that you must click the business folder to display the library of shapes on the Palette.
3.3.1. Shapes
Select the shape corresponding to the relevant object you want to include in your Business Model. Double-click
it or click the shape in the Palette and drop it in the modeling area.
Alternatively, for a quick access to the shape library, keep your cursor still on the modeling area for a couple of
seconds to display the quick access toolbar:
For instance, if your business process includes a decision step, select the diamond shape in the Palette to add this
decision step to your model.
When you move the pointer over the quick access toolbar, a tooltip helps you to identify the shapes.
The shape is placed in a dotted black frame. Pull the corner dots to resize it as necessary.
Also, a blue-edged input box allows you to add a label to the shape. Give an expressive name in order to be able
to identify at a glance the role of this shape in the model.
Two arrows below the added shape allow you to create connections with other shapes. You can hence quickly
define sequence order or dependencies between shapes.
There are two possible ways to connect shapes in your design workspace:
Either select the relevant Relationship tool in the Palette. Then, in the design workspace, pull a link from one
shape to the other to draw a connection between them.
Or, you can implement both the relationship and the element to be related to or from, in a few clicks.
1. Simply move the mouse pointer over a shape that you already dropped on your design workspace, in order
to display the double connection arrows.
2. Select the relevant arrow to implement the correct directional connection if need be.
3. Drag a link towards an empty area of the design workspace and release to display the connections popup
menu.
4. Select the appropriate connection from the list. You can choose among Create Relationship To, Create
Directional Relationship To or Create Bidirectional Relationship To.
5. Then, select the appropriate element to connect to, among the items listed.
You can create a connection to an existing element of the model. Select Existing Element in the popup menu and
choose the existing element you want to connect to in the list box that appears.
The nature of this connection can be defined using Repository elements, and can be formatted and labelled in the
Properties panel, see Business Models.
When creating a connection, an input box allows you to add a label to the connection you have created. Choose
a meaningful name to help you identify the type of relationship you created.
You can also add notes and comments to your model to help you identify elements or connections at a later date.
• Select: select and move the shapes and lines around in the modeling area of the design workspace.
• Zoom: zoom in on a part of the model to view it more accurately. To zoom out, press Shift and click the modeling area.
• Note/Text/Note attachment: allows comments and notes to be added in order to store any useful information regarding the model or part of it.
Alternatively right-click the model or the shape you want to link the note to, and select Add Note. Or select the
Note tool in the quick access toolbar.
A sticky note displays on the modeling area. If the note is linked to a particular shape, a line is automatically
drawn to the shape.
Type in the text in the input box or, if the input box does not show, type directly on the sticky note.
If you want to link your notes and specific shapes of your model, click the down arrow next to the Note tool on
the Palette and select Note attachment. Pull the black arrow towards an empty area of the design workspace, and
release. The popup menu offers to attach a new Note to the selected shape.
You can also select the Add Text feature to type in free text directly in the modeling area. You can access this
feature in the Note drop-down menu of the Palette or via a shortcut located next to the Add Note feature on the
quick access toolbar.
Place your cursor in the design area, right-click to display the menu and select Arrange all. The shapes
automatically move around to give the best possible reading of the model.
Alternatively, you can manually select the whole model or part of it.
To do so, right-click any part of the modeling area, and click Select.
From this menu you can also zoom in and out to part of the model and change the view of the model.
The Business Models view contains different types of information grouped in the Main, Appearance, Rulers &
Grid, and Assignment tabs.
The Main tab displays basic information about the selected item in the design workspace, being a Business Model
or a Job. For more information about the Main tab, see How to display Job configuration tabs/views.
You can also move and manage the shapes of your model using the editing tools. Right-click the relevant shape to
access them.
To display the Rulers & Grid tab, click the icon on the Palette, then click any empty area of the design workspace
to deselect any current selection.
Click the Rulers & Grid tab to access the ruler and grid setting view.
In the Display area, select the Show Ruler check box to show the ruler, the Show Grid check box to show the
grid, or both check boxes. Selecting Grid in front sends the grid to the front of the model.
In the Measurement area, select the ruling unit among Centimeters, Inches or Pixels.
In the Grid Line area, click the Color button to set the color of the grid lines and select their style from the Style list.
Select the Snap To Grid check box to bring the shapes into line with the grid or the Snap To Shapes check box
to bring the shapes into line with the shapes already dropped in the Business Model.
You can also click the Restore Defaults button to restore the default settings.
To display any assignment information in the table, select a shape or a connection in the active model, then click
the Assignment tab in the Business Model view.
You can also display the assignment list by placing the mouse pointer over the shape you assigned the information to.
You can modify some information or attach a comment. Also, if you update data from the Repository tree view,
assignment information gets automatically updated.
For further information about how to assign elements to a Business Model, see Assigning repository elements to
a Business Model.
You can define or describe a particular object in your Business Model by simply associating it with various types
of information, for example by adding metadata items.
You can set the nature of the metadata to be assigned or processed, thus facilitating the Job design phase.
To assign a metadata item, simply drop it from the Repository tree view to the relevant shape in the design
workspace.
The Assignment table, located underneath the design workspace, is automatically updated with the information
assigned to the selected object.
Element Details
Job designs If any Job Designs developed for other projects in the same repository are available, you can reuse
them as metadata in the active Business Model.
Metadata You can assign any descriptive data stored in the repository to any of the objects used in the model.
It can be connection information to a database for example.
Business Models You can use in the active model all other Business Models stored in the repository of the same project.
Documentation You can assign any type of documentation in any format. It can be a technical documentation, some
guidelines in text format or a simple description of your databases.
Routines (Code) If you have developed some routines in a previous project, to automate tasks for example, you can
assign them to your Business Model. Routines are stored in the Code folder of the Repository tree
view.
For more information about the Repository elements, see Designing a Job.
2. Edit the model name in the Name field, then click Finish to close the dialog box. The model label changes
automatically in the Repository tree view and will be reflected on the model tab of the design workspace,
the next time you open the Business Model.
If the Business Model is open, the information in the [Edit properties] dialog box will be read-only, so you will
not be able to edit it.
2. Then right-click where you want to paste your Business Model, and select Paste.
2. Alternatively, simply select the relevant Business Model, then drop it into the Recycle bin of the Repository
tree view.
An asterisk displays in front of the Business Model name on the tab to indicate that changes have been made to
the model but not yet saved.
To save a Business Model and increment its version at the same time:
2. Next to the Version field, click the M button to increment the major version and the m button to increment
the minor version.
By default, when you open a Business Model, you open its last version. Any previous version of the Business Model is
read-only and thus cannot be modified.
You can access a list of the different versions of a Business Model and perform certain operations. To do that:
1. In the Repository tree view, select the Business Model you want to consult the versions of.
2. Click Business Models > Version in succession to display the version list of the selected Business Model.
Select To...
Edit properties edit Business Model properties.
Note: The Business Model should not be open on the design workspace, otherwise it will
be in read-only mode.
Read Business Model consult the Business Model in read-only mode.
You can open and modify the last version of a Business Model from the Version view if you select Edit Business Model
from the drop-down list.
Via Talend Studio, you are able to design data integration Jobs that allow you to set up and run dataflow
management processes.
This chapter addresses the needs of programmers or IT managers who are ready to implement the technical aspects
of a Business Model (regardless of whether it was designed in the Business Modeler of the Integration perspective
of Talend Studio).
The Jobs you design can address all of the different sources and targets that you need for data integration processes
and any other related process.
• change the default setting of components or create new components or family of components to match your
exact needs.
• set connections and relationships between components in order to define the sequence and the nature of actions.
• access code at any time to edit or document the components in the designed Job.
• create and add items to the repository for reuse and sharing purposes (in other projects or Jobs or with other
users).
In order to be able to execute the Jobs you design in Talend Studio, you need to install an Oracle JVM 1.8 (IBM JVM is not
supported). You can download it from http://www.oracle.com/technetwork/java/javase/downloads/index.html.
Note that if you are a subscription-based user of one of the Talend solutions with Big Data, another type of Job
can be created to generate native MapReduce code and be executed directly in Hadoop. For related information,
see the chapter describing how to design a MapReduce Job.
1. In the Repository tree view of the Integration perspective, right-click the Job Designs node or the Standard
folder under the Job Designs node and select Create Standard Job from the contextual menu.
The [New Job] wizard opens to help you define the main properties of the new Job.
Field Description
Name the name of the new Job.
3. An empty design workspace opens up showing the name of the Job as a tab label.
The Job you created is now listed under the Job Designs node in the Repository tree view.
You can open one or more of the created Jobs by simply double-clicking the Job label in the Repository tree view.
There are several ways to add a component onto the design workspace. You can:
• find your component on the Palette by typing the search keyword(s) in the search field of the Palette and drop
it onto the design workspace.
• add a component by directly typing your search keyword(s) on the design workspace.
• add an output component by dragging from an input component already existing on the design workspace.
• drag and drop a centralized metadata item from the Metadata node onto the design workspace, and then select
the component of interest from the Components dialog box.
This section describes the first three methods. For details about how to drop a component from the Metadata
node, see Managing Metadata.
For more information regarding components and their functions, see Talend Components Reference Guide.
1. Enter the search keyword(s) in the search field of the Palette and press Enter to validate your search.
The keyword(s) can be the partial or full name of the component, or a phrase describing its functionality if
you don't know its name, for example, tfileinputde, fileinput, or read file row by row. The Palette filters to
show only the families where the component can be found. If you cannot find the Palette view in the Studio, see
How to change the Palette layout and settings.
To use a descriptive phrase as keywords for a fuzzy search, make sure the Also search from Help when performing
a component searching check box is selected on the Preferences > Palette Settings view. For more information,
see Palette preferences (Talend > Palette Settings).
2. Select the component you want to use and click on the design workspace where you want to drop the
component.
Note that you can also drop a note to your Job the same way you drop components.
Each newly added component is shown in a blue box to indicate that it forms an individual subjob.
Prerequisite: Make sure you have selected the Enable Component Creation Assistant check box in the Studio
preferences. For more information, see How to use centralized metadata in a Job.
1. Click where you want to add the component on the design workspace, and type your keywords, which can
be the full or partial name of the component, or a phrase describing its functionality if you don't know its
name. In our example, start typing tlog.
To use a descriptive phrase as keywords for a fuzzy search, make sure the Also search from Help when performing
a component searching check box is selected on the Preferences > Palette Settings view. For more information,
see Palette preferences (Talend > Palette Settings).
A list box appears below the text field displaying all the matching components in alphabetical order.
2. Double-click the desired component to add it on the workspace, tLogRow in our example.
2. Drag and drop the O icon where you want to add a new component.
A text field and a component list appear. The component list shows all the components that can be connected
with the input component.
3. To narrow the search, type in the text field the name of the component you want to add or part of it, or a phrase
describing the component's functionality if you don't know its name, and then double-click the component of
interest, tFileOutputDelimited in this example, on the component list to add it onto the design workspace.
The new component is automatically connected with the input component tLogRow, using a Row > Main
connection.
To use a descriptive phrase as keywords for a fuzzy search, make sure the Also search from Help when performing
a component searching check box is selected on the Preferences > Palette Settings view. For more information,
see Palette preferences (Talend > Palette Settings).
In this example, as the tLogRow and tFileOutputDelimited components are already connected, you only need
to connect the tFileInputDelimited to the tLogRow component.
2. In the contextual menu that opens, select the type of connection you want to use to link the components,
Row > Main in this example.
3. Click the target component to create the link, tLogRow in this example.
Note that a black crossed circle is displayed if the target component is not compatible with the link.
According to the nature and the role of the components you want to link together, several types of link are
available. Only the authorized connections are listed in the contextual menu.
2. When the O icon appears, click it and drag the cursor to the destination component, tLogRow in this example.
A Row > Main connection is automatically created between the two components.
While this method requires fewer steps, it works only with these types of Row connections: Main,
Lookup, Output, Filter, and Reject, depending on the nature and role of the components you are connecting.
You can also drop components in the middle of a Row link. For more information, see How to add a component
between two connected components.
For more information on using various types of connections, see Using connections.
For more advanced details regarding the components properties, see How to define component properties and
Talend Components Reference Guide.
3. Browse your system or enter the path to the input file, customers.txt in this example.
6. In the Schema Editor that opens, click the [+] button three times to add three columns.
7. Name the three columns id, CustomerName and CustomerAddress respectively and click OK to close the
editor.
8. In the pop-up that opens, click OK to accept the propagation of the changes.
This allows you to copy the schema you created to the next component, tLogRow in this example.
By doing so, the contents of the customers.txt file will be printed in a table and therefore more readable.
3. Browse your system or enter the path to the output file, customers.csv in this example.
5. If needed, click the Sync columns button to retrieve the schema from the input component.
The file is read row by row and the extracted fields are displayed on the Run console and written to the specified
output file.
• TableToFile: creates a Job that outputs data from a database table to a file. For more information, see How to
output data from a file to a database table and vice versa.
• TableToTable: migrates data from one database table to another, for example. For more information, see How
to output data from one database table to another.
• FileToTable: writes data to a database table. For more information, see How to output data from a file to a
database table and vice versa.
• FileToJoblet: retrieves data from files and writes this data into a Joblet in a specific format.
1. In the Repository tree view of the Integration perspective, right-click Job Designs and select Create job
from templates in the drop-down list. A Job creation wizard opens to help you define the main properties of
the new Job.
2. Select the Simple Template option and click Next to open a new view on the wizard.
Field Description
Name Enter a name for your new Job. A message comes up if you enter prohibited characters.
Purpose Enter the Job purpose or any useful information regarding the Job use.
Description Enter a description, if need be, for the created Job.
Author The Author field is read-only as it shows by default the current user login.
Locker The Locker field is read-only as it shows by default the login of the user who owns the lock on the current Job, if any.
Version The Version is read-only. You can manually increment the version using the M and m buttons. For
more information, see Managing Job versions.
Status You can define the status of a Job in your preferences. By default none is defined. To define them, go
to Window > Preferences > Talend > Status.
Path Select the folder in which the Job will be created.
4. Once you have filled in the Job information, click Next to validate and open a new view on the wizard.
5. Select the template you want to use to create your Job and click Next.
6. In the Type Selection area, select from the drop-down list the input file to use, tFileInputDelimited for
example.
7. In the Main properties of the component area, click the [...] button and browse to the file you want to use
the properties of. The file should be centralized in the Repository tree view. The fields that follow in the
Detail settings area are filled automatically with the properties of the selected file. Alternatively, you can set
manually the file path and all properties fields in the Detail setting area, if needed.
Then, click Next to validate and open a new view on the wizard.
8. In the Metadata area, click the three-dot button to open the [Repository Content] dialog box and select the
schema. Alternatively, you can use the toolbar to import it or add columns manually. Then, click Next to
validate and open a new view on the wizard.
9. In the Type Selection area, select the output database type from the drop-down list.
10. In the Main properties of the component area, click the three-dot button and browse to the connection you
want to use the properties of. The Database connection should be centralized in the Repository tree view.
The fields that follow in the Detail settings area are filled automatically with the properties of the selected
connection. Alternatively, you can set manually the database details and all properties fields in the Detail
setting area, if needed.
The ready-to-run Job is created and listed under the Job Designs node in the Repository tree view.
Once the Job is created, you can modify the properties of each of the components in the Job according to your
needs.
To output data from one database table to another database table, do the following:
1. In the Repository tree view of the Integration perspective, right-click Job Designs and select Create job
from templates in the drop-down list. A Job creation wizard opens to help you define the main properties of
the new Job.
2. Select the From Table List option and click Next to open a new view on the wizard.
3. Select the template you want to use to create your Job and click Next, TableToTable in this example.
4. In the Main properties of the component area, click the [...] button and browse to the connection you want
to use the properties of. The database connection should be centralized in the Repository tree view. The fields
that follow in the Detail settings area are filled automatically with the properties of the selected database
table. Alternatively, you can manually set the database parameters in the Detail setting area, if needed.
Then, click Next to validate and open a new view on the wizard.
5. In the Select Schema to create area, select the check box of the table you want to use and click Next to
validate and open a new view on the wizard.
6. In the Type Selection area, select the output database type from the drop-down list.
7. In the Main properties of the component area, click the three-dot button and browse to the connection you
want to use the properties of. The Database connection should be centralized in the Repository tree view.
The fields that follow in the Detail settings area are filled automatically with the properties of the selected
connection. Alternatively, you can manually set the output database details and all properties fields in the
Detail setting area, if needed.
Then, click Next to validate and open a new view on the wizard.
8. In the Check Availability area, select the check boxes of the available options according to your needs. In
this example, we want to save the input schemas in the Repository tree view and we want to insert a tMap
component between the input and output components of the created Job.
9. In the Jobname field, enter a name for your Job, and click the check button to verify that the name chosen
for your Job is available. A dialog box opens and informs you whether the Job name is available. Click OK
to close the dialog box.
10. Click Finish to validate and close the wizard. The ready-to-run Job is created and listed under the Job Designs
node in the Repository tree view.
Once the Job is created, you can modify the properties of each of the components in the Job according to your
needs.
The target Joblet you want to write data in must already exist, and the metadata to be read must already have been
created in the centralized repository, before you use this template.
1. In the Repository tree view of the Integration perspective, right-click Job Designs and select Create job
from templates in the drop-down list. A Job creation wizard opens to help you define the main properties of
the new Job.
2. Select the Migrate data from file to joblet option and click Next to open a new view on the wizard.
3. Select the FileToJoblet template to create your Job and click Next.
4. In the Select Schema to create area, select the metadata you want to use as parameters to retrieve and write
the data into the target Joblet. This example uses a .csv file. Then, click Next to proceed.
5. In the Type Selection area, select the target Joblet you want to write the retrieved data in, and click Next to
validate and open a new view on the wizard.
6. In the Jobname field, type in what you want to add to complete the Job name. By default, the Job name is
Job_{CURRENT_TABLE}, type in example to complete this name as Job_example_{CURRENT_TABLE},
and click the check button to see whether the Job name to be used already exists or not. If it exists, you need
to type in another Job name in the Jobname field. If it does not, a [Success] dialog box pops up to prompt you
to continue. Click OK.
Do not replace or delete {CURRENT_TABLE} when you type in texts to complete the Job name.
7. Select the Create subjobs in a single job check box if you have selected several metadata files to retrieve
and write data in the target Joblet and meanwhile, you want to handle these files using subjobs in a single Job.
Keep this check box cleared if you want to handle these files in several separate Jobs.
Once the Job is created, you can modify the properties of each of the components in the Job according to your
needs.
• create a Job and display its script through the Jobscript tab to edit it if necessary.
For more information, see How to generate a Job design from a Jobscript and How to display a Job script .
You can create your Job script in any text editor and name your file with the .jobscript extension, or you can create
it with the Integration perspective of Talend Studio. When you create it with the Job script API Editor of Talend
Studio, the auto-completion feature (Ctrl+Space) eases the writing process. Moreover, in the Studio, the code is
displayed in color so it can be easily identified, and you can create templates in Talend Studio's preferences.
For more information on Job script preferences, see Job script preferences (JobScript).
To access the Job script API Editor in the Studio, expand the Code node in the Repository of the Integration
perspective.
You can also create different folders to better classify these Job scripts.
1. Open Talend Studio following the procedure detailed in the Getting Started Guide.
2. In the Repository tree view of the Integration perspective, expand the Code node.
3. Right-click the Job Scripts node and select Create JobScript from the contextual menu.
The [Create JobScript] wizard opens to help you define the main properties of the new Job script.
Field Description
Name the name of the new Job script. A message comes up if you enter prohibited characters.
Purpose Job script purpose or any useful information regarding the Job script use.
Description Job script description.
Author a read-only field that shows by default the current user login.
Field Description
Locker a read-only field that shows by default the login of the user who owns the lock on the current Job. This
field is empty when you are creating a Job script and has data only when you are editing the properties
of an existing Job script.
Version a read-only field. You can manually increment the version using the M and m buttons. For more
information, see Managing Job versions.
Status a list to select from the status of the Job script you are creating.
Path a list to select from the folder in which the Job script will be created.
An empty jobscript file opens up in the workspace showing the name of the script as a tab label.
• edit it,
• export it.
The code is displayed in color to easily differentiate the code, the values, and the comments, and the syntax is
checked for mistakes. If a parameter is wrongly set, a red cross is displayed. Moreover, you can use templates to
write your Job script. For more information on the color definition of the text or on how to manage Job script
templates, see Job script preferences (JobScript).
• component definition
• component schema
• specific settings
To add a component, type in the addComponent{} function and define its properties between the brackets:
Function Parameter Value
setComponentDefinition{} TYPE Type in the component you want to use, for example:
tRowGenerator.
NAME Type in the name you want to give to the component, for
example: tRowGenerator_1.
For example, the following Job script defines two tRowGenerator components, a tMap, and a tLogRow, and connects
them together. The first component definition uses the example values shown in the table above:
//add the first component
addComponent{
setComponentDefinition {
TYPE:tRowGenerator,
NAME:"tRowGenerator_1"
}
//set element parameters
setSettings{
NB_ROWS:100,
VALUES{
SCHEMA_COLUMN:tableOneColumn1,
ARRAY:TalendString.getAsciiRandomString(6),
SCHEMA_COLUMN:tableOneColumn2,
ARRAY:TalendString.getAsciiRandomString(6),
SCHEMA_COLUMN:tableOneColumn3,
ARRAY:TalendString.getAsciiRandomString(6)
}
}
//add the schema
addSchema {
NAME:"schema1",
TYPE:"FLOW",
LABEL:"tRowGenerator_1"
addColumn {
NAME:tableOneColumn1,
TYPE:id_String,
KEY:false,
NULLABLE:false
}
addColumn {
NAME:tableOneColumn2,
TYPE:id_String,
KEY:false,
NULLABLE:true
}
addColumn {
NAME:tableOneColumn3,
TYPE:id_String,
KEY:false,
NULLABLE:true
}
}
}
addComponent{
setComponentDefinition {
TYPE:tRowGenerator,
NAME:"tRowGenerator_2",
POSITION: 160, 192
}
setSettings{
NB_ROWS:100,
VALUES{
SCHEMA_COLUMN:tableTwoColumn1,
ARRAY:TalendString.getAsciiRandomString(6),
SCHEMA_COLUMN:tableTwoColumn2,
ARRAY:TalendString.getAsciiRandomString(6)
}
}
addSchema {
NAME:"schema2",
TYPE:"FLOW",
LABEL:"tRowGenerator_2"
addColumn {
NAME:tableTwoColumn1,
TYPE:id_String,
KEY:false,
NULLABLE:false
}
addColumn {
NAME:tableTwoColumn2,
TYPE:id_String,
KEY:false,
NULLABLE:true
}
}
}
addComponent{
setComponentDefinition {
TYPE:tLogRow,
NAME:"tLogRow_1",
POSITION: 544, 128
}
setSettings{
LENGTHS{
SCHEMA_COLUMN:var1,
SCHEMA_COLUMN:var2,
SCHEMA_COLUMN:var4
}
}
addSchema {
NAME:"schema3",
TYPE:"FLOW",
LABEL:"newOutPut"
addColumn {
NAME:var1,
TYPE:id_String,
KEY:false,
NULLABLE:false
}
addColumn {
NAME:var2,
TYPE:id_String,
KEY:false,
NULLABLE:true
}
addColumn {
NAME:var4,
TYPE:id_String,
KEY:false,
NULLABLE:true
}
}
}
addComponent{
setComponentDefinition{
TYPE:tMap,
NAME:"tMap_1",
POSITION: 352, 128
}
setSettings{
EXTERNAL:Map
}
addSchema {
NAME: "newOutPut",
TYPE: "FLOW",
LABEL: "newOutPut"
addColumn {
NAME: "var1",
TYPE: "id_String"
}
addColumn {
NAME: "var2",
TYPE: "id_String",
NULLABLE: true
}
addColumn {
NAME: "var4",
TYPE: "id_String",
NULLABLE: true
}
}
//add special data, input tables, an output table, and a var table in tMap
addMapperData{
//the syntax is almost the same as the metatable
addInputTable{
NAME:row1,
SIZESTATE:INTERMEDIATE,
EXPRESSIONFILTER:filter
addColumn {
NAME:tableOneColumn1,
TYPE:id_String,
NULLABLE:false
}
addColumn {
NAME:tableOneColumn2,
TYPE:id_String,
NULLABLE:true
}
addColumn {
NAME:tableOneColumn3,
TYPE:id_String,
NULLABLE:true
}
}
//add another input table
addInputTable{
NAME:row2,
SIZESTATE:MINIMIZED
addColumn {
NAME:tableTwoColumn1,
TYPE:id_String,
NULLABLE:false
}
addColumn {
NAME:tableTwoColumn2,
TYPE:id_String,
NULLABLE:true
}
}
//add the var table
addVarTable{
NAME:Var
addColumn{
NAME:var1,
TYPE:id_String,
EXPRESSION:row1.tableOneColumn1+row2.tableTwoColumn1
}
addColumn{
NAME:var2,
TYPE:id_String,
EXPRESSION:row1.tableOneColumn2
}
addColumn{
NAME:var3,
TYPE:id_String,
EXPRESSION:row1.tableOneColumn3+row2.tableTwoColumn2
}
addColumn{
NAME:var4,
TYPE:id_String
}
}
//add the output table
addOutputTable{
NAME:newOutPut
addColumn{
NAME:var1,
TYPE:id_String,
EXPRESSION:Var.var1+Var.var3
}
addColumn{
NAME:var2,
TYPE:id_String,
EXPRESSION:Var.var2
}
addColumn{
NAME:var4,
TYPE:id_String,
EXPRESSION:Var.var4
}
}
}
}
//add connections
addConnection{
TYPE:"FLOW",
NAME:"row1",
LINESTYLE:0,
SOURCE:"tRowGenerator_1",
TARGET:"tMap_1"
}
addConnection{
TYPE:"FLOW",
NAME:"row2",
LINESTYLE:8,
SOURCE:"tRowGenerator_2",
TARGET:"tMap_1"
}
addConnection{
TYPE:"FLOW",
NAME:"newOutPut",
LINESTYLE:0,
SOURCE:"tMap_1",
TARGET:"tLogRow_1"
}
2. Expand the Job Scripts node and right-click the Job script you want to modify.
3. Select Edit JobScript from the contextual menu. The Job script opens in the workspace.
The Job script can also be edited by clicking the Jobscript tab on your current Job.
You can do this using the CommandLine, the equivalent of Talend Studio without the GUI.
The parameter values are given as examples and need to be replaced with your actual information
(port, credentials). For more information on how to use these commands, see the help provided in the
CommandLine.
3. Connect to your project and branch/tag with the logonProject command. If you do not know the name of
your project or branch/tag, type in the listProject -b command first. Example:
logonProject -pn di_project -ul admin@company.com -up admin -br branches/v1.0.0
The parameter values are given as examples and need to be replaced with your actual information (project/
branch/tag name, credentials). For more information on how to use this command, see the help provided
in the CommandLine.
4. Type in the following command to generate a Job from your Job script:
createJob NameOfJob -sf path\yourJobscript.jobscript
The Job is created in your CommandLine workspace in the process folder: commandline-workspace
\YourProjectName\process.
If you want to open this Job in Talend Studio, you will have to import it in the Talend Studio workspace first.
For more information on how to import items in Talend Studio, see How to import items.
With the graphical interface of the Studio, you can generate this Job using your Jobscript. Proceed as follows:
2. Expand the Job Scripts node and right-click the Job script you want to generate the Job Design of.
3. Select Generate Job from the contextual menu. A Job with the same name as the Job script is created in the
Job Designs node of the Repository. You can open it and execute it, if needed.
2. Expand the Job Scripts node and right-click the Job script you want to export.
3. Select Export Job Script from the contextual menu. An [Export Job Script] wizard displays.
4. Select the folder to which you want to export your Job script.
5. Click OK. A file with a .jobscript file extension is exported to the defined folder.
To do so,
1. In Talend Studio, click the File menu, and then Open/Edit Job Script. An [Open/Edit Job Script] wizard displays.
2. Browse to the jobscript file you want to open and select it.
3. Click Open. The jobscript file opens in the workspace of the Integration perspective of Talend Studio. You
can modify the script if needed and save your changes.
Jobscript files can be opened and edited in the Studio but they cannot be saved in the Repository.
For more information about Job creation without Job script, see Creating a Job.
1. In the Repository tree view, double-click the Job you want to open.
2. Click on the Jobscript tab situated under the design workspace to view and edit the script if necessary.
From this view, you can visualize and/or modify all the elements of your Job: the script version, the context variables, the
Job parameters including project information, Job settings, and routines called in the Job, the names of the components,
their schema, their parameters, the connections used to link them, their position, the text notes added to the Job (if any),
the subjob information, etc.
For more information, see How to write a Job script and How to edit a Job script.
The examples below show different options for you to insert a tMap between a tFileInputDelimited and
tMySqlOutput linked by a Row > Main connection. For how to connect components in a Job, see Connecting
the components together. For more information about various types of connections, see Connection types.
If you are prompted to give a name to the output connection from the newly added component, which is true
in the case of a tMap, type in a name and click OK to close the dialog box.
You may be asked to retrieve the schema of the target component. In that case, click OK to accept or click No to deny.
The component is inserted in the middle of the connection, which is now divided into two connections.
1. Click on the connection that links the two existing components to select it.
2. Type the name of the new component you want to add, tMap in this example, and double click the component
on the suggested list to add it onto the connection.
3. If you are prompted to give a name to the output connection from the newly added component, which is true
in the case of a tMap, type in a name and click OK to close the dialog box.
You may be asked to retrieve the schema of the target component. In that case, click OK to accept or click No to deny.
The component is inserted in the middle of the connection, which is now divided into two connections.
Adding the component to the design workspace and moving the existing connection
1. Add the new component, tMap in this example, onto the design workspace by either dropping it from the
Palette or clicking in the design workspace and typing the component name.
2. Select the connection and move your mouse pointer towards the end of the connection until the mouse pointer
becomes a + symbol.
3. Drag the connection from the tMySqlOutput component and drop it onto the tMap component.
4. Connect the tMap component to the tMySqlOutput using a Row > Main connection.
Each component is defined by basic and advanced properties shown respectively on the Basic Settings tab and
the Advanced Settings tab of the Component view of the selected component in the design workspace. The
Component view also gathers other information related to the component in use, including the View and
Documentation tabs.
For an example of a basic Job design, see Getting started with a basic Job.
Each component has specific basic settings according to its function requirements within the Job. For a detailed
description of each component properties and use, see Talend Components Reference Guide.
Some components require code to be input or functions to be set. Make sure you use Java code in properties.
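For instance (a purely illustrative expression; the column names are assumed from the example earlier in this chapter), a tMap output column could combine two input columns using standard Java string methods:
row1.CustomerName.toUpperCase() + " - " + row1.CustomerAddress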
For File and Database components, you can centralize properties in metadata files located in the Metadata
directory of the Repository tree view. This means that on the Basic Settings tab you can set properties on the
spot, using the Built-In Property Type or use the properties you stored in the Metadata Manager using the
Repository Property Type. The latter option helps you save time.
Select Repository as Property Type and choose the metadata file holding the relevant information. Related topic:
Managing Metadata.
Alternatively, you can drop the Metadata item from the Repository tree view directly to the component already
dropped on the design workspace, for its properties to be filled in automatically.
If you selected the Built-in mode and set manually the properties of a component, you can also save those
properties as metadata in the Repository. To do so:
1. Click the floppy disk icon. The metadata creation wizard corresponding to the component opens.
2. Follow the steps in the wizard. For more information about the creation of metadata items, see Managing
Metadata.
For all components that handle a data flow (most components), you can define a Talend schema in order to
describe and possibly select the data to be processed. Like the Properties data, this schema is either Built-in or
stored remotely in the Repository in a metadata file that you created. A detailed description of the Schema setting
is provided in the next sections.
A schema created as Built-in is meant for a single use in a Job, hence cannot be reused in another Job.
Select Built-in in the Property Type list of the Basic settings view, and click the Edit Schema button to create
your built-in schema by adding columns and describing their content, according to the input file definition.
Make sure the data type in the Type column is correctly defined.
For more information regarding Java data types, including date pattern, see Java API Specification.
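For instance, date patterns in Talend schemas follow the java.text.SimpleDateFormat conventions. The following is a minimal, purely illustrative Java sketch, not a component setting:
// Illustrative only: format the current date with a Talend-style pattern
java.text.SimpleDateFormat format = new java.text.SimpleDateFormat("dd-MM-yyyy");
String formatted = format.format(new java.util.Date()); // e.g. "31-12-2016"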
• Object: a generic Talend data type that allows processing data without regard to its content, for example, a
data file not otherwise supported can be processed with a tFileInputRaw component by specifying that it has
a data type of Object.
• List: a space-separated list of primitive type elements in an XML Schema definition, defined using the xsd:list
element.
• Dynamic: a data type that can be set for a single column at the end of a schema to allow processing fields as
VARCHAR(100) columns named either as ‘Column<X>’ or, if the input includes a header, from the column
names appearing in the header. For more information, see Dynamic schema.
• Document: a data type that allows processing an entire XML document without regard to its content.
In all output properties, you also have to define the schema of the output. To retrieve the schema defined in the
input schema, click the Sync columns button in the Basic settings view.
When creating a database table, it is recommended that you specify the Length field for all columns of type String, Integer or
Long and the Precision field for all columns of type Double, Float or BigDecimal in the schema of the component
used. Otherwise, unexpected errors may occur.
If you often use certain database connections or specific files when creating your data integration Jobs, you can
avoid defining the same properties over and over again by creating metadata files and storing them in the Metadata
node in the Repository tree view of the Integration perspective.
To recall a metadata file into your current Job, select Repository in the Schema list and then select the relevant
metadata file. Or, drop the metadata item from the Repository tree view directly to the component already dropped
on the design workspace. Then click Edit Schema to check that the data is appropriate.
You can edit a repository schema used in a Job from the Basic settings view. However, note that the schema hence
becomes Built-in in the current Job.
You can also use a repository schema partially. For more information, see How to use a repository schema
partially.
You cannot change the schema stored in the repository from this window. To edit the schema stored remotely, right-click it
under the Metadata node and select the corresponding edit option (Edit connection or Edit file) from the contextual menu.
When using a repository schema, if you do not want to use all the predefined columns, you can select particular
columns without changing the schema into a built-in one:
The following describes how to use a repository schema partially for a database input component. The procedure
may vary slightly according to the component you are using.
1. Click the [...] button next to Edit schema on the Basic settings tab. The [Edit parameter using repository]
dialog box appears. By default, the option View schema is selected.
2. Click OK. The [Schema] dialog box pops up, which displays all columns in the schema. The Used Column
check box before each column name indicates whether the column is used.
3. Select the Used Column check boxes for the columns you want to use and clear the check boxes for those
you do not.
4. Click OK. A message box appears, which prompts you to do a guess query.
The guess query operation is needed only for the database metadata.
5. Click OK to close the message box. The [Propagate] dialog box appears. Click Yes to propagate the changes
and close the dialog box.
6. On the Basic settings tab, click Guess Query. The selected column names are displayed in the Query area
as expected.
For more information about how to set a repository schema, see How to set a repository schema.
On any field of your Job/component Properties view, you can use the Ctrl+Space bar to access the global and
context variable list and set the relevant field value dynamically.
3. Select from the list the relevant parameters you need. Appended to the variable list, an information panel
provides details about the selected parameter.
This can be any parameter including error messages, number of lines processed, and so on. The list varies
according to the selected component or the context you are working in.
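For example, the number of lines processed by a tFileInputDelimited component is exposed through its NB_LINE global variable, which can be inserted into a field as follows (the component name is assumed):
((Integer)globalMap.get("tFileInputDelimited_1_NB_LINE"))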
The content of the Advanced settings tab changes according to the selected component.
Generally you will find on this tab the parameters that are not required for a basic or usual use of the component
but may be required for a use out of the standard scope.
You can also find in the Advanced settings view the option tStatCatcher Statistics that allows you, if selected,
to display logs and statistics about the current Job without using dedicated components. For more information
regarding the stats & log features, see How to automate the use of statistics & logs.
The Dynamic settings tab, on the Component view, allows you to customize these parameters as code or
variables.
This feature allows you, for example, to define these parameters as variables and thus let them become context-
dependent, whereas they are not meant to be by default.
Another benefit of this feature is that you can now change the context setting at execution time. This makes full
sense when you intend to export your Job in order to deploy it onto a Job execution server for example.
To customize these types of parameters, as context variables for example, follow these steps:
1. Select the relevant component basic settings or advanced settings view that contains the parameter you want
to define as a variable.
3. Click the plus button to display a new parameter line in the table.
4. Click the Name of the parameter displayed to show the list of available parameters, for example Print
operations.
5. Then click in the facing Code column cell and set the code to be used, for example context.verbose if you
created the corresponding context variable, called verbose.
The corresponding lists or check boxes thus become unavailable and are highlighted in yellow in the Basic
settings or Advanced settings tab.
If you want to set a parameter as context variable, make sure you create the corresponding variable in the Contexts view.
For more information regarding the context variable definition, see How to define context variables in the Contexts view.
For use cases showing how to define a dynamic parameter, see the scenario of tMysqlInput about reading data
from MySQL databases through context-based dynamic connections and the scenario of tContextLoad in Talend
Components Reference Guide.
The dynamic column retrieves the columns which are undefined in the schema. This means that source columns
which are unknown when the Job is designed, become known at runtime and are added to the schema. This can
make Job design much easier as it allows for simple one-to-one mapping of many columns. There are many uses for
dynamic columns. For instance, in data migration tasks, developers can copy columns of data to another location
without having to map each column individually.
While the dynamic schema feature significantly eases Job designs, it does not work in all components. For a list of
components that support this feature, see Which components provide the Dynamic Schema feature.
For further information about defining dynamic schemas, see How to define dynamic schema columns.
For further information regarding the mapping of dynamic columns, see How to map dynamic columns.
For further information regarding the use of dynamic schemas in Jobs, see the scenarios of tMysqlInput and
tMysqlOutput components in Talend Components Reference Guide.
The dynamic schema columns are easy to define. To define dynamic columns for the Databases Input and Output
components, or for tFileInputDelimited and tFileOutputDelimited:
1. In the component's Basic settings tab, set the Property Type as Built-In.
3. In the last row added to the schema, enter a name for the dynamic column in the Column field.
4. Click the Type field and then the arrow which appears to select Dynamic from the list.
The dynamic column must be defined in the last row of the schema.
In the Database Input components, the SELECT query must include the * wildcard to retrieve all of the columns from the
selected table (for example, SELECT * FROM your_table).
For further information regarding the mapping of dynamic columns, see How to map dynamic columns.
For further information concerning the use of dynamic schemas in Jobs, see Talend Components Reference Guide.
Mapping dynamic columns in the tMap component is easy: in the Map Editor, simply drop the dynamic column
from the input schema to the output schema; this does not change any of the column values. Note the following:
• The dynamic column must be mapped on a one-to-one basis and cannot undergo any transformations.
• The dynamic column cannot be renamed in output tables and cannot be used as a join condition.
Dynamic schemas can be mapped to several outputs and can also be mapped from lookup inputs.
For further information about defining dynamic schemas, see How to define dynamic schema columns.
You can graphically highlight both Label and Hint text with HTML formatting tags:
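For example, a hypothetical label mixing bold and italic text could be entered as:
<b>Customers</b> <i>daily load</i>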
In the Documentation tab, you can add your text in the Comment field. Then, select the Show Information
check box and an information icon is displayed next to the corresponding component in the design workspace.
You can show the Documentation in your hint tooltip using the associated variable _COMMENT_, so that when
you place your mouse on this icon, the text written in the Comment field displays in a tooltip box.
For advanced use of Documentations, you can use the Documentation view in order to store and reuse any type
of documentation.
When you drop a component onto the design workspace, all possible start components take a distinctive bright
green background color. Note that most components can be Start components.
Only components which do not make sense as flow triggers, such as the tMap component, are not proposed as
Start components.
To distinguish which component is to be the Start component of your Job, identify the main flow and the secondary
flows of your Job.
• The main flow should be the one connecting a component to the next component using a Row type link. The
Start component is then automatically set on the first component of the main flow (icon with green background).
• The secondary flows are also connected using a Row-type link which is then called Lookup row on the design
workspace to distinguish it from the main flow. This Lookup flow is used to enrich the main flow with more data.
Be aware that you can change the Start component, and hence the main flow, by changing a main Row into a
Lookup Row: simply right-click the row to be changed.
Related topics:
From the Palette, you can search for all the Jobs that use the selected component. To do so:
1. In the Palette, right-click the component you want to look for and select Find Component in Jobs.
A progress indicator displays to show the percentage of the search operation that has been completed then
the [Find a Job] dialog box displays listing all the Jobs that use the selected component.
2. From the list of Jobs, click the desired Job and then click OK to open it on the design workspace.
At present, only tFileInputDelimited, tFileInputExcel, and tFixedFlowInput support default values in the schema.
In the following example, the company and city fields of some records of the source CSV file are left blank, as
shown below. The input component reads data from the source file and completes the missing information using
the default values set in the schema, Talend and Paris respectively.
id;firstName;lastName;company;city;phone
1;Michael;Jackson;IBM;Roma;2323
2;Elisa;Black;Microsoft;London;4499
3;Michael;Dujardin;;;8872
4;Marie;Dolvina;;;6655
5;Jean;Perfide;;;3344
6;Emilie;Taldor;Oracle;Madrid;2266
7;Anne-Laure;Paldufier;Apple;;4422
1. Double-click the input component tFileInputDelimited to show its Basic settings view.
In this example, the metadata for the input component is stored in the Repository. For information about
metadata creation in the Repository, see Managing Metadata.
2. Click the [...] button next to Edit schema, and select the Change to built-in property option from the pop-
up dialog box to open the schema editor.
3. Enter Talend between quotation marks in the Default field for the company column, enter Paris between
quotation marks in the Default field for the city column, and click OK to close the schema editor.
4. Configure the output component tLogRow to display the execution result the way you want, and then run
the Job.
In the output data flow, the missing information is completed according to the set default values.
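For instance, the incomplete records of the source file shown above would be completed as follows (shown here in the source file's delimited format):
3;Michael;Dujardin;Talend;Paris;8872
4;Marie;Dolvina;Talend;Paris;6655
5;Jean;Perfide;Talend;Paris;3344
7;Anne-Laure;Paldufier;Apple;Paris;4422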
Right-click a component on the design workspace to display a contextual menu that lists all available connections
for the selected component.
Main
This type of row connection is the most commonly used connection. It passes on data flows from one component
to the other, iterating on each row and reading input data according to the component properties setting (schema).
Data transferred through main rows are characterized by a schema definition which describes the data structure
in the input file.
You cannot connect two Input components together using a Row > Main connection. Only one incoming Row connection
is possible per component. You cannot link the same target component twice using a main Row connection; the
second Row connection will be called Lookup.
To connect two components using a Main connection, right-click the input component and select Row > Main
on the connection list.
Alternatively, you can click the component to highlight it, then right-click it or click the O icon that appears
beside it and drag the cursor towards the destination component. This will automatically create a Row > Main
type of connection.
Lookup
This row connection connects a sub-flow component to a main flow component (which should be allowed to
receive more than one incoming flow). This connection is used only in the case of multiple input flows.
A Lookup row can be changed into a main row at any time (and conversely, a main row can be changed to a lookup
row). To do so, right-click the row to be changed, and on the pop-up menu, click Set this connection as Main.
Filter
This row connection connects specifically a tFilterRow component to an output component. This row connection
gathers the data matching the filtering criteria. This particular component also offers a Reject connection to fetch
the non-matching data flow.
Rejects
This row connection connects a processing component to an output component. This row connection gathers the
data that does NOT match the filter or is not valid for the expected output. This connection allows you to track
the data that could not be processed for any reason (wrong type, undefined null value, etc.). On some components,
this connection is enabled when the Die on error option is deactivated. For more information, refer to the relevant
component properties available in Talend Components Reference Guide.
ErrorReject
This row connection connects a tMap component to an output component. This connection is enabled when you
clear the Die on error check box in the tMap editor and it gathers data that could not be processed (wrong type,
undefined null value, unparseable dates, etc.).
Output
This row connection connects a tMap component to one or several output components. As the Job output can be
multiple, you get prompted to give a name for each output row created.
The system also remembers deleted output connection names (and properties if they were defined). This way, you do not
have to fill in again property data in case you want to reuse them.
Uniques/Duplicates
The Uniques connection gathers the rows that are found first in the incoming flow. This flow of unique data is
directed to the relevant output component or else to another processing subjob.
The Duplicates connection gathers the possible duplicates of the first encountered rows. This reject flow is directed
to the relevant output component, for analysis for example.
Multiple Input/Output
Some components help handle data through multiple inputs and/or multiple outputs. These are often processing-
type components such as the tMap.
If this requires a join or some transformation in one flow, you want to use the tMap component, which is dedicated
to this use.
For further information regarding data mapping, see Mapping data flows.
For properties regarding the tMap component as well as use case scenarios, see Talend Components Reference
Guide.
Combine
When right-clicking the CombinedSQL component to be connected to the next one, select Row > Combine.
Related topics: CombinedSQL components in the ELT Components chapter of the Talend Components Reference
Guide.
A component can be the target of only one Iterate connection. The Iterate connection is mainly to be connected
to the start component of a flow (in a subjob).
Some components such as the tFileList component are meant to be connected through an iterate connection with
the next component. For how to set an Iterate connection, see Iterate connection settings.
The name of the Iterate connection is read-only unlike other types of connections.
The connection in use will create a dependency between Jobs or subjobs which therefore will be triggered one
after the other according to the trigger nature.
OnSubjobOK (previously Then Run): This connection is used to trigger the next subjob on the condition that
the main subjob completed without error. This connection is to be used only from the start component of the Job.
These connections are used to orchestrate the subjobs forming the Job or to easily troubleshoot and handle
unexpected errors.
OnSubjobError: This connection is used to trigger the next subjob in case the first (main) subjob does not complete
correctly. This "on error" subjob helps flag the bottleneck or handle the error if possible.
OnComponentOK and OnComponentError are component triggers. They can be used with any source
component on the subjob.
OnComponentOK will only trigger the target component once the execution of the source component is complete
without error. Its main use could be to trigger a notification subjob for example.
OnComponentError will trigger the subjob or component as soon as an error is encountered in the primary Job.
Run if triggers a subjob or component in case the condition defined is met. For further information about Run
if, see Run if connection settings.
The Link connection therefore does not handle actual data but only the metadata regarding the table to be operated
on.
When right-clicking the ELT component to be connected, select Link > New Output.
Be aware that the name you provide to the connection must reflect the actual table name.
In fact, the connection name will be used in the SQL statement generated through the ELT Mapper, therefore the
same name should never be used twice.
The Advanced settings vertical tab lets you monitor the data flow over the connection in a Job without using a
separate tFlowMeter component. The measured information will be interpreted and displayed in a monitoring tool
such as Talend Activity Monitoring Console (available with Talend subscription-based products). For information
about Talend Activity Monitoring Console, see Talend Activity Monitoring Console User Guide.
To monitor the data over the connection, perform the following settings in the Advanced settings vertical tab:
2. From the Mode list, select Absolute to log the actual number of rows passed over the connection, or Relative
to log the ratio (%) of the number of rows passed over this connection against a reference connection. If you
select Relative, you need to select a reference connection from the Connections List list.
3. Click the plus button to add a line in the Thresholds table and define a range of the number of rows to be
logged.
For more information about flow metrics, see the documentation of the tFlowMeterCatcher component in Talend
Components Reference Guide and see Talend Activity Monitoring Console User Guide.
When working in a remote project, you can define checkpoints on OnSubjobOK and OnSubjobError trigger
connections, so that the execution of your Job can be recovered, in case of Job execution failure, from the last
checkpoint previous to the error through the Error Recovery Management page in Talend Administration Center.
To define a checkpoint on a subjob trigger connection, perform the following settings in the Error recovery
vertical tab of the connection's Component view:
3. Fill in any text that can explain the failure in the Failure instructions text field.
Related topic: Recovering Job execution in Talend Administration Center User Guide.
In the Basic settings view of a Run if connection, you can set the condition to the Subjob in Java.
You can use variables in your condition. Pressing Ctrl+Space allows you to access all global and context variables.
For more information, see How to use variables in a Job.
When adding a comment after the condition, be sure to enclose it between /* and */ even if it is a single-line comment.
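For instance, a condition followed by a single-line comment would read (reusing the variable from the example below):
((Integer)globalMap.get("tFileInputDelimited_1_NB_LINE"))==0 /* the input file contains no data rows */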
In the following example, a message is triggered if the input file contains 0 rows of data.
1. Create a Job and drop three components to the design workspace: a tFileInputDelimited, a tLogRow, and
a tMsgBox.
2. Connect the components:
• Right-click the tFileInputDelimited component, select Row > Main from the contextual menu, and click
the tLogRow component.
• Right-click the tFileInputDelimited component, select Trigger > Run if from the contextual menu, and
click the tMsgBox component.
3. Configure the tFileInputDelimited component so that it reads a file that contains no data rows.
4. Select the Run if connection between the tFileInputDelimited component and the tMsgBox component,
and click the Component view. In the Condition field on the Basic settings tab, press Ctrl+Space to
access the variable list, and select the NB_LINE variable of the tFileInputDelimited component. Edit the
condition as follows:
((Integer)globalMap.get("tFileInputDelimited_1_NB_LINE"))==0
5. Go to the Component view of the tMsgBox component, and enter a message, "No data is read from the file"
for example, in the Message field.
6. Save and run the Job. You should see the message you defined in the tMsgBox component.
To define a breakpoint, perform the following settings in the Breakpoint vertical tab:
2. If you want to combine simple filtering and advanced mode, select your logical operator in the Logical
operator used to combine conditions list.
3. Click the [+] button to add as many filtering conditions as you want in the Conditions table. These conditions
will be performed one after another for each row. Each condition includes the input column to operate the
selected function on, the function to operate, an operator to combine the input column and the value to be
filtered, and the value.
4. If the standard functions are not sufficient to carry out your operation, select the Use advanced mode check
box and fill in a regular expression in the text field.
Upon defining your breakpoint, run your Job in Traces Debug mode. For more information about breakpoint usage
in Traces Debug mode, see Breakpoint monitoring.
Note that the Parallelization tab is available only on the condition that you have subscribed to one of the Talend
Platform solutions or Big Data solutions.
For further information about how to partition a data flow for parallelized executions, see How to enable
parallelization of data flows.
Depending on the circumstances the Job is being used in, you might want to manage it differently for various
execution types, known as contexts (Prod and Test in the example given below). For instance, there might be
various testing stages you want to perform and validate before a Job is ready to go live for production use.
A context is characterized by parameters. These parameters are mostly context-sensitive variables which are
added to the list of variables for reuse in the component-specific properties on the Component view, accessible
by pressing Ctrl+Space.
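For instance, in the generated Java code a context variable named host of type String is referenced as context.host. A minimal sketch of a component field value built from such variables (the variable names are those used in the example below) might read:

"jdbc:mysql://" + context.host + ":" + context.port + "/" + context.database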
Talend Studio offers you the possibility to create multiple context data sets. Furthermore, you can either create
context data sets on a one-shot basis from the Contexts tab of a Job, or centralize the context data sets under
the Contexts node of the Repository tree view in order to reuse them in different Jobs.
You can define the values of your context variables when creating them, or load your context parameters
dynamically, either explicitly using the tContextLoad component or implicitly using the Implicit Context Load
feature, when your Jobs are executed.
This section describes how to create contexts and variables and define context parameter values. For an example
of loading context parameters dynamically using the tContextLoad component, see the documentation of
tContextLoad in the Talend Components Reference Guide. For an example of loading context parameters
dynamically using the Implicit Context Load feature, see Using the Implicit Context Load feature.
• Using the Contexts view of the Job. See How to define context variables in the Contexts view.
• Using the F5 key from the Component view of a component. See How to define variables from the Component
view.
If you cannot find the Contexts view on the tab system of Talend Studio, go to Window > Show view > Talend, and select
Contexts.
The Contexts tab view shows all of the variables that have been defined for each component in the current Job
and context variables imported into the current Job.
• Import variables from a Repository context source for use in the current Job.
• Edit Repository-stored context variables and update the changes to the Repository.
The following example will demonstrate how to define two contexts named Prod and Test and a set of variables
- host, port, database, username, password, and table_name - under the two contexts for a Job.
Defining contexts
2. Select the Contexts tab view and click the [+] button at the upper right corner of the view.
The [Configure Contexts] dialog box pops up. A context named Default has been created and set as the
default one by the system.
3. Select the context Default, click the Edit... button and enter Prod in the [Rename Context] dialog box that
opens to rename the context Default to Prod.
4. Click the New... button and enter Test in the [New Context] dialog box. Then click OK to close the dialog
box.
5. Select the check box preceding the context you want to set as the default context. You can also set the default
context by selecting the context name from the Default context environment list in the Contexts tab view.
If needed, move a context up or down by selecting it and clicking the Up or Down button.
In this example, set Test as the default context and move it up.
6. Click OK to validate your context definition and close the [Configure Contexts] dialog box.
The newly created contexts are shown in the context variables table of the Contexts tab view.
Defining variables
1. Click the [+] button at the bottom of the Contexts tab view to add a parameter line in the table.
2. Click in the Name field and enter the name of the variable you are creating, host in this example.
3. From the Type list, select the type of the variable corresponding to the component field where it will be used,
String for the variable host in this example.
4. If needed, click in the Comment field and enter a comment to describe the variable.
5. Click in the Value field and enter the variable value under each context.
For different variable types, the Value field appears slightly different when you click in it and functions
differently:
It is recommended that you enclose the values of string type variables between double quotation marks to avoid
possible errors during Job execution.
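As an illustration with hypothetical values, the host variable could hold a different quoted string under each context:

Context Value
Prod "prod-db.example.com"
Test "localhost"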
6. If needed, select the check box next to the variable of interest and enter the prompt message in the
corresponding Prompt field. This allows you to see a prompt for the variable value and to edit it at
execution time.
You can show/hide a Prompt column of the table by clicking the black right/left pointing triangle next to
the relevant context name.
7. Repeat the steps above to define all the variables in this example.
All the variables created and their values under different contexts are displayed in the table and are ready for
use in your Job. You can further edit the variables in this view if needed.
You can also add a built-in context variable to the Repository to make it reusable across different Jobs. For
more information, see How to add a built-in context variable to the Repository.
Related topics:
• How to define variables from the Component view
1. On the relevant Component view, place your cursor in the field you want to parameterize.
3. Give a Name to this new variable, fill in the Comment field if needed, and choose the Type.
4. Enter a Prompt to be displayed to confirm the use of this variable in the current Job execution (generally
used for test purposes only), and select the Prompt for value check box to display the prompt message and an
editable value field at execution time.
5. If you filled in a value already in the corresponding properties field, this value is displayed in the Default
value field. Otherwise, type in the default value you want to use for one context.
7. Go to the Contexts view tab. Notice that the context variables tab lists the newly created variables.
The variable name should follow certain naming rules and must not contain any forbidden characters, such as the space character.
The variable created this way is automatically stored in all existing contexts, but you can subsequently change the
value independently in each context. For more information on how to create or edit a context, see Defining contexts.
Related topics:
• Creating a context group using the [Create / Edit a context group] wizard. See How to create a context group
and define context variables in it for details.
• Adding a built-in context variable to an existing or new context group in the Repository. See How to add a
built-in context variable to the Repository for details.
• Saving a context from metadata. See How to create a context from a Metadata for more information.
1. Right-click the Contexts node in the Repository tree view and select Create context group from the
contextual menu.
A 2-step wizard appears to help you define the various contexts and context parameters.
2. In Step 1 of 2, type in a name for the context group to be created, TalendDB in this example, and add any
general information such as a description if required. The information you provide in the Description field
will appear as a tooltip when you move your mouse over the context group in the Repository.
3. Click Next to go to Step 2 of 2, which allows you to define the various contexts and variables that you need.
A context named Default has been created and set as the default one by the system.
4. Click the [+] button at the upper right corner of the wizard to define contexts. The [Configure Contexts]
dialog box pops up.
5. Select the context Default, click the Edit... button and enter Prod in the [Rename Context] dialog box that
opens to rename the context Default to Prod.
6. Click the New... button and enter Test in the [New Context] dialog box. Then click OK to close the dialog
box.
7. Select the check box preceding the context you want to set as the default context. You can also set the default
context by selecting the context name from the Default context environment list on the wizard.
If needed, move a context up or down by selecting it and clicking the Up or Down button.
In this example, set Test as the default context and move it up.
8. Click OK to validate your context definition and close the [Configure Contexts] dialog box.
The newly created contexts are shown in the context variables table of the wizard.
1. Click the [+] button at the bottom of the wizard to add a parameter line in the table.
2. Click in the Name field and enter the name of the variable you are creating, host in this example.
3. From the Type list, select the type of the variable corresponding to the component field where it will be used,
String for the variable host in this example.
4. If needed, click in the Comment field and enter a comment to describe the variable.
5. Click in the Value field and enter the variable value under each context.
For different variable types, the Value field appears slightly different when you click in it and functions
differently:
It is recommended that you enclose the values of string type variables between double quotation marks to avoid
possible errors during Job execution.
6. If needed, select the check box next to the variable of interest and enter the prompt message in the corresponding
Prompt field. This allows you to see a prompt for the variable value and to edit it at execution time.
You can show/hide a Prompt column of the table by clicking the black right/left pointing triangle next to
the relevant context name.
7. Repeat the steps above to define all the variables in this example.
All the variables created and their values under different contexts are displayed in the table and are ready for
use in your Job. You can further edit the variables if needed.
Once you have created and adapted as many context sets as you want, click Finish to validate. The group of contexts
then displays under the Contexts node in the Repository tree view. You can further edit the context group, contexts,
and context variables in the wizard by right-clicking the Contexts node and selecting Edit context group from
the contextual menu.
Related topics:
• How to add a built-in context variable to the Repository
1. In the Context tab view of a Job, right-click the context variable you want to add to the Repository and select
Add to repository context from the contextual menu to open the [Repository Content] dialog box.
• to add your context variable to a new context group, select Create new context group and enter a name
for the new context group in the Group Name field, and then click OK.
• to add your context variable to an existing context group, select the context group and click OK.
When adding a built-in context variable to an existing context group, make sure that the variable does not already
exist in the context group.
In this example, add the built-in context variable password to a new context group named DB_login.
The context variable is added to the Repository context group of your choice, along with the defined built-in
contexts, and it appears as a Repository-stored context variable in the Contexts tab view.
Related topics:
• How to create a context group and define context variables in it
of the Repository. To do so, complete your connection details and click the Export as context button in the second
step of the wizard.
For more information about this feature, see Exporting metadata as context and reusing context parameters to
set up a connection.
• Drop a context group. This way, the group is applied as a whole. See How to drop a context group onto a Job
for details.
• Use the button. This way, the variables of a context group can be applied separately. See How to apply
context variables to a Job using the context button for details.
2. Once the Job is opened, drop the context group of your choice either onto the Job workspace or onto the
Contexts view beneath the workspace.
The Contexts view shows all the contexts and variables of the group. You can:
• edit the contexts by clicking the [+] button at the upper right corner of the Contexts view.
• delete the whole group or any variable by selecting the group name or the variable and clicking the [X]
button.
• save any imported context variable as a built-in variable by right-clicking it and selecting Add to built-in
from the contextual menu.
• double-click any context variable to open the context group in the [Create / Edit a context group] wizard
and update changes to the Repository.
2. Once the Job is opened in the workspace, click the Contexts view beneath the workspace to open it.
3. At the bottom of the Contexts view, click the button to open the wizard to select the context variables
to be applied.
4. In the wizard, select the context variables you need to apply or clear those you do not need to.
The context variables that have been applied are automatically selected and cannot be cleared.
The Contexts view shows the context group and the selected context variables. You can edit the contexts by
clicking the [+] button at the upper right corner of the Contexts view, delete the whole group or any variable
by selecting the group name or the variable and clicking the [X] button, but you cannot edit Repository-stored
variables in this view.
1. In the relevant Component view, place your cursor in the field you want to parameterize and press
Ctrl+Space to display a full list of all the global variables and the context variables defined in or applied to
your Job.
The list grows along with new user-defined variables (context variables).
Related topics:
• How to define context variables for a Job
In database output components, when parallel execution is enabled, it is not possible to use global variables to retrieve
the return values in a SubJob.
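For reference, outside parallel execution such a return value would typically be read as follows (component name illustrative):

((Integer)globalMap.get("tMysqlOutput_1_NB_LINE_INSERTED"))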
Click the Run Job tab, and in the Context area, select the relevant context among the various ones you created.
If you did not create any context, only the Default context shows on the list.
All the context variables you created for the selected context display, along with their respective values, in a table
underneath.
To make a change to a variable value permanent, you need to change it in the Contexts view if your variable is
built-in, or in the context group of the Repository if it is Repository-stored.
Related topics:
• How to define context variables for a Job
4.7.6. StoreSQLQuery
StoreSQLQuery is a user-defined variable and is mainly dedicated to debugging.
StoreSQLQuery is different from other context variables in that its main purpose is to be used as a parameter
of the specific global variable called Query. It allows you to dynamically feed the global query variable.
The global variable Query is available in the proposals list (Ctrl+Space bar) for some DB input components.
For further details on StoreSQLQuery settings, see Talend Components Reference Guide, and in particular the
scenarios of the tDBInput component.
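As a minimal sketch, assuming StoreSQLQuery is set to true and a DB input component labeled tDBInput_1, the dynamically built query can then be read back from the Query global variable, for example in a tJava component (component and variable names are illustrative):

System.out.println((String) globalMap.get("tDBInput_1_QUERY"));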
Talend Studio allows you to implement different types of parallelization depending on the circumstances.
These circumstances could be:
1. Parallel executions of multiple Subjobs. For further information, see How to execute multiple Subjobs in
parallel.
2. Parallel iterations for reading data. For further information, see How to launch parallel iterations to read data.
3. Orchestrating executions of Subjobs. For further information, see How to orchestrate parallel executions of
Subjobs.
4. Speeding-up data writing into a database. For further information, see How to write data in parallel.
5. Speeding-up processing of a data flow. For further information, see How to enable parallelization of data flows.
Parallelization is an advanced feature and requires basic knowledge about a Talend Job such as how to design
and execute a Job or a Subjob, how to use components and how to use the different types of connections that link
components or Jobs. If you feel that you need to acquire this kind of knowledge, see Designing a Job.
As explained in the previous sections, a Job opened in the workspace can contain several Subjobs and you are
able to arrange their execution order using the trigger links such as OnSubjobOK. However, when the Subjobs
do not have any dependencies between them, you might want to launch them at the same time. For example, the
following image presents four Subjobs within a Job and with no dependencies in between.
In this example, a tRunJob component is used to call the Job that each Subjob represents. For further information
about tRunJob, see Talend Components Reference Guide.
Then, with the Job opened in the workspace, simply proceed as follows to run the Subjobs in parallel:
1. Click the Job tab, then the Extra tab to display it.
2. Select the Multi thread execution check box to enable the parallel execution.
When the Use project settings check box is selected, the Multi thread execution check box may be greyed
out and unavailable. In this situation, clear the Use project settings check box to activate the Multi
thread execution check box.
This feature is optimal when the number of threads (in general, each Subjob counts as one thread) does not exceed
the number of processors of the machine you use for parallel executions. Otherwise, some of the Subjobs have to
wait until a processor is freed up.
For a use case of using this feature to run Jobs in parallel, see Using the Multi-thread Execution feature to run
Jobs in parallel.
1. Simply select the Iterate link of your subjob to display the related Basic settings view of the Components
tab.
2. Select the Enable parallel execution check box and set the number of executions to be carried out in parallel.
When executing your Job, the number of parallel iterations will be distributed onto the available processors.
3. Select the Statistics check box of the Run view to show the real time parallel executions on the design
workspace.
This feature is especially useful when you need to use the Iterate connection to pass context variables to a Subjob.
In that situation, the variables will be read in parallel in the Subjob and thus the processes handled by the Subjob
will be simultaneously run using those variables.
When a Job contains several Subjobs, you might want to execute some of the Subjobs in parallel and then
synchronize the executions of the other Subjobs at the end of the parallel executions.
To do this, you can simply use tParallelize to orchestrate all of the Subjobs to be executed.
In the example presented in the image, tParallelize launches at first the following Subjobs: workflow_sales,
workflow_rd and workflow_finance; after the executions are completed, it launches workflow_hr.
For further information about tParallelize, see Talend Components Reference Guide.
Note that when parallel execution is enabled, it is not possible to use global variables to retrieve return values
in a Subjob.
The Advanced settings of all database output components include the option Enable Parallel Execution which,
if selected, allows you to perform high-speed data processing, that is, treating multiple data flows simultaneously.
When you select the Enable parallel execution check box, the Number of parallel executions field displays,
where you can enter the number by which the current processed data is divided to achieve N levels of parallel
processing.
The current processed data being executed across N fragments might execute N times faster than it would if
processed as a single fragment.
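For example, a flow of 20 million rows divided across 4 parallel executions leaves each execution roughly 5 million rows to process; the actual speed-up depends on the available processors and on I/O.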
You can also set the data flow parallelization parameters from the design workspace of the Integration perspective.
To do that:
1. Right-click a DB output component on the design workspace and select Parallelize from the drop-down list
to display a dialog box.
2. Select the Enable parallel execution check box and enter the number of parallel executions in the
corresponding field. Alternatively, press Ctrl + Space and select the appropriate context variable from the list.
The number of parallel executions displays next to the DB output component in the design workspace.
Note that this type of parallelization is available only on the condition that you have subscribed to one of the
Talend Platform solutions or Big Data solutions.
You can use dedicated components or the Set parallelization option in the contextual menu within a Job to
implement this type of parallel execution.
The dedicated components are tPartitioner, tCollector, tRecollector and tDepartitioner. For related
information, see Talend Components Reference Guide.
The following sections explain how to use the Set parallelization option and the related Parallelization vertical
tab associated with a Row connection.
You can enable or disable the parallelization with a single click, and the Studio then automates the implementation
across a given Job.
The implementation of the parallelization requires four key steps, as explained below:
1. Partitioning: In this step, the Studio splits the input records into a given number of threads.
2. Collecting: In this step, the Studio collects the split threads and sends them to a given component for processing.
3. Departitioning: In this step, the Studio groups the outputs of the parallel executions of the split threads.
4. Recollecting: In this step, the Studio captures the grouped execution results and outputs them to a given component.
Once the automatic implementation is done, you can alter the default configuration by clicking the corresponding
connection between components.
You define the parallelization properties on your row connections according to the following table.
Field/Option Description
Partition row Select this option when you need to partition the input records into a specific number of threads.
Departition row Select this option when you need to regroup the outputs of the processed parallel threads.
Repartition row Select this option when you need to partition the input threads into a specific number of threads and
regroup the outputs of the processed parallel threads.
It is not available to the first or the last row connection of the flow.
None Default option. Select this option when you do not want to take any action on the input records.
Merge sort partitions Select this check box to implement the Mergesort algorithm to ensure the consistency of data.
This check box appears when you select the Departition row or Repartition row option.
Number of Child Threads Type in the number of threads into which you want to split the input records.
This field appears when you select the Partition row or Departition row option.
Buffer Size Type in the number of rows to cache for each of the threads generated.
This field does not appear if you select the None option.
Use a key hash for partitions Select this check box to use the hash mode for dispatching the input records, which will ensure
the records meeting the same criteria are dispatched to the same threads. Otherwise, the dispatch
mode is Round-robin.
This check box appears if you select the Partition row or Repartition row option.
In the Key Columns table that appears after you select the check box, set the columns on which
you want to use the hash mode.
1. In the Integration perspective of your Studio, create an empty Job from the Job Designs node in the
Repository tree view.
For further information about how to create a Job, see Designing a Job.
2. Drop the following components onto the workspace: tFileInputDelimited, tSortRow and
tFileOutputDelimited.
The tFileInputDelimited component (labeled test file in this example) reads the 20 million customer records
from a .txt file generated by tRowGenerator.
For further information about the components used in this scenario, see Talend Components Reference Guide.
Enabling parallelization
• Right-click the start component of the Job, tFileInputDelimited in the scenario, and from the contextual
menu, select Set parallelization.
2. In the File name/Stream field, browse to, or enter the path to the file storing the customer records to be read.
3. Click the [...] button to open the schema editor where you need to create the schema to reflect the structure
of the customer data.
4. Click the [+] button five times to add five rows and rename them as follows: FirstName, LastName, City,
Address and ZipCode.
In this scenario, we leave the data types with their default value, String. In real-world practice, you can
change them depending on the data types of the data to be processed.
5. Click OK to validate these changes and accept the propagation prompted by the pop-up dialog box.
6. If need be, complete the other fields of the Component view with values corresponding to the data to be
processed. In this scenario, we leave them as is.
1. Click the link representing the partitioning step to open its Component view and click the Parallelization tab.
The Partition row option has been automatically selected in the Type area. If you select None, you are
actually disabling parallelization for the data flow to be handled over this link. Note that depending on the
link you are configuring, a Repartition row option may become available in the Type area to repartition a
data flow already departitioned.
• Number of Child Threads: the number of threads you want to split the input records up into. We
recommend that this number be N-1, where N is the total number of CPUs or cores on the machine
processing the data (see the sketch after this procedure).
• Buffer Size: the number of rows to cache for each of the threads generated.
• Use a key hash for partitions: this allows you to use the hash mode to dispatch the input records into
threads.
Once you select it, the Key Columns table appears, in which you set the column(s) you want to apply the
hash mode on. In the hash mode, the records meeting the same criteria are dispatched into the same threads.
If you leave this check box clear, the dispatch mode is Round-robin, meaning records are dispatched one by
one to each thread, in a circular fashion, until the last record is dispatched. Be aware that this mode
cannot guarantee that records meeting the same criteria go into the same threads.
2. In the Number of Child Threads field, enter the number of the threads you want to partition the data flow
into. In this example, enter 3 because we are using 4 processors to run this Job.
3. If required, change the value in the Buffer Size field to adapt the memory capacity. In this example, we leave
the default one.
At the end of this link, the Studio automatically collects the split threads to accomplish the collecting step.
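As a side note, if you prefer not to hard-code the N-1 value recommended above, and assuming the Number of Child Threads field accepts a Java expression as most Studio fields do, a hedged sketch of a dynamic value could be:

Runtime.getRuntime().availableProcessors() - 1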
Configuring tSortRow
2. Under the Criteria table, click the [+] button three times to add three rows to the table.
3. In the Schema column column, select, for each row, the schema column to be used as the sorting criterion.
In this example, select ZipCode, City and Address, sequentially.
4. In the Sort num or alpha? column, select alpha for all the three rows.
5. In the Order asc or desc column, select asc for all the three rows.
6. If the schema does not appear, click the Sync columns button to retrieve the schema from the preceding
component.
8. Select Sort on disk. Then the Temp data directory path field and the Create temp data directory if not
exist check box appear.
9. In the Temp data directory path field, enter or browse to the folder you want to use to store the temporary
data processed by tSortRow. This approach enables tSortRow to sort considerably more data.
As the threads will overwrite each other if they write to the same directory, you need to create a separate folder
for each thread to be processed, using its thread ID.
To use the variable representing the thread IDs, click Code to open its view and, in that view, find this
variable by searching for thread_id. In this example, this variable is tCollector_1_THREAD_ID.
Then you need to enter the path using this variable. This path reads like:
"E:/Studio/workspace/temp"+((Integer)globalMap.get("tCollector_1_THREAD_ID")).
10. Ensure that the Create temp data directory if not exists check box is selected.
1. Click the link representing the departitioning step to open its Component view and click the Parallelization
tab.
The Departition row option has been automatically selected in the Type area. If you select None, you are
actually disabling parallelization for the data flow to be handled over this link. Note that depending on the
link you are configuring, a Repartition row option may become available in the Type area to repartition a
data flow already departitioned.
• Buffer Size: the number of rows to be processed before the memory is freed.
• Merge sort partitions: this allows you to implement the Mergesort algorithm to ensure the consistency
of data.
2. If required, change the value in the Buffer Size field to adapt the memory capacity. In this example, we
leave the default value.
At the end of this link, the Studio automatically accomplishes the recollecting step to group and output the execution
results to the next component.
2. In the File Name field, browse to the file, or enter the directory and the name of the file, that you want to
write the sorted data in. At runtime, this file will be created if it does not exist.
Once done, you can check the file holding the sorted data and the temporary folders created by tSortRow for
sorting data on disk. These folders are emptied once the sorting is done.
For more information about the principles of using this component, see Designing a Job.
For examples of Jobs using this component, see tMap in Talend Components Reference Guide.
You can create a query using the SQL Builder whether your database table schema is stored in the Repository tree
view or built in directly in the Job.
Fill in the DB connection details and select the appropriate repository entry if you defined it.
Remove the default query statement in the Query field of the Basic settings view of the Component panel. Then
click the [...] button to open the [SQL Builder] editor.
• Current Schema,
• Database structure,
• Schema view.
The Database structure shows the tables for which a schema was defined either in the repository database entry
or in your built-in connection.
The schema view, in the bottom right corner of the editor, shows the column description.
The connection to the database, in case of a built-in schema or a refresh operation on a repository schema, might
take quite some time.
Click the refresh icon to display the differences between the DB metadata tables and the actual DB tables.
The Diff icons point out that the table contains differences or gaps. Expand the table node to show the exact column
containing the differences.
The red highlight shows that the content of the column contains differences or that the column is missing from
the actual database table.
The blue highlight shows that the column is missing from the table stored in Repository > Metadata.
1. Right-click the table or the table column and select Generate Select Statement from the pop-up list.
2. Click the empty tab showing by default and type in your SQL query or press Ctrl+Space to access the
autocompletion list. The tooltip bubble shows the whole path to the table or table section you want to search in.
Alternatively, the graphical query Designer allows you to handle tables easily and have real-time generation
of the corresponding query in the Edit tab.
3. Click the Designer tab to switch from the manual Edit mode to the graphical mode.
You may get a message while switching from one view to the other as some SQL statements cannot be interpreted
graphically.
4. If you selected a table, all columns are selected by default. Clear the check box facing the relevant columns
to exclude them from the selection.
5. Add more tables with a simple right-click. In the Designer view, right-click and select Add tables in the pop-
up list, then select the relevant table to be added.
If joins between these tables already exist, these joins are automatically set up graphically in the editor.
You can also create a join between tables very easily. Right-click the first table columns to be linked and
select Equal on the pop-up list, to join it with the relevant field of the second table.
The SQL statement corresponding to your graphical handlings is also displayed in the viewer part of the
editor. Alternatively, click the Edit tab to switch back to the manual Edit mode.
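For instance, after joining two tables graphically, the statement loaded into the component's Query area is a Java string; with illustrative table and column names it might read:

"select customers.id, orders.total from customers, orders where customers.id = orders.id"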
In the Designer view, you cannot include filter criteria graphically. You need to add these in the Edit view.
6. Once your query is complete, execute it by clicking the icon on the toolbar.
The toolbar of the query editor gives you quick access to the usual commands, such as execute, open, save
and clear.
The results of the active query are displayed on the Results view in the lower left corner.
7. If needed, you can select the context mode check box to keep the original query statement and customize
it properly in the Query area of the component. For example, if a context parameter is used in the query
statement, you cannot execute it by clicking the icon on the toolbar.
8. Click OK. The query statement will be loaded automatically in the Query area of the component.
In the [SQL Builder] editor, click the icon on the toolbar to bind the query with the DB connection and schema
in case these are also stored in the repository.
The query can then be accessed from the Database structure view, on the left-hand side of the editor.
Therefore, checkpoints within Job design can be defined as reference points that can precede or follow a failure
point during Job execution.
The Error recovery settings can be edited only in a remote project. For information about opening a remote project, see
How to open a remote project.
1. In the design workspace and after designing your Job, click the trigger connection you want to set as a
checkpoint.
2. Click the Error recovery tab in the lower left corner to display the Error recovery view.
3. Select the Recovery Checkpoint check box to define the selected trigger connection as a checkpoint in the
Job data flow. The icon is appended on the selected trigger connection.
4. In the Label field, enter a name for the defined checkpoint. This name will display in the Label column
in the Recovery checkpoints view in Talend Administration Center. For more information, see Talend
Administration Center User Guide.
5. In the Failure Instructions field, enter free text explaining the problems and what you think the
failure reason could be. These instructions will display in the Failure Instructions column in the Recovery
checkpoints view in Talend Administration Center. For more information, see Talend Administration Center
User Guide.
6. Save your Job before closing or running it in order for the defined properties to be taken into account.
Later, and in case of failure during the execution of the designed Job, you can recover this Job execution from the
latest checkpoint previous to the failure through the Error Recovery Management page in Talend Administration
Center.
For more information, see the recovering job execution chapter in Talend Administration Center User Guide.
A click on the Exchange link on the toolbar of Talend Studio opens the Exchange tab view on the design
workspace, where you can find lists of:
• components you downloaded and installed in previous versions of Talend Studio but not installed yet in your
current Studio,
• components you have created and uploaded to Talend Exchange to share with other Talend Community users.
Note that the approach explained in this section is to be used for the above-mentioned components only.
• Before you can download community components or upload your own components to the community, you need to sign in
to Talend Exchange from your Studio first. If you did not sign in to Talend Exchange when launching the Studio, you
still have a chance to sign in from the Talend Exchange preferences settings page. For more information, see Exchange
preferences (Talend > Exchange).
• The community components available for download are not validated by Talend. This explains why you may encounter
component loading errors sometimes when trying to install certain community components, why an installed community
component may have a different name in the Palette than in the Exchange tab view, and why you may not be able to find
a component in the Palette after it is seemingly installed successfully.
1. Click the Exchange link on the toolbar of Talend Studio to open the Exchange tab view on the design
workspace.
2. In the Available Extensions view, if needed, enter a full component name or part of it in the text field and
click the refresh button to quickly find the component you are interested in.
3. Click the view/download link for the component of interest to display the component download page.
4. View the information about the component, including component description and review comments from
community users, or write your own review comments and/or rate the component if you want. For more
information on reviewing and rating a community component, see How to review and rate a community
component.
If needed, click the left arrow button to return to the component list page.
5. Click the Install button in the right part of the component download page to start the download and installation
process.
A progress indicator appears to show the completion percentage of the download and installation process.
Upon successful installation of the component, the Downloaded Extensions view opens and displays the
status of the component, which is Installed.
To reinstall a community component you already downloaded or update an installed one, do the following:
1. From the Exchange tab view, click Downloaded Extensions to display the list of components you have
already downloaded from Talend Exchange.
In the Downloaded Extensions view, the components you have installed in your previous version of Talend
Studio but not in your current Studio have an Install link in the Install/Update column, and those with
updates available in Talend Exchange have an Update link.
2. Click the Install or Update link for the component of interest to start the installation process.
A progress indicator appears to show the completion percentage of the installation process. Upon successful
installation, the Downloaded Extensions view displays the status of the component, which is Installed.
1. From the Available Extensions view, click the view/download link for the component you want to review
or rate to open the community component download page.
2. On the component download page, click the write a review link to open the [Review the component] dialog
box.
3. Fill in the required information, including a title and a review comment, click one of the five stars to rate the
component, and click Submit Review to submit your review to the Talend Exchange server.
Upon validation by the Talend Exchange moderator, your review is published on Talend Exchange and
displayed in the User Review area of the component download page.
1. From the Exchange tab view, click My Extensions to open the My Extensions view.
2. Click the Add New Extension link in the upper right part of the view to open the component upload page.
3. Complete the required information, including the component title, initial version, Studio compatibility
information, and component description, fill in or browse to the path to the source package in the File field,
and click the Upload Extension button.
Upon successful upload, the component is listed in the My Extensions view, where you can update, modify
and delete any component you have uploaded to Talend Exchange.
1. From the My Extensions view, click the icon in the Operation column for the component you want to
update to open the component update page.
2. Fill in the initial version and Studio compatibility information, fill in or browse to the path to the source
package in the File field, and click the Update Extension button.
Upon successful upload of the updated component, the component is replaced with the new version on Talend
Exchange and the My Extensions view displays the component's new version and update date.
To modify the information of a component uploaded to Talend Exchange, complete the following:
1. From the My Extensions view, click the icon in the Operation column for the component you want to
modify information for, to open the component information editing page.
2. Complete the Studio compatibility information and component description, and click the Modify Extension
button to update the component information to Talend Exchange.
To delete a component you have uploaded to Talend Exchange, click the icon for the component in the My
Extensions view. The component is then removed from Talend Exchange and is no longer displayed in the
component list in the My Extensions view.
As tPrejob and tPostjob are not meant to take part in any data processing, they cannot be part of a multi-thread execution.
They are meant to help you make your Job design clearer.
To use the tPrejob and tPostjob components, simply drop them onto the design workspace as you would do
with any other component, and then connect tPrejob to a component or subjob that is meant to perform a pre-job
task, and tPostjob to a component or subjob that is meant to perform a post-job task, using Trigger connections.
An orange square on the pre- and post-job parts indicates that they are different types of subjobs.
• Cleaning up temporary files created during the processing of the main data Job.
• Any task required to be executed, even if the preceding Job or subjobs failed.
For use cases that use the tPrejob and tPostjob components, see Talend Components Reference Guide.
The Use Output Stream feature can be found in the Basic settings view of a number of components such as
tFileOutputDelimited.
To use this feature, select the Use Output Stream check box in the Basic settings view of a component that has this
feature. In the Output Stream field that is thus enabled, define your output stream using a command.
Before using the output stream feature, you have to open a stream. For a detailed example illustrating this prerequisite
and the usage of the Use Output Stream feature, see Using the output stream feature. For an example of a Job using this
feature, see the second scenario of tFileOutputDelimited in Talend Components Reference Guide.
1. In the Repository tree view of the Integration perspective, right-click Job Designs and select Create folder
from the contextual menu.
2. In the Label field, enter a name for the folder and then click Finish to confirm your changes and close the
dialog box.
The created folder is listed under the Job Designs node in the Repository tree view.
If you have already created Jobs that you want to move into this new folder, simply drop them into the folder.
This option has been added to all database connection components in order to reduce the number of connections
to open and close.
The Use or register a shared DB Connection option of all database connection components is incompatible with the Use
dynamic job and Use an independent process to run subjob options of the tRunJob component. Using a shared database
connection together with a tRunJob component with either of these two options enabled will cause your Job to fail.
Assume that you have two related Jobs (a parent Job and a child Job) that both need to connect to your remote
MySQL database. To use a shared database connection in the two Jobs, do the following:
1. Add a tMysqlConnection (assuming that you work with a MySQL database) to both the parent and the child
Job, if they are not using a database connection component.
2. Connect each tMysqlConnection to the relevant component in your Jobs using a Trigger > On Subjob Ok
link.
3. In the Basic settings view of the tMysqlConnection component that will run first, fill in the database
connection details if the database connection is not centrally stored in the Repository.
4. Select the Use or register a shared DB Connection check box, and give a name to the connection in the
Shared DB Connection Name field.
You are now able to re-use the connection in your child Job.
5. In the Basic settings view of the other tMysqlConnection component, which is in the other Job, simply
select the Use or register a shared DB Connection check box, and fill the Shared DB Connection Name field
with the same name as in the parent Job.
Among the different Jobs sharing the same database connection, you need to define the database connection details
only in the first Job that needs to open the database connection.
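As a quick sketch with an arbitrary connection name, both Jobs would carry the same value in the Shared DB Connection Name field:

Parent Job, tMysqlConnection_1: Shared DB Connection Name = "shared_mysql_conn"
Child Job, tMysqlConnection_1: Shared DB Connection Name = "shared_mysql_conn"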
For a complete use case, see the scenario of the tMysqlConnection component showing how to share a database
connection between different Jobs in Talend Components Reference Guide.
For more information about how to use the Connection components, see Talend Components Reference Guide.
Mouse over the component to display the tooltip messages or warnings along with the label. This context-sensitive
help informs you about any missing data or component status.
When the tooltip messages of a component indicate that a module is required, you must install this module for this component
using the Module view. This view is hidden by default. For further information about how to install external modules using
this view, see the Talend Installation Guide.
The error icon displays as well on the tab next to the Job name when you open the Job on the design workspace.
The compilation or code generation only takes place when carrying out one of the following operations:
• opening a Job,
When you execute the Job, a warning dialog box opens to list the source and description of any error in the current
Job.
Click Cancel to stop your Job execution or click Continue to continue it.
For information on errors on components, see Warnings and error icons on components.
You can change the note format. To do so, select the note you want to format and click the Basic settings tab of
the Component view.
Select the Opacity check box to display the background color. By default, this box is selected when you drop a
note on the design workspace. If you clear this box, the background becomes transparent.
You can select options from the Fonts and Colors list to change the font style, size, color, and so on as well as
the background and border color of your note.
You can select the Adjust horizontal and Adjust vertical boxes to define the vertical and horizontal alignment
of the text of your note.
The content of the Text field is the text displayed on your note.
The [Data Viewer] dialog box displays the content of the component selected.
You can set the display parameters and filter the content, as described in the table below:
Parameter Description
Rows/page Enter the maximum number of rows to be displayed per page.
Limits Enter the maximum number of rows to be displayed in the viewer.
Null Select the Null check box above a given column to filter any null values from the column.
Condition Enter a condition on which to filter the content displayed.
3. Click Set parameters and continue to go to the [Select context] dialog box.
From the drop-down context list, you can select the context variables you want to verify.
This file content viewer shows the data as it is in the file, regardless of your settings. This can be convenient for
spotting files that are not well formed.
The Information panel is composed of two tabs, Outline and Code Viewer, which provide information regarding
the displayed diagram (either Job or Business Model).
4.10.6.1. Outline
The Outline tab offers a quick view of the business model or the open Job on the design workspace and also a
tree view of all the elements used in the Job or Business Model. As the design workspace, like any other window area,
can be resized to suit your needs, the Outline view provides a convenient way for you to check where you are
located on your design workspace.
This graphical representation of the diagram highlights in a blue rectangle the diagram part showing in the design
workspace.
Click the blue-highlighted view and hold down the mouse button. Then, move the rectangle over the Job.
The Outline view can also display a folder tree view of the components in use in the current diagram. Expand
the node of a component to show the list of variables available for this component.
To switch from the graphical outline view to the tree view, click either icon docked at the top right of the panel.
This view only concerns the Job design code, as no code is generated from Business Models.
Using a graphical colored code view, the tab shows the code of the component selected in the design workspace.
This is a partial view of the primary Code tab docked at the bottom of the design workspace, which shows the
code generated for the whole Job.
This blue highlight helps you easily distinguish one subjob from another.
A Job can be made of one single subjob. An orange square shows the prejob and postjob parts which are different types
of subjobs.
For more information about prejob and postjob, see How to use the tPrejob and tPostjob components.
In the Basic setting view, select the Show subjob title check box if you want to add a title to your subjob, then
fill in a title.
1. In the Basic settings view, click the Title color/Subjob color button to display the [Colors] dialog box.
2. Set your colors as desired. By default, the title color is blue and the subjob color is transparent blue.
Click the minus sign ([-]) to collapse the subjob. When reduced, only the first component of the subjob is displayed.
To remove the background color of all your subjobs, click the Toggle Subjobs icon on the toolbar of Talend Studio.
To remove the background color of a specific subjob, right-click the subjob and select the Hide subjob option
on the pop-up menu.
The Stats & Logs tab allows you to automate the use of Stats & Logs features and the Context loading feature.
For more information, see How to automate the use of statistics & logs.
The Extra tab lists various options you can set to automate some features such as the context parameters use, in
the Implicit Context Loading area. For more information, see How to use the features in the Extra tab.
For more information regarding the Log component, see Talend Components Reference Guide.
The Stats & Logs panel is located on the Job tab underneath the design workspace and prevents your Job Designs
from being overloaded by components.
This setting supersedes the log-related components with a general log configuration.
2. Select the Stats & Logs panel to display the configuration view.
3. Set the relevant details depending on the output you prefer (console, file or database).
You can save the settings into your Project Settings by clicking the button. This way, you can
access such settings via File > Edit project settings > Job settings > Stats & Logs or via the button on the toolbar.
When you use Stats & Logs functions in your Job, you can apply them to all its subjobs.
To do so, click the Apply to subjobs button in the Stats & Logs panel of the Job view and the selected stats &
logs functions of the main Job will be selected for all of its subjobs.
• Select the Multithread execution check box to allow two Job executions to start at the same time.
• Set the Implicit tContextLoad option parameters to avoid using the tContextLoad component on your Job
and automate the use of context parameters.
Choose between File and Database as the source of your context parameters and manually set the file or database
access (see the sample context file after this list).
Set notifications (error/warning/info) for unexpected behaviors linked to context parameter setting.
For an example of loading context parameters dynamically using the Implicit Context Load feature, see Using
the Implicit Context Load feature.
• When you fill in Implicit tContextLoad manually, you can store these parameters in your project by clicking
the Save to project settings button, and thus reuse these parameters for other components in different Jobs.
• Select the Use Project Settings check box to retrieve the context parameters you have already defined in
the Project Settings view.
The Implicit tContextLoad option becomes available and all fields are filled in automatically.
• Click Reload from project settings to update the context parameters list with the latest context parameters
from the project settings.
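For the File option mentioned above, the context file is typically a plain delimited list of variable name/value pairs, one per line. A minimal sketch, assuming a semicolon field separator and the variables from the earlier example:

host;localhost
port;3306

At execution time, each value is assigned to the context variable of the same name.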
These management procedures include importing and exporting Jobs and items between different projects or
machines, scheduling Job execution, running and deploying Jobs on distant servers and copying Jobs onto different
SVN or Git branches.
When a component or a subjob is deactivated, you are not able to create or modify links from or to it. Moreover,
at runtime, no code is generated for the deactivated component or subjob.
1. Right-click the component you want to activate or deactivate, the tFixedFlowInput component for example.
• Business Models
• Jobs Designs
• Routines
• Documentation
• Metadata
Talend Studio allows any authorized user to import project items from a local repository into the remote
repository and share them with other users.
To import items, right-click any entry such as Job Designs or Business Models in the Repository tree view and
select Import Items from the contextual menu or directly click the icon on the toolbar to open the [Import
items] dialog box and then select an import option.
1. Click the Select root directory option in the [Import items] dialog box.
2. Click Browse to browse down to the relevant project folder within the workspace directory. It should
correspond to the name of the project you picked.
3. If you only want to import very specific items such as some Job Designs, you can select the specific folder,
such as Process where all the Job Designs for the project are stored. If you only have Business Models to
import, select the specific folder: BusinessProcess, and click OK.
But if your project gathers various types of items (Business Models, Job Designs, Metadata, Routines, and so on),
we recommend that you select the project folder to import all items in one go, and click OK.
4. If needed, select the overwrite existing items check box to overwrite existing items with those having the
same names to be imported. This will refresh the Items List.
5. From the Items List which displays all valid items that can be imported, select the items that you want to
import by selecting the corresponding check boxes.
To import items from an archive file (including source files and scripts), do the following:
1. Click the Select archive file option in the [Import items] dialog box.
3. If needed, select the overwrite existing items check box to overwrite existing items with those having the
same names to be imported. This will refresh the Items List.
4. From the Items List which displays all valid items that can be imported, select the items that you want to
import by selecting the corresponding check boxes.
1. Click the Select archive file option in the [Import items] dialog box. Then, click Browse Talend Exchange
to open the [Select an item from Talend Exchange] dialog box.
2. Select the desired category from the Category list, and select the desired version from the
TOS_VERSION_FILTER list.
A progress bar appears to indicate that the extensions are being downloaded. Finally, the extensions for the
selected category and version are shown in the dialog box.
3. Select the extension that you want to import from the list.
4. If needed, select the overwrite existing items check box to overwrite existing items with those having the
same names to be imported. This will refresh the Items List.
5. From the Items List which displays all valid items that can be imported, select the items that you want to
import by selecting the corresponding check boxes.
If there are several versions of the same items, they will all be imported into the Project you are running, unless you already
have identical items.
You can now use and share your Jobs and all related items in your collaborative work. For more information about
how to collaborate on a project, see Working collaboratively on project items.
By executing build scripts generated from the templates defined in Project Settings, the Build Job feature adds all
of the files required to execute the Job to an archive, including the .bat and .sh files along with any context-parameter
files or other related files.
Your Talend Studio provides a set of default build script templates. You can customize those templates to meet your actual
needs. For more information, see Customizing Maven build script templates.
By default, when a Job is built, all the required Jars are included in the .bat or .sh command. For a complex Job that
involves many Jars, the number of characters in the batch command may exceed the command length limit
on certain operating systems. To prevent the batch command from failing due to this limit, before building
your Job, go to Window > Preferences, select Talend > Import/Export, and then select the Add classpath jar
in exported jobs check box to wrap the Jars in a classpath.jar file added to the built Job.
The above-mentioned option is incompatible with JobServer. If your built Job will be deployed and executed in Talend
Administration Center, make sure to clear the check box before building your Job.
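As an illustration only (the project and Job names below are hypothetical placeholders, and the exact command
generated in the .bat or .sh file depends on your Job and Studio version), wrapping the Jars shortens a launch
command of this kind:
    java -cp routines.jar:dom4j-1.6.1.jar:...:myjob_0_1.jar myproject.myjob_0_1.MyJob
to roughly:
    java -cp classpath.jar myproject.myjob_0_1.MyJob
where classpath.jar is typically a small Jar whose manifest Class-Path attribute references all the required Jars.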
1. In the Repository tree view, right-click the Job you want to build, and select Build Job to open the [Build
Job] dialog box.
You can show or hide a tree view of all created Jobs in Talend Studio directly from the [Build Job] dialog box by
clicking the corresponding show and hide buttons. The Jobs you selected earlier in the Repository tree view are
displayed with their check boxes selected. This lets you modify the selection of items to be built directly from the
dialog box, without having to close it and go back to the Repository tree view in Talend Studio to do so.
2. In the To archive file field, browse to the directory where you want to save your built Job.
3. From the Select the Job version area, select the version number of the Job you want to build if you have
created more than one version of the Job.
4. Select the Build Type from the list: Standalone Job, Axis Webservice (WAR), Axis Webservice
(Zip) or OSGI Bundle For ESB.
If the data service Job includes the tRESTClient or tESBConsumer component, and none of the Service
Registry, Service Locator or Service Activity Monitor is enabled in the component, the data service Job can
be built as OSGI Bundle For ESB or Standalone Job. With the Service Registry, Service Locator or Service
Activity Monitor enabled, the data service Job including the tRESTClient or tESBConsumer component
can only be built as OSGI Bundle For ESB.
5. Select the Extract the zip file check box if you want the archive file to be automatically extracted in the
target directory.
6. In the Options area, select a build type, either Binaries or Sources (Maven), and the file type(s) you want
to add to the archive file. The check boxes corresponding to the file types necessary for the execution of the
Job are selected by default. You can clear these check boxes depending on what you want to build.
Option Description
Binaries / Sources (Maven) Select Binaries from the list box to build your Job as an executable Job.
Select Sources (Maven) to build the sources of your Job and include in the archive file the
Maven build scripts generated from the templates defined in Project Settings so that you can
rebuild your Job in an Apache Maven system.
Shell launcher Select this check box to export the .bat and/or .sh files necessary to launch the built Job.
To export only one context, select the context that fits your needs from the
Context scripts list, including the .bat or .sh files holding the appropriate context
parameters. Then you can, if you wish, edit the .bat and .sh files to manually modify
the context type.
Apply to children Select this check box if you want to apply the context selected from the list to all child Jobs.
Custom log4j level Select this check box to activate the Log4j output level list and select an output level for
the built Job.
If you select the Items or Source files check box, you can reuse the built Job in
a Talend Studio installed on another machine. These source files are only used in
Talend Studio.
Execute tests Select this check box to execute the test case(s) of the Job, if any, when building the Job, and
include the test report files in the surefire-reports folder of the build archive.
This check box is available only when the Binaries option is selected.
For more information on how to create test cases, see Testing Jobs using test cases.
Add test sources Select this check box to include the sources of the test case(s) of the Job, if any, in the build
archive.
This check box is available only when the Sources (Maven) option is selected.
For more information on how to create test cases, see Testing Jobs using test cases.
Java sources Select this check box to export the .java file holding Java classes generated by the Job when
designing it.
This check box is available only when the Binaries option is selected.
Include libraries Select this check box to include dependencies of the Job in the build archive.
This check box is available only when the Sources (Maven) option is selected.
7. If needed, select a context from the Context scripts list and click the Override parameters' values button.
In the window which opens, you can update, add or remove context parameters and values of the Job context
you selected in the list.
8. Click Finish to validate your changes, complete the build operation and close the dialog box.
If the Job to be built calls a user routine that contains one or more extra Java classes in parallel with the public class named
the same as the user routine, the extra class or classes will not be included in the exported file. To export such classes, you
need to include them within the class with the routine name as inner classes. For more information about user routines, see
Managing user routines. For more information about classes and inner classes, see relevant Java manuals.
If you want to include an Ant or Maven script for each built Job, select the Add build script check box, and
then select the Ant or Maven option.
Select a context from the list when offered. Then, once you click the Override parameters' values button below
the Context scripts check box, the window which opens lists all of the parameters of the selected context. In this
window, you can configure the selected context as needed.
All context parameter files are exported, in addition to the one selected in the list.
After being exported, the context selection information is stored in the .bat or .sh file and the context settings are stored
in the context .properties file.
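As a minimal sketch (the parameter names and values below are hypothetical, not taken from your project), such
a context .properties file holds one key=value pair per context parameter:
    # Prod.properties - parameters of the Prod context
    host=prodserver.example.com
    port=3306
    outputFile=/data/out/result.csv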
Select the type of archive you want to use in your Web application.
Once the archive is produced, place the WAR file, or the relevant classes from the ZIP (or unzipped files), into
the relevant location of your Web application server.
http://localhost:8080/Webappname/services/JobName?method=runJob&args=null
The call return from the Web application is 0 when there is no error and different from 0 in case of error. For a
real-life example of creating and building a Job as a Webservice and calling the built Job from a browser, see An
example of building a Job as a Web service.
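For instance, reusing the example URL above (Webappname and JobName are placeholders, and curl is an external
tool, not part of Talend Studio), you could trigger the deployed Job from a command line:
    curl "http://localhost:8080/Webappname/services/JobName?method=runJob&args=null"
The call return described above (0 when there is no error) can then be inspected in a script to decide on
follow-up actions.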
The tBufferOutput component was especially designed for this type of deployment. For more information
regarding this component, see Talend Components Reference Guide.
3. In the design workspace, select tFixedFlowInput, and click the Component tab to define the basic settings
for tFixedFlowInput.
4. Set the Schema to Built-In and click the [...] button next to Edit Schema to describe the data structure you
want to create from internal variables. In this scenario, the schema is made of three columns, now, firstname,
and lastname.
5. Click the [+] button to add the three parameter lines and define your variables, and then click OK to close
the dialog box and accept propagating the changes when prompted by the system.
The three defined columns display in the Values table of the Basic settings view of tFixedFlowInput.
6. In the Value cell of each of the three defined columns, press Ctrl+Space to access the
global variable list, and select TalendDate.getCurrentDate(), TalendDataGenerator.getFirstName(), and
TalendDataGenerator.getLastName() for the now, firstname, and lastname columns respectively.
8. In the design workspace, select tFileOutputDelimited, click the Component tab for tFileOutputDelimited,
and browse to the output file to set its path in the File name field. Define other properties as needed.
If you press F6 to execute the Job, three rows holding the current date and first and last names will be written
to the specified output file.
2. Click the Browse... button to select a directory to archive your Job in.
3. In the Job Version area, select the version of the Job you want to build as a web service.
4. In the Build type area, select the build type you want to use in your Web application (WAR in this example)
and click Finish. The [Build Job] dialog box disappears.
5. Copy the WAR file and paste it into the Tomcat webapps directory.
The return code from the Web application is 0 when there is no error and 1 if an error occurs.
For a real-life example of creating and building a Job as a Web service using the tBufferOutput component,
see the tBufferOutput component in Talend Components Reference Guide.
1. In the Job Version area, select the version number of the Job you want to build if you have created more
than one version of the Job.
2. In the Build type area, select OSGI Bundle For ESB to build your Job as an OSGI Bundle.
The extension of your build automatically changes to .jar, as this is what the Talend ESB Container expects.
3. If you want to rebuild the built Job into your own JAR with Maven, select the Add maven script check box
in the Options area to include the required Maven script in the target archive, which is a .zip file in this case.
4. Click the Browse... button to specify the folder in which to build your Job.
To do so:
1. In the Repository tree view, select the items you want to export.
2. To select several items at a time, press the Ctrl key and select the relevant items.
If you want to export a database table metadata entry, make sure you select the whole DB connection, and not only
the relevant table, as selecting only the table will prevent the export process from completing correctly.
3. Right-click while holding down the Ctrl key and select Export items from the pop-up menu:
You can select additional items on the tree for export if required.
4. Click Browse to browse to where you want to store the exported items. Alternatively, define the archive file
where to compress the files for all selected items.
If you have several versions of the same item, they will all be exported.
Select the Export Dependencies check box if you want to set and export routine dependencies along with Jobs you
are exporting. By default, all of the user routines are selected. For further information about routines, see What are
routines.
5. Click Finish to close the dialog box and export the items.
If you want to change the context selection, simply edit the .bat/.sh file and change the setting --context=Prod
to the relevant context name.
If you want to change individual parameters in the context selection, edit the .bat/.sh file and add the following
setting according to your need:
Operation Setting
To change value1 for parameter key1 --context_param key1=value1
To change value1 and value2 for respective parameters key1 and key2 --context_param key1=value1 --context_param key2=value2
To change a value containing space characters, such as in a file path --context_param key1="path to file"
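Putting these settings together, a sketch of a complete launch command (JobName_run.sh stands for the generated
launcher of your Job) could be:
    ./JobName_run.sh --context=Prod --context_param key1=value1 --context_param key2="path to file"
Each --context_param option overrides a single parameter of the selected context for that execution only;
parameters you do not override keep the values stored in the context .properties file.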
When you modify any of the parameters of an entry in the Repository tree view, all Jobs using this repository entry
will be impacted by the modification. This is why the system will prompt you to propagate these modifications
to all the Jobs that use the repository entry.
Talend Studio also provides advanced analyzing capabilities, namely impact analysis and data lineage, on
repository items. For more information, see How to analyze repository items.
The following sections explain how to modify the parameters of a repository entry and how to propagate the
modifications to all or some of the Jobs that use the entry in question.
1. Expand the Metadata, or Contexts, or Joblets Designs node in the Repository tree view and browse to the
relevant entry that you need to update.
2. Right-click this entry and select the corresponding edit option in the contextual menu.
A respective wizard displays where you can edit each of the definition steps for the entry parameters.
When you update the entry parameters, you may need to propagate the changes to some or all of the Jobs
that use this entry.
A prompt message pops up automatically at the end of your update/modification process when you click the
Finish button in the wizard.
3. Click Yes to close the message and implement the changes throughout all Jobs impacted by these changes.
For more information about the first way of propagating all your changes, see How to update impacted Jobs
automatically.
Click No if you want to close the message without propagating the changes. This will allow you to propagate
your changes to the impacted Jobs manually, on a one-by-one basis. For more information on this way of
propagating changes, see How to update impacted Jobs manually.
1. In the [Modification] dialog box, click Yes to let the system scan your Repository tree view for the Jobs
that get impacted by the changes you just made. This aims to automatically propagate the update throughout
all your Jobs (open or not) in one click.
The [Update Detection] dialog box displays to list all Jobs impacted by the parameters that are modified.
You can open the [Update Detection] dialog box any time if you right-click the item centralized in the Repository
tree view and select Manage Dependencies from the contextual menu. For more information, see How to update
impacted Jobs manually.
2. If needed, clear the check boxes that correspond to the Jobs you do not wish to update. You can update them
any time later through the Detect Dependencies menu. For more information, see How to update impacted
Jobs manually.
3. Click OK to close the dialog box and update all selected Jobs.
1. In the Repository tree view, expand the node holding the entry for which you want to check the Jobs that use it.
2. Right-click the relevant entry and select Manage Dependencies from the contextual menu.
A progress bar indicates the process of checking for all Jobs that use the modified metadata or context
parameter. Then a dialog box displays to list all Jobs that use the modified item.
3. Select the check boxes corresponding to the Jobs you want to update with the modified metadata or context
parameter and clear those corresponding to the Jobs you do not want to update.
The Jobs that you choose not to update will be switched back to Built-in, as the link to the Repository cannot be maintained.
They will thus keep their settings as they were before the change.
All items on which you want to execute impact analysis or data lineage must be centralized in the Repository tree view under
any of the following nodes: Joblet Designs, Contexts, SQL Templates, Referenced project or Metadata.
Impact analysis also analyzes the data flow in each of the listed Jobs to show all the components and stages the data
flow passes through and the transformation done on data from the source component up to the target component.
Talend Studio also allows you to produce detailed documentation in HTML and XML of the results of the impact
analysis. For more information, see How to export the results of impact analysis/data lineage to HTML and How
to export the results of impact analysis/data lineage to XML.
All items on which you want to execute impact analysis or data lineage must be centralized in the Repository tree view under
any of the following nodes: Joblet Designs, Contexts, SQL Templates, Referenced project or Metadata.
The example below shows an impact analysis done on a database connection item stored under the Metadata
node in the Repository tree view.
To analyze data flow in each of the listed Jobs from the source component up to the target component, complete
the following:
1. In the Repository tree view, expand Metadata and browse to the metadata entry you want to analyze,
employees under the DB connection mysql in this example.
2. Right-click the entry you want to analyze and select Impact Analysis.
A progress bar indicates the process of checking for all Jobs that use the modified metadata parameter. The
[Impact Analysis] view appears in the Studio to list all Jobs that use the selected metadata entry. The names
of the selected database connection and table schema are displayed in the corresponding fields.
You can also open this view if you select Window > Show View > Talend > Impact Analysis.
Select... To...
Open Job open the corresponding Job in the Studio workspace.
Expand/Collapse expand/collapse all the items included in the selected Job.
Thus, you have an outline of the Jobs that use the selected metadata entry.
4. From the Column list, select the column name for which you want to analyze the data flow from the
data source (input component), through various components and stages, to the data destination (output
component), Name in this example.
The Last version check box is selected by default. This option allows you to select the last version of your Job instead
of displaying all versions of your Job in the analysis results.
5. Click Analysis....
A bar displays to indicate the progress of the analysis operation and the analysis results display in the view.
Alternatively, you can directly right-click a particular column in the Repository tree view and select Impact Analysis from
the contextual menu to display the analysis results regarding that column in the [Impact Analysis] view.
The impact analysis results trace the components and transformations the data in the source column Name passes
through before being written in the output column Name.
Talend Studio also allows you to produce detailed documentation in HTML and XML of the results of the data
lineage. For more information, see How to export the results of impact analysis/data lineage to HTML and How
to export the results of impact analysis/data lineage to XML.
All items on which you want to execute impact analysis or data lineage must be centralized in the Repository tree view under
any of the following nodes: Joblet Designs, Contexts, SQL Templates, Referenced project or Metadata.
The example below shows the data lineage made on a database connection item stored under the Metadata node
in the Repository tree view.
1. In the Repository tree view, expand Metadata > Db Connection and then expand the database connection
you want to analyze, mysql in this example.
2. Right-click the centralized table schema whose data flow life cycle you want to analyze,
employees in this example.
The Impact Analysis view displays the Jobs that use the selected table schema. The names of the selected
database connection and table schema are displayed in the corresponding fields.
3. From the Column list, select the column name for which you want to analyze the data flow from the
data destination (output component), through various components and stages, to the data source (input
component). The column to be analyzed in this example is called Name.
You can skip this step by right-clicking the column Name in the Repository tree view and selecting Impact
Analysis from the contextual menu.
4. Click Analysis....
A bar appears to indicate the progress of the analysis operation and the analysis results are displayed in the view.
5. Right-click a listed Job and select Open Job from the contextual menu.
The data lineage results trace backward the components and transformations the data in the output column
Name passes through before being written in this column.
To generate an HTML document of an impact analysis or data lineage with customization, complete the following:
1. After you analyze a given repository item as outlined in Impact analysis or Data lineage and in the Impact
Analysis view, click the Export to HTML button.
2. Enter the path to where you want to store the generated documentation archive or browse to the desired
location and then give a name for this HTML archive.
3. Select the Custom CSS template to export check box to activate the CSS File field if you need to use your
own CSS file to customize the exported HTML files. The destination folder for HTML will contain the HTML
file, a CSS file, an XML file and a pictures folder.
4. Click Finish to validate the operation and close the dialog box.
An archive file that contains all required files along with the HTML output file is created in the specified path.
5. Double-click the HTML file in the generated archive to open it in your favorite browser.
You can also set CSS customization as a preference for exporting HTML. To do this, see Documentation preferences
(Talend > Documentation).
The archive file gathers all generated documents including the HTML that gives a description of the project that
holds the analyzed Jobs in addition to a preview of the analysis graphical results.
To generate an XML document of the results of impact analysis or data lineage on the selected repository item,
complete the following:
1. After you analyze a given repository item as outlined in Impact analysis or Data lineage and in the Impact
Analysis view, click the Export to XML button.
2. Enter the path to where you want to store the generated XML document or browse to the desired location
and then give a name for this XML file.
3. Select the Overwrite existing files without warning check box to suppress the warning message if the
specified filename already exists.
4. Click Finish to validate the operation and close the dialog box.
An XML file that contains the impact analysis or data lineage information is created in the specified path.
The figure below illustrates an example of a generated XML file, opened in a text editor.
1. On the Talend Studio toolbar, click the corresponding icon to open the [Find a Job] dialog box, which
automatically lists all the Jobs you created in the current Studio.
2. Enter the Job name or part of the Job name in the upper field.
When you start typing your text in the field, the Job list is updated automatically to display only the Job(s)
whose name(s) match the letters you typed.
3. Select the desired Job from the list and click Link Repository to automatically browse to the selected Job
in the Repository tree view.
4. If needed, click Cancel to close the dialog box and then right-click the selected Job in the Repository tree
view to perform any of the available operations in the contextual menu.
Otherwise, click OK to close the dialog box and open the selected Job on the design workspace.
You can create as many versions of the same Job as you want. To do that:
1. Close your Job if it is open on the design workspace. Otherwise, its properties will be read-only and thus
you cannot modify them.
2. In the Repository tree view, right-click your Job and select Edit properties from the drop-down list to open
the [Edit properties] dialog box.
3. Next to the Version field, click the M button to increment the major version and the m button to increment
the minor version.
By default, when you open a Job, you open its last version.
Any previous version of the Job is read-only and thus cannot be modified.
1. Close your Job if it is open on the design workspace. Otherwise, its properties will be read-only and thus
you cannot modify them.
2. In the Repository tree view, right-click your Job and select Open another version from the drop-down list.
3. In the dialog box, select the Create new version and open it check box and click the M button to increment
the major version and the m button to increment the minor version.
4. Click Finish to validate the modification and open this new version of your Job.
You can also save your currently active Job and increment its version at the same time, by clicking File > Save
As... and setting a new version in the [Save As] dialog box.
If you give your Job a new name, this option does not overwrite your current Job; instead, it saves your Job as a new
one with the same version as the current Job, or with a new version if you specify one.
You can access a list of the different versions of a Job and perform certain operations. To do that:
1. In the Repository tree view, select the Job you want to consult the versions of.
2. On the configuration tabs panel, click the Job tab and then click Version to display the version list of the
selected Job.
Select... To...
Edit Job open the last version of the Job.
This option is available only when you select the last version of the Job.
Note: The Job should not be open on the design workspace; otherwise it will be in
read-only mode.
You can also manage the version of several Jobs and/or metadata at the same time, as well as Jobs and their
dependencies and/or child Jobs from the Project Settings. For more information, see Version management.
The generated documentation provides:
• The properties of the project where the selected Jobs have been created,
• The properties and settings of the selected Jobs along with preview pictures of each of the Jobs,
• The list of all the components used in each of the selected Jobs and component parameters.
1. In the Repository tree view, right-click a Job entry or select several items to produce multiple
documentations.
2. Select Generate Doc As HTML from the contextual menu.
3. Browse to the location where the generated documentation archive should be stored.
4. In the same field, type in a name for the archive gathering all generated documents.
5. Select the Use CSS file as a template to export check box to activate the CSS File field if you need to
use a CSS file.
6. In the CSS File field, browse to, or enter the path to the CSS file to be used.
The archive file is generated in the defined path. It contains all required files along with the HTML output file. You
can open the HTML file in your favorite browser.
Now every time a Job is created, saved or updated, the related documentation is generated.
The generated documents are shown directly in the Documentation folder of the Repository tree view.
1. In the Repository tree view, expand the Documentation node.
2. Then look into the Generated folder where all Jobs and Joblets auto-documentation is stored.
3. Double-click the relevant Job or Joblet label to open the corresponding Html file as a new tab in the design
workspace.
This documentation gathers all information related to the Job or Joblet. You can then export the documentation
in an archive file as HTML and PDF:
1. In the Repository tree view, right-click the relevant documentation you want to export.
The archive file contains all the files needed for the HTML to be viewed in any web browser.
In addition, you can customize the autogenerated documentation using your own logo and company name with
different CSS (Cascading Style Sheets) styles. The destination folder for HTML will contain the html file, a css
file, an xml file and a pictures folder. To do so:
2. In the User Doc Logo field, browse to the image file of your company logo in order to use it on all auto-
generated documentation.
4. Select the Use CSS file as a template when export to HTML check box to activate the CSS File field if
you need to use a CSS file.
5. In the CSS File field, browse to, or enter the path to the CSS file to be used.
To update a single document, right-click the relevant Job or Joblet generated documentation entry and select
Update documentation.
The autogenerated documentation is saved every time you close your Job or Joblet, but you can:
• Update all generated documents in one go: Right-click the Generated folder and select Update all projects
documentation.
• Update only all Jobs' generated documents in one go: Right-click the Jobs folder and select Update all jobs'
documentation.
• Update all Joblets' generated documents in one go: Right-click the Joblets folder and select Update all joblet
documentation.
For more information about how to establish a connection between projects, see the section on managing
references in Talend Administration Center.
All Jobs that are part of the added referenced project(s) will by default show under the relevant folder in the Repository
tree view, but you can change their display mode to show them as a separate tree. For more information, see How
to set the display mode of referenced projects.
Names of the items in the referenced projects appear unavailable (grayed out) and are followed by the name of
the referenced project they are part of, to distinguish them from item names in the open project.
You can open a read-only copy of the referenced item by simply double clicking it in the tree view.
As you can see in the above figure, all resources in Project1 and Project3 are accessible directly from Project2. The
resources in the two referenced projects are in read-only mode: they are available for reuse but cannot be modified.
If you right-click a Job of a referenced project in the tree view, you display a drop-down list where you can select
the type of action you want to carry out on the referenced project.
Option Description
Read job Open the Job in read-only mode.
Open another version Choose another version, if any, for the Job in the referenced project and open it in read-only mode.
Open job hierarchy Consult the Job hierarchy.
Edit properties Open the [Edit properties] dialog box in read-only mode.
View documentation Open detailed documentation of the selected Job in the referenced Project. This documentation offers:
description of the referenced project, Job and Job settings, Job preview, the list of all components used and
the list of the contexts used along with their values.
Impact Analysis Run an impact analysis on the Job.
Copy Make a copy of the selected Job in the referenced project.
Run job Execute the selected Job in the referenced project.
Build job Deploy and execute the selected Job in the referenced project on any server, regardless of Talend Studio.
Export items Export repository items of the selected Job in the referenced project to an archive file, for deploying outside
Talend Studio.
Items and resources in the referenced projects will always be up-to-date since the refresh option in Talend Studio will refresh
EVERY item in the studio. However, the refresh operation could be quite long if you have too many references and items
in each of your projects.
It is then preferable not to use very big referenced projects, especially if you use the database repository.
To display referenced Jobs together with other Jobs in the Job Designs folder, click the display-mode icon on the toolbar.
To display referenced projects in a separate folder in the tree view, click the icon again.
• compare the same Job in different releases of the Studio, in order to see if any modifications were done on the
Job in the previous/current release, for example,
• compare Jobs that have been designed using the same template, but different parameters, to check the differences
among these Jobs.
Differences between the compared Jobs are displayed in the Compare Result view. The result details are grouped
under three categories: Jobsettings, Components and Connectors.
The table below gives the description of the comparison results under each of the above categories.
Category Description
Jobsettings lists all differences related to the settings of the compared Job.
Components lists the differences in the components and component parameters used in the two Jobs. A
minus sign appended on top of a component listed in the Compare Result view indicates
that this component is missing in the design of one of the two compared Jobs. A plus sign
appended on top of a component listed in the view indicates that this component is added in
one of the two compared Jobs. All differences in the component parameters will be listed in
tables that display under the corresponding component.
Connectors lists differences in all the links used to connect components in the two Jobs.
The procedure for comparing two Jobs or two different versions of the same Job is the same.
To compare two different versions of the same Job, complete the following:
1. In the Repository tree view, right-click the Job version you want to compare with another version of the
same Job and then select Compare Job from the contextual menu.
The Compare Result view displays in the Studio workspace. The selected Job name and version show, by
default, in the corresponding fields.
2. If the other version of the Job with which you want to compare the current version is on another branch,
select the branch from the Another Branch list.
3. Click the three-dot button next to the Another job field to open the [Select a Job/Joblet] dialog box.
4. In the Name Filter field, type in the name of the Job or Joblet you want to use for this comparison. The dialog
box returns the Job or Joblet you are searching for.
5. Select the returned Job or Joblet from the list in the dialog box and click OK.
6. From the Current version and Another version lists select the Job versions you want to compare.
7. Click the corresponding button to launch the compare operation.
The two indicated versions of the Job display in the design workspace.
The differences between the two versions are listed in the Compare Result view.
In this example, differences between the two Job versions are related to components and links (connectors). The
figure below shows the differences in the components used in the two versions.
For example, there is one difference in the output schemas used in the tMap and tFileOutputXML components:
the length of the Revenue column is 15 in the second version of the Job while the length is 11 in the first version
of the same Job. The minus sign appended on top of tMysqlOutput indicates that this component is missing in
the design of one of the two compared Jobs. The plus sign appended on top of tOracleOutput indicates that this
component is added in one of the two compared Jobs.
If you click any of the components listed in the Compare Result view, the component will be automatically selected, and
thus identified, in the open Job in the design workspace.
The figure below shows the differences in the links used to link the components in the two versions of the same Job.
In this example, there is one difference related to the reject link used in the two versions: the target of this link in
the first version is a tMysqlOutput component, while it is a tOracleOutput component in the second version.
You can export the Job compare results to an HTML file by clicking Export to html. Then browse to the directory you
want to save the file in and enter a file name. You have the option of using a default CSS template or a customized one.
The destination folder will contain the HTML file, a CSS file, an XML file and a pictures folder. For a related topic, see
How to export the results of impact analysis/data lineage to HTML.
2. Click the Basic Run tab to access the normal execution mode.
3. In the Context area to the right of the view, select from the list the proper context for the Job to be executed
in. You can also check the variable values.
If you have not defined any particular execution context, the context parameter table is empty and the context is
the default one. Related topic: Using contexts and variables.
2. On the same view, the console displays the progress of the execution. The log includes any error messages
as well as start and end messages. It also shows the Job output if a tLogRow component is used
in the Job design.
3. To define the lines of the execution progress to be displayed in the console, select the Line limit check box
and type in a value in the field.
4. Select the Wrap check box to wrap the text to fit the console width. This check box is selected by default.
When it is cleared, a horizontal scrollbar appears, allowing you to view the end of the lines.
Before running a Job again, you might want to remove the execution statistics and traces from the design
workspace. To do so, click the Clear button.
If for any reason you want to stop the Job in progress, simply click the Kill button. You will need to click the
Run button again to start the Job again.
Talend Studio offers various informative features displayed during execution, such as statistics and traces,
facilitating the Job monitoring and debugging work. For more information, see the following sections.
2. Click the Debug Run tab to access the debug execution modes.
Before running your Job in Debug mode, add breakpoints to the major steps of your Job flow.
This will allow you to get the Job to automatically stop at each breakpoint. This way, components and their
respective variables can be verified individually and debugged if required.
To add breakpoints to a component, right-click it on the design workspace, and select Add breakpoint on the
contextual menu.
A pause icon displays next to the component where the break is added.
To switch to debug mode, click the Java Debug button on the Debug Run tab of the Run panel. Talend Studio's
main window gets reorganized for debugging.
You can then run the Job step by step and check each breakpoint component for the expected behavior and variable
values.
To switch back to Talend Studio designer mode, click Window, then Perspective and select Integration.
The Traces mode provides a row by row view of the component behavior and displays the dynamic result next to
the Row link on the design workspace.
This feature allows you to monitor all the components of a Job, without switching to the debug mode, hence
without requiring advanced Java knowledge.
Exception is made for external components which cannot offer this feature if their design does not include it.
You can activate or deactivate Traces or decide what processed columns to display in the traces table that
displays on the design workspace when launching the current Job. You can either choose to monitor the whole data
processing, or monitor the data processing row by row or at a certain breakpoint. For more information about the
row by row execution of the Traces mode, see Row by row monitoring. For more information about the breakpoint
usage with the Traces mode, see Breakpoint monitoring.
2. Click the Debug Run tab to access the debug and traces execution modes.
3. Click the down arrow of the Java Debug button and select the Traces Debug option. An icon displays under
every flow of your Job to indicate that process monitoring is activated.
To deactivate Traces on a particular flow:
1. Right-click the Traces icon for the relevant flow.
2. Select Disable Traces from the list. A red minus sign replaces the green plus sign on the icon to indicate that
the Traces mode has been deactivated for this flow.
To choose which columns of the processed data to display in the traces table, do the following:
1. Right-click the Traces icon for the relevant flow, then select Setup Traces from the list. The [Setup Traces]
dialog box appears.
2. In the dialog box, clear the check boxes corresponding to the columns you do not want to display in the
Traces table.
Monitoring data processing starts when you execute the Job and stops at the end of the execution.
To remove the displayed monitoring information, click the Clear button in the Debug Run tab.
To manually monitor the data processing of your Job row by row, simply click the Next Row button; the
processed rows will display below the corresponding links on the design workspace.
You can go back to previous rows by clicking the Previous Row button, within a limit of five rows back.
If, for any reason, you want to stop the Job in progress, simply click the Kill button; if you want to execute the
Job to the end, click the Basic Run button.
To remove the displayed monitoring information from the design workspace, click the Clear button in the Debug
Run tab.
You can monitor data processing the same way from inside the tMap editor. For further information, see Previewing data.
Before monitoring your data processing at certain breakpoints, you need to add breakpoints to the relevant Job
flow(s).
This will allow you to automatically stop the Job at each defined breakpoint. This way, components and their
respective variables can be verified individually and debugged if required.
1. Right-click the relevant flow on the design workspace, and select Show Breakpoint Setup on the popup menu.
2. On the Breakpoint view, select the Activate conditional breakpoint check box and set the Conditions in
the table.
A pause icon displays below the link on which the break is added when you access the Traces mode.
Once the breakpoints are defined, switch to the Traces mode. To do so:
2. Click the down arrow of the Java Debug button and select the Traces Debug option.
3. Click the Traces Debug button to execute the Job in Traces mode. The data will be processed until the first
defined breakpoint.
4. Click the Next Breakpoint button to continue the data process until the next breakpoint.
If, for any reason, you want to stop the Job in progress, simply click the Kill button; if you want to execute
the Job to the end, click the Basic Run button.
To remove the displayed monitoring information from the design workspace, click the Clear button in the
Debug Run tab.
• Statistics, this feature displays processing performance rate. For more information, see How to display Statistics.
• Exec time, this feature displays the execution time in the console at the end of the execution. For more
information, see How to display the execution time and other options.
• Save Job before execution, this feature allows you to automatically save the Job before its execution.
• Clear before run, this feature clears all the results of a previous execution before re-executing the Job.
• log4jLevel, this feature allows you to change the output level at runtime for log4j loggers activated in
components in the Job. For more information, see How to customize log4j output level at runtime.
• JVM Setting, this feature allows you to define the parameters of your JVM according to your needs. For an
example of how this can be used, see How to specify the number of MB used in each streaming chunk by Talend
Data Mapper.
It shows the number of rows processed and the processing time in rows per second, allowing you to spot straight
away any bottleneck in the data processing flow.
For trigger links like OnComponentOK, OnComponentError, OnSubjobOK, OnSubjobError and If, the
Statistics option displays the state of this trigger during the execution time of your Job: Ok or Error and True
or False.
Exception is made for external components which cannot offer this feature if their design does not include it.
In the Run view, click the Advanced settings tab and select the Statistics check box to activate the Stats feature
and clear the box to disable it.
The calculation only starts when the Job execution is launched, and stops at the end of it.
Click the Clear button from the Basic or Debug Run views to remove the calculated stats displayed. Select the
Clear before Run check box to reset the Stats feature before each execution.
The statistics thread slows down Job execution as the Job must send these stats data to the design workspace in order to
be displayed.
You can also save your Job before the execution starts by selecting the relevant option check box. This way you
can test your Job before going to production.
You can also clear the design workspace before each Job execution by selecting the Clear before Run check box.
2. Click the New button and then, in the [Set the VM Argument] dialog box that opens, enter the argument
to use.
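For example (the values below are illustrative only and should be tuned to your environment), typical arguments
set the initial and maximum heap size of the JVM running the Job:
    -Xms256M
    -Xmx1024M
Enter each argument as a separate entry in the Argument table.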
You can change the logging output level for an execution of your Job. To do so, take the following steps:
2. Select the log4jLevel check box, and select the desired output level from the drop-down list.
This check box is displayed only when log4j is activated in components. For more information, see Log4j
settings.
For more information on the logging output levels, see the Apache documentation at
http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/Level.html.
3. Run your Job. All the logging messages at or above the level you set are output to the defined target.
For information on how to activate log4j in components and how to customize log4j configuration, see Log4j
settings.
For more information regarding the components with which you can use the log4j feature, see
https://help.talend.com/display/KB/List+of+components+that+support+the+log4j+feature.
You can click Run on the Memory Run tab to monitor the JVM resource usage by your Job at any time
even after you launch your Job from the Basic Run tab.
The Studio console displays curve graphs showing the JVM heap usage and CPU usage respectively during
the Job execution. Warning messages are shown in red on the Job execution information area when the
relevant thresholds are reached.
3. To view the information about resources used at a certain point of time during the Job execution, move the
mouse onto that point of time on the relevant graph. Depending on the graph on which you move your mouse
pointer, you can see the information about allocated heap size, the 90% heap threshold, and the 70% heap
threshold, or the CPU usage, at the point of time.
4. To run the Garbage Collector at a particular interval, select the With Garbage Collector pace set to check
box and select an interval in seconds. The Garbage Collector automatically runs at the specified interval.
To run the Garbage Collector once immediately, click the Trigger GC button.
5. To export the log information into a text file, click the Export button and select a file to save the log.
2. In the JVM settings area of the tab view, select the Use specific JVM arguments check box to activate
the Argument table.
3. Next to the Argument table, click the New... button to open the [Set the VM argument] dialog box.
This argument can be applied for all of your Job executions in Talend Studio. For further information about
how to apply this JVM argument for all of the Job executions, see Debug and Job execution preferences
(Talend > Run/Debug).
You can also use the CommandLine to transfer your Jobs from your Talend Studio to a remote JobServer for Job
execution. To use the CommandLine, make sure you have logged on to a remote project via a remote connection.
To run a Job on a remote JobServer and via a remote CommandLine, make sure that:
• you have set the remote JobServer and/or CommandLine details in the Preferences > Talend > Run/Debug >
Remote window of Talend Studio, as described in Distant run configuration (Talend > Run/Debug).
4. If the Enable Commandline server option is selected in the Preferences > Talend > Run/Debug > Remote
window, the Commandline Server list appears. From this list, select the relevant remote CommandLine
server.
5. Click the Run button of the Basic Run tab, as usual, to connect to the server and deploy and then execute
the current Job in one go.
If you get a connection error, check that the agent is running, the ports are available and the server IP address is correct.
You can also execute your Job on the specified JobServer by clicking the Run button of the Memory Run
tab if you want to monitor JVM resource usage during the Job execution.
Prerequisites:
• SSL is enabled in the JobServer configuration file conf/TalendJobServer.properties. For more information, see
the Talend Installation Guide.
To configure the remote server with SSL support on the Studio side:
1. From the menu bar, click Window > Preferences to open the [Preferences] dialog box.
2. Expand the Talend and the Run/Debug nodes in succession and then click Remote.
3. In the Remote Jobs Servers area, click the [+] button to add a new line in the table.
4. Fill in all fields as configured for the Job execution server: Name, Host name (or IP address), Standard
port, Username, Password, and File transfer Port. The Username and Password fields are not required
if you have not configured users into the configuration file of the JobServer. For more information about Job
execution server configuration, see Talend Installation Guide.
For more information on how to add a remote CommandLine, see Distant run configuration (Talend > Run/Debug).
Once these operations are complete, you have to select the processing remote server in the Run console. To do so:
3. From the list, select the remote server you have just created.
4. Click the Run button of the Basic Run tab, as usual, to connect to the server and deploy and then execute
the current Job in one go with SSL enabled.
You can also execute your Job on the specified JobServer by clicking the Run button of the Memory Run
tab if you want to monitor JVM resource usage during the Job execution. For more information on how to
enable resource usage monitoring, see Distant run configuration (Talend > Run/Debug).
If you get a connection error, check that the agent is running, the ports are available and the server IP address is correct.
Check also that SSL is configured at Studio and JobServer sides.
To access the Job Conductor and schedule the execution of your Jobs, open your preferred browser and connect
to Talend Administration Center. For more information regarding the Job Conductor and scheduling operation,
see Talend Administration Center User Guide.
1. On the menu bar, click Window > Preferences to open the [Preferences] dialog box.
2. Expand the Talend > Import/Export nodes in succession and select SpagoBI Server to display the relevant
view.
3. Select the Enable/Disable Deploy on SpagoBI check box to activate the deployment operation.
4. Click New to open the [Create new SpagoBi server] dialog box and add a new server to the list.
Field Description
Engine Name Internal engine name used in Talend Studio. This name is not used in the generated code.
Short description Free text to describe the server entry you are recording.
Host IP address or host name of the machine running the SpagoBI server.
Login User name required to log on to the SpagoBI server.
Password Password for SpagoBI server logon authentication.
6. Click OK to validate the details of the new server entry and close the dialog box.
The newly created entry is added to the table of available servers. You can add as many SpagoBI entries
as you need.
If a server entry needs to be updated, simply create a new entry including the updated details.
1. In the Repository tree view, expand Job Designs and right-click the Job to deploy.
3. As for any Job export, enter a name for the Job archive that will be created in the To archive file field.
5. The Label, Name and Description fields come from the Job main properties.
The Jobs are now deployed onto the relevant SpagoBI server. Open your SpagoBI administrator to execute your
Jobs.
With Talend Studio, you can set checkpoints in your Job design at specified intervals (On Subjob Ok and On
Subjob Error connections), dividing the data flow into bulks.
With Talend Administration Center, and in case of failure during Job execution, the execution process can be
restarted from the latest checkpoint previous to the failure rather than from the beginning.
1. Define checkpoints manually on one or more of the trigger connections you use in the Job you design in Talend
Studio.
For more information on how to initiate recovery checkpoints, see How to set checkpoints on trigger
connections.
2. In case of failure during the execution of the designed Job, recover Job execution from the latest checkpoint
previous to the failure through the Error recovery Management page in Talend Administration Center.
This section describes how to create, set up, and execute a test case based on the Job example elaborated in tMap
Job example.
Prerequisites: Before creating a test case for a Job, make sure all the components of your Job have been
configured.
1. Open the Job for which you want to create a test case.
2. Right-click the functional part of the Job you want to test, which is the tMap component in this example,
and select Create Test Case from the contextual menu.
3. In the [Create Test Case] dialog box, enter a name for the test case in the Name field, and the optional
information, if needed, such as purpose and description in the corresponding fields.
4. Select the Create a Test Skeleton check box so that the components required for the test case to work are
automatically added, and click Finish.
If you clear this check box, you will need to complete the test case by adding components of your choice manually.
The test case is then created and opened in the design workspace, with all the required components
automatically added. In the Repository tree view, the newly created test case appears under your Job.
• one or more tFileInputDelimited components, depending on the number of input flows in the Job, to load
the input file(s),
• one or more tCreateTemporaryFile components, depending on the number of output flows in the Job, to
create one or more temporary files to hold the output data,
• one or more tFileOutputDelimited components, depending on the number of output flows in the Job, to
write data from the output flow(s) to the previously created temporary file(s),
• one or more tFileCompare components, depending on the number of output flows in the Job, to compare
the temporary output file(s) with the reference file(s). The test is considered successful if the compared
pair of files are identical.
• one or more tAssert components, depending on the number of output flows in the Job, to provide an alert
message if the compared pair of files are different, indicating a failure of the test.
In addition, depending on the number of input and output flows, a variable number of context variables are
automatically created to specify the input and reference files.
• read the source data for the testing from two input files, one main input and one lookup input,
• process the data in the tMap component, which is the part under test,
• compare the temporary output file with a reference file, which contains the expected result of data
processing.
Upon creation, a test case has one test instance named Default. You can add as many instances as you need to run
the same test case with different sets of data files. From the Test Cases view, you can run an instance individually
or run all the instances of the test case at the same time. To add a test instance, do the following:
1. From the Repository tree view, select the test case or the Job for which you created the test case and go to
the Test Cases view.
If you have created more than one test case for a Job, when you select the Job from the Repository tree view, all its
test cases are displayed in the Test Cases view.
2. On the left panel of the Test Cases view, right-click the test case you want to set up, and select Add Instance
from the contextual menu.
The newly created test instance appears under the test case name node.
You can remove the instance, add test data to all existing instances, or run the instance by right-clicking the
instance and selecting the relevant item from the contextual menu. You can also remove a test data item by
right-clicking it and selecting Remove TestData from the context menu.
Note that if you remove a test data item from an instance, this item is also removed from all the other instances.
4. Specify a new context for the newly created test instance. For more information, see the procedure below.
Before you can run the test case or its instances, you need to specify the input and reference files in the Contexts
view and/or define embedded data sets in the Test Cases view.
By default, the required variables have been created under the context named Default. You can define as
many contexts as you need to test your Job for different environments or using different test instances. For
more information on how to define contexts and variables, see Using contexts and variables.
2. Click in the Value field of the variable for the file you want to specify, click the [...] button, browse to your
file in the [Open] dialog box, and double-click it to specify the file path for the variable.
3. In the Test Cases view, click each test instance on the left panel and select the related context from the
context list box on the right panel.
4. Expand each test instance to show the test data, click each test data item on the left panel and check the
context variable mapped to the data set. If needed, select the desired variable from the Context Value list
box on the right panel.
2. Select the data file to be defined from the left panel, click the File Browse button from the right panel, browse
to your file in the [Open] dialog box, and double-click it to load the file to the repository.
Once a data file is loaded, the warning sign on the data set icon disappears, the text field at the lower part of
the right panel displays the content of the loaded file, and the test case will use the data from the repository
rather than from the local file system.
While you can run a test case from the Run view for debugging purposes, the standard way to run test cases is from the
Test Cases view. The Test Cases view allows you to add instances for the same test case and execute all of them
simultaneously, and to view the test case execution history.
2. If you have defined different contexts for your test case, select the desired context for the test from the
Context list.
3. Click Run on the Basic Run vertical tab to run the test case like any other Job, or debug it on the Debug
Run vertical tab view.
The Run console shows whether the compared files are identical.
2. Right-click the test case name on the left panel and select Run TestCase from the contextual menu.
All the instances of the test case are executed at the same time. The right panel displays the test case execution
results, including execution history information.
To view the execution results of a test instance including the execution history, or the details of a particular
execution, click the corresponding [+] button.
From the Test Cases view, you can also run a test instance individually. To do so, right-click the test instance on
the left panel and select Run Instance from the contextual menu.
To run a test case or all test cases of a Job from the Repository tree view:
1. To run a particular test case, right-click the test case in the Repository tree view, and select Run TestCase
from the contextual menu.
To run all the test cases of a Job, right-click the Job and select Run All TestCases from the contextual menu.
2. When the execution is complete, go to the Test Cases view to check the execution result.
All the instances of the test case(s) are executed at the same time. The left panel displays the selected test case
or all the test cases of the selected Job, and the right panel displays the test case execution results, including
execution history information.
• Select the Job to display all its test cases in the Test Cases view.
• Expand the Job and select the test case of interest to show it in the Test Cases view.
• Expand the Job and double-click the test case of interest to open it in the design workspace.
• Expand the Job and right-click the test case of interest to open, run, open a read-only copy of, rename, or delete it.
• When importing the Job, you can selectively import one or more of its test cases together with the Job. However,
you cannot export a test case without exporting the Job it was created for.
• When building a Job that has test cases, you can select whether to execute the test cases created for it upon
the Job build process.
• When working collaboratively, you can lock and unlock a test case independently of the Job for which it was
created.
For more information on importing, exporting and building a Job, see Importing/exporting items and building Jobs.
For more information on working collaboratively on project items, see Working collaboratively on project items.
SVN only:
When you open a Job/Joblet on the trunk or on any of the SVN branches or tags, the Job/Joblet title in the design workspace
will show the Job version and the SVN revision number.
Before being able to use this version control system, the Administrator must create the branches for a specific
project from Talend Administration Center. For more information about how to create branches in a project, see
Talend Administration Center User Guide.
Once the Administrator has created one or more branches for a specific project from Talend Administration Center,
you can, in the same project, copy a Job from the trunk to any of the created branches or vice versa, and copy a
Job from one branch to another.
Because a tag is a read-only copy of a project, you can copy a Job from a tag to the trunk or a branch but not vice versa.
2. In the Repository tree view, expand Job Designs and right-click the Job you want to copy to one of the
created branches.
The [Copy to branch] dialog box appears. This dialog box lists all the Repository items, with the check
box for the source Job selected, and all the existing branches of the project.
In this dialog box, you can select more than one Job to copy.
4. Select the Job dependencies you want to carry over with the Job, or select the Select dependencies check
box to get all the required dependencies automatically selected.
5. Expand branches, select the branch to which you want to copy the Job and then click OK.
The selected Job and Job dependencies are copied to the selected branch, with the same Job folder structure
automatically created on the target branch.
If the Job you want to copy onto a branch already exists in that branch, a dialog box appears. In this dialog box,
you can select either of the following two options:
Option Description
Over Write Replaces the old Job with the new one.
Compare job Opens the Compare Result view in the Studio. This view lists the differences in the items
used in the two Jobs. For more information on this view and the information presented in it,
see Comparing Jobs.
You can follow the same procedure to copy a Job from any of the existing branches or tags to the trunk.
Once a project item is changed, a > symbol appears in front of it in the Repository tree view.
At any point while you are working on a tag, you can discard all the changes you made to a particular project item
since the tag creation by reverting the item to its initial state, without affecting the changes to other project items.
1. In the Repository tree view, right-click the item and select Revert from the contextual menu.
If you revert a project item created on the tag, the whole item will be deleted.
This chapter explains the theory behind how those mapping components can be used, taking as examples the
typical ones, which you can refer to for the use of the other mapping components. For further information or
scenarios and use cases about the mapping components, see Talend Components Reference Guide.
You can minimize and restore the Map Editor and all tables in the Map Editor using the window icons.
This figure presents the interface of tMap. The interfaces of the other mapping components differ slightly in appearance.
For example, in addition to the Schema editor and the Expression editor tabs on the lower part of this interface,
tXMLMap has a third tab called Tree schema editor. For further information about tXMLMap, see tXMLMap
operation.
• The Input panel is the top left panel on the editor. It offers a graphical representation of all (main and lookup)
incoming data flows. The data are gathered in various columns of input tables. Note that the table name reflects
the main or lookup row from the Job design on the design workspace.
• The Variable panel is the central panel in the Map Editor. It allows you to centralize redundant information
by mapping it to variables and to carry out transformations.
• The Search panel is above the Variable panel. It allows you to search the editor for columns or expressions
that contain the text you enter in the Find field.
• The Output panel is the top right panel on the editor. It allows mapping data and fields from Input tables and
Variables to the appropriate Output rows.
• Both bottom panels are the Input and Output schemas description. The Schema editor tab offers a schema view
of all columns of input and output tables in selection in their respective panel.
• The Expression editor is the editing tool for all expression keys of Input/Output data, variable expressions and
filtering conditions.
The name of input/output tables in the Map Editor reflects the name of the incoming and outgoing flows (row
connections).
The following sections present separately the different mapping components, each of which is able to map flows
of a specific nature.
• data rejecting.
As all these operations of transformation and/or routing are carried out by tMap, this component cannot be a start
or end component in the Job design.
tMap uses incoming connections to pre-fill input schemas with data in the Map Editor. Therefore, you cannot
create new input schemas directly in the Map Editor. Instead, you need to implement as many Row connections
incoming to the tMap component as required, in order to create as many input schemas as needed.
In the same way, create as many output row connections as required. However, you can fill in the output with content
directly in the Map Editor through a convenient graphical editor.
Note that there can be only one Main incoming row. All other incoming rows are of the Lookup type. Related topic:
Row connection.
Lookup rows are incoming connections from secondary (or reference) flows of data. These reference data might
depend directly or indirectly on the primary flow. This dependency relationship is translated with a graphical
mapping and the creation of an expression key.
The Map Editor requires the connections to be implemented in your Job in order to be able to define the input
and output flows in the Map Editor. You also need to create the actual mapping in your Job in order to display
the Map Editor in the Preview area of the Basic settings view of the tMap component.
To open the Map Editor in a new window, double-click the tMap icon in the design workspace or click the three-
dot button next to the Map Editor in the Basic settings view of the tMap component.
The following sections give the information necessary to use the tMap component in any of your Job designs.
For this priority reason, you are not allowed to move up or down the Main flow table. This ensures that no Join
can be lost.
Although you can use the up and down arrows to interchange Lookup tables order, be aware that the Joins between
two lookup tables may then be lost.
For more information about setting a component schema, see How to define component properties.
For more information about setting an input schema in the Map Editor, see Setting schemas in the Map Editor.
The Main Row connection determines the Main flow table content. This input flow is reflected in the first table
of the Map Editor's Input panel.
The Lookup connections' content fills in all other (secondary or subordinate) tables, which display below the
Main flow table. If you have not defined the schema of an input component yet, the input table displays as empty
in the Input area.
The key is also retrieved from the schema defined in the Input component. This Key corresponds to the key defined
in the input schema where relevant. It has to be distinguished from the hash key that is internally used in the Map
Editor, which displays in a different color.
Variables
You can use global or context variables or reuse the variable defined in the Variables area. Press Ctrl+Space bar
to access the list of variables. This list gathers together global, context and mapping variables.
The list of variables changes according to the context and grows as new variables are created. Only valid mappable
variables in the context show on the list.
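As an illustration only, the expressions reachable through this list look like the following (the names used here are hypothetical):

  context.outputDir                                // a context variable
  globalMap.get("tFileList_1_CURRENT_FILEPATH")    // a global variable
  Var.fullName                                     // a variable defined in the Var table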
Docked at the Variable list, a metadata tip box displays to provide information about the selected column.
Simply drop column names from one table to a subordinate one to create a Join relationship between the two
tables. This way, you can retrieve and process data from multiple inputs.
The join displays graphically as a purple link and automatically creates a key that will be used as a hash key to
speed up the match search.
You can create direct joins between the main table and lookup tables. You can also create indirect joins from
the main table to a lookup table, via another lookup table. This requires a direct join between one of the Lookup
tables and the Main one.
You cannot create a Join from a subordinate table towards a superior table in the Input area.
The Expression key field which is filled in with the dragged and dropped data is editable in the input schema,
whereas the column name can only be changed from the Schema editor panel.
You can either insert the dragged data into a new entry or replace the existing entries or else concatenate all
selected data into one cell.
For further information about possible types of drag and drops, see Mapping the Output setting .
If you have a large number of input tables, you can use the minimize/maximize icon to reduce or restore the table size in the
Input area. The Join binding two tables remains visible even when the table is minimized.
Creating a Join automatically assigns a hash key onto the joined field name. The key symbol displays in violet on
the input table itself and is removed when the Join between the two tables is removed.
Related topics:
Along with the explicit Join, you can select whether to filter down to a unique match or allow several matches
to be taken into account. In the latter case, you can choose to consider only the first match, only the last
match, or all of them.
1. Click the tMap settings button at the top of the table to which the Join links to display the table properties.
2. Click in the Value field corresponding to Match Model and then click the three-dot button that appears to
open the [Options] dialog box.
3. In the [Options] dialog box, double-click the wanted match model, or select it and click OK to validate the
setting and close the dialog box.
Unique Match
This is the default selection when you implement an explicit Join. This means that only the last match from the
Lookup flow will be taken into account and passed on to the output.
First Match
This selection implies that several matches can be expected in the lookup. The First Match selection means that
in the lookup only the first encountered match will be taken into account and passed on to the main output flow.
All Matches
This selection implies that several matches can be expected in the lookup flow. In this case, all matches are taken
into account and passed on to the main output flow.
This option prevents null values from being passed on to the main output flow. It also allows you to pass on the rejected
data to a specific table called the Inner Join Reject table.
If the data searched for cannot be retrieved through the explicit Join or the filter Join, in other words if the Inner Join
cannot be established for any reason, then the requested data is rejected to the Output table defined as the Inner
Join Reject table, if any.
Simply drop column names from one table to a subordinate one to create a Join relationship between the two
tables. The Join is displayed graphically as a purple link and automatically creates a key that will be used as a
hash key to speed up the match search.
1. Click the tMap settings button at the top of the table to which the Join links to display the table properties.
2. Click in the Value field corresponding to Join Model and then click the three-dot button that appears to open
the [Options] dialog box.
3. In the [Options] dialog box, double-click the wanted Join type, or select it and click OK to validate the
setting and close the dialog box.
An Inner Join table should always be coupled to an Inner Join Reject table. For how to define an output table as an Inner
Join Reject table, see Lookup Inner Join rejection.
You can also use the filter button to decrease the number of rows to be searched and improve performance
(in Java).
Related topics:
The output corresponds to the Cartesian product of both tables (or more tables if need be).
If you create an explicit or an inner Join between two tables, the All rows option is no longer available. You then have to
select Unique match, First match or All matches. For more information, see How to use Explicit Join and How to use
Inner Join.
In the Filter field, type in the condition to be applied. This allows you to reduce the number of rows parsed against
the main flow, enhancing performance on long and heterogeneous flows.
You can use the Auto-completion tool via the Ctrl+Space bar keystrokes in order to reuse schema columns in
the condition statement.
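As an illustration only, a filter condition typed in this field might read as follows, assuming hypothetical lookup columns named Country and Status:

  row2.Country.equals("France") && row2.Status > 0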
If you remove Input entries from the Map Editor schema, this removal also occurs in your component schema definition.
You can also use the Expression field of the Var table to carry out any transformation you want using Java
code, as illustrated in the sketch after the list below.
Variables help you save processing time and avoid retyping the same data many times.
• Freely type in your variables in Java. Enter strings between quotes or concatenate functions using the
relevant operator.
• Add new lines using the plus sign and remove lines using the red cross sign. Press Ctrl+Space to retrieve
existing global and context variables.
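As an illustration only, Var entries might hold Java expressions such as the following (the column and variable names are hypothetical):

  row1.FirstName + "_" + row1.LastName             // concatenation of two input columns
  StringHandling.UPCASE(row1.City)                 // reuse of a StringHandling function
  Var.fullName.length() > 30 ? "long" : "short"    // reuse of a previously defined Var entry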
Select an entry in the Input area or press the Shift key to select multiple entries of one Input table.
Press Ctrl to select either non-appended entries in the same input table or entries from various tables. When
selecting entries in the second table, notice that the first selection displays in grey. Hold the Ctrl key down to drag
all entries together. A tooltip shows you how many entries are in selection.
Then various types of drag-and-drops are possible depending on the action you want to carry out.
Appended to the variable list, a metadata list provides information about the selected column.
Press Ctrl or Shift and click fields for multiple selection then click the red cross sign.
1. Double-click the tMap component in your Job design to open the Map Editor.
2. In the lower half of the editor, click the Expression editor tab to open the corresponding view.
To edit an expression, select it in the Input panel and then click the Expression editor tab and modify the expression
as required.
3. Enter the Java code according to your needs. The corresponding expression in the output panel is
synchronized.
Refer to the Java documentation for more information regarding functions and operations.
To open the [Expression Builder] dialog box, click the three-dot button next to the expression you want to open
in the Var or Output panel of the Map Editor.
For a use case showing the usage of the expression editor, see the following section.
The following example shows the use of Expression Builder in a tMap component.
• From the DB input, comes a list of names made of a first name and a last name separated by a space char.
In the tMap, use the Expression Builder to, first, replace the blank char separating the first and last names with an
underscore char and, second, change the states from lower case to upper case.
1. In the tMap, set the relevant inner join to set the reference mapping. For more information regarding tMap,
see tMap operation and Map editor interfaces.
2. From the main (row1) input, drop the Names column to the output area, and the State column from the lookup
(row2) input towards the same output area.
3. Then click in the first Expression field (row1.Name) to display the three-dot button.
4. In the Category area, select the relevant action you want to perform. In this example, select StringHandling
and select the EREPLACE function.
5. In the Expression area, paste row1.Name in place of the text expression, in order to get:
StringHandling.EREPLACE(row1.Name," ","_"). This expression will replace the separating space char
with an underscore char in the char string given.
Note that the CHANGE and EREPLACE functions in the StringHandling category substitute
all substrings that match the given regular expression in the given old string with the given replacement and
return a new string. Their three parameters are:
• oldStr: the string to be processed,
• regex: the regular expression that each substring must match,
• replacement: the string substituted for each matching substring.
6. Now check that the output is correct by typing a dummy value, e.g. Chuck Norris, in the relevant Value field
of the Test area and clicking Test!. The correct change should be carried out, for example, Chuck_Norris.
7. Click OK to validate the changes, and then proceed with the same operation for the second column (State).
8. In the tMap output, select the row2.State Expression and click the [...] button to open the Expression builder
again.
This time, the StringHandling function to be used is UPCASE. The complete expression says:
StringHandling.UPCASE(row2.State).
9. Once again, check that the expression syntax is correct using a dummy Value in the Test area, for example
indiana. The Test! result should display INDIANA for this example. Then, click OK to validate the changes.
These changes will be carried out along the flow processing. The output of this example is as shown below.
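To recap, the two output expressions built in this example read:

  StringHandling.EREPLACE(row1.Name," ","_")   // replaces the separating space with an underscore
  StringHandling.UPCASE(row2.State)            // changes the state to upper case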
You can also add an Output schema in your Map Editor, using the plus sign from the tool bar of the Output area.
You can also create a join between your output tables. The join on the tables enables you
to process several flows separately and unite them in a single output. For more information about the output join
tables feature, see Talend Components Reference Guide.
When you click the [+] button to add an output schema or to make a join between your output tables, a dialog
box opens. You then have two options.
Select... To...
New output Add an independent table.
Create join table from Create a join between output tables. To do so, select in the drop-down
list the table from which you want to create the join. In the Named field, type
in the name of the table to be created.
Unlike in the Input area, the order of output schema tables does not make a difference, as there is no
subordination relationship between outputs (of the Join type).
Once all connections, hence output schema tables, are created, you can select and organize the output data via
drag-and-drop.
You can drop one or several entries from the Input area straight to the relevant output table.
Press Ctrl or Shift, and click entries to carry out multiple selection.
Or you can drag expressions from the Var area and drop them to fill in the output schemas with the appropriate
reusable data.
Note that if you make any change to the Input column in the Schema Editor, a dialog box prompts you to decide
whether to propagate the changes throughout all Input/Variable/Output table entries concerned.
Action Result
Drag & Drop onto existing expressions. Concatenates the selected expression with the existing expressions.
Drag & Drop to insertion line. Inserts one or several new entries at start or end of table or between two
existing lines.
Drag & Drop + Ctrl. Replaces highlighted expression with selected expression.
Drag & Drop + Shift. Adds the selected fields to all highlighted expressions. Inserts new lines if
needed.
Drag & Drop + Ctrl + Shift. Replaces all highlighted expressions with selected fields. Inserts new lines if
needed.
Click the Expression field of your input or output table to display the [...] button. Then click this three-dot button
to open the Expression Builder.
For more information regarding the Expression Builder, see How to write code using the Expression Builder.
6.2.4.2. Filters
Filters allow you to make a selection among the input fields, and send only the selected fields to various outputs.
Click the [+] button at the top of the table to add a filter line.
You can freely enter your filter statements using Java operators and functions.
Drop expressions from the Input area or from the Var area to the Filter row entry of the relevant Output table.
An orange link is then created. Add the required Java operator to finalize your filter formula.
You can create various filters on different lines. The AND operator is the logical conjunction of all stated filters.
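For example, with hypothetical columns named Age and State, two filter lines such as the following let through only the rows that satisfy both conditions:

  row1.Age >= 18
  row1.State != null && row1.State.equals("CA")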
It groups data that do not satisfy one or more filters defined in the standard output tables. Note that "standard
output tables" means all non-Reject tables.
This way, data rejected from other output tables are gathered in one or more dedicated tables, allowing you to
spot any error or unpredicted case.
The Reject principle concatenates all non-Reject table filters and defines them as an ELSE statement.
1. Click the tMap settings button at the top of the output table to display the table properties.
2. Click in the Value field corresponding to Catch output reject and then click the [...] button that appears to
display the [Options] dialog box.
3. In the [Options] dialog box, double-click true, or select it and click OK to validate the setting and close
the dialog box.
You can define several Reject tables to offer multiple refined outputs. To differentiate various Reject outputs,
add filter lines by clicking the plus arrow button.
Once a table is defined as Reject, the verification process will first be enforced on regular tables before taking
into consideration possible constraints of the Reject tables.
Note that data are not exclusively processed to one output. Even if a row satisfies one constraint and is hence routed
to the corresponding output, it is still checked against the other constraints and can be routed to other
outputs.
To define an Output flow as container for rejected Inner Join data, create a new output component on your Job
that you connect to the Map Editor. Then in the Map Editor, follow the steps below:
1. Click the tMap settings button at the top of the output table to display the table properties.
2. Click in the Value field corresponding to Catch lookup inner join reject and then click the [...] button that
appears to display the [Options] dialog box.
3. In the [Options] dialog box, double-click true, or select it and click OK to validate the setting and close
the dialog box.
Deactivating the Die on error option allows you, on one hand, to skip the rows in error and complete the process for error-
free rows and, on the other hand, to retrieve the rows in error and manage them if needed.
1. Double-click the tMap component on the design workspace to open the Map Editor.
2. Click the Property Settings button at the top of the input area to display the [Property Settings] dialog box.
3. In the [Property Settings] dialog box, clear the Die on error check box and click OK.
A new table called ErrorReject appears in the output area of the Map Editor. This output table automatically
comprises two columns: errorMessage and errorStackTrace, retrieving the message and stack trace of the error
encountered during the Job execution. Errors can be unparseable dates, null pointer exceptions, conversion issues,
etc.
You can also drag and drop columns from the input tables to this error reject output table. The erroneous data
can then be retrieved with the corresponding error messages and corrected afterward.
Once the error reject table is set, its corresponding flow can be sent to an output component.
To do so, on the design workspace, right-click the tMap component, select Row > ErrorReject in the menu, and
click the corresponding output component, here tLogRow.
When you execute the Job, errors are retrieved by the ErrorReject flow.
The result contains the error message, its stack trace, and the two columns, id and date, dragged and dropped to
the ErrorReject table, separated by a pipe "|".
To retrieve the schema structure of the selected table from the Repository:
1. Click the tMap Settings button at the top of the table to display the table properties.
2. Click in the Value field of Schema Type, and then click the three-dot button that appears to open the
[Options] dialog box.
3. In the [Options] dialog box, double-click Repository, or select it and click OK, to close the dialog box and
display the Schema Id property beneath Schema Type.
If you close the Map Editor now without specifying a Repository schema item, the schema type changes back to
Built-In.
4. Click in the Value field of Schema Id, and then click the [...] button that appears to display the [Repository
Content] dialog box.
5. In the [Repository Content] dialog box, select your schema as you define a centrally stored schema for any
component, and then click OK.
The Value field of Schema Id is filled with the schema you just selected, and everything in the Schema
editor panel for this table becomes read-only.
Changing the schema type of the subordinate table across a Join from Built-In to Repository causes the Join to get lost.
Changes to the schema of a table made in the Map Editor are automatically synchronized to the schema of the
corresponding component connected with the tMap component.
Use the tool bar below the schema table, to add, move or remove columns from the schema.
You can also load a schema from the repository or export it into a file.
Metadata Description
Column Column name as defined on the Map Editor schemas and on the Input or Output component
schemas.
Key The Key shows if the expression key data should be used to retrieve data through the Join link. If
unchecked, the Join relation is disabled.
Type Type of data: String, Integer, Date, etc.
Length -1 shows that no length value has been defined in the schema.
Precision Defines the number of digits to the right of the decimal point.
Nullable Clear this check box if the field value should not be null.
Default Shows any default value that may be defined for this field.
Comment Free text field. Enter any useful comment.
Input metadata and output metadata are independent from each other. You can, for instance, change the label of a column
on the output side without the column label of the input schema being changed.
However, any changes made to the metadata are immediately reflected in the corresponding schema in the relevant
tMap (Input or Output) area, but also in the schema defined for the component itself on the design workspace.
A red-colored background shows that an invalid character has been entered. Most special characters are prohibited
so that the Job can interpret and use the text entered in the code. Authorized characters include lower-case
letters, upper-case letters, and figures (except as the start character).
When handling large data sources, including, for example, numerous columns, a large number of lines or of column
types, your system might encounter memory shortage issues that prevent your Job from completing properly, in
particular when using a tMap component for your transformation.
A feature has been added (in Java only for the time being) to the tMap component in order to reduce the memory
used for lookup loading. Rather than storing the temporary data in the system memory and thus possibly
reaching the memory limit, the Store temp data option allows you to store the temporary data
in a directory on your disk instead.
This feature comes as an option to be selected in the Lookup table of the input data in the Map Editor.
1. Double-click the tMap component in your Job to launch the Map Editor.
2. In the input area, click the Lookup table describing the temporary data you want to be loaded onto the disk rather
than into memory.
4. Click in the Value field corresponding to Store temp data, and then click the [...] button to display the
[Options] dialog box.
5. In the [Options] dialog box, double-click true, or select it and click OK, to enable the option and close the
dialog box.
For this option to be fully activated, you also need to specify the directory on the disk, where the data will be stored,
and the buffer size, namely the number of rows of data each temporary file will contain. You can set the temporary
storage directory and the buffer size either in the Map Editor or in the tMap component property settings.
To set the temporary storage directory and the buffer size in the Map Editor:
1. Click the Property Settings button at the top of the input area to display the [Property Settings] dialog box.
2. In the [Property Settings] dialog box, fill the Temp data directory path field with the full path to the directory
where the temporary data should be stored.
3. In the Max buffer size (nr of rows) field, specify the maximum number of rows each temporary file can
contain. The default value is 2,000,000.
4. Click OK to validate the settings and close the [Property Settings] dialog box.
To set the temporary storage directory in the tMap component property settings without opening the Map Editor:
1. Click the tMap component to select it on the design workspace, and then select the Component tab to show
the Basic settings view.
2. In the Store on disk area, fill the Temp data directory path field with the full path to the directory where
the temporary data should be stored.
Alternatively, you can use a context variable through the Ctrl+Space bar if you have set the variable in a
Context group in the repository. For more information about contexts, see Using contexts and variables.
This way, you will limit the use of allocated memory per reference data to be written onto temporary files stored
on the disk.
As writing the main flow onto the disk requires the data to be sorted, note that the order of the output rows cannot be
guaranteed.
On the Advanced settings view, you can also set a buffer size if needed. Simply fill out the Max buffer size
(nb of rows) field in order for the data stored on the disk to be split into as many files as needed.
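For example, keeping the default buffer size of 2,000,000 rows, a lookup flow of 5,000,000 rows would be split into three temporary files (2,000,000 + 2,000,000 + 1,000,000 rows).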
In order to adapt to the multiple processing types as well as to address performance issues, the tMap component
supports different lookup loading modes.
• Load once: Default setting. Select this option to load the entire lookup flow before processing the main flow.
This is the preferred option if you have a great amount of data from your main flow that needs to be requested
in your lookup, or if your reference (or lookup) data comes from a file that can be easily loaded.
• Reload at each row: At each row, the lookup gets loaded again. This is mainly interesting in Jobs where the
lookup volume is large while the main flow is pretty small. Note that this option allows you to use dynamic
variable settings, such as a where clause, to change/update the lookup flow on the fly as it gets loaded, before
the main flow join is processed. This option can be considered the counterpart of the Store temp data
option that is available for file lookups.
• Reload at each row (cache): Expressions (in the Lookup table) are assessed and looked up in the cache first.
The results of joins that have already been solved are stored in the cache, in order to avoid loading the same
results twice. This option optimizes the processing time and helps improve the processing performance of the tMap
component.
Note that for the time being, you cannot use Reload at each row (cache) and Store temp data at the same time.
1. Click the tMap settings button at the top of the lookup table to display the table properties.
2. Click in the Value field corresponding to Lookup Model, and then click the [...] button to display the
[Options] dialog box.
3. In the [Options] dialog box, double-click the wanted loading mode, or select it and then click OK, to validate
the setting and close the dialog box.
For use cases using these options, see the tMap section of Talend Components Reference Guide.
When your lookup is a database table, the best practice is to open the connection to the database at the beginning of your
Job design in order to optimize performance.
1. The main flow has far fewer rows than the lookup flow (for example, with a ratio of 1000 or more).
The advantage of this approach, with both conditions satisfied, is that it helps deal with the fact that the amount
of lookup data increases over time, since you can run queries against the data from the main flow in the database
component in order to select only that lookup data which is assumed to be relevant for each record in the main
flow, such as in the following example which uses lookup data from a MySQL database.
The schemas of the main flow, the lookup flow and the output flow read as follows:
You can select from the MySQL database only the data that matches the values of the id column of the main flow.
To do this, proceed as follows:
2. Click the [+] button to add one row, and set the Key to id and the Value to row1.id.
4. In the Query field, enter the query to select the data that matches the id column of the main flow.
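As an illustration only (the table and column names here are hypothetical), such a query reuses the id key defined in step 2 through the globalMap and could read:

  "SELECT id, Name, Address FROM customers WHERE id = " + globalMap.get("id")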
For further information about the components used in this example, see Talend Components Reference Guide.
By default, when multiple lookup flows are handled in the tMap component, these lookup flows are loaded and
processed one after another, according to the sequence of the lookup connections. When a large amount of data
is processed, the Job execution speed is slowed down. To maximize the Job execution performance, the tMap
component allows parallel loading of multiple lookup flows.
2. Click the Property Settings button at the top of the input area to open the [Property Settings] dialog box.
3. Select the Lookup in parallel check box and click OK to validate the setting and close the dialog box.
With this option enabled, all the lookup flows will be loaded and processed in the tMap component simultaneously,
and then the main input flow will be processed.
Like the Traces Debug mode of the Run view, which allows you to monitor data processing during Job execution,
the tMap component offers the same functionality in its editor. This allows you to monitor the data mapping
and processing before the execution, while configuring the tMap component.
For further information about monitoring data processes from the Run view, see Row by row monitoring.
1. Activate the Traces Debug mode in the Run view. For more information regarding the Traces Debug
execution mode and how to activate it, see How to run a Job in Traces Debug mode.
A new Preview column displays in the main input table and in the output tables, showing a preview of the
data processed, and a new tool bar displays in the top left corner of the Map Editor.
Click the Previous Row button to display the data preview of the previous row, within a limit of five rows back.
Click the Next Row button to display the data preview of the next row.
Click the Next Breakpoint button to display the data preview of the next breakpoint.
To monitor your data processing at a breakpoint you first need to define one on the relevant link. To do
so, right-click the relevant link on the design workspace, select Show Breakpoint Setup on the popup
menu and select the Activate conditional breakpoint check box and set the Conditions in the table. A
pause icon displays below the link when you access the Traces Debug mode.
Click the Kill button to stop the data processing.
2. Go back to the Run view and click the Basic Run tab.
tXMLMap is fine-tuned to leverage the Document data type for processing XML data, a case of transformation
that often mixes hierarchical (XML) and flat data together. This Document type carries a complete user-
specific XML flow. Using tXMLMap, you can add as many input or output flows as required into a
visual map editor to perform, on these flows, the following operations:
• data matching via different models, for example, the Unique match mode (related topic: How to use Explicit
Join),
• automated XML tree construction on both the input and the output sides,
• inner join and left outer join (related topic: How to use Inner Join),
• lookup between data sources, whether they are flat or XML data, using models like Load once (related topic:
Handling Lookups),
• data rejecting.
Like tMap, a map editor is required to configure these operations. To open this map editor, you can double-click
the tXMLMap icon in the design workspace, or alternatively, click the three-dot button next to the Map Editor
in the Basic settings view of the tXMLMap component.
tXMLMap and tMap share common approaches to accomplish most of these operations. Therefore, the
following sections explain only the operations specific to tXMLMap for processing
hierarchical XML data.
Unlike tMap, tXMLMap does not provide the Store temp data option for storing temporary data in a directory
on your disk. For further information about this option of tMap, see Solving memory limitation issues in tMap use.
The following figure presents an example in which the input flow, Customer, is set up as the Document type. To
replicate it, in the Map editor, simply click the [+] button to add one row on the input side of the Schema
editor, rename it, and select Document from the drop-down list of available data types.
In practice, in most cases, tXMLMap retrieves the schema of its preceding or succeeding components, for
example, from a tFileInputXML component or, in the ESB use case, from a tESBProviderRequest component.
This avoids much manual effort to set up the Document type for the XML flow to be processed. However, to
continue to modify the XML structure as the content of a Document row, you still need to use the given Map editor.
Be aware that a Document flow carries a user-defined XML tree and is no more than one single field of a schema which,
like any other schema, may contain fields of different data types. For further information about how to set
a schema, see Basic Settings tab.
Once the Document type is set up for a row of data, a basic XML tree structure is automatically created in the
corresponding data flow table in the map editor to reflect the details of this structure. This basic structure represents
the minimum elements required by a valid XML tree in using tXMLMap:
• The root element: it is the minimum element required by an XML tree to be processed and, when need be, the
foundation on which to develop a sophisticated XML tree.
• The loop element: it determines the element over which the iteration takes place to read the hierarchical data of
an XML tree. By default, the root element is set as the loop element.
This figure gives an example with the input flow, Customer. Based on this generated XML root, tagged as root by
default, you can develop the XML tree structure of interest.
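As an illustration only, a tree developed for this Customer flow might look like the following (the element names are hypothetical):

  <root>
    <Customer> <!-- set as the loop element -->
      <Name/>
      <State/>
    </Customer>
  </root>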
1. Import the custom XML tree structure from one of the following types of sources:
• XML or XSD files (related topic: How to import the XML tree structure from XML and XSD files)
When you import an XSD file, you will create the XML structure this XSD file describes.
• file XML connections created and stored in the Repository of your Studio (related topic: How to import
the XML tree structure from the Repository).
If need be, you can develop the XML tree of interest manually using the options provided in the contextual menu.
2. Reset the loop element for the XML tree you are creating, if need be. You can set as many loops as you need.
At this step, you may have to consider the following situations:
• If you have to create several XML trees, you need to define the loop element for each of them.
• If you import the XML tree from the Repository, the loop element will already have been set according to
the source structure. But you can still reset the loop element.
For further details, see How to set or reset a loop element for an imported XML structure.
If needed, you can continue to modify the imported XML tree using the options provided in the contextual menu.
The following table presents the operations you can perform through the available options.
Options Operations
Create Sub-element and Create Attribute Add elements or attributes to develop an XML tree. Related topic: How to
add a sub-element or an attribute to an XML tree structure
Set a namespace Add and manage given namespaces on the imported XML tree. Related
topic: How to manage a namespace
Delete Delete an element or an attribute. Related topic: How to delete an element
or an attribute from the XML tree structure
Rename Rename an element or an attribute.
As loop element Set or reset an element as loop element. Multiple loop elements and optional
loop elements are supported.
As optional loop This option is available only for a loop element you have defined.
When the corresponding element exists in the source file, an optional loop
element works the same way as a normal loop element; otherwise, it
automatically resets its parent element as the loop element or, in the absence
of a parent element in the source file, it takes the element of the next higher
level, up to the root element. But in real-world practice, with such differences
between the XML tree and the source file structure, we recommend adapting
the XML tree to the source file for better performance.
As group element On the XML tree of the output side, set an element as group element. Related
topic: How to group the output data
As aggregate element On the XML tree of the output side, set an element as aggregate element.
Related topic: How to aggregate the output data
Add Choice Set the Choice element. Then all of its child elements developed underneath
will be contained in this declaration. This Choice element originates from
one of the XSD concepts. It enables tXMLMap to perform the function of
the XSD Choice element to read or write a Document flow.
The following sections present more details about the process of creating the XML tree.
6.3.1.2. How to import the XML tree structure from XML and XSD
files
To import the XML tree structure from an XML file, proceed as follows:
1. In the input flow table of interest, right-click the column name to open the contextual menu. In this example,
it is Customer.
3. In the pop-up dialog box, browse to the XML file you need to use to provide the XML tree structure of
interest and double-click the file.
To import the XML tree structure from an XSD file, proceed as follows:
1. In the input flow table of interest, right-click the column name to open the contextual menu. In this example,
it is Customer.
3. In the pop-up dialog box, browse to the XSD file you need to use to provide the XML tree structure of interest
and double-click the file.
4. In the dialog box that appears, select an element from the Root list as the root of your XML tree, and click
OK. Then the XML tree described by the XSD file imported is established.
• When importing either an input or an output XML tree structure from an XSD file, you can choose an element as
the root of your XML tree.
• Once an XML structure is imported, the root tag is automatically renamed with the name of the XML source. To
change this root name manually, you need to use the tree schema editor. For further information about this editor, see
Editing the XML tree schema.
Then, you need to define the loop element in this XML tree structure. For further information about how to define
a loop element, see How to set or reset a loop element for an imported XML structure.
6.3.1.3. How to import the XML tree structure from the Repository
To do this, proceed as follows:
1. In any input flow table, right-click the column name to open the contextual menu. In this example, it is
Customer.
3. In the pop-up repository content list, select the XML connection or the MDM connection of interest to import
the corresponding XML tree structure.
To import an XML tree structure from the Repository, the corresponding XML connection should have been created.
For further information about how to create a file XML connection in the Repository, see Centralizing XML file
metadata.
The XML tree structure is created and a loop is defined automatically as this loop was already defined during the
creation of the current Repository-stored XML connection.
1. In the created XML tree structure, right-click the element you need to define as loop. For example, you need
to define the Customer element as loop in the following figure.
2. From the pop-up contextual menu, select As loop element to define the selected element as loop.
Once done, this selected element is marked with the text: loop.
If you close the Map Editor without having set the required loop element for a given XML tree, its root element will be
set automatically as loop element.
1. In the XML tree you need to edit, right-click the element to which you need to add a sub-element or an
attribute underneath and select Create Sub-Element or Create Attribute according to your purpose.
2. In the pop-up [Create New Element] wizard, type in the name you need to use for the added sub-element
or attribute.
3. Click OK to validate this creation. The new sub-element or attribute displays in the XML tree structure you
are editing.
1. In the XML tree you need to edit, right-click the element or the attribute you need to delete.
Then the selected element or attribute is deleted, including all of the sub-elements or the attributes attached
to it underneath.
Defining a namespace
1. In the XML tree of the input or the output data flow you need to edit, right-click the element for which you
need to declare a namespace. For example, in a Customer XML tree of the output flow, you need to set a
namespace for the root.
2. In the pop-up contextual menu, select Set a namespace. Then the [Namespace dialog] wizard displays.
4. If you need to set a prefix for this namespace you are editing, select the Prefix check box in this wizard and
type in the prefix you need. In this example, we select it and type in xhtml.
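For example, assuming the standard XHTML namespace URI is the one entered in the wizard, the declaration added to the root element would read:

  <root xmlns:xhtml="http://www.w3.org/1999/xhtml">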
1. In the XML tree that the namespace you need to edit belongs to, right-click this namespace to open the
contextual menu.
Deleting a namespace
1. In the XML tree that the namespace you need to edit belongs to, right-click this namespace to open the
contextual menu.
Once the group element is set, all of its sub-elements except the loop one are used as conditions to group the
output data.
You have to carefully design the XML tree view for the optimized usage of a given group element. For further
information about how to use a group element, see tXMLMap in Talend Components Reference Guide.
tXMLMap provides group element and aggregate element to classify data in the XML tree structure. When handling a row
of XML data flow, the behavioral difference between them is:
• The group element processes the data always within one single flow.
• The aggregate element splits this flow into separate and complete XML flows.
1. In the XML tree view on the output side of the Map editor, right-click the element you need to set as group
element.
Then the selected element becomes the group element. The following figure presents an example of an
XML tree with the group element.
1. In the XML tree view on the output side of the Map editor, right-click the element you have defined as
group element.
1. To define an element as aggregate element, simply right-click this element of interest in the XML tree view
on the output side of the Map editor and from the contextual menu, select As aggregate element.
Then this element becomes the aggregate element. Text in red reading aggregate is added to it. The
following figure presents an example.
2. To revoke the definition of the aggregate element, simply right-click the defined aggregate element and from
the contextual menu, select Remove aggregate element.
To define an element as aggregate element, ensure that this element has no child element and that the All in one feature is
disabled. The As aggregate element option is not available in the contextual menu until both conditions are met.
For further information about the All in one feature, see How to output elements into one document.
For an example about how to use the aggregate element with tXMLMap, see Talend Components Reference
Guide.
tXMLMap provides group element and aggregate element to classify data in the XML tree structure. When handling one
row of data (one complete XML flow), the behavioral difference between them is:
• The group element processes the data always within one single flow.
• The aggregate element splits this flow into separate and complete XML flows.
1. Click the pincer icon to open the map setting panel. The following figure presents an example.
2. Click the All in one field and from the drop-down list, select true or false to decide whether the output XML
flow should be one single flow.
• If you select true, the XML data is output all in one single flow. In this example, the single flow reads
as follows:
• If you select false, the XML data is output in separate flows, each loop being one flow, neither grouped
nor aggregated. In this example, these flows read as follows:
Each flow contains one complete XML structure. To take the first flow as example, its structure reads:
The All in one feature is disabled if you are using the aggregate element. For further information about the aggregate element,
see How to aggregate the output data.
By contrast, in some scenarios you do not need to output the empty elements but have to keep them in the
output XML tree for some reason.
tXMLMap allows you to set the boolean for the creation of empty elements. To do this, on the output side of the
Map editor, perform the following operations:
2. In the panel, click the Create empty element field and from the drop-down list, select true or false to decide
whether to output empty elements.
• If you select true, the empty element is created in the output XML flow and output, for example,
<customer><LabelState/></customer>.
• If you select false, the empty element is not created in the output XML flow.
For example, in this figure, the types element is the primary loop and the output data will be sorted by the
values of this element.
In this case, in which one output loop element receives several input loop elements, a [...] button appears next to
this receiving loop element or, for flat data, at the head of the table representing the flat data flow.
To define the loop sequence, do the following:
1. Click this [...] button to open the sequence arrangement window as presented by the figure used earlier in
this section.
To access this schema editor, click the Tree schema editor tab on the lower part of the map editor.
The left half of this view is used to edit the tree schema of the input flow and the right half to edit the tree schema
of the output flow.
The following table presents further information about this schema editor.
Metadata Description
XPath Use it to display the absolute paths pointing to each element or attribute in an XML tree and edit the
name of the corresponding element or attribute.
Key Select the corresponding check box if the expression key data should be used to retrieve data
through the Join link. If unchecked, the Join relation is disabled.
Type Type of data: String, Integer, Document, etc.
Nullable Select this check box if the field value could be null.
Pattern Define the pattern for the Date data type.
Input metadata and output metadata are independent from each other. You can, for instance, change the label of a column
on the output side without the column label of the input schema being changed.
However, any changes made to the metadata are immediately reflected in the corresponding schema in the
relevant tXMLMap (Input or Output) area, but also in the schema defined for the component itself on the design
workspace.
For detailed use cases about the multiple operations that you can perform using tXMLMap, see Talend
Components Reference Guide.
In the Integration perspective of the studio, the Metadata folder stores reusable information on files, databases,
and/or systems that you need to create your Jobs.
Various corresponding wizards help you store these pieces of information and use them later to set the connection
parameters of the relevant input or output components, but you can also store the data description called "schemas"
in your studio.
This chapter provides procedures to create and manage various metadata items in the Repository that can be used
in all your Job designs. For how to use a Repository metadata item, see How to use centralized metadata in a
Job and How to set a repository schema.
Before starting any metadata management processes, you need to be familiar with the Graphical User Interface
(GUI) of your studio. For more information, see the appendix describing GUI elements.
7.1. Objectives
The Metadata folder in the Repository tree view stores reusable information on files, databases, and/or systems
that you need to create your Jobs.
Various corresponding wizards help you store these pieces of information that can be used later to set the
connection parameters of the relevant input or output components and the data description called "schemas" in a
centralized manner in Talend Studio.
The procedures of different wizards slightly differ depending on the type of connection chosen.
Click Metadata in the Repository tree view to expand the folder tree. Each of the connection nodes will gather
the various connections and schemas you have set up.
From Talend Studio, you can set up the following, amongst others:
• a DB connection,
• a JDBC schema,
• a SAS connection,
• an SAP connection,
• a file schema,
• an LDAP schema,
• a Salesforce schema,
• a generic schema,
• an MDM connection,
• a Drools connection,
• a WSDL schema,
• a Validation rule,
• an FTP connection,
• an HL7 connection.
This setup procedure is made of two separate but closely related major tasks:
Prerequisites: Talend Studio requires specific third-party Java libraries or database drivers (.jar files) to be
installed in order to connect to sources or targets. Due to license restrictions, Talend may not be able to ship
certain required libraries or drivers; in that situation, the connection wizard to be presented in the following
sections displays related information to help you identify and install the libraries or drivers in question. For more
information, see the Talend Installation Guide.
To centralize database connection parameters you have defined in a Job, click the icon in the Basic settings
view of the relevant database component with its Property Type set to Built-in to open the database connection
setup wizard.
To modify an existing database connection, right-click the connection item from the Repository tree view, and
select Edit connection to open the connection setup wizard.
Then define the general properties and parameters of the connection in the wizard.
2. Fill in the optional Purpose and Description fields as required. The information you fill in the Description
field will appear as a tooltip when you move your mouse pointer over the connection.
3. If needed, set the connection version and status in the Version and Status fields respectively. You can also
manage the version and status of a repository item in the [Project Settings] dialog box. For more information,
see Version management and Status management respectively.
4. If needed, click the Select button next to the Path field to select a folder under the Db connections node
to hold your newly created database connection. Note that you cannot select a folder if you are editing an
existing database connection, but you can drag and drop a connection to a new folder whenever you want.
5. Click Next when completed. The second step requires you to fill in or edit database connection data.
When you are creating the database connection for some databases such as AS/400, HSQLDB, Informix, Microsoft SQL,
MySQL, Oracle, Sybase, or Teradata, you can specify additional connection properties through the Additional
parameters field in the Database Settings area.
In Talend Studio 6.0 and onwards, due to limitations of Java 8, ODBC is no longer supported for Access database
connections, and the only supported database driver type is JDBC.
Also due to Java 8 limitations, you cannot create Generic ODBC or Microsoft SQL Server (ODBC) connections in
Talend Studio 6.0 and onwards unless you import such connections created in an earlier version of Talend Studio;
in that case, the imported Generic ODBC and Microsoft SQL Server (ODBC) connections still work, but only with
Java 7.
For an MS SQL Server (JDBC) connection, when Microsoft is selected from the Db Version list, you need to
download the Microsoft JDBC driver for SQL Server from the Microsoft Download Center, unpack the downloaded zip
file, choose a jar in the unzipped folder based on your JRE version, rename the jar to mssql-jdbc.jar and install
it manually. For more information about choosing the jar, see the System Requirements information on the Microsoft
Download Center.
If you need to connect to Hive, we recommend using one of the Talend solutions with Big Data.
If you are creating an MSSQL connection, in order to be able to retrieve all table schemas in the database, be sure to:
• enter dbo in the Schema field if you are connecting to MSSQL 2000,
• remove dbo from the Schema field if you are connecting to MSSQL 2005/2008.
If the connection fails, a message box is displayed to indicate the failure. From that message box, click the
Details button to read further information.
If a missing library or driver (.jar file) has caused this failure, this is indicated in the Details panel, and
you can then install the specified library or driver.
The Studio provides multiple approaches to automate the installation. For further information, see the chapter
describing how to install external modules of the Talend Installation Guide.
3. If you are creating a Teradata connection, select Yes for the Use SQL Mode option at the bottom of the
wizard to use SQL queries to retrieve metadata. The JDBC driver is not recommended with this database
because of possible poor performance.
4. If needed, fill in the database properties information. That is all for the first operation on database connection
setup. Click Finish to close the connection setup wizard.
The newly created database connection is now available in the Repository tree view and it displays several
folders including Queries (for SQL queries you save) and Table schemas that will gather all schemas linked
to this database connection upon table schema retrieval.
Now you can drag and drop the database connection onto the design workspace as a database component to
reuse the defined database connection details in your Job.
For information on locking and unlocking a project item and on different lock types, see Working collaboratively on project
items.
To retrieve table schemas from the database connection you have just set up, right-click the connection item from
the Repository tree view, and select Retrieve schema from the contextual menu.
An error message will appear if there are no tables to retrieve from the selected database or if you do not have the correct
rights to access this database.
A new wizard opens up where you can filter and show different objects (tables, views and synonyms) in your
database connection, select tables of interest, and define table schemas.
For the time being, the synonyms option works for Oracle, IBM DB2 and MSSQL only.
1. In the Select Filter Conditions area, select the Use the Name Filter option.
2. In the Select Types area, select the check box(es) of the database object(s) you want to filter or display.
3. In the Set the Name Filter area, click Edit... to open the [Edit Filter Name] dialog box.
4. Enter the filter you want to use in the dialog box. For example, if you want to retrieve the database objects
whose names start with "A", enter the filter "A%"; if you want to retrieve all database objects whose names
end with "type", enter "%type" as your filter (see the sketch after these filtering procedures).
6. Click Next to open a new view on the wizard that lists the filtered database objects.
1. In the Select Filter Conditions area, select the Use Sql Filter option.
2. In the Set the Sql Filter field, enter the SQL query you want to use to filter database objects.
3. Click Next to open a new view that lists the filtered database objects.
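For illustration only: both filter types behave like standard JDBC metadata lookups, where "A%" and "%type" are LIKE-style name patterns. The following minimal Java sketch, in which the connection URL, credentials and pattern are placeholder values, performs the same kind of name filtering outside the Studio:

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class NameFilterSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; replace with your own.
        Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/demo", "user", "password");
        DatabaseMetaData meta = conn.getMetaData();
        // "A%" keeps objects whose names start with "A"; use "%type"
        // to keep objects whose names end with "type".
        ResultSet rs = meta.getTables(null, null, "A%",
                new String[] { "TABLE", "VIEW" });
        while (rs.next()) {
            System.out.println(rs.getString("TABLE_NAME"));
        }
        conn.close();
    }
}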
1. Select one or more database objects on the list and click Next to open a new view on the wizard where you
can see the schemas of the selected object.
If no schema is visible on the list, click the Check connection button below the list to verify the database connection
status.
Make sure the data type in the Type column is correctly defined.
For more information regarding Java data types, including date patterns, see the Java API Specification; a short
date pattern sketch also follows the list below.
• Object: a generic Talend data type that allows processing data without regard to its content, for example,
a data file not otherwise supported can be processed with a tFileInputRaw component by specifying that
it has a data type of Object.
• List: a space-separated list of primitive type elements in an XML Schema definition, defined using the
xsd:list element.
• Dynamic: a data type that can be set for a single column at the end of a schema to allow processing fields
as VARCHAR(100) columns named either as ‘Column<X>’ or, if the input includes a header, from the
column names appearing in the header. For more information, see Dynamic schema.
• Document: a data type that allows processing an entire XML document without regard to its content.
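For reference, the date patterns used in schemas follow the Java date-format conventions (java.text.SimpleDateFormat). Here is a minimal sketch, with an illustrative pattern and sample value:

import java.text.SimpleDateFormat;
import java.util.Date;

public class DatePatternSketch {
    public static void main(String[] args) throws Exception {
        // "dd-MM-yyyy" is the kind of pattern entered in the schema's
        // Date Pattern column for a Date-type field.
        SimpleDateFormat format = new SimpleDateFormat("dd-MM-yyyy");
        Date parsed = format.parse("25-12-2016");   // String to Date
        System.out.println(format.format(parsed));  // prints 25-12-2016
    }
}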
If your source database table contains any default value that is a function or an expression rather than a string, be
sure to remove the single quotation marks, if any, enclosing the default value in the end schema to avoid unexpected
results when creating database tables using this schema.
By default, the schema displayed on the Schema panel is based on the first table selected in the list of schemas
loaded (left panel). You can change the name of the schema and, according to your needs, customize
the schema structure in the schema panel.
The toolbar allows you to add, remove or move columns in your schema. In addition, you can load an XML
schema from a file or export the current schema as XML.
To retrieve a schema based on one of the loaded table schemas, select the DB table schema name in the drop-
down list and click Retrieve schema. Note that the retrieved schema then overwrites any current schema
and does not retain any custom edits.
When done, click Finish to complete the database schema creation. All the retrieved schemas are displayed
in the Table schemas sub-folder under the relevant database connection node.
Now you can drag and drop any table schema of the database connection from the Repository tree view onto
the design workspace as a new database component or onto an existing component to reuse the metadata. For
more information, see How to use centralized metadata in a Job and How to set a repository schema.
To centralize database connection parameters you have defined in a Job into a JDBC connection, click the
icon in the Basic settings view of the relevant database component with its Property Type set to Built-
in to open the database connection setup wizard.
To modify an existing JDBC connection, right-click the connection item from the Repository tree view, and
select Edit connection to open the connection setup wizard.
2. Fill in the schema generic information, such as the connection Name and Description, and then click Next
to proceed to define the connection details.
For further information, see the section on defining general properties in Setting up a database connection.
• In the Driver jar field, select the driver jar for your connection to the database.
• In the Class name field, fill in the main class of the driver that allows communication with the database.
• Fill in the Mapping File field with the mapping that allows the database type to match the Java data type
in the schema: click the [...] button to open a dialog box and select the mapping file from the
Mapping list area according to the type of database you are connecting to.
The mapping files are XML files that you can manage via Window > Preferences > Talend > Specific Settings
> Metadata of TalendType. For more information, see Type mapping.
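For context, these three fields correspond to what any Java program needs in order to open a generic JDBC connection. The sketch below, in which the jar path, class name, URL and credentials are all placeholders, loads a driver jar at run time and opens a connection, which is roughly what happens behind the wizard:

import java.net.URL;
import java.net.URLClassLoader;
import java.sql.Connection;
import java.sql.Driver;
import java.util.Properties;

public class GenericJdbcSketch {
    public static void main(String[] args) throws Exception {
        // The three pieces of information the wizard asks for (placeholders):
        String driverJar = "/path/to/mysql-connector-java.jar"; // Driver jar
        String className = "com.mysql.jdbc.Driver";             // Class name
        String jdbcUrl   = "jdbc:mysql://localhost:3306/demo";  // JDBC URL

        // Load the driver class from the jar at run time.
        URLClassLoader loader = new URLClassLoader(
                new URL[] { new URL("file://" + driverJar) });
        Driver driver = (Driver) Class.forName(className, true, loader)
                .newInstance();

        // Connect through the driver directly: DriverManager does not see
        // drivers loaded by a child class loader.
        Properties props = new Properties();
        props.put("user", "user");
        props.put("password", "password");
        Connection conn = driver.connect(jdbcUrl, props);
        System.out.println("Connected to "
                + conn.getMetaData().getDatabaseProductName());
        conn.close();
    }
}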
6. Fill in, if needed, the database properties information. Click Finish to close the connection setup wizard.
The newly created JDBC connection is now available in the Repository tree view and it displays several
folders including Queries (for the SQL queries you save) and Table schemas that will gather all schemas
linked to this DB connection upon schema retrieval.
For information on locking and unlocking a project item and on different lock types, see Working collaboratively on project
items.
1. To retrieve table schemas from the database connection you have just set up, right-click the connection item
from the Repository tree view and select Retrieve schema from the contextual menu.
A new wizard opens up where you can filter and show different objects (tables, views and synonyms) in your
database connection, select tables of interest, and define table schemas.
2. Define a filter to filter database objects according to your needs. For details, see Filtering database objects.
Click Next to open a view that lists your filtered database objects. The list offers all the databases with all
their tables present on the database connection that meet your filter conditions.
If no database is visible on the list, click Check connection to verify the database connection.
3. Select one or more tables on the list to load them onto your repository file system. Your repository schemas
will be based on these tables.
4. Click Next. On the next window, four setting panels help you define the schemas to create. Modify the
schemas if needed.
Make sure the data type in the Type column is correctly defined.
For more information regarding Java data types, including date pattern, see Java API Specification.
• Object: a generic Talend data type that allows processing data without regard to its content, for example,
a data file not otherwise supported can be processed with a tFileInputRaw component by specifying that
it has a data type of Object.
• List: a space-separated list of primitive type elements in an XML Schema definition, defined using the
xsd:list element.
• Dynamic: a data type that can be set for a single column at the end of a schema to allow processing fields
as VARCHAR(100) columns named either as ‘Column<X>’ or, if the input includes a header, from the
column names appearing in the header. For more information, see Dynamic schema.
• Document: a data type that allows processing an entire XML document without regard to its content.
If your source database table contains any default value that is a function or an expression rather than a string, be
sure to remove the single quotation marks, if any, enclosing the default value in the end schema to avoid unexpected
results when creating database tables using this schema.
By default, the schema displayed on the Schema panel is based on the first table selected in the list of schemas
loaded (left panel). You can change the name of the schema and, according to your needs, customize
the schema structure in the schema panel.
The toolbar allows you to add, remove or move columns in your schema. In addition, you can load an XML
schema from a file or export the current schema as XML.
To retrieve a schema based on one of the loaded table schemas, select the database table schema name in
the drop-down list and click Retrieve schema. Note that the retrieved schema then overwrites any current
schema and does not retain any custom edits.
When done, click Finish to complete the database schema creation. All the retrieved schemas are displayed
in the Table schemas sub-folder under the relevant database connection node.
Now you can drag and drop any table schema of the database connection from the Repository tree view onto
the design workspace as a new database component or onto an existing component to reuse the metadata. For
more information, see How to use centralized metadata in a Job and How to set a repository schema.
To centralize the metadata information of a SAS connection in the Repository, you need to complete two major
tasks: setting up the SAS connection and retrieving table schemas through it.
Prerequisites:
• Talend Studio requires specific third-party Java libraries or database drivers (.jar files) to be installed in order
to connect to sources or targets. Due to license restrictions, Talend may not be able to ship certain required
libraries or drivers; in that situation, the connection wizard to be presented in the following sections displays
related information to help you identify and install the libraries or drivers in question. For more information,
see the Talend Installation Guide.
• Before carrying out the procedure below to configure your SAS connection, make sure that you retrieve your
metadata from the SAS server and export it in XML format.
2. Fill in the general properties of the connection, such as Name and Description and click Next to open a new
view on the wizard to define the connection details.
For further information, see the section on defining general properties in Setting up a database connection.
3. In the DB Type field of the [Database Connection] wizard, select SAS and fill in the fields that follow with
SAS connection information.
5. If needed, define the properties of the database in the corresponding fields in the Database Properties area.
The newly set connection to the defined database displays under the DB Connections folder in the Repository
tree view. This connection has several sub-folders among which Table schemas will group all schemas
relative to this connection after schema retrieval.
For information on locking and unlocking a project item and on different lock types, see Working collaboratively on project
items.
1. Right-click the SAS connection you created and then select Retrieve Schema from the contextual menu.
A new wizard opens up where you can filter and show different objects (tables, views) in your database
connection, select tables of interest, and define table schemas.
2. Filter database objects according to your needs, select one or more tables of interest, and modify the table
schemas if needed. For details, see Retrieving table schemas.
Make sure the data type in the Type column is correctly defined.
For more information regarding Java data types, including date pattern, see Java API Specification.
• Object: a generic Talend data type that allows processing data without regard to its content, for example,
a data file not otherwise supported can be processed with a tFileInputRaw component by specifying that
it has a data type of Object.
• List: a space-separated list of primitive type elements in an XML Schema definition, defined using the
xsd:list element.
• Dynamic: a data type that can be set for a single column at the end of a schema to allow processing fields
as VARCHAR(100) columns named either as ‘Column<X>’ or, if the input includes a header, from the
column names appearing in the header. For more information, see Dynamic schema.
• Document: a data type that allows processing an entire XML document without regard to its content.
When done, you can drag and drop any table schema of the SAS connection from the Repository tree view
onto the design workspace as a new component or onto an existing component to reuse the metadata. For
more information, see How to use centralized metadata in a Job and How to set a repository schema.
Prerequisites:
To be able to use IDoc SAP connectors and the IDoc SAP wizard correctly, you must install specific jar and dll
files validated and provided by SAP and then restart the Studio.
1. Copy the dll files, namely librfc32.dll, sapjco3.dll and sapjcorfc.dll into the C:\WINDOWS\system32\ folder
of the client workstation on which Talend Studio is installed.
If you already have an older librfc32.dll and sapjcorfc.dll in the {windows-dir}\system32 directory, replace them with
the ones that come with JCo.
2. Install the jar files, namely sapjco.jar, sapjco3.jar and sapidoc3.jar in the Java library of Talend Studio. For
more information on how to install libraries in Talend Studio, see the Talend Installation Guide.
1. If you are using 32-bit Java, copy the dll files into the C:\WINDOWS\SysWOW64\ folder of the client
workstation on which Talend Studio is installed;
If you are using 64-bit Java, copy the dll files into the C:\WINDOWS\system32\ folder.
If you already have an older librfc32.dll and sapjcorfc.dll in the target directory, replace them with the ones that come
with JCo.
2. Install the jar files, namely sapjco.jar, sapjco3.jar and sapidoc3.jar in the Java library of Talend Studio. For
more information on how to install libraries in Talend Studio, see the Talend Installation Guide.
2. Change to the directory where you installed SAP JCo:
cd {sapjco-install-path}
3. Extract the archive tar zxvf sapjco-linux*x.x.x.tgz, where x.x.x is the version of the SAP JCo.
4. Add the SAP JCo installation path to the library path:
export LD_LIBRARY_PATH={sapjco-install-path}
5. Finally, add {sapjco-install-path}/sapjco.jar in the Java library of Talend Studio. For more information on
how to install libraries in Talend Studio, see the Talend Installation Guide.
For more information, see the Talend Knowledge Base article Installing SAP Java Connector.
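If you want to verify the installation, a minimal sketch along these lines, assuming SAP JCo 3 and its com.sap.conn.jco.JCo class are available on the classpath, prints the connector version:

public class JcoCheckSketch {
    public static void main(String[] args) {
        // Requires sapjco3.jar on the classpath and the native library
        // reachable through java.library.path / LD_LIBRARY_PATH.
        System.out.println(com.sap.conn.jco.JCo.getVersion());
    }
}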
The SAP metadata wizard is composed of several procedures described in the sections below:
• Setting up a connection to an SAP system. For details, see Setting up an SAP connection.
• Retrieving SAP tables and table schemas, and previewing data in a table. For details, see Retrieving SAP tables.
• Retrieving SAP RFC and BAPI functions and their input and output schemas. For details, see Retrieving an
SAP function.
• Retrieving the metadata of the SAP BW objects. For details, see Retrieving the SAP BW objects metadata.
• Creating a file from SAP IDoc. For details, see Creating a file from SAP IDOC.
2. Fill in the generic properties such as Name, Purpose, and Description. The Status field is a customized field
you can define in Window > Preferences.
6. Complete the other fields with your SAP system connection details.
If you need to retrieve tables with more than 512 bytes per row using this connection later, click the [+] button
below the Additional Properties table to add a property api.use_z_talend_read_table and set its value to
true. For more information, click Help to open the dialog box that shows the instruction.
8. Click Finish to validate and save the settings. The newly created SAP connection node appears under the
Metadata > SAP Connections node in the Repository tree view. Now you can drop the SAP connection
node onto your design workspace as an SAP component, with the connection details automatically filled. If
you need to further edit an SAP connection, right-click the connection node and select Edit SAP Connection
from the contextual menu to open this wizard again and make your modifications.
For information on locking and unlocking a project item and on different lock types, see Working collaboratively on project
items.
In this step, we will retrieve SAP tables and table schemas of interest, and preview data in tables from the connected
SAP system.
1. In the Repository tree view, right-click the newly created SAP connection and select Retrieve SAP Table
from the contextual menu. The [SAP Table wizard] dialog box opens up.
2. In the Name and Description fields, enter the filter condition for the table name or table description if needed.
Then click Search and all the SAP tables that meet the filter condition will be listed in the table.
3. Select one or more tables of interest by selecting the corresponding check boxes in the Name column. Note
that the selected tables will be saved in the Repository, and any unselected tables that already exist in the
Repository will be removed from it.
All selected tables are listed in the Table Name area. You can remove the table(s) you have already selected
by clicking Remove Table in this step.
Click Refresh Table and the latest table schema will be displayed in the Current Table area.
Click Refresh Preview to preview data in the selected table. If an Error Message dialog box pops up and,
when you click Details, it displays DATA_BUFFER_EXCEEDED error information, you need to edit the SAP
connection to add a property api.use_z_talend_read_table and set its value to true. For more information,
see Setting up an SAP connection.
Modify the schema of the selected table in the Current Table area if needed. Note that the Ref Table column
value will be lost if you modify the Technical Name or Talend Name column value.
5. Click Finish and the tables of interest appear under the SAP Tables folder under the SAP connection node
in the Repository tree view. You can now drop the connection node or any table node under it onto your
design workspace as an SAP component, with all the metadata information automatically filled. If you need
to further edit a table, right-click the table and select Edit Table from the contextual menu to open this wizard
again and make your modifications.
1. In the Repository tree view, right-click the newly created SAP connection and select Retrieve Bapi from
the contextual menu. The SAP function wizard opens up.
In the Name Filter field, enter the filter condition for the function name if needed. To use the
custom function Z_TALEND_READ_TABLE, you need to install an SAP module provided under the
directory <Talend_Studio>\plugins\org.talend.libraries.sap_<version>\resources. For how to install the
SAP module, see the file readme.txt under the directory.
Click Search. All SAP functions that meet the filter condition will be listed in the Functions area.
2. Double-click the name of the function of interest in the Functions area. The input and output parameters will
be displayed in the Parameter tab.
3. Click the Test-it view to test the retrieval of the SAP data.
4. Click the Value cell for the corresponding input parameter that needs an input value, and then click the [...]
button in the cell and enter the value in the pop-up [Setup input parameter] dialog box.
5. Click Run to get the values of the output parameters returned by the function in the Output
Parameters (Preview) table.
7. Click Finish. The function and its schemas of interest will be saved under the SAP Bapi folder under your
SAP connection in the Repository tree view.
If you need to further edit them, right-click the function and select Edit Bapi from the contextual menu to
open this wizard again and make your modifications.
8. You can also retrieve the input and output schemas as XML metadata in either of the following ways:
• Select the Import schema as xml metadata check box and the input and output schemas of interest in
the sixth step.
• Right-click the name of the function that you have just retrieved under the SAP Bapi folder and select
Retrieve As Xml Metadata from the contextual menu.
The selected schema will be saved under the File xml node in the Repository tree view. For the usage of the
XML metadata, see the scenario of tSAPBapi in Talend Components Reference Guide.
1. In the Repository tree view, right-click the newly created SAP connection and select Retrieve SAP BW
metadata from the contextual menu. The [SAP BW Table Wizard] dialog box opens up.
2. In the Search in drop-down list, select the type of the SAP BW objects whose table metadata you want to
retrieve.
3. In the Name field, enter the filter criteria for the object name to narrow your search if needed.
In the Description field, enter the filter criteria for the object description to narrow your search if needed.
Note that for the Data Store Object, InfoCube and InfoObject types, the filter criteria for the Name field
and the Description field act together as an OR operator; that is to say, all objects that match either the filter
criteria for the Name field or the filter criteria for the Description field will be returned.
4. For the Data Source, InfoCube and InfoObject objects, you can select the data type from the Type drop-down
list to filter the search results.
5. For the Data Source objects, you can also enter the filter condition for the Data Source system name if needed.
6. Click Search and all the SAP BW objects that match the criteria will be listed in the table. Select one or more
objects of interest by selecting the corresponding check boxes in the Name column and then wait until the
Creation Status for all the selected objects is Success.
Note that the tables and their schemas of the selected objects will finally be saved in the Repository and the
tables of the unselected objects will be removed from the Repository if they already exist in the Repository.
For the InfoObject type objects, only the Attribute, Hierarchy and Text information can be extracted, and
the number of the columns for each type of information is displayed in the Column Number field with the
format A[X] H[Y] T[Z], where X, Y and Z represent the number of the columns for the Attribute, Hierarchy
and Text information respectively.
All tables of the selected objects are listed in the Table Name area. For the InfoObject table, the type
information is appended to the name of each table. You can remove the table(s) by clicking Remove Table
in this step.
Click Refresh Table and the latest table schema will be displayed in the Current Table area. You can modify
the schema of the selected table in the Current Table area if needed.
You can also click Refresh Preview to preview the data in the selected table if needed. Note, however, that the
Refresh Preview button is not available when you search for Data Source type objects.
8. Click Finish and the tables and their schemas appear under the folder for the corresponding object type in
the Repository tree view.
You can now drag and drop any SAP BW table node onto your design workspace as an SAP BW component,
with all the metadata information automatically filled. For more information about the SAP BW components,
see Talend Components Reference Guide.
If you need to further edit or read a table for an object, right-click the table and select the corresponding item
from the contextual menu to open this wizard again and make your modifications.
Note that when reading data from the Data Source and InfoCube objects or writing data to the direct
updatable Data Store objects, the custom function modules Z_TALEND_INFOPROV_READ_RFC and
Z_TALEND_ODSO_UPSERT_RFC have to be installed. For how to install these modules, see the readme.txt
file provided under the <Talend_Studio>\plugins\org.talend.libraries.sap_<version>\resources directory.
3. In the iDoc name field, give a name to your connection to the SAP IDoc file.
4. In the Program Id field, fill in the program identifier as it is defined in the RFC destination you want to use.
5. In the Gateway Service field, fill in the name of the service that enables the Talend system to communicate with
the SAP system. To get the service name, you can edit the services file in the C:\WINDOWS\system32\drivers\etc
\ folder of the workstation on which the SAP server is installed.
6. In the Output Format area, you can select XML and/or HTML check boxes according to the type of output
you want to generate from SAP IDoc.
8. Click Finish to close the dialog box and validate the creation of the IDoc file connection.
The new connection displays under the SAP iDocs node of your SAP connection in the Repository tree view.
You can now use it with the tSAPIDocInput and tSAPIDocOutput components. For more information on these
components, see Talend Components Reference Guide.
The file schema creation is very similar for all types of file connections: Delimited, Positional, Regex, XML, or Ldif.
Unlike the database connection wizard, the [New Delimited File] wizard gathers both file connection and schema
definitions in a four-step procedure.
To create a File Delimited connection from scratch, expand Metadata in the Repository tree view, right-click
File Delimited and select Create file delimited from the contextual menu to open the file metadata setup wizard.
To centralize a file connection and its schema you have defined in a Job, click the icon in the Basic settings
view of the relevant component with its Property Type set to Built-in to open the file metadata setup wizard.
Then define the general properties and file schema in the wizard.
1. In the file metadata setup wizard, fill in the Name field, which is mandatory, and the Purpose and Description
fields if you choose to do so. The information you provide in the Description field will appear as a tooltip
when you move your mouse pointer over the file connection.
2. If needed, set the version and status in the Version and Status fields respectively. You can also manage
the version and status of a repository item in the [Project Settings] dialog box. For more information, see
Version management and Status management respectively.
3. If needed, click the Select button next to the Path field to select a folder under the File delimited node to
hold your newly created file connection. Note that you cannot select a folder if you are editing an existing
connection, but you can drag and drop it to a new folder whenever you want.
1. Click the Browse... button to search for the file on the local host or a LAN host.
2. Select the OS Format the file was created in. This information is used to prefill subsequent step fields. If the
list doesn't include the appropriate format, ignore it.
3. The File viewer gives an instant picture of the file loaded. Check the file consistency, the presence of header
and more generally the file structure.
On this view, you can refine the various settings of your file so that the file schema can be properly retrieved.
1. Set the Encoding type, and the Field and Row separators in the File Settings area.
2. Depending on your file type (csv or delimited), set the Escape and Enclosure characters to be used (a short
parsing sketch follows this list).
3. If the file preview shows a header message, exclude the header from the parsing. Set the number of header
rows to be skipped. Also, if you know that the file contains footer information, set the number of footer lines
to be ignored.
4. The Limit of Rows allows you to restrict the extent of the file being parsed. If needed, select the Limit check
box and set or select the desired number of rows.
6. Check the Set heading row as column names box to use the first parsed row as labels for schema
columns. Note that the number of header rows to be skipped is then incremented by 1.
7. Click Refresh on the preview panel for the settings to take effect and view the result on the viewer.
8. Click Next to proceed to the final step to check and customize the generated file schema.
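To make the effect of these settings concrete, here is a standalone Java sketch; the sample data, separator, enclosure character and header count are all invented. It skips a header row and splits each remaining row on the field separator while honouring the enclosure character:

import java.io.BufferedReader;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

public class DelimitedParseSketch {
    // Minimal field splitter honouring a field separator and an
    // enclosure character (no escape handling, for brevity).
    static List<String> split(String row, char sep, char enclosure) {
        List<String> fields = new ArrayList<String>();
        StringBuilder current = new StringBuilder();
        boolean inEnclosure = false;
        for (char c : row.toCharArray()) {
            if (c == enclosure) {
                inEnclosure = !inEnclosure;        // toggle quoted state
            } else if (c == sep && !inEnclosure) {
                fields.add(current.toString());    // field boundary
                current.setLength(0);
            } else {
                current.append(c);
            }
        }
        fields.add(current.toString());
        return fields;
    }

    public static void main(String[] args) throws Exception {
        String data = "id;name\n1;\"Doe; John\"\n2;Smith";
        BufferedReader reader = new BufferedReader(new StringReader(data));
        int headerRows = 1;                        // rows to skip
        int rowIndex = 0;
        String line;
        while ((line = reader.readLine()) != null) {
            if (rowIndex++ < headerRows) continue; // skip the header
            System.out.println(split(line, ';', '"'));
        }
    }
}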
The last step shows the Delimited File schema generated. You can customize the schema using the toolbar
underneath the table.
Make sure the data type in the Type column is correctly defined.
For more information regarding Java data types, including date pattern, see Java API Specification.
• Object: a generic Talend data type that allows processing data without regard to its content, for example, a
data file not otherwise supported can be processed with a tFileInputRaw component by specifying that it has
a data type of Object.
• List: a space-separated list of primitive type elements in an XML Schema definition, defined using the xsd:list
element.
• Dynamic: a data type that can be set for a single column at the end of a schema to allow processing fields as
VARCHAR(100) columns named either as ‘Column<X>’ or, if the input includes a header, from the column
names appearing in the header. For more information, see Dynamic schema.
• Document: a data type that allows processing an entire XML document without regard to its content.
1. If the Delimited file which the schema is based on has been changed, use the Guess button to generate the
schema again. Note that if you have customized the schema, the Guess feature does not retain these changes.
2. Click Finish. The new schema is displayed under the relevant File Delimited connection node in the
Repository tree view.
Now you can drag and drop the file connection or any schema of it from the Repository tree view onto the design
workspace as a new component or onto an existing component to reuse the metadata. For further information
about how to use the centralized metadata in a Job, see How to use centralized metadata in a Job and How to set
a repository schema.
To modify an existing file connection, right-click it from the Repository tree view, and select Edit file delimited
to open the file metadata setup wizard.
To add a new schema to an existing file connection, right-click the connection from the Repository tree view and
select Retrieve Schema from the contextual menu.
To edit an existing file schema, right-click the schema from the Repository tree view and select Edit Schema
from the contextual menu.
The [New Positional File] wizard gathers both file connection and schema definitions in a four-step procedure.
To create a File Positional connection from scratch, expand Metadata in the Repository tree view, right-click
File positional and select Create file positional from the contextual menu to open the file metadata setup wizard.
To centralize a file connection and its schema you have defined in a Job, click the icon in the Basic settings
view of the relevant component with its Property Type set to Built-in to open the file metadata setup wizard.
Then define the general properties and file schema in the wizard.
1. In the file metadata setup wizard, fill in the Name field, which is mandatory, and the Purpose and Description
fields if you choose to do so. The information you provide in the Description field will appear as a tooltip
when you move your mouse pointer over the file connection.
2. If needed, set the version and status in the Version and Status fields respectively. You can also manage
the version and status of a Repository item in the [Project Settings] dialog box. For more information, see
Version management and Status management respectively.
3. If needed, click the Select button next to the Path field to select a folder under the File positional node to
hold your newly created file connection. Note that you cannot select a folder if you are editing an existing
connection, but you can drag and drop it to a new folder whenever you want.
1. Click the Browse... button to search for the file on the local host or a LAN host.
2. Select the Encoding type and the OS Format the file was created in. This information is used to prefill
subsequent step fields. If the list doesn't include the appropriate format, ignore the OS format.
The file is loaded and the File Viewer area shows a file preview and allows you to place your position markers.
3. Click the file preview and set the markers against the ruler to define the file column properties. The orange
arrow helps you refine the position.
The Field Separator and Marker Position fields are automatically filled with a series of figures separated
by commas.
The figures in the Field Separator field are the numbers of characters between the markers, which represent
the lengths of the columns of the loaded file. The asterisk symbol means all remaining characters on the
row, starting from the preceding marker position. You can change the figures to specify the column lengths
precisely (a short parsing sketch follows below).
The Marker Position field shows the exact position of each marker on the ruler, in units of characters. You
can change the figures to specify the positions precisely.
To move a marker, press its arrow and drag it to the new position. To remove a marker, press its arrow and
drag it towards the ruler until an icon appears.
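As an illustration of how the comma-separated figures drive the parsing, the following minimal Java sketch, with invented column lengths and sample row, cuts a fixed-width row into columns; -1 stands in for the trailing asterisk:

public class PositionalParseSketch {
    public static void main(String[] args) {
        // Column lengths as they would appear in the Field Separator
        // field, e.g. "4,8,*": 4 chars, 8 chars, then the rest of the row.
        int[] lengths = { 4, 8, -1 };   // -1 stands for the trailing "*"
        String row = "0001John    Paris, France";
        int pos = 0;
        for (int len : lengths) {
            int end = (len < 0) ? row.length()
                                : Math.min(pos + len, row.length());
            System.out.println("[" + row.substring(pos, end) + "]");
            pos = end;
        }
    }
}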
On this view, you define the file parsing parameters so that the file schema can be properly retrieved.
At this stage, the preview shows the file columns upon the markers' positions.
1. Set the Field and Row separators in the File Settings area.
• If needed, change the figures in the Field Separator field to specify the column lengths precisely.
• If the row separator of your file is not the standard EOL (end of line), select Custom String from the Row
Separator list and specify the character string in the Corresponding Character field.
2. If your file has any header rows to be excluded from the data content, select the Header check box in the
Rows To Skip area and define the number of rows to be ignored in the corresponding field. Also, if you know
that the file contains footer information, select the Footer check box and set the number of rows to be ignored.
3. The Limit of Rows area allows you to restrict the extent of the file being parsed. If needed, select the Limit
check box and set or select the desired number of rows.
4. If the file contains column labels, select the Set heading row as column names check box to transform the
first parsed row to labels for schema columns. Note that the number of header rows to be skipped is then
incremented by 1.
5. Click Refresh Preview on the Preview panel for the settings to take effect and view the result on the viewer.
6. Click Next to proceed to the next view to check and customize the generated file schema.
1. Rename the schema (by default, metadata) and edit the schema columns as needed.
Make sure the data type in the Type column is correctly defined.
For more information regarding Java data types, including date pattern, see Java API Specification.
• Object: a generic Talend data type that allows processing data without regard to its content, for example,
a data file not otherwise supported can be processed with a tFileInputRaw component by specifying that
it has a data type of Object.
• List: a space-separated list of primitive type elements in an XML Schema definition, defined using the
xsd:list element.
• Dynamic: a data type that can be set for a single column at the end of a schema to allow processing fields
as VARCHAR(100) columns named either as ‘Column<X>’ or, if the input includes a header, from the
column names appearing in the header. For more information, see Dynamic schema.
• Document: a data type that allows processing an entire XML document without regard to its content.
2. To generate the Positional File schema again, click the Guess button. Note, however, that any edits to the
schema might be lost after "guessing" the file-based schema.
The new schema is displayed under the relevant File positional connection node in the Repository tree view. You
can drop the defined metadata from the Repository onto the design workspace as a new component or onto an
existing component to reuse the metadata. For further information about how to use the centralized metadata in a
Job, see How to use centralized metadata in a Job and How to set a repository schema.
To modify an existing file connection, right-click it from the Repository tree view, and select Edit file positional
to open the file metadata setup wizard.
To add a new schema to an existing file connection, right-click the connection from the Repository tree view and
select Retrieve Schema from the contextual menu.
To edit an existing file schema, right-click the schema from the Repository tree view and select Edit Schema
from the contextual menu.
The [New RegEx File] wizard gathers both file connection and schema definitions in a four-step procedure.
To create a File Regex connection from scratch, expand the Metadata node in the Repository tree view, right-
click File Regex and select Create file regex from the contextual menu to open the file metadata setup wizard.
To centralize a file connection and its schema you have defined in a Job, click the icon in the Basic settings
view of the relevant component with its Property Type set to Built-in to open the file metadata setup wizard.
Then define the general properties and file schema in the wizard.
1. In the file metadata setup wizard, fill in the Name field, which is mandatory, and the Purpose and Description
fields if you choose to do so. The information you provide in the Description field will appear as a tooltip
when you move your mouse pointer over the file connection.
2. If needed, set the version and status in the Version and Status fields respectively. You can also manage
the version and status of a repository item in the [Project Settings] dialog box. For more information, see
Version management and Status management respectively.
3. If needed, click the Select button next to the Path field to select a folder under the File regex node to hold your
newly created file connection. Note that you cannot select a folder if you are editing an existing connection,
but you can drag and drop it to a new folder whenever you want.
1. Click the Browse... button to search for the file on the local host or a LAN host.
2. Select the Encoding type and the OS Format the file was created in. This information is used to prefill
subsequent step fields. If the list doesn't include the appropriate format, ignore the OS format.
On this view, you define the file parsing parameters so that the file schema can be properly retrieved.
1. Set the Field and Row separators in the File Settings area.
• If needed, change the figures in the Field Separator field to specify the column lengths precisely.
• If the row separator of your file is not the standard EOL, select Custom String from the Row Separator
list and specify the character string in the Corresponding Character field.
2. In the Regular Expression settings panel, enter the regular expression to be used to delimit the file (a short
example follows this list). Make sure to enclose the Regex code in single or double quotes accordingly.
3. If your file has any header rows to be excluded from the data content, select the Header check box in the
Rows To Skip area and define the number of rows to be ignored in the corresponding field. Also, if you know
that the file contains footer information, select the Footer check box and set the number of rows to be ignored.
4. The Limit of Rows allows you to restrict the extent of the file being parsed. If needed, select the Limit check
box and set or select the desired number of rows.
5. If the file contains column labels, select the Set heading row as column names check box to transform the
first parsed row to labels for schema columns. Note that the number of header rows to be skipped is then
incremented by 1.
6. Then click Refresh preview to take the changes into account. The button changes to Stop until the preview
is refreshed.
7. Click Next to proceed to the next view where you can check and customize the generated Regex File schema.
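As a rough illustration of regex-based parsing, the sketch below uses an invented log row and regular expression; each capturing group of the expression yields one field, broadly the way the Regular Expression settings delimit a file:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexParseSketch {
    public static void main(String[] args) {
        // A hypothetical log row; the capturing groups of the regular
        // expression delimit the fields.
        String row = "2016-12-25 10:42:13 INFO Job started";
        Pattern pattern = Pattern.compile("^(\\S+) (\\S+) (\\S+) (.*)$");
        Matcher matcher = pattern.matcher(row);
        if (matcher.matches()) {
            for (int i = 1; i <= matcher.groupCount(); i++) {
                System.out.println("column " + i + ": " + matcher.group(i));
            }
        }
    }
}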
Make sure the data type in the Type column is correctly defined.
For more information regarding Java data types, including date pattern, see Java API Specification.
• Object: a generic Talend data type that allows processing data without regard to its content, for example,
a data file not otherwise supported can be processed with a tFileInputRaw component by specifying that
it has a data type of Object.
• List: a space-separated list of primitive type elements in an XML Schema definition, defined using the
xsd:list element.
• Dynamic: a data type that can be set for a single column at the end of a schema to allow processing fields
as VARCHAR(100) columns named either as ‘Column<X>’ or, if the input includes a header, from the
column names appearing in the header. For more information, see Dynamic schema.
• Document: a data type that allows processing an entire XML document without regard to its content.
2. To retrieve or update the Regex File schema, click Guess. Note, however, that any edits to the schema might
be lost after guessing the file-based schema.
The new schema is displayed under the relevant File regex node in the Repository tree view. You can drop
the defined metadata from the Repository onto the design workspace as a new component or onto an existing
component to reuse the metadata. For further information about how to use the centralized metadata in a Job, see
How to use centralized metadata in a Job and How to set a repository schema.
To modify an existing file connection, right-click it from the Repository tree view, and select Edit file regex to
open the file metadata setup wizard.
To add a new schema to an existing file connection, right-click the connection from the Repository tree view and
select Retrieve Schema from the contextual menu.
To edit an existing file schema, right-click the schema from the Repository tree view and select Edit Schema
from the contextual menu.
Depending on the option you select, the wizard helps you create either an input or an output file connection. In a
Job, the tFileInputXML and tExtractXMLField components use the input connection created to read XML files,
whereas tAdvancedFileOutputXML uses the output schema created to either write an XML file, or to update
an existing XML file.
For further information about reading an XML file, see Setting up XML metadata for an input file.
For further information about writing an XML file, see Setting up XML metadata for an output file.
To create an XML file connection from scratch, expand the Metadata node in the Repository tree view, right-
click File XML and select Create file XML from the contextual menu to open the file metadata setup wizard.
To centralize a file connection and its schema you have defined in a Job, click the icon in the Basic settings
view of the relevant component with its Property Type set to Built-in to open the file metadata setup wizard.
Then define the general properties and file schema in the wizard.
In this step, the general metadata properties such as the Name, Purpose and Description are set.
1. In the file metadata setup wizard, fill in the Name field, which is mandatory, and the Purpose and Description
fields if you choose to do so. The information you provide in the Description field will appear as a tooltip
when you move your mouse pointer over the file connection.
When you enter the general properties of the metadata to be created, you need to define the type of connection as
either input or output. It is therefore advisable to enter information that will help you distinguish between your input
and output schemas.
2. If needed, set the version and status in the Version and Status fields respectively. You can also manage
the version and status of a Repository item in the [Project Settings] dialog box. For more information, see
Version management and Status management respectively.
3. If needed, click the Select button next to the Path field to select a folder under the File XML node to hold your
newly created file connection. Note that you cannot select a folder if you are editing an existing connection,
but you can drag and drop it to a new folder whenever you want.
In this step, the type of metadata is set as either input or output. For this procedure, the metadata of interest is input.
This procedure describes how to upload an XML file to obtain the XML tree structure. To upload an XML Schema
Definition (XSD) file, see Uploading an XSD file.
The example input XML file used to demonstrate this step contains some contact information, and the structure
is like the following:
<contactInfo>
<contact>
<id>1</id>
<firstName>Michael</firstName>
<lastName>Jackson</lastName>
<company>Talend</company>
<city>Paris</city>
<phone>2323</phone>
</contact>
<contact>
<id>2</id>
<firstName>Elisa</firstName>
<lastName>Black</lastName>
<company>Talend</company>
<city>Paris</city>
<phone>4499</phone>
</contact>
...
</contactInfo>
1. Click Browse... and browse your directory to the XML file to be uploaded. Alternatively, enter the access
path to the file.
The Schema Viewer area displays a preview of the XML structure. You can expand and visualize every
level of the file's XML tree structure.
2. Enter the Encoding type in the corresponding field if the system does not detect it automatically.
3. In the Limit field, enter the number of columns on which the XPath query is to be executed, or 0 if you want
to run it against all of the columns.
An XSD file is used to define the schema of XML files. The structure and element data types of the example XML
file above can be described using the following XSD, which is used as the example XSD input in this section.
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
<xs:element name="contactInfo">
<xs:complexType>
<xs:sequence>
<xs:element maxOccurs="unbounded" ref="contact"/>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="contact">
<xs:complexType>
<xs:sequence>
<xs:element ref="id"/>
<xs:element ref="firstName"/>
<xs:element ref="lastName"/>
<xs:element ref="company"/>
<xs:element ref="city"/>
<xs:element ref="phone"/>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="id" type="xs:integer"/>
<xs:element name="firstName" type="xs:NCName"/>
<xs:element name="lastName" type="xs:NCName"/>
<xs:element name="company" type="xs:NCName"/>
<xs:element name="city" type="xs:NCName"/>
<xs:element name="phone" type="xs:integer"/>
</xs:schema>
• the data will be saved in the Repository, and therefore the metadata will not be affected by the deletion or displacement
of the file.
1. Click Browse... and browse your directory to the XSD file to be uploaded. Alternatively, enter the access
path to the file.
2. In the dialog box that appears, select an element from the Root list as the root of your XML tree, and click OK.
The Schema Viewer area displays a preview of the XML structure. You can expand and visualize every
level of the file's XML tree structure.
3. Enter the Encoding type in the corresponding field if the system does not detect it automatically.
4. In the Limit field, enter the number of columns on which the XPath query is to be executed, or 0 if you want
to run it against all of the columns.
The wizard displays the following views:
• Source Schema: tree view of the XML file.
• Target Schema: extraction and iteration information.
• Preview: preview of the target schema, together with the input data of the selected columns displayed in the
defined order.
First define an XPath loop and the maximum number of times the loop can run (a worked sketch follows this procedure). To do so:
1. Populate the XPath loop expression field with the absolute XPath expression for the node to be iterated
upon. There are two ways to do this, either:
• enter the absolute XPath expression for the node to be iterated upon (Enter the full expression or press
Ctrl+Space to use the autocompletion list),
• drop a node from the tree view under Source schema onto the Absolute XPath expression field.
2. In the Loop limit field, specify the maximum number of times the selected node can be iterated, or -1 if you
want to run it against all of the rows.
3. Define the fields to be extracted by dragging the node(s) of interest from the Source Schema tree into the
Relative or absolute XPath expression fields.
You can select several nodes to drop on the table by pressing Ctrl or Shift and clicking the nodes of interest. The
arrows linking the individual nodes selected on the Source Schema to the Fields to extract table are blue;
the other ones are gray.
4. If needed, you can add as many columns to be extracted as necessary, delete columns or change the column
order using the toolbar:
• Add or delete a column using the add and delete buttons.
• Change the order of the columns using the up and down arrow buttons.
5. In the Column name fields, enter labels for the columns to be displayed in the schema Preview area.
6. Click Refresh Preview to display a preview of the target schema. The fields are consequently displayed in
the schema according to the defined order.
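To see what the loop expression and the relative XPath expressions do together, here is a self-contained Java sketch that reuses a trimmed version of the contactInfo sample above: it iterates on the loop node /contactInfo/contact and, for each iteration, extracts two fields through relative expressions, which mirrors what the Fields to extract table defines:

import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

public class XPathLoopSketch {
    public static void main(String[] args) throws Exception {
        String xml = "<contactInfo><contact><id>1</id>"
                + "<firstName>Michael</firstName></contact>"
                + "<contact><id>2</id>"
                + "<firstName>Elisa</firstName></contact></contactInfo>";
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)));
        XPath xpath = XPathFactory.newInstance().newXPath();
        // Absolute XPath loop expression: the node iterated upon.
        NodeList loop = (NodeList) xpath.evaluate("/contactInfo/contact",
                doc, XPathConstants.NODESET);
        for (int i = 0; i < loop.getLength(); i++) {
            Node contact = loop.item(i);
            // Relative XPath expressions: the fields to extract.
            System.out.println(xpath.evaluate("id", contact) + " | "
                    + xpath.evaluate("firstName", contact));
        }
    }
}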
The schema generated displays the columns selected from the XML file and allows you to further define the
schema.
1. If needed, rename the metadata in the Name field (metadata, by default), add a Comment, and make further
modifications, for example:
Make sure the data type in the Type column is correctly defined.
For more information regarding Java data types, including date pattern, see Java API Specification.
• Object: a generic Talend data type that allows processing data without regard to its content, for example,
a data file not otherwise supported can be processed with a tFileInputRaw component by specifying that
it has a data type of Object.
• List: a space-separated list of primitive type elements in an XML Schema definition, defined using the
xsd:list element.
• Dynamic: a data type that can be set for a single column at the end of a schema to allow processing fields
as VARCHAR(100) columns named either as ‘Column<X>’ or, if the input includes a header, from the
column names appearing in the header. For more information, see Dynamic schema.
• Document: a data type that allows processing an entire XML document without regard to its content.
2. If the XML file which the schema is based on has been changed, click the Guess button to generate the
schema again. Note that if you have customized the schema, the Guess feature does not retain these changes.
3. Click Finish. The new file connection, along with its schema, appears under the File XML node in the
Repository tree view.
Now you can drag and drop the file connection or any schema of it from the Repository tree view onto the design
workspace as a new tFileInputXML or tExtractXMLField component or onto an existing component to reuse
the metadata. For further information about how to use the centralized metadata in a Job, see How to use centralized
metadata in a Job and How to set a repository schema.
To modify an existing file connection, right-click it from the Repository tree view, and select Edit file xml to
open the file metadata setup wizard.
To add a new schema to an existing file connection, right-click the connection from the Repository tree view and
select Retrieve Schema from the contextual menu.
To edit an existing file schema, right-click the schema from the Repository tree view and select Edit Schema
from the contextual menu.
In this step, the general metadata properties such as the Name, Purpose and Description are set.
1. In the file metadata setup wizard, fill in the Name field, which is mandatory, and the Purpose and Description
fields if you choose to do so. The information you provide in the Description field will appear as a tooltip
when you move your mouse pointer over the file connection.
When you enter the general properties of the metadata to be created, you need to define the type of connection as
either input or output. It is therefore advisable to enter information that will help you distinguish between your input
and output schemas.
2. If needed, set the version and status in the Version and Status fields respectively. You can also manage
the version and status of a repository item in the [Project Settings] dialog box. For more information, see
Version management and Status management respectively.
3. If needed, click the Select button next to the Path field to select a folder under the File XML node to hold your
newly created file connection. Note that you cannot select a folder if you are editing an existing connection,
but you can drag and drop it to a new folder whenever you want.
In this step, the type of metadata is set as either input or output. For this procedure, the metadata of interest is output.
2. Click Next to define the output file, either from an XML or XSD file or from scratch.
In this step, you will choose whether to create your file manually or from an existing XML or XSD file. If
you choose the Create manually option you will have to configure your schema, source and target columns
yourself at step 4 in the wizard. The file will be created in a Job using an XML output component such as
tAdvancedFileOutputXML.
In this procedure, we will create the output file structure by loading an existing XML file. To create the output XML
structure from an XSD file, see Defining the output file structure using an XSD file.
To create the output XML structure from an XML file, do the following:
2. Click the Browse... button next to the XML or XSD File field, browse to the access path to the XML file
the structure of which is to be applied to the output file, and double-click the file.
The File Viewer area displays a preview of the XML structure, and the File Content area displays a maximum
of the first 50 rows of the file.
3. Enter the Encoding type in the corresponding field if the system does not detect it automatically.
4. In the Limit field, enter the number of columns on which the XPath query is to be executed, or enter 0 if
you want it to be run against all of the columns.
5. In the Output File field, in the Output File Path zone, browse to or enter the path to the output file. If the
file does not exist as yet, it will be created during the execution of a Job using a tAdvancedFileOutputXML
component. If the file already exists, it will be overwritten.
This procedure describes how to define the output XML file structure from an XSD file. To define the XML
structure from an XML file, see Defining the output file structure using an existing XML file.
The data will be saved in the Repository, and therefore the metadata will not be affected by the deletion or displacement of the file.
To create the output XML structure from an XSD file, do the following:
2. Click the Browse... button next to the XML or XSD File field, browse to the access path to the XSD file the
structure of which is to be applied to the output file, and double-click the file.
3. In the dialog box that appears, select an element from the Root list as the root of your XML tree, and click OK.
The File Viewer area displays a preview of the XML structure, and the File Content area displays a maximum
of the first 50 rows of the file.
4. Enter the Encoding type in the corresponding field if the system does not detect it automatically.
5. In the Limit field, enter the number of columns on which the XPath query is to be executed, or enter 0 if
you want it to be run against all of the columns.
6. In the Output File field, in the Output File Path zone, browse to or enter the path to the output file. If the
file does not exist as yet, it will be created during the execution of a Job using a tAdvancedFileOutputXML
component. If the file already exists, it will be overwritten.
In this step, you need to define the output schema. The following operations are available:

Define a group element
In the Linker Target area, right-click the element of interest and select Set As Group Element from the contextual menu.
You can set a parent element of the loop element as a group element on the condition that the parent element is not the root of the XML tree.

Create a child element for an element
In the Linker Target area, either:
• Right-click the element of interest and select Add Sub-element from the contextual menu, enter a name for the sub-element in the dialog box that appears, and click OK, or
• Select the element of interest, click the [+] button at the bottom, select Create as sub-element in the dialog box that appears, and click OK. Then, enter a name for the sub-element in the next dialog box and click OK.

Create an attribute for an element
In the Linker Target area:
• Select the element of interest, click the [+] button at the bottom, select Create as attribute in the dialog box that appears, and click OK. Then, enter a name for the attribute in the next dialog box and click OK.

Create a name space for an element
In the Linker Target area, either:
• Right-click the element of interest and select Add Name Space from the contextual menu, enter a name for the name space in the dialog box that appears, and click OK, or
• Select the element of interest, click the [+] button at the bottom, select Create as name space in the dialog box that appears, and click OK. Then, enter a name for the name space in the next dialog box and click OK.

Delete one or more elements/attributes/name spaces
In the Linker Target area, do one of the following:
• Right-click the element(s)/attribute(s)/name space(s) of interest and select Delete from the contextual menu,
• Select the element(s)/attribute(s)/name space(s) of interest and click the [x] button at the bottom, or
• Select the element(s)/attribute(s)/name space(s) of interest and press the Delete key.

Adjust the order of one or more elements
In the Linker Target area, select the element(s) of interest and click the up and down arrow buttons.

Set a static value for an element/attribute/name space
In the Linker Target area, right-click the element/attribute/name space of interest and select Set A Fix Value from the contextual menu.
• The value you set will replace any value retrieved for the corresponding column from the incoming data flow in your Job.
• You can set a static value for a child element of the loop element only, on the condition that the element does not have its own children and does not have a source-target mapping on it.

Create a source-target mapping
Select the column of interest in the Linker Source area, drop it onto the node of interest in the Linker Target area, and select Create as sub-element of target node, Create as attribute of target node, or Add linker to target node according to your need in the dialog box that appears, and click OK.
If you choose an option that is not permitted for the target node, you will see a warning message and your operation will fail.

Remove a source-target mapping
In the Linker Target area, right-click the node of interest and select Disconnect Linker from the contextual menu.

Create an XML tree from another XML or XSD file
Right-click any schema item in the Linker Target area and select Import XML Tree from the contextual menu to load another XML or XSD file. Then, you need to create source-target mappings manually and define the output schema all over again.
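For instance (with hypothetical names), creating a name space foo with the value http://example.com/foo on an element produces a declaration of the following form on that element in the generated XML:

<catalog xmlns:foo="http://example.com/foo">
  ...
</catalog>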
You can select and drop several fields at a time, using the Ctrl + Shift technique to make multiple selections, therefore
making mapping faster. You can also make multiple selections for right-click operations.
1. In the Linker Target area, right-click the element you want to run a loop on and select Set As Loop Element
from the contextual menu.
2. Define other output file properties as needed, and then click Next to view and customize the end schema.
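As an illustration only (the element names are hypothetical), if row is set as the loop element under a customers root element and two columns are mapped onto its children, each record of the incoming flow produces one row element in the generated file:

<customers>
  <row>
    <id>1</id>
    <name>Alice</name>
  </row>
  <row>
    <id>2</id>
    <name>Bob</name>
  </row>
</customers>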
Step 5 of the wizard displays the end schema generated and allows you to further define the schema.
1. If needed, rename the metadata in the Name field (metadata, by default), add a Comment, and make further
modifications, for example:
2. If the XML file which the schema is based on has been changed, click the Guess button to generate the
schema again. Note that if you have customized the schema, the Guess feature does not retain these changes.
3. Click Finish. The new file connection, along with its schema, is displayed under the relevant File XML
metadata node in the Repository tree view.
Now you can drag and drop the file connection or any schema of it from the Repository tree view onto the design
workspace as a new tAdvancedFileOutputXML component or onto an existing component to reuse the metadata.
To modify an existing file connection, right-click it from the Repository tree view, and select Edit file xml to
open the file metadata setup wizard.
To add a new schema to an existing file connection, right-click the connection from the Repository tree view and
select Retrieve Schema from the contextual menu.
To edit an existing file schema, right-click the schema from the Repository tree view and select Edit Schema
from the contextual menu.
You can centralize an Excel file connection either from an existing Excel file, or from Excel file property settings
defined in a Job.
To centralize a File Excel connection and its schema from an Excel file, expand Metadata in the Repository
tree view, right-click File Excel and select Create file Excel from the contextual menu to open the file metadata
setup wizard.
To centralize a file connection and its schema you have already defined in a Job, click the icon in the Basic
settings view of the relevant component, with its Property Type set to Built-in, to open the file metadata setup
wizard.
• Define the general information that will identify the file connection. See Defining the general properties.
• Parse the file to retrieve the file schema. See Parsing the file.
1. In the file metadata setup wizard, fill in the Name field, which is mandatory, and the Purpose and Description
fields if needed. The information you provide in the Description field will appear as a tooltip when you move
your mouse pointer over the file connection.
2. If needed, set the version and status in the Version and Status fields respectively. You can also manage
the version and status of a repository item in the [Project Settings] dialog box. For more information, see
Version management and Status management respectively.
3. If needed, click the Select button next to the Path field to select a folder under the File Excel node to hold
your newly created file connection.
1. Click the Browse... button to browse to the file and fill out the File field.
Skip this step if you are saving an Excel file connection defined in a component because the file path is
already filled in the File field.
2. If the uploaded file is an Excel 2007 file, make sure that the Read excel2007 file format(xlsx) check box
is selected.
3. By default, user mode is selected. If the uploaded xlsx file is extremely large, select Less memory consumed
for large excel(Event mode) from the Generation mode list to prevent out-of-memory errors.
4. In the File viewer and sheets setting area, view the file content and select the sheet or sheets of interest.
• From the Please select sheet drop-down list, select the sheet you want to view. The preview table displays
the content of the selected sheet.
By default the file preview table displays the first sheet of the file.
• From the Set sheets parameters list, select the check box next to the sheet or sheets you want to upload.
If you select more than one sheet, the result schema will be the combination of the structures of all the
selected sheets.
1. Specify the encoding, the advanced separator for numbers, and the header or footer rows that should be skipped, according to your Excel file.
2. If needed, fill the First column and Last column fields with integers to set precisely the columns to be read
in the file. For example, if you want to skip the first column as it may not contain proper data to be processed,
fill the First column field with 2 to set the second column of the file as the first column of the schema.
To retrieve the schema of an Excel file you do not need to parse all the rows of the file, especially when you
have uploaded a large file. To limit the number of rows to parse, select the Limit check box in the Limit Of
Rows area and set or select the desired number of rows.
3. If your Excel file has a header row, select the Set heading row as column names check box to take into
account the heading names. Click Refresh to view the result of all the previous changes in the preview table.
Note that any character which could be misinterpreted by the program is replaced by neutral characters. For
example, asterisks are replaced with underscores.
1. If needed, rename the schema (by default, metadata) and leave a comment.
Customize the schema if needed: add, remove or move schema columns, export the schema to an XML file,
or replace the schema by importing a schema definition XML file using the tool bar.
Make sure the data type in the Type column is correctly defined.
For more information regarding Java data types, including date pattern, see Java API Specification.
• Object: a generic Talend data type that allows processing data without regard to its content, for example,
a data file not otherwise supported can be processed with a tFileInputRaw component by specifying that
it has a data type of Object.
• List: a space-separated list of primitive type elements in an XML Schema definition, defined using the
xsd:list element.
• Dynamic: a data type that can be set for a single column at the end of a schema to allow processing fields
as VARCHAR(100) columns named either as ‘Column<X>’ or, if the input includes a header, from the
column names appearing in the header. For more information, see Dynamic schema.
• Document: a data type that allows processing an entire XML document without regard to its content.
2. If the Excel file which the schema is based on has been changed, click the Guess button to generate the
schema again. Note that if you have customized the schema, the Guess feature does not retain these changes.
3. Click Finish. The new schema is displayed under the relevant File Excel connection node in the Repository
tree view.
Now you can drag and drop the file connection or the schema of it from the Repository tree view onto the design
workspace as a new component or onto an existing component to reuse the metadata. For further information
about how to use the centralized metadata in a Job, see How to use centralized metadata in a Job and How to set
a repository schema.
To modify an existing file connection, right-click it from the Repository tree view, and select Edit file Excel to
open the file metadata setup wizard.
To add a new schema to an existing file connection, right-click the connection from the Repository tree view and
select Retrieve Schema from the contextual menu.
To edit an existing file schema, right-click the schema from the Repository tree view and select Edit Schema
from the contextual menu.
You can centralize an LDIF file connection either from an existing LDIF file, or from the LDIF file property
settings defined in a Job.
To centralize an LDIF connection and its schema from an LDIF file, expand Metadata in the Repository tree view,
right-click File ldif and select Create file ldif from the contextual menu to open the file metadata setup wizard.
To centralize a file connection and its schema you have already defined in a Job, click the icon in the Basic
settings view of the relevant component, with its Property Type set to Built-in, to open the file metadata setup
wizard.
Make sure that you have installed the required third-party module as described in the Talend Installation Guide.
1. Fill in the general information in the relevant fields to identify the LDIF file metadata, including Name,
Purpose and Description.
The Name field is required, and the information you provide in the Description field will appear as a tooltip
when you move your mouse pointer over the file connection.
2. If needed, set the version and status in the Version and Status fields respectively. You can also manage
the version and status of a repository item in the [Project Settings] dialog box. For more information, see
Version management and Status management respectively.
3. If needed, click the Select button next to the Path field to select a folder under the File ldif node to hold
your newly created file connection.
4. Click the Browse... button to browse to the file and fill out the File field.
Skip this step if you are saving an LDIF file connection defined in a component because the file path is
already filled in the File field.
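For reference, an LDIF file represents directory entries as blocks of attribute: value lines separated by blank lines. A minimal sample (with a hypothetical entry) looks like the following:

dn: cn=John Doe,dc=example,dc=com
objectClass: inetOrgPerson
cn: John Doe
sn: Doe
mail: john.doe@example.com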
5. Check the first 50 rows of the file in the File Viewer area and click Next to continue.
6. From the list of attributes of the loaded file, select the attributes you want to include in the file schema, and click Refresh Preview to preview the selected attributes.
• Add, remove or move schema columns, export the schema to an XML file, or replace the schema by
importing a schema definition XML file using the tool bar.
Make sure the data type in the Type column is correctly defined.
For more information regarding Java data types, including date pattern, see Java API Specification.
• Object: a generic Talend data type that allows processing data without regard to its content, for example,
a data file not otherwise supported can be processed with a tFileInputRaw component by specifying that
it has a data type of Object.
• List: a space-separated list of primitive type elements in an XML Schema definition, defined using the
xsd:list element.
• Dynamic: a data type that can be set for a single column at the end of a schema to allow processing fields
as VARCHAR(100) columns named either as ‘Column<X>’ or, if the input includes a header, from the
column names appearing in the header. For more information, see Dynamic schema.
• Document: a data type that allows processing an entire XML document without regard to its content.
8. If the LDIF file on which the schema is based has been changed, click the Guess button to generate the
schema again. Note that if you have customized the schema, the Guess feature does not retain these changes.
9. Click Finish. The new schema is displayed under the relevant Ldif file connection node in the Repository
tree view.
Now you can drag and drop the file connection or the schema of it from the Repository tree view onto the design
workspace as a new component or onto an existing component to reuse the metadata. For further information
about how to use the centralized metadata in a Job, see How to use centralized metadata in a Job and How to set
a repository schema.
To modify an existing file connection, right-click it from the Repository tree view, and select Edit file ldif to
open the file metadata setup wizard.
To add a new schema to an existing file connection, right-click the connection from the Repository tree view and
select Retrieve Schema from the contextual menu.
To edit an existing file schema, right-click the schema from the Repository tree view and select Edit Schema
from the contextual menu.
Depending on the option you select, the wizard helps you create either an input or an output file connection. In a
Job, the tFileInputJSON and tExtractJSONFields components use the input schema created to read JSON files/
fields, whereas tWriteJSONField uses the output schema created to write a JSON field, which can be saved in
a file by tFileOutputJSON or extracted by tExtractJSONFields.
For information about setting up input JSON file metadata, see Setting up JSON metadata for an input file.
For information about setting up output JSON metadata, see Setting up JSON metadata for an output file.
In the Repository view, expand the Metadata node, right click File JSON, and select Create JSON Schema
from the contextual menu to open the [New Json File] wizard.
1. In the wizard, fill in the general information in the relevant fields to identify the JSON file metadata, including
Name, Purpose and Description.
The Name field is required, and the information you provide in the Description field will appear as a tooltip
when you move your mouse pointer over the file connection.
In this step, it is advisable to enter information that will help you distinguish between your input and output connections,
which will be defined in the next step.
2. If needed, set the version and status in the Version and Status fields respectively.
You can also manage the version and status of a repository item in the [Project Settings] dialog box. For
more information, see Version management and Status management respectively.
3. If needed, click the Select button next to the Path field to select a folder under the File Json node to hold
your newly created file connection.
1. In the dialog box, select Input Json and click Next to proceed to the next step of the wizard to load the
input file.
2. From the Read By list box, select the type of query to read the source JSON file.
JsonPath, the default, is the recommended query type for reading JSON data, both for performance and to avoid problems that you may encounter when reading JSON data based on an XPath query.
3. Click Browse... and browse your directory to the JSON file to be uploaded. Alternatively, enter the full path
to the file or the URL that links to the JSON file.
The Schema Viewer area displays a preview of the JSON structure. You can expand and visualize every
level of the file's JSON tree structure.
4. Enter the Encoding type in the corresponding field if the system does not detect it automatically.
5. In the Limit field, enter the number of columns on which the JsonPath or XPath query is to be executed, or
0 if you want to run it against all of the columns.
The wizard is divided into the following views:
• Source Schema: tree view of the JSON file.
• Target Schema: extraction and iteration information.
• Preview: preview of the target schema, together with the input data of the selected columns displayed in the defined order.
• File Viewer: preview of the JSON file's data.
1. Populate the Path loop expression field with the absolute JsonPath or XPath expression, depending on the
type of query you have selected, for the node to be iterated upon. There are two ways to do this, either:
• enter the absolute JsonPath or XPath expression for the node to be iterated upon (enter the full expression
or press Ctrl+Space to use the autocompletion list),
• drag the loop element node from the tree view under Source schema into the Absolute path expression
field of the Path loop expression table.
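For example, with a source file like the following (purely illustrative content), a JsonPath loop expression of $.store.book[*] iterates over each book object; the fields to extract in the next step can then use relative expressions such as title or price:

{
  "store": {
    "book": [
      { "title": "Dune", "price": 9.99 },
      { "title": "Hyperion", "price": 7.49 }
    ]
  }
}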
2. In the Loop limit field, specify the maximum number of times the selected node can be iterated.
3. Define the fields to be extracted by dragging the nodes from the Source Schema tree into the Relative or
absolute path expression fields of the Fields to extract table.
You can select several nodes to drop onto the table by pressing Ctrl or Shift and clicking the nodes of interest.
4. If needed, you can add as many columns to be extracted as necessary, delete columns or change the column
order using the toolbar:
• Change the order of the columns using the up and down arrow buttons.
5. If you want your file schema to have different column names than those retrieved from the input file, enter
new names in the corresponding Column name fields.
6. Click Refresh Preview to preview the target schema. The fields are consequently displayed in the schema
according to the defined order.
The last step of the wizard shows the end schema generated and allows you to customize the schema according
to your needs.
1. If needed, rename the schema (by default, metadata) and leave a comment.
Customize the schema if needed: add, remove or move schema columns, export the schema to an XML file,
or replace the schema by importing a schema definition XML file using the tool bar.
Make sure the data type in the Type column is correctly defined.
For more information regarding Java data types, including date pattern, see Java API Specification.
• Object: a generic Talend data type that allows processing data without regard to its content, for example,
a data file not otherwise supported can be processed with a tFileInputRaw component by specifying that
it has a data type of Object.
• List: a space-separated list of primitive type elements in an XML Schema definition, defined using the
xsd:list element.
• Dynamic: a data type that can be set for a single column at the end of a schema to allow processing fields
as VARCHAR(100) columns named either as ‘Column<X>’ or, if the input includes a header, from the
column names appearing in the header. For more information, see Dynamic schema.
• Document: a data type that allows processing an entire XML document without regard to its content.
2. If the JSON file which the schema is based on has been changed, click the Guess button to generate the
schema again. Note that if you have customized the schema, the Guess feature does not retain these changes.
3. Click Finish. The new file connection, along with its schema, is displayed under the relevant File Json
metadata node in the Repository tree view.
Now you can drag and drop the file connection or the schema of it from the Repository tree view onto the design
workspace as a new tFileInputJSON or tExtractJSONFields component or onto an existing component to reuse
the metadata. For further information about how to use the centralized metadata in a Job, see How to use centralized
metadata in a Job and How to set a repository schema.
To modify an existing file connection, right-click it from the Repository tree view, and select Edit JSON to open
the file metadata setup wizard.
To add a new schema to an existing file connection, right-click the connection from the Repository tree view and
select Retrieve Schema from the contextual menu.
To edit an existing file schema, right-click the schema from the Repository tree view and select Edit Schema
from the contextual menu.
1. In the wizard, fill in the general information in the relevant fields to identify the JSON file metadata, including
Name, Purpose and Description.
The Name field is required, and the information you provide in the Description field will appear as a tooltip
when you move your mouse pointer over the file connection.
In this step, it is advisable to enter information that will help you distinguish between your input and output connections,
which will be defined in the next step.
2. If needed, set the version and status in the Version and Status fields respectively.
You can also manage the version and status of a repository item in the [Project Settings] dialog box. For
more information, see Version management and Status management respectively.
3. If needed, click the Select button next to the Path field to select a folder under the File Json node to hold
your newly created file connection.
Setting the type of metadata and loading the template JSON file
In this step, the type of schema is set as either input or output. For this procedure, the schema of interest is output.
1. From the dialog box, select Output JSON and click Next to proceed to the next step of the wizard.
2. Choose whether to create the output metadata manually or from an existing JSON file as a template.
If you choose the Create manually option you will have to configure the schema and link the source and
target columns yourself. The output JSON file/field is created via a Job using a JSON output component such
as tWriteJSONField.
In this example, we will create the output metadata by loading an existing JSON file. Therefore, select the
Create from a file option.
3. Click the Browse... button next to the JSON File field, browse to the access path to the JSON file the structure
of which is to be applied to the output JSON file/field, and double-click the file. Alternatively, enter the full
path to the file or the URL which links to the template JSON file.
The File Viewer area displays a preview of the JSON structure, and the File Content area displays a
maximum of the first 50 rows of the file.
4. Enter the Encoding type in the corresponding field if the system does not detect it automatically.
5. In the Limit field, enter the number of columns on which the XPath query is to be executed, or enter 0 if
you want it to be run against all of the columns.
Upon completion of the previous operations, the columns in the Linker Source area are automatically mapped to
the corresponding ones in the Linker Target area, as indicated by blue arrow links.
In this step, you need to define the output schema. The following operations are available:

Define a group element
In the Linker Target area, right-click the element of interest and select Set As Group Element from the contextual menu.
You can set a parent element of the loop element as a group element on the condition that the parent element is not the root of the JSON tree.

Create a child element for an element
In the Linker Target area, either:
• Right-click the element of interest and select Add Sub-element from the contextual menu, enter a name for the sub-element in the dialog box that appears, and click OK, or
• Select the element of interest, click the [+] button at the bottom, select Create as sub-element in the dialog box that appears, and click OK. Then, enter a name for the sub-element in the next dialog box and click OK.

Create an attribute for an element
In the Linker Target area, either:
• Right-click the element of interest and select Add Attribute from the contextual menu, enter a name for the attribute in the dialog box that appears, and click OK, or
• Select the element of interest, click the [+] button at the bottom, select Create as attribute in the dialog box that appears, and click OK. Then, enter a name for the attribute in the next dialog box and click OK.

Delete one or more elements/attributes/name spaces
In the Linker Target area, do one of the following:
• Right-click the element(s)/attribute(s)/name space(s) of interest and select Delete from the contextual menu,
• Select the element(s)/attribute(s)/name space(s) of interest and click the [x] button at the bottom, or
• Select the element(s)/attribute(s)/name space(s) of interest and press the Delete key.

Adjust the order of one or more elements
In the Linker Target area, select the element(s) of interest and click the up and down arrow buttons.

Set a static value for an element/attribute/name space
In the Linker Target area, right-click the element/attribute/name space of interest and select Set A Fix Value from the contextual menu.
• The value you set will replace any value retrieved for the corresponding column from the incoming data flow in your Job.
• You can set a static value for a child element of the loop element only, on the condition that the element does not have its own children and does not have a source-target mapping on it.

Create a source-target mapping
Select the column of interest in the Linker Source area, drop it onto the node of interest in the Linker Target area, and select Create as sub-element of target node, Create as attribute of target node, or Add linker to target node according to your need in the dialog box that appears, and click OK.
If you choose an option that is not permitted for the target node, you will see a warning message and your operation will fail.

Remove a source-target mapping
In the Linker Target area, right-click the node of interest and select Disconnect Linker from the contextual menu.

Create a JSON tree from another JSON file
Right-click any schema item in the Linker Target area and select Import JSON Tree from the contextual menu to load another JSON file. Then, you need to create source-target mappings manually and define the output schema all over again.
You can select and drop several fields at a time, using the Ctrl + Shift technique to make multiple selections, therefore
making mapping faster. You can also make multiple selections for right-click operations.
1. In the Linker Target area, right-click the element you want to set as the loop element and select Set As
Loop Element from the contextual menu.
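As a sketch (the structure and names are hypothetical), if item is set as the loop element, each row of the incoming flow typically becomes one occurrence of that element in the JSON field written by tWriteJSONField:

{
  "orders": {
    "item": [
      { "id": 1, "amount": 250 },
      { "id": 2, "amount": 480 }
    ]
  }
}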
The last step of the wizard shows the end schema generated and allows you to customize the schema according
to your needs.
1. If needed, rename the schema (by default, metadata) and leave a comment.
Customize the schema if needed: add, remove or move schema columns, export the schema to an XML file,
or replace the schema by importing a schema definition XML file using the tool bar.
Make sure the data type in the Type column is correctly defined.
For more information regarding Java data types, including date pattern, see Java API Specification.
• Object: a generic Talend data type that allows processing data without regard to its content, for example,
a data file not otherwise supported can be processed with a tFileInputRaw component by specifying that
it has a data type of Object.
• List: a space-separated list of primitive type elements in an XML Schema definition, defined using the
xsd:list element.
• Dynamic: a data type that can be set for a single column at the end of a schema to allow processing fields
as VARCHAR(100) columns named either as ‘Column<X>’ or, if the input includes a header, from the
column names appearing in the header. For more information, see Dynamic schema.
• Document: a data type that allows processing an entire XML document without regard to its content.
2. If the JSON file which the schema is based on has been changed, click the Guess button to generate the
schema again. Note that if you have customized the schema, the Guess feature does not retain these changes.
3. Click Finish. The new file connection, along with its schema, is displayed under the relevant File Json
metadata node in the Repository tree view.
Now you can drag and drop the file connection or the schema of it from the Repository tree view onto the design
workspace as a new tWriteJSONField component or onto an existing component to reuse the metadata. For
further information about how to use the centralized metadata in a Job, see How to use centralized metadata in
a Job and How to set a repository schema.
To modify an existing file connection, right-click it from the Repository tree view, and select Edit JSON to open
the file metadata setup wizard.
To add a new schema to an existing file connection, right-click the connection from the Repository tree view and
select Retrieve Schema from the contextual menu.
If you are working on an SVN or Git managed project while the Manual lock option is selected in Talend Administration Center, be sure to manually lock your connection in the Repository tree view before retrieving or updating table schemas for it. Otherwise, the connection is read-only and the Finish button of the wizard is not operable.
For information on locking and unlocking a project item and on different lock types, see Working collaboratively on project
items.
To edit an existing file schema, right-click the schema from the Repository tree view and select Edit Schema
from the contextual menu.
You can create an LDAP connection either from an accessible LDAP directory, or by saving the LDAP settings
defined in a Job.
To create an LDAP connection from an accessible LDAP directory, expand the Metadata node in the Repository
tree view, right-click the LDAP tree node, and select Create LDAP schema from the contextual menu to open
the [Create new LDAP schema] wizard.
To centralize an LDAP connection and its schema you have already defined in a Job, click the icon in the
Basic settings view of the relevant component, with its Property Type set to Built-In, to open the [Create new
LDAP schema] wizard.
Unlike the DB connection wizard, the LDAP wizard gathers both LDAP server connection and schema definition
in a five-step procedure.
The Name field is required, and the information you provide in the Description field will appear as a tooltip
when you move your mouse pointer over the LDAP connection.
2. If needed, set the version and status in the Version and Status fields respectively. You can also manage
the version and status of a Repository item in the [Project Settings] dialog box. For more information, see
Version management and Status management respectively.
3. If needed, click the Select button next to the Path field to select a folder under the LDAP node to hold your
newly created LDAP connection.
Host: LDAP server host name or IP address.
Port: Listening port of the LDAP directory.
Encryption method: LDAP (no encryption is used).
2. Then click Check Network Parameter to verify the connection and activate the Next button.
Authentication method: Simple authentication (requires the Authentication Parameters field to be filled in).
Follow: request redirections are handled.
Limit: Maximum number of records to be read.
3. Click Fetch Base DNs to retrieve the DN and click the Next button to continue.
4. If any third-party libraries required for setting up an LDAP connection are found missing, an external module
installation wizard appears. Install the required libraries as guided by the wizard. For more information on
installing third-party modules, see the Talend Installation Guide.
2. Click Refresh Preview to display the selected column and a sample of the data.
The last step shows the LDAP schema generated and allows you to further customize the end schema.
1. If needed, rename the metadata in the Name field (metadata, by default), add a Comment, and make further
modifications, for example:
Make sure the data type in the Type column is correctly defined.
For more information regarding Java data types, including date pattern, see Java API Specification.
• Object: a generic Talend data type that allows processing data without regard to its content, for example,
a data file not otherwise supported can be processed with a tFileInputRaw component by specifying that
it has a data type of Object.
• List: a space-separated list of primitive type elements in an XML Schema definition, defined using the
xsd:list element.
• Dynamic: a data type that can be set for a single column at the end of a schema to allow processing fields
as VARCHAR(100) columns named either as ‘Column<X>’ or, if the input includes a header, from the
column names appearing in the header. For more information, see Dynamic schema.
• Document: a data type that allows processing an entire XML document without regarding to its content.
2. If the LDAP directory on which the schema is based has changed, use the Guess button to generate the schema again. Note that if you customized the schema, your changes will not be retained after the Guess operation.
3. Click Finish. The new schema is displayed under the relevant LDAP connection node in the Repository
tree view.
Now you can drag and drop the file connection or any schema of it from the Repository tree view onto the design
workspace as a new component or onto an existing component to reuse the metadata.
To modify an existing file connection, right-click it from the Repository tree view, and select Edit LDAP schema
to open the file metadata setup wizard.
To add a new schema to an existing file connection, right-click the connection from the Repository tree view and
select Retrieve Schema from the contextual menu.
To edit an existing file schema, right-click the schema from the Repository tree view and select Edit Schema
from the contextual menu.
1. In the Repository tree view, expand the Metadata node, right-click the Salesforce tree node, and select
Create Salesforce from the contextual menu to open the [Salesforce] wizard.
2. Enter a name for your connection in the Name field, select Basic or OAuth from the Connection type list,
and provide the connection details according to the connection type you selected.
• With the Basic option selected, you need to specify the following details:
• With the OAuth option selected, you need to specify the following details:
• Client Id and Client Secret: the OAuth consumer key and consumer secret, which are available in the
OAuth Settings area of the Connected App that you have created at Salesforce.com.
• Callback Host and Callback Port: the OAuth authentication callback URL. This URL (both host and
port) is defined during the creation of a Connected App and will be shown in the OAuth Settings area
of the Connected App.
• Token File: the path to the token file that stores the refresh token used to get the access token without
authorization.
3. If needed, click Advanced... to open the [Salesforce Advanced Connection Settings] dialog box, do the
following and then click OK:
• enter the Salesforce Webservice URL required to connect to the Salesforce system.
• select the Bulk Connection check box if you need to use the bulk data processing function.
• select the Need compression check box to activate SOAP message compression, which can result in
increased performance levels.
• select the Trace HTTP message check box to output the HTTP interactions on the console.
• select the Use HTTP Chunked check box to use the HTTP chunked data transfer mechanism.
This option is not available if the Bulk Connection check box is selected.
• enter the ID of the real user in the Client Id field to differentiate between those who use the same account
and password to access the Salesforce website.
• fill the Timeout field with the Salesforce connection timeout value, in milliseconds.
4. Click Test connection to verify the connection settings, and when the connection check success message
appears, click OK for confirmation. Then click Next to go to the next step to select the modules you want
to retrieve the schema of.
5. Select the check boxes for the modules of interest and click Finish to retrieve the schemas of the selected
modules.
The newly created Salesforce connection is displayed under the Salesforce node in the Repository tree view,
along with the schemas of the selected modules.
You can now drag and drop the Salesforce connection or any schema of it from the Repository onto the design
workspace, and from the dialog box that opens choose a Salesforce component to use in your Job. You can also
drop the Salesforce connection or a schema of it onto an existing component to reuse the connection or metadata
details in the component. For more information about dropping component metadata in the design workspace,
see How to use centralized metadata in a Job. For more information on Salesforce components, see the Talend
Components Reference Guide.
To modify the Salesforce metadata entry, right-click it from the Repository tree view, and select Edit Salesforce
to open the file metadata setup wizard.
To edit an existing Salesforce schema, right-click the schema from the Repository tree view and select Edit
Schema from the contextual menu.
• from scratch. For details, see Setting up a generic schema from scratch,
• from a schema definition XML file. For details, see Setting up a generic schema from an XML file, and
• from the schema defined in a component. For details, see Saving a component schema as a generic schema.
• Select Repository from the Schema drop-down list in the component Basic settings view.
Click the [...] button to open the [Repository Content] dialog box, select the generic schema under the Generic
schemas node and click OK.
• Select the metadata node of the generic schema from the Repository tree view and drop it onto the component.
1. Right-click Generic schemas under the Metadata node in the Repository tree view, and select Create
generic schema.
2. In the schema creation wizard that appears, fill in the generic schema properties such as schema Name and
Description. The Status field is a customized field. For more information about how to define the field, see
Status settings.
3. Give a name to the schema or use the default one (metadata) and add a comment if needed. Customize the
schema structure in the Schema panel according to your needs.
The tool bar allows you to add, remove or move columns in your schema. You can also export the current
schema as an XML file, or import a schema from an XML file, which must be a schema exported from the Studio, to replace the current schema.
Make sure the data type in the Type column is correctly defined.
For more information regarding Java data types, including date pattern, see Java API Specification.
• Object: a generic Talend data type that allows processing data without regard to its content, for example,
a data file not otherwise supported can be processed with a tFileInputRaw component by specifying that
it has a data type of Object.
• List: a space-separated list of primitive type elements in an XML Schema definition, defined using the
xsd:list element.
• Dynamic: a data type that can be set for a single column at the end of a schema to allow processing fields
as VARCHAR(100) columns named either as ‘Column<X>’ or, if the input includes a header, from the
column names appearing in the header. For more information, see Dynamic schema.
• Document: a data type that allows processing an entire XML document without regard to its content.
4. Click Finish to complete the generic schema creation. The created schema is displayed under the relevant
Generic schemas node.
1. Right-click Generic schemas in the Repository tree view, and select Create generic schema from xml.
2. In the dialog box that appears, choose the source XML file from which the schema is taken and click Open.
3. In the schema creation wizard that appears, define the schema Name or use the default one (metadata) and
give a Comment if any.
The schema structure from the source file is displayed in the Schema panel. You can customize the columns
in the schema as needed.
The tool bar allows you to add, remove or move columns in your schema. You can also export the current
schema as an XML file, or import a schema from an XML file, which must be a schema exported from the Studio, to replace the current schema.
Make sure the data type in the Type column is correctly defined.
For more information regarding Java data types, including date pattern, see Java API Specification.
• Object: a generic Talend data type that allows processing data without regard to its content, for example,
a data file not otherwise supported can be processed with a tFileInputRaw component by specifying that
it has a data type of Object.
• List: a space-separated list of primitive type elements in an XML Schema definition, defined using the
xsd:list element.
• Dynamic: a data type that can be set for a single column at the end of a schema to allow processing fields
as VARCHAR(100) columns named either as ‘Column<X>’ or, if the input includes a header, from the
column names appearing in the header. For more information, see Dynamic schema.
• Document: a data type that allows processing an entire XML document without regard to its content.
4. Click Finish to complete the generic schema creation. The created schema is displayed under the relevant
Generic schemas node.
1. Open the Basic settings view of the component that has the schema you want to create a generic schema
from, and click the [...] button next to Edit schema to open the [Schema] dialog box.
2. Click the floppy disc icon to open the [Select folder] dialog box.
3. Select a folder if needed, and click OK to close the dialog box and open the [Save as generic schema]
creation wizard.
4. Fill in the Name field (required) and the other fields if needed, and click Finish to save the schema. Then
close the [Schema] dialog box opened from the component Basic settings view.
The schema is saved in the selected folder under the Generic schemas node in the Repository tree view.
You can also set up an MDM connection the same way by clicking the icon in the Basic settings view of the
tMDMInput and tMDMOutput components. For more information, see the Talend MDM component chapter in Talend
Components Reference Guide.
According to the option you select, the wizard helps you create an input XML, an output XML or a receive XML schema. Later, in a Talend Job, the tMDMInput component uses the defined input schema to read master data stored in XML documents, tMDMOutput uses the defined output schema either to write master data in an XML document on the MDM server or to update existing XML documents, and the tMDMReceive component uses the defined XML schema to receive an MDM record in XML from MDM triggers and processes.
1. In the Repository tree view, expand Metadata and right-click Talend MDM.
3. Fill in the connection properties such as Name, Purpose and Description. The Status field is a customized
field that can be defined. For more information, see Status settings.
5. From the Version list, select the version of the MDM server to which you want to connect.
The default value in the Server URL field varies depending on what you selected in the Version list.
6. Fill in the connection details including the authentication information to the MDM server and then click
Check to check the connection you have created.
A dialog box pops up to show that your connection is successful. Click OK to close it.
If needed, you can click Export as context to export this Talend MDM connection details to a new context
group in the Repository or reuse variables of an existing context group to set up your metadata connection. For
more information, see Exporting metadata as context and reusing context parameters to set up a connection.
8. From the Data-Model list, select the data model against which the master data is validated.
9. From the Data-Container list, select the data container that holds the master data you want to access.
10. Click Finish to validate your changes and close the dialog box.
The newly created connection is listed under Talend MDM under the Metadata folder in the Repository
tree view.
You now need to retrieve the XML schema of the business entities linked to this MDM connection.
To set the values to be fetched from one or more entities linked to a specific MDM connection, complete the
following:
1. In the Repository tree view, expand Metadata and right-click the MDM connection for which you want to
retrieve the entity values.
3. Select the Input MDM option in order to download an input XML schema and then click Next to proceed
to the following step.
4. From the Entities field, select the business entity (XML schema) from which you want to retrieve values.
You are free to enter any text in this field, although you would likely put the name of the entity from which you are
retrieving the schema.
The schema of the entity you selected is automatically displayed in the Source Schema panel.
Here, you can set the parameters to be taken into account for the XML schema definition.
The schema dialog box is divided into four panels, as follows:
• Source Schema: tree view of the uploaded entity.
• Target schema: extraction and iteration information.
• Preview: target schema preview.
• File viewer: raw data viewer.
6. In the Xpath loop expression area, enter the absolute XPath expression leading to the XML structure node
on which to apply the iteration. Or, drop the node from the source schema to the target schema Xpath field.
This link is orange in color.
In the capture above, we use Features as the element to loop on because it is repeated within the Product
entity as follows:
<Product>
<Id>1</Id>
<Name>Cup</Name>
<Description/>
<Features>
<Feature>Color red</Feature>
<Feature>Size maxi</Feature>
</Features>
...
</Product>
<Product>
<Id>2</Id>
<Name>Cup</Name>
<Description/>
<Features>
<Feature>Color blue</Feature>
<Feature>Thermos</Feature>
</Features>
...
</Product>
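With the structure above, and assuming Product is the root of the record, the absolute Xpath loop expression for the Features element would be, for example, /Product/Features.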
By doing so, the tMDMInput component that uses this MDM connection will create a new row for each Feature item.
8. To define the fields to extract, drop the relevant node from the source schema to the Relative or absolute
XPath expression field.
Use the [+] button to add rows to the table and select as many fields to extract as necessary. Press the Ctrl or the Shift
keys for multiple selection of grouped or separate nodes and drop them to the table.
9. If required, enter a name for each of the retrieved columns in the Column name field.
You can prioritize the order of the fields to extract by selecting the field and using the up and down arrows. The link
of the selected field is blue, and all other links are grey.
10. Click Finish to validate your modifications and close the dialog box.
The newly created schema is listed under the corresponding MDM connection in the Repository tree view.
1. In the Repository tree view, expand Metadata and Talend MDM and then browse to the schema you want
to modify.
2. Right-click the schema name and select Edit Entity from the contextual menu.
You can change the name of the schema according to your needs; you can also customize the schema structure in the schema panel. The tool bar allows you to add, remove or move columns in your schema.
Make sure the data type in the Type column is correctly defined.
For more information regarding Java data types, including date pattern, see Java API Specification.
• Object: a generic Talend data type that allows processing data without regard to its content, for example,
a data file not otherwise supported can be processed with a tFileInputRaw component by specifying that
it has a data type of Object.
• List: a space-separated list of primitive type elements in an XML Schema definition, defined using the
xsd:list element.
• Dynamic: a data type that can be set for a single column at the end of a schema to allow processing fields
as VARCHAR(100) columns named either as ‘Column<X>’ or, if the input includes a header, from the
column names appearing in the header. For more information, see Dynamic schema.
• Document: a data type that allows processing an entire XML document without regard to its content.
The MDM input connection (tMDMInput) is now ready to be dropped in any of your Jobs.
To set the values to be written in one or more entities linked to a specific MDM connection, complete the following:
1. In the Repository tree view, expand Metadata and right-click the MDM connection for which you want to
write the entity values.
3. Select the Output MDM option in order to define an output XML schema and then click Next to proceed
to the following step.
4. From the Entities field, select the business entity (XML schema) in which you want to write values.
You are free to enter any text in this field, although you would likely put the name of the entity from which you are
retrieving the schema.
An identical schema of the entity you selected is automatically created in the Linker Target panel, and columns are automatically mapped from the source to the target panels. The wizard automatically defines the item Id as the looping element. You can always choose to loop on another element.
Here, you can set the parameters to be taken into account for the XML schema definition.
7. Make the necessary modifications to define the XML schema you want to write in the selected entity.
Your Linker Source schema must correspond to the Linker Target schema, that is to say, it must define the elements in which you want to write values.
9. In the Linker Target panel, right-click the element you want to define as a loop element and select Set as
loop element. This will restrict the iteration to one or more nodes.
By doing so, the tMDMOutput component that uses this MDM connection will create a new row for each Feature item.
You can prioritize the order of the fields to write by selecting the field and using the up and down arrows.
10. Click Finish to validate your modifications and close the dialog box.
The newly created schema is listed under the corresponding MDM connection in the Repository tree view.
1. In the Repository tree view, expand Metadata and Talend MDM and then browse to the schema you want
to modify.
2. Right-click the schema name and select Edit Entity from the contextual menu.
You can change the name of the schema according to your needs; you can also customize the schema structure in the schema panel. The tool bar allows you to add, remove or move columns in your schema.
The MDM output connection (tMDMOutput) is now ready to be dropped in any of your Jobs.
To set the XML schema you want to receive in accordance with a specific MDM connection, complete the
following:
1. In the Repository tree view, expand Metadata and right-click the MDM connection for which you want to
retrieve the entity values.
3. Select the Receive MDM option in order to define a receive XML schema and then click Next to proceed
to the following step.
4. From the Entities field, select the business entity (XML schema) according to which you want to receive
the XML schema.
You can enter any text in this field, although you would likely put the name of the entity according to which you want
to receive the XML schema.
The schema of the entity you selected is displayed in the Source Schema panel.
Here, you can set the parameters to be taken into account for the XML schema definition.
The schema dialog box is divided into four panels, as follows:
• Source Schema: tree view of the uploaded entity.
• Target schema: extraction and iteration information.
• Preview: target schema preview.
• File viewer: raw data viewer.
6. In the Xpath loop expression area, enter the absolute XPath expression leading to the XML structure node
on which to apply the iteration. Or, drop the node from the source schema to the target schema Xpath field.
This link is orange in color.
7. If required, define a Loop limit to restrict the iteration to one or more nodes.
In the capture above, we use Features as the element to loop on because it is repeated within the Product entity as follows:
<Product>
<Id>1</Id>
<Name>Cup</Name>
<Description/>
<Features>
<Feature>Color red</Feature>
<Feature>Size maxi</Feature>
</Features>
...
</Product>
<Product>
<Id>2</Id>
<Name>Cup</Name>
<Description/>
<Features>
<Feature>Color blue</Feature>
<Feature>Thermos</Feature>
</Features>
...
</Product>
By doing so, the tMDMReceive component that uses this MDM connection will create a new row for each Feature item.
8. To define the fields to receive, drop the relevant node from the source schema to the Relative or absolute
XPath expression field.
Use the plus sign to add rows to the table and select as many fields to extract as necessary. Press the Ctrl or the Shift
keys for multiple selection of grouped or separate nodes and drop them to the table.
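For illustration, with the Product structure shown above, the Xpath loop expression could be an absolute path such as /Product/Features, and the fields to receive could then be extracted with XPath expressions relative to that loop node, for example Feature for the feature values, or ../Id and ../Name for fields of the parent Product element. These paths are a hypothetical sketch; the actual expressions depend on your entity structure.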
9. If required, enter a name for each of the received columns in the Column name field.
You can prioritize the order of the fields you want to receive by selecting the field and using the up and down arrows.
The link of the selected field is blue, and all other links are grey.
10. Click Finish to validate your modifications and close the dialog box.
The newly created schema is listed under the corresponding MDM connection in the Repository tree view.
1. In the Repository tree view, expand Metadata and Talend MDM and then browse to the schema you want
to modify.
2. Right-click the schema name and select Edit Entity from the contextual menu.
You can change the name of the schema according to your needs, and you can also customize the schema structure in the schema panel. The tool bar allows you to add, remove or move columns in your schema.
Make sure the data type in the Type column is correctly defined.
For more information regarding Java data types, including date pattern, see Java API Specification.
• Object: a generic Talend data type that allows processing data without regard to its content, for example,
a data file not otherwise supported can be processed with a tFileInputRaw component by specifying that
it has a data type of Object.
• List: a space-separated list of primitive type elements in an XML Schema definition, defined using the
xsd:list element.
• Dynamic: a data type that can be set for a single column at the end of a schema to allow processing fields
as VARCHAR(100) columns named either as ‘Column<X>’ or, if the input includes a header, from the
column names appearing in the header. For more information, see Dynamic schema.
• Document: a data type that allows processing an entire XML document without regard to its content.
The MDM receive connection (tMDMReceive) is now ready to be dropped in any of your Jobs.
An established survivorship rule package is stored in the folder of the same name in Metadata > Rules
Management > Survivorship Rules in the Repository tree view and is composed of the items representing each
step of a validation flow, the rule package itself, and the whole validation flow, respectively. The following figure
presents an example of the survivorship rule package in the Repository.
The Survivorship Rules item node has no child items until you have generated the corresponding survivorship rule package.
You need to use the tRuleSurvivorship component to define each rule of interest, generate the corresponding rule package into the Repository, and execute the established survivor validation flow. For further information about this component, see Talend Components Reference Guide.
Once a survivorship rule package with its validation rules is generated under the Survivorship Rules item, you can perform different operations to manage them:
• Lock items: for further information about how to lock or unlock an item, see Lock principle.
• Detect dependencies: for further information about the dependencies, see How to update impacted Jobs manually.
• Import or export items: for further information, see Importing/exporting items and building Jobs.
• Edit items: for further information, see How to view or edit a survivorship rule item.
1. In Metadata > Rules Management > Survivorship Rules of the Repository tree view, expand it to show
the survivorship rule package folder that you have generated.
2. Select the rule package folder of interest and expand it. Then the contents of this package folder are listed
under it.
3. Right-click the validation step item that you need to view or edit. Then in the contextual menu, select Edit
rule.
In this example, the validation rule of this step is labelled 5_MostCommonZip, belonging to the rule group
whose identifier in the established survivor validation flow is 5_MostCommonZipGroup, and it examines the
data from the zip column. The when clause indicates the condition used to examine the data and the then
clause indicates the target columns from which the best-of-breed data is selected.
The edit feature is intended for viewing items as well as minor modifications such as, in this example, the change of
the matching regex from "\\d{5}" to "\\d{6}". If you have to rewrite the clauses, or remove or add some clauses,
we recommend using tRuleSurvivorship to define and organize the rules of interest and then regenerate the new rule
package into the Repository in order to avoid manual efforts and risky errors.
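For illustration, a generated rule of this shape, written in Drools Rule Language, might read as in the following sketch. RecordIn and the action in the then clause are hypothetical stand-ins; the code actually generated depends on the input schema of the tRuleSurvivorship component.

rule "5_MostCommonZip"
    ruleflow-group "5_MostCommonZipGroup"
    when
        // examine the zip column with the matching regex from this example
        $record : RecordIn( zip matches "\\d{5}" )
    then
        // hypothetical action: keep this value as the best-of-breed zip
        $record.setSurvivor( "zip" );
end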
This package defines a Drools declarative model for the corresponding survivor validation flow, using the user-defined columns in the input schema of the tRuleSurvivorship component. For further information about the Drools declarative model, see the manual of Drools Guvnor.
The edit feature is intended for viewing items as well as minor modifications. If you have to rewrite the whole contents, or
remove or add some contents, we recommend using tRuleSurvivorship to define and organize the rules of interest and then
regenerate the new rule package into the Repository in order to avoid manual efforts and risky errors.
Once opened, the diagram of the validation flow in this example reads as presented in the figure below:
This diagram is a simple Drools flow. You can select each step to check the corresponding properties in the
Properties view, for example, the RuleFlowGroup property, which indicates the group identifier of the rules
defined and executed at each step.
If the Properties view does not display, click the Window menu > Show view > General > Properties to enable it.
On the left side is docked the tool panel, where you can select the tools of interest to modify the established diagram. Three flow components are available in this figure, but in the Drools Flow nodes view of the [Preferences] dialog box, you can select the corresponding check box to add a flow component or clear the corresponding check box to hide it. To validate the preference settings, you need to re-open the flow of interest.
For further information about the [Preferences] dialog box, see Setting Talend Studio preferences.
The edit feature is intended for viewing items as well as minor modifications. If you have to rearrange the flow or change
properties of a step, we recommend using tRuleSurvivorship to define and organize the rules of interest and then regenerate
the new rule package into the Repository in order to avoid manual efforts and risky errors.
For further information about a Drools flow and its editing tools, see the relative Drools manuals.
This step describes how to launch the BRMS wizard and then set the connection metadata, such as the Name, Purpose and Description.
1. In the Repository tree view, expand the Metadata node and the Rules Management node.
2. Right-click BRMS, and select Create BRMS from the pop-up menu.
3. Enter the generic metadata information such as the connection Name, Purpose and Description.
1. Enter the GuvURL Name and TAC URL in the corresponding fields.
2. Click Browse to enter your authentication information in order to select the rules package of interest from
the Drools repository.
The [Deploy Jar] dialog box opens, with the URL fields automatically retrieved from the previous dialog box:
4. Click the [...] button to browse the Jar files in the [Select Jarfile] dialog box:
5. Expand the nodes to browse to and select the Jar file that contains the rules library of interest, then click OK
to close the dialog box.
The selected Jar file is displayed on the [Deploy Jar] dialog box.
6. From the list next to the Jar file, select the corresponding class name and click OK to close the [Deploy Jar]
dialog box and return to the BRMS wizard.
A Talend program transforms the library into a form which can be used in a Job and creates an XSD file
at the root of your studio.
7. Click Next to define the Input Schema and the Linker Target schema in the [Schema Design] window.
In this step it is necessary to define the Input Schema and Linker Target schema, according to your needs.
1. In the Linker source area, click Input Schema to open the schema editor to define the input schema:
2.
Click the button to add as many columns as required, and define the schema as required. When done,
click OK to close the editor.
3. In the Linker Target area, right-click the node to run a loop on (reason in this example) and select Set As Loop Element from the contextual menu.
4. Drop the columns from the Linker Source area onto the Related Column field in the Linker target schema.
You can select and drop several fields at a time, using the Ctrl + Shift technique to make multiple selections, therefore
making mapping faster. You can also make multiple selections for right-click operations.
5. Click Output Schema and add a new column called XML, in the output schema editor:
6. From the XML Field list in the Output-Management area, select XML, and click Next to view the finalized
input and output schemas.
The new BRMS connection, along with its schema, is added to the Repository tree view, under the Metadata
> Rules Management > BRMS node.
You can drop the metadata defined from the Repository onto the design workspace as a new tBRMS component,
which is automatically set with all of the connection parameters. For further information about how to use the
centralized metadata in a Job, see How to use centralized metadata in a Job and How to set a repository schema.
To modify a BRMS connection, right-click it from the Repository tree view, and select Edit BRMS to open the
file metadata setup wizard.
To edit a schema, right-click the schema from the Repository tree view and select Edit Schema from the
contextual menu.
Drools Guvnor, a web-based business rules governance system, has been integrated in Talend Studio. With Drools Guvnor,
non-technical users can quickly and easily create and modify complex business logic directly, via the Guvnor interface. For
more information, see Talend Administration Center User Guide.
Through the Rules folder in the Metadata node of the Repository tree view, you can create your own personalized
rules or access a file that holds predefined rules. Then you can use the tRules component to apply the encoded rules in one or more of your Job designs.
For more information about using rules with the tRules component, see Talend Components Reference Guide.
3. In the contextual menu, select Create Rules to display the [New Rule...] wizard that will guide you through
the steps of creating or selecting the business rules you want to use.
The Embedded Rules files are either Drools or Excel files of .drl or .xls formats, respectively.
4. In the [New Rule...] wizard, fill in schema generic information, such as Name and Description and click
Next to open a new view on the wizard.
• create a rule file of Drools format in which you can store the newly created rules, or
When you connect to an Excel file, make sure that all occurrences of project and job names on top of the file correspond to
the project you launch the Studio on and to the Job you want to use the rules in.
1. Select the Create option to create the rule file of Drools format.
2. In the Type of rule resource list, select the format of the file you want to create: New DRL (rule package).
3. Click Finish to validate the operation and close the wizard. A rule editor opens in the design workspace, in
which you must manually define the rules you want to use in simplified Drools language.
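As an illustration only, a minimal rule written in Drools language might look like the following sketch, in which all type and field names are hypothetical:

package rules;

declare Customer
    name : String
    age : int
    valid : boolean
end

rule "rejectMinors"
    when
        // match any Customer fact younger than 18
        $c : Customer( age < 18 )
    then
        // flag the matched fact as invalid
        modify( $c ) { setValid( false ) }
end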
2. In the Type of rule resource field, select New DRL (rule package) or New XLS (Excel) depending on the
file format you want to set the path to.
3. Click the Browse button next to the field to set the path to the rule file you want to use.
4. Click Finish to close the wizard and open in the Studio the rule file you set the connection to.
If you want to modify any of the rules held in the rule files, do the following:
• For a Drools file, open the file in Talend Studio and modify the rules directly in the open file.
• For an Excel file, open the file locally and carry out the necessary modifications. Then, in the Repository tree view, right-click the file connection under Rules and select Update Xls file in the contextual menu.
If you modify a rule, you must close the Job using the rule and reopen it for the new modifications to be taken into account.
The [Web Service] schema wizard enables you to create either a simple schema (Simple WSDL) or an advanced
schema (Advanced WebService), according to your needs.
In step 1, you must enter the schema metadata before choosing whether to create a simple or an advanced schema in step 2.
It is therefore important to enter metadata information which will help you to differentiate between your different schema
types in the future.
2. Right-click Web Service and select Create WSDL schema from the context menu list.
3. Enter the generic schema information such as its Name and Description.
In this step, you need to indicate whether you want to create a simple or an advanced schema. In this example,
a simple schema is created.
This step involves the definition of the URI and other parameters required to obtain the desired values.
1. Enter the URI which will transmit the desired values, in the WSDL field, http://www.webservicex.net/
country.asmx?wsdl in this example.
2. If necessary, select the Need authentication? check box and then enter your authentication information in
the User and Password fields.
3. If you use an http proxy, select the Use http proxy check box and enter the information required in the host,
Port, user and password fields.
4. Enter the Method name in the corresponding field, GetCountryByCountryCode in this example.
5. In the Value table, Add or Remove values as desired, using the corresponding buttons.
6. Click Refresh Preview to check that the parameters have been entered correctly.
In the Preview tab, the values to be transmitted by the Web Service method are displayed, based on the parameters entered.
You can modify the schema name (metadata, by default) and modify the schema itself using the tool bar.
1.
Add or delete columns using the and buttons.
2.
Modify the order of the columns using the and buttons.
3. Click Finish.
The new schema is added to the Repository under the Web Service node. You can now drop it onto the
design workspace as a tWebServiceInput component in your Job.
2. Right-click Web Service and select Create WSDL schema from the context menu list.
3. Enter the generic schema information, such as its Name and Description.
In this step, you must indicate whether you want to create a Simple or an Advanced schema. In this example,
an Advanced schema is created.
1. Enter the URI of the Web Service WSDL file manually in the WSDL field, or click the Browse... button to browse your directory if your WSDL is stored locally.
2. Click the Refresh button to retrieve the list of port names and operations available.
3. Select the port name to be used, in the Port Name zone, countrySoap12 in this example.
Next, you need to define the input and output schemas and schema-parameter mappings in the Input mapping
and Output mapping tabs.
1. Click the Input mapping tab to define the input schema and set the parameters required to execute the
operation.
2. In the table to the right, select the parameters row and click the [+] button to open the [ParameterTree]
dialog box.
3. Select the parameter you want to use and click OK to close the dialog box.
A new row appears showing the parameter you added, CountryCode in this example.
4. In the table to the left, click the Schema Management button to open the [Schema] dialog box.
In this example, drop the CountryCode column from the left table onto the parameters.CountryCode row
to the right.
If available, use the Auto Map button at the top of the tab to carry out the mapping automatically.
1. Click the Output mapping tab to define the output schema and set its parameters.
2. In the table to the left, select the parameter row and click the [+] button to add a parameter.
A new row appears showing the parameter you added, GetCountryByCountryCodeResult in this example.
4. In the table to the right, click [...] to open the [Schema] dialog box.
In this example, drop the parameters.GetCountryByCountryCodeResult row from the table to the left onto the Result column to the right.
Depending on the type of the output, you can choose to normalize or denormalize the results by clicking the Normalize
and Denormalize buttons.
You can customize the metadata by changing or adding information in the Name and Comment fields and make
further modifications using the toolbar, for example:
1.
Add or delete columns using the and buttons.
2.
Change the column order by clicking the and arrows.
The new schema is added to the Repository under the corresponding Web Service node. You can now drop
it onto the design workspace as a tWebService component in your Job.
The Web Service Explorer button is located next to the Refresh Preview button.
2. In the Web Service Explorer's toolbar (top-right), click the WSDL Page icon.
4. In the WSDL URL field, enter the URL of the Web Service WSDL you want to get the operation details of,
and click Go. Note that the field is case-sensitive.
5. Click the port name you want to use under Bindings. In this example: countrySoap.
6. Click the Name of the method under Operations to display the parameters required,
GetCountryByCountryCode in this example.
7. Click the parameter name (in this example: CountryCode) to get more information about the parameter.
8. Click Add to add a new parameter line. You can add as many lines as you want.
The result is displayed in the Status area. If the number of parameters you entered exceeds the maximum number authorized, an error message pops up.
Simply copy and paste the relevant information to help you fill in the fields of the standard WSDL wizard.
The WSDL URI is not passed on automatically to the Web Service wizard fields.
You can use the Source link in the Status area in case you need to debug your Web Service request or response.
The Web Service Explorer can also help find your favorite registry through the UDDI page and WSIL
page buttons of the tool bar.
All your business and validation rules can now be centralized in Repository metadata, which enables you to modify, activate, deactivate and delete them according to your needs.
They can be defined either from the Validation Rules metadata entry or directly from the metadata schema or columns you want to check, and they are to be used in your Job designs at the component level. Data that did not pass the validation check can easily be retrieved through a reject link for further processing, if necessary.
To see how to use a validation rule in a Job design, see Validation rules Job example in Theory into practice:
Job examples.
1. In the Repository tree view, expand Metadata and right-click Validation Rules, and select Create
validation rule from the contextual menu.
Or
In the Repository tree view, expand Metadata and expand any metadata item you want to check, either
directly right-click the schema of the metadata item or right-click a column of that schema, and select Add
validation rule... from the contextual menu.
For more information about metadata compatible with validation rules, see Selecting the trigger and type of
validation .
2. Fill in the general information of the metadata such as Name, Purpose and Description. The Status field is
a customized field that can be defined. For more information, see Status settings.
1. In the tree view on the left of the window, select the metadata item you want to check.
2. In the panel on the right, select the column(s) on which you want to perform the validity check.
• On select,
• On insert,
• On update,
• On delete.
Some of the rule trigger options may be disabled according to the type of metadata you checked. For example, if the metadata is a file, the on update and on delete triggers are not applicable.
Please refer to the following table for a complete list of supported (enabled) options:
Validation rules are not supported for any metadata type that does not appear in this list.
When you select the On select trigger, the validation rule should be applied to the input components of the Job designs; when you select the On insert, On update or On delete triggers, the validation rule should be applied to output components.
And you can select the type of validation you want to perform:
• a referential integrity validation rule that will check your data against reference data,
• a basic restriction validation rule that will check the validity of the values of the selected field(s) with
basic criteria,
• a custom code validation rule allowing you to specify your own Java or SQL based criteria.
Referential rule
1. In the Trigger time settings area, select the option corresponding to the action that will trigger the validation. As the On insert and On update options are selected here, data will be checked when an insert or update action is performed.
2. In the Rule type settings area, select the type of validation you want to apply between Reference, Basic
Value and Custom check. To check data by reference, select Reference Check.
3. Click Next.
4. In this step, select the database schema that will be used as reference.
5. Click Next.
6. In the Source Column list, select the column name you want to check and drag it to the Target column
against which you want to compare it.
Basic rule
1. In the Trigger time settings area, select the option corresponding to the action that will trigger the
validation. As the On Select option is selected here, the check will be performed when data is read.
2. In the Rule type settings area, select the type of validation you want to apply between Reference, Basic
Value and Custom check. To make a basic check of data, select Basic Value Check.
4. Click the plus button at the bottom of the Conditions table to add as many conditions as required, and select between And and Or to combine them. Here, you want to ignore empty Phone number fields, so you add two conditions: retrieve data that is not empty and data that is not null.
Custom rule
1. In the Trigger time settings area, select the option corresponding to the action that will trigger the
validation. As the On Select option is selected here, the check will be performed when data is read.
2. In the Rule type settings area, select the type of validation you want to apply between Reference, Basic
Value and Custom check. To make a custom check of data, select Custom Check.
3. Click Next.
4. In this step, type in your Java condition directly in the text box or click Expression Editor to open the [Expression Builder] that will help you create your Java condition. Use input_row.columnname, where columnname is the name of a column of your schema, to match the input column. In the previous capture, the data will be passed on if the value of the idState column is greater than 0 and smaller than 51.
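For example, the condition described above can be typed directly in the text box as the following Java expression, where idState is the column name from this example:

input_row.idState > 0 && input_row.idState < 51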
For more information about the Expression Builder, see Using the expression editor.
In this step:
1. Select Disallow the operation so that the data that fails to pass the condition will not be output.
2. Select Make rejected data available on REJECT link in job design to retrieve the rejected data in another
output.
To see how to use a validation rule in a Job Design, see Validation rules Job example in Theory into practice:
Job examples.
2. Right-click FTP and select Create FTP from the context menu.
3. Enter the generic schema information such as its Name and Description.
The Status field is a customized field which can be defined in the [Preferences] dialog box (Window > Preferences).
For further information about setting preferences, see Setting Talend Studio preferences.
4. When you have finished, click Next to enter the FTP server connection information.
2. In the Host field, enter the name of your FTP server host.
5. From the Connection Model list, select the connection model you want to use:
• Select Passive if you want the FTP server to choose the port connection to be used for data transfer.
• Select Active if you want to choose the port to be used for the data transfer yourself.
6. In the Parameter area, select a setting for FTP server usage. For standard usage, there is no need to select
an option.
• Select the SFTP Support check box to use the SSH security protocol to protect server communications.
An Authentication method appears. Select Public key or Password according to what you use.
• Select the FTPs Support check box to protect server communication with the SSL security protocol.
• Select the Use Socks Proxy check box if you want to use this option, then enter the proxy information (the
host name, port number, username and password).
All of the connections created appear under the FTP server connection node, in the Repository view.
You can drop the connection metadata from the Repository onto the design workspace. A dialog box opens in
which you can choose the component to be used in your Job.
For further information about how to drop metadata onto the workspace, see How to use centralized metadata
in a Job.
The step, in which you define the general properties of the schema to be created, precedes the next step at which you set the
type of schema as either input or output. It is therefore advisable to enter names which will help you to distinguish between
your input and output schemas.
If you want to read an HL7 structured message, see Centralizing HL7 metadata for an input file.
If you want to write an HL7 structured message, see Setting up an HL7 schema for an output file.
2. Right-click HL7, and select Create HL7 from the pop-up menu.
3. Enter the generic schema information such as its name and description.
This step involves selecting the input file. You can also preview the structure of the HL7 file selected.
1. In the File path field, browse to or enter the path to the HL7 file to be uploaded.
A preview of the HL7 file structure is displayed in the Message View area.
1. In the Schema View area, select the type of segment from the Segment (As Schema) drop-down list.
2. In the Message View area, select the elements or attributes of each message segment that you want in the
schema.
The HL7 structure is comparable to that of XML. The elements and attributes selected from the Message View must
correspond to the type of segment selected.
4. Click the corresponding User Column field if you want to change the name and enter the new name manually.
This step shows the final schema generated. The name corresponds to the type of segment selected.
1. If needed, change the name of the metadata in the Name field (by default this is populated by the name of the
segment type), add a comment in the corresponding field and make further modifications using the toolbar,
for example:
•
Add or delete a column using the and buttons.
•
Change the order of the columns using the and buttons.
The new metadata item is added to the Repository tree beneath the HL7 node.
In this step, the schema metadata such as the Name, Purpose and Description are set.
2. Right-click HL7, and select Create HL7 from the pop-up menu.
3. Enter the generic schema information such as its name and description.
In this step, the type of schema is set as either input or output. For this procedure, the schema of interest is output.
You can choose whether to create your file manually or from an existing file. If you choose the Create manually
option you will have to configure your schema, source and target columns by yourself. The file is created in a
Job using a tHL7Output component.
2. In the File path field, browse to or enter the path to the HL7 file to be uploaded.
A preview of the HL7 file structure will be displayed in the Message View area. The structure of this file
will be applied to the output file.
3. In the Output File Path field, browse to or enter the path to the output file. If the file doesn't exist, it will
be created during the execution of a Job using a tHL7Output component. If the file already exists, it will
be overwritten.
In this step, we define the repeat elements and edit columns as required.
The source columns from the Linker Source area are automatically mapped to the target columns in the Linker
Target area.
1. In the Linker Target area, right-click the element concerned and select Set As Repeatable Element from
the contextual menu if needed.
2. Click [Schema Management] to edit the schema in the pop-up [Schema] dialog box if needed.
1. If needed, change the name of the metadata in the Name field (by default this is populated by the name of the
segment type), add a comment in the corresponding field and make further modifications using the toolbar,
for example:
•
Add or delete a column using the and buttons.
•
Change the order of the columns using the and buttons.
The new metadata item is added under the HL7 node in the Repository tree view.
1. In the Repository tree view, right-click the UN/EDIFACT tree node, and select Create EDI from the pop-
up menu.
2. Enter the general properties of the schema, such as its Name and Description. The Name field must be filled.
1. To search your UN/EDIFACT standard quickly, enter the full or partial name of the UN/EDIFACT standard
in the Name Filter field, for example, enter inv for INVOIC.
2. In the UN/EDIFACT standards list, expand the standard node of your choice, and select the release version
of the UN/EDIFACT messages you want to read through this metadata.
1. From the left-hand panel, select the EDIFACT message fields that you want to include in your schema, and drop them to the Description of the Schema table in the right-hand Schema panel.
2. If needed, select any field in the Description of the Schema table, and move it up or down or rename it.
The metadata created is added under the UN/EDIFACT node in the Repository tree.
1. Upon creating or editing a metadata connection in the wizard, click Export as context.
2. In the [Create / Reuse a context group] wizard that opens, select Create a new repository context and
click Next.
3. Type in a name for the context group to be created, and add any general information such as a description
if required.
The name of the Metadata entry is proposed by the wizard as the context group name, and the information
you provide in the Description field will appear as a tooltip when you move your mouse over the context
group in the Repository.
4. Click Next to create and view the context group, or click Finish to complete context creation and return to
the connection wizard directly.
To edit the context variables, go to the Contexts node of the Repository, right-click the newly created context
group, and select Edit context group to open the [Create / Edit a context group] wizard after the connection
wizard is closed.
To edit the default context, or add new contexts, click the [+] button at the upper right corner of the wizard.
To add a new context variable, click the [+] button at the bottom of the wizard.
For more information on handling contexts and variables, see Using contexts and variables.
6. Click Finish to complete context creation and return to the connection wizard.
The relevant connection details fields in the wizard are set with the context variables.
1. When creating or editing a metadata connection in the wizard, click Export as context.
2. In the [Create / Reuse a context group] wizard that opens, select Reuse an existing repository context
and click Next.
4. For each variable, select the corresponding field of the connection details, and then click Next to view and
edit the context variables, or click Finish to show the connection setup result directly.
5. Edit the contexts and/or context variables if needed. If you make any changes, your centralized context group
will be updated automatically.
For more information on handling contexts and variables, see Using contexts and variables.
6. Click Finish to validate context reuse and return to the connection wizard.
The relevant connection details fields in the wizard are set with the context variables.
In the Metadata node of the Repository tree view, you can import metadata from a CSV file created by an external application.
This option is available only for database connections (Db Connections) and delimited files (File delimited).
Name;Purpose;Description;Version;Status;DbType;ConnectionString;Login;
Password;Server;Port;Database;DBSchema;Datasource;File;DBRoot;TableName;
OriginalTableName;Label;OriginalLabel;Comment;Default;Key;Length;
Nullable;Pattern;Precision;TalendType;DBType.
Note that:
• TableName is the name displayed in Talend Studio; OriginalTableName is the original table name in the database. (You can choose to fill in only OriginalTableName.)
• Label is the column name used in Talend Studio; OriginalLabel is the column name in the table. (You can choose to fill in only OriginalLabel.)
To import database connection metadata from a defined CSV file, do the following:
1. In the Repository tree view, expand the Metadata node and right-click Db connections.
3. Click Browse... and go to the CSV file that holds the metadata of the database connection.
The [Show Logs] dialog box opens to list imported and rejected metadata, if any.
The imported metadata is displayed under the Db connections node in the Repository tree view.
Before importing delimited file metadata from a CSV file, make sure that each line of your CSV file complies
with the following format:
Note that:
• Name is the file connection name that will be created under the File delimited node. You can create multiple
file connections by specifying different connection names.
• TableName is the name of the file schema, and Label is the column name in the schema.
• Escape sequences must be used to specify CSV metacharacters or control characters, such as ; or \n.
• The FirstLineCaption field must be set to true and the HeaderValue field must be filled properly if the
delimited file contains a header row and rows to be skipped.
The following example shows how to import the metadata of a delimited file named directors.csv from a predefined
CSV file named directors_metadata.csv.
Below is an extract of the file directors.csv, which has two columns, id and name:
id;name
1;Gregg Araki
2;P.J. Hogan
3;Alan Rudolph
The CSV file directors_metadata.csv contains two lines to describe the metadata of directors.csv:
To import delimited file connection metadata from the above-mentioned CSV file, do the following:
1. In the Repository tree view, expand the Metadata node and right-click File delimited.
3. Click Browse... and browse to the CSV file that describes the metadata of the delimited file, directors_metadata.csv in this example.
The [Show Logs] dialog box opens to list imported and rejected metadata, if any.
A new file connection named directors is created under the File delimited node in the Repository tree view,
with its properties as defined in the CSV file.
Different wizards will help you centralize connection and schema metadata in the Repository tree view. For more
information about the [Metadata Manager] wizards, see Managing Metadata.
Once the relevant metadata is stored under the Metadata node, you will be able to drop the corresponding
components directly onto the design workspace.
1. In the Repository tree view of the Integration perspective, expand Metadata and the folder holding the
connection you want to use in your Job.
A dialog box prompts you to select the component you want to use among those offered.
3. Select the component and then click OK. The selected component displays on the design workspace.
Alternatively, according to the type of component (Input or Output) you want to use, perform one of the following
operations:
• Output: Press Ctrl on your keyboard while you are dropping the component onto the design workspace to
directly include it in the active Job.
• Input: Press Alt on your keyboard while you drop the component onto the design workspace to directly include
it in the active Job.
If you double-click the component, the Component view shows the selected connection details as well as the
selected schema information.
If you select the connection without selecting a schema, then the properties will be filled with the first encountered schema.
You can also use the Repository tree view to store frequently used parts of code or extract parts of existing
company functions, by calling them via the routines. This factorization makes it easier to resolve any problems
which may arise and allows you to update the code used in multiple Jobs quickly and easily.
On top of this, certain system routines adopt the most common Java methods, using the Talend syntax. This allows
you to escalate Java errors in the studio directly, thereby facilitating the identification and resolution of problems
which may arise as your integration processes evolve with Talend.
• System routines: a number of system routines are provided. They are classed according to the type of data which
they process: numerical, string, date...
• User routines: these are routines which you have created or adapted from existing routines.
You do not need any knowledge of the Java language to create and use Talend routines.
All of the routines are stored under Code > Routines in the Repository tree view.
For further information concerning the system routines, see Accessing the System Routines.
For further information about how to create user routines, see How to create user routines.
You can also set up routine dependencies on Jobs. To do so, simply right-click a Job in the Repository tree view and select Set up routine dependencies. In the dialog box which opens, all routines are set by default. You can use the tool bar to remove routines if required.
Each class or category in the system folder contains several routines or functions. Double-click the class that you
want to open.
If you have subscribed to one of the Talend solutions with the Profiling perspective, you will have access to routines specific
to data quality in the Routines node. These data quality routines handle the first and last names and the titles.
All of the routines or functions within a class are composed of some descriptive text, followed by the corresponding
Java code. In the Routines view, you can use the scrollbar to browse the different routines. Or alternatively:
The view jumps to the section comprising the routine's descriptive text and corresponding code.
1. First of all, create a user routine by following the steps outlined in How to create user routines. The routine opens in the workspace, where you will find a basic example of a routine.
2. Then, under Code > Routines > system, select the class of routines which contains the routine(s) you want
to customize.
3. Double-click the class which contains the relevant routine to open it in the workspace.
4. Use the Outline panel on the bottom left of the studio to locate the routine from which you want to copy
all or part of the content.
5. In the workspace, select all or part of the code and copy it using Ctrl+C.
6. Click the tab to access your user routine and paste the code by pressing Ctrl+V.
We advise you to use the descriptive text (in blue) to detail the input and output parameters. This will make your
routines easier to maintain and reuse.
1. In the Repository tree view, expand Code to display the Routines folder.
3. The [New routine] dialog box opens. Enter the information required to create the routine, i.e., its name, description, and so on.
The newly created routine appears in the Repository tree view, directly below the Routines node. The routine
editor opens to reveal a model routine which contains a simple example, by default, comprising descriptive
text in blue, followed by the corresponding code.
We advise you to add a very detailed description of the routine. The description should generally include the input and
output parameters you would expect to use in the routine, as well as the results returned along with an example. This
information tends to be useful for collaborative work and the maintenance of the routines.
5. Modify or replace the model with your own code and press Ctrl+S to save the routine. Otherwise, the routine
is saved automatically when you close it.
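By way of illustration, a user routine is a plain Java class containing public static methods. A minimal sketch, with hypothetical names, might look like the following; the comment tags such as {talendTypes} follow the convention used in the descriptive text of the model routine:

public class MyStringUtil {

    /**
     * capitalizeFirst: returns the input string with its first letter in upper case.
     *
     * {talendTypes} String
     * {Category} User Defined
     * {param} string("talend") input: the string to transform.
     * {example} capitalizeFirst("talend") # returns "Talend"
     */
    public static String capitalizeFirst(String input) {
        // guard against null or empty input
        if (input == null || input.isEmpty()) {
            return input;
        }
        return Character.toUpperCase(input.charAt(0)) + input.substring(1);
    }
}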
You can copy all or part of a system routine or class and use it in a user routine by using the Ctrl+C and Ctrl+V commands,
then adapt the code according to your needs. For further information about how to customize routines, see Customizing the
system routines.
You can right-click your user routine to use the Impact Analysis feature. This feature indicates which Jobs use the routine
and would therefore be impacted by any modifications. For further information about Impact Analysis, see How to analyze
repository items.
The system folder and all of the routines held within are read only.
1. Right-click the routine you want to edit and select Edit Routine.
2. The routine opens in the workspace, where you can modify it.
3. Once you have adapted the routine to suit your needs, press Ctrl+S to save it.
If you want to reuse a system routine for your own specific needs, see Customizing the system routines.
The .jar file of the imported library will also be listed in the library folder of your current Studio.
1. If the library to be imported isn't available on your machine, either download and install it using the Modules
view or download and store it in a local directory.
3. Right-click the user routine whose library you want to edit and then select Edit Routine Library.
4. Click New to open the [New Module] dialog box where you can import the external library.
You can delete any of the already imported routine files if you select the file in the Library File list and click the
Remove button.
• If you have installed the library using the Modules view, enter the full name of the library file in the Input
a library's name field.
• If you have stored the library file in a local directory, select the Browse a library file option and click Browse to set the file path in the corresponding field.
The imported library file is listed in the Library File list in the [Import External Library] dialog box.
The library file is imported into the library folder of your current Studio and is also listed in the Modules view of the same Studio.
For more information about the Modules view, see the Talend Installation Guide.
You can call any of your user and system routines from your Job components in order to run them at the same
time as your Job.
To access all the routines saved in the Routines folder in the Repository tree view, press Ctrl+Space in any of
the fields in the Basic settings view of any of the Talend components used in your Job and select the one you
want to run.
Alternatively, you can call any of these routines by indicating the relevant class name and the name of the routine,
followed by the expected settings, in any of the Basic settings fields in the following way:
<ClassName>.<RoutineName>
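For example, to call the UPCASE routine of the StringHandling system class in a Basic settings field, you could type the following, where row1.firstname is a hypothetical input column:

StringHandling.UPCASE(row1.firstname)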
1. In the Palette, click File > Management, then drop a tFileTouch component onto the workspace. This
component allows you to create an empty file.
2. Double-click the component to open its Basic settings view in the Component tab.
3. In the FileName field, enter the path to access your file, or click [...] and browse the directory to locate the file.
4. Close the double inverted commas just before your file extension, as follows: "D:/Input/customer".txt.
5. Add the plus symbol (+) between the closing inverted commas and the file extension.
6. Press Ctrl+Space to open a list of all of the routines, and in the auto-completion list which appears, select
TalendDate.getDate to use the Talend routine which allows you to obtain the current date.
8. Enter the plus symbol (+) next to the getDate variable to complete the routine call, and place double inverted
commas around the file extension.
If you are working on Windows, the ":" between the hours and minutes and between the minutes and seconds must be removed.
The tFileTouch component creates an empty file with the day's date, retrieved upon execution of the getDate routine called.
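After completing these steps, the FileName field might contain an expression similar to the following sketch, in which the directory and the date pattern are examples only:

"D:/Input/customer" + TalendDate.getDate("CCYYMMDD") + ".txt"

Using a pattern without ":" characters, such as CCYYMMDD, also avoids the Windows restriction mentioned above.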
However, the ELT mode is certainly not optimal for all situations, for example,
• As SQL is less powerful than Java, the scope of available data transformations is limited.
• ELT requires users that have high proficiency in SQL tuning and DBMS tuning.
• Using ELT with Talend Studio, you cannot pass or reject one single row of data as you can do in ETL. For more
information about row rejection, see Row connection.
Based on the advantages and disadvantages of ELT, the SQL templates are designed to facilitate ELT tasks.
These SQL templates are used with the components from the Talend ELT component family including
tSQLTemplate, tSQLTemplateFilterColumns, tSQLTemplateCommit, tSQLTemplateFilterRows,
tSQLTemplateRollback, tSQLTemplateAggregate and tSQLTemplateMerge. These components execute the
selected SQL statements. Using the UNION, EXCEPT and INTERSECT operators, you can modify data directly
on the DBMS without using the system memory.
Moreover, with the help of these SQL templates, you can optimize the efficiency of your database management
system by storing and retrieving your data according to the structural requirements.
Talend Studio provides the following types of SQL templates under the SQL templates node in the Repository
tree view:
• System SQL templates: They are classified according to the type of database for which they are tailored.
• User-defined SQL templates: these are templates which you have created or adapted from existing templates.
More detailed information about the SQL templates is presented in the below sections.
For further information concerning the components from the ELT component family, see Talend Components
Reference Guide.
As most of the SQL templates are tailored for specific databases, if you change the database in your system, you will inevitably have to switch to or develop new templates for the new database.
The below sections show you how to manage these two types of SQL templates.
Even though the statements of each group of templates vary from database to database according to the operations they are intended to accomplish, the templates are also grouped by type in each folder.
The below table provides these types and their related information.

Type          Description                       Related components     Parameters to set
MergeInsert   Inserts records from the source   tSQLTemplateMerge,     Target table name (and schema),
              table to the target table.        tSQLTemplateCommit     source table name (and schema),
                                                                       conditions
MergeUpdate   Updates the target table with     tSQLTemplateMerge,     Target table name (and schema),
              records from the source table.    tSQLTemplateCommit     source table name (and schema),
                                                                       conditions
Each folder contains a system sub-folder containing pre-defined SQL statements, as well as a UserDefined folder
in which you can store SQL statements that you have created or customized.
Each system folder contains several types of SQL templates, each designed to accomplish a dedicated task.
Apart from the Generic folder, the SQL templates are grouped into different folders according to the type of
database for which they are to be used. The templates in the Generic folder are standard, for use in any database.
You can use these as a basis from which you can develop more specific SQL templates than those defined in
Talend Studio.
From the Repository tree view, proceed as follows to open an SQL template:
1. In the Repository tree view, expand SQL Templates and browse to the template you want to open.
2. Double-click the class that you want to open, for example, Aggregate in the Generic folder.
You can read the predefined Aggregate statements in the template view. The parameters, such as
TABLE_NAME_TARGET, operation, are to be defined when you design related Jobs. Then the parameters can be
easily set in the associated components, as mentioned in the previous section.
Every time you click or open an SQL template, its corresponding property view is displayed at the bottom of the Studio.
Click the Aggregate template, for example, to view its properties as presented below:
For further information regarding the different types of SQL templates, see Types of system SQL templates.
For further information about how to use the SQL templates with the associated components, see Talend Components Reference Guide.
For more information on the SQL template writing rules, see SQL template writing rules.
1. In the Repository tree view, expand SQL Templates and then the category you want to create the SQL
template in.
2. Right-click UserDefined and select Create SQLTemplate to open the [New SQLTemplate] wizard.
3. Enter the information required to create the template and click Finish to close the wizard.
The name of the newly created template appears under UserDefined in the Repository tree view. Also, an
SQL template editor opens on the design workspace, where you can enter the code for the newly created
template.
For further information about how to create a user-defined SQL template and how to use it in a Job, see
tMysqlTableList in Talend Components Reference Guide.
This section presents you with a use case that takes you through the steps of using MySQL system templates in
a Job that:
• collects data grouped by specific value(s) from a database table and writes aggregated data in a target database
table.
• deletes the source table where the aggregated data comes from.
• reads the target database table and lists the Job execution result.
1. Drop the following components from the Palette onto the design workspace: tMysqlConnection,
tSQLTemplateAggregate, tSQLTemplateCommit, tMysqlInput, and tLogRow.
6. In the Basic settings view, set the database connection details manually.
8. On the Database Type list, select the relevant database type, and from the Component List, select the
relevant database connection component if more than one connection is used.
Grouping data, writing aggregated data and dropping the source table
2. On the Database Type list, select the relevant database type, and from the Component List, select the
relevant database connection component if more than one connection is used.
3. Enter the names for the database, source table, and target table in the corresponding fields and define the data
structure in the source and target tables.
The source table schema consists of three columns: First_Name, Last_Name and Country. The target table
schema consists of two columns: country and total. In this example, we want to group citizens by their nationalities and count the number of citizens in each country. To do that, we define the Operations and Group by parameters accordingly.
4. In the Operations table, click the [+] button to add one or more lines, and then click the Output column cell
and select the output column that will hold the counted data from the drop-down list.
5. Click the Function cell and select the operation to be carried on from the drop-down list.
6. In the Group by table, click the [+] button to add one or more lines, and then click the Output column cell
and select the output column that will hold the aggregated data from the drop-down list.
8. Click the [+] button twice under the SQL Template List table to add two SQL templates.
9. Click on the first SQL template row and select the MySQLAggregate template from the drop-down list. This
template generates the code to aggregate data according to the configuration in the Basic settings view.
10. Do the same to select the MySQLDropSourceTable template for the second SQL template row. This
template generates the code to delete the source table where the data to be aggregated comes from.
To add new SQL templates to an ELT component for execution, you can simply drop the templates of your choice
either onto the component in the design workspace, or onto the component's SQL Template List table.
The templates set up in the SQL Template List table have priority over the parameters set in the Basic settings view
and are executed in a top-down order. So in this use case, if you select MySQLDropSourceTable for the first template
row and MySQLAggregate for the second template row, the source table will be deleted prior to aggregation, meaning
that nothing will be aggregated.
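For reference, the statements generated by these two templates are conceptually equivalent to the following SQL sketch, in which person stands for a hypothetical source table and citizencount is the target table of this use case; the code actually generated by the templates differs:

INSERT INTO citizencount (country, total)
SELECT Country, COUNT(*)
FROM person
GROUP BY Country;

DROP TABLE person;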
Reading the target database and listing the Job execution result
2. Select the Use an existing connection check box to use the database connection that you have defined on
the tMysqlConnection component.
3. To define the schema, select Repository and then click the [...] button to choose the database table whose
schema is used. In this example, the target table holding the aggregated data is selected.
4. In the Table Name field, type in the name of the table you want to query. In this example, the table is the
one holding the aggregated data.
5. In the Query area, enter the query statement to select the columns to be displayed.
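For example, with the target table of this use case, the query could be the following:

SELECT country, total FROM citizencount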
A two-column table citizencount is created in the database. It groups citizens according to their nationalities
and gives their total count in each country.
The ability to capture only the changed source data and to move it from a source to one or more target systems in real time is known as Change Data Capture (CDC). Capturing changes reduces traffic across a network and thus helps reduce ETL time.
The CDC feature, introduced in Talend Studio, simplifies the process of identifying the change data since the
last extraction. CDC in Talend Studio quickly identifies and captures data that has been added to, updated in, or
removed from database tables and makes this change data available for future use by applications or individuals.
The CDC feature is available for Oracle, MySQL, DB2, PostgreSQL, Sybase, MS SQL Server, Informix, Ingres,
Teradata, and AS/400.
The CDC feature works only with the same database system running on the same server.
• Trigger: this mode is used with Oracle, MySQL, DB2, PostgreSQL, Sybase, MS SQL Server, Informix, Ingres, and Teradata.
• Redo/Archive log: this mode is used with Oracle v11 and previous versions and AS/400.
• XStream: this mode is used only with Oracle v12 with OCI.
For detailed information on these three modes, see the following sections.
The Trigger mode places a trigger that launches change data capture on every monitored source table. This, in turn, imposes only minor modifications on the database structure.
With this mode, data extraction takes place at the same time the Insert, Update, or Delete operations occur in the
source tables, and the change data is stored inside the database in change tables. The changed data, thus captured,
is then made available to the target system(s) in a controlled manner, using subscriber views.
In Trigger mode, CDC can have only one publisher but many subscribers. CDC creates subscriber tables to control
accessibility of the change table data by the target system(s). A target system is any application that wants to use
the data captured from the source system.
The below figure shows the basic architecture of a CDC environment in Trigger mode in Talend Studio.
In this example, CDC monitors the changes made to a Product table. The changes are caught and published in a
change table to which two subscribers have access: a CRM application and an Accounting application. These two
systems fetch the changes and use them to update their data.
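Conceptually, and not as the code actually generated by Talend Studio, a capture trigger in this mode resembles the following MySQL sketch, in which product is the monitored source table and tproduct_cdc stands for a hypothetical change table:

CREATE TRIGGER product_cdc_insert
AFTER INSERT ON product
FOR EACH ROW
    -- record the key of the new row and the type of change ('I' for insert)
    INSERT INTO tproduct_cdc (id, change_type, change_date)
    VALUES (NEW.id, 'I', NOW());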
In an Oracle database, a Redo log is a file which logs the history of changes made to data. In an AS/400 database,
these changes are logged automatically in the database's internal logbook (journal). These changes include the
insert, update and delete operations which data may undergo.
Redo/Archive log mode is less intrusive than Trigger mode because in contrast to Trigger mode, it does not
require modifications to the database structure.
When setting up this Redo/Archive log mode for Oracle, only one subscriber can have access rights to the change
table. This subscriber must be a database user who holds the subscription rights. Also, there is a subscription table
which controls access to the subscriber change table. The subscription change table is a comprehensive, internal
table which reflects the state of the Oracle database at the moment at which the Redo/Archive log option was
activated.
When setting up this mode for AS/400, a save file, called fitcdc.savf and provided in your Studio, is restored
on AS/400 and used to install a program called RUNCDC. When the subscriber views the changes made (View
all changes) or consumes them for reuse (using a tAS400CDC component), the RUNCDC program reads
and analyzes the logbook (journal) and the attached receiver from the source table and updates the change
table accordingly. The AS/400 CDC Redo/Archive log mode (journal) creates subscription tables to prevent
unauthorized target systems from accessing the data in the change tables. A target system is any application that tries to use data captured from the source system.
In this example, the CDC monitors the changes made to a Product table, thanks to the data contained in the
database's logbook (journal). The CDC reads the logbook and records the changes which have been made to the
data. These changes are collected and published in a table of changes to which two subscribers have access, a CRM
application and an Accounting application. These two systems fetch the changes and use them to update their data.
XStream Out provides Oracle Database components and application programming interfaces that enable you to
share data changes made to an Oracle database with other systems. It also provides a transaction-based interface
for streaming the changes captured from the redo log of the Oracle database to client applications with an outbound
server. An outbound server is an optional Oracle background process that sends data changes to a client application.
XStream In provides Oracle Database components and application programming interfaces that enable you to
share data changes made to other systems with an Oracle database. It also provides a transaction-based interface
for sending information to an Oracle database from client applications with an inbound server. An inbound server
is an optional Oracle background process that receives data changes from a client application.
The XStream mode is only available for Oracle v12 with OCI in Talend Studio. For more information about the
XStream mode, see http://docs.oracle.com/cd/E11882_01/server.112/e16545/toc.htm.
The publisher captures the change data and makes it available to the subscribers. The subscribers utilize the change
data obtained from the publisher.
The publisher is mainly responsible for:
• identifying the source tables from which the change data needs to be captured.
• capturing the change data and storing it in specially created change tables.
In Trigger mode, or the AS/400 Redo/Archive log mode (journal) the subscriber is a table that only lists the
applications that have access rights to the change tables. In the Oracle Redo/Archive log mode, the subscriber is
a user of the database. The subscriber may not be interested in all the data that is published by the publisher.
However, if you want to use CDC in Redo/Archive log mode for an Oracle database, you must first configure the database so that it generates the redo records that hold all insert, update or delete changes made in datafiles. For further information, see Prerequisites for the Oracle Redo/Archive log mode.
If you want to use CDC in Redo/Archive log mode for AS/400, you must verify that the prerequisites on your
AS/400 are all met. For further information, see The prerequisites on AS/400.
For the time being, CDC is only available in Java: for Oracle, MySQL, DB2, PostgreSQL, Sybase, MS SQL Server, Informix, Ingres, and Teradata in Trigger mode; for Oracle and AS/400 databases in Redo/Archive log mode; and for Oracle in XStream mode.
To set up a CDC environment you must understand the basics involved in designing a Job in Talend Studio, and particularly
the definition of metadata items.
When setting up a CDC environment, make sure that the database connection for CDC is on the same server as the source data whose changes are to be captured.
For more information on how to design a Job in Talend Studio, see Designing a Job.
For more information on how to define metadata items in Talend Studio, see Managing Metadata.
For more information on database support for CDC, see Database support for CDC.
For more information about how to set up a database connection, see Centralizing database metadata.
If you work with an MS SQL Server, you must set the two connections to the same database but using two different schemas.
To identify the table from which data changes will be captured, right-click the newly created data connection to
retrieve the schema of the source table and load it on your repository file system. In this example, the source table
is person.
1. Right-click the CDC Foundation folder under the data connection node and select Create CDC from the
contextual menu. The [Create Change Data Capture] dialog box opens up.
2. In the [Create Change Data Capture] dialog box, click the [...] button next to the Set Link Connection
field to select the database connection dedicated to CDC.
Note that for databases such as Oracle that also support other CDC modes, make sure to select Trigger mode as the option for capturing data changes in this step.
3. Click Create Subscriber and the [Create Subscriber and Execute SQL Script] dialog box opens up.
4. Click Execute to run the SQL script displayed and then click Close to close the dialog box.
In the CDC Foundation folder, the CDC database connection and the subscriber table schema appear.
You must specify the table that the subscriber wants to subscribe to and then activate the subscription.
1. Right-click the relevant schema of the source table and select add CDC. The [Create Subscriber and
Execute SQL Script] dialog box appears.
The source table to be monitored should have a primary key so that the CDC system can identify the rows on which
changes have been made. You cannot set up a CDC environment if your source table schema does not have a primary
key.
For Oracle databases, the CDC system creates an alias for the source table(s) monitored. This helps to avoid problems
due to the length of identifiers upon creation of the change table and its associated view. For CDC systems which are
already set up, the table names are retained.
2. In the [Create Subscriber and Execute SQL Script] dialog box, check the event(s) you want to catch:
Insert, Update or Delete.
3. Click Execute to run the SQL script displayed and then click Close to close the dialog box.
In the CDC Foundation folder, the catch table schemas and the corresponding view schemas appear.
4. To view any data changes made to the source table, right-click the table in the Table schemas folder and
select View All Changes to open the [View All Changes] dialog box.
1. Create a new Job in Talend Studio, add a tTeradataCDC component and a tLogRow component, and link
tTeradataCDC to tLogRow using a Row > Main connection.
3. Select Repository from the Property of the CDC connection drop-down list and click the [...] button next
to the field to retrieve the schema that corresponds to the database connection dedicated to CDC.
4. Select Repository from the Schema using CDC drop-down list and click the [...] button next to the field to
retrieve the schema that corresponds to the table from which changes will be captured.
6. Double-click tLogRow and in the Mode area on its Basic settings view select Table (print values in cells
of a table) for a better display of the result.
On the console, you can read the output results which correspond to what you can see in the [View All
Changes] dialog box.
The following three sections detail the prerequisites for using CDC in Redo/Archive log mode for Oracle databases
and provide a two-step example of how to set up a CDC environment using the Oracle Redo/Archive log mode
in Talend Studio: the first step explains how to configure your system for CDC and the second, how to extract
the modified data.
To do so, connect to the Oracle database as an administrator and activate the archive log mode using the following queries:
connect / as sysdba;
shutdown;
startup exclusive mount;
alter database archivelog;
alter database open;
To do so, create a tablespace for the source user and the publisher respectively, then create a source user and
give it all the rights necessary to make modifications, and create a publisher and give it all the rights necessary
to capture and publish modifications.
In the example below, the $ORACLE_PATH varies depending on where Oracle is installed. The source user is
called source, and the publisher is called publisher:
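As a minimal sketch (the user names, passwords, and datafile paths are placeholders to adapt to your environment), the statements could look like the following:
create tablespace source datafile '$ORACLE_PATH/source.dbf' size 50M;
create user source identified by source_pwd default tablespace source quota unlimited on source;
grant connect, resource to source;
create tablespace publisher datafile '$ORACLE_PATH/publisher.dbf' size 50M;
create user publisher identified by publisher_pwd default tablespace publisher quota unlimited on publisher;
grant connect, resource to publisher;
grant select_catalog_role to publisher;
grant execute_catalog_role to publisher;
grant execute on SYS.DBMS_CDC_PUBLISH to publisher;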
The select_catalog_role role allows the publisher to consult all Oracle dictionaries.
The execute_catalog_role role allows the publisher to execute the dictionary procedures.
Execute rights on the SYS.DBMS_CDC_PUBLISH package allow the publisher to configure the CDC system that will capture and publish change data in one or more source tables.
2. Set a DB connection dedicated to CDC by using the "publisher" user that has all necessary rights.
To identify the table(s) to catch, right-click the DB connection for the database you want to monitor and select
Retrieve Schema, then proceed to retrieve and load the source table schema in the repository.
In this example, the source table is client, which contains three columns id, name and age.
To retrieve modified data, define the connection between CDC and data:
1. Right-click the relevant CDC Foundation folder and proceed to connect to the Oracle database to be
monitored.
2. Select Create CDC to open the [Create Change Data Capture] dialog box.
3. Click the three-dot button next to the Set Link Connection field to select the connection that corresponds to
CDC. Then define the user for Oracle - publisher in this example. This user will create the change tables that
store modifications and will activate change captures for the source table.
4. In the Options area, select Log mode as the option for capturing changes.
5. Click Create Subscriber. The [Create Subscriber and Execute SQL Script] dialog box appears.
In the CDC Foundation folder, the subscription table schema appears. An icon also appears to show that the
change capture for the source table is activated.
Step 4: Create the change table, subscribe to the source table and activate the subscription
You must specify the table to which the subscriber wants to subscribe and then activate its subscription.
1. Right-click the schema that corresponds to the source table and select Add CDC. The [Create Subscriber
and Execute SQL Script] dialog box appears.
For Oracle databases and for versions 3.2+ of Talend Studio, the CDC system creates an alias for the source table(s) monitored. This helps to avoid problems due to the length of identifiers upon creation of the change table and its associated view. For CDC systems which are already set up, the table names are retained.
2. Click Execute to activate the subscription to the source table and then click Close to close the dialog box.
In the CDC Foundation folder, the table that holds the modified data and the associated view schemas appear.
3. To see the changes made to data, right-click the corresponding table in the Table schemas folder and select
View All Changes to open the corresponding dialog box.
The TALEND_CDC_TYPE column of the [View All Changes] dialog box indicates all of the different
changes caught.
The changes are caught as follows: I indicates that the data has been inserted, UN indicates that the data has
been updated, and D indicates that the data has been deleted.
The columns of the source table and their values are also displayed.
For an example of how to use a CDC component and for more information on the properties and the parameters
of the tOracleCDC component, see Talend Components Reference Guide.
1. From the Repository tree view, drop the source table to the design workspace and select the tOracleCDC
component in the [Components] dialog box, drop tLogRow from the Palette to the design workspace, and
link the two components together using a Row Main connection.
The Property type is set to Repository since we used the connection information related to CDC stored
locally in the Repository tree view. All connection fields are automatically filled in.
In the Schema using CDC, Repository is selected and this way the schema corresponding to Oracle source
table is automatically retrieved.
The name of the source table that holds change data appears in the Table using CDC field. In this example,
the table is called CLIENT.
The CDC Log Mode check box is selected since you selected this mode when setting up the CDC environment.
3. For the Events to catch option, select the check box corresponding to the event(s) you want to catch. In this
example, we want to catch the three events, Insert, Update and Delete.
In the console, you can read the output results that correspond to what you can see in the [View All Changes]
dialog box.
Connect to the Oracle database as an administrative user and run the following statement to display its archiving
information:
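For example, in SQL*Plus the following command (a typical choice) displays the archiving information:
archive log list;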
If the database is not operating in the archive log mode, run the following statements to activate the archive log
mode:
shutdown immediate;
startup mount;
alter database archivelog;
alter database open;
During XStream configuration, if the Oracle database is a container database (CDB), you need to ensure that all
pluggable databases (PDBs) in the CDB are in open read/write mode.
To view the open mode of PDBs, connect to the Oracle database as an administrative user and run the following
statement.
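For example, the following query (a typical choice) lists the PDBs and their open mode:
select name, open_mode from v$pdbs;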
To open PDBs, connect to the Oracle database as an administrative user and run the following statement.
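For example, the following statement (a typical choice) opens all PDBs in read/write mode:
alter pluggable database all open;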
To configure an XStream administrator, connect to the Oracle database as an administrative user with the right to
create users, grant privileges, and create tablespaces, and then proceed with the following steps.
1. Create a tablespace for the XStream administrator by running the following statement. Skip this step if you
want to use an existing tablespace.
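As a sketch, with xstream_tbs and the datafile path as placeholders:
create tablespace xstream_tbs datafile '$ORACLE_PATH/xstream_tbs.dbf' size 25M autoextend on maxsize unlimited;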
2. Create a new user to act as the XStream administrator by running the following statements. Skip this step
to identify an existing user.
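As a sketch for the non-CDB case, with xstrmadmin and its password as placeholders (see the notes below for the CDB and default-tablespace variants):
create user xstrmadmin identified by xstrmadmin_pwd default tablespace xstream_tbs quota unlimited on xstream_tbs;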
• If you are creating an XStream administrator in a CDB, the XStream administrator must be a common user. The
name of a common user must begin with c## or C##, and you need to include the CONTAINER=ALL clause in
the statement.
• If you are creating an XStream administrator using the Oracle default tablespace, you need to remove the DEFAULT
TABLESPACE and QUOTA UNLIMITED ON clauses in the statement.
3. Grant privileges to the XStream administrator by running the following statements and procedures:
BEGIN
DBMS_XSTREAM_AUTH.GRANT_ADMIN_PRIVILEGE(
grantee => 'username',
privilege_type => 'CAPTURE',
grant_select_privileges => TRUE);
END;
/
BEGIN
DBMS_XSTREAM_AUTH.GRANT_ADMIN_PRIVILEGE(
grantee => 'username',
privilege_type => 'APPLY',
grant_select_privileges => TRUE);
END;
/
Note that if you are granting privileges to a common user, you need to include the CONTAINER=ALL clause in
the above GRANT statements and procedures.
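In addition to the procedures above, the XStream administrator typically needs at least the session privilege. As a sketch, with username as a placeholder:
grant create session to username;
For a common user in a CDB, the equivalent is grant create session to c##username container=all;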
1. In the Repository tree view, set up a database connection using OCI connection type to an Oracle database,
and then retrieve the schema of the source table in which data changes are to be captured. In this example,
the source table is PERSON. For detailed information about how to set up a database connection and retrieve
table schemas, see Centralizing database metadata.
2. Right-click CDC Foundation under the newly created Oracle database connection and select Create CDC
from the contextual menu. The [Create Change Data Capture] dialog box opens up.
3. Select XStream mode and click Show sample initialization script. The [Sample Initialization Script]
dialog box opens up.
Note that this is only a sample script for configuring XStream on an Oracle 12c server; you need to update the username, password, and tablespace information according to your settings and run the statements and procedures in Oracle. For detailed information, see Prerequisites for the XStream mode.
Click Finish to create CDC in Oracle and close the [Create Change Data Capture] dialog box.
4. Right-click the source table and select add CDC from the contextual menu.
5. Right-click the source table and select Generate XStreamsOut Script from the contextual menu. The
[XStreamsOut generation script] dialog box opens up.
6. Fill in the XStreams server name field with the outbound server name. The name must be unique.
Identify the source table(s) by selecting the check box(es) in the corresponding Include in script column.
Click Generate Script. The [XStreamsOut Script] dialog box pops up.
Note that if the script execution fails, you can connect to the Oracle database as an XStream administrator
and run the script in Oracle.
8. Connect to the Oracle database as an XStream administrator and check the status of the outbound server by
running the following statement:
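Because an outbound server runs as an apply process, one typical way to check its status is, for example:
select apply_name, status from dba_apply;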
To remove the outbound server and the XStream Out configuration when they are no longer needed, run the following statements:
exec DBMS_XSTREAM_ADM.DROP_OUTBOUND('xout');
exec DBMS_XSTREAM_ADM.REMOVE_XSTREAM_CONFIGURATION(container => 'ALL');
1. In the Repository tree view, set up a database connection using OCI connection type to an Oracle database,
and then retrieve the schema of the target table to which data changes will be replicated. In this example,
the target table is PERSON_BAK. For detailed information about how to set up a database connection and
retrieve table schemas, see Centralizing database metadata.
2. Right-click CDC Foundation under the newly created Oracle database connection and select Create CDC
from the contextual menu. The [Create Change Data Capture] dialog box opens up.
3. Select XStream mode in the Options area and click Show sample initialization script. The [Sample
Initialization Script] dialog box opens up.
Note that this is only a sample script for configuring XStream on an Oracle 12c server; you need to update the username, password, and tablespace information according to your settings and run the statements and procedures in Oracle. For detailed information, see Prerequisites for the XStream mode.
Click Finish to create CDC and close the [Create Change Data Capture] dialog box.
4. Right-click the target table and select add CDC from the contextual menu.
5. Right-click the target table and select Generate XStreamsIn Script from the contextual menu. The
[XStreamsIn generation script] dialog box opens up.
6. Fill in the XStreams server name field with the inbound server name.
Fill in the Queue name field with the name of the inbound server's queue.
Click Generate script. The XStream In script will be generated and displayed.
Note that if the script execution fails, you can connect to the Oracle database as an XStream administrator
and run the script in Oracle.
8. Connect to the Oracle database as an XStream administrator and check the status of the inbound server by
running the following statement:
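Like the outbound server, the inbound server runs as an apply process, so one typical way to check its status is, for example:
select apply_name, status from dba_apply;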
If the inbound server is disabled, start it by running the following statement:
exec DBMS_APPLY_ADM.START_APPLY('xin');
To remove the inbound server configuration and delete any error transactions when they are no longer needed, run the following statements:
exec DBMS_XSTREAM_ADM.DROP_INBOUND('xin');
exec DBMS_XSTREAM_ADM.REMOVE_QUEUE('xin_queue');
exec DBMS_APPLY_ADM.DELETE_ALL_ERRORS(apply_name => 'xin');
For an example of how to use the Oracle CDC components and for more information about the properties and the
parameters of the Oracle CDC components, see Talend Components Reference Guide.
Since version 5.4.2, the Studio does not automatically create, modify or delete any journal and can only run the CDC process on the basis of the journal and receivers that you or the administrator of your AS/400 system provide, depending on the policy of your company. For this reason, ensure that an old receiver has been treated by RUNCDC before deleting it, so as to avoid losing the information recorded in that receiver.
The following two sections describe how to set up a CDC environment in Talend Studio. The contents described
include:
• the prerequisites for reusing a CDC environment migrated from one of the 5.4.1 or earlier versions of the Studio.
• the AS/400 user account for CDC must have *ALLOBJ privileges or at least all of the following privileges:
- CRTSAVF,
- CLRSAVF,
- DLTF,
- RSTLIB,
- DLTLIB,
- CRTLIB,
- CHGCMD,
• if the files of interest are already journalized, the journal must be created with option IMAGES (*BOTH)
For further information about the setup of these listed prerequisites, see the manual of your AS/400 system.
2. Set a DB connection dedicated to CDC by filling in DB connection data. For example, a connection called
AS400_CDC.
3. Set a DB connection to where data is located by filling in DB connection data. For example, a connection
called AS400_DATA.
To identify the table(s) to catch, right-click the newly created data connection to retrieve the schema of the source
table and load it in the repository. In this example, this data connection is AS400_DATA.
1. Right-click the CDC Foundation folder of the data connection and select Create CDC to open the [Create
Change Data Capture] dialog box. In this example, this data connection is AS400_DATA.
2. In the [Create Change Data Capture] dialog box, click the three-dot button next to the Set Link Connection
field to select connection to the database that corresponds to CDC. In this example, select AS400_CDC.
3. Click Create Subscriber to create the subscribers. The command to be executed is then displayed, as in the following example:
open <AS400_server_host>
user <Username> <Password>
quote rcmd "crtsavf qgpl/instfitcdc"
quote rcmd "clrsavf qgpl/instfitcdc"
bin
cd qgpl
put "<Studio_install>\plugins\org.talend.designer.cdc_<version>\resource
\fitcdc.savf" instfitcdc
quote rcmd "rstlib savlib(fitcdc) dev(*savf) savf(qgpl/instfitcdc)
RSTLIB(<CDC_library_name>)"
quote rcmd "CHGCMD CMD(<CDC_library_name>/RUNCDC) PGM(<CDC_library_name>/F2CD00)
CURLIB(<CDC_library_name>)"
quote rcmd "dltf qgpl/instfitcdc"
quit
It is automatically executed via FTP by the Studio to install the RUNCDC program, restore the CDC library
(the CDC database) and create the TSUBSCRIBERS table.
4. If you need to manually execute this command, copy this command and click Skip to close this dialog box.
In this situation, this command is not executed by the Studio and you need to paste or even edit this command
by yourself and execute it in your AS/400 system.
Otherwise, click Execute to directly run the default command in the Studio. Then a step-by-step execution
list appears.
Note that in the list, you might see an error with number 550 describing issues such as the fact that not all objects have been restored. This can be normal if the library that was reported as not restored has in fact been restored in your AS/400 system. Contact the administrator of your AS/400 system for clarification.
5. Once done, in the [Create Change Data Capture] dialog box, click Finish.
In the CDC Foundation folder, the CDC database connection appears, along with the subscription table schema.
Step 4: Create the change table, subscribe to the source table and activate the subscription
You must specify the table to which the subscriber wants to subscribe and then activate the subscription.
1. Right-click the schema that corresponds to the source table and select Add CDC. The [Create Subscriber
and Execute SQL Script] dialog box displays. The long name and the short name of the source table are
both displayed in this dialog box.
The source table to be monitored should have a primary key so that the CDC system can identify the rows on which changes have been made. You cannot set up a CDC environment if the schema of your source table does not have a primary key.
In this example, since the long name CUSTOMERS does not exceed 10 characters, the short name reads the
same as the long name.
Note that if you are using a source table retrieved in one of the 5.4.1 or earlier versions of the Studio, only its long name has been retrieved. In that case, you have to retrieve this table again in the current Studio.
2. In the Subscriber Name field, enter the name you want to give the subscriber. By default, the subscriber
name is APP1.
In the CDC Foundation folder, the change table schema and the associated view appear.
1. Create a journal receiver:
CRTJRNRCV JRNRCV(<source_library_name>/<receiver_name>)
2. Create a new journal and attach the receiver created in the previous step:
CRTJRN JRN(<source_library_name>/<journal_name>) JRNRCV(<source_library_name>/<receiver_name>)
3. For the file to be monitored, start journaling changes into the journal created in the previous step:
STRJRNPF FILE(<source_library_name>/<file_to_be_monitored>) JRN(<source_library_name>/<journal_name>) IMAGES(*BOTH)
5. To view any changes made to the data, right-click the relevant table in the Table schemas folder and select
View All Changes to open the relevant dialog box.
For an example of how to use a CDC component and for more information regarding the properties and the
parameters of the tAS400CDC component, see Talend Components Reference Guide.
1. Drop tAS400CDC and tLogRow from the Palette onto the design workspace, and link the two components
using a Row Main connection.
3. Select Repository from the Property type drop-down list and click on [...] to fetch the schema which
corresponds to your CDC connection. The fields which follow are automatically filled in with the information
required to connect to the CDC database.
4. Select Repository from the Schema drop-down list and click on [...] to fetch the schema which corresponds
to the AS/400 table to be monitored.
5. In the Table Name field, enter the name of the source table monitored by CDC, here CUSTOMERS.
6. In the Source Library field, enter the name of the source library. By default, this is the same as the name of the source database.
7. In the Subscriber field, enter the name of the subscriber who will extract the modified data. By default, the
subscriber is named APP1.
8. In the Events to catch field, select the check box which corresponds to the event(s) to be caught.
Alternatively, in the Advanced settings view, select the Customize FTP command check box and enter
<CDC_library_name>/RUNCDC FILE(<Source_library_name>/<Source_table_name>)
LIBOUT(<CDC_library_name>) MODE(*DETACHED) MBROPT(*ADD) DTCHJRN(*YES)
This command allows tAS400CDC to detach the older receiver from the journal and create and attach the
newer receiver to that journal.
In the console, you can read the output results which correspond to what is displayed in the [View All Changes]
dialog box.
• The long name and the short name of an AS/400 table are both retrieved with the table schema in the Repository. The CDC table automatically uses the short name as its own name. This means that you have to retrieve your AS/400 table again after the migration in order to have both the source table and the CDC table recognized by the Studio.
• The structure of the TSUBSCRIBERS table has been updated in order to contain the long name and the short
name of a source table. Therefore, you need to delete the existing CDC and add new CDC to reinitialize your
TSUBSCRIBERS table.
• The Studio does not create, modify or delete any journal and consequently cannot automatically detach an older receiver from a journal and attach a newer one to it. Since this detachment and attachment process is indispensable for the Studio to take the latest changes into account, you have to execute the corresponding command in the AS/400 system yourself, for example the CHGJRN command with the JRNRCV(*GEN) option, which detaches the current receiver and attaches a newly generated one.
Alternatively, you can use a custom FTP command through the tAS400CDC component to automate this process.
For further information about how to use this Customize FTP command feature in tAS400CDC, see Talend
Components Reference Guide.
For more information about the CDC feature, see CDC architectural overview.
For more information about setting up a CDC environment, see Setting up a CDC environment.
The CDC feature is available for the following databases: AS/400, DB2, Informix, Ingres, MS SQL Server,
MySQL, Oracle, PostgreSQL, Sybase and Teradata.
Source database    Target database with the same name    Target database with different names
AS/400             Supported                             Supported
MySQL              Supported                             Supported
Teradata           Supported                             Supported

Source database    Target database with the same name                  Target database with different names
                   Same schema (table)    Different schema (table)     Same schema (table)    Different schema (table)
DB2                Supported              Supported                    Not Supported          Not Supported
Informix           Supported              Supported                    Not Supported          Not Supported
Ingres             Supported              Not Supported                Not Supported          Not Supported
MS SQL Server      Supported              Supported                    Supported              Supported
Oracle             Supported              Supported                    Not Supported          Not Supported
PostgreSQL         Supported              Supported                    Not Supported          Not Supported
Sybase             Supported              Supported                    Supported              Supported
At runtime, the Joblet code is integrated into the Job code itself. No separate code is generated, the same Java
class being used.
Thus, using a Joblet has no drawback in terms of performance: the execution time is unchanged whether your Job includes a Joblet or the whole subjob directly.
Moreover, if you intend to log and monitor the statistics and execution errors or warnings of the whole Job, the Joblets included in your Job will be monitored without requiring further log components (such as tLogCatcher, tStatCatcher or tFlowMeterCatcher).
This specific component can be used like any other usual component within a Job. For more information on how
to design a Job, see Designing a Job.
Unlike for the tRunJob component, the Joblet code is automatically included in the Job code at runtime, thus
using less resources. As it uses the same context variables as the Job itself, the Joblet is easier to maintain.
To use a group of components as a standalone Job, you can use the tRunJob component. Unlike the Joblet, the
tRunJob has its own context variables. For more information on the tRunJob component, see Talend Components
Reference Guide.
2. In the [New Joblet] dialog box, fill in at least the Name field to designate the Joblet. You can also add
information to ease the Joblet management, such as: Description, Version, Author and Status.
Field Description
Name Enter a name for your new Joblet. A message comes up if you enter prohibited characters.
Purpose Enter the Joblet purpose or any useful information regarding the job in use.
Description Enter a description if need be for the Job created.
Author This field is read-only as it shows by default the current user login.
Locker This field is read-only as it shows by default the current user login.
Version The version is also read-only. You can manually increment the version using the M and m buttons.
Status You can define the status of a Job in your preferences. By default none is defined. To define them, go
to Window > Preferences > Talend > Status.
Path This field is read-only because it refers to the item access path in the repository. This field is empty
when the item is created in the root folder.
Icon Select the icon you want to use for your Joblet. It will show next to the Joblet name in the Repository
tree view and in the Palette as well.
3. In the Icon area, click the [...] button to open a window where you can browse to an icon of your choice and
add it to your Joblet, if needed.
4. Select the icon and click Open. The window closes and the selected icon displays in the Icon area in the
[New Joblet] dialog box.
The icon must have the dimensions 32 x 32 pixels. You will get an image-size related error if you try to use icons with other dimensions.
6. Click Finish to validate your changes and close the dialog box.
The design workspace opens showing the Joblet name as tab label. By default the newly created Joblet
includes an input and an output Joblet component.
The INPUT component is only to be used if there is a flow coming from the main Job that should be used in the Joblet, and the OUTPUT component is only to be used if there is a flow going out of the Joblet that needs to be used in the main Job. You can remove either or both of them as needed.
7. Include the transformation components you need and connect them to the Joblet input and the output
components. In the example below, the input component is removed, and a tMap component is used for the
transformation step.
As for any component requiring a schema definition, you can define your schema as Built-in, import it from
an XML file or retrieve it from the Repository tree view.
The output schema is automatically retrieved from the preceding component (likely the transformation component)
but you can also change it if you like.
The next step is to use the Joblet you have just created in your usual Job in order to replace the transformation steps.
2. Right-click the component(s) you want to transform to a Joblet and select Refactor to Joblet from the
contextual menu to open the [New Joblet] dialog box. In this example, we want to transform tMap to a Joblet.
3. Fill in at least the Name field to designate the Joblet. You can also add information to ease the Joblet
management, such as: Description, Version, Author and Status.
Field Description
Name Enter a name for your new Joblet. A message comes up if you enter prohibited characters.
Purpose Enter the Joblet purpose or any useful information regarding the job in use.
Description Enter a description if need be for the Job created.
Author This field is read-only as it shows by default the current user login.
Locker This field is read-only as it shows by default the current user login.
Version The version is also read-only. You can manually increment the version using the M and m buttons.
Status You can define the status of a Job in your preferences. By default none is defined. To define them, go
to Window > Preferences > Talend > Status.
Path This field is read-only because it refers to the item access path in the repository. This field is empty
when the item is created in the root folder.
Icon Select the icon you want to use for your Joblet. It will show next to the Joblet name in the Repository
tree view and in the Palette as well.
4. In the Icon area, click the [...] button to open a window where you can browse to an icon of your choice
and add it to your Joblet.
5. Select the icon and click Open. The window closes and the selected icon displays in the Icon area in the
[New Joblet] dialog box.
The icon must have the dimensions 32 x 32 pixels. You will get an image-size related error if you try to use icons with other dimensions.
7. Click Finish to validate your changes and close the dialog box.
The design workspace opens, showing the Joblet name as the tab label. The input and output Joblet components are automatically included in the Joblet during its creation, together with the transformation component selected for creating the Joblet, tMap in this example.
The tMap component is then automatically replaced by the Joblet component in the Job.
You can as well include other transformation steps after your Joblet, if necessary. For more information about
modifying a Joblet, see How to edit a Joblet.
1. Like any other component, click the relevant Joblet name in the Palette and drop it onto the design workspace to include it in your Job.
2. Connect the Joblet with the input and/or output components of the Job.
3. Define all other components properties and context variables, if required, before running the Job like any
other Job.
2. Click the Joblet tab in the lower part of the Studio to display the relevant view and then click Extra.
1. Drag the Trigger Input component from the Palette and drop it above your Joblet.
2. Right-click Trigger Input and select a link of the type Trigger > OnSubjobOk so that your Joblet starts
after the execution of the first subjob.
3. Drag the Trigger Output component from the Palette and drop it below the Joblet.
4. Right-click the input component of the Joblet and select a link of the type Trigger > OnSubjobOk so that
your third subjob starts after the execution of your Joblet, Transformation.
2. From the Repository tree view, click the created Joblet (Transformation) and drop it in the Job.
3. Drop a tFileOutputDelimited component next to the Joblet component, drop a tWarn component above the
Joblet component, and drop a tMsgBox component below the Joblet component.
4. Right-click the Joblet component and then select the Row > Joblet OUTPUT_1 link and click
tFileOutputDelimited.
5. Double-click tFileOutputDelimited to display its basic settings and then define the path to the folder and
file to be created in the File Name field.
Drag and drop a tWarn component from the Logs & Errors family over the Joblet component.
6. Right-click this component and select the link of the type Trigger > On Subjob Ok (TRIGGER_INPUT_1)
and then click the Joblet component.
7. Double-click the component that represents the Joblet to display its basic settings view.
In the Joblet TRIGGER_INPUT_1 field, the link type defined in the Joblet is read-only.
If you use several Trigger Input components in the Joblet and corresponding launching components in the Job, verify that the right component is attached to the right launching link in the Attached node field of the Basic settings view.
8. From the Version list, select the Joblet version you want to use in your Job. In this example, we use the
latest version of the Joblet.
9. Drag and drop a tMsgBox from the Misc family under the Joblet component.
10. Right-click the Joblet component and select the link Trigger > On Subjob Ok (TRIGGER_OUTPUT_1).
The tWarn component sends a warning message and launches the next subjob holding the Joblet you created: Transformation. Once the second subjob is successfully executed, it launches a third subjob holding the tMsgBox component, indicating at the same time that the transformation has been carried out.
You can make changes to a Joblet and get your changes reflected in the actual Job execution output. These changes
can be made directly in the Job or in a separate tab view.
Note that you cannot modify the links of the Joblet directly in the Job.
2. Double-click any component to open its Basic settings view and modify its properties.
1. Double-click the Joblet you want to edit. You can also right-click it and select Open Joblet Component
from the contextual menu.
If you modify any trigger link connected to the Trigger Input or to the Trigger Output component, be sure
to update the Job using this Joblet accordingly.
If you do not want the Joblet to open when double-clicking on it, see How to change specific component settings
(Talend > Components).
In the same way as for Job Designs, you can create folders via the right-click menu to gather families of Joblets together. Right-click the Joblets node and choose Create folder. Give a name to this folder and click OK. If you have already created Joblets that you want to move into this new folder, simply drag and drop them into the folder.
In addition, a Job containing a Joblet allows its Joblet to use different context variables from those the Job is using.
To do so, simply drop the group of contexts you want to use onto the Joblet in the workspace of the Job.
For more information on how to apply a context variable to a Job from the repository, see How to apply Repository
context variables to a Job.
In Talend Studio Repository tree view, click Job Designs > Joblets to expand the Joblets node.
1. Right-click an existing Joblet, and select Setup routine dependencies from the contextual menu.
2. In the [Setup routine dependencies] dialog box, select the User routines tab and click the plus button to
add a customized routine.
4. To add a system routine, select the System routines tab and click the plus button to add a system routine.
You can delete routines that you want to exclude from the exported routine dependencies by clicking the cross button under the User routines tab or the System routines tab. This helps you avoid redundancy in the exported routine dependencies.
For more information about how to manage routines, see Managing routines.
• menu bar,
• toolbar,
• design workspace,
• Palette,
• various configuration views in a tab system, for any of the elements in the data integration Job designed in
the workspace,
The figure below illustrates Talend Studio main window and its panels and views.
The various panels and their respective features are detailed hereafter.
All the panels, tabs, and views described in this documentation are specific to Talend Studio. Some views listed in the [Show
View] dialog box are Eclipse specific and are not subjects of this documentation. For information on such views, check
Eclipse online documentation at http://www.eclipse.org/documentation/.
• some standard functions, such as Save, Print, Exit, which are to be used at the application level.
• some Eclipse native features to be used mainly at the design workspace level as well as specific Talend Studio
functions.
The table below describes menus and menu items available to you on the menu bar of Talend Studio.
The menus on the menu bar differ slightly according to what you are working with: a Business Model or a Job.
Import Opens a wizard that helps you to import different types of resources (files, items,
preferences, XML catalogs, etc.) from different sources.
Export Opens a wizard that helps you to export different types of resources (files, items,
preferences, breakpoints, XML catalogs, etc.) to different destinations.
Exit Closes the Studio main window.
Open/Edit Job Script Opens a dialog box where you can open and customize an exported Job script. For more information, see How to open an exported Job script.
Edit Undo Undoes the last action done in the Studio design workspace.
Redo Redoes the last action done in the Studio design workspace.
Cut Cuts selected object in the Studio design workspace.
Copy Copies the selected object in the Studio design workspace.
Paste Pastes the previously copied object in the Studio design workspace.
Delete Deletes the selected object in the Studio design workspace.
Select All Selects all components present in the Studio design workspace.
View Zoom In Obtains a larger image of the open Job.
Zoom Out Obtains a smaller image of the open Job.
Grid Displays a grid in the design workspace. All items in the open Job are snapped to it.
Snap to Geometry Enables the Snap to Geometry feature.
Window Perspective Opens different perspectives corresponding to the different items in the list.
Show View... Opens the [Show View] dialog box which enables you to display different views on
the Studio.
Maximize Active View or Editor Maximizes the current perspective.
Preferences Opens the [Preferences] dialog box which enables you to set your preferences.
You have access to this view only if you have a Master Data Management
or Data Quality license.
The icons on the toolbar differ slightly according to what you are working with: a Business Model or a Job.
The table below describes the toolbar icons and their functions.
Export items Exports repository items to an archive file, for deploying outside Talend Studio. If you intend to import the exported elements into a newer version of Talend Studio or on another workstation, make sure the source files are included in the archive.
Import items Imports repository items from an archive file into your current Talend Studio. For more
information regarding the import/export items feature, see How to import items.
Find a specific job Displays the relevant dialog box that enables you to open any Job listed in the Repository
tree view.
The Repository centralizes and stores all necessary elements for any Job design and business modeling contained
in a project.
The Refresh button allows you to update the tree view with the last changes made.
The Activate filter button allows you to open the filter settings view so as to configure the display of
the Repository view.
The Switch branch button is displayed when your Studio is connected to a remote project. It allows you to switch across project branches without needing to restart your Studio. For further information, see Working with project branches and tags.
The Repository tree view stores all your data (Business, Jobs, Joblets) and metadata (Routines, DB/File
connections, any meaningful Documentation and so on).
It is possible to filter the nodes, Jobs or items listed in the Repository tree view in order to display only a selected group.
For more information about filtering the tree view, see Filtering entries listed in the Repository tree view.
The table below describes the nodes in the Repository tree view.
Node Description
Business Models Under the Business Models node, are grouped all business models of the project. Double-click the
name of the model to open it on the design workspace. For more information, see Designing a
Business Model.
Job Designs The Job Designs node shows the tree view of the designed Jobs and joblets for the current project.
Double-click the name of a Job or joblet to open it on the design workspace. For more information,
see Designing a Job and Designing a Joblet.
Joblets The Joblets sub-node under the Job Designs node gathers all the joblets designed in the current
project. Double-click the name of the joblet to open it on the design workspace. For more
information, see Designing a Joblet.
Contexts The Contexts node groups files holding the contextual variables that you want to reuse in various
Jobs, such as filepaths or DB connection details. For more information, see Using contexts and
variables.
Code The Code node is a library that groups the routines available for this project and other pieces of code
that could be reused in the project. Click the relevant tree entry to expand the appropriate code piece.
The deleted elements are still present on your file system, in the recycle bin, until you right-click the recycle bin icon and select Empty Recycle bin.
Expand the recycle bin to view any elements held within. You can act on an element directly from the recycle bin, restoring it or deleting it forever, by right-clicking it and selecting the desired action from the list.
Referenced Projects The Referenced Projects node groups all the projects set as referenced projects by the administrator from Talend Administration Center. For more information on referenced projects, see Working with referenced projects and How to set the display mode of referenced projects.
For more information, see Opening or creating a Business Model and Creating a Job.
For both Business Models and Job Designs: active designs display in an easily accessible tab system above this workspace.
For Job Designs only. Under this workspace, you can access several other tabs:
• the Designer tab. It opens by default when creating a Job. It displays the Job in a graphical mode.
• the Code tab. It enables you to visualize the code and highlights the possible language errors.
• the Jobscript tab. It enables you to visualize and edit the Job script. For more information on how to edit a Job script, see How to edit a Job script and How to display a Job script.
A Palette is docked at the top of the design workspace to help you draw the model corresponding to your workflow
needs.
A.5. Palette
From the Palette, depending on whether you are designing a Job or modeling a Business Model, you can drop
technical components or shapes, branches and notes to the design workspace for Job design or business modeling.
Related topics:
• Designing a Job.
The Component, Run Jobs, Problems and Error Log views gather all information relative to the graphical
elements selected in the design workspace or the actual execution of the open Job.
The Modules and Scheduler tabs are located in the same tab system as the Component, Logs and Run Job tabs.
Both views are independent from the active or inactive Jobs open on the design workspace.
You can show more tabs in this tab system and directly open the corresponding view if you select Window > Show view
and then, in the open dialog box, expand any node and select the element you want to display.
The sections below describe the view of each of the configuration tabs.
View Description
Component This view details the parameters specific to each component of the Palette. To create a Job that will function,
you are required to fill out the necessary fields of this Component view for each component forming your Job.
For more information about the Component view, see How to define component properties.
Run Job This view shows the current Job execution. It becomes a log console at the end of an execution. The log tab also has an informative function, for example for a Java component's operating progress.
The Error Log tab is hidden by default. As for any other view, go to Window > Show views, then expand the General node and select Error Log to display it in the tab system.
Modules This view shows whether a module is necessary and required for the use of a referenced component. Checking the Modules view helps you verify which modules you have or should have to run your Jobs smoothly.
For more information, see the Talend Installation Guide.
Job view The Job view displays various information related to the open Job on the design workspace. This view has
the following tabs:
Main tab
This tab displays basic information about the Job opened on the design workspace, for example its name, author,
version number, etc. The information is read-only. To edit it you have to close your Job, right-click its label on
the Repository tree view and click Edit properties on the drop-down list.
Extra tab
This tab displays extra parameters including multi thread and implicit context loading features. For more
information, see How to use the features in the Extra tab
Stats/Log tab
This tab allows you to enable/disable the statistics and logs for the whole Job.
You can already enable these features for every single component of your Job by simply using and setting the
relevant components: tFlowMeterCatcher, tStatCatcher, tLogCatcher.
For more information about these components, see Talend Components Reference Guide.
In addition, you can now set these features for the whole active Job (for all components of your Job) in one go,
without using the Catcher components mentioned above. This way, all components get tracked and logged in
the File or Database table according to your setting.
You can also save the current setting to Project Settings by clicking the button.
For more details about the Stats & Logs automation, see How to automate the use of statistics & logs.
Version tab
This tab displays the different versions of the Job opened on the design workspace and their creation and
modification dates.
History tab
This tab displays the different revisions of the Job stored in the SVN or Git repository, with their dates and authors. It appears only for Jobs created in a remote project.
Test Cases When a Job is selected from the Repository tree view or currently open on the design workspace, this view
lets you view and run the test cases created for the Job.
When a test case is selected from the Repository tree view or currently open on the design workspace, this
view displays the various information related to the test case, and lets you configure and run the test case or
its instances.
For more information on how to create, configure and run test cases, see Testing Jobs using test cases.
Problems This view displays the messages linked to the icons docked at a component in case of problems, for example when part of its settings is missing. Three types of icons/messages exist: Error, Warning and Info.
Job Hierarchy You can show this view by selecting Window > Show view... and then Talend > Job Hierarchy.
You can see Job Hierarchy only if you create a parent Job and one or more child Job(s) via the tRunJob
component. For more information about tRunJob, see Talend Components Reference Guide.
Properties When inserting a shape in the design workspace, the Properties view offers a range of formatting tools to help you customize your business model and improve its readability.
The Information panel is composed of two tabs, Outline and Code Viewer, which provide information regarding
the displayed diagram (either Job or Business Model) and also the generated code.
For more information, see How to display the code or the outline of your Job.
1. Click the Project Settings button on the Studio toolbar, or select File > Edit Project Properties from the menu bar.
2. In the tree diagram to the left of the dialog box, select the setting you wish to customize and then customize
it, using the options that appear to the right of the box.
From the dialog box you can also export or import the full set of settings that define a particular project:
• To export the settings, click on the Export button. The export will generate an XML file containing all of your
project settings.
• To import settings, click on the Import button and select the XML file containing the parameters of the project
which you want to apply to the current project.
Based on the default, global build templates, you can create folder-level build scripts. Build scripts generated
based on these templates are executed when building Jobs, and are added to your build archive if you select the
Sources (Maven) option when building a Job so that you can rebuild your built Job sources using Maven.
This section provides information on how to customize the build script templates. For information on how to build
a Job, see How to build Jobs.
The following example shows how to customize the global POM script template for standalone Jobs:
1. From the menu bar, click File > Edit Project properties to open the [Project Settings] dialog box.
2. Expand the Build > Maven > Default nodes, and then click the Standalone Job node to open the relevant
view that displays the content of the POM script template.
Depending on the license you are using, the project settings items in your Studio may differ from what is shown above.
3. Modify the script code in the text panel and click OK to finish your customization.
The following example shows how to add and customize the POM script template for building standalone Jobs
from Jobs in the CA_customers folder:
1. From the menu bar, click File > Edit Project properties to open the [Project Settings] dialog box.
2. Expand the Build > Maven > Setup custom scripts by folder > Job Designs > CA_customers nodes, and
then click the Standalone Job node to open the relevant view, from which you can add script templates or
delete all existing templates.
Depending on the license you are using, the project settings items in your Studio may differ from what is shown above.
3. Click the Create Maven files button to create script templates based on the global templates for standalone
Jobs.
4. Select the script template you want to customize, pom.xml in this example, to display the script code in the
code view. Modify the script code in the text panel and click OK to finish your customization.
Once the build script templates are created for a folder, you can also go to the directory where the XML
files are stored, <studio_installation_directory>\workspace\<project_name>\process\CA_customers in this
example, and directly modify the XML file of the template you want to customize. Your changes will affect
all Jobs in the folder and in all sub-folders except those with their own script set up.
If you are working in a remote project and if you modify an XML file directly, your changes will not be automatically
committed to the version control system. To make sure your changes are properly committed, we recommend that you
customize the script templates in Project Settings of your Talend Studio instead.
The modified script file will be taken into account when a Job is built with the Maven option activated.
There is no direct customization of the global build script templates for use with the CommandLine. As a workaround, you can add template files in the root directory <studio_installation_directory>\workspace\<project_name>\process\ for Jobs, and then modify the XML files. Note that these script templates will apply to all Jobs in all folders except those with their own build script templates set up.
For more information about the CommandLine, see Appendix A of the Talend Administration Center User Guide.
1. On the toolbar of the Studio's main window, click the Project Settings button or click File > Edit Project Properties on the menu bar to open the [Project Settings] dialog box.
In the General view of the [Project Settings] dialog box, you can add a project description, if you did not do so
when creating the project.
2. In the tree view of the [Project Settings] dialog box, expand Designer and select Palette Settings. The
settings of the current Palette are displayed in the panel to the right of the dialog box.
3. Select one or several components, or even set(s) of components you want to remove from the current project's
Palette.
4. Use the left arrow button to move the selection onto the panel on the left. This will remove the selected
components from the Palette.
5. To re-display hidden components, select them in the panel on the left and use the right arrow button to restore
them to the Palette.
6. Click Apply to validate your changes and OK to close the dialog box.
For more information on the Palette, see How to change the Palette layout and settings.
You can activate the automatic conversion option at the project level so that any tMap component added afterwards
in the project will have this feature enabled.
If needed, you can also define conversion rules to override the default conversion behavior of tMap.
1. On the toolbar of the Studio main window, click the Project Settings button or click File > Edit Project Properties from the menu bar to open the [Project Settings] dialog box.
2. In the tree view of the dialog box, expand General and select Auto-Conversion of types to open the relevant
view.
3. Select the Enable Auto-Conversion of types check box to activate the automatic type conversion feature
for all tMap components added afterwards in the project.
4. If needed, click the [+] button to add a line, select the source and target data types, and define a Java function
for data type conversion to create a conversion rule to override the default conversion behavior of tMap for
data that matches the rule. Press Ctrl+Space to access a list of available Java functions.
5. Click Apply to apply your changes and then OK to close the dialog box.
For more information about the automatic type conversion feature in tMap, see the tMap documentation of your
Talend Components Reference Guide.
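As an illustration, the Java function you enter for a conversion rule maps the source data type to the target data type. The following is a minimal, hypothetical sketch of such a function, assuming a String-to-Integer rule that trims whitespace before parsing; the class wrapper and names are only there to make the sketch self-contained and runnable:
public class ConversionRuleSketch {
    // Hypothetical rule: convert a String to an Integer, trimming
    // whitespace first and mapping empty values to null.
    public static Integer stringToInteger(String value) {
        if (value == null || value.trim().isEmpty()) {
            return null;
        }
        return Integer.parseInt(value.trim());
    }

    public static void main(String[] args) {
        System.out.println(stringToInteger(" 42 ")); // prints 42
    }
}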
1.
On the toolbar of the Studio main window, click or click File > Edit Project Properties from the menu
bar to open the [Project Settings] dialog box.
2. In the tree view of the dialog box, expand General and select Metadata of Talend Type to open the relevant
view.
The Metadata Mapping File area lists the XML files that hold the conversion parameters for each database
type used in Talend Studio.
• You can import, export, or delete any of the conversion files by clicking Import, Export or Remove
respectively.
• You can modify any of the conversion files according to your needs by clicking the Edit button to open
the [Edit mapping file] dialog box and then modify the XML code directly in the open dialog box.
To do so:
1.
On the toolbar of the Studio main window, click or click File > Edit Project Properties from the menu
bar to open the [Project Settings] dialog box.
2. In the tree view of the dialog box, expand General and select Version Management to open the
corresponding view.
3. In the Repository tree view, expand the node holding the items whose versions you want to manage, and then
select the check boxes of these items.
The selected items display in the Items list to the right along with their current version in the Version column
and the new version set in the New Version column.
• In the Options area, select the Change all items to a fixed version check box to change the version of
the selected items to the same fixed version.
• Click Select all dependencies if you want to update all of the items dependent on the selected items at
the same time.
• Click Select all subjobs if you want to update all of the subjobs dependent on the selected items at the
same time.
• To update the version of each item individually, select the Update the version of each item check box
and change the versions manually.
• Select the Fix tRunjob versions if Latest check box if you want the father Job of the current version to keep
using the child Job(s) of the current version in the tRunJob to be versioned, regardless of how their versions
are updated. For example, suppose a tRunJob is to be updated from the current version 1.0 to 1.1 at both father
and child levels. Once this check box is selected, the father Job 1.0 will continue to use the child Job 1.0
rather than the latest version (here, 1.1) when the update is done.
To use this check box, the father Job must be using the latest version of the child Job(s) in the tRunJob to be versioned,
that is, the Latest option must be selected from the drop-down version list in the Component view of the
child Job(s). For more information on tRunJob, see Talend Components Reference Guide.
5. Click Apply to apply your changes and then OK to close the dialog box.
To do so:
1.
On the toolbar of the Studio main window, click or click File > Edit Project Properties from the menu
bar to open the [Project Settings] dialog box.
2. In the tree view of the dialog box, expand General and select Status Management to open the corresponding
view.
3. In the Repository tree view, expand the node holding the items whose status you want to manage, and then
select the check boxes of these items.
The selected items display in the Items list to the right along with their current status in the Status column
and the new status set in the New Status column.
4. In the Options area, select the Change all technical items to a fixed status check box to change the status
of the selected items to the same fixed status.
6. To update the status of each item individually, select the Update the status of each item check box and change
the statuses manually.
7. Click Apply to apply your changes and then OK to close the dialog box.
To do so:
1.
On the toolbar of the Studio main window, click or click File > Edit Project Properties from the menu
bar to open the [Project Settings] dialog box.
2. In the tree view of the dialog box, click the Job Settings node to open the corresponding view.
3. Select the Use project settings when create a new job check boxes of the Implicit Context Load and Stats
and Logs areas.
4. Click Apply to validate your changes and then OK to close the dialog box.
You can then set up the path to the log file and/or database once and for all in the [Project Settings] dialog box so
that the log data is always stored in this location.
To do so:
1.
On the toolbar of the Studio main window, click or click File > Edit Project Properties from the menu
bar to open the [Project Settings] dialog box.
2. In the tree view of the dialog box, expand the Job Settings node and then click Stats & Logs to display
the corresponding view.
If you know that the preferences for Stats & Logs will not change depending upon the context of execution, then
simply set permanent preferences. If you want to apply the Stats & Logs settings individually, then it is better to set
these parameters directly onto the Stats & Logs view. For more information about this view, see How to automate
the use of statistics & logs.
3. Select the Use Statistics, Use Logs and Use Volumetrics check boxes where relevant, to select the type of
log information you want to set the path for.
4. Select a format for the storage of the log data: select either the On Files or On Database check box. Or select
the On Console check box to display the data in the console.
The relevant fields are enabled or disabled according to these settings. Fill out the File Name between quotes or
the DB name where relevant according to the type of log information you selected.
You can now store the database connection information in the Repository. Set the Property Type to Repository
and browse to retrieve the relevant connection metadata. The fields are then completed automatically.
Alternatively, if you save your connection information in a Context, you can also access it through Ctrl+Space.
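For example (hypothetical values), you might enter "C:/Talend/log/stats_file.csv" in the File Name field, or, if the path is stored in a context variable, retrieve an expression such as context.log_path through Ctrl+Space.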
To do so:
1.
On the toolbar of the Studio main window, click or click File > Edit Project Properties from the menu
bar to open the [Project Settings] dialog box.
2. In the tree view of the dialog box, expand the Job Settings node and then select the Implicit Context Load
check box to display the configuration parameters of the Implicit tContextLoad feature.
3. Select the From File or From Database check box according to whether you want to store your contexts in
a file or in a database.
4. For files, fill in the file path in the From File field and the field separator in the Field Separator field.
5. For databases, select the Built-in or Repository mode in the Property Type list and fill in the next fields.
7. Select the type of system message you want to have (warning, error, or info) in case a variable is loaded but
is not in the context or vice versa.
8. Click Apply to validate your changes and then OK to close the dialog box.
To do so:
1.
On the toolbar of the Studio main window, click or click File > Edit Project Properties from the menu
bar to open the [Project Settings] dialog box.
2. In the tree view of the dialog box, expand the Job Settings node and then click Use Project Settings to
display the use of the Implicit Context Load and Stats and Logs options in the Jobs.
3. In the Implicit Context Load Settings area, select the check boxes corresponding to the Jobs in which you
want to use the implicit context load option.
4. In the Stats Logs Settings area, select the check boxes corresponding to the Jobs in which you want to use
the stats and logs option.
5. Click Apply to validate your changes and then OK to close the dialog box.
To do so:
1.
On the toolbar of the Studio main window, click or click File > Edit Project Properties from the menu
bar to open the [Project Settings] dialog box.
2. In the tree view of the dialog box, click the Log4j node to open the Log4j view.
3. Select the Activate log4j in components check box to activate the log4j feature.
4. You can change the log4j configuration by modifying the XML instructions in the Log4j template area. For
more information on the log4j parameters, see http://wiki.apache.org/logging-log4j/Log4jXmlFormat.
For more information on how to use the log4j feature, see How to customize log4j output level at runtime.
To do so:
1.
On the toolbar of the Studio main window, click or click File > Edit Project Properties from the menu
bar to open the [Project Settings] dialog box.
2. In the tree view of the dialog box, click the Status node to define the main properties of your Repository
tree view elements.
The main properties of a repository item gather information such as Name, Purpose, Description,
Author, Version and Status of the selected item. Most properties are free text fields, but the Status field
is a drop-down list.
3. Click the New... button to display a dialog box and populate the Status list with the most relevant values,
according to your needs. Note that the Code cannot be more than 3 characters long and the Label is required.
Talend distinguishes between two status types: Technical status and Documentation status.
The Technical status list displays classification codes for elements which are to be run on stations, such
as Jobs, metadata or routines.
The Documentation status list helps classify the repository elements that can be used to
document processes (Business Models or documentation).
The Status list will offer the status levels you defined here when defining the main properties of your Job
designs and business models.
5. In the [Project Settings] dialog box, click Apply to validate your changes and then OK to close the dialog
box.
1.
On the toolbar of the Studio main window, click or click File > Edit Project Properties from the menu
bar to open the [Project Settings] dialog box.
2. In the tree view of the dialog box, click the Security node to open the corresponding view.
4. In the [Project Settings] dialog box, click Apply to validate your changes and then OK to close the dialog
box.
1.
On the toolbar of the Studio main window, click or click File > Edit Project Properties from the menu
bar to open the [Project Settings] dialog box.
2. In the tree view of the dialog box, click the Custom component node to open the corresponding view on the
right of the dialog box. If you have already installed user or custom components in your Studio, these
components are displayed in the left part of the Custom component view.
The custom components can be installed from the [Preferences] dialog box or imported from Talend
exchange.
For further information about how to install a user component from the [Preferences] dialog box, see How
to define the user component folder (Talend > Components).
For further information about how to import external custom components, see How to download/upload
Talend Community components.
One example is provided on Talend Help Center (THC), which describes how to download a custom
component from Talend Exchange and install it. See https://help.talend.com/display/KB/Installing+a+custom+component for details.
3.
Click the custom or user component(s) of your interest to activate the button.
4. Click this activated button to move the selected component(s) into the Shared Component view.
1.
In the Shared Components view, click the component(s) you want to stop sharing to activate the button.
2. Click this activated button to move the selected component(s) into the Custom Components view.
All the panels, tabs, and views described in this documentation are specific to Talend Studio. Some views listed in the [Show
View] dialog box are Eclipse specific and are not subjects of this documentation. For information on such views, check
Eclipse online documentation at http://www.eclipse.org/documentation/.
Talend Studio enables you to change the layout and position of your Palette according to your requirements. The
sections below explain all the management options you can carry out on the Palette.
B.2.1.1. How to show, hide the Palette and change its position
By default, the Palette might be hidden on the right hand side of your design workspace.
If you want the Palette to show permanently, click the left arrow, at the upper right corner of the design workspace,
to make it visible at all times.
You can also move around the Palette outside the design workspace within the Integration perspective. To enable
the standalone Palette view, click the Window menu > Show View... > General > Palette.
If you want to set the Palette apart in a panel, right-click the Palette head bar and select Detached from the
contextual menu. The Palette opens in a separate view that you can move around wherever you like within the
perspective.
This display/hide option can be very useful when you are in the Favorite view of the Palette. In this view, you usually have
a limited number of components; if you display them without their families, they appear in an alphabetical list, which
facilitates their usage. For more information about the Palette favorite, see How to set the Palette favorite.
To add a pin, click the pin icon on the top right-hand corner of the family name.
For more information about filtering the Palette, see Palette Settings.
For more information about adding components to the Palette, either from Talend Exchange or from your
own development, see How to download/upload Talend Community components and/or How to define the user
component folder (Talend > Components).
You can add/remove components to/from the Palette favorite view in order to have a quick access to all the
components that you mostly use.
To do so:
1. From the Palette, right-click the component you want to add to Palette favorite and select Add To Favorite.
2.
Do the same for all the components you want to add to the Palette favorite then click the Favorite button
in the upper right corner of the Palette to display the Palette favorite.
To delete a component from the Palette favorite, right-click the component you want to remove from the
favorite and select Remove From Favorite.
To restore the Palette standard view, click the Standard button in the upper right corner of the Palette.
You can also enlarge the component icons for better readability of the component list.
To do so, right-click any component family in the Palette and select the desired option in the contextual menu or
click Settings to open the [Palette Settings] window and fine-tune the layout.
For more information about the creation and development of user components, refer to the component creation
tutorial on our wiki at https://help.talend.com/pages/viewpage.action?pageId=226000909.
For more information about how to download user components in your Studio, see How to download/upload
Talend Community components.
All you need to do is to click the head border of a panel or to click a tab, hold down the mouse button and drag
the panel to the target destination. Release to change the panel position.
Click the minimize/maximize icons ( / ) to minimize the corresponding panel or maximize it. For more
information on how to display or hide a panel/view, see How to display Job configuration tabs/views.
Click the close icon ( ) to close a tab/view. To reopen a view, click Window > Show View > Talend, then click
the name of the panel you want to add to your current view or see Shortcuts and aliases .
If the Palette does not show or if you want to set it apart in a panel, go to Window > Show view...> General >
Palette. The Palette opens in a separate view that you can move around wherever you like within the perspective.
The Component, Run Job, and Contexts views gather all information relative to the graphical elements selected
in the design workspace or the actual execution of the open Job.
By default, when you launch Talend Studio for the first time, the Problems tab is not displayed until the first Job is
created. After that, the Problems tab is displayed in the tab system automatically.
The Modules and Scheduler[deprecated] tabs are located in the same tab system as the Component, Logs and
Run Job tabs. Both views are independent from the active or inactive Jobs open on the design workspace.
Some of the configuration tabs are hidden by default such as the Error Log, Navigator, Job Hierarchy,
Problems, Modules and Scheduler[deprecated] tabs. You can show hidden tabs in this tab system and directly
open the corresponding view if you select Window > Show view and then, in the open dialog box, expand the
corresponding node and select the element you want to display.
You can filter the Repository tree view by Job name, Job status, the user who created the Job/items, or simply
by selecting/clearing the check box next to the node/item you want to display/hide in the view. You can also set
several filters simultaneously.
1.
In the Studio, click the icon in the upper right corner of the Repository tree view and select Filter
settings from the contextual menu.
3. Follow the rules set below the field when writing the patterns you want to use to filter the Jobs.
In this example, we want to list in the tree view all Jobs that start with tMap or test.
4. In the [Repository Filter] dialog box, click OK to validate your changes and close the dialog box.
Only the Jobs that correspond to the filter you set are displayed in the tree view, those that start with tMap
and test in this example.
You can switch back to the default tree view, which lists all nodes, Jobs and items, by simply clicking the icon . This
will cause the green plus sign appended to the icon to turn into a red minus sign ( ).
1.
In the Studio, click the icon in the upper right corner of the Repository tree view and select Filter
settings from the contextual menu.
This table lists the authentication information of all the users who have logged in to Talend Studio and created
a Job or an item.
3. Clear the check box next to a user if you want to hide all the Jobs/items created by him/her in the Repository
tree view.
All Jobs/items created by the specified user will disappear from the tree view.
You can switch back to the default tree view, which lists all nodes, Jobs and items, by simply clicking the icon . This
will cause the green plus sign appended to the icon to turn into a red minus sign ( ).
1.
In the Studio, click the icon in the upper right corner of the Repository tree view and select Filter
settings from the contextual menu.
2. In the Filter By Status area, clear the check boxes next to the status type if you want to hide all the Jobs
that have the selected status.
All Jobs that have the specified status will disappear from the tree view.
You can switch back to the default tree view, which lists all nodes, Jobs and items, by simply clicking the icon . This
will cause the green plus sign appended to the icon to turn into a red minus sign ( ).
1.
In the Integration perspective of the Studio, click the icon in the upper right corner of the Repository
tree view and select Filter settings from the contextual menu.
2. Select the check boxes next to the nodes you want to display in the Repository tree view.
Consider, for example, that you want to show in the tree view all the Jobs listed under the Job Designs node,
three of the folders listed under the SQL Templates node and one of the metadata items listed under the
Metadata node.
Only the nodes/folders for which you selected the corresponding check boxes are displayed in the tree view.
If you do not want to show all the Jobs listed under the Job Designs node, you can filter the Jobs using the Filter By Name
check box. For more information on filtering Jobs, see How to filter by Job name.
Numerous settings you define can be stored in the Preferences and thus become your default values for all new
Jobs you create.
The following sections describe specific settings that you can set as preference.
First, click the Window menu of Talend Studio, then select Preferences.
1. From the menu bar, click Window > Preferences to open the [Preferences] dialog box.
2. Expand the Jobscript node and click Syntax Coloring to define the color of the different text elements.
3. In the Token Styles list, select the text element you want to modify the style of.
4. Click the Color or Background button to respectively change the color of the text or background.
5. In the Style area, select the style you want to apply to the text.
6. In the Font field, click Change... to change the font of the text.
1. Expand the Jobscript node and click Templates to define Job script templates.
2. Click New... to add a new template. The [New template] wizard opens.
4. In the Context list, select the context in which the template will be proposed in the job script when you press
Ctrl+Space.
5. Select the Automatically insert check box if you want the template to expand automatically on Ctrl+Space
when there is no other matching template available.
9. Click OK to go back to the template list. Your template displays in the list.
10. Click Apply to apply your changes and OK to close the wizard.
From the Job script template preference, you can also edit, delete, import or export templates.
1. If needed, click the Talend node in the tree view of the [Preferences] dialog box.
2. Enter a path in the Java interpreter field if the default directory does not display the right path.
On the same view, you can also change the preview limit and the path to the temporary files or the OS language.
1. From the menu bar, click Window > Preferences to open the [Preferences] dialog box.
2. In the tree view, expand the Java node and select Compiler.
3. In the Compiler compliance level list, select the compiler compliance level you want to use.
1. From the menu bar, click Window > Preferences to open the [Preferences] dialog box.
On this view, you can define the way component names and hints will be displayed.
4. Select the relevant check boxes to customize your use of the Talend Studio design workspace.
For further information about the creation and development of user components, see https://help.talend.com/
display/KB/How+to+create+a+custom+component.
The following procedure applies only to the external components. For the preferences of all the components, see
How to change specific component settings (Talend > Components).
The user component folder is the folder that contains the components you created and/or the ones you downloaded
from TalendForge. To define it, proceed as follows:
1. In the tree view of the [Preferences] dialog box, expand the Talend node and select Components.
2. Enter the User component folder path or browse to the folder that contains the custom components to be
added to the Palette of the Studio.
In order to be imported to the Palette of the Studio, the custom components have to be in separate folders
located at the root of the component folder you have defined.
3. Click Apply and then OK to validate the preferences and close the dialog box.
The Studio restarts and the external components are added to the Palette.
This configuration is stored in the metadata of the workspace. If the workspace of Talend Studio changes, you
have to reset this configuration again.
The following procedure applies to the external components and to the components included in the Studio. For the
preferences specific to the user components, see How to define the user component folder (Talend > Components).
1. In the tree view of the [Preferences] dialog box, expand the Talend node and select Components.
2. In the Row limit field, set the number of the data rows you want to see in the data viewer. For further
information about the data viewer, see How to view in-process data.
3. From the Default mapping links display as list, select the mapping link type you want to use in the tMap.
4. Under tRunJob, select the check box if you do not want the corresponding Job to open upon double clicking
a tRunJob component.
You will still be able to open the corresponding Job by right clicking the tRunJob component and selecting Open
tRunJob Component.
5. Under Joblet, select the check box if you do not want the corresponding Job to open upon double clicking
a Joblet component.
You will still be able to open the corresponding Job by right clicking the Joblet component and selecting Open Joblet
Component.
6. Under Component Assist, select the Enable Component Creation Assistant check box if you want to be
able to add a component by typing its name in the design workspace. For more information, see Adding
components to the Job.
7. Click Apply and then OK to validate the set preferences and close the dialog box.
This configuration is stored in the metadata of the workspace. If the workspace of Talend Studio changes, you
have to reset this configuration again.
1. From the menu bar, click Window > Preferences to open the [Preferences] dialog box.
2. Expand the Talend node and click Documentation to display the documentation preferences.
• Select the Source code to HTML generation check box to include the source code in the HTML
documentation that you will generate.
• Select the Automatic update of corresponding documentation of job/joblet check box to automatically
update the Job and Joblet documentation.
• In the User Doc Logo field, specify an image file if you want your documentation to include your own logo.
• In the Company Name field, enter your company name to show on your documentation, if needed.
• Select the Use CSS file as a template when export to HTML check box to activate the CSS File field if
you need to use a CSS file to customize the exported HTML files.
For more information on documentation, see How to generate HTML documentation and Documentation tab.
1. From the menu bar, click Window > Preferences to open the [Preferences] dialog box.
2. Expand the Talend node and click Exchange to display the Exchange view.
• If you are not yet connected to the Talend Community, click Sign In to go to the Connect to TalendForge
page to sign in using your Talend Community credentials or create a Talend Community account and
then sign in.
If you are already connected to the Talend Community, your account is displayed and the Sign In button
becomes Sign Out. To get disconnected from the Talend Community, click Sign Out.
• If you are not yet connected to the Talend Community and you do not want to be prompted to connect to
the Talend Community when launching the Studio, select the Don't ask me to connect to TalendForge
at login check box.
• By default, while you are connected to the Talend Community, whenever an update to an installed
community extension is available, a dialog box appears to notify you about it. If you often check for
community extension updates and you do not want that dialog box to appear again, clear the Notify me
when updated extensions are available check box.
For more information on connecting to the Talend Community, see the Getting Started Guide. For more
information on using community extensions in the Studio, see How to download/upload Talend Community
components.
1. From the menu bar, click Window > Preferences to open the [Preferences] dialog box.
2. Expand the Talend and Import/Export nodes in succession and then click Shell Setting to display the
relevant view.
3. In the Command field, enter your code before or after %GENERATED_TOS_CALL% to display it before or after
the code of your Job.
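For example (a hypothetical command; the echo markers are assumptions, not part of the product), entering the following in the Command field would print markers before and after the generated Job call:
echo "=== Job start ===" && %GENERATED_TOS_CALL% && echo "=== Job end ==="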
1. From the menu bar, click Window > Preferences to display the [Preferences] dialog box.
2. Expand the Talend and Import/Export nodes in succession and then click Metadata Bridge to display the
relevant view.
3. Set the preferences according to your use of the Talend Metadata Bridge:
• In the Location area, select the Embedded option to use the MIMB tool embedded in the Talend Metadata
Bridge. This is the default option.
To use the MIMB tool you have installed locally, select Local Directory and specify the installation
directory of the MIMB tool.
• In the Temp folder field, specify the directory to hold the temporary files generated during metadata
import/export executions, if you do not want to use the default directory.
• In the Log folder field, specify the directory to hold the log files generated during metadata import/export
executions, if you do not want to use the default directory.
• Select the Show detailed logs check box to generate detailed log files during metadata import/export
executions.
4. Click Apply to apply your changes; click OK to validate the settings and close the [Preferences] dialog box.
For more information on using the Talend Metadata Bridge to import/export metadata, see the Knowledge Base
article Importing and exporting metadata using Talend Metadata Bridge.
1. From the menu bar, click Window > Preferences to open the [Preferences] dialog box.
2. Expand the Talend node and click Internationalization to display the relevant view.
3. From the Local Language list, select the language you want to use for the graphical interface of Talend
Studio.
4. Click Apply and then OK to validate your change and close the [Preferences] dialog box.
5. Restart the Studio to display the graphical interface in the selected language.
1. From the menu bar, click Window > Preferences to display the [Preferences] dialog box.
2. Expand the Talend node and click Palette Settings to display the Palette Settings view.
3. To limit the number of components displayed in the Recently Used list, enter your preferred number
in the Recently used list size field.
4. To enable searching for a component using a phrase that describes its function or purpose as search keywords,
in the search field of the Palette or in the text field that appears on the design workspace,
select the Also search from Help when performing a component searching check box. With this check
box selected, you can find your component on the Palette or in the component list on the design workspace
as long as you can find it in the F1 Help information by using the same descriptive phrase as keywords.
5. To change the number of the search result entries when using a descriptive phrase as search keywords, enter
your preferred number in the Result limitation from Help field.
1. From the menu bar, click Window > Preferences to open the [Preferences] dialog box.
2. Expand the Talend node and click Performance to display the repository refresh preference.
You can improve your performance when you deactivate automatic refresh.
• Select the Deactivate auto detect/update after a modification in the repository check box to deactivate
the automatic detection and update of the repository.
• Select the Check the property fields when generating code check box to activate the audit of the property
fields of the component. When a property field is not correctly filled in, the component is surrounded
by red on the design workspace.
You can optimize performance if you disable property fields verification of components, for example if you clear
the Check the property fields when generating code check box.
• Select the Generate code when opening the job check box to generate code when you open a Job.
• Select the Check only the last version when updating jobs or joblets check box to only check the latest
version when you update a Job or a Joblet.
• Select the Propagate add/delete variable changes in repository contexts to propagate variable changes
in the Repository Contexts.
• Select the Activate the timeout for database connection check box to establish database connection time
out. Then set this time out in the Connection timeout (seconds) field.
• Select the Add all user routines to job dependencies, when create new job check box to add all user
routines to Job dependencies upon the creation of new Jobs.
• When working in an SVN managed project, select the Auto check of svn to detect the update check box
to allow the Studio to automatically check whether there have been new commits on the svn, making the Studio
faster. Then set the time interval between these checks in the Detect update in each (seconds) field.
If you clear this check box, the Studio updates the svn for each operation it makes. This slows down the
Studio but reduces the number of requests on the svn server.
• When working in an SVN or Git managed project, select the Automatic refresh of locks check box to
allow the Studio to automatically retrieve the lock status of all items contained in the project upon each
action made in the Studio. If you find communications with the Talend Administration Center slow, or if
the project contains a large number of locked items, you can clear this check box to improve the performance
of the Studio.
1. From the menu bar, click Window > Preferences to display the [Preferences] dialog box.
2. Expand the Talend and Performance nodes in succession and then click Nexus settings to display the
relevant view.
• In the Timeout for nexus connection (ms) field, specify the time in milliseconds you want your Talend
Studio to wait for an interaction with the Nexus server before cutting the connection, 0 for an infinite
timeout.
• In the Jars check frequency field, specify how often you want your Talend Studio to check for updates:
• -1 if you don't want your Talend Studio to check for updates at all.
• 0 if you want your Talend Studio to check for updates at any action that needs a Jar or Jars, for example
when a Job is built or executed from the Studio.
4. Click Apply to apply your changes; click OK to validate the settings and close the [Preferences] dialog box.
For more information on installing and configuring Nexus artifact repository, see the Talend Installation Guide.
1. From the menu bar, click Window > Preferences to open the [Preferences] dialog box.
2. Expand the Talend node and click Repository to display the relevant view.
3. Select the Merge the reference project check box to show referenced projects as part of the Job Designs
folder.
For more detailed information about project reference preferences, see How to set the display mode of referenced
projects.
1. From the menu bar, click Window > Preferences to display the [Preferences] dialog box.
2. Expand the Talend node and click Run/Debug to display the relevant view.
• In the Talend client configuration area, you can define the execution options to be used by default:
Stats port range: Specify a range for the ports used for generating statistics, in particular if the ports defined by
default are used by other applications.
Trace port range: Specify a range for the ports used for generating traces, in particular if the ports defined by default
are used by other applications.
Save before run: Select this check box to save your Job automatically before its execution.
Clear before run: Select this check box to delete the results of a previous execution before re-executing the Job.
Exec time: Select this check box to show the Job execution duration.
Statistics: Select this check box to show the statistics measurement of data flow during Job execution.
Traces: Select this check box to show data processing during Job execution.
Pause time: Enter the time to wait before the display of each data line in the traces table.
• In the Job Run VM arguments list, you can define the parameters of your current JVM according to your needs.
The default parameters -Xms256M and -Xmx1024M correspond respectively to the minimal and maximal
memory capacities reserved for your Job executions.
If you want to use some JVM parameters for only a specific Job execution, for example if you want to display
the execution result for this specific Job in Japanese, you need to open this Job's Run view and then, in the Run
view, configure the advanced execution settings to define the corresponding parameters.
For further information about the advanced execution settings of a specific Job, see How to set advanced execution
settings.
For more information about possible parameters, check the site http://www.oracle.com/technetwork/java/javase/
tech/vmoptions-jsp-140102.html.
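For example (hypothetical values), the argument list for a memory-hungry Job whose execution results should display in Japanese might contain the following standard JVM options:
-Xms512M
-Xmx2048M
-Dfile.encoding=UTF-8
-Duser.language=ja
-Duser.country=JP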
You can also use the CommandLine to transfer your Jobs from your Talend Studio to a remote JobServer for Job
execution. To use the CommandLine, make sure you have logged on to a remote project via a remote connection.
To remotely execute your Jobs, you need to configure the remote JobServer details in the Studio preferences. You
can also configure the CommandLine you will use.
To open the preference page for distant run settings, do the following:
1. From the menu bar, click Window > Preferences to open the [Preferences] dialog box.
2. Expand the Talend and the Run/Debug nodes in succession and then click Remote.
To allow monitoring the JVM resource usage during Job execution on a remote JobServer, do the following:
2. In Remote JMX port field, enter a listening port number that is free in your system.
To define a specific Unix OS user allowed to start the Job execution on a remote JobServer, enter the user name
in the Run as (Set up user for Unix) field. If left blank, any of the existing Operating System users can start
the Job execution.
1. In the Remote Jobs Servers area, click the [+] button to add a new line in the table.
2. Fill in all the fields for the JobServer: Name, Host name (or IP address), Standard port, Username,
Password, and File transfer Port. The Username and Password fields are not required if no users are
configured in the configuration file conf/users.csv of the JobServer. For more information about Job execution
server configuration, see the Talend Installation Guide.
If you use the CommandLine to transfer your Jobs to the remote JobServer, make sure the JobServer name
is identical to the execution server label configured in Talend Administration Center. For more information,
see the Talend Administration Center User Guide.
1. Select the Enable commandline server check box if you want to use a remote commandline.
3. Fill in the fields as configured for the commandline in Talend Administration Center: Name, Host name (or
IP address), and Port. For more information, see Talend Administration Center User Guide.
For more information about how to execute a Job on a remote server, see How to run a Job remotely.
1. From the menu bar, click Window > Preferences to open the [Preferences] dialog box.
2. On the tree view of the opened dialog box, expand the Talend node.
3. Click the Specific settings node to display the corresponding view on the right of the dialog box.
4. Select the Allow specific characters (UTF8,...) for columns of schemas check box.
1. From the menu bar, click Window > Preferences to open the [Preferences] dialog box.
2. Expand the Talend node, and click Specific Settings > Default Type and Length to display the data length
and type of your schema.
• In the Default Settings for Fields with Null Values area, fill in the data type and the field length to apply
to the null fields.
• In the Default Settings for All Fields area, fill in the data type and the field length to apply to all fields
of the schema.
• In the Default Length for Data Type area, fill in the field length for each type of data.
1. From the menu bar, click Window > Preferences to open the [Preferences] dialog box.
2. Expand the Talend and Specific Settings nodes in succession and then click Sql Builder to display the
relevant view.
• Select the add quotes, when you generated sql statement check box to precede and follow column and
table names with inverted commas in your SQL queries (see the example after this list).
• In the AS400 SQL generation area, select the Standard SQL Statement or System SQL Statement
check boxes to use standard or system SQL statements respectively when you use an AS/400 database.
• Clear the Enable check queries in the database components (disable to avoid warnings for specific
queries) check box to deactivate the verification of queries in all database components.
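For example (a hypothetical query; the exact quoting character depends on the database type), with the add quotes check box selected, a generated statement might read SELECT "id", "name" FROM "customers" rather than SELECT id, name FROM customers.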
1. From the menu bar, click Window > Preferences to display the [Preferences] dialog box.
2. Expand the Talend node and then click SSL to display the relevant view.
3. Define the Keystore Configuration for the local certificate to be sent to the remote host:
a. Click Browse next to the Path field and browse to the keystore file that stores your local credentials.
c. From the Keystore Type list, select the type of keystore to use.
4. Define the Truststore Configuration for verification of the remote host's certificate:
a. Click Browse next to the Path field and browse to the truststore file.
c. From the Keystore Type list, select the type of keystore to use.
5. Click Apply to apply your changes; click OK to validate the settings and close the [Preferences] dialog box.
By default, Talend Studio automatically collects your Studio usage data and sends this data on a regular basis
to servers hosted by Talend. You can view the usage data collection and upload information and customize the
Usage Data Collector preferences according to your needs.
Be assured that only the Studio usage statistics data will be collected and none of your private information will be collected
and transmitted to Talend.
1. From the menu bar, click Window > Preferences to display the [Preferences] dialog box.
2. Expand the Talend node and click Usage Data Collector to display the Usage Data Collector view.
3. Read the message about the Usage Data Collector, and, if you do not want the Usage Data Collector to collect
and upload your Studio usage information, clear the Enable capture check box.
4. To have a preview of the usage data captured by the Usage Data Collector, expand the Usage Data Collector
node and click Preview.
5. To customize the usage data upload interval and view the date of the last upload, click Uploading under the
Usage Data Collector node.
• By default, if enabled, the Usage Data Collector collects the product usage data and sends it to Talend
servers every 10 days. To change the data upload interval, enter a new integer value (in days) in the Upload
Period field.
• The read-only Last Upload field displays the date and time the usage data was last sent to Talend servers.
Before actually starting the Job, let's inspect the input data and the expected output data.
The file structure, usually called Schema in Talend Studio, includes the following columns:
• First name
• Last name
• Address
• City
The table structure is slightly different, therefore the data expected to be loaded into the DB table should have
the following structure:
In order to load this table, we will need to use the following mapping process:
The Name column is filled out with a concatenation of first and last names.
The Address column data comes from the equivalent Address column of the input file, but undergoes an upper-case
transformation before the loading.
The County column is fed with the name of the county where the city is located, using a reference file which will
help filter the cities of Orange and Los Angeles counties.
To do so, we will use a reference file, listing cities that are located in Orange and Los Angeles counties such as:
City County
Agoura Hills Los Angeles
Alhambra Los Angeles
Aliso Viejo Orange
Anaheim Orange
Arcadia Los Angeles
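Before building the Job, here is a minimal plain-Java sketch of the mapping logic described above, using hypothetical sample values; Talend generates equivalent code inside the Job, so this only clarifies the three transformations (concatenation, upper-case conversion, and county lookup):
import java.util.HashMap;
import java.util.Map;

public class MappingSketch {
    public static void main(String[] args) {
        // Reference data: city -> county (an extract of the reference file).
        Map<String, String> countyByCity = new HashMap<>();
        countyByCity.put("Anaheim", "Orange");
        countyByCity.put("Alhambra", "Los Angeles");

        // Hypothetical input record.
        String firstname = "John", lastname = "Doe";
        String address = "12 main street", city = "Anaheim";

        String name = firstname + " " + lastname;    // Name: concatenation
        String upperAddress = address.toUpperCase(); // Address: upper case
        String county = countyByCity.get(city);      // County: reference lookup

        System.out.println(name + ";" + upperAddress + ";" + city + ";" + county);
    }
}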
1. Creation of the Job, configuration of the input file parameters, and reading of the input file,
3. Definition of the reference file parameters, relevant mapping using the tMap component, and selection of inner
join mode,
• On the left-hand side: the Repository tree view that holds Jobs, Business Models, Metadata, shared Code,
Documentation and so on.
• On the right-hand side: the Palette of business or technical components depending on the software tool you
are using within Talend Studio.
To the left of the Studio, the Repository tree view that gives an access to:
• The Business Modeler: For more information, see Modeling a Business Model.
• The Job Designer: For details about this part, see Getting started with a basic Job.
• The Metadata Manager: For details about this part, see Managing Metadata.
• Contexts and routines: For details, see Using contexts and variables and Managing routines.
To create the Job, right-click Job Designs in the Repository tree view and select Create Job.
In the dialog box that opens, only the first field (Name) is required. Type in California1 and click Finish.
An empty Job then opens on the main window, and the Palette of technical components (by default, to the right of
the Studio) comes up, showing a dozen component families such as Databases, Files, Internet, Data Quality
and so on. Hundreds of components are already available.
To read the file California_Clients, let's use the tFileInputDelimited component. This component can be found
in the File/Input group of the Palette. Click this component then click to the left of the design workspace to place
it on the design area.
Let's now define the reading properties for this component: file path, column delimiter, encoding, and so on. To do so, let's
use the Metadata Manager. This tool offers numerous wizards that will help us configure parameters and allow
us to store these properties for one-click reuse in all future Jobs we may need.
As our input file is a delimited flat file, let's select File Delimited from the right-click menu of the Metadata folder in
the Repository tree view. Then select Create file delimited. A wizard dedicated to delimited files displays:
• At Step 1, only the Name field is required: simply type in California_clients and go to the next Step.
• At Step 2, select the input file (California_Clients.csv) via the Browse... button. Immediately, an extract of the
file shows in the Preview area at the bottom of the screen so that you can check its content. Click Next.
• At Step 3, we will define the file parameters: file encoding, line and column delimiters, and so on. As our input file is
pretty standard, most default values are fine. The first line of our file is a header containing column names.
To retrieve these names automatically, click Set heading row as column names and then click Refresh Preview.
Then click Next to proceed to the last step.
• At Step 4, each column of the file is to be set. The wizard includes algorithms which guess the type and length of
each column based on the first rows of data in the file. The suggested data description (called schema in Talend Studio)
can be modified at any time. In this particular scenario, it can be used as is.
We can now use it in our input component. Select the tFileInputDelimited you had dropped on the design
workspace earlier, and select the Component view at the bottom of the window.
Select the vertical tab Basic settings. In this tab, you'll find all technical properties required to let the component
work. Rather than setting each one of these properties, let's use the Metadata entry we just defined.
Select Repository as Property type in the list. A new field shows: Repository. Click the [...] button and select the
relevant Metadata entry from the list: California_clients. You can notice that all parameters now get filled out
automatically.
At this stage, we will terminate our flow by simply sending the data read from this input file onto the standard
output (StdOut).
To do so, add a tLogRow component (from the Logs & Errors group).
To link both components, right-click the input component and select Row/Main. Then click the output component:
tLogRow.
This Job is now ready to be executed. To run it, select the Run tab on the bottom panel.
Enable the statistics by selecting the Statistics check box in the Advanced Settings vertical tab of the Run view,
then run the Job by clicking Run in the Basic Run tab.
The content of the input file is thus displayed in the console.
• transformations
• rejections
• and more...
Remove the link that binds the Job's two components together by right-clicking the link and selecting Delete. Then
place the tMap component of the Processing family in between, before linking the input component to the tMap
as we did previously.
Finally, to link the tMap to the standard output, right-click the tMap component, select Row/*New Output*
(Main) and click the tLogRow component. Type in out1 in the dialog box to implement the link. Logically, a
message box shows up (for the back-propagation of schemas); ignore it by clicking No.
To the left, you can see the schema (description) of your input file (row1). To the right, your output is for the
time being still empty (out1).
Drop the Firstname and Lastname columns to the right, onto the Name column as shown on the screen below.
Then drop the other columns Address and City to their respective line.
• Change the Expression of the Name column to row1.Firstname + " " + row1.Lastname. This concatenates the
Firstname column with the Lastname column, strictly following this Java syntax, so that the two columns
display together in a single column.
• Change the Expression of the Address column to row1.Address.toUpperCase(), which will change the
address case to upper case.
Then remove the Lastname column from the out1 table and increase the length of the remaining columns. To do
so, go to the Schema Editor located at the bottom of the tMap editor and proceed as follows:
1. Select the column to be removed from the schema, and click the cross icon.
2. Select the column whose length you need to increase.
3. Type in the intended length in the Length column. In this example, change the length of every remaining
column to 40.
As the first name and the last name of a client are concatenated, it is necessary to increase the length of the Name column
in order to match the full name size.
No transformation is made onto the City column. Click OK to validate the changes and close the Map editor
interface.
If you run your Job at this stage (via the Run view, as we did before), you'll notice that the changes you defined
are implemented.
For example, the addresses are displayed in upper case and the first names and last names are gathered together
in the same column.
Then drop this newly created metadata to the top of the design area to automatically create a reading component
pointing to this metadata.
Double-click again on the tMap component to open its interface. Note that the reference input table (row2)
corresponding to the LA and Orange county file, shows to the left of the window, right under your main input
(row1).
Now let's define the join between the main flow and the reference flow. In this use case, the join is pretty basic
to define, as the City column is present in both files and the data match perfectly. But even if this were not
the case, we could have carried out operations directly at this level to establish a link between the data (padding,
case change and so on).
To implement the join, drop the City column from your first input table onto the City column of your reference
table. A violet link then displays, to materialize this join.
Now, we are able to use the County column from the reference table in the output table (out1).
Eventually, click the OK button to validate your changes, and run the new Job.
As you can see, the last column is only filled out for cities in Los Angeles and Orange counties. For all other lines,
this column is empty. The reason is that, by default, the tMap implements a left outer join mode. If you
want to filter your data to display only lines for which the tMap finds a match, open the tMap again,
click the tMap settings button and select Inner Join from the Join Model list of the reference table (row2).
To do so, let's first create the Metadata describing the connection to the MySQL database. Double-click Metadata/
MySQL/DemoMySQL in the referential (provided that you imported the Demo project properly). This
opens the Metadata wizard.
In Step 2 of the wizard, type in the relevant connection parameters. Check the validity of this connection by
clicking the Check button. Finally, validate your changes by clicking Finish.
Drop this metadata to the right of the design workspace while holding down the Ctrl key, in order to
automatically create a tMysqlOutput component.
Reconnect the out1 output flow from the tMap to the new component tMysqlOutput (Right-click/Row/out1):
1. Type in LA_Orange_Clients in the Table field, in order to name your target table, which will be created
on the fly.
2. Select the Drop table if exists and create option from the Action on table list.
3. Click Edit Schema and click the Reset DB type button (DB button on the toolbar) in order to automatically
fill out the DB type if need be.
Run the Job again. The target table should be automatically created and filled with data in less than a second!
In this scenario, we used only four of the hundreds of components available in the Palette,
grouped in different categories (databases, Web service, FTP and so on)!
And more components, this time created by the community, are also available on the community site
(talendforge.org).
For more information regarding the components, see Talend Components Reference Guide.
In this scenario, a pre-defined csv file containing customer information is loaded in a database table. Then the
loaded data is selected using a tMap, and output to a local file and to the console using the output stream feature.
The file structure, usually called Schema in Talend Studio, includes the following columns:
• id (Type: Integer)
Thus the expected output data should have the following structure:
• id (Type: Integer)
All three columns above come from the respective columns in the input data.
1. Create the Job, define the schema for the input data, and read the input file according to the defined schema.
A complete Job looks like the one in the following image. For detailed instructions on designing the
Job, read the following sections.
1. Drop a tFileInputDelimited component onto the design workspace, and double-click it to open the Basic
settings view to set its properties.
2. Click the three-dot button next to the File name/Stream field to browse to the path of the input data file.
You can also type in the path of the input data file manually.
3. Click Edit schema to open a dialog box to configure the file structure of the input file.
4. Click the plus button to add six columns and set the column names and Types as listed in the
following:
To do so:
1. Drop a tJava component onto the design workspace, and double-click it to open the Basic settings view to
set its properties.
// Create the target directory for the output file.
new java.io.File("C:/myFolder").mkdirs();
// Open the output file and store the stream in globalMap for later retrieval.
globalMap.put("out_file", new java.io.FileOutputStream("C:/myFolder/customerselection.txt", false));
The code typed in this step creates a new directory C:/myFolder for saving the output file
customerselection.txt, which is defined in the following steps. You can customize the code to suit your actual
practice.
3. Connect tJava to tFileInputDelimited using a Trigger > On Subjob Ok connection. This will trigger the
subjob that starts with tFileInputDelimited when tJava succeeds in running.
2. Click the three-dot button next to Map Editor to open a dialog box to set the mapping.
3. Click the plus button on the left to add six columns for the schema of the incoming data. These columns should
be the same as the following:
4. Click the plus button on the right to add a schema of the outgoing data flow.
5. Select New output and click OK to save the output schema. For the time being, the output schema is still
empty.
6. Click the plus button beneath the out1 table to add three columns for the output data.
7. Drop the id, CustomerName and CustomerAge columns onto their respective line on the right.
2. Select the Use Output Stream check box to enable the Output Stream field and fill the Output Stream
field with the following command:
(java.io.OutputStream)globalMap.get("out_file")
You can customize the command in the Output Stream field by pressing Ctrl+Space to select a built-in command
from the list, or by typing the command into the field manually to suit your actual practice. In this scenario, the
command we use in the Output Stream field calls the java.io.OutputStream class to output the filtered data
stream to a local file, which is defined in the Code area of tJava in this scenario (a standalone sketch of this pattern
is given at the end of this section).
3. Connect tFileInputDelimited to tMap using a Row > Main connection and connect tMap to
tFileOutputDelimited using a Row > out1 connection which is defined in the Map Editor of tMap.
4. Click Sync columns to retrieve the schema defined in the preceding component.
1. Drop a tLogRow component onto the design workspace, and double-click it to open its Basic settings view.
4. Click Sync columns to retrieve the schema defined in the preceding component.
The selected data is also output to the specified local file customerselection.txt.
For an example of Job using this feature, see Scenario: Utilizing Output Stream in saving filtered data to a local
file of tFileOutputDelimited in Talend Components Reference Guide.
For the principle of the Use Output Stream feature, see How to use the Use Output Stream feature.
The use case below describes how to use the Implicit Context Load feature of your Talend Studio to load context
parameters dynamically at the time of Job execution. For how to load context parameters explicitly at the time
of Job execution, see the documentation of the tContextLoad component in the Talend Components Reference
Guide. For more information on using contexts and variables, see Using contexts and variables.
The Job in this use case is composed of only two components. It will read employee data stored in two MySQL
databases, one for testing and the other for production purposes. The connection parameters for accessing these
two databases are stored in another MySQL database. When executed, the Job loads these connection parameters
dynamically to connect to the two databases.
db_testing:

  key       value
  --------  ---------
  host      localhost
  port      3306
  username  root
  password  talend
  database  testing

db_production:

  key       value
  --------  ---------
  host      localhost
  port      3306
  username  root
  password  talend
  database  production
You can create these database tables using another Talend Job that contains tFixedFlowInput and tMysqlOutput
components. For how to use these components, see the Talend Components Reference Guide.
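If you would rather not build that Job, a plain JDBC sketch along the following lines could create and populate the two tables. The connection URL, the contextdb database name, and the credentials are assumptions for this illustration; adjust them to your MySQL setup.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateContextTables {
    public static void main(String[] args) throws Exception {
        // Assumed connection details; replace with your own.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/contextdb", "root", "talend");
             Statement stmt = conn.createStatement()) {
            for (String db : new String[] {"testing", "production"}) {
                String table = "db_" + db;
                // Two-column key/value layout, as shown in the tables above.
                stmt.executeUpdate("CREATE TABLE IF NOT EXISTS " + table
                        + " (`key` VARCHAR(20), `value` VARCHAR(50))");
                stmt.executeUpdate("INSERT INTO " + table + " VALUES"
                        + " ('host','localhost'), ('port','3306'),"
                        + " ('username','root'), ('password','talend'),"
                        + " ('database','" + db + "')");
            }
        }
    }
}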
2. Select the Contexts view of the Job, and click the [+] button at the bottom of the view to add five rows in the
table to define the following variables, all of type String, without defining their values, which will be loaded
dynamically at Job execution: host, port, username, password, and database.
4. Click in the Value field of the newly created variable and click the button that appears to open the Configure
Values dialog box, and click New... to open the New Value dialog box. Enter the name of one of the database
tables holding the database connection details and click OK.
5. Click New... again to define the other table holding the database connection details. When done, click OK
to close the Configure Values dialog box.
Now the variable db_connection has a list of values db_testing and db_production, which are the database
tables to load the connection parameters from.
6. Select the Prompt check box next to the Value field of the db_connection variable to show the Prompt fields
and enter the prompt message to be displayed at execution time.
2. Fill the Host, Port, Database, Username, and Password fields with the relevant
variables defined in the Contexts tab view: context.host, context.port, context.database,
context.username, and context.password respectively in this example.
3. Fill the Table Name field with employees, which is the name of the table holding employees information
in both databases in our example.
4. Then fill in the Schema information. If you stored the schema in the Repository, then you can retrieve it by
selecting Repository and the relevant entry in the list.
In this example, the schema of both the database tables to read is made of six columns: id (INT, 2 characters
long), name (VARCHAR, 20 characters long), email (VARCHAR, 25 characters long), sex (VARCHAR,
1 character long), department (VARCHAR, 10 characters long), and position (VARCHAR, 10 characters
long).
5. Click Guess Query to retrieve all the table columns, which will be displayed on the Run tab, through the
tLogRow component.
6. In the Basic settings view of the tLogRow component, select the Table option to display data records in
the form of a table.
The following example shows how to configure the Implicit Context Load feature in the Job view for this particular
Job. If you want to configure the feature to be reused across different Jobs, select File > Edit Project properties
from the menu bar to open the Project Settings dialog box, go to Job Settings > Implicit context load, select
the Implicit tContextLoad check box, and set the parameters following steps 2 through 6 below. Then in the Job
view, select the Use Project Settings check box to apply the settings to the Job.
1. From the Job view, select the Extra vertical tab, and select the Implicit tContextLoad check box to enable
context loading without using the tContextLoad component explicitly in the Job.
2. Select the source to load context parameters from. A context source can be a two-column flat file or a two-
column database table. In this use case the database connection details are stored in database tables, so select
the From Database option.
3. Define the database connection details just like defining the basic settings of a database input component.
In this example, all the connection parameters are used just for this particular Job, so select Built-In from
the Property Type list and fill in the connection details manually.
4. Fill the Table Name field with the context variable named db_connection defined in the Contexts view of
the Job so that we will be able to choose the database table to load context parameters from dynamically
at Job execution.
5. As we will fetch all the connection details from the database tables unconditionally, leave the Query
Condition field blank.
6. Select the Print operations check box to list the context parameters loaded at Job execution.
A dialog box pops up asking you to select a database. Select a database and click OK.
The loaded context parameters and the content of the employees table of the selected database are displayed
on the Run console.
2. Now press F6 to launch the Job again and select the other database when prompted.
The loaded context parameters and the content of the employees table of the other database are displayed
on the Run console.
Related topics:
• Job Settings
• Context settings
For more information on the multi-thread execution feature, see How to execute multiple Subjobs in parallel.
1. In the Repository tree view, right-click the Job created in the use case Using the Implicit Context Load feature
and select Duplicate from the context menu. Then, in the [Duplicate] dialog box enter a new name for the
Job, employees_testing in this example, and click OK.
2. Open the new Job, and label the components to better reflect their roles.
5. On the Extra tab of the Job view of the Job employees_testing, fill the Table Name field of database settings
with db_testing; on the Extra tab of the Job view of the Job employees_production, fill the Table Name
field with db_production.
1. Create a new Job and add two tRunJob components on the design workspace, and label the components to
better reflect their roles.
2. In the Component view of the first tRunJob component, click the [...] button next to the Job field and specify
the Job it will run, employees_testing in this example.
Configure the other tRunJob component to run the other Job, employees_production.
For more information about tRunJob, see Talend Components Reference Guide.
3. On the Extra tab of the Job view, select the Multi thread execution check box to activate the Multi-thread
Execution feature.
2. In the parent Job, press F6 or click Run on the Run view to start execution of the child Jobs.
The child Jobs are executed in parallel, reading employees data from both databases and displaying the data
on the console.
An alternative way of implementing parallel execution is using the tParallelize component. For more information,
see How to orchestrate parallel executions of Subjobs and the Talend Components Reference Guide.
Prerequisites:
Make sure an existing task is available in the Job Conductor page of Talend Administration Center.
2. Connect the tREST component to the tLogRow component using a Row > Main connection.
3. Connect the tSetGlobalVar component to the tREST component using a Trigger > OnSubjobOK
connection.
1. In the Contexts tab view, click the [+] button four times to add four variables.
3. In the Value field under the Default context, enter the variable values:
For the tac_url variable, type in the URL of the Talend Administration Center Web application, http://
localhost:8080/org.talend.administrator for example.
For the tac_user variable, type in the administrator user name in Talend Administration Center Web
application, admin@company.com for example.
For the tac_pwd variable, type in the administrator password in Talend Administration Center Web
application, admin for example.
For the task_id variable, type in the ID of the task you want to execute, 1 for example.
1. In the Repository tree view, expand Code to display the Routines folder.
3. The [New routine] dialog box opens. Enter the information required to create the routine, then click Finish
to proceed to the next step.
The newly created routine appears in the Repository tree view, directly below the Routines node. The routine
editor opens to reveal a model routine which contains a simple example, by default, comprising descriptive
text in blue, followed by the corresponding code.
4. At the beginning, right after the package routines line of code, add the following:
import com.sun.org.apache.xml.internal.security.utils.Base64;
To do so, start typing and press Ctrl+Space to open the list of templates, then select
com.sun.org.apache.xml.internal.security.utils.*; then replace the * sign with Base64.
For more information about the parameters and actions available in the MetaServlet, see the Talend
Administration Center User Guide.
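The full routine body is not reproduced in this guide. As a rough sketch, assuming the Base64 import above is what performs the encoding, it could look like the following; the method name base64Encode is taken from the call shown after this sketch.

package routines;

import com.sun.org.apache.xml.internal.security.utils.Base64;

public class MetaServlet {

    // Encodes the MetaServlet JSON parameters so they can be appended
    // to the Talend Administration Center REST URL.
    public static String base64Encode(String text) {
        return Base64.encode(text.getBytes());
    }
}

The scenario then uses this routine to encode the runTask parameters: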
MetaServlet.base64Encode("{\"actionName\":\"runTask\",\"taskId\":\"" +
    context.task_id + "\",\"mode\":\"synchronous\",\"context\":{\"Default\":\"" +
    ((String)globalMap.get("tMsgBox_1_RESULT")) + "\"},
3. Then double-click the tREST component to display its Basic settings view.
4. Fill the URL field with the URL of the Web service you are going to invoke. For this use case, type in:
to call the service and encode the MetaServlet parameters in JSON format.
5. From the HTTP Method list, select GET to send an HTTP request for generating a task.
In this way, the MetaServlet is invoked via the REST API of Talend Administration Center with the relevant
parameters.
6. In the Basic settings view of the tLogRow component, select the Basic option to display the result in the
Run console.
The console shows that the tREST component sends an HTTP request to the server end to run the specified
task, and that the task has been executed without errors.
In the Job Conductor page of Talend Administration Center, the status of the task is now Ready to run.
For more information on how to create those rules, see Centralizing a Validation Rule in Managing Metadata. As
they are stored at the metadata level in the Repository, they can be easily reused and modified.
1. From the Palette, drop these components onto the design workspace: a database input component and
a database output component, here tMysqlInput and tMysqlOutput, to upload the data, a tLogRow
component to display the rejected data in the console and a tJava component to display the number of lines
processed in the console.
2. Connect the input and output database components using a Row > Main link, and connect the tMysqlInput
component to the tJava component using a Trigger > OnSubjobOk link.
You will be able to create the reject link between the tMysqlOutput and tLogRow components only after you
have applied the validation rule to the tMysqlOutput component.
2. Select Repository as Property type and click the three-dot button next to the field to retrieve the connection
properties that correspond to the metadata you want to check.
3. Select Repository from the Schema drop down list and click the three-dot button next to the field to retrieve
the schema that corresponds to your database table.
4. Click the three-dot button next to the Table field to select the table to check.
5. Click Guess Query to automatically retrieve the query corresponding to the table schema.
7. Select Repository as Property type and click the three-dot button next to the field to retrieve the connection
properties that correspond to the database table in which you want to load the new data.
8. Click the three-dot button next to the Table field to select the table in which you will load the data.
9. In the Action on table list, select Default, and in the Action on data list, select the action corresponding to
the one(s) defined in the validation rule you apply on the Job. Here, since we selected On insert and On update
in the referential check validation rule we use, select Update or insert to trigger the rule.
10. If the schemas of the input and output components are not synchronized automatically, click Sync columns
to retrieve the schema of the input flow automatically.
Applying the validation rule and viewing the Job execution result
2. Select the Use an existing validation rule check box to apply the validation rule to the component.
3. In the Validation Rule Type list, select Repository and click the three-dot button to select the validation
rule from the [Repository Content] window.
4. Right-click tMysqlOutput, select Row > Rejects in the menu and drag to tLogRow to create a reject link
between the two components.
If you have enabled the Reject link option for this validation rule you can retrieve the rejected data to the
reject flow.
6. In the Code field, type in the code that will display the number of updated, inserted and rejected lines
processed:
System.out.println("Updates:
"+((Integer)globalMap.get("tMysqlOutput_1_NB_LINE_UPDATED"))+"\nInserts:
"+((Integer)globalMap.get("tMysqlOutput_1_NB_LINE_INSERTED"))+"\nRejects:
"+((Integer)globalMap.get("tLogRow_1_NB_LINE")));
Valid data is inserted or updated in the database table and the console displays the rows rejected by the validation
rule, along with the number of updates, inserts and rejects processed in the Job.
1. From the Palette, drop these components onto the design workspace: a database input component, here
tMysqlInput, whose values you will read and check; two tFileOutputDelimited components to write
the valid data to one file and the rejected data to another; and a tJava component to display the number
of lines processed in the console.
2. Connect the input database component to the first tFileOutputDelimited component using a Row > Main
link, and connect the tMysqlInput component to the tJava component with a Trigger > OnSubjobOk link.
You will be able to create the reject link between the tMysqlInput and the second tFileOutputDelimited component
only after you have applied the validation rule to the tMysqlInput component.
2. Select Repository as Property type and click the three-dot button next to the field to retrieve the connection
properties that correspond to the metadata you want to check.
3. Select Repository from the Schema drop down list and click the three-dot button next to the field to retrieve
the schema that corresponds to your database table.
4. Click the three-dot button next to the Table field to select the table to check.
5. Click Guess Query to automatically retrieve the query corresponding to the table schema.
2. Select the Use an existing validation rule check box to apply the validation rule to the component.
3. In the Validation Rule Type list, select Repository and click the [...] button to select the validation rule
from the [Repository Content] window.
4. Right-click tMysqlInput, select Row > Reject in the menu and drag to the second tFileOutputDelimited
component to create a reject link between the two components.
If you have enabled the Reject link option for this validation rule you can retrieve the rejected data to a
reject flow.
Configuring the output components and viewing the Job execution result
2. In the File Name field, specify the path and name of the file to write with the valid data.
4. Select the Include Header check box to include column headers in the output data.
5. Repeat the steps above on the second tFileOutputDelimited component to configure the output of the
rejected data.
7. In the Code field, type in the code that will display the number of valid and rejected lines processed:
System.out.println("Valid data:
"+((Integer)globalMap.get("tFileOutputDelimited_1_NB_LINE"))+"\nRejected
data: "+((Integer)globalMap.get("tFileOutputDelimited_2_NB_LINE")));
Valid data is output to the first delimited file and rejects to the second, and the console displays the number of
valid lines and the number of rejects processed in the Job.
These rules provide details that you must respect when writing the template statement, a comment line, or the
different relevant syntaxes.
These rules help you use the SQL code in specific use cases, such as accessing the various parameters defined in
components.
• An SQL statement can span multiple lines. In this case, no line should end with ; except the last one.
This applies to lines in the middle of an SQL statement as well as to lines within the <%... %> syntax.
• You can define new variables, use Java logical code like if, for and while, and also get parameter values.
For example, if you want to get the FILE_NAME parameter, use the following code:
<%
String filename = __FILE_NAME__;
%>
• This syntax cannot be used within an SQL statement. In other words, it should be used between two separate
SQL statements.
#sql sentence
DROP TABLE temp_0;
<%
#loop
for(int i=1; i<10; i++){
%>
#sql sentence
DROP TABLE temp_<%=i %>;
<%
}
%>
#sql sentence
DROP TABLE temp_10;
In this example, the syntax is used between two separate SQL templates: DROP TABLE temp_0; and DROP TABLE
temp_<%=i %>;.
The SQL statements are intended to remove several tables, beginning from temp_0. The code between <% and
%> generates a sequence of numbers in a loop to identify the tables to be removed, and then closes the loop after
the number generation.
• Within this syntax, the <%=...%> or </.../> syntax should not be used.
<%=...%> and </.../> are also syntaxes intended for the SQL templates. The sections below describe related
information.
Parameters that the SQL templates can access with this syntax are simple. They are often used for connection purposes and
can be easily defined in components, such as TABLE_NAME, DB_VERSION, SCHEMA_TYPE, and so on.
• This syntax can be used to generate any variable value, and also the value of any existing parameter.
• Inside this syntax, the <%...%> or </.../> syntax should not be used.
#sql sentence
DROP TABLE temp_<%=__TABLE_NAME__ %>;
The code is used to remove the table defined through an associated component.
For more information about what components are associated with the SQL templates, see Designing a Job.
For more information on the <%...%> syntax, see the previous section.
For more information on the </.../> syntax, see the following section.
Parameters that the SQL templates can access with this syntax are simple. They are often used for connection purposes and
can be easily defined in components, such as TABLE_NAME, DB_VERSION, SCHEMA_TYPE, and so on.
• It can be used to generate the value of any existing parameter. The generated value should not be enclosed by
quotation marks.
• Inside this syntax, the <%...%> or <%=...%> syntax should not be used.
#sql sentence
DROP TABLE temp_</TABLE_NAME/>;
The statement identifies the TABLE_NAME parameter and then removes the corresponding table.
For more information on the <%...%> and <%=...%> syntaxes, see the previous sections.
Parameters that the SQL templates can access with this syntax are simple. They are often used for connection purposes and
can be easily defined in components, such as TABLE_NAME, DB_VERSION, SCHEMA_TYPE, and so on.
The following sections present more specific code used to access more complicated parameters.
The code below is an example of accessing some elements included in a component schema. In the following
example, the ELT_METADATA_SHEMA variable name is used to get the component schema.
<%
String query = "select ";
SCHEMA(__ELT_METADATA_SHEMA__);
for (int i=0; i < __ELT_METADATA_SHEMA__.length; i++) {
    query += __ELT_METADATA_SHEMA__[i].name;
    // avoid a trailing comma before the from clause
    if (i < __ELT_METADATA_SHEMA__.length - 1) {
        query += ",";
    }
}
query += " from " + __TABLE_NAME__;
%>
<%=query %>;
In this example, and according to what you want to do, the __ELT_METADATA_SHEMA__[i].name
code can be replaced by __ELT_METADATA_SHEMA__[i].dbType, __ELT_METADATA_SHEMA__[i].isKey,
__ELT_METADATA_SHEMA__[i].length or __ELT_METADATA_SHEMA__[i].nullable to access the other fields
of the schema column.
Make sure that the name you give to the schema parameter does not conflict with any name of other parameters.
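As an illustration, assuming a three-column schema (id, name, email) read from a table named customers, the template above would generate the statement: select id,name,email from customers;.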
To access these tabular parameters, which are naturally more flexible and complicated, two approaches are available:
</.../> is one of the syntaxes used by the SQL templates. This approach often needs hard coding for every
parameter to be extracted.
For example, a new parameter is created by the user and given the name NEW_PROPERTY. If you want to access
it by using </NEW_PROPERTY/>, the code below is needed.
else if (paramName.equals("NEW_PROPERTY")) {
ElementParameterParser.getObjectValue(node, "__NEW_PROPERTY__");
......
The code below shows the second way to access the tabular parameter (GROUPBY).
<%
String query = "insert into " + __TABLE_NAME__ + "(id, name, date_birth) select sum(id), name, date_birth from cust_teradata
group by";
EXTRACT(__GROUPBY__);
%>
<%=query %>;
• The extract statement must be EXTRACT(__GROUPBY__);. Uppercase must be used and no space character is allowed.
This statement should be used between <% and %>.
• Use __GROUPBY_LENGTH__, in which the parameter name is followed by _LENGTH, to get the number of rows of the
tabular GROUPBY parameter you define in the Groupby area of a Component view. It can be used between
<% and %> or between <%= and %>.
• Use code like __GROUPBY_INPUT_COLUMN__[i] to extract the parameter values. This can be used between
<% and %> or between <%= and %>.
• In order to access the parameter correctly, do not use an identical name prefix for several parameters.
For example, in the component, avoid defining two parameters named PARAMETER_NAME and
PARAMETER_NAME_2, as the same prefix in the names causes erroneous code generation.
For more information on how to define routines, to access to system routines or to manage system or user routines,
see Managing routines.
Before starting any data integration processes, you need to be familiar with Talend Studio Graphical User Interface
(GUI). For more information, see GUI.
To access these routines, double-click the Numeric category in the system folder. The Numeric category
contains several routines, notably sequence, random and decimal (convertImpliedDecimalFormat):
The three routines sequence, resetSequence, and removeSequence are closely related.
• The sequence routine is used to create a sequence identifier, named s1 by default, in the Job. This sequence
identifier is global in the Job.
• The resetSequence routine can be used to initialize the value of the sequence identifier created by the
sequence routine.
• The removeSequence routine is used to remove the sequence identifier from the global variable list in the Job.
System.out.println(Numeric.sequence("s1",1,1));
System.out.println(Numeric.sequence("s1",1,1));
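Building on the two calls above, which print 1 and then 2, a hedged sketch of the two companion routines follows; the resetSequence and removeSequence signatures shown are assumptions based on the descriptions above:

Numeric.resetSequence("s1", 1);                   // assumed signature: reset s1 to its start value
System.out.println(Numeric.sequence("s1", 1, 1)); // prints 1 again after the reset
Numeric.removeSequence("s1");                     // assumed signature: remove s1 from the global variable list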
System.out.println(Numeric.convertImpliedDecimalFormat("9V99","123"));
The routine automatically converts the value entered as a parameter according to the format of the implied decimal
provided:
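Assuming standard implied-decimal semantics, the pattern 9V99 places an implied decimal point after the first digit, so the call above would interpret "123" as 1.23 (an illustrative result, not captured from the Studio console).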
To access these routines, double-click the Relational class under the system folder. The Relational class
contains several routines, notably:
To check a Relational Routine, you can use the ISNULL routine, along with a tJava component, for example:
System.out.println(Relational.ISNULL(null));
To access these routines, double-click StringHandling under the system folder. The StringHandling class
includes the following routines:
The routine replaces the old element with the new element specified.
The routine returns a whole number indicating the position of the first character specified, or of the first
character of the substring specified. Otherwise, -1 is returned if no occurrence is found.
System.out.println(StringHandling.LEN("hello world!"));
The check returns a whole number indicating the length of the string, including spaces and blank characters.
The routine returns the string with the blank characters removed from the beginning.
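As an illustration, the following hedged calls combine these routines; the expected results in the comments follow from the descriptions above:

System.out.println(StringHandling.CHANGE("hello world!", "world", "Talend")); // hello Talend!
System.out.println(StringHandling.INDEX("hello world!", "world"));            // 6
System.out.println(StringHandling.LEN("hello world!"));                       // 12
System.out.println(StringHandling.LTRIM("   hello world!"));                  // hello world!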
To access the routines, double-click TalendDataGenerator under the system folder:
You can customize the fictitious data by modifying the TalendGeneratorRoutines. For further information on
how to customize routines, see Customizing the system routines.
System.out.println(TalendDataGenerator.getFirstName());
System.out.println(TalendDataGenerator.getLastName());
System.out.println(TalendDataGenerator.getUsCity());
System.out.println(TalendDataGenerator.getUsState());
System.out.println(TalendDataGenerator.getUsStateId());
System.out.println(TalendDataGenerator.getUsStreet());
The set of data taken randomly from the list of fictitious data is displayed in the Run view:
To access these routines, double-click TalendDate under the system folder:
The current date is initialized by the Java function new Date(), formatted according to the pattern specified,
and displayed in the Run view:
In this example, the current date is initialized by the Java function new Date() and the value -1 is displayed in the
Run view to indicate that the current date is earlier than the second date.
The current date, followed by the new date, is displayed in the Run view:
java.util.Date D = TalendDate.parseDate("yyyy-MM-dd HH:mm:ss", "1979-10-20 19:00:59");
System.out.println(D.toString());
System.out.println(TalendDate.getPartOfDate("DAY_OF_MONTH", D));
System.out.println(TalendDate.getPartOfDate("MONTH", D));
System.out.println(TalendDate.getPartOfDate("YEAR", D));
System.out.println(TalendDate.getPartOfDate("DAY_OF_YEAR", D));
System.out.println(TalendDate.getPartOfDate("DAY_OF_WEEK", D));
In this example, the day of month (DAY_OF_MONTH), the month (MONTH), the year (YEAR), the day number
of the year (DAY_OF_YEAR) and the day number of the week (DAY_OF_WEEK) are returned in the Run view.
All the returned data are numeric data types.
In the Run view, the value returned for the month (MONTH) ranges from 0 to 11: 0 corresponds to January,
11 corresponds to December.
System.out.println(TalendDate.getDate("CCYY-MM-DD"));
To access these routines, double-click TalendString under the system folder. The TalendString class contains
the following routines:
In this example, the "&" character is replaced in order to make the string XML compatible:
The star characters are removed from the start, then the end of the string and then finally from both ends:
System.out.println(TalendString.removeAccents("sâcrebleü!"));