WTA WorkForce API Specs Guide - 18.3
WORKFORCE TIME AND ATTENDANCE
API Specifications
For use with WT&A 18.3.x
January 2019
Document Information
Author(s) Rhino, Code Ninjas, and Honey Badgers Teams
Editor Nancy Faerber, Andrew Durand
Owner Iain Shearer
Revision History
1.0 (Jan 2017, INT Team): First published.
2.0 (March 2017, Karl J): Person Data API Example 2 script; changed PersonDataAPI.getAsgnmtRecordForEmployee to PersonDataAPI.getAsgnmtRecord.
3.0 (April 2017, Greg W): Added Time Off Request API; added new method getCurrentPeriodDates to the Timesheet Operations API.
4.0 (April 2017, Greg W): Added Swipe Import API.
5.0 (November 2017, CodeNinjas Team): Added previously undocumented APIs: Generic Export, Incremental Export, Export Stage, LD Import, and Policy Mapping. For 17.3, new methods were added to Assignment Group, File Manager, Job Queue, Person Data, Policy Info, and Policy Set.
6.0 (February 2018, CodeNinjas Team): Added previously undocumented APIs: Bank Import, Time Entry. Added a new Appendix covering the previously undocumented utility API_UTIL.
7.0 (February 2018, CodeNinjas Team): New APIs added to support WT&A 18.1.0: LD Data, Retro Trigger, XML Reader, and XML Writer. Existing APIs enhanced for WT&A 18.1.0: Incremental Export, Time Entry Import.
8.0 (March 2018, Honey Badgers Team): Added previously undocumented APIs: Decoder, Employee Import Library, and JS_UTIL.
9.0 (June 2018, CodeNinjas and Rhino Teams): New APIs added to support WT&A 18.2.0: Badge Data, Schedule Data Output, Timesheet Exception, and Timesheet Output. Existing APIs enhanced for WT&A 18.2.0: ACT, Assignment Group, Badge Import, Decoder, Employee Import Library, Export Stage, File Manager, Generic Export, Incremental Export, Instance Info, LD Import, Policy Creation, and Timesheet Operations. Added previously undocumented APIs: KPI Chart Library, PPI Data Connector. For more information about the new and enhanced APIs, refer to the WT&A Release Notes 18.2.0.
10.0 (October 2018, CodeNinjas and Rhino Teams): New APIs added to support WT&A 18.3.0: Advanced Scheduler Data. Existing APIs enhanced for WT&A 18.3.0: Assignment Group, Badge Import, File Manager, Job Queue, JS_UTIL (see the Appendix), LD Data, Retro Trigger, Schedule Detail Output, Swipe Import, Time Entry Import, Time Off Request, Timesheet Operations, Timesheet Output, XML Writer. For more information about the new and enhanced APIs, refer to the WT&A 18.3 Release Notes.
11.0 (January 2019, Andrew D): Removed importDataFromResultsSet method from the LD Import API.
LEGAL NOTICES
Copyright (c) 2000-2018 WorkForce Software, LLC. All rights reserved.
WorkForce Software
38705 Seven Mile Rd.
Livonia, MI 48152
WorkForce Software considers the enclosed information a trade secret. By receiving this information, you agree to keep this
information confidential. This information may not be distributed outside your organization. It may not be duplicated in any way
without the express written consent of WorkForce Software, except that you are given permission to duplicate it in electronic or
printed form for the purpose of distribution within your organization to gather requirements or evaluate our software.
The information supplied is distributed on an "as is" basis, without any warranty. WorkForce Software shall not have any liability to
any person or entity with respect to any loss or damage caused or alleged to be caused directly or indirectly by the information
contained herein.
Trademark names may appear throughout this material. Rather than list the names and entities that own those trademarks or insert
a trademark symbol with each mention of the trademarked name, WorkForce Software states that it is using those names in editorial
fashion only for the benefit of the trademark owner, with no intention of infringing on the trademark. No mention of a company or
trademark is intended to convey endorsement or other affiliation with WorkForce Software.
If you have been provided this document under any other circumstances, you must contact WorkForce Software at 877-4WFORCE
(877-493-6723) to arrange to have this material returned immediately.
Contents
About this Document
ACT API
ACT Case Export API
Advanced Scheduler API
Advanced Scheduler Data API
Assignment Group API
Badge Data API
Badge Import API
Bank Export API
Bank Import API
Batch Job Log Policy Tracker API
Decoder API
Email API
Employee Import Library
Export Stage API
File Manager API
File Writer API
FTP API
Generic Export API
Incremental Export API
Instance Info API
Job Queue API
KPI Chart Library
LD Data API
LD Import API
Line Approval API
Person Data API
Policy Creation API
Policy Info API
About this Document
Note: With the 17.2.0 release, “EmpCenter” was renamed to “WorkForce Time and Attendance”.
Additionally, the module formerly named "WorkForce Time & Attendance" is now "WorkForce Time".
The term “EmpCenter” appears in this document for backward compatibility purposes.
Intended Audience
• System administrators at customer sites
• WorkForce Software implementation and support teams
ACT API
Overview and Capabilities
The ACT API provides a mechanism for scripts to look up information about Absence
Compliance Tracker (ACT) Cases, such as the leave begin/end dates, the status of the Case, and the types of
leave the Case is eligible for. Scripts can use this information as needed, for example as part of an
ACT Case Export process to look up additional relevant information about the ACT Case being processed for
export.
The ACT API also includes functionality for adding additional case managers to an ACT Case. This can be used
to automate the assignment of ACT Cases to the appropriate case manager(s), instead of requiring those
assignments to be made manually through the ACT web interface.
This API also provides a mechanism for importing new ACT Cases and for importing the worked
time history and leave usage details for employees. This can be used as part of the go-live process to
transition leave information and open cases from an external system into ACT, or on an ongoing basis to
maintain employees’ current worked time history details for customers who use ACT without the
WorkForce Time module.
Prerequisites
To use this API, you should be familiar with the following functionality:
• Basic JavaScript coding
• Basic ACT functionality
Components
This API consists of the following component(s):
• The ACT_API Distributed JavaScript library
• The WorkedTimeRecord, LeaveUsageRecord, ACTCaseScriptable, WorkflowEventScriptable, and
HistoryEventScriptable Java classes
Setup
No setup is necessary for the ACT API. The distributed library is automatically available within WorkForce
Time and Attendance.
Use Cases
Importing Worked Time
The ACT API can be used to import worked time to use in determining eligibility for different leaves. This
time would typically be imported for one of two reasons:
• To load information about time worked prior to the customer switching to WorkForce Time and
Attendance.
• To load ongoing information about time worked, if worked time is not being tracked on WorkForce
Time and Attendance timesheets (i.e., for customers using ACT without the WorkForce Time module).
Regardless of which of these two scenarios is present for a customer, the API will import the worked time in
the same manner.
includeDistributedPolicy("ACT_API");

// Get the source data in some fashion. Assuming a standard ResultSet is returned
// here
var source = getSourceData();

// Create the API and cache one WorkedTimeRecord per source row via addRecord()
// (the record-building loop is shown in the next example)
var api = new AbsenceCaseAPI();

// Update the worked time using all of the records that were added to the cache
api.updateWorkedTimeHistory();
Note: If there is already existing worked time for an employee that has new worked time processed, all of the
existing worked time for that employee with a work date between the earliest and latest work dates in
the new set of data being processed for that employee will be removed and replaced by the new data
being imported.
The following script example demonstrates using the ACT API to import worked time associated with specific
pay codes:
includeDistributedPolicy("ACT_API");
includeDistributedPolicy("EMPLOYEE_IMPORT_UTIL");
// Get the source data in some fashion. Assuming a standard ResultSet is returned
// here
var source = getSourceData();
// Update the worked time using all of the records that were added to the cache
api.updateWorkedTimeHistory();
Note: The replacement process for a date range will replace existing worked time records for the employee
regardless of pay code. The import process does not only replace existing records with the pay codes
being imported.
Importing Leave Usage
The ACT API can also be used to import employees’ historic leave usage details into ACT.

includeDistributedPolicy("ACT_API");

// Get the source data in some fashion. Assuming a standard ResultSet is returned
// here
var source = getSourceData();

// Create the API and cache one LeaveUsageRecord per source row via addRecord()
// (the record-building loop is shown in the conversion function example below)
var api = new AbsenceCaseAPI();

// Update the leave usage using all of the records that were added to the cache
api.updateLeaveUsage();
Note: If any of the employees with historic leave usage imported already have existing leave usage that was
previously imported, that existing leave usage will be removed and replaced by the new data being
imported.
The following script example demonstrates importing leave usage with a custom conversion function, which
is used to compute the correct usage in leave units instead of the method defined in the ACT configuration:

includeDistributedPolicy("ACT_API");

// Get the source data in some fashion. Assuming a standard ResultSet is returned
// here
var source = getSourceData();

// Define the conversion function that should be used to compute the correct usage
// in leave units
function convertHoursToLeaveUnits(employee, leaveDate, hours, leaveUnit, stdDailyHours,
        stdWeeklyHours, fte) {
    // The returned value should be the leave usage converted from hours into the
    // appropriate leave units
    return hours / stdWeeklyHours;
}

var api = new AbsenceCaseAPI();

// Cache a LeaveUsageRecord per source row, attaching the conversion function.
// NOTE: the column names below are illustrative assumptions.
while (source.next()) {
    api.addRecord(new LeaveUsageRecord({
        displayEmployee: source.getString("DISPLAY_EMPLOYEE"),
        leaveDate: source.getDate("LEAVE_DATE"),
        leaveType: source.getString("LEAVE_TYPE"),
        hours: source.getDouble("HOURS"),
        conversionFunction: convertHoursToLeaveUnits
    }));
}

// Update the leave usage using all of the records that were added to the cache
api.updateLeaveUsage();
Importing New ACT Cases
The ACT API can also be used to import new ACT Cases, for example to transition open cases from an
external system into ACT as part of the go-live process.
Note: The ACT API cannot be used to import updates to existing cases.
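The following sketch demonstrates creating a new ACT Case with importNewACTCase(), using the parameter
names documented in the API Reference below. The specific values shown are illustrative assumptions:

includeDistributedPolicy("ACT_API");

var api = new AbsenceCaseAPI();

// Create a new, approved, continuous absence case (all values are placeholders)
api.importNewACTCase({
    displayEmployee: "12345",
    beginDate: WFSDate.today(),
    endDate: WFSDate.today().addDays(14),
    leaveTypes: ["DIST_FMLA"],              // assumed leave type name
    reason: "DIST_SELF_HEALTH_CONDITION",
    caseStatus: "APPROVED",
    caseType: "CONTINUOUS",
    personAffected: "SELF",
    comments: "Imported from the legacy leave system"
});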
The following script example demonstrates using the ACT API to assign an additional case manager to an ACT
Case:
includeDistributedPolicy("ACT_API");

var api = new AbsenceCaseAPI();

// Do something to get the ID of the case to be assigned to additional case managers.
// This is assumed to return an ID for an existing ACT Case.
var actCaseId = getExistingActCaseId();

// Get the loginId of the case manager that is to be assigned to the existing ACT Case.
// This is assumed to return a loginId for a user that has rights to become an ACT
// case manager.
var caseManagerId = getCaseManagerLoginId();

// Assign the user as an additional case manager for the case
api.assignCaseManager(actCaseId, caseManagerId);
The following script example demonstrates looking up an ACT Case and printing its associated History Events:

includeDistributedPolicy("ACT_API");

var api = new AbsenceCaseAPI();

// Do something to get the ID of the case to be looked up. This is assumed to return
// an ID for an existing ACT Case
var actCaseId = getExistingActCaseId();

// Look up the information for the case
var actCase = api.getACTCase(actCaseId);

// Print out information about history events associated with the case
var historyEvents = actCase.getHistoryEvents();
for (var i = 0; i < historyEvents.length; ++i) {
    var historyEvent = historyEvents[i];
    log.info("Case has history event of type " + historyEvent.act_history_event_type +
        " with order " + historyEvent.event_history_order);
}
// Define the criteria for the employee whose cases should be found
var criteria = new MatchCondition(MatchTable.EMPLOYEE, "DISPLAY_EMPLOYEE",
    MatchOperator.EQUALS, "12345");

// (The call that uses these criteria to look up the employee's cases and return
// the actCases array has been elided here.)

// Iterate over the cases for the employee and perform necessary actions
log.info("Employee has the following ACT Cases:");
for (var i = 0; i < actCases.length; ++i) {
    var actCase = actCases[i];
    log.info("Case " + actCase.act_case + ", for dates " + actCase.start_date +
        " through " + actCase.end_date);
}
// Look up the leaves that the case is eligible for
var leaves = api.getEligibleLeaves(actCaseId);

// Iterate over the leaves for the case and perform necessary actions
log.info("Case has the following leaves:");
for (var i = 0; i < leaves.length; ++i) {
    var leave = leaves[i];
    log.info("Leave name: " + leave);
}
The following example can be used to get all the leave usages for the provided ACT Case:
includeDistributedPolicy("ACT_API");
// (The remainder of this example has been elided here.)
Troubleshooting
The job log of the script using the ACT API will contain information messages and, in the case of problems,
any error messages generated during processing. This job log should be reviewed if there are any problems
encountered while using the API.
API Reference
Knowledge of JavaScript programming is necessary to best make use of this section.
The ACT API consists of the following component(s):
1. The AbsenceCaseAPI, which defines a mechanism for importing historic ACT data and for accessing
information about ACT Cases.
2. The WorkedTimeRecord Java class, which defines information for a single slice of worked time
3. The LeaveUsageRecord Java class, which defines information for a single slice of historic leave usage
4. The ACTCaseScriptable Java class, which encapsulates data about an ACT Case and provides methods
for looking up related information
5. The WorkflowEventScriptable Java class, which encapsulates data about a Workflow Event for an
ACT Case and provides methods for looking up related information
6. The HistoryEventScriptable Java class, which encapsulates data about a History Event for an ACT Case
and provides methods for looking up related information
AbsenceCaseAPI
AbsenceCaseAPI(parms)
Creates a new instance of the Absence Case API. This call accepts the following parameters:
Parameter Name Description
enableDebugging Indicates if additional debug output should be written to the
job log when performing any actions with this instance of
the API. Defaults to false if not specified.
Table 2: Parameters that can be defined when instantiating the Absence Case API
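For example, a minimal instantiation sketch, assuming the parameters are passed as an object literal
(consistent with the other APIs in this document):

var api = new AbsenceCaseAPI({ enableDebugging: true });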
addRecord(record)
Adds an additional historic record (either a WorkedTimeRecord or a LeaveUsageRecord) into the internal
cache for this instance of the API. The data in that cache will be processed by a subsequent call to
updateWorkedTimeHistory() or updateLeaveUsage().
updateWorkedTimeHistory()
Processes all of the WorkedTimeRecord objects that have been added to the cache for this API using
addRecord. For each employee with new records added, the existing worked time history records will be
replaced for the date range spanned by the new records being processed for that employee.
updateLeaveUsage()
Processes all of the LeaveUsageRecord objects that have been added to the cache for this API using
addRecord. For each employee with new records added, the existing leave usage records will be replaced by
the new records being processed for that employee.
importNewACTCase(parms)
Creates a new ACT Case using the definition provided by the parameter settings.
Parameter Name Description
displayEmployee The ID of the employee that the absence case is for.
beginDate The starting date of the absence.
endDate The ending date of the absence.
leaveTypes Array of different leave types that apply to the case. Can include both distributed
and customer-defined leave types.
reason The reason for the absence. Needs to match the name of a defined ACT_REASON
policy or one of the following integer values:
1 (DIST_PREGNANCY)
2 (DIST_DONATION)
3 (DIST_SELF_HEALTH_CONDITION)
4 (DIST_OTHER_PERSON_HEALTH_CONDITION)
5 (DIST_CHILD_BONDING)
6 (DIST_CHILD_PLACEMENT)
7 (DIST_MILITARY_EXIGENCY)
8 (DIST_MILITARY_DEPLOYMENT)
9 (DIST_CRIME_VICTIM)
10 (DIST_OTHER)
caseStatus The status the case should be in after it is created. Valid statuses are: PENDING,
OPEN, APPROVED, CLOSED.
workflowEvent The Workflow Event to use when creating the case. Defaults to
IMPORT_APPROVED_CASE if not specified.
caseType Defines what type of absence is reflected by the case. Valid values are:
CONTINUOUS, INTERMITTENT, or REDUCED_SCHEDULE. Defaults to CONTINUOUS
if not specified.
personAffected Identifies the person affected by the reason for the absence. Valid values are SELF,
SPOUSE, DOMESTIC PARTNER, CHILD, PARENT, or OTHER. Defaults to SELF if not
specified.
comments Any comments associated with the absence case
Table 3: Parameters that can be defined when creating a new ACT Case
assignCaseManager(caseId, loginId)
Adds the user with the specified loginId value as a case manager for the ACT Case with the indicated display
ID. The user being assigned as a case manager must have a general role that allows them to act as a case
manager; otherwise they cannot be assigned the case.
getACTCase(caseId)
Returns information for the ACT Case with the specified display ID.
getEligibleLeaves(caseId)
Returns eligible leaves for the specified ACT case.
WorkedTimeRecord
WorkedTimeRecord(parms)
Creates a new immutable WorkedTimeRecord object containing the settings defined by the specified
parameters. Parameters that can be defined include:
Parameter Name Description
displayEmployee The ID of the employee that the worked time is associated with
workDate The date that the worked time is associated with
hours The number of hours worked on the work date
payCode The type of worked time represented by this record
comments Any comments associated with this record
Table 4: Parameters that can be defined when creating a new WorkedTimeRecord
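A construction sketch, assuming the same object-literal parameter style used in the use cases above (all
values are placeholders):

var record = new WorkedTimeRecord({
    displayEmployee: "12345",      // employee the worked time belongs to
    workDate: WFSDate.today(),     // date the time was worked
    hours: 8.0,                    // hours worked on that date
    payCode: "REGULAR",            // assumed pay code name
    comments: "Loaded from the legacy timekeeping system"
});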
LeaveUsageRecord
LeaveUsageRecord(parms)
Creates a new immutable LeaveUsageRecord object containing the settings defined by the specified
parameters. Parameters that can be defined include:
Parameter Name Description
displayEmployee The ID of the employee associated with the leave.
leaveDate The date that the leave was used on.
leaveType The type of leave used. This needs to match either a distributed or a customer-
specific leave type policy.
hours The amount of leave used on the specified date.
comments Any comments associated with the leave usage.
stdDailyHours The employee’s standard daily hours on the date the leave was used. Based on
the ACT configuration, this may be used to convert the hours into the
appropriate leave units.
stdWeeklyHours The employee’s standard weekly hours on the date the leave was used. Based
on the ACT configuration, this may be used to convert the hours into the
appropriate leave units.
fte The employee’s full time equivalent (FTE) percentage on the date the leave was
used. Based on the ACT configuration, this may be used to convert the hours
into the appropriate leave units.
firstUsageDate The first-usage date for the leave.
conversionFunction A custom function that should be used to calculate the correct leave units for
the record instead of using the method defined in the configuration.
Table 5: Parameters that can be defined when creating a new LeaveUsageRecord
ACTCaseScriptable
getHistoryEvent(eventOrder)
Returns the History Event that is defined with the specified History Event order value for this ACT Case.
getHistoryEvents()
Returns all the History Events associated with this ACT Case, ordered by History Event order from earliest to
latest.
getWorkflowEvent(eventId)
Returns the Workflow Event with the specified ID that is associated with this ACT Case.
getWorkflowEvents()
Returns all the Workflow Events associated with this ACT Case.
WorkflowEventScriptable
getRelatedHistoryEvent()
Returns the History Event that is linked to this Workflow Event.
getRollbackHistoryEvent()
Returns the History Event that caused this Workflow Event to be rolled back.
HistoryEventScriptable
getRelatedWorkflowEvent()
Returns the Workflow Event that is linked to this History Event.
ACT Case Export API
Prerequisites
To use this API, you should be familiar with the following functionality:
• Basic JavaScript coding
• Basic ACT Case information, including how status changes and case dates work
Components
This API consists of the following component(s):
1. The Distributed JavaScript Library ACT_CASE_EXPORT_API
2. ActCaseExportRecord which describes a single ACT case
Setup
No setup is necessary for the ACT Case Export API. The distributed library is automatically available within
WorkForce Time and Attendance.
Use Cases
Initializing the ACT Case Export API
The main use for the ACT Case Export API is to detect changes that have happened to relevant ACT Cases
since the last time that the export was run. An ACT Case is considered relevant to the export if one of the
following conditions is true:
1) The ACT Case currently has a status at or above the minimum rank specified when initializing
the export (see the ACT Case statuses table below).
2) The ACT Case has already been exported during a previous run of the ACT Case Export
When the ACT Case Export API is initialized, one of the arguments provided specifies the minimum status a
case needs to have in order to be considered eligible for inclusion in the export. This is because many uses of
the ACT Case Export would not want to include cases as soon as they are created; for instance, a process that
wants to update the HR system with information about whether an employee is on leave probably doesn’t
want to update HR immediately when the employee submits the case, but rather would want to wait until
the case manager has actually approved the case as valid before updating HR. The hierarchy of statuses is as
follows:
ACT Case Status Level
PENDING 0
OPEN 1
APPROVED 2
CANCELLED 3
CLOSED
DELETED
DENIED
Table 6: ACT Case statuses and their order of precedence
Only cases whose status is at the same or a higher level than the minimum status specified when initializing
the ACT Case Export API will be processed by the export (if they have not previously been exported already).
In addition to the case’s status, the ACT Case Export API will also look at which history event triggers have
been generated for a case since the last time it ran. This allows the export to control which cases it’s
interested in: depending on how it’s being used, the process may need to find cases that have had certain
types of data changes (such as a change to the beginning or ending date of the absence), cases that have
had documents attached, or cases whose status has changed.
Each run of the ACT Case Export needs to be associated with an export process ID. This ID is used to allow
the ACT Case Export API to track multiple independent sets of changes to ACT Cases by associating those
changes with a particular ID, in the event that there are multiple jobs in a configuration that are dependent
on those changes. Each time the ACT Case Export is run it will only find changes relative to the last time the
export was run for that same export process ID. (For example, a customer could have a job to send out
emails when a case is created and a second job to update HR with absence information once the case
becomes approved. In this example each of these two jobs should be associated with a different export
process ID, so that the changes detected by one job can also be picked up by the other job.)
The following examples demonstrate some scenarios where the API is initialized to identify changes based on
several different ACT Case statuses and history event triggers:
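A hypothetical initialization sketch: the argument names shown (exportProcessId, minimumCaseStatus,
historyEventTriggers) are placeholders rather than confirmed API names; see the ACT_CASE_EXPORT_API
policy contents for the actual constructor arguments.

includeDistributedPolicy("ACT_CASE_EXPORT_API");

// Hypothetical initialization: track APPROVED cases that have had a creation
// history event since the last run of this export process ID. (All argument
// names below are placeholders.)
var api = new ActCaseExportAPI({
    exportProcessId: "HR_LEAVE_EXPORT",
    minimumCaseStatus: "APPROVED",
    historyEventTriggers: ["CASE_CREATED_BY_ADMIN", "CASE_CREATED_BY_EMPLOYEE"]
});

// Rest of export logic would go here. See other use cases for examples.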
Sometimes, in addition to the status and history event triggers, it may be desirable to limit which cases are
processed based on the beginning or ending dates of the absence as well. For instance, the system receiving
the data may not be interested in knowing about a change that was made to an absence after that absence
has already ended. The ACT Case Export API allows for cutoff dates to be specified for both the start date
and end date of the absences; the start date cutoff specifies the latest start date that an absence can have
and still be processed by the export, and the end date cutoff specifies the earliest end date an absence can
have and still be processed.
The following example demonstrates specifying the cut off dates when initializing the ACT Case Export:
// Define the latest start date a case can have to be considered for processing.
// Only cases starting two weeks or less into the future will be exported.
var startDateCutoff = WFSDate.today().addDays(14);
// Define the earliest end date a case can have to be considered for processing.
// Only cases ending 30 days ago or later will be exported.
var endDateCutoff = WFSDate.today().addDays(-30);
// Rest of export logic would go here. See other use cases for examples.
Note: The start date cutoff and end date cutoff do not represent a date range. (That is, the export is not
processing only cases whose absence dates overlap with the range defined by the start date and end
date values.) In almost every case, the end date cutoff should be a date before the start date cutoff.
includeDistributedPolicy("ACT_CASE_EXPORT_API");

// (API initialization as in the sketch above)

// Load the ACT Case data for the approved cases that have been created and
// not yet exported
var changedCases = api.getModifiedExportRecords();
// Iterate over the records that have been modified and take some sort of action
for (var i = 0; i < changedCases.length; ++i) {
var currentCase = changedCases[i];
log.info("ACT Case " + currentCase.act_case + " has changed");
}
// Close the export once it is finished to commit the fact the records have been
// processed to the database
api.close();
In this example, the API looks for any cases that are currently in an APPROVED status and that have had a history
event associated with either the CASE_CREATED_BY_ADMIN or CASE_CREATED_BY_EMPLOYEE triggers
generated since the last time the export ran. Since an ACT Case will only be evaluated once it hits the
APPROVED state, this means that once an ACT Case gets approved it will be returned by the call to
getModifiedExportRecords(). An ACT Case will only ever be returned once in this script, since a Case should
never have a second history event claiming to have created it.
The second option for getting ACT Case information is to get the data not only for the Cases that have
relevant changes, but also for any cases that have previously been exported already. This allows for logic to
be used that aggregates information across all of the currently-relevant ACT Cases, or for cancellation records
to be generated for previously-exported cases if needed. The following example demonstrates how to load
data for all ACT Cases with relevant changes or that have been previously exported:
Example 2: Getting records for all cases that have previously been exported
includeDistributedPolicy("ACT_CASE_EXPORT_API");

// (API initialization as in the sketch above)

// Load the ACT Case data for the approved cases that have been created and
// not yet exported, as well as any cases that have already been exported.
var cases = api.getExportRecords();

// Check each record to see whether it changed since the last run
for (var i = 0; i < cases.length; ++i) {
    var currentCase = cases[i];
    log.info("Case " + currentCase.act_case + " changed since last export: " +
        currentCase.isChangedSinceLastExport());
}
In this example, data for the cases is loaded and isChangedSinceLastExport() is called for each record. This
method will return true if the ACT Case had a change linked to one of the specified history event triggers
since the last time the ACT Case Export was run, or false if the ACT Case had not had such a change. The set
of all ACT Cases where isChangedSinceLastExport() returns true make up the records that would have
been returned by a call to getModifiedExportRecords(), while the ACT Cases where false is returned are
only returned by the getExportRecords() method.
When processing an ACT Case record, the fields on that record can be used to specify a value for the
export_destination field. This tells the ACT Case Export that not only was that record evaluated by the
script, but something was actually done with it. This is used to drive additional behavior that will be
examined in the next use case. The specific value specified for the export_destination doesn’t have any
direct system use, but it is generally recommended to store the location of where the data was sent (e.g. the
file name the data was written to if exporting to a file).
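For instance, a record that was written out to a file might be tagged as follows (a minimal sketch; the
surrounding file-writing logic is assumed):

// After writing the record's data to the export file, store where it was sent so
// the record is treated as exported on subsequent runs
currentCase.export_destination = "act_case_export_20190101.csv";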
The following example demonstrates defining field mappings before loading the export records:

includeDistributedPolicy("ACT_CASE_EXPORT_API");

// (API initialization as in the sketch above)

// Define the fields from the ACT Case record that should be available to the script.
// The ACT_CASE.START_DATE field will be available as a date-type object aliased as
// "startDate", and the ACT_CASE.END_DATE field will be available as a date-type
// object aliased as "endDate"
api.addFieldMapping("startDate", "date", "start_date");
api.addFieldMapping("endDate", "date", "end_date");

// Load the ACT Case data for the approved cases that have been created and
// not yet exported, as well as any cases that have already been exported.
var cases = api.getModifiedExportRecords();

// Use the aliases to read the mapped values off of each record
for (var i = 0; i < cases.length; ++i) {
    var currentCase = cases[i];
    log.info("Case " + currentCase.act_case + " runs from " + currentCase.startDate +
        " through " + currentCase.endDate);
}
In this example, you can see that several calls to addFieldMapping() are made before getting the export
records, and then the appropriate alias is used in order to retrieve the corresponding value during
processing.
In addition to using fields directly from the ACT Case, it can also be desirable to define additional custom
fields that don’t directly map to the ACT_CASE table. These fields can store values as part of the export logic,
and when the API is closed the final values on the records will be persisted to the database. This is useful
when generating cancellations, since this allows the script to access the values that were calculated during
the previous execution of the export. (See the “Getting Previously-Exported Case Data” section below for
more information.)
The following example demonstrates looking up and storing information for custom fields on the export
records being processed:
includeDistributedPolicy("ACT_CASE_EXPORT_API");

// (API initialization as in the sketch above)

// Define the fields from the ACT Case record that should be available to the script.
// The first three are standard fields on the ACT Case; custom fields such as
// "employeeId" and "actCompany" do not map directly to the ACT_CASE table and are
// also defined with addFieldMapping() (see the ACT_CASE_EXPORT_API policy for the
// exact arguments used for custom fields).
api.addFieldMapping("startDate", "date", "start_date");
api.addFieldMapping("endDate", "date", "end_date");
api.addFieldMapping("employee", "string", "employee");

// Load the ACT Case data for the approved cases that have been created and
// not yet exported, as well as any cases that have already been exported.
var cases = api.getModifiedExportRecords();

// Define the person data API to use to look up some relevant information
var personDataApi = new PersonDataAPI();

for (var i = 0; i < cases.length; ++i) {
    var currentCase = cases[i];

    // These will both log blank values at this point, since no values have been
    // set yet in the script
    log.info("Employee ID = " + currentCase.employeeId);
    log.info("ACT Company = " + currentCase.actCompany);

    // (The logic that looks up the custom field values, e.g. via personDataApi,
    // and stores them on the record has been elided here.)
}
Note: The only fields that are available by default without calling addFieldMapping() are ACT_CASE and
LAST_EVENT_ORDER_EXPORTED. All other fields from the ACT Case, and any custom fields, must be
defined using that method in order to be available or persisted by the ACT Case Export API.
Getting Previously-Exported Case Data

includeDistributedPolicy("ACT_CASE_EXPORT_API");

// (API initialization and field mappings as in the examples above)

// Load the ACT Case data for the approved cases that have been created and
// not yet exported, as well as any cases that have already been exported.
var cases = api.getModifiedExportRecords();
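The previously exported values can then be looked up per case, for instance to generate cancellation records
against them. A minimal sketch using the documented getPreviouslyExportedRecord() method (the
cancellation logic itself is assumed):

for (var i = 0; i < cases.length; ++i) {
    var currentCase = cases[i];

    // Look up the most recent previously-exported data for this case, if any.
    // Only records that had an EXPORT_DESTINATION set are returned here.
    var previous = api.getPreviouslyExportedRecord(currentCase.act_case);
    if (previous != null) {
        log.info("Case " + currentCase.act_case + " was previously exported to " +
            previous.export_destination);
    }
}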
Note: getPreviouslyExportedRecord() will return the most recent record for the indicated ACT Case that had a
non-blank EXPORT_DESTINATION specified. If the EXPORT_DESTINATION is never specified when
processing new records, getPreviouslyExportedRecord() will always return null.
Note: If multiple runs of the export need to be rolled back, the rollback operation needs to be performed for
each of the runs (by specifying its batch job ID), starting with the most recent job and ending with the
oldest.
Troubleshooting
The job log of the script using the ACT Case Export API will contain information messages and, in the case of
problems, any error messages generated during processing. This job log should be reviewed if there are any
problems encountered while using the API.
Error message: Unable to set custom export field FIELD_NAME to value VALUE – error parsing value
Root problem: An attempt was made to store a value on an ACT Case Export record that was incompatible
with the type of the field, e.g. attempting to store the string “ABC” in a number field.
Solution: Ensure that all custom fields defined with addFieldMapping() are of the correct type for the type
of values that need to be stored in them.

Error message: Field name ‘FIELD_NAME’ has not been defined for the ActCaseExportRecord
Root problem: An attempt was made to read a value from an ACT Case Export Record with a field name that
had not been defined.
Solution: Ensure that addFieldMapping() is called when setting up the ACT Case Export API for every field
that needs to be referenced off of the records.

Error message: Cannot define a custom mapping for field name ‘FIELD_NAME’. Standard mapping is already
defined using that name.
Root problem: An attempt was made to specify a custom field name on an ACT Case Export record that
matched a field name in the ACT_CASE_EXPORT table.
Solution: Ensure that all field names specified in addFieldMapping() do not overlap with the actual field
names defined in the ACT_CASE_EXPORT table.

Table 7: ACT Case Export typical error messages, root problems, and solutions
API Reference
Knowledge of JavaScript programming is necessary to best make use of this section.
The ACT Case Export API consists of the following component(s):
1. The ActCaseExportAPI, which defines a process for determining which ACT Cases have had changes
and for storing information about them.
2. The ActCaseExportRecord, which stores information about a single ACT Case.
See the contents of the ACT_CASE_EXPORT_API policy in the Distributed JavaScript Library category for full
documentation on these methods. The following is a summary of the available methods and common uses:
ActCaseExportAPI
ActCaseExportAPI()
Creates a new instance of the ACT Case Export API
getExportRecords()
Returns an array of ActCaseExportRecords for all ACT Cases that meet the status and date criteria used to
initialize the API and have either had a change associated with one of the indicated history event triggers or
that have previously been exported. This allows retractions to be generated if a Case was previously
exported but now no longer meets the criteria for inclusion in the export.
getModifiedExportRecords()
Returns an array of ActCaseExportRecords for all ACT Cases that meet the status and date criteria used to
initialize the API and have had a change associated with one of the indicated history event triggers.
getPreviouslyExportedRecord(actCaseId)
Returns an ActCaseExportRecord reflecting the most recent data for the specified ACT Case that was
previously exported for the current export ID. If no data for the ACT Case has been previously exported, null
will be returned. Only records that had an export target specified during processing previously are
considered to have been exported by this method.
close()
Writes the ActCaseExportRecords, including any custom field mappings and their current export state, to the
database. This call is necessary to finalize the changes associated with this export and stop them from being
picked up again during the next run of the export.
rollbackPriorExport(exportId, jobId)
Removes all records from the database associated with the specified export ID and batch job ID. This allows
those same changes to be re-evaluated during the next run of the export.
ActCaseExportRecord
isChangedSinceLastExport()
Returns true if the ACT Case represented by this record had changes linked to any of the specified history
event triggers since the last time it was exported, or false if it did not. This is needed when calling
getExportRecords() on the ActCaseExportAPI, where not all of the records returned will necessarily have had
any relevant changes.
Advanced Scheduler API
Prerequisites
To use this API, you should be familiar with the following functionality:
• Basic JavaScript coding
• Basic understanding of Advanced Scheduler components
Components
This API consists of the following component(s):
• The distributed JavaScript library ADVANCED_SCHEDULER_API
Setup
No setup is necessary for the Advanced Scheduler API. The distributed library is automatically available
within WorkForce Time and Attendance.
To sync employee data between WorkForce Time and Advanced Scheduler, employees need to be
assigned a general role that includes the AS_EMPLOYEE system feature. To sync user data between the two
modules, users need to be assigned a General Role, or delegated an assignment group with a Group Role,
that includes the AS_USER system feature.
Use Cases
Syncing Employee Data Between WorkForce Time and Advanced
Scheduler
The Advanced Scheduler API provides the means by which employee demographic data is synchronized
between the WorkForce Time and Advanced Scheduler modules of WorkForce Time and Attendance. A
Scheduled Script process should be set up to run daily (after the main employee import completes), which
would sync up the AS data with the latest data that has been loaded into WorkForce Time.
Data can be mapped from WorkForce Time to Advanced Scheduler using one of two methods, which can
also be combined if desired. The simplest way, if the WorkForce Time data can be used exactly as is without
being modified or transformed, is to use a Decoder to map the data over. The Decoder for this sync process
is unusual: instead of having a single source column that maps to one or more destination tables, as
Decoders are usually structured, this process uses a Decoder that has up to four source columns all
mapping to a single destination.
Figure 1: Sample decoder mapping for the Advanced Scheduler sync process
When using a Decoder with this process, the Decoder needs to have at least one of the EMPLOYEE, ASGNMT,
EMPLOYEE_MASTER, or ASGNMT_MASTER columns specified. Any of these columns are optional, but at
least one of the four must be present in order for mappings to be applied. The Decoder needs to specify
AS_ASSIGNMENT as the destination for the mappings.
Note: The AS_ASSIGNMENT column that must be specified in the Decoder does not match the actual table
name for the Advanced Scheduler Assignment data, which is AS_ASGNMT.
The following script example demonstrates using the Advanced Scheduler API to sync data between
WorkForce Time and Advanced Scheduler using only a Decoder to define the mappings:
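A minimal sketch, assuming a customer Decoder policy named "AS_ASSIGNMENT_SYNC_DECODER" (the
policy name is a placeholder):

includeDistributedPolicy("ADVANCED_SCHEDULER_API");

// Sync the Advanced Scheduler assignment data using only the Decoder mappings
var api = new AdvancedSchedulerAPI({ debug: false });
api.importAssignmentData({ decoderPolicy: "AS_ASSIGNMENT_SYNC_DECODER" });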
Sometimes custom logic or transformations need to be applied to the WorkForce Time data in order to
correctly format it for Advanced Scheduler. In these cases, a simple Decoder mapping can’t be used since the
Decoder can’t transform the data. The Advanced Scheduler API allows for a custom mapping function to be
defined which allows for custom logic to be inserted into the mapping process, allowing whatever script-
based logic is needed to be used to derive the final mappings.
The following script example demonstrates using a custom mapping function to populate the AS Assignment
record:
includeDistributedPolicy("ADVANCED_SCHEDULER_API");

// The custom mapping function containing the logic to be used to map data to the
// AS Assignment record
function customMappingFunction(record, employeeMaster, asgnmtMaster, employee,
        asgnmt) {
    record.alternate_employee_id = employee.display_employee;
    record.seniority_date = employee.seniority_date;
    record.birth_date = employee.birth_date;
    record.phone_no1 = employee.home_phone;
    record.phone_no2 = employee.mobile_phone;
    record.pay_rate = asgnmt.base_pay_rate;
}

// Run the sync using the custom mapping function
var api = new AdvancedSchedulerAPI();
api.importAssignmentData({ mappingFunction: customMappingFunction });
Note: Both a Decoder and a custom mapping function can be specified. In that case, the Decoder is applied
before the custom mapping function is called and the record argument provided there will be pre-
populated with the results of the Decoder mapping.
In certain cases, it is desirable to have the import not overwrite existing values for a particular field. This
would typically be the case when the Maintain Employees screen is being used to populate a particular field
on the AS Assignment record, or when the WorkForce Time data does not include Rotation Pattern, OT Cost
Control, and/or Fatigue Management information for the employee and these need to be manually
maintained.
The Advanced Scheduler API offers two different approaches for specifying fields to keep. The first approach
is used whenever a particular field or set of fields should have their values preserved globally, regardless of
any specific data conditions. (This would most commonly be used for fields that are manually maintained.)
The following script example demonstrates specifying the pay rate and seniority date fields on the AS
Assignment record as fields to keep:
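A minimal sketch, reusing the placeholder Decoder policy name from the earlier example; the field names
come from the custom mapping function shown above:

includeDistributedPolicy("ADVANCED_SCHEDULER_API");

// Preserve the existing pay rate and seniority date on every AS Assignment record
var api = new AdvancedSchedulerAPI();
api.importAssignmentData({
    decoderPolicy: "AS_ASSIGNMENT_SYNC_DECODER",   // placeholder policy name
    fieldsToKeep: ["pay_rate", "seniority_date"]
});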
In other cases, whether a field is preserved or not may depend on certain conditions being present in the
data. For these cases, an additional property called fieldsToKeep can be used on the record being mapped
within the custom mapping function to specify which fields should be preserved for just that record currently
being evaluated.
The following script example demonstrates using the Advanced Scheduler API to specify that the
user_field10 value should be preserved, but only for employees who are excluded from one-touch callout:
includeDistributedPolicy("ADVANCED_SCHEDULER_API");

// The custom mapping function containing the logic to be used to map data to the
// AS Assignment record
function customMappingFunction(record, employeeMaster, asgnmtMaster, employee,
        asgnmt) {
    record.alternate_employee_id = employee.display_employee;
    record.seniority_date = employee.seniority_date;
    record.birth_date = employee.birth_date;
    record.phone_no1 = employee.home_phone;
    record.phone_no2 = employee.mobile_phone;
    record.pay_rate = asgnmt.base_pay_rate;

    // Preserve user_field10, but only for employees excluded from one-touch
    // callout. NOTE: isExcludedFromOneTouchCallout() is a hypothetical helper;
    // base this condition on whatever data actually drives the exclusion.
    if (isExcludedFromOneTouchCallout(employee, asgnmt)) {
        record.fieldsToKeep = ["user_field10"];
    }
}

var api = new AdvancedSchedulerAPI();
api.importAssignmentData({ mappingFunction: customMappingFunction });
Note: It is possible to define both global fields to keep and per-record fields to keep at the same time, if
needed. However, if a field is marked as a global field to keep there is no way to disable that on a
record-by-record basis.
If the customer can provide all of the necessary fields for the Model import, but for some reason needs to
use different names for some or all of the fields in the file, then an alias map can be defined to translate the
standard field names into custom names.
The following script example demonstrates importing Model data from a file, where the standard “StartDate”
and “EndDate” fields have been aliased as “Begin Date” and “Close Date”, respectively:
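A sketch under stated assumptions: openValidatedFile() is a hypothetical helper standing in for the File
Manager setup (the import methods expect an already opened and validated FileManager), and the alias
map is assumed to be keyed by the standard field names:

includeDistributedPolicy("ADVANCED_SCHEDULER_API");
includeDistributedPolicy("FILE_MANAGER_API");

// Hypothetical helper that returns an opened, validated FileManager for the file
var fileManager = openValidatedFile("as_models.csv");

// Translate the standard field names into the custom names used in the file
var aliases = {
    "StartDate": "Begin Date",
    "EndDate": "Close Date"
};

var api = new AdvancedSchedulerAPI({ archiveFiles: true });
api.importModelsFromFile(fileManager, aliases);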
The Rotation Pattern details define the pattern itself. These indicate which days are working days and which
are not in the Rotation Pattern, as well as determining the length of the pattern.
The Rotation Pattern details import uses the RotationPatternName to identify which Rotation Pattern should
be updated to include the details. This needs to match the name of an existing Rotation Pattern within
Advanced Scheduler. The DayNumber field is then used to identify which day within the pattern the details
belong to. If there is an error on one of the days for a Rotation Pattern, then all of the details for that
Rotation Pattern being processed during the import will be errored out and the Rotation Pattern will remain
unchanged.
Note: Day numbers are not allowed to be duplicated or have gaps without records in the import data for the
same Rotation Pattern.
The following script example demonstrates using the Advanced Scheduler API to import Rotation Pattern
details from a file:
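A sketch of the Nuclear Industry variant of this import (the placement here is inferred from the surrounding
Nuclear sections; openValidatedFile() is the same hypothetical helper as above, and passing null for the
aliases is assumed to select the standard field names):

includeDistributedPolicy("ADVANCED_SCHEDULER_API");
includeDistributedPolicy("FILE_MANAGER_API");

var fileManager = openValidatedFile("rotation_pattern_details.csv");

// Import the Rotation Pattern detail rows using the standard field names
var api = new AdvancedSchedulerAPI();
api.importNuclearRotationPatternDetailsFromFile(fileManager, null);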
The FM metadata defines the Fatigue Management settings for the Rotation Pattern, including the outage
status and how long the MDO period is. The FM metadata import uses the RotationPatternName to identify
which Rotation Pattern should be updated with the FM metadata. This needs to match the name of an
existing Rotation Pattern within Advanced Scheduler.
FM metadata within Advanced Scheduler is effective dated, and the FM metadata import follows the
standard WorkForce Time and Attendance import rules for effective date handling. Specifically, if the start
date on the record being imported is earlier than an existing start date for that same Rotation Pattern, then
the later effective-dated records for that Rotation Pattern will be removed. If the start date on the record
being imported is later than an existing start date for that Rotation Pattern, then the end date of that record
will be adjusted to the day before the new start date so as to avoid any overlap. If the start date exactly
matches an existing start date for the Rotation Pattern, then that existing record will be updated in place.
The following script example demonstrates using the Advanced Scheduler API to import Rotation Pattern FM
metadata from a file:
includeDistributedPolicy("FILE_MANAGER_API");
The Rotation Pattern details define the pattern itself. These indicate which days are working days and which
are not in the Rotation Pattern, as well as determining the length of the pattern.
The Rotation Pattern details import uses the RotationPatternName to identify which Rotation Pattern should
be updated to include the details. This needs to match the name of an existing Rotation Pattern within
Advanced Scheduler. The DayNumber field is then used to identify which day within the pattern the details
belong to. If there is an error on one of the days for a Rotation Pattern, then all of the details for that
Rotation Pattern being processed during the import will be errored out and the Rotation Pattern will remain
unchanged.
Note: Day numbers are not allowed to be duplicated or have gaps without records in the import data for the
same Rotation Pattern.
The following script example demonstrates using the Advanced Scheduler API to import Rotation Pattern
details from a file:
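A sketch of the Petrochemical Industry variant of this import (the placement is inferred from the order of
the method reference below; the same assumptions as the previous sketches apply):

includeDistributedPolicy("ADVANCED_SCHEDULER_API");
includeDistributedPolicy("FILE_MANAGER_API");

var fileManager = openValidatedFile("petrochem_rotation_pattern_details.csv");

var api = new AdvancedSchedulerAPI();
api.importPetroChemicalRotationPatternDetailsFromFile(fileManager, null);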
Importing Qualifications
The Advanced Scheduler API can be used to import Qualification data from a file into Advanced Scheduler.
The Qualifications import keys off of the value in the QualificationName field: if an existing Qualification is
found with that same name then the Qualification will be updated with the new data, otherwise a new
Qualification will be created.
In addition to creating the Qualification itself, the Qualifications import will also make that Qualification
available to the specified Scheduling Unit.
The following script example demonstrates using the Advanced Scheduler API in order to import Qualification
data from a file:
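A minimal sketch (openValidatedFile() is the hypothetical helper described above):

includeDistributedPolicy("ADVANCED_SCHEDULER_API");
includeDistributedPolicy("FILE_MANAGER_API");

var fileManager = openValidatedFile("qualifications.csv");

// Create or update Qualifications keyed by the QualificationName field
var api = new AdvancedSchedulerAPI();
api.importQualificationsFromFile(fileManager, null);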
Importing Stations
The Advanced Scheduler API can be used to import Station data from a file into Advanced Scheduler. The
Stations import keys off of the value of the StationName field specified in the source data: if an existing
Station is found with that name then that Station will be updated with the new data, otherwise a new Station
will be created.
The following script example demonstrates using the Advanced Scheduler API to import Station data:
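A sketch only: a Station import method does not appear in the API Reference excerpt below, so the method
name importStationsFromFile() is an assumption based on the naming pattern of the other file importers:

includeDistributedPolicy("ADVANCED_SCHEDULER_API");
includeDistributedPolicy("FILE_MANAGER_API");

var fileManager = openValidatedFile("stations.csv");

// Create or update Stations keyed by the StationName field
var api = new AdvancedSchedulerAPI();
api.importStationsFromFile(fileManager, null);   // hypothetical method name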
Note: The import will always create a new Slot, never update an existing one. It is not possible to use this
import to process updates to Slots after they’ve been created.
The following script example demonstrates using the Advanced Scheduler API to import Event and Slot data:
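A minimal sketch using the documented importEventsAndSlotsFromFile() method (openValidatedFile()
remains a hypothetical helper):

includeDistributedPolicy("ADVANCED_SCHEDULER_API");
includeDistributedPolicy("FILE_MANAGER_API");

var fileManager = openValidatedFile("events_and_slots.csv");

// Create Events as needed and add new Slots onto the existing Events
var api = new AdvancedSchedulerAPI();
api.importEventsAndSlotsFromFile(fileManager, null);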
The data that is exported to the file can also be filtered so that only certain portions of the data are written
to the file. For instance, it may only be desirable to export the records for a certain subset of the data, such
as only the most recent effective-dated Model record.
The following script example demonstrates how the filtering provided by the Advanced Scheduler API can be
used to export only Model data that is effective on 3000-12-31 to a file:
// Defines the logic that should be used to determine which Models are exported.
// This will be executed once for each Model record, and should return true if
// that record should be exported, false if it should not.
function modelFilterFunction(record) {
    // Only export the records with an end date of 3000-12-31
    return (record.EndDate == "3000-12-31");
}
Troubleshooting
The job log of the script using the Advanced Scheduler API will contain information messages and, in the case
of problems, any error messages generated during processing. This job log should be reviewed if there are
any problems encountered while using the API.
API Reference
Knowledge of JavaScript programming is necessary to best make use of this section.
See the contents of the ADVANCED_SCHEDULER_API policy in the Distributed JavaScript Library category for
full documentation on these methods. The following is a summary of the available methods and common
uses:
AdvancedSchedulerAPI
AdvancedSchedulerAPI(parms)
Creates a new instance of the Advanced Scheduler API, using the provided settings. The available parameters
that can be defined are:
Parameter Description
debug Indicates if additional debug information should be written to the job log to aid in
troubleshooting. Defaults to false if not specified.
archiveFiles Indicates if a file should be renamed after it has been processed, when importing data
from files. Defaults to false if not specified.
Table 9: Available parameters when initializing the Advanced Scheduler API
importAssignmentData(parms)
Synchronizes the Advanced Scheduler Assignment and User data with the WorkForce Time employee data.
This needs to be run on an ongoing basis in order to ensure that Advanced Scheduler is scheduling employees
correctly. The available parameters that can be defined are:
Parameter Description
decoderPolicy The Decoder Policy to apply in order to map data from WorkForce Time to the
Advanced Scheduler assignments.
mappingFunction Custom mapping function that can be used to map data from WorkForce Time
to the Advanced Scheduler assignments, or to apply custom logic to derive
the data to be mapped.
fieldsToKeep Field names on the Advanced Scheduler assignment record whose values
should not be overwritten by the import process.
keepRotationPatterns Indicates whether existing Rotation Pattern assignments should be preserved
by the import process. Defaults to false if not specified.
Table 10: Available parameters when calling importAssignmentData
importModelsFromFile(fileManager, aliases)
Imports Advanced Scheduler Model data from the specified file. The provided FileManager object is
expected to already have been opened and validated.
The provided aliases allow the standard field names to be overridden, in case the customer needs to provide
different field names in the file than the standard values. The standard field names expected by the import
are:
ModelName StationName ShiftName QualificationNames ModelType StartDate
EndDate DifficultyRating AllowJobOverlap CalloutOrder CalloutWritesOTEqualization FMStatus
SunStart SunEnd SunProficiency SunScheduleOrder SunRequiredHeadCount SunOptionalHeadCount
MonStart MonEnd MonProficiency MonScheduleOrder MonRequiredHeadCount MonOptionalHeadCount
TuesStart TuesEnd TuesProficiency TuesScheduleOrder TuesRequiredHeadCount TuesOptionalHeadCount
WedStart WedEnd WedProficiency WedScheduleOrder WedRequiredHeadCount WedOptionalHeadCount
ThuStart ThuEnd ThuProficiency ThuScheduleOrder ThuRequiredHeadCount ThuOptionalHeadCount
FriStart FriEnd FriProficiency FriScheduleOrder FriRequiredHeadCount FriOptionalHeadCount
SatStart SatEnd SatProficiency SatScheduleOrder SatRequiredHeadCount SatOptionalHeadCount
HolStart HolEnd HolProficiency HolScheduleOrder HolRequiredHeadCount HoldOptionalHeadCount
Description UserField1 UserField2 UserField3 UserField4 UserField5
UserField6 UserField7 UserField8 UserField9 UserField10 UserField11
UserField12 UserField13 UserField14 UserField15
Table 11: Standard field names expected for importing Model data
importNuclearRotationPatternMastersFromFile(fileManager, aliases)
Imports Advanced Scheduler Rotation Pattern master data, containing the information necessary for a
Nuclear Industry customer, from the specified file. The provided FileManager object is expected to already
have been opened and validated.
The provided aliases allow the standard field names to be overridden, in case the customer needs to provide
different field names in the file than the standard values. The standard field names expected by the import
are:
RotationPatternName Description SchedulingGroup SchedulingUnit UserField1
UserField2 UserField3 UserField4 UserField5 UserField6
UserField7 UserField8 UserField9 UserField10
Table 12: Standard field names expected for importing Nuclear Industry Rotation Pattern master data
importNuclearRotationPatternDetailsFromFile(fileManager, aliases)
Imports Advanced Scheduler Rotation Pattern detail data, containing the information necessary for a Nuclear
Industry customer, from the specified file. The provided FileManager object is expected to already have been
opened and validated.
The provided aliases allow the standard field names to be overridden, in case the customer needs to provide
different field names in the file than the standard values. The standard field names expected by the import
are:
importNuclearRotationPatternFMMetadataFromFile(fileManager, aliases)
Imports Advanced Scheduler Rotation Pattern FM metadata, containing the information necessary for a
Nuclear Industry customer, from the specified file. The provided FileManager object is expected to already
have been opened and validated.
The provided aliases allow the standard field names to be overridden, in case the customer needs to provide
different field names in the file than the standard values. The standard field names expected by the import
are:
RotationPatternName AveragingWeekStartDay PeriodType
StandardOnlinePeriodLength ShiftLength OutageStatus
StartDate EndDate Period1StartDay
Period2StartDay Period3StartDay Period4StartDay
Period5StartDay
Table 14: Standard field names expected for importing Nuclear Industry Rotation Pattern FM metadata
importPetroChemRotationPatternMastersFromFile(fileManager, aliases)
Imports Advanced Scheduler Rotation Pattern master data, containing the information necessary for a
Petrochemical Industry customer, from the specified file. The provided FileManager object is expected to
already have been opened and validated.
The provided aliases allow the standard field names to be overridden, in case the customer needs to provide
different field names in the file than the standard values. The standard field names expected by the import
are:
RotationPatternName Description SchedulingGroup SchedulingUnit UserField1
UserField2 UserField3 UserField4 UserField5 UserField6
UserField7 UserField8 UserField9 UserField10 StartDate
EndDate OutageStatus ShiftLength
Table 15: Standard field names expected for importing Petrochemical Industry Rotation Pattern master data
importPetroChemicalRotationPatternDetailsFromFile(fileManager, aliases)
Imports Advanced Scheduler Rotation Pattern detail data, containing the information necessary for a
Petrochemical Industry customer, from the specified file. The provided FileManager object is expected to
already have been opened and validated.
The provided aliases allow the standard field names to be overridden, in case the customer needs to provide
different field names in the file than the standard values. The standard field names expected by the import
are:
RotationPatternName DayNumber IsWorking Model UserField1
UserField2 UserField3 UserField4 UserField5
Table 16: Standard field names expected for importing Petrochemical Industry Rotation Pattern detail data
importQualificationsFromFile(fileManager, aliases)
Imports Advanced Scheduler Qualification data from the specified file. The provided FileManager object is
expected to already have been opened and validated.
The provided aliases allow the standard field names to be overridden, in case the customer needs to provide
different field names in the file than the standard values. The standard field names expected by the import
are:
QualificationName Description Organization
Facility SchedulingGroup SchedulingUnit
Table 17: Standard field names expected for importing Qualification data
importAssignmentQualificationsFromFile(fileManager, aliases)
Imports Advanced Scheduler Assignment Qualification data from the specified file. The provided
FileManager object is expected to already have been opened and validated.
The provided aliases allow the standard field names to be overridden, in case the customer needs to provide
different field names in the file than the standard values. The standard field names expected by the import
are:
EmployeeID QualificationName Proficiency
StartDate ExpirationDate TransactionType
Table 18: Standard field names expected for importing Assignment Qualification data
importEventsAndSlotsFromFile(fileManager, aliases)
Imports Advanced Scheduler Assignment Event and Slot data from the specified file. Events will be created if
they don’t already exist; otherwise new slots will be added onto the existing event that is already defined.
This import does not support updating existing slots or events in any fashion. The provided FileManager
object is expected to already have been opened and validated.
The provided aliases allow the standard field names to be overridden, in case the customer needs to provide
different field names in the file than the standard values. The standard field names expected by the import
are:
Model StartDate StartTime
EndDate EndTime EventName
RequiredHeadCount WorkOrderNo Note
Table 19: Standard field names expected for importing Event and Slot data
importStationsFromFile(fileManager, aliases)
Imports Advanced Scheduler Assignment Station data from the specified file. The provided FileManager
object is expected to already have been opened and validated.
The provided aliases allow the standard field names to be overridden, in case the customer needs to provide
different field names in the file than the standard values. The standard field names expected by the import
are:
StationName Description Organization Facility
Area SchedulingGroup SchedulingUnit UserField1
UserField2 UserField3 UserField4 UserField5
Table 20: Standard field names expected for importing Station data
Prerequisites
To use this API, you should be familiar with the following functionality:
• Basic JavaScript coding
Components
This API consists of the following component(s):
• The ADVANCED_SCHEDULER_DATA_API Distributed JavaScript Library
• The API_UTIL Distributed JavaScript Library
Setup
No setup is necessary for the Advanced Scheduler Data API. The distributed library is automatically available
in WT&A.
Use Cases
Get Models by using AdvancedSchedulerMatchCondition
The following script examples demonstrate how models can be retrieved.
includeDistributedPolicy("ADVANCED_SCHEDULER_DATA_API");
var apiParams = {
enableDebugLogging: true
};
matchCondition.and(new AdvancedSchedulerMatchCondition(matchConditionParam));
var actualRecords = asAPI.getModels(matchCondition);
if (actualRecords.length != 0) {
log.info("JOBASSIGN_DESCRIPTION: " + actualRecords[0].JOBASSIGN_DESCRIPTION + " JOB_ID:
" + actualRecords[0].JOB_ID);
}
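The excerpts above omit the setup and match-condition lines. The following is a minimal end-to-end sketch, assuming the constructor parameter documented in the API Reference below; the match-condition parameters shown are hypothetical, and the options actually supported by AdvancedSchedulerMatchCondition should be confirmed before use.
includeDistributedPolicy("ADVANCED_SCHEDULER_DATA_API");
// Create the API instance (see AdvancedSchedulerDataAPI in the API Reference)
var apiParams = {
    enableDebugLogging: true
};
var asAPI = new AdvancedSchedulerDataAPI(apiParams);
// Hypothetical match-condition parameters; the supported options are not
// documented in this excerpt
var matchConditionParam = {
    fieldName: "JOBASSIGN_DESCRIPTION",
    value: "DAY_SHIFT"
};
var matchCondition = new AdvancedSchedulerMatchCondition(matchConditionParam);
// Retrieve matching Model records from tbl_mst_job_assignment
var actualRecords = asAPI.getModels(matchCondition);
if (actualRecords.length != 0) {
    log.info("JOBASSIGN_DESCRIPTION: " + actualRecords[0].JOBASSIGN_DESCRIPTION);
}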
var apiParams = {
enableDebugLogging: true
};
matchCondition.and(new AdvancedSchedulerMatchCondition(matchConditionParam));
var actualRecords = asAPI.getQualifications(matchCondition);
if (actualRecords.length != 0) {
log.info("SKILL_DESCRIPTION: " + actualRecords[0].SKILL_DESCRIPTION + "
CREATED_BY: " + actualRecords[0].CREATED_BY);
}
var apiParams = {
enableDebugLogging: true
};
if (actualRecords.length != 0) {
log.info("DEPT_NAME: " + actualRecords[0].DEPT_NAME + "
DEPT_ID: " + actualRecords[0].DEPT_ID);
}
var apiParams = {
enableDebugLogging: true
};
if (actualRecords.length != 0) {
log.info("USER_ID: " + actualRecords[0].USER_ID);
}
includeDistributedPolicy("ADVANCED_SCHEDULER_DATA_API");
var apiParams = {
enableDebugLogging: true
};
var apiParams = {
enableDebugLogging: true
};
if (actualRecords.length != 0) {
log.info("JOB_DESCRIPTION: " + actualRecords[0].JOB_DESCRIPTION);
}
var apiParams = {
enableDebugLogging: true
};
if (actualRecords.length != 0) {
log.info("SCHEDULING_GROUP: " + actualRecords[0]. SCHEDULING_GROUP);
}
var apiParams = {
enableDebugLogging: true
};
if (actualRecords.length != 0) {
log.info("RP_CODE: " + actualRecords[0].RP_CODE);
}
var apiParams = {
enableDebugLogging: true
};
if (actualRecords.length != 0) {
log.info("DESCRIPTION: " + actualRecords[0].DESCRIPTION);
}
var apiParams = {
enableDebugLogging: true
};
if (actualRecords.length != 0) {
log.info("ASGNMT: " + actualRecords[0].ASGNMT);
}
var apiParams = {
enableDebugLogging: true
};
if (asgnmtRecords.length != 0) {
log.info("ASGNMT: " + asgnmtRecords[0].ASGNMT);
}
if (empRpDetailRecords.length != 0) {
log.info("RPATTERN_ID: " + empRpDetailRecords[0].RPATTERN_ID);
}
var apiParams = {
enableDebugLogging: true
};
if (asgnmtRecords.length != 0) {
log.info("ASGNMT: " + asgnmtRecords[0].ASGNMT);
}
if (empRpDetailRecords.length != 0) {
log.info("RPATTERN_ID: " + empRpDetailRecords[0].RPATTERN_ID);
}
if (rpDetailsRecords.length != 0) {
log.info("RPATTERN_ID: " + rpDetailsRecords[0].RPATTERN_ID + " RP_DETAIL_ID: " +
rpDetailsRecords[0].RP_DETAIL_ID);
}
Troubleshooting
The job log of the script using the Advanced Scheduler Data API will include informational messages, and in
the case of problems, error messages. This job log should be reviewed if there are problems using the API.
The following table lists some common error messages, their causes, and the solution.
Error Message: Match condition is not provided
Cause: The required match condition was not provided to the calling method.
Solution: Provide an AdvancedSchedulerMatchCondition to the method.
Table 23: Advanced Scheduler Data API common error messages, causes, and solutions
API Reference
Knowledge of JavaScript programming is necessary to make best use of this section.
AdvancedSchedulerDataAPI
The following methods are available for AdvancedSchedulerDataAPI.
AdvancedSchedulerDataAPI (params)
Constructor to create a new instance of the API. Takes an object as parameter, which includes the parameter
defined in the following table.
Parameter Description
enableDebugLogging True if debug content should be written to the log, false if not.
getModels (advancedSchedulerMatchCondition)
Performs a lookup on the tbl_mst_job_assignment table using the provided match condition and returns
matching rows as ReadOnlyScriptables.
Parameter Description
advancedSchedulerMatchCondition Match condition to retrieve matching records.
getUsers (advancedSchedulerMatchCondition)
Performs a lookup on the tbl_mst_users table using the provided match condition and returns matching rows
as ReadOnlyScriptables.
Parameter Description
advancedSchedulerMatchCondition Match condition to retrieve matching records.
getSchedulingUnits (advancedSchedulerMatchCondition)
Performs a lookup on the tbl_mst_department table using the provided match condition and returns
matching rows as ReadOnlyScriptables.
Parameter Description
advancedSchedulerMatchCondition Match condition to retrieve matching records.
getQualifications (advancedSchedulerMatchCondition)
Performs a lookup on the tbl_mst_skills table using the provided match condition and returns matching rows
as ReadOnlyScriptables.
Parameter Description
advancedSchedulerMatchCondition Match condition to retrieve matching records.
getSchedulingGroups (advancedSchedulerMatchCondition)
Performs a lookup on the tbl_mst_division table using the provided match condition and returns matching
rows as ReadOnlyScriptables.
Parameter Description
advancedSchedulerMatchCondition Match condition to retrieve matching records.
getStations (advancedSchedulerMatchCondition)
Performs a lookup on the tbl_mst_jobs table using the provided match condition and returns matching rows
as ReadOnlyScriptables.
Parameter Description
advancedSchedulerMatchCondition Match condition to retrieve matching records.
getOrganizations (advancedSchedulerMatchCondition)
Performs a lookup on the tbl_mst_organization table using the provided match condition and returns
matching rows as ReadOnlyScriptables.
Parameter Description
advancedSchedulerMatchCondition Match condition to retrieve matching records.
getRotationPatterns (advancedSchedulerMatchCondition)
Performs a lookup on the tbl_mst_rotational_pattern table using the provided match condition and returns
matching rows as ReadOnlyScriptables.
Parameter Description
advancedSchedulerMatchCondition Match condition to retrieve matching records.
getFacilities (advancedSchedulerMatchCondition)
Performs a lookup on the tbl_mst_facility table using the provided match condition and returns matching
rows as ReadOnlyScriptables.
Parameter Description
advancedSchedulerMatchCondition Match condition to retrieve matching records.
getAssignments (advancedSchedulerMatchCondition)
Performs a lookup on the as_asgnmt table using the provided match condition and returns matching rows as
AsAsgnmtScriptables. AsAsgnmtScriptable gives support to get emp_rp_details records by using
AsAsgnmtScriptable#getEmpRpDetails, getEmpRpDetails returns a list of EmpRpDetailsScriptable. Similarly,
EmpRpDetailsScriptable gives support to get rp_details records by using EmpRpDetailsScriptable
#getRpDetails, getRpDetails returns ReadOnlyScriptables of rp_details records.
Parameter Description
advancedSchedulerMatchCondition Match condition to retrieve matching records.
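The following sketch walks this chain, assuming an API instance and match condition created as in the Use Cases above.
var asgnmtRecords = asAPI.getAssignments(matchCondition);
if (asgnmtRecords.length != 0) {
    // emp_rp_details records related to the first matching assignment
    var empRpDetailRecords = asgnmtRecords[0].getEmpRpDetails();
    if (empRpDetailRecords.length != 0) {
        // rp_details records related to the first emp_rp_details record
        var rpDetailsRecords = empRpDetailRecords[0].getRpDetails();
        if (rpDetailsRecords.length != 0) {
            log.info("RP_DETAIL_ID: " + rpDetailsRecords[0].RP_DETAIL_ID);
        }
    }
}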
NOTE: The Assignment Group API will no longer automatically revoke assignment group delegations that
were manually re-delegated after having initially been delegated by the API.
Prerequisites
To use this API, you should be familiar with the following functionality:
• Basic JavaScript coding
• How assignment groups function
Components
This API consists of the following component(s):
• AssignmentGroupOperator, AssignmentGroupFilter, AssignmentGroupParameters, and
AssignmentGroupAPI. These components are in the distributed JavaScript Library,
ASSIGNMENT_GROUP_API.
• This library automatically includes the API_UTIL library as well, for providing access to additional
functionality needed within the Assignment Group API.
Setup
No setup is necessary for the Assignment Group API. The distributed library is automatically available in
WorkForce Time and Attendance.
Use Cases
Creating a new assignment group
This script excerpt demonstrates how the Assignment Group API can be used to create a new assignment
group. A new group will be created named “Sample Assignment Group” using the provided definition and
settings, containing all assignments where employee.other_string5 = “A1150” and asgnmt.ld2 != “BV113”.
Note: Even if there is already a group with that description, or that matches the specified settings, a new
assignment group would be created by this script.
includeDistributedPolicy("ASSIGNMENT_GROUP_API");
// Initialize the API. For this example, the argument here is irrelevant.
var assignmentGroupAPI = new AssignmentGroupAPI("GROUP_CREATION");
Note: Even if there is already a group with that description, a new assignment group would be created by this
script.
includeDistributedPolicy("ASSIGNMENT_GROUP_API");
// Initialize the API. For this example, the argument here is irrelevant
var assignmentGroupAPI = new AssignmentGroupAPI("PROFILE_GROUPS");
Note: Even if there is already a group with that description, a new assignment group would be created by this
script.
includeDistributedPolicy("ASSIGNMENT_GROUP_API");
// Initialize the API. For this example, the argument here is irrelevant
var assignmentGroupAPI = new AssignmentGroupAPI("DATA_GROUPS");
If the assignment data included assignments in locations A, B, and D, this would create three assignment
groups named “Location A”, “Location B”, and “Location D”.
// Initialize the API. For this example, the argument here is irrelevant.
var assignmentGroupAPI = new AssignmentGroupAPI("GROUP_CREATION");
This will attempt to find a group with the specified description, filters, and settings. An existing group will
only be considered a match if all of those attributes are the same. If a filter is different, or a setting, or the
description, then the group will not be considered a match. If no matching group is found, the value null will
be returned instead of the ID of the group.
// Initialize the API. For this example, the argument here is irrelevant.
var assignmentGroupAPI = new AssignmentGroupAPI("GROUP_CREATION");
This will attempt to find a group record with the specified description, filters, and settings. An existing group
record will only be considered a match if all of those attributes are the same: if a filter is different, or a
setting, or the description, then the group will not be considered a match. If no matching group record is
found, the value null will be returned instead of the record.
// Initialize the API. For this example, the argument here is irrelevant.
var assignmentGroupAPI = new AssignmentGroupAPI("GROUP_CREATION");
This will attempt to find groups with the specified description, filters, and settings. An existing group will only
be considered a match if all of those attributes are the same. If a filter is different, or a setting, or the
description, then the group will not be considered a match. If no matching group is found, an empty array
will be returned.
// Initialize the API. For this example, the argument here is irrelevant.
var assignmentGroupAPI = new AssignmentGroupAPI("GROUP_CREATION");
// Initialize the API. For this example, the argument here is irrelevant.
var assignmentGroupAPI = new AssignmentGroupAPI("GROUP_CREATION");
With this approach, you are guaranteed to get an ID returned, since a new group would have been created if
no matching group was found.
// Indicate that the user should end up with rights to the group
assignmentGroupAPI.addGroupRight(userLoginId, groupId, groupRole);
}
// Assign the new group rights and revoke any old rights that were
// not specified.
assignmentGroupAPI.commitGroupRights();
After this script is run, assignment groups would have been delegated for each record in the data source
being processed. This could include multiple delegations for the same user if desired. If the user had not
previously been delegated those rights, then a new delegation would have been created.
If a user had previously been delegated rights using this same process name (“DELEGATION_SCRIPT_1”), and
those rights were not delegated again within the source data being processed, then those existing
delegations would have been revoked by the call to commitGroupRights().
Since this script doesn’t include any calls to addGroupRight(), it is effectively saying that all of the existing
delegations are no longer valid and should be revoked. Any rights previously delegated that are associated
with the specified process name would be revoked in this scenario.
This will provide access to the IDs for the assignments in the group. The Person Data API can be used to look
up the entire corresponding assignment record if additional information is needed for further processing.
// Get the IDs of only the active assignments in the assignment group
var assignments = assignmentGroupAPI.getAssignmentsInGroup(groupId, true);
In this case, whether a soft-terminated assignment is considered active or not depends on how the “display
soft-terminated assignments” setting from the AssignmentGroupParameters is set for the assignment group
being evaluated. If the group is set to display those assignments, then they will be considered active here.
Otherwise they are considered terminated.
In this case, the assignments already in the group would remain in the group and the new assignment would
be added. If the group already contained the specified assignment, no changes would be made to the group
membership.
// This is an array that will be populated with the assignment IDs for
// all of the assignments that should be included in the group
var assignmentsForGroup = [];
In this case, any assignments in the assignment group that were not in the array of assignment IDs would be
removed from the assignment group, and any new assignments in the array that were not already in the
assignment group would be added to its membership.
Example 1: Delegate all assignment group rights without any group filtration.
includeDistributedPolicy("ASSIGNMENT_GROUP_API");
//define parameters
var parms = {
supervisorField: "Manager_ID",// column name in ASGNMT table pointing to the managers ID
supervisorMatchID: "Other_String12",// column name in ASGNMT or EMPLOYEE table to which
// supervisor id maps.
descriptionFunction: descriptionFunction,
groupRole: "MANAGER",
isAsgnmtTable: true, //true if supervisorMatchID comes from ASGNMT table, false if comes
//from EMPLOYEE table.
assignDelegator: true,
delegatorLoginId: "WORKFORCE"
};
//Call to function
assignmentGroupAPI.delegateAssignmentGroupRightViaHierarchy(parms);
//define group filter function in order to exclude any group from being delegated to any top
//manager.
var filterFunction = function(asgnmt_grp, topSupervisorAppUser) {
//return true to allow the delegation, false to exclude the group
return true;
};
//Call to function
assignmentGroupAPI.delegateAssignmentGroupRightViaHierarchy(parms);
Troubleshooting
The job log of the script using the Assignment Group API will include informational messages, and in the case
of problems, error messages. This job log should be reviewed if there are problems using the API.
Some common error messages, their causes, and the solution:
Error Message: TYPE is an invalid assignment group type. Valid choices are ‘AUTO’ and ‘MANUAL’. (Not case sensitive)
Cause: The parameters for an assignment group specified an invalid group type.
Solution: Check that the group type is set to either AUTO or MANUAL in the assignment group parameters.

Error Message: OPTION is an invalid evaluate-as-of option. Valid options are ‘AS_OF_TODAY’, ‘CURRENT_PERIOD_END’, or ‘SYSTEM_SETUP’. (Not case sensitive)
Cause: The parameters for an assignment group specified an invalid as-of date option.
Solution: Check that the as-of date option is set to either AS_OF_TODAY, CURRENT_PERIOD_END, or SYSTEM_SETUP in the assignment group parameters.
API Reference
Knowledge of JavaScript programming is necessary to make best use of this section.
The following is a summary of the available methods and common uses:
AssignmentGroupOperator
EQUALS
Only assignments where the value in the specified field exactly matches the indicated value will be included
in the assignment group.
GREATER_THAN
Only assignments where the value in the specified field is greater than the indicated value will be included in
the assignment group. For string fields, this uses lexicographic ordering (so “3” is greater than “20”).
LESS_THAN
Only assignments where the value in the specified field is less than the indicated value will be included in the
assignment group. For string fields, this uses lexicographic ordering (so “20” is less than “3”).
GREATER_THAN_OR_EQUALS
Only assignments where the value in the specified field is the same as or greater than the indicated value will
be included in the assignment group.
LESS_THAN_OR_EQUALS
Only assignments where the value in the specified field is the same as or less than the indicated value will be
included in the assignment group.
NOT_EQUALS
Only assignments where the value in the specified field does not exactly match the indicated value will be
included in the assignment group.
IS_NULL
Only assignments where the value in the specified field is null will be included in the assignment group.
IS_NOT_NULL
Only assignments where the value in the specified field is not null will be included in the assignment group.
LIKE
Only assignments where the value in the specified field matches the pattern defined by the indicated value
will be included in the group.
% — A substitute for zero or more characters
_ — A substitute for a single character
[charlist] — Sets and ranges of characters to match
[^charlist] or [!charlist] — Matches only a character NOT specified within the brackets
NOT_LIKE
Only assignments where the value in the specified field does not match the pattern defined by the indicated
value will be included in the group.
IN
Only assignments where the value in the specified field is one of the items defined by the comma-delimited
list contained in the indicated value will be included in the group.
AssignmentGroupFilter
AssignmentGroupFilter(tableName, fieldName, operator, value)
Creates a new AssignmentGroupFilter that matches all assignments whose value in the indicated table/field
combination matches the indicated value, using the specified operator.
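For example, the filter from the use case earlier in this chapter (employee.other_string5 = "A1150") could be created as follows; the way the operator constant is referenced here is an assumption.
var filter = new AssignmentGroupFilter("EMPLOYEE", "OTHER_STRING5",
    AssignmentGroupOperator.EQUALS, "A1150");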
getTableName()
Returns the name of the table referenced by the filter. Will be either EMPLOYEE or ASGNMT. May be useful
for debugging.
getFieldName()
Returns the name of the field within the table referenced by the filter. May be useful for debugging.
getOperator()
Returns the AssignmentGroupOperator used by the filter. May be useful for debugging.
getValue()
Returns the value that the filter is comparing against. May be useful for debugging.
AssignmentGroupParameters
AssignmentGroupParameters()
Creates a new parameters object, using the default parameter settings.
setAssignmentGroupType(type)
Sets the desired assignment group type to either AUTO or MANUAL. If this is not called, then the default
assignment group type is AUTO.
getAssignmentGroupType()
Returns the assignment group type. May be useful for debugging.
setLd(fieldIndex, value)
Sets the LD field with the indicated index on the assignment group to the specified value.
getLd(fieldIndex)
Returns the value associated with the LD field with the indicated index on the assignment group. May be
useful for debugging.
setDisplaySoftTerminated(displaySoftTerminated)
Sets whether the assignment group should display soft-terminated assignments or not. If this is not called,
then the default behavior is for the group not to display soft-terminated assignments.
getDisplaySoftTerminated()
Returns whether the assignment group will display soft-terminated assignments or not. May be useful for
debugging.
setEOPPGroup(isEOPPGroup)
Sets whether the assignment group is visible on the end-of-period processing screen or not. If this is not
called, then the default behavior is for the group not to be displayed on the end-of-period processing screen.
getEOPPGroup()
Returns whether the assignment group is visible on the end-of-period processing screen or not. May be
useful for debugging.
setEvaluateAsOf(evaluateAsOf)
Sets which date is used to evaluate the membership of an automatic assignment group. Valid options are
“AS_OF_TODAY”, which looks at the assignment records that are effective on the current system date,
“CURRENT_PERIOD_END”, which looks at the assignment records that are effective on the last date of the
current period, or “SYSTEM_SETUP”, which uses the default behavior for the groups as defined by the System
Setup policy. If this is not called, then the default behavior is to use the behavior defined by the System
Setup policy.
getEvaluateAsOf()
Returns which date will be used for evaluating the assignment group membership. May be useful for
debugging.
setHrSystemId(hrSystemId)
Sets the HR system that is associated with the assignment group, for tracking purposes.
getHrSystemId()
Returns the HR system that is associated with the assignment group. May be useful for debugging.
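The following sketch combines these setters; the values shown are illustrative.
var parameters = new AssignmentGroupParameters();
parameters.setAssignmentGroupType("AUTO");
parameters.setDisplaySoftTerminated(false);
parameters.setEOPPGroup(true);
parameters.setEvaluateAsOf("CURRENT_PERIOD_END");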
AssignmentGroupAPI
AssignmentGroupAPI(processName)
Creates a new instance of the Assignment Group API, associated with the specified process name. The
process name should uniquely identify a process that is using the API for group delegation (e.g., “Location
Group Delegation”), and will be used during delegation processing to ensure that all groups delegated by that
process are revoked if they are no longer valid. Generally, this process name should be distinct for each
different job that is using the API, though in rare cases it may be desirable to share the same process name
across more than one job if the intent is that the delegations created by one job should potentially be
revoked by the other job.
recalculateAssignmentGroups(waitUntilFinished)
Launches the process to recalculate the membership for all automatic assignment groups. Depending on the
value for waitUntilFinished, this will either wait until the recalculation is complete before resuming script
execution or will immediately continue with the next action in the script.
createPolicyProfileGroups(parameters)
Creates assignment groups for each policy profile, each containing just the assignments in that policy profile.
Groups will have names of the form ".POLICY_PROFILE", where POLICY_PROFILE is replaced with the policy
profile description as defined on the Policy Profile Master policy in Policy Editor. Only creates a group if no
duplicate is found. A call to recalculateAssignmentGroups() is required to populate the assignment groups.
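A minimal sketch of this call, including the required recalculation step:
var parameters = new AssignmentGroupParameters();
assignmentGroupAPI.createPolicyProfileGroups(parameters);
// Membership is not populated until the groups are recalculated
assignmentGroupAPI.recalculateAssignmentGroups(true);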
indicated database field. If no prefix is provided, then the group name will just be of the form "VALUE".
These groups will not be automatically delegated to any users except for those with the Administrator role
specified on the System Setup policy. Only creates a group if no duplicate is found. A call to
recalculateAssignmentGroups() is required to populate the assignment groups.
getAssignmentGroupsForUser(loginId, asOfDate)
Returns information about all of the assignment groups that are delegated to the user with the indicated
login ID on the specified as-of date.
addGroupRight(loginId, displayId, groupRole, allowRedelegation)
Queues a group right to be delegated to the indicated user, associated with the API's process name, which
will allow only the rights that were delegated through a specific job to be updated. If called multiple times
for the same loginId and displayId, the groupRole that will ultimately be applied will be the latest one
provided. The allowRedelegation parameter will default to true if not specified.
commitGroupRights(assignDelegator, delegatorLoginId)
Commits the changes to the group rights that have been indicated by calls to addGroupRight(). This will
perform the following changes to the group rights:
1. If the user does not already have rights to the indicated group, they will be delegated access to that
group.
2. If the user is already delegated access to the group, with the same group role, then that delegation
will remain unchanged.
3. If the user is already delegated access to the group, but with a different group role, then the group
role associated with the delegation will be updated to reflect the new group role.
4. If a user was previously delegated a group, but that delegation is not in the collection of queued
rights specified by calls to addGroupRight(), then that existing delegation will be revoked.
Only delegations associated with the process name specified for the API will be updated during this
processing. Assignment groups will be recalculated automatically after this operation is performed.
getAssignmentsInGroup(displayId, activeOnly)
Returns the array of assignment IDs contained in the assignment group with the specified display ID.
Terminated assignments will be included in the array if activeOnly is specified as false, otherwise only the
active assignments will be included.
replaceAssignmentsInGroup(displayId, assignmentIds)
Replaces the membership of the assignment group with the specified display ID with the provided collection
of assignments. This method only applies to MANUAL assignment groups, and replaces existing assignments
regardless of active status.
addAssignmentToGroup(displayId, assignmentId)
Adds the specified assignment to the indicated assignment group, if that assignment is not already included
in that group's membership. This method only applies to MANUAL assignment groups.
delegateAssignmentGroupRightViaHierarchy (parms)
This function takes an object as an argument with some required and optional parameters. The parameters
are as follows:
Required:
• supervisorField - Column name in the ASGNMT table pointing to the manager/supervisor ID.
• supervisorMatchID - Column name in the ASGNMT or EMPLOYEE table to which the supervisor ID
(supervisorField) maps.
• descriptionFunction - Description function used to evaluate the group description. This function
receives the lower manager's employee object as an argument.
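A sketch of a descriptionFunction follows; the employee field it references is an assumption.
// Receives the lower manager's employee object and returns the description
// to use for that manager's assignment group
var descriptionFunction = function(employee) {
    // display_employee is assumed to hold the manager's display ID
    return "Direct reports of " + employee.display_employee;
};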
Prerequisites
To use this API, you should be familiar with the following functionality:
• Basic JavaScript coding
Components
This API consists of the following component(s):
• The BADGE_DATA_API Distributed JavaScript Library
Setup
No setup is necessary for the Badge Data API. The distributed library is automatically available in WT&A.
Use Cases
Finding Badges by ID and Group
The following script examples demonstrate how badge data can be retrieved by badge ID and badge group.
var apiParams = {
enableDebugLogging : true
};
// get badge
var badge = badgeDataAPI.findBadge(methodParams);
var apiParams = {
enableDebugLogging : true
};
//get badge
var badge = badgeDataAPI.findBadge(methodParams);
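The excerpts above omit the methodParams definition. A minimal sketch for findBadge, using the required parameters documented in the API Reference below (the values are illustrative):
includeDistributedPolicy("BADGE_DATA_API");
var badgeDataAPI = new BadgeDataAPI({ enableDebugLogging: true });
// badgeId and badgeGroup are both required for findBadge
var methodParams = {
    badgeId: "ABC123",
    badgeGroup: "PROXIMITY_CARDS"
};
var badge = badgeDataAPI.findBadge(methodParams);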
var apiParams = {
enableDebugLogging : true
};
//Set parameters
var methodParams = {
effDateTime: WFSDateTime.today
};
// get badges
var badge = badgeDataAPI.findBadges(methodParams);
var apiParams = {
enableDebugLogging : true
};
//Set desired parameters and remove rest. All parameters are optional.
var methodParams = {
badgeId: "ABC123",
badgeGroup: "PROXIMITY_CARDS",
employeeId: "21345", // Display_EMPLOYEE
dataSourceType: 1, // MANUAL_ENTRY
effDateTime: WFSDateTime.today
};
// get badges
var badge = badgeDataAPI.findBadges(methodParams);
var apiParams = {
enableDebugLogging : true
};
// get badges
var badge = badgeDataAPI.findBadges();
Example 1: Finding badges within a given date-time range, without optional parameters
This example shows how to retrieve badges within a date-time range.
includeDistributedPolicy("BADGE_DATA_API");
var apiParams = {
enableDebugLogging : true
};
// get badges
var badge = badgeDataAPI.findBadgesBetweenEffDateTimeRange(methodParams);
Example 2: Finding badges within a given date-time range, with optional parameters
This example shows how to retrieve badges within a date-time range, with optional parameters.
includeDistributedPolicy("BADGE_DATA_API");
var apiParams = {
enableDebugLogging : true
};
// get badges
var badge = badgeDataAPI.findBadgesBetweenEffDateTimeRange(methodParams);
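Both examples above omit the methodParams definition. A sketch of the range parameters follows; startDateTime and endDateTime are assumed to be WFSDateTime values marking the range endpoints.
var methodParams = {
    startEffDateTime: startDateTime,  // required
    endEffDateTime: endDateTime,      // required
    badgeGroup: "PROXIMITY_CARDS",    // optional filter
    onlyIncludeSubRanges: true        // only badges wholly inside the range
};
var badge = badgeDataAPI.findBadgesBetweenEffDateTimeRange(methodParams);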
Troubleshooting
The job log of the script using the Badge Data API will include informational messages, and in the case of
problems, error messages. This job log should be reviewed if there are problems using the API.
The following table lists some common error messages, their causes, and the solution.
Error Message: Required parameter is not provided or null
Cause: Required parameter(s) are missing or null in the findBadge or findBadgesBetweenEffDateTimeRange method.
Solution: Provide the required parameters for the method.

Error Message: Value is not a valid value for choice
Cause: The provided data source type value is not a valid value for the API method.
Solution: Provide a valid value for the data source type: MANUAL_ENTRY or IMPORTED, or their integer values 1 or 2 respectively.
Table 25: BADGE DATA API common error messages, causes, and solutions
API Reference
Knowledge of JavaScript programming is necessary to make best use of this section.
BadgeDataAPI
The following methods are available for the Badge Data API.
BadgeDataAPI (params)
Constructor to create a new instance of the API. Takes an object as parameter, which includes the parameter
defined in the following table.
Parameter Description
enableDebugLogging True if debug content should be written to the log, false if not
findBadge (params)
Retrieves a badge by ID and group. This method has some required and optional parameters, where the
optional parameters add additional criteria to get the desired badge. Takes an object as a parameter, which
includes the parameters defined in the following table.
Parameter Description
badgeId (required) ID for the badge that needs to be retrieved
badgeGroup (required) Badge group name, to get only a badge of that group
dataSourceType (optional) Data source type, to add additional criteria for retrieving the badge
effDateTime (optional) Date-time, to add additional criteria to get only the badge active as of the given date-time
findBadges (params)
Retrieves badge(s) matching the provided parameters. All of this method's parameters are optional, and it
retrieves badges according to whichever parameters are provided; if no parameter is provided, it retrieves all
badges from the database. Takes an object as a parameter, which includes the parameters defined in the
following table.
Parameter Description
badgeId (optional) ID for the badge that needs to be retrieved
badgeGroup (optional) Badge group name, to get only badges of that group
employeeId (optional) Display employee ID whose badges need to be retrieved
dataSourceType (optional) Data source type, to add additional criteria for retrieving badges
effDateTime (optional) Date-time, to add additional criteria to get only badges active as of the given date-time
findBadgesBetweenEffDateTimeRange (params)
Retrieves badge(s) within the provided date-time range. This method has some required and optional
parameters, where the optional parameters add additional criteria to get the desired badges. Takes an object
as a parameter, which includes the parameters defined in the following table.
Parameter Description
startEffDateTime (required) Start date-time for the date-time range
endEffDateTime (required) End date-time for the date-time range
badgeId (optional) ID for the badge that needs to be retrieved
badgeGroup (optional) Badge group name, to get only badges of that group
employeeId (optional) Display employee ID whose badges need to be retrieved
dataSourceType (optional) Data source type, to add additional criteria for retrieving badges
onlyIncludeSubRanges (optional) True if only badges within a sub-range of the given range are required, false if badges overlapping any portion of the range are required. False by default.
Prerequisites
To use this API, you should be familiar with the following functionality:
• Basic JavaScript coding experience
• Basic understanding of Badges within WorkForce Time and Attendance and how they work with
clock processing
Components
This API consists of the following component(s):
1. The BADGE_IMPORT_LIBRARY Distributed JavaScript library
2. The Badge Group policies defined within the configuration
Setup
No setup is necessary for the Badge Import API. The distributed library is automatically available within
WorkForce Time and Attendance, and the API will automatically generate the appropriate Badge Group
policies as needed.
Use Cases
Importing Badge Records from HR Data
The most common use for the Badge Import API is for loading Badge information that was included as part of
the main HR data. The Badge information is loaded onto the employee or assignment records by the main
Employee Import process, which defines the associations between employee and Badge ID as well as the
effective dating for that association, and then the Badge Import API transfers that information into the Badge
table.
When the Badge information is being imported out of the HR data, the API always looks at the
employee/assignment data that is effective on the current system date. If the employee is not active on that
date then the Badge data is considered to be terminated (which will end date the existing Badge record),
otherwise it is treated as active.
The following script example demonstrates the most basic case for importing Badge data using the HR data
as the source. It defines a Badge Group named “ALL_BADGES”, and then uses the information defined in the
assignment’s IMPORTED_BADGE_ID field to identify the Badge ID for the employee:
// Define the general settings for the Badge Import. In this case, the
// default settings will all be used.
var importParameters = {
updateBadges: true,
checkForDuplicates: true,
overrideManualChanges: false,
allowOneBadgePerGroup: true
};
// Create a new instance of the Badge Import API, using the defined settings.
// This will be used for processing the source data.
var api = new BadgeImport(importParameters);
// Add the source definition to the API, defining the WorkForce Time and
// Attendance HR data as the data source
api.addEmpCenterSource(sourceSettings);
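The sourceSettings object used above is not shown in this excerpt. The following is an illustrative sketch based on the surrounding description; the exact parameter names accepted by addEmpCenterSource are assumptions (badgeIdField is referenced later in this chapter).
var sourceSettings = {
    // Badge Group that the imported Badges will be placed in
    badgeGroup: "ALL_BADGES",
    // Assignment field holding the Badge ID loaded by the Employee Import
    badgeIdField: "imported_badge_id",
    badgeIdType: "Alpha-Numeric"
};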
Note: While Badges support being defined as either numeric or alpha-numeric, an alpha-numeric Badge can
store an ID that is all numbers. To avoid issues with leading zeros being dropped, and to ensure
compatibility with all possible Badge ID values, it is recommended to always define Badges as alpha-
numeric.
// Define the general settings for the Badge Import. In this case, the
// default settings will all be used.
var importParameters = {
updateBadges: true,
checkForDuplicates: true,
overrideManualChanges: false,
allowOneBadgePerGroup: true
};
// Create a new instance of the Badge Import API, using the defined settings.
// This will be used for processing the source data.
var api = new BadgeImport(importParameters);
// Specify the criteria that must be met in order for a badge to be created
criteria: "a.policy_profile = 'BADGE_PROFILE'"
};
// Add the source definition to the API, defining the WorkForce Time and
// Attendance HR data as the data source
api.addEmpCenterSource(sourceSettings);
Note: If an employee had previously met the criteria for being imported, and had a badge created, and now
due to a data change no longer meets the criteria for being imported, then their existing badge will be
deactivated the next time the Badge Import is run.
Example 3: Modifying the value in the field used as the import source
Sometimes the Badge ID supplied in the HR data is not quite sufficient on its own to use as the actual ID for
the Badge records. For instance, maybe the customer has multiple locations that each have their own pool of
Badge IDs. In this case, it is possible that the same Badge ID could end up being assigned to different
employees working out of different locations. In order to ensure that the Badge IDs loaded into the Badge
table are unique (so that the terminals can correctly resolve which employee the Badge belongs to), a prefix
denoting which location the badge belongs to can be added to the beginning of the Badge ID supplied in the
HR data.
When using the HR data as the data source for the import, the Badge Import API supports the use of SQL in
the badgeIdField definition, which allows the value to be modified as needed. The following script example
demonstrates adding a prefix based on the location an employee works at to the Badge ID being imported:
includeDistributedPolicy("BADGE_IMPORT_LIBRARY");
// Define the general settings for the Badge Import. In this case, the
// default settings will all be used.
var importParameters = {
updateBadges: true,
checkForDuplicates: true,
overrideManualChanges: false,
allowOneBadgePerGroup: true
};
// Create a new instance of the Badge Import API, using the defined settings.
// This will be used for processing the source data.
var api = new BadgeImport(importParameters);
// Add the source definitions to the API, defining the WorkForce Time and
// Attendance HR data as the data source
api.addEmpCenterSource(location1Settings);
api.addEmpCenterSource(location2Settings);
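The location-specific settings objects above are not shown in this excerpt. An illustrative sketch of one of them, combining the SQL-capable badgeIdField described above with the criteria parameter shown in the earlier example (the SQL expressions and field names are assumptions):
var location1Settings = {
    badgeGroup: "LOCATION1_BADGES",
    // Prepend the location code to the imported badge ID
    badgeIdField: "'L1' || a.imported_badge_id",
    // Only process assignments belonging to this location
    criteria: "a.location = 'LOCATION1'"
};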
// Define the general settings for the Badge Import. In this case, the
// default settings will all be used.
var importParameters = {
updateBadges: true,
checkForDuplicates: true,
overrideManualChanges: false,
allowOneBadgePerGroup: true
};
// Create a new instance of the Badge Import API, using the defined settings.
// This will be used for processing the source data.
var api = new BadgeImport(importParameters);
// Add the source definition to the API, defining the WorkForce Time and
// Attendance HR data as the data source
api.addEmpCenterSource(sourceSettings);
api.importBadges();
// Define the general settings for the Badge Import. In this case, the
// default settings will all be used.
var importParameters = {
updateBadges: true,
checkForDuplicates: true,
overrideManualChanges: false,
allowOneBadgePerGroup: true
};
// Create a new instance of the Badge Import API, using the defined settings.
// This will be used for processing the source data.
var api = new BadgeImport(importParameters);
// Add the source definition to the API, defining the WorkForce Time and
// Attendance HR data as the data source
api.addEmpCenterSource(sourceSettings);
// Hypothetical function signature: receives the location value from the
// source data and returns the badge group name
var badgeGroupFunction = function(location) {
// Define the badge group as a common prefix plus the first three characters of
// the location
var badgeGroup = "PRE" + location.substr(0, 3);
return badgeGroup;
};
// Define the general settings for the Badge Import. In this case, the
// default settings will all be used.
var importParameters = {
updateBadges: true,
checkForDuplicates: true,
overrideManualChanges: false,
allowOneBadgePerGroup: true
};
// Create a new instance of the Badge Import API, using the defined settings.
// This will be used for processing the source data.
var api = new BadgeImport(importParameters);
var sourceSettings = {
// The file containing the badge data that should be processed
sourceFile: "interface/incoming/badge_data.csv"
};
// Add the source definition to the API, defining the CSV file as the data source
api.addCsvSource(sourceSettings);
// Define the general settings for the Badge Import. In this case, the
// default settings will all be used.
var importParameters = {
updateBadges: true,
checkForDuplicates: true,
overrideManualChanges: false,
allowOneBadgePerGroup: true
};
// Create a new instance of the Badge Import API, using the defined settings.
// This will be used for processing the source data.
var api = new BadgeImport(importParameters);
sourceFile: "interface/incoming/badge_data.csv",
// All recurrences of this string will be removed from the beginning of the badge
// ID. This will remove the leading Es on the value in this case.
trimString: "E",
// Add the source definition to the API, defining the CSV file as the data source
api.addCsvSource(sourceSettings);
// Define the general settings for the Badge Import. In this case, the
// default settings will all be used.
var importParameters = {
updateBadges: true,
checkForDuplicates: true,
overrideManualChanges: false,
allowOneBadgePerGroup: true
};
// Create a new instance of the Badge Import API, using the defined settings.
// This will be used for processing the source data.
var api = new BadgeImport(importParameters);
// Add the source definition to the API, defining the CSV file as the data source
api.addCsvSource(sourceSettings);
// Define the general settings for the Badge Import. In this case, the
// default settings will all be used.
var importParameters = {
updateBadges: true,
checkForDuplicates: true,
overrideManualChanges: false,
allowOneBadgePerGroup: true
};
// Create a new instance of the Badge Import API, using the defined settings.
// This will be used for processing the source data.
var api = new BadgeImport(importParameters);
// Define the reclassification map. In this case, we're assuming that the badges
// are being grouped by state and that the source file contains the abbreviation
// for the state while the actual badge group should be the full state name.
var reclassificationMap = {
MI: "MICHIGAN",
OH: "OHIO",
IN: "INDIANA"
};
var sourceSettings = {
badgeGroupReclassificationMap: reclassificationMap,
// Specify the default badge group to use if the reclassification map does
// not have an entry for the provided badge group value
defaultBadgeGroupReclassification: "UNKNOWN_STATE"
};
// Add the source definition to the API, defining the CSV file as the data source
api.addCsvSource(sourceSettings);
// Define the general settings for the Badge Import. In this case, the
// default settings will all be used.
var importParameters = {
updateBadges: true,
checkForDuplicates: true,
overrideManualChanges: false,
allowOneBadgePerGroup: true
};
// Create a new instance of the Badge Import API, using the defined settings.
// Add the source definition to the API, defining the CSV file as the data source
api.addCsvSource(sourceSettings);
// Hypothetical function signature: receives the location value from the
// source data and returns the badge group name
var badgeGroupFunction = function(location) {
// Define the badge group as a common prefix plus the first three characters of
// the location
var badgeGroup = "PRE" + location.substr(0, 3);
return badgeGroup;
};
// Define the general settings for the Badge Import.
// Create a new instance of the Badge Import API, using the defined settings.
// This will be used for processing the source data.
var api = new BadgeImport(importParameters);
// Add the source definition to the API, defining the CSV file as the data source
api.addCsvSource(sourceSettings);
// Define the general settings for the Badge Import. In this case, the
// default settings will all be used.
var importParameters = {
updateBadges: true,
checkForDuplicates: true,
overrideManualChanges: false,
allowOneBadgePerGroup: true
};
// Create a new instance of the Badge Import API, using the defined settings.
// This will be used for processing the source data.
var api = new BadgeImport(importParameters);
// Add the source definition to the API, defining the CSV file as the data source
api.addCsvSource(sourceSettings);
The following script example demonstrates using the Badge Import API to load Badge data from a fixed-width
file named “badge_data.txt”. The Badges will be placed in the Badge Group “ALL_BADGES”:
// Define the general settings for the Badge Import. In this case, the
// default settings will all be used.
var importParameters = {
updateBadges: true,
checkForDuplicates: true,
overrideManualChanges: false,
allowOneBadgePerGroup: true
};
// Create a new instance of the Badge Import API, using the defined settings.
// This will be used for processing the source data.
var api = new BadgeImport(importParameters);
// Define which fields are present in the fixed-width file, their starting
// positions, and their length.
var fields = new FixedWidthFields();
fields.setEmployeeId(0, 10); // The employee ID is the first 10 characters
fields.setBadge(10, 7); // The badge ID is the 7 characters after that
// Add the source definition to the API, defining the fixed-width file as the
// data source
api.addFixedWidthSource(sourceSettings);
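The sourceSettings for a fixed-width source combine the file name with the FixedWidthFields definition, per the addFixedWidthSource parameters documented below. An illustrative sketch:
var sourceSettings = {
    sourceFile: "interface/incoming/badge_data.txt",
    fixedWidthFields: fields,
    badgeGroup: "ALL_BADGES"
};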
Example 2: Importing data from a fixed-width file while validating the file character set
includeDistributedPolicy("BADGE_IMPORT_LIBRARY");
// Define the general settings for the Badge Import. In this case, the
// default settings will all be used.
var importParameters = {
updateBadges: true,
checkForDuplicates: true,
overrideManualChanges: false,
allowOneBadgePerGroup: true
};
// Create a new instance of the Badge Import API, using the defined settings.
// This will be used for processing the source data.
var api = new BadgeImport(importParameters);
// Define which fields are present in the fixed-width file, their starting
// positions, and their length.
var fields = new FixedWidthFields();
fields.setEmployeeId(0, 10); // The employee ID is the first 10 characters
fields.setBadge(10, 7); // The badge ID is the 7 characters after that
// Add the source definition to the API, defining the fixed-width file as the
// data source
api.addFixedWidthSource(sourceSettings);
// Define the general settings for the Badge Import.
// Create a new instance of the Badge Import API, using the defined settings.
// This will be used for processing the source data.
var api = new BadgeImport(importParameters);
// Add the source definition to the API, defining the SQL query as the data source
api.addSqlSource(sourceSettings);
Troubleshooting
The job log of the script using the Badge Import API will contain information messages and, in the case of
problems, any error messages generated during processing. This job log should be reviewed if there are any
problems encountered while using the API.
Error Message: Source file doesn’t exist
Cause: The specified file containing the source data for a CSV or fixed-width source could not be found in the specified directory.
Solution: Ensure that the specified source data file exists in the indicated directory and is able to be read by WorkForce Time and Attendance.
Table 26: Badge Import API common error messages, causes, and solutions
API Reference
Knowledge of JavaScript programming is necessary to make best use of this section.
The Badge Import API uses the BadgeImport object to import Badge data from a data source. It also uses
FixedWidthFields objects, which define information about the fields in a fixed-width file.
The following is a summary of the available methods and common uses:
BadgeImport
BadgeImport(importParameters)
Creates a new instance of the Badge Import API, which can be used for importing Badge and Badge Group
data.
The following parameters can be specified when creating a new instance of this object:
Parameter Name Description
updateBadges Determines whether Badge records allow updates, or whether they will be deleted and replaced. Defaults to allowing updates to Badge records.
checkForDuplicates Determines whether the import process will check for duplicate Badge data or not. A Badge is considered a duplicate if it shares the same Badge ID as another Badge, the effective dates overlap, and the Badges belong to different employees or different Badge Groups. Defaults to checking for duplicates.
overrideManualChanges Determines whether the import will allow a Badge to be updated when it has been manually edited. Defaults to not allowing updates to manually-edited records.
allowOneBadgePerGroup Determines whether the import will enforce that an employee only has a single Badge within a Badge Group, ending their previous Badge if a new one is imported in the same Badge Group. Defaults to only allowing one Badge per employee in a Badge Group.
Table 27: Allowed parameters when creating a new BadgeImport
addEmpCenterSource(sourceParameters)
Adds a new source of Badge data to the API, specifying that Badge data from the HR data was already
imported into the employee/assignment tables. This will be used to drive the import behavior when Badge
data is ultimately imported.
addSqlSource(sourceParameters)
Adds a new source of Badge data to the API, specifying that Badge data should be queried using a custom
query. This will be used to drive the import behavior when Badge data is ultimately imported.
The following parameters can be specified when calling this method:
Parameter Name Description
externalQuery Query to use to select badge information
dbConnectionPolicy DB Connection Info policy defining the database
connection to use to execute the query.
fieldNames JS object mapping, which maps from the expected
data component to the field in the source query
that contains values for that component.
Components that can be specified include
EMPLOYEE_ID, BADGE_GROUP, BADGE, EFF_DATE,
END_EFF_DATE, and STATUS_CODE.
badgeGroup Badge Group to add the badges to. Only used if
BADGE_GROUP is not defined in fieldNames.
badgeGroupFunction JavaScript function that calculates the Badge Group to add the badge to, based on the source data.
badgeIdType The type of Badges being imported (either Alpha-
Numeric or Numeric).
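An illustrative sketch of a SQL source definition using these parameters (the query, column names, and policy name are assumptions):
var sourceSettings = {
    externalQuery: "SELECT emp_id, badge_id FROM hr_badge_data",
    dbConnectionPolicy: "HR_DATABASE",
    // Map the expected data components to the query's column names
    fieldNames: {
        EMPLOYEE_ID: "emp_id",
        BADGE: "badge_id"
    },
    badgeGroup: "ALL_BADGES",
    badgeIdType: "Alpha-Numeric"
};
api.addSqlSource(sourceSettings);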
addCsvSource(sourceParameters)
Adds a new source of Badge data to the API, specifying that Badge data should be read from a CSV file. This
will be used to drive the import behavior when Badge data is ultimately imported.
The following parameters can be specified when calling this method:
Parameter Name Description
sourceFile The fully-qualified file name for the file containing
the Badge data to be imported.
fieldNames JS object mapping, which maps from the expected
data component to the field in the source data that
contains values for that component. Components
that can be specified include EMPLOYEE_ID,
BADGE_GROUP, BADGE, EFF_DATE,
END_EFF_DATE, and STATUS_CODE.
badgeGroup Badge Group to add the badges to. Only used if
BADGE_GROUP is not defined in fieldNames.
badgeGroupFunction JavaScript function that calculates the Badge Group to add the badge to, based on the source data.
badgeIdType The type of Badges being imported (either Alpha-
Numeric or Numeric).
badgeGroupReclassificationMap JS object mapping that maps Badge Group values in
the source data to the actual Badge Group policy
name that should be used.
defaultBadgeGroupReclassification Default Badge Group that should be assigned if the
Badge Group specified in the source data does not
match any mappings in the
badgeGroupReclassificationMap.
trimString Character or string of characters that should be
recursively removed from the beginning of the
Badge ID.
comments Value to be added to the comments field of all
Badge records created.
addFixedWidthSource(sourceParameters)
Adds a new source of Badge data to the API, specifying that Badge data should be read from a fixed-width
file. This will be used to drive the import behavior when Badge data is ultimately imported.
The following parameters can be specified when calling this method:
Parameter Name Description
sourceFile The fully-qualified file name for the file containing
the Badge data to be imported.
fixedWidthFields FixedWidthFields object defining the fields in the
file, their sizes, and relative positions.
badgeGroup Badge Group to add the badges to. Only used if
BADGE_GROUP is not defined in fixedWidthFields.
badgeGroupFunction JavaScript function that calculates the Badge Group to add the badge to, based on the source data.
badgeIdType The type of Badges being imported (either Alpha-
Numeric or Numeric).
badgeGroupReclassificationMap JS object mapping that maps Badge Group values in
the source data to the actual Badge Group policy
name that should be used.
defaultBadgeGroupReclassification Default Badge Group that should be assigned if the
Badge Group specified in the source data does not
match any mappings in the
badgeGroupReclassificationMap.
trimString Character or string of characters that should be
recursively removed from the beginning of the
Badge ID.
comments Value to be added to the comments field of all
Badge records created.
dateFormat Format of dates found in the source fields.
encrypted Boolean to specify if the source file is encrypted.
addWebserviceSource(sourceParameters)
Adds a new source of Badge data to the API, specifying that Badge data should be read from the payload of a
web service call. This will be used to drive the import behavior when Badge data is ultimately imported.
The following parameters can be specified when calling this method:
Parameter Name Description
badgeData The WebserviceBadgeList object holding the data
from the webservice payload.
badgeGroup Badge Group to add the badges to. Only used if
BADGE_GROUP is not defined in the source data.
badgeGroupFunction JavaScript function that calculates the Badge Group to add the badge to, based on the source data.
badgeIdType The type of Badges being imported (either Alpha-
Numeric or Numeric).
badgeGroupReclassificationMap JS object mapping that maps Badge Group values
in the source data to the actual Badge Group
policy name that should be used.
defaultBadgeGroupReclassification Default Badge Group that should be assigned if the
Badge Group specified in the source data does not
match any mappings in the
badgeGroupReclassificationMap.
trimString Character or string of characters that should be
recursively removed from the beginning of the
Badge ID.
comments Value to be added to the comments field of all
Badge records created.
Table 32: Allowed parameters when creating new web-service-based source
importBadges()
Imports Badge data using all of the defined Badge sources, in the order in which they were defined.
FixedWidthFields
FixedWidthFields
Defines a new object to store information about field widths and positions within a fixed-width file.
setEmployeeId(start, width)
Specifies the width of the Employee ID field in the file and the column position where the data begins.
setBadgeGroup(start, width)
Specifies the width of the Badge Group field in the file and the column position where the data begins.
setBadge(start, width)
Specifies the width of the Badge field in the file and the column position where the data begins.
setEffDate(start, width)
Specifies the width of the Eff Date field in the file and the column position where the data begins.
setEndEffDate(start, width)
Specifies the width of the End Eff Date field in the file and the column position where the data begins.
setStatusCode(start, width)
Specifies the width of the Status Code field in the file and the column position where the data begins.
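To illustrate how these methods fit together, the following sketch defines a fixed-width layout and registers it as a Badge source. The api variable stands for an already-created Badge Import API instance, and the file path, Badge Group policy name, and column positions are illustrative assumptions:
// Define the layout of the fixed-width badge file: for each field, the column
// where the data begins and its width (values here are illustrative)
var fields = new FixedWidthFields();
fields.setEmployeeId(1, 10);
fields.setBadge(11, 12);
fields.setEffDate(23, 10);
// Register the fixed-width file as a source of Badge data
api.addFixedWidthSource({
sourceFile: "/data/badges.txt",
fixedWidthFields: fields,
badgeGroup: "MAIN_BADGE_GROUP",
badgeIdType: "Numeric",
dateFormat: "yyyy-MM-dd"
});
// Import Badge data from all defined sources, in the order they were defined
api.importBadges();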
Bank Export API
Prerequisites
To use this API, you should be familiar with the following functionality:
• Basic JavaScript coding
• WorkForce Time and Attendance banks and accruals/usage
Components
This API consists of the following component(s):
• The distributed JavaScript library BANK_EXPORT_API
Setup
No setup is necessary for the Bank Export API. The distributed library is automatically available within
WorkForce Time and Attendance.
Use Cases
Exporting Data Using the Standard CSV Adapter
The Bank Export API includes a standard Adapter that can be used to write data to a CSV file. With this
Adapter, data is mapped from the EMPLOYEE, ASGNMT, and BANK_EXPORT tables by means of a Decoder
policy. The field names specified for the destination in the Decoder will be used as the field names that are
created in the file, if a header row is being written.
This Adapter supports the following additional Export Control settings (beyond the standard ones supported
by all Adapters):
Property Name Required Description
filePath Y The directory in which the export file should be created.
fileName Y The name of the export file that should be created.
createHeader N True if a header row should be written to the file, false if not. Defaults to true if not specified.
delimiter N The delimiter that should be placed between fields in the CSV file. Defaults to a comma (,) if not specified.
decoder Y The name of the Decoder policy that should be used to map the data to the output file.
decoderDestination N The name of the column in the Decoder policy that specifies the export file mappings. Defaults to DESTINATION if not specified.
Table 33: Additional Export Control settings for the standard CSV Adapter
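The example Decoder used here is not reproduced, but an illustrative Decoder Text with a DESTINATION column producing the header row below might look like this (the source-side table and field names are assumptions):
SOURCE,DESTINATION
EMPLOYEE.DISPLAY_EMPLOYEE,EmployeeID
ASGNMT.POLICY_PROFILE,PolicyProfile
BANK_EXPORT.BANK,BankName
BANK_EXPORT.BALANCE,Balance
BANK_EXPORT.AS_OF_DATE,Date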
With that Decoder, the resulting file would include the header row:
EmployeeID,PolicyProfile,BankName,Balance,Date
The order of the fields in the Decoder determines the order in which the fields will be generated in the file.
// Create a new instance of the API, using the desired adapter and export control
// settings
var api = new BankExportAPI(adapter, exportControl);
// Define the Export Control with a custom recordFilter hook that only exports
// a record when the employee, assignment, bank, or balance has changed
// (the object opening is reconstructed here, and the hook argument names are assumed)
var priorData = {};
var exportControl = {
recordFilter: function(employee, asgnmt, bankData) {
if (priorData.employee != employee.employee ||
priorData.asgnmt != asgnmt.asgnmt ||
priorData.bank != bankData.bank ||
priorData.balance != bankData.balance) {
priorData = {
employee: employee.employee,
asgnmt: asgnmt.asgnmt,
bank: bankData.bank,
balance: bankData.balance
};
return true;
}
return false;
},
// Custom recordMapping function. Only exports a single decimal place for the
// bank balance, instead of the full precision
recordMapping: function(record, destination, employee, asgnmt, bankData) {
record.balance = new WFSDecimal(record.balance).toString("0.#"); }
};
// Create a new instance of the API, using the desired adapter and export control
// settings
var api = new BankExportAPI(adapter, exportControl);
// Define a custom comparator to define how the employees will be sorted in the
// export file. In this case, employees will be sorted in alphabetical order by
// their display_employee value
// (Export Control object opening reconstructed)
var exportControl = {
employeeComparator: BankExportAPI.createComparator(function(first, second) {
if (first.display_employee == second.display_employee) {
return 0;
}
if (first.display_employee > second.display_employee) {
return 1;
}
return -1;
}),
// Define a custom comparator to define how the banks will be sorted in the
// export file. In this case, banks will be sorted in reverse alphabetical order
bankComparator: BankExportAPI.createComparator(function(first, second) {
var firstStr = "" + first;
var secondStr = "" + second;
if (firstStr == secondStr) {
return 0;
}
if (firstStr > secondStr) {
return -1;
}
return 1;
})
};
// Create a new instance of the API, using the desired adapter and export control
// settings
var api = new BankExportAPI(adapter, exportControl);
// Define the output destination that should be used to write the data to the LD
// table. In this case, the data will be written to the LD7 table and will have two
// key fields
var outputDestination = BankExportAPI.getLdTableOutputDestination(7, 2);
// Define the custom adapter. This adapter will only export data for employees in
// the EXEMPT policy profile, storing the data to be exported on records in the LD
// table.
var adapter = BankExportAPI.createNewAdapter({
// Adapter method to write the fully-mapped bank export data to the output
// destination
recordProcessing: function(data) {
var ldRec = new DbRec("LD_STAGE");
ldRec.ld2 = data.ld2;
ldRec.ld1 = data.ld1;
ldRec.other_date1 = data.other_date1;
ldRec.other_number1 = data.other_number1;
ldRec.other_number2 = data.other_number2;
ldRec.other_number3 = data.other_number3;
ldRec.other_number4 = data.other_number4;
ldRec.other_number5 = data.other_number5;
ldRec.other_number6 = data.other_number6;
outputDestination.writeData(ldRec);
},
// Adapter method to cleanup the output destination. Will process the newly
// written LD_STAGE records through to the actual table.
cleanupOutputDestination: function() {
outputDestination.close();
}
});
// Create a new instance of the API, using the desired adapter and export control
// settings
var api = new BankExportAPI(adapter, exportControl);
// Define the output destination that should be used to write the data to the LD
// table. In this case, the data will be written to the LD7 table and will have two
// key fields
var outputDestination = BankExportAPI.createOutputDestination( {
// Collects the LD records as they are written; committed on close
// (list initialization reconstructed; a Java list is assumed)
dbRecList: new java.util.ArrayList(),
// Map the data from the export record mapping to the LD record
writeData: function(data) {
var record = new DbRec("LD7");
record.ld2 = data.ld2;
record.ld1 = data.ld1;
record.other_date1 = data.other_date1;
record.other_number1 = data.other_number1;
record.other_number2 = data.other_number2;
record.other_number3 = data.other_number3;
record.other_number4 = data.other_number4;
record.other_number5 = data.other_number5;
record.other_number6 = data.other_number6;
record.eff_dt = WFSDate.VERY_EARLY_DATE;
record.end_eff_dt = WFSDate.VERY_LATE_DATE;
this.dbRecList.add(record);
},
// Perform any cleanup necessary. This will commit the records to the LD table.
close: function() {
Packages.com.workforcesoftware.Util.DB.ListWriter.writeList(this.dbRecList);
}
});
// Define the custom adapter. This adapter will only export data for employees in
// the EXEMPT policy profile, storing the data to be exported on records in the LD
// table.
var adapter = BankExportAPI.createNewAdapter({
// Adapter method to write the fully-mapped bank export data to the output
// destination
recordProcessing: function(data) {
var ldRec = new DbRec("LD_STAGE");
ldRec.ld2 = data.ld2;
ldRec.ld1 = data.ld1;
ldRec.other_date1 = data.other_date1;
ldRec.other_number1 = data.other_number1;
ldRec.other_number2 = data.other_number2;
ldRec.other_number3 = data.other_number3;
ldRec.other_number4 = data.other_number4;
ldRec.other_number5 = data.other_number5;
ldRec.other_number6 = data.other_number6;
outputDestination.writeData(ldRec);
},
// Adapter method to cleanup the output destination. Will process the newly
// written LD_STAGE records through to the actual table.
cleanupOutputDestination: function() {
outputDestination.close();
}
});
// Create a new instance of the API, using the desired adapter and export control
// settings
var api = new BankExportAPI(adapter, exportControl);
// Define the output destination that should be used to write the data to the LD
// table. In this case, the data will be written to the LD7 table and will have two
// key fields
var outputDestination = BankExportAPI.getLdTableOutputDestination(7, 2);
// Define the custom adapter. This adapter will only export data for employees in
// the EXEMPT policy profile, storing the data to be exported on records in the LD
// table.
var adapter = BankExportAPI.createNewAdapter({
// Adapter method to map the calculated bank data onto the export record
// (the function opening is reconstructed, argument names are assumed, and the
// mappings for ld1, ld2, and other_date1 are elided here)
recordMapping: function(employee, asgnmt, bankData) {
var exportData = new BankExportData();
exportData.other_number1 = bankData.balance;
exportData.other_number2 = bankData.accrued;
exportData.other_number3 = bankData.used;
exportData.other_number4 = bankData.cleared;
exportData.other_number5 = bankData.terminated;
exportData.other_number6 = bankData.transferred;
return exportData;
},
// Adapter method to write the fully-mapped bank export data to the output
// destination
recordProcessing: function(data) {
var ldRec = new DbRec("LD_STAGE");
ldRec.ld2 = data.ld2;
ldRec.ld1 = data.ld1;
ldRec.other_date1 = data.other_date1;
ldRec.other_number1 = data.other_number1;
ldRec.other_number2 = data.other_number2;
ldRec.other_number3 = data.other_number3;
ldRec.other_number4 = data.other_number4;
ldRec.other_number5 = data.other_number5;
ldRec.other_number6 = data.other_number6;
outputDestination.writeData(ldRec);
},
// Adapter method to cleanup the output destination. Will process the newly
// written LD_STAGE records through to the actual table.
cleanupOutputDestination: function() {
outputDestination.close();
}
});
// Define the Export Control with a custom recordFilter hook that only exports
// a record when the employee, assignment, bank, or balance has changed
// (the object opening is reconstructed here, and the hook argument names are assumed)
var priorData = {};
var exportControl = {
recordFilter: function(employee, asgnmt, bankData) {
if (priorData.employee != employee.employee ||
priorData.asgnmt != asgnmt.asgnmt ||
priorData.bank != bankData.bank ||
priorData.balance != bankData.balance) {
priorData = {
employee: employee.employee,
asgnmt: asgnmt.asgnmt,
bank: bankData.bank,
balance: bankData.balance
};
return true;
}
return false;
}
};
// Create a new instance of the API, using the desired adapter and export control
// settings
var api = new BankExportAPI(adapter, exportControl);
Troubleshooting
The job log of the script using the Bank Export API will contain information messages and, in the case of
problems, any error messages generated during processing. This job log should be reviewed if there are any
problems encountered while using the API.
Error Message: Unable to identify assignments to process; no match condition or assignment group ID specified
Problem: The Export Control did not define either a MatchCondition to select assignments or an assignment group ID.
Solution: Ensure that either the assignment selection condition or an assignment group ID is specified in the Export Control settings whenever a new BankExportAPI object is instantiated.

Error Message: No assignment group found with display ID ASGNMT_GROUP_ID
Problem: The assignment group ID specified on the Export Control did not correspond to the display ID of any assignment groups that are currently defined.
Solution: Ensure that the assignment group ID specified in the Export Control settings matches the display ID of an assignment group that exists.

Error Message: Value provided for hook HOOK_NAME is expected to be of type Function. Instead it was of type TYPE
Problem: The value provided in the Export Control for one of the Custom Hooks had a datatype other than Function.
Solution: Ensure that all of the values provided in the Export Control settings for Custom Hooks are functions.

Error Message: Hook HOOK_NAME expects to have ARG_COUNT arguments, but instead NUMBER arguments were specified
Problem: The function definition provided in the Export Control for the specified Custom Hook included the wrong number of arguments.
Solution: Ensure that all Custom Hooks defined in the Export Control settings expect the right number of arguments for the type of Custom Hook being defined.

Error Message: No file path specified for the export destination
Problem: No value was provided for the filePath property on the Export Control when using the standard CSV Adapter.
Solution: Ensure that the filePath property is specified in the Export Control settings and that it points to the directory the file should be exported to.

Error Message: No file name specified for the export destination
Problem: No value was provided for the fileName property on the Export Control when using the standard CSV Adapter.
Solution: Ensure that the fileName property is specified in the Export Control settings and that it specifies the file name that the export data should be written to.

Error Message: No decoder policy specified to define export mapping
Problem: No value was provided for the decoder property on the Export Control when using the standard CSV Adapter.
Solution: Ensure that the decoder property is specified in the Export Control and that it names the Decoder policy that should be used to map the employee/assignment/bank data to the export file.

Error Message: Call made to writeData with a null value
Problem: writeData was called on the Output Destination with a null value for the data when using the standard LD Table Output Destination.
Solution: Ensure that writeData is always called with a non-null value when using the standard LD Table Output Destination.

Error Message: Call made to writeData with invalid object type. Object was expected to be of type com.workforcesoftware.Gen.Other.DbRec.DbRecLd_stage, but instead it was of type TYPE
Problem: writeData was called on the Output Destination with an invalid data type when using the standard LD Table Output Destination.
Solution: Ensure that writeData is always called with a DbRec for the LD_STAGE table when using the standard LD Table Output Destination.

Table 34: Bank Export API typical error messages, root problems, and solutions
API Reference
Knowledge of JavaScript programming is necessary to best make use of this section.
The Bank Export API makes use of:
1) The BankExportAdapter, which defines a set of behavior for exporting bank information.
2) The OutputDestination, which defines how data should be written out by the Adapter.
3) The BankExportData, which stores the data being mapped during the export process.
4) The BankExportAPI, which runs the actual export process and allows for custom BankExportAdapters
and OutputDestinations to be defined.
See the contents of the BANK_EXPORT_API policy in the Distributed JavaScript Library category for full
documentation on these methods. The following is a summary of the available methods and common uses:
BankExportAdapter
defineAdapterSettings(customerSettings)
Runs as the very first step of export processing, defining what the processing behavior for the Adapter should
be based on the customer-specific settings.
getOutputDestination()
Returns the Output Destination that is being used by the Adapter.
exportPreProcessing()
Defines the behavior that the export should execute after all initial setup is complete, but before executing
the step to calculate the relevant bank data and write it to the BANK_EXPORT table.
computationPostProcessing()
Defines the behavior that the export should execute after the relevant bank data has been written to the
BANK_EXPORT table but before anything has been queried from that table to be written out to the Output
Destination.
setupOutputDestination()
Performs any initial setup that is needed for the Output Destination, before any data is written to it. For
example, this could be used to make sure the target file exists and is ready for writing, and to write a header
row to that file.
employeePreProcessing(employee)
Runs at the beginning of processing a single employee. For example, this could be used to write an opening
tag for employee-specific data to an XML file.
assignmentPreProcessing(employee, asgnmt)
Runs at the beginning of processing one assignment for the employee. For example, this could be used to
write an opening tag for assignment-specific data to an XML file.
recordProcessing(data)
Performs any processing necessary for the final mapped record, such as writing it to the Output Destination.
assignmentPostProcessing(employee, asgnmt)
Runs at the end of processing one assignment for the employee. For example, this could be used to write a
closing tag for assignment-specific data to an XML file.
employeePostProcessing(employee)
Runs at the end of processing a single employee. For example, this could be used to write a closing tag for
employee-specific data to an XML file.
exportPostProcessing()
Runs at the end of the export, after all employees have been processed. Allows for any final processing to be
done to write data to the Output Destination, such as writing a footer record to a CSV file.
cleanupOutputDestination()
Performs any cleanup needed to close and free up any resources used by the Output Destination, such as
closing the file handle or committing database changes. This should almost always call the close() method
on the Output Destination.
OutputDestination
writeData(data)
Writes the specified data to the Output Destination, in whatever fashion is appropriate for the target
destination. For instance, for a CSV file this could write out each property defined in the data, separated by a
comma, to the target file.
close()
Performs any cleanup needed to close and free up any resources used by the Output Destination, such as
closing the file handle or committing database changes.
BankExportData
BankExportData(fieldsAllowed…)
Creates a new record to contain mapped export data for processing by the Adapter and Custom Hooks. This
should be created during the recordMapping stage of the Adapter, and can be further modified during the
recordMapping Custom Hook. It will then be passed to the Output Destination to complete the export of the
record.
If one or more allowed fields are specified, only properties corresponding to one of the named fields will be
allowed to be populated on the BankExportData object. This allows the Adapter to enforce that only specific
fields are being mapped, such as when a file is going to contain only a fixed set of fields.
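For instance, an Adapter whose destination contains a fixed set of columns might create its mapped record as in the following sketch; the field names, and the employee and bankData variables (taken from the surrounding Adapter method), are illustrative:
// Only the three named fields may be populated on this record
var record = new BankExportData("employeeId", "bank", "balance");
record.employeeId = employee.display_employee;
record.bank = bankData.bank;
record.balance = bankData.balance;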
BankExportAPI
BankExportAPI(adapter, exportControlSettings)
Creates a new instance of the Bank Export API, using the specified Adapter and Export Control settings.
The Export Control includes the following standard properties:
Property Name Description
banks Name of the Bank set that identifies which banks
should be processed by the export.
asgnmtCondition MatchCondition specifying which assignments
should be processed by the export.
asgnmtGroup The display ID of the assignment group containing
the assignments that should be processed by the
export. (If both asgnmtCondition and asgnmtGroup
are specified, the asgnmtCondition will be used to
select the assignments.)
employeeComparator Comparator used to define the order in which
employees should be processed by the export.
Table 35: Standard properties for the Export Control object
In addition to the standard properties, the following Custom Hooks can also be defined:
Hook Name Definition
preprocessing Allows custom logic to be executed after the export
is set up but before the step to write data to
BANK_EXPORT is performed.
postComputation Allows custom logic to be executed after the step to
write data to BANK_EXPORT is completed but
before doing any processing on the data that was
loaded into BANK_EXPORT.
employeePreProcessing Allows custom logic to be executed at the beginning
of processing each new employee.
asgnmtPreProcessing Allows custom logic to be executed at the beginning
of processing each new assignment for an
employee.
bankPreProcessing Allows custom logic to be executed at the beginning
of processing each new bank for an assignment.
recordFilter Allows custom logic to be defined to filter out bank
records that should not be exported. Cannot be
used to override the filter conditions defined by the
Adapter.
recordMapping Allows custom logic to be defined to update the
mapped data built by the Adapter before it is
processed to the Output Destination.
bankPostProcessing Allows custom logic to be executed at the end of
processing a bank for an assignment.
asgnmtPostProcessing Allows custom logic to be executed at the end of
processing an assignment for an employee.
employeePostProcessing Allows custom logic to be executed at the end of
processing a single employee.
postprocessing Allows custom logic to be executed at the end of
processing all employees.
Table 36: Custom Hooks that can be defined on the Export Control object
Additional settings beyond those listed in the above two tables can also be specified. These settings and
their use are defined by the specific Adapter being used.
exportBankData(startDate, endDate)
Executes the export process to evaluate all bank changes that fall within the specified start and end dates for
the applicable assignments. If an assignment doesn’t have a change to the bank on the specified start date, a
record will still be processed for that date containing the balance as of that date (with no other usage/accrual
associated with it).
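For example, a minimal export using the standard CSV Adapter might look like the following sketch; the Bank set, assignment group, Decoder policy name, directory, and dates are illustrative assumptions:
// Use the standard CSV Adapter (debugging disabled)
var adapter = BankExportAPI.getCSVAdapter(false);
// Define the Export Control settings for the standard CSV Adapter
var exportControl = {
banks: "PTO_BANKS",
asgnmtGroup: "ALL_EMPLOYEES",
filePath: "/export/banks",
fileName: "bank_balances.csv",
decoder: "BANK_EXPORT_DECODER"
};
var api = new BankExportAPI(adapter, exportControl);
// Evaluate bank changes for January 2018 and write them to the CSV file
api.exportBankData(WFSDate.valueOf("2018-01-01", "yyyy-MM-dd"),
WFSDate.valueOf("2018-01-31", "yyyy-MM-dd"));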
createNewAdapter(adapterDefinition)
Static function that creates a new custom Adapter using the provided definition. The Adapter definition
provided needs to be a JavaScript object that defines functions implementing some/all of the methods
defined in the BankExportAdapter section.
createOutputDestination(destinationDefinition)
Static function that creates a new custom Output Destination using the provided definition. The Output
Destination definition provided needs to be a JavaScript object that defines functions implementing some/all
of the methods defined in the OutputDestination section.
createComparator(comparisonFunction)
Static function that creates a new Comparator to use for sorting the employees, assignments, or banks that
are being processed by the export. The comparison function should accept two arguments and return -1 if
the first argument should come before the second argument, 1 if the first argument should come after the
second argument, or 0 if both arguments should be treated as having the same value.
getCSVAdapter(enableDebugging)
Returns the standard Adapter for writing to a CSV file.
getLdTableOutputDestination(tableNumber, keyFields)
Returns the standard OutputDestination for writing to the indicated LD table, treating the data as containing
the indicated number of key fields.
Bank Import Library
Prerequisites
To use this API, you should know the following:
• Basic JavaScript coding
Components
The components of the Bank Import Library include the following:
• The BankBalanceImportAPI initializer (see the API Reference below)
Setup
No setup is necessary for the Bank Import Library. The distributed library is automatically available within
WT&A.
Use Cases
Import bank balance data using a policy. Since this is a policy-driven library, the source configuration
(whether it is SQL, CSV, etc.) is handled via the policy.
function mappingFunction(source) {
var AGGREGATE_BANKS = ["PTO_AGG"]; // Banks that receive aggregated balances
// (the remainder of the mapping logic is elided in this example)
}
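As a rough sketch, a script might drive the import as follows. The component key names are inferred from the error messages in Troubleshooting below, and the date format, locale handling, and source field names are illustrative assumptions:
// Map the expected components to the field names in the source data
var fieldNames = {
employeeId: "EMP_ID",
asgnmtId: "ASGNMT_ID",
asOf: "AS_OF_DATE",
balance: "BALANCE"
};
// Create the API for a static (non-dynamic) bank import
var api = new BankBalanceImportAPI(fieldNames, false);
// Dates in the source use this format and locale
api.setDateFormat("yyyy-MM-dd", java.util.Locale.US);
// Require each as-of date to fall on a period end date
api.setRequirePeriodEndDate(true);
// Run the import
api.importBankBalances();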
Troubleshooting
The job log of the script using the Bank Import Library will contain information messages and, in the case of
problems, any error messages generated during processing. This job log should be reviewed if there are any
problems encountered while using the API.
Error Message: No employeeId specified
Problem: Self-explanatory
Solution: Correct any error in file and retry

Error Message: No asgnmtId specified
Problem: Self-explanatory
Solution: Correct any error in file and retry

Error Message: No asOf date specified
Problem: Self-explanatory
Solution: Correct any error in file and retry

Error Message: No balance specified
Problem: Self-explanatory
Solution: Correct any error in file and retry

Error Message: Non-numeric balance <balance> specified
Problem: Self-explanatory
Solution: Correct any error in file and retry

Error Message: Static/Dynamic Bank <bank> does not exist
Problem: Self-explanatory
Solution: Make sure the bank exists and the name is correct

Error Message: Missing expected fields in source data: <field names>
Problem: The fields listed are not present in the source.
Solution: Make sure the fields provided and the fields present in the source match.

Error Message: As-of date <date> does not fall on a period end date
Problem: Self-explanatory
Solution: The as-of date must coincide with a period end date.

Error Message: Specified locale parameter is not a valid java.util.Locale object
Problem: Invalid locale parameter type or missing locale parameter in setDateFormat
Solution: Make sure the correct type is being passed.
API Reference
Knowledge of JavaScript programming is necessary to best make use of this section.
The Bank Import Library consists of the following public initializer component:
• BankBalanceImportAPI. It contains the following public methods:
o importBankBalances
o setTesting
o importToAggregateBanks
o setDateFormat
o setRequirePeriodEndDate
o setUseCurrentPeriodDate
o setDeleteExistingRecords
o setDynamicBankImport
The following is a summary of the available methods and common uses:
BankBalanceImportAPI(fieldNames, isDynamicBank)
Creates a new instance of the Bank Import API.
Batch Job Log Policy Tracker API
Prerequisites
To use this API, you should be familiar with the following functionality:
• Basic JavaScript coding
Components
This API consists of the following component(s):
• The BATCH_JOB_LOG_POLICY_TRACKER_API Distributed JavaScript library
• This automatically includes the API_UTIL Distributed JavaScript library
Setup
No setup is necessary for the Batch Job Log Policy Tracker API. The distributed library is automatically
available within WorkForce Time and Attendance.
Use Cases
Getting summary for a particular job
The Batch Job Log Policy Tracker API allows a script to lookup the summary of a particular job including run
count, successful run count, last run time etc. This is useful if a certain action needs to be performed after
the success of a particular job.
// Create a new instance of the API
var batchJobLogPolicyTrackerAPI = new BatchJobLogPolicyTrackerAPI();
// Get the record for a particular job by providing its policy ID and batch job type
var batchJobLogPolicyTracker = batchJobLogPolicyTrackerAPI
.getRecordByPolicyIdAndBatchJobType('INT_1065_LD_KEY_LD_IMPORT_TEST', 'SCRIPT_RUNNER');
Note: If the condition specified matches more than one assignment, an exception will be generated.
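The fields on the returned record (documented in the API Reference below) can then be used to decide whether to proceed, for example:
// Only continue if the job has run successfully at least once
if (batchJobLogPolicyTracker.LAST_SUCCESSFUL_RUN_DTTM != null) {
log.info("Job last succeeded at " + batchJobLogPolicyTracker.LAST_SUCCESSFUL_RUN_DTTM +
" (" + batchJobLogPolicyTracker.SUCCESSFUL_RUNS_COUNT + " of " +
batchJobLogPolicyTracker.RUNS_COUNT + " runs succeeded)");
}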
Troubleshooting
The job log of the script using the Batch Job Log Policy Tracker API will contain information messages and, in
the case of problems, any error messages generated during processing. This job log should be reviewed if
there are any problems encountered while using the API.
API Reference
Knowledge of JavaScript programming is necessary to best make use of this section.
The Batch Job Log Policy Tracker API consists of just one public component:
• BatchJobLogPolicyTrackerAPI
o It contains one public method, getRecordByPolicyIdAndBatchJobType(policyId: String,
batchJobType: String), which returns a read-only record of the
Batch_job_log_policy_tracker class
The following fields are available in the returned record:
Batch_job_log_policy_tracker
BATCH_JOB_TYPE
An integer number that is used by the system to recognize this job internally (may not be of much use to the
script writers).
LAST_RUN_DTTM
WFSDateTime object for the last time this job ran regardless of success.
LAST_SUCCESSFUL_RUN_DTTM
WFSDateTime object for the last time this job ran successfully.
POLICY_ID
The policy ID for this job. Should be the same as what was supplied in the API get method.
RUNS_COUNT
Number of times this job executed regardless of success.
SUCCESSFUL_RUNS_COUNT
Number of times this job ran successfully.
SYSTEM_RECORD_ID
A number for this record that is unique across the entire database.
SYSTEM_TIMESTAMP
Last timestamp this record was written at.
SYSTEM_UPDATE_COUNTER
The number of times this record has been updated.
BatchJobLogPolicyTrackerAPI
Creates a new instance of the Batch Job Log Policy Tracker API.
Decoder API
Overview and Capabilities
A “decoder” is an easy way to define column mappings between third-party source files and WT&A tables via
a Decoder Policy. The goal is to be able to define all the mappings in a single place, and then use the decoder
to access the mappings in the code. This way, if the mappings need to be changed in the future, only the
Decoder Policy needs to be changed.
To use the decoder, first define a Decoder Policy in the Policy Editor. In the Decoder Text field, enter the
column mappings as you would a CSV file, with a heading row and delimited columns. After the Decoder
Policy has been defined, the decoder can be used by importing DECODER_API in a script and using the
methods defined in this Distributed JavaScript Library.
Prerequisites
To use this API, you should know the following:
• Basic JavaScript coding
• How to create a Decoder Policy
• How a decoder works
Components
This Distributed JavaScript Library consists of a single component, DECODER_API.
Setup
No setup is necessary for Decoder API. This distributed library is automatically available within WorkForce
Time and Attendance.
You must create a Decoder Policy in the Policy Records > Interfaces > Decoder Policy section and refer to that
policy name in your script that uses the DECODER_API.
Use Cases
Creating a Decoder
The following example shows sample decoder text that has been entered into a Decoder Policy:
For more information on the Decoder Policy, see the WorkForce Time and Attendance Policy Configuration
Guide.
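The sample Decoder Text is not reproduced here, but the following illustrative snippet shows the expected shape: a heading row naming the columns, followed by one mapping per line (the specific column and field names are assumptions):
SOURCE_DATA,EMPLOYEE,ASGNMT,REQUIRED
EMP_ID,EMPLOYEE.EMPLOYEE,ASGNMT.EMPLOYEE,Y
LAST_NAME,EMPLOYEE.LAST_NAME,,Y
POLICY_PROFILE,,ASGNMT.POLICY_PROFILE,N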
// Validates source data contains all the fields defined in decoder source column
decoder.validateSourceColumns(source, false);
// Call this method to change which column of the decoder text is being treated as
// the source (i.e. to have the Decoder override the defined source column)
var tableName = "EMPLOYEE_STAGE";
decoder.setSourceColumn(tableName);
// Returns string array of all the fields for specified column name
var tableFields = decoder.getFieldsInColumn(tableName);
for (i = 0; i < tableFields.length; i++){
log.info((i + 1) + " : " + tableFields[i]);
}
var connection;
var sql;
try {
connection = new Connection();
var query = "select e.eff_dt as EFF_DT, e.display_employee as EMP_ID, e.last_name as LAST_NAME," +
" e.first_name as FIRST_NAME, e.middle_name as MID_NAME, 'A' as PS_STATUS, null as FTE," +
" e.other_number1 as OTHER_NUMBER, e.emp_date1 as ORIG_HIRE, e.emp_date2 as ADJ_HIRE," +
" e.emp_date3 as TERM_DATE, e.ld1 as LD, au.last_login_dttm as TIMESTAMP from employee e," +
" app_user au where e.employee = au.employee";
// Execute the query to obtain a result set over the source rows
// (the exact method for running the query is assumed here)
sql = connection.executeQuery(query);
// Validates if source data contains all the fields defined in decoder source column
decoder.validateSourceColumns(sql, false);
while (sql.next()) {
// Validates required fields, valid values and specified format of source data
decoder.validateSource(sql, true);
// (population of the employee record from the source row via the decoder maps
// is elided in this example)
try {
if (employee.emp_date2 != WFSDate.valueOf("2018-01-01", "yyyy-MM-dd")) {
throw "employee.emp_date2";
}
else if (employee.middle_name != "ABC") {
throw "employee.middle_name";
}
} catch (e) {
throw "" + e + " field not set with default value";
}
}
}
catch (e) {
log.error(e);
}
finally {
if (connection)
connection.close();
if (sql)
sql.close();
}
Example 3: Using the decoder for CSV-based source data with a File Manager object
includeDistributedPolicy("DECODER_API");
includeDistributedPolicy("FILE_MANAGER_API");
importClass(Packages.com.workforcesoftware.Gen.Choice.Import_source);
// Define the parameters for opening the source file with the File Manager
// (creation of the decoder and fileManager objects, and the specific fileParams
// properties, are elided in this example)
var fileParams = {
};
fileManager.openFile(fileParams);
// Validates source data contains all the fields defined in decoder source column
decoder.validateSourceColumns(fileManager, false);
// Call this method to change which column of the decoder text is being treated as
// the source (i.e. to have the Decoder override the defined source column)
var tableName = "EMPLOYEE_STAGE";
decoder.setSourceColumn(tableName);
// Returns string array of all the fields for specified column name
var tableFields = decoder.getFieldsInColumn(tableName);
for (i = 0; i < tableFields.length; i++){
log.info((i + 1) + " : " + tableFields[i]);
}
try {
// Gets decoder map object
employeeMap = decoder.getMap("SOURCE_DATA", "EMPLOYEE");
assignmentMap = decoder.getMap("SOURCE_DATA", "ASGNMT");
appUserMap = decoder.getMap("SOURCE_DATA", "APP_USER");
// Validates source data contains all the fields defined in decoder source column
decoder.validateSourceColumns(source, false);
while (source.next()) {
// Validates required fields, valid values and specified format of source data
decoder.validateSource(source, false);
// (population of the employee record from the source row via the decoder maps
// is elided in this example)
try {
if (employee.emp_date2 != WFSDate.valueOf("2018-01-01", "yyyy-MM-dd")) {
throw "employee.emp_date2";
}
else if (employee.middle_name != "ABC") {
throw "employee.middle_name";
}
} catch (e) {
throw "" + e + " field not set with default value";
}
}
}
catch (e) {
log.error(e);
}
Troubleshooting
The following table lists some common exceptions and error messages caused by the Decoder API, their
causes, and possible solutions.
API Reference
Decoder
A Decoder object can be created in a script by calling the constructor with the policy ID of the decoder.
Parameter Description
decoderPolicy: String Name of the Decoder Policy to use for building data maps
autoTrimValues: Boolean To have the decoder automatically trim leading and trailing
whitespace on all input data, set autoTrimValues to true
(default is false)
sourceColumnOverride: String To override what column of the Decoder Text is treated as
the source, use the sourceColumnOverride parameter
sourceType: Object To specify the type of the import source (CSV or SQL), use
the sourceType parameter (default is Import_source.CSV)
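For example, a decoder could be created and configured as follows; the Decoder Policy name is hypothetical:
// Create a decoder that trims whitespace on all input values
var decoder = new Decoder("EMPLOYEE_IMPORT_DECODER", true);
// Treat a SQL result set, rather than a CSV file, as the source
// (Import_source must be imported as shown in the use cases above)
decoder.setSourceType(Import_source.SQL);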
debugOn()
Turns on logging of debug messages.
debugOff()
Turns off logging of debug messages.
getHeader()
This method returns all of the header values (the column names) from the decoder as an array of strings.
validateSourceColumns(source, logErrors)
This method checks to ensure all of the fields specified in the source column of the decoder text actually
appear as columns in the source data.
Parameter Description
source: Scriptable/ResultSet Scriptable or ResultSet holding source data
logErrors: Boolean True: Will call log.error() and also throws an exception.
False: Will not call log.error(). Only throws an exception.
print()
This method pretty-prints the entire Decoder text to the script log, using log.info(). The output will appear
the same as the Decoder Text appears in the Policy Editor (after it is auto-formatted upon saving).
validateRequired(source, logErrors)
This method checks the source data to ensure that all fields with REQUIRED = Y are populated with a value.
Parameter Description
source: Scriptable/ResultSet Scriptable or ResultSet holding source data
logErrors: Boolean True: Will call log.error() and also throws an exception.
False: Will not call log.error(). Only throws an exception.
validateValues(source, logErrors)
This method checks the source data and ensures that all fields have one of the valid values (if a set of valid
values is defined for that column).
Parameter Description
source: Scriptable/ResultSet Scriptable or ResultSet holding source data
logErrors: Boolean True: Will call log.error() and also throws an exception.
False: Will not call log.error(). Only throws an exception.
validateFormats(source, logErrors)
This method checks the source data and ensures that all format specifications are valid.
Parameter Description
source: Scriptable/ResultSet Scriptable or ResultSet holding source data
logErrors: Boolean True: Will call log.error() and also throws an exception.
False: Will not call log.error(). Only throws an exception.
validateSource(source, logErrors)
This method validates the required fields, valid values and the format specifications for the source data. If
any of the required/valid values/format columns is not present in the Decoder Text, the method does not
perform that part of the validation.
Parameter Description
source: Scriptable/ResultSet Scriptable or ResultSet holding source data
logErrors: Boolean True: Will call log.error() and also throws an exception.
False: Will not call log.error(). Only throws an exception.
setAutoTrimValues(bool)
Call this method to change the value of autoTrimValues.
Parameter Description
bool: Boolean sets autoTrimValues to the specified Boolean value (default is
false). This value can also be defined in the constructor.
setSourceColumn(sourceColumnOverride)
Call this method to change which column of the Decoder Text is being treated as the source (that is, to have
the decoder override the defined source column).
Situations where you would want to do this would be when importing from a staging table to the actual
table.
For example, if your source ResultSet is the result of querying the EMPLOYEE_STAGE table, and you want to
import this data into the EMPLOYEE table, you would have to set the source to EMPLOYEE_STAGE (that is,
decoder.setSourceColumn("EMPLOYEE_STAGE") ).
This value can also be defined in the constructor.
Parameter Description
sourceColumnOverride: String Name of the table
setSourceType(sourceType)
Call this method to change the value of sourceType.
Parameter Description
sourceType: Import_source Import_source choice is used to define the type of source.
Allowable options are Import_source.CSV and
Import_source.SQL as Scriptable or String. Default value is
Import_source.CSV.
getFieldsInColumn(column)
Returns a list of all the fields that are defined in the decoder for the specified column.
Return type: Array of strings
Parameter Description
column: String Name of the column
Decoder Map
This is a private class and its constructor should not be called directly from a script. Call decoder.getMap() of
Decoder class to retrieve its object and corresponding methods. Following are the description of methods
defined in this class.
DecoderMap(decoderMapJava)
This constructor should not be called directly from a script. To retrieve a DecoderMap object, you should call
the decoder.getMap() method, which returns a DecoderMap.
Parameter Description
decoderMapJava: DecoderMapJava DecoderMapJava object
get(key)
This method assumes that each key value maps to only a single field. If the specified key maps to more than
one value, this method will throw an exception. If multiple fields are mapped to by a key, then getOne or
getAll should be called.
Pass in the key (source field name), and this will return the mapped value (destination field name) as string or
null if the key does not map to any value.
For example, if you have a source field, "EMP_ID", and this maps to "EMPLOYEE.EMPLOYEE", then if you have
a map from "SOURCE" to "EMPLOYEE", calling map.get("EMP_ID") would return "EMPLOYEE".
Parameter Description
key: String Source field name
getOne(key)
This method should be used in the case that a key value may map to multiple fields; however, only one of the
mapped values is needed (and it does not matter which of those values is returned). If the specified key maps
to more than one value, this method will return one, and only one, of the mapped values (there is no
guarantee as to which value will be returned). If all of the multiple fields mapped to a key are desired,
then getAll(key) should be called.
Pass in the key (source field name), and this will return the mapped value (destination field name) as string or
null if the key does not map to any value.
For example, if you have a source field, "EMP_ID", and this maps to "EMPLOYEE.EMPLOYEE" and
"EMPLOYEE.DISPLAY_EMPLOYEE", then if you have a map from "SOURCE" to "EMPLOYEE", calling
map.getOne("EMP_ID") would return either "EMPLOYEE" or "DISPLAY_EMPLOYEE".
Parameter Description
key: String Source field name
getAll(key)
This method should be used in the case that a key maps to multiple fields, and all of the mapped fields are
desired. If the specified key maps to more than one value, this method will return all of the mapped values.
Pass in the key (source field name), and this will return the mapped value (destination field name) as string
array or null if the key does not map to any value.
For example, If you have a source field, "EMP_ID", and this maps to "EMPLOYEE.EMPLOYEE" and
"EMPLOYEE.DISPLAY_EMPLOYEE", then if you have a map from "SOURCE" to "EMPLOYEE", calling
map.getAll("EMP_ID") would return ["EMPLOYEE", "DISPLAY_EMPLOYEE"].
Parameter Description
key: String Source field name
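To make the distinction between get, getOne, and getAll concrete, using the mapping from the examples above:
var empMap = decoder.getMap("SOURCE", "EMPLOYEE");
// EMP_ID maps to both EMPLOYEE.EMPLOYEE and EMPLOYEE.DISPLAY_EMPLOYEE
var all = empMap.getAll("EMP_ID"); // ["EMPLOYEE", "DISPLAY_EMPLOYEE"]
var one = empMap.getOne("EMP_ID"); // either "EMPLOYEE" or "DISPLAY_EMPLOYEE"
// get("EMP_ID") would throw an exception here, because the key maps to more
// than one field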
getKeys()
Returns all the keys for the map as an array of strings.
getValues()
Returns all of the values from the key-value pairs (as an array of strings) that make up the map. If the array
has keys which map to multiple values, the multiple values are listed individually, with nothing linking exactly
which of the values are values for the same key. If you need this information, you must call
getValuesAsArrays().
getValuesAsArrays()
Returns all of the values from the key-value pairs (as an array of arrays of strings: String [][]) that make up the
map. Each value is returned as an array.
For keys that map to a single value, the array will have length of 1.
For keys which map to multiple values, the multiple values are grouped together into an array.
The array returned by this method would be equivalent to calling getAll(key) on all the keys in the map, and
then making an array of those results.
print()
This method prints the contents of the DecoderMap to the script log.
Email API
Overview and Capabilities
The Email API provides a mechanism for scripts to send email notifications during processing. This can be
used either as an end in itself (e.g. a script to send notifications to users that one of their employees has not
punched in) or as part of a larger process (e.g. informing users of errors that occurred while processing data).
Emails generated in this fashion can be sent to any number of users, using the To, CC, or BCC options, and can
include attachments if desired.
Additionally, the Email API supports generating emails that are processed through the EmpCenter Email
Aggregator. This allows for many similar email messages intended for the same user to be created and then
combined into a single email sent to that user, eliminating many duplicate messages that the user would
otherwise receive. Emails sent in this fashion cannot include attachments, however, since the Email
Aggregator does not support attachments.
Prerequisites
To use this API, you should be familiar with the following functionality:
• Basic JavaScript coding
• How EmpCenter’s Email Dispatcher and Aggregator function
Components
This API consists of the following component(s):
• The Distributed JavaScript Library EMAIL_API
Setup
No setup is necessary for the Email API. The distributed library is automatically available within WorkForce
Time and Attendance.
Use Cases
Sending an Email
The most common use for the Email API is to generate an email notification that will be sent out
immediately. Emails sent in this fashion will go directly to the Email Dispatcher, which will send them out to
the target recipients the next time it triggers.
The following example shows how to create and send an email from within a script:
// Define the parameters for the email that should be sent. This is just
// a JavaScript object with properties corresponding to the email settings
var parameters = {
// Indicate who should receive the email
recipients: Email.createRecipientList("johnDoe@email.com<John Doe>", "TO"),
// Define the subject line for the email
subject: "Email Notification from Script",
// Define the text that makes up the body of the email message
body: "This is the actual content of the email message<br>HTML content <u>will" +
" not</u> be evaluated",
// Indicate which email address the message should appear to originate from
// (This will be the "reply-to" address for the message as well)
senderEmailAddress: "from@example.com"
};
// Queue the email with the Dispatcher, which will send it out when it next triggers
Email.createAndSendEmail(parameters);
The email generated in the above example will be sent as plain text. This is often useful for messages where
the content does not require any special formatting. When this encoding format is used, the message body
will appear exactly as specified within the script.
However, in some cases the ability to format the message content is required in order to ensure that the
message conveys the needed information in the most efficient fashion, or to embed hyperlinks in the
message content. The following example demonstrates how to send an email message encoded in HTML:
// Define the parameters for the email that should be sent. This is just
// a JavaScript object with properties corresponding to the email settings
var parameters = {
// Indicate who should receive the email
recipients: Email.createRecipientList("johnDoe@email.com<John Doe>", "TO"),
// Define the subject line for the email
subject: "Email Notification from Script",
// Define the text that makes up the body of the email message
body: "This is the actual content of the email message<br>HTML content <u>will" +
"</u> be evaluated",
// Indicate which email address the message should appear to originate from
// (This will be the "reply-to" address for the message as well)
senderEmailAddress: "from@example.com"
};
// Queue the email with the Dispatcher, which will send it out when it next triggers
Email.createAndSendEmail(parameters);
Emails generated in this fashion will evaluate any HTML tags embedded in the body text, and the user will see
the corresponding formatted message.
Sending Emails Through the Aggregator
In some cases, many similar notifications should be combined by the Email Aggregator rather than sent
individually. The following example generates emails in a loop and queues each one for the Aggregator:
// Specify the time when the aggregator should send the emails
var sendTime = WFSDateTime.now().addMinutes(10);
// Iterate over some data source defining the emails that should be generated
while (source.next() ) {
// Define the parameters for the email that should be sent. This is just
// a JavaScript object with properties corresponding to the email settings
var parameters = {
// Indicate who should receive the email
recipients: Email.createRecipientList("johnDoe@email.com<John Doe>", "TO"),
// Define the subject line for the email
subject: "Email Notification from Script",
// Define the text that makes up the body of the email message
body: "This is the actual content of the email message",
// Indicate which email address the message should appear to originate from
// (This will be the "reply-to" address for the message as well)
senderEmailAddress: "from@example.com"
};
// Create the email and queue it for the Aggregator; similar messages generated
// for the same user before the send time will be combined into a single email
var email = new Email(parameters);
email.queueForAggregator(sendTime);
}
// Define a single string containing the recipients that should receive the email
// Recipients should be delimited by semi-colons, and can optionally have their
// name specified within angle brackets
var recipientString = "johnDoe@email.com<John Doe>;";
recipientString += "janeDoe@email.com<Jane Doe>;";
recipientString += "tomSmith@email.com";
// Convert the recipient string into a list of recipients, all sharing the TO
// send option
var recipientList = Email.createRecipientList(recipientString, "TO");
// Define the parameters for the email that should be sent. This is just
// a JavaScript object with properties corresponding to the email settings
var parameters = {
// Indicate who should receive the email. All three users in the list will
// receive the email.
recipients: recipientList,
// Define the text that makes up the body of the email message
body: "This is the actual content of the email message",
// Indicate which email address the message should appear to originate from
// (This will be the "reply-to" address for the message as well)
senderEmailAddress: "from@example.com"
};
// Queue the email with the Dispatcher, which will send it out when it next triggers
Email.createAndSendEmail(parameters);
Note: When using the createRecipientList method, all of the recipients in the list will share the same send
option.
// Build up the set of recipients that should receive the email
// (the construction of the set is reconstructed here; any Java collection of
// recipients is accepted)
var recipients = new java.util.HashSet();
recipients.add(new EmailRecipient("johnDoe@email.com", "John Doe", "TO"));
recipients.add(new EmailRecipient("janeDoe@email.com", "Jane Doe", "TO"));
// Define the parameters for the email that should be sent. This is just
// a JavaScript object with properties corresponding to the email settings
var parameters = {
// Indicate who should receive the email. All users added to the set will
// receive the email.
recipients: recipients,
// Define the text that makes up the body of the email message
body: "This is the actual content of the email message",
// Indicate which email address the message should appear to originate from
// (This will be the "reply-to" address for the message as well)
senderEmailAddress: "from@example.com"
};
// Queue the email with the Dispatcher, which will send it out when it next triggers
Email.createAndSendEmail(parameters);
// Define the parameters for the email that should be sent. This is just
// a JavaScript object with properties corresponding to the email settings
var parameters = {
// Indicate who should receive the email
recipients: Email.createRecipientList("johnDoe@email.com<John Doe>", "TO"),
// Define the subject line for the email
subject: "Email Notification from Script",
// Define the text that makes up the body of the email message
body: "This is the actual content of the email message<br>HTML content <u>will" +
"</u> be evaluated",
// Indicate which email address the message should appear to originate from
// (This will be the "reply-to" address for the message as well)
senderEmailAddress: "from@example.com",
// Attach a file located on the application server, mapping its path to its
// content type (the path and content type here are illustrative)
attachments: { "/path/on/server/report.csv": "text/csv" }
};
// Queue the email with the Dispatcher, which will send it out when it next triggers
Email.createAndSendEmail(parameters);
Note: All attachments must be located on the application server. It is not possible to send client-side
attachments through this email process.
// Create the email message to be built up and sent through the Aggregator
// (the parameters object is as defined in the earlier examples)
var email = new Email(parameters);
// Define the recipient for the email. (This can be repeated as necessary to add
// additional recipients)
var recipient = new EmailRecipient("johnDoe@email.com", "John Doe", "TO");
email.addRecipient(recipient);
while (employees.next() ) {
email.appendToBody("Employee " + employees.employeeId + "\r\n");
}
// Queue the email to be sent by the Aggregator. It will be sent at the indicated
// time after having been aggregated with any similar emails that were generated in
// the meantime
var sendTime = WFSDateTime.now().addMinutes(10);
email.queueForAggregator(sendTime);
Troubleshooting
The job log of the script using the Email API will contain information messages and, in the case of problems,
any error messages generated during processing. This job log should be reviewed if there are any problems
encountered while using the API.
API Reference
Knowledge of JavaScript programming is necessary to best make use of this section.
The Email API makes use of:
1. Static methods, which can be called without needing to instantiate any other objects.
2. The Email class, which allows for a message to be built through several different calls and then sent
out to recipients.
3. The EmailRecipient class, which defines a single recipient for an email message and how that
message should be sent to that recipient
See the contents of the EMAIL_API policy in the Distributed JavaScript Library category for full documentation
on these methods. The following is a summary of the available methods and common uses:
Static Methods
Email.createAndSendEmail(emailParams)
Creates a new email and queues it up to be sent by the Dispatcher. The parameters that can be specified for
this method are:
Parameter Name Description
recipients A single recipient or Java Collection of recipients
who should receive the email
subject The string that should appear as the subject line of
the email
body The message content to be included in the email
format The content type (plain text or HTML) that should
be used for the message
senderName The name that should appear as the sender for the
email
senderEmailAddress The email address that should appear as the source
for the email
actCaseID The ACT case that the email is associated with
actDocumentId The ID of the ACT case document attached to the
email
attachments Map from file path for the attachment to the
corresponding content type for the attachment
Table 39: Parameters for the Email.createAndSendEmail method
Email.createEmailAndQueueForAggregator(emailParams)
Creates a new email and queues it up to be processed by the Aggregator. The parameters that can be
specified for this method are:
Parameter Name Description
recipients A single recipient or Java Collection of recipients
who should receive the email
subject The string that should appear as the subject line of
the email
body The message content to be included in the email
format The content type (plain text or HTML) that should
be used for the message
senderName The name that should appear as the sender for the
email
senderEmailAddress The email address that should appear as the source
for the email
actCaseID The ACT case that the email is associated with
Table 40: Parameters for the Email.createEmailAndQueueForAggregator method
Email.createRecipientList(recipients, sendOption)
Converts a single string containing all of the recipients that should receive an email into a list of recipients,
each associated with the specified send option.
The string to be converted by this method is expected to be in the form:
"johnDoe@email.com<John Doe>;janeDoe@email.com<Jane Doe>"
getSystemMailName()
Returns the name used on emails sent by WorkForce Time and Attendance to identify their source, as
defined in the build properties.
getSystemMailAddress()
Returns the email address used on emails sent by WorkForce Time and Attendance to identify their source,
as defined in the build properties.
validateEmailAddress(emailAddress)
Returns true if the specified email address conforms to RFC822 standards for a valid email address, false if it
does not.
Email
Email(emailParams)
Creates a new email message, which can be programmatically built up as needed and then sent once it is
finished being built. The parameters that can be specified when instantiating this object are:
Parameter Name Description
recipients An array of EmailRecipients who should receive the
email
subject The string that should appear as the subject line of
the email
body The message content to be included in the email
format The content type (plain text or HTML) that should
be used for the message
be used for the message
senderName The name that should appear as the sender for the
email
senderEmailAddress The email address that should appear as the source
for the email
actCaseID The ACT case that the email is associated with
actDocumentId The ID of the ACT case document attached to the
email
attachments Map from file path for the attachment to the
corresponding content type for the attachment
Table 41: Parameters for the Email constructor
addRecipient(recipientParam, sendOption)
Adds a single new recipient to the list of recipients for the email.
addRecipients(recipients)
Adds the specified array of recipients to the email. This array of recipients can be built using the
Email.createRecipientList static method.
getRecipients()
Returns the array of all recipients currently set to receive the email, along with the send option associated
with each.
getSubject()
Returns the current subject text that will appear in the email.
setSubject(text)
Sets the subject line of the email message to the specified text.
appendToSubject(text)
Adds the specified text to the end of the existing subject line for the email.
getBody()
Returns the current message body text for the email.
setBody(text)
Sets the message body for the email message to the specified text.
appendToBody(text)
Adds the specified text to the end of the existing email message body.
setEmailFormat(format)
Specifies the content type of the email message body (either plain text or HTML).
getEmailFormat()
Returns the current content type of the email message body (either plain text or HTML).
addAttachment(filePath, contentType)
Adds an additional attachment to the email, attaching the file located at the specified file path and
associating it with the indicated content type.
addAttachments(attachments)
Adds the specified attachments to the email. The argument here is a JS object, with properties
corresponding to the file paths for the attachments and values corresponding to the content types for those
files.
attachActDocument(documentId)
Attaches an ACT document with the specified ID to the email. The ACT Case ID must have been specified on
the email before calling this method.
setSender(emailAddress, name)
Specifies the email address and name that should appear as the originator for the email.
getSender()
Returns the email address and name that are currently set as the originator for the email.
setActCaseId(caseId)
Specifies which ACT Case this email is associated with.
getActCaseId()
Returns the current ACT Case that this email is associated with.
send()
Queues the email up for the Dispatcher, which will send it out to the identified recipients.
queueForAggregator(sendTime)
Queues the email up for the Aggregator, which will send it out to the identified recipients at the indicated
send time. Similar messages generated before that send time that are targeted to the same user will all be
combined into a single email message.
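Putting the Email class methods together, a message can be built incrementally and queued for the Aggregator as in the following sketch; the addresses, content, and send time are illustrative:
// Create an email and build it up programmatically
var email = new Email({ senderEmailAddress: "from@example.com" });
email.addRecipient(new EmailRecipient("johnDoe@email.com", "John Doe", "TO"));
email.setSubject("Daily summary");
email.setBody("Totals for today:\r\n");
email.appendToBody("Timesheets processed: 120\r\n");
// Queue for the Aggregator, to be sent ten minutes from now after being
// combined with any similar messages
email.queueForAggregator(WFSDateTime.now().addMinutes(10));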
Employee Import Library
Prerequisites
To use this API, you should know the following:
• Basic JavaScript coding
• Background knowledge of the Employee Import process.
Components
The components of the Employee Import API are as follows:
• EMPLOYEE_IMPORT_UTIL Distributed JavaScript library
• Wrapper class of Decoder using DECODER_API Distributed JavaScript library
• FILE_MANAGER_API Distributed JavaScript library
• JOB_QUEUE_API Distributed JavaScript library
Setup
No setup is necessary for the Employee Import Library. The distributed library is automatically available in
WorkForce Time and Attendance.
Use Cases
Creating and Accessing the Manager Map
The following script shows how to determine if an employee is a manager by creating the manager mapping.
// Returns a map of distinct values of a specified field (MANAGER_ID) from the specified table
// (ASGNMT_STAGE) for active employees
var ACTIVE_STATUSES = ["A", "L", "T"]; // Status codes indicating an employee is active
if (!mappingList.contains("Y")) {
throw "Policy Profile Mapping mappingParams.getMapping() method failed.";
}
Example 3: Accessing the Policy Profile name against the specified source data
// Sets the policy profile based on the specified record values.
assignment.policy_profile = ppMap.getProfileFromRecord(source);
if (policyProfile != policyProfileFromSource) {
throw "Policy Profile Mapping test using getProfileFromSource() method failed. Policy
profile expected: " + policyProfile + ", found: " + policyProfileFromSource;
}
log.info("Policy profile " + assignment.policy_profile + " lies in policy profile group " +
policyProfileGroup + " with current period begin date " + curPeriodBegin + " and current
period end date " + curPeriodEnd);
if (oldEmp != null) {
log.info("HR status of employee last record " + oldEmp.HR_STATUS + " with effective date "
+ oldEmp.EFF_DT + " and and effective date " + oldEmp.END_EFF_DT);
}
else {
log.error("Employee not found as of: " + asOfDate);
}
The following example provides a way to access the last employee/assignment record.
// Gets the last employee record
var oldEmployee = getOldEmployee();
if (oldEmployee.TERMINATION_DATE != null) {
var oldTermDate = WFSDate.valueOf(oldEmployee.TERMINATION_DATE, "yyyy-MM-dd");
log.info("Termination date of employee last record " + oldTermDate);
}
else {
log.error("Unable to Determine Role");
}
//Initializes all status codes/dates and effective dates for an assignment record
setAllStatusAndEffectiveDt(assignment, assignment.hr_status, ACTIVE_STATUSES, LEAVE_STATUSES,
GO_LIVE_DATE, assignment.eff_dt, assignment.emp_date1, assignment.emp_date3,
assignment.emp_date4, TERM_STATUSES);
// Checks whether the assignment record being imported reflects a change in policy profile
// group from the existing policy profile.
isSplitAssignment();
// Defines a new Decoder Map from specified source field to specified destination field
decoderData.addDecoderMap("employeeMap", "EMPLOYEE_STAGE");
decoderData.addDecoderMap("asgnmtMap", "ASGNMT_STAGE");
// Defines a new Decoder Map from specified destination field to specified source field
decoderData.addReverseMap("employeeSourceMap", "EMPLOYEE_STAGE");
// Returns the DecoderMap that has been defined with the specified reference name
var reverseMap = decoderData.getMap("employeeSourceMap");
if (!reverseMap) {
throw "No mapping has been defined from the data source to the EMPLOYEE_STAGE table";
}
// Applies specified mapping to decode values from source record to employee/assignment record
var employee = new DbRec("EMPLOYEE_STAGE");
// Removes all records in the employee_stage table that match the specified condition
var empMatchCondition = new MatchCondition('EMPLOYEE_STAGE', 'DISPLAY_EMPLOYEE',
MatchOperator.EQUALS, source.employeeId);
clearEmployeeStageRecords(empMatchCondition);
// Removes all records in the asgnmt_stage table that match the specified condition
var asgnmtMatchCondition = new MatchCondition('ASGNMT_STAGE', 'LD10', MatchOperator.EQUALS,
source.employeeId);
clearAsgnmtStageRecords(asgnmtMatchCondition);
Utility Functions
The following script extends the above examples and provides a way of using different utility methods.
// Creates a DB connection object
var connection = new Connection();
// Creates and returns a multi-map between specified key (login_id) and its corresponding
// values (bo_user_id)
boUserIdMap = buildMapFromDBFields(connection, "login_id", "bo_user_id", "app_user");
// Checks if app_user.login_id exists in boUserIdMap
if (boUserIdMap.containsKey(app_user.login_id)) {
app_user.bo_user_id = boUserIdMap.get(app_user.login_id)[0];
}
// Creates a job queue object and initializes jobs for specified employee/assignment Import
// Control policies
var jobQueue = createAndInitJobQueueForImport(connection, "EMPLOYEE_IMPORT_POLICY",
"ASGNMT_IMPORT_POLICY");
jobQueue.executeJobs();
Troubleshooting
The job log of the script using the Employee Import Library contains error messages generated during
processing. Review this job log if problems are encountered while using this API.
The following table lists some common error messages, their causes, and possible solutions.
Policy profile is not defined or has not been initialized.
Cause: The specified policy profile does not exist in the policy profile map.
Solution: Provide a valid policy profile to the getPolicyProfilePeriodStatus method.
No active status codes specified.
Cause: The ACTIVE_STATUSES array is empty.
Solution: Populate status codes in the ACTIVE_STATUSES array.
Assignment with computed_match_id not imported.
Cause: asgnmt.eff_dt is earlier than the split date of the last asgnmt record.
Solution: Provide appropriate dates.
API Reference
Knowledge of JavaScript programming is necessary to make best use of this section.
Import Options
importOptions is an object having multiple flags for controlling employee import behavior. The following
table lists standard import options along with their description and valid values.
Parameter Description
importOptions.allow_out_of_order_records
Controls whether out-of-order effective-dated records are allowed. When records are imported out of
order, all records with effective dates after the effective date of the record being imported will be deleted.
true = records with effective dates earlier than an existing date for the employee/assignment can be loaded.
false = records with effective dates earlier than an existing date for the employee/assignment will error out.
importOptions.allow_import_closed_periods
Controls whether new assignments can be created with active dates in closed periods.
true = assignment activations in closed periods are allowed.
false = assignment activations in closed periods will have their earliest active date moved to the current
period begin date.
importOptions.allow_import_term_if_previous_term
Controls whether a terminated assignment record can be imported for an assignment that is already
terminated.
true = new termination records will be allowed to import successfully.
false = new termination records will not be imported.
importOptions.check_split_assignment
Controls whether the import will attempt to create split assignments if needed for a policy profile change.
true = import will check for split assignments.
false = import will not check for split assignments.
importOptions.override_eff_dt
Controls whether the import will override the specified effective date for terminations and/or rehires.
true = import will use the termination/rehire date as the effective date for assignments changing status.
false = import will always use the provided effective date as the date of the change.
importOptions.update_staging_table_status
Controls whether the import will update the processed state in the staging table when
updateStagingTableStatus is called.
true = import will update the processed state.
false = import will not update the processed state.
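As a minimal sketch, an import script might override a couple of these flags before running the import
(exactly where importOptions is defined depends on the import script in question):
// Reject out-of-order effective-dated records instead of deleting later-dated ones
importOptions.allow_out_of_order_records = false;
// Check for split assignments when the policy profile group changes
importOptions.check_split_assignment = true;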
mapPolicyProfiles(sourceConnection)
Loads lookup table of periods and policy profile groups and returns mapping from policy profile name to
attributes related to that policy profile:
• PPG – The policy profile group ID containing the policy profile
• CurPrdBeg – The current period begin date for the policy profile
• CurPrdEnd – The current period end date for the policy profile
Parameter Description
sourceConnection: Connection (Optional) Local database connection to use for
querying the data. If passed null or empty, the
method creates a new connection to the local
database and closes it when the method
completes.
getPolicyProfilePeriodStatus(policyProfile)
Determines the information about the current period for the specified policy profile. Returns object with
following attributes:
• PPG – The policy profile group ID containing the policy profile
• CurPrdBeg – The current period begin date for the policy profile
• CurPrdEnd – The current period end date for the policy profile
Parameter Description
policyProfile: String Name of policy profile to be evaluated
getOldEmployee(asOfDate)
Finds and returns the last employee record.
Parameter Description
asOfDate: WDate If provided, it will select the record that is
effective on this date.
getOldEmployeeAsOf(asOfDate)
Finds and returns the employee record that is effective on the specified date.
Parameter Description
asOfDate: WDate Selects the record that is effective on this date.
getOldAssignment(asOfDate)
Finds and returns the last assignment record, optionally as of the specified date.
Parameter Description
asOfDate: WDate If provided, it will select the record that is
effective on this date.
isSplitAssignment()
Checks if the assignment record being imported reflects a change in the policy profile group from the existing
policy profile for that assignment. If so, changes the effective date as needed and sets up the policy
configuration and split date fields to reflect a split assignment. Returns true if the assignment was split, false
if it was not.
Note: This function must be called after the status has been set on the assignment record.
doNotImport()
Sets all the import flags to false, preventing any import operation.
generatepass(length)
Random Password Generator (used for LDAP configurations).
Returns a randomly generated password using ASCII characters in the decimal range 33 (!) to 122 (z). This
includes all uppercase and lowercase alphanumeric characters, as well as printable characters such as
~!@#$%^&*()_+{}|:"';?><,.'\ etc.
Parameter Description
Length: Integer (Optional) - Number of characters in the
generated password. Default is 8. If length is less
getMaxID(workforceConn)
Gets the job ID of the last job run (should be the current job).
Parameter Description
workforceConn: Connection Local database connection to use for querying
the data
(optional) - if passed null or empty the method
will create the connection to the local database
and then in the end of the method the
connection will be closed.
getTimeZoneForState(state)
Returns best guess as to time zone based on specified state.
Parameter Description
state: String Name of state to determine its time zone.
setAllStatusAndEffectiveDt(record, currentStatus, validActiveStatuses, leaveStatuses, goLiveDate,
sourceEffDate, sourceOriginalHireDate, sourceLastActiveDate, sourceLatestHireDate,
validTerminatedStatuses)
Initializes all status codes/dates and effective dates for an employee or assignment record (see the
use-case example above, from which the parameter order is taken).
Parameter Description
record: DbRecScriptable Either an employee or assignment record
currentStatus: String Current status of the record. Defaults to the value of
the HR_STATUS field.
leaveStatuses: String[ ] Array of valid string values that correspond to a leave
status
validActiveStatuses: String[ ] Array of status codes that are considered to be
active.
goLiveDate: WDate The earliest date the assignment should be able
to be active on.
sourceEffDate: WDate The effective date that should be used (overrides
standard calculated value).
sourceOriginalHireDate: WDate The original hire date for the assignment.
sourceLastActiveDate: WDate The last-active date for the assignment.
sourceLatestHireDate: WDate The most recent hire date for the assignment.
validTerminatedStatuses: String[ ] Array of status codes that are considered to be
terminated.
PolicyProfileMapping()
The PolicyProfileMapping object provides the means to store mappings for policy profiles and to determine
which policy profile is applicable based on a specified set of values.
setFields()
Sets the names of the fields being mapped for logging purposes. Field names to set should be passed as
arguments, as shown below:
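// Sketch: the field labels here are illustrative values
ppMap.setFields("POLICY_PROFILE_GROUP", "PAY_GROUP", "UNION_CODE");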
getProfile()
Returns the policy profile to use based on the specified data, null if no corresponding profile is found.
You must pass in the same number of arguments as there are field labels for the mapping. Field data should
be passed as arguments, as shown below:
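// Sketch: one value per field label set via setFields; the source fields are illustrative
var policyProfile = ppMap.getProfile(source.POLICY_PROFILE_GROUP, source.PAY_GROUP, source.UNION_CODE);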
getProfileFromRecord(record)
Returns the policy profile based on the values in the record, null if no corresponding profile is found. The
fields array is used to determine which values to pull, so it needs to be set up before this call, using the
actual field names present in the source data.
Parameter Description
record: Object Record containing field data. Allowable objects
are DbRec, FileManagerJava and ResultSet.
getProfileFromSource(source)
Returns the policy profile based on the values in the result set, null if no corresponding profile is found. The
fields array is used to determine which values to pull, so it needs to be set up before this call, using the
actual field names present in the source data.
Parameter Description
Source: FileManagerJava Result set containing field data
evaluateMatch(mapping, value)
Evaluates the provided mapping against the provided field value to determine whether the value matches
the mapping. Returns true if the value meets the mapping criteria, false otherwise.
Parameter Description
Mapping: String Mapping value or wildcard.
Mappings can be a straight value, a wild card (*),
or use IN(string) to indicate that the value needs
to be contained within the string.
Value: String Field value for comparison.
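For instance, a minimal sketch of the three mapping forms (the record fields are illustrative):
ppMap.evaluateMatch("HOURLY", record.PAY_TYPE); // straight value: true only on an exact match
ppMap.evaluateMatch("*", record.PAY_TYPE); // wildcard: matches any value
ppMap.evaluateMatch("IN(NY,NJ,CT)", record.STATE); // true if the value is contained within the string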
getMappingParms(mapping)
Returns an array of entries for the specified mapping. Used internally to allow arrays to be specified and
processed correctly within the mapping.
Parameter Description
mapping: String Single mapping or multiple mappings of the form
"['val1','val2',...]" or "!['val1','val2',...]"
buildMapFromDBFields(connection, keyField, valueField, table)
Creates and returns a multi-map between the specified key field and its corresponding values (see the
Utility Functions use case above).
Parameter Description
keyField: String The field in the table that houses the keys
valueField: String The field in the table that houses the values
table: String The table used to look up the keyfield and
valuefield
clearEmployeeStageRecords(matchCondition)
Removes all records in the employee_stage table that match the specified condition.
Parameter Description
matchCondition: MatchConditionJava Condition to delete the records from
employee_stage table
clearAsgnmtStageRecords(matchCondition)
Removes all records in the asgnmt_stage table that match the specified condition.
Parameter Description
matchCondition: MatchConditionJava Condition to delete the records from
asgnmt_stage table
clearAllEmployeeStageRecords()
Removes all records in the employee_stage table.
clearAllAsgnmtStageRecords()
Removes all records in the asgnmt_stage table.
appendSplitAsgnmtMasterDescription()
Appends the split date to the description of the old half of a split assignment to ensure that the description
doesn't overlap with the new half's description, preserving the uniqueness of assignment descriptions in
the assignment_master table.
To be used in the post-policy profile change script of the assignment import policy.
getMap(name)
Returns the DecoderMap that has been defined with the specified reference name. Requires that either
addDecoderMap or addReverseMap have been called previously in order to define the map that should be
retrieved.
Parameter Description
name: String Previously-defined reference name for the map
to be looked up
validateSourceData(sourceData)
Checks that the current record in the source data meets the necessary conditions (required fields populated,
data matches valid values, source data formats) as defined in the decoder.
Parameter Description
sourceData: ResultSet Source data result set cursor pointing to the
record to be evaluated
Prerequisites
To use this API, you should be familiar with the following functionality:
• Basic JavaScript coding
Components
This API consists of the following component(s):
• There are no Export Stage API components.
Setup
No setup is necessary for the Export Stage API. The distributed library is automatically available within
WorkForce Time and Attendance.
Use Cases
Clearing the data from the staging table
The Export Stage API allows you to clear data from the staging table.
Example 1: Clear all records from the staging table
includeDistributedPolicy("EXPORT_STAGE_API");
try {
//Create an instance of the API
var exportStageAPI = new ExportStageApi();
//Clear out the staging table
exportStageAPI.clearStaging();
exportStageAPI.commit();
}
catch(e) {
exportStageAPI.rollback();
log.error(e);
}
finally {
exportStageAPI.close();
}
Example 2: Clear processed records from the staging table
try {
//Create an instance of the API
var exportStageAPI = new ExportStageApi();
//Clear processed records (the clearProcessed call here is an assumption; the original
//example does not show which clearing method was used)
exportStageAPI.clearProcessed();
exportStageAPI.commit();
}
finally {
exportStageAPI.close();
}
Example 3: Clear records from staging table based on the provided criteria
includeDistributedPolicy("EXPORT_STAGE_API");
//criteria
var clearCriteria = {};
clearCriteria.whereClause = "(export_id = '" + EXPORT_ID_VALUE + "')";
try {
//Create an instance of the API
var exportStageAPI = new ExportStageApi();
//Clear out staging table
exportStageAPI.clearByCriteria(clearCriteria);
exportStageAPI.commit();
}
finally {
exportStageAPI.close();
}
//criteria for retrieving records with getRecords
var criteria = {
whereClause: "EXPORT_ID = 'SEMP_HOURS_EXPORT_TO_ARCOS'",
orderByClause: "employee, calc_string2"
};
includeDistributedPolicy("EXPORT_STAGE_API");
//criteria
var criteria = {};
criteria.orderByClause = "employee";
try {
//Create an instance of the API
var exportStageAPI = new ExportStageApi();
//fetch the unprocessed records
var records = exportStageAPI.getUnprocessedRecords(criteria);
}
finally {
exportStageAPI.close();
}
Troubleshooting
The job log of the script using the Export Stage API will contain informational messages. Any error
messages generated during processing are typically due to a connection or database issue; the exception
thrown contains the details of the error.
API Reference
ExportStageApi
The ExportStageApi has the following methods available:
clearStaging()
Clears all records from the staging table, i.e. export_stage.
clearProcessed()
Clears processed records from the staging table, i.e. export_stage.
clearByCriteria(criteria)
Clears records from the staging table based on the provided criteria. The criteria object should have the
following property:
whereClause: a string that will be used as the SQL where clause of the query that clears records.
getRecords(criteria)
Returns the staging table records based on the provided criteria (optional). The criteria object can have
the following properties:
whereClause: a string that will be used as the SQL where clause of the query that retrieves records
(optional).
orderByClause: a string that will be used as the SQL order by clause to order the results (optional).
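A minimal sketch of fetching records for a single export, reusing the criteria shape shown in the use cases
above (exportStageAPI is assumed to have been created as in those examples):
var records = exportStageAPI.getRecords({
whereClause: "EXPORT_ID = 'SEMP_HOURS_EXPORT_TO_ARCOS'",
orderByClause: "employee, calc_string2"
});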
getUnprocessedRecords(criteria)
Returns the unprocessed staging table records based on the provided criteria (optional). The criteria object
can have the following property:
orderByClause: a string that will be used as the SQL order by clause to order the results (optional).
writeList(list)
Write a DBRecList to the database within the current database transaction.
commit()
Commit the current state of the database transaction.
rollback()
Roll back the state of the database transaction to the last commit.
close()
Close the database connection.
Prerequisites
To use this API, you should be familiar with the following functionality:
• Basic JavaScript coding
Components
This API consists of the following component(s):
• The Distributed JavaScript Library, FILE_MANAGER_API
Setup
No setup is necessary for the File Manager API. The distributed library is automatically available in
WorkForce Time and Attendance.
Use Cases
Reading data from a CSV file
This script excerpt demonstrates opening the file sample.csv and validating that it contains the header fields
EMPLOYEE, EFFECTIVE_DATE, PROJECT, and AMOUNT. It then reads values from the file and logs them out.
includeDistributedPolicy("FILE_MANAGER_API");
fm.openFile(fileParameters);
fileReader.next();
var secondLine = fileReader.getLine();
// close the reader
fileReader.close();
Reading data from a CSV file while validating the character set of a file
This script excerpt demonstrates opening the file sample.csv, but before that it validates that the character
set provided in the file is correct compared to what was passed as a parameter. It then reads values from the
file and logs them.
includeDistributedPolicy("FILE_MANAGER_API");
Reading data from a fixed-width file while validating the character set
of a file
This script excerpt demonstrates reading data from a fixed-width file, but before that it validates that the
character set provided in the file is correct compared to what was passed as a parameter. To access a field,
the starting position of the field and its width in characters must be specified. Position ordering begins with 0
at the far left of the line.
includeDistributedPolicy("FILE_MANAGER_API");
// Set and open the file. The filepath and filename are combined into a single parameter,
// plus other parameters for validating the character set are also provided.
var charSet = "UTF-8";
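A sketch of reading such a file follows. The FixedWidthFileReader constructor parameters shown here (an
object mirroring FileLineReader's) are an assumption; open, next, getData, and close are documented in
the API Reference below:
// Constructor parameter object is assumed by analogy with FileLineReader
var reader = new FixedWidthFileReader({
filename: "workforce/sample_data/datafiles/fixed_width.txt",
charSet: charSet,
enforceCharSetEncoding: true
});
reader.open();
while (reader.next()) {
// Field starting at position 0, 10 characters wide; the positions are illustrative
log.info(reader.getData(0, 10));
}
reader.close();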
Reading data from a file line by line while validating the character set of
a file
This script excerpt demonstrates reading data from a file line by line, but before that it validates that the
character set provided in the file is correct compared to what was passed as a parameter.
includeDistributedPolicy("FILE_MANAGER_API");
// set the file
var fileName = "workforce/sample_data/datafiles/file_manager_api_read_file_test.txt";
var fileReader = new FileLineReader({
filename: fileName,
charSet: 'UTF-16'
});
fileReader.open();
// read the file line by line
fileReader.next();
var firstLine = fileReader.getLine();
// move to the next line
fileReader.next();
var secondLine = fileReader.getLine();
// close the reader
fileReader.close();
Archiving a file
After processing a file, it may be advisable to move it to an archive directory and rename it, so its contents
can be reviewed if necessary. The following example demonstrates archiving a file, sample.csv.
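A minimal sketch of that script, assuming fm is a FileManager already pointing at sample.csv:
// Move the file into the archive directory, prepending a timestamp to its name
fm.archiveFile("archive");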
After this script is run, the file sample.csv will be moved to a directory named archive and a timestamp
prepended to the filename; for instance, 20180323135113_sample.csv.
The archiveFile option does not allow customization of the archived file name. If prepending a timestamp is
not the desired change, the new name can be fully specified using the file renaming operation, described
next.
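A sketch of renaming the file in place, again assuming fm points at sample.csv:
// Keep the file in its current directory, changing only its name
fm.renameFile(fm.getPath(), "sample.bak");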
After this script is run, the file sample.csv should be renamed to sample.bak and remain in the same
directory as it was initially.
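A sketch of moving the file without renaming it:
// Move the file to newDir, keeping the current file name
fm.renameFile("newDir", fm.getName());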
After this script is run, the file sample.csv will be moved to a directory named newDir, but with the file name
unchanged.
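And a sketch of moving and renaming in a single operation:
// Move the file to newDir and rename it at the same time
fm.renameFile("newDir", "sample.bak");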
After this script is run, the file sample.csv will be renamed to sample.bak and moved to a directory named
newDir.
Troubleshooting
The job log of the script using the File Manager API will include informational messages, and in the case of
problems, error messages. This job log should be reviewed if there are problems using the API.
Some common error messages, their causes, and the solution:
FileNotFoundException: Source File 'nonexistentFile.txt' could not be found in folder
Cause: The file as specified could not be found.
Solution: Check that the file path and name are correct, and the file is in place.
Invalid delimiter. Delimiter must be a single character.
Cause: A delimiter of more than one character has been specified.
Solution: Use a single-character delimiter.
Header fields are not set, so nothing will ever actually be read from file.
Cause: The file contains no headers, and none were specified.
Solution: Add headers to the file, or provide the header list to the API.
Fields cannot be validated, because file is not open.
Cause: Header validation was attempted before the file was opened.
Solution: Call validateFields after opening the file.
Extra fields present in import file: EXTRA_FIELD
Cause: Actual and expected headers do not match when validateFields is used.
Solution: Make sure the correct file is being read, and the correct headers are specified. If extra fields are
expected, pass areExtraFieldsAllowed as true.
API Reference
FileManager
The FileManager has the following methods available:
FileManager()
Create a new FileManager.
setFile(filePath, fileName)
Specify the path and name of the file to be processed.
getPath()
Returns the path of the file - as provided in the setFile call.
getName()
Returns the name of the file - as provided in the setFile call.
archiveFile(archivePath)
Prepend a current timestamp to the name of the file and move it to the specified path.
renameFile(newFilePath, newFileName)
Rename and/or move the file to the specified path and file name.
openFile(fileParameters)
Open the file specified by the call to setFile. The fileParameters are specified as a JavaScript object, which
can have the following properties:
• delimiter: The file delimiter. Defaults to a comma if not provided.
• hasHeaderRecord: specify true if there is a header record in the source file, with the field headers.
Defaults to false.
• useUppercaseHeaders: specify true if the headers in the source file should be converted to
uppercase. Defaults to false.
• discardBOM: specify true if the source file is in an encoding that uses Byte Order Marks (such as UTF-
8). Defaults to false.
• charSet: specify the character set encoding of the source file. If not specified, Java will attempt to
detect the character set based on the file contents.
• startingLineNumber: specify the line number to start reading the file at. Useful if there are non-data
header lines. Defaults to 1.
• fileHeader: If the source file does not have a header row, specify the field names of the source file as
an array of strings.
• encrypted: whether or not the file is encrypted. Defaults to false.
• enforceCharSetEncoding: Whether to enforce character set encoding validation or not. Defaults to
false.
closeFile()
Close the file used by the reader. This should always be done, to ensure the file is released.
validateFields(fieldArray, areExtraFieldsAllowed)
Compare the field headers found in the source file against those provided in the fieldArray. Will return false
if either there are fields in the fieldArray that are not found in the file, or if there are fields in the source file
that are not in the fieldArray (unless areExtraFieldsAllowed is true). If the fields match, or there are extra
fields in the source file and areExtraFieldsAllowed is true, will return true.
setDateFormat(format)
Specify a date format that will automatically apply to all date fields, unless overridden within calls to
getDateValue. If all dates in the source file will be in the same format, it is recommended to use this method
to indicate the format, then it will not be necessary to specify a format in the individual getDateValue calls.
setDateTimeFormat(format)
Specify a datetime format that will automatically apply to all datetime fields, unless overridden within calls to
getDateTimeValue. If all datetimes in the source file will be in the same format, it is recommended to use
this method to indicate the format, then it will not be necessary to specify a format in the individual
getDateTimeValue calls.
next()
Advance the reader to the next line of the file. If there is a next line, this will return true, otherwise it will
return false.
getStringValue(field, defaultValue)
Extract string data from the current line. Field specifies the name of the field to extract, and defaultValue is
an optional parameter that indicates what to return if the field is empty.
getNumericalValue(field, defaultValue)
Extract data from the current line and automatically attempt to convert it to a number value. Field specifies
the name of the field to extract, and defaultValue is an optional value that indicates what to return if the field
is empty.
FixedWidthFileReader
The FixedWidthFileReader has the following methods available:
open()
Open the file specified when the reader was created.
close()
Close the file used by the reader. This should always be done, to ensure the file is released.
next()
Advance the reader to the next line of the file. If there is a next line, this will return true, otherwise it will
return false.
getData(startPosition, width)
Extract string data from the current line. The startPosition specifies the position in the line to begin from (0 is
the far left) and the width specifies the number of characters.
FileLineReader
The FileLineReader has the following methods available:
FileLineReader(params)
Create a FileLineReader for the specified file. The params object contains the following parameters to read
the file:
Parameter Required Description
filename Yes Name of file to read the text from
charset No The character set the file is encoded with
encrypted No Whether the file is encrypted or not
enforceCharSetEncoding No Whether the character set should be enforced or not.
charset must be provided if this is true
open()
Open the file specified when the reader was created.
close()
Close the file used by the reader. This should always be done, to ensure the file is released.
next()
Advance the reader to the next line of the file. If there is a next line, this will return true, otherwise it will
return false.
getLine()
Extract string data from the current line.
Prerequisites
To use this API, you should be familiar with the following functionality:
• Basic JavaScript coding
• How policy sets function within WorkForce Time and Attendance
Components
This API consists of the following component(s):
• The Distributed JavaScript Library policy FILE_WRITER_API
Setup
No setup is necessary for the File Writer API. The distributed library is automatically available within
WorkForce Time and Attendance.
Use Cases
Writing Delimited Text to a File
The File Writer API allows for delimited text to be written to a file. When operating in this fashion, the API
expects each field in the file to be written using a separate call, and the API will automatically insert the
delimiters as needed:
// Create the file writer; the file name here is an illustrative value
var fileWriter = new FileWriter({ fileName: "export/outputFile.csv" });
// Add each data attribute to be exported for the current record to the file
// buffer
fileWriter.append(record.field1);
fileWriter.append(record.field2);
fileWriter.append(record.field3);
fileWriter.append(record.field4);
fileWriter.append(record.field5);
// Write the buffered record to the file as one delimited line
fileWriter.writeBuffer();
// Now that all of the data has been written, close the file writer
fileWriter.close();
Note: Calls to append() only add the data to a buffer. That buffer does not actually get written to the file until
writeBuffer() is called.
If no additional action is taken, each value written to a delimited file will be automatically enclosed within
double quotes. This behavior can be turned off globally if desired for a file, and then re-enabled on a field-
by-field basis as needed if some fields should have quotes and some should not:
var parms = {
// The file name here is an illustrative value
fileName: "export/outputFile.csv",
// Setting this to false turns off the default behavior of always quoting values
// being appended
useQuotes: false
};
var fileWriter = new FileWriter(parms);
// Add each data attribute to be exported for the current record to the file
// buffer
fileWriter.append(record.field1); // Will not have quotes
fileWriter.append(record.field2, true); // Will have quotes
fileWriter.append(record.field3, false); // Will not have quotes
fileWriter.append(record.field4); // Will not have quotes
fileWriter.append(record.field5, true); // Will have quotes
// Write the buffered record to the file, then close the file writer
fileWriter.writeBuffer();
fileWriter.close();
The writeLine method can instead be used to write a fully formatted line directly to the file. A minimal
sketch, handling the delimiters manually:
// Write the complete record as a single pre-formatted line
fileWriter.writeLine(record.field1 + "," + record.field2 + "," + record.field3);
// Now that all of the data has been written, close the file writer
fileWriter.close();
Note: When using writeLine(), the data is written to the file immediately, as opposed to being buffered for
later writing.
Troubleshooting
The job log of the script using the File Writer API will contain information messages and, in the case of
problems, any error messages generated during processing. This job log should be reviewed if there are any
problems encountered while using the API.
API Reference
Knowledge of JavaScript programming is necessary to best make use of this section.
FileWriter
FileWriter(parms)
Creates a new instance of the File Writer, using the settings specified in the provided parameters.
Parameter Name Description
fileName The full path and file name of the file that should be written to
charsetName Optional character encoding to use for the file. Valid options are:
US-ASCII
ISO-8859-1
UTF-8
UTF-16BE
UTF-16LE
UTF-16
Defaults to UTF-8 if not specified
delimiter Delimiter to use between fields when writing delimited data. Defaults to
comma (,) if not specified.
useQuotes True if all fields being written should be enclosed in quotes when writing
delimited data, false if quotes should be specified on a per-field basis.
Defaults to true if not specified.
lineBreak The character(s) to use between lines in the file. Defaults to a carriage return
plus a line break if not specified.
encryptionAlias The encryption alias to use if the file should be encrypted. Defaults to NONE if
not specified.
writeLine(text)
Writes the specified text to the file, followed by a line break. No special formatting, such as the addition of
quotes or delimiters, will be applied to the text.
append(text, forceQuotes)
Adds the specified text to the buffer for the file. If previous content has already been added to the buffer,
then a delimiter will be added to the buffer before the text content is appended. If the FileWriter has
quoting turned on globally, or if forceQuotes is true, then the text will be wrapped in double quotes before
being appended.
writeBuffer()
Writes the current contents of the buffer to the file, followed by a line break.
close()
Closes the file and stops any further writing from taking place.
FTP API
Definitions of Terms Used in This Section
FTP – File Transfer Protocol (FTP) is a standard network protocol used to transfer files between a client and a
server. This API provides FTP support for WorkForce Time and Attendance.
SFTP – SSH File Transfer Protocol (SFTP) is a standard network protocol used to transfer files between a client
and a server in a secure way using an extension of the Secure Shell (SSH) protocol.
FTPS – FTPS is an extension to the FTP protocol that adds support for the Transport Layer Security (TLS) and
Secure Socket Layer (SSL) cryptographic protocols to securely transfer files.
Overview
The FTP API provides support for transferring files to/from an FTP server from within a WorkForce Time and
Attendance script. It supports using the FTP, SFTP, or FTPS protocols for performing the file transfers.
Typically this would be used as part of a larger process, such as retrieving a file and then processing it to
import data or exporting data to a file and then transferring it to a remote server.
In addition to its abilities to move files between the WorkForce Time and Attendance server and the FTP
server, the FTP API can also be used for a number of different file-based operations against the files on either
the WorkForce Time and Attendance server or FTP server. This includes determining which files are located
on the server, determining if a specific file exists, creating directories, renaming files, deleting files, moving
files, and copying files.
Prerequisites
To use this API, you should be familiar with the following functionality:
• How to create policies within the Policy Editor
• Basic file transfer concepts, particularly those of getting files and putting files
• Basic JavaScript coding
Components
This API consists of the following component(s):
• The FTP_SCRIPT_API distributed JavaScript library
• One or more FTP Connection Info policies in Policy Editor
Setup
Before using the FTP API, at least one FTP Connection Info policy must be created in the Policy Editor. An
FTP Connection Info policy contains the following fields:
Field Description
Connection Type Specifies what type of file transfer connection should be established.
Host Specifies the host name of the FTP server that should be connected to.
Port Specifies which port the communication should use.
User ID Specifies which user ID should be used to establish the connection.
Authentication Specifies whether a password or a private key file should be used as part of the
Method authentication credentials
Password Specifies which password should be used to establish the connection.
Private Key File Path Specifies the location of the file containing the private key file that should be used
to establish the connection. (The key contained in the file must be in OpenSSH
format.)
Table 45: Fields in FTP Connection Info policy
Use Cases
Retrieving a File from a Remote Server
The FTP API can be used to copy a file from a remote file server to the local server, where it can be read by
the same or other interface processes. This is needed because WorkForce Time and Attendance is only able
to read from files that are present on the local server. The original file on the remote file server remains
unaltered by this process.
// Define which FTP Connection Info policy stores the connection information to use
var ftpConnectionInfoPolicy = "SAMPLE_CONNECTION";
// Create a new FTP Client using the specified FTP Connection Info policy
var ftpClient = new FTPClient(ftpConnectionInfoPolicy);
// Define the location the file should be stored on the local server
var localDirectory = "interface/incoming";
var localFileName = "importData.csv";
// Define whether an existing file with that name on the local server should be
// overwritten or not
var replaceExisting = true;
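A hedged sketch of performing the transfer with the variables above; the method name and argument
order are assumptions for illustration only, not confirmed by this guide:
// Remote location of the file (illustrative values)
var remoteDirectory = "outbound/payrollFiles";
var remoteFileName = "importData.csv";
// Retrieve the file (the getRemoteFile name is an assumption)
ftpClient.getRemoteFile(remoteDirectory, remoteFileName, localDirectory, localFileName, replaceExisting);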
// Define which FTP Connection Info policy stores the connection information to use
var ftpConnectionInfoPolicy = "SAMPLE_CONNECTION";
// Create a new FTP Client using the specified FTP Connection Info policy
var ftpClient = new FTPClient(ftpConnectionInfoPolicy);
// Define the location the file should be stored on the local server
var localDirectory = "interface/incoming";
// Specify a suffix that should be added to the end of the file names
var suffix = "_unprocessed";
// Define whether an existing file with that name on the local server should be
// overwritten or not
var replaceExisting = true;
In this example, the local directory would end up with files named “file1_unprocessed.csv”,
“file2_unprocessed.csv”, “file3_unprocessed.csv”, and “file4_unprocessed.csv” after the operation is
complete.
// Define which FTP Connection Info policy stores the connection information to use
var ftpConnectionInfoPolicy = "SAMPLE_CONNECTION";
// Create a new FTP Client using the specified FTP Connection Info policy
var ftpClient = new FTPClient(ftpConnectionInfoPolicy);
// Define the location the file should be stored on the local server
var localDirectory = "interface/incoming";
// Specify a suffix that should be added to the end of the file names
var suffix = "_unprocessed";
// Define whether an existing file with that name on the local server should be
// overwritten or not
var replaceExisting = true;
Sending a File to a Remote Server
The FTP API can also be used to copy a file from the local server to the remote file server, for instance to
deliver an export file. The following script examples demonstrate this functionality:
// Define which FTP Connection Info policy stores the connection information to use
var ftpConnectionInfoPolicy = "SAMPLE_CONNECTION";
// Create a new FTP Client using the specified FTP Connection Info policy
var ftpClient = new FTPClient(ftpConnectionInfoPolicy);
// Define the location the file should be stored on the remote file server
var remoteDirectory = "inbound/payrollFiles";
var remoteFileName = "exportData.csv";
// Define whether an existing file with that name on the local server should be
// overwritten or not
var replaceExisting = true;
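A hedged sketch of performing the transfer with the variables above; as before, the method name and
argument order are assumptions, not confirmed by this guide:
// Local location of the file to send (illustrative values)
var localDirectory = "interface/outgoing";
var localFileName = "exportData.csv";
// Send the file (the putLocalFile name is an assumption)
ftpClient.putLocalFile(localDirectory, localFileName, remoteDirectory, remoteFileName, replaceExisting);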
// Define which FTP Connection Info policy stores the connection information to use
var ftpConnectionInfoPolicy = "SAMPLE_CONNECTION";
// Create a new FTP Client using the specified FTP Connection Info policy
var ftpClient = new FTPClient(ftpConnectionInfoPolicy);
// Define the location the file should be stored on the remote file server
var remoteDirectory = "inbound/payrollFiles";
// Specify a suffix that should be added to the end of the file names
var suffix = "_unprocessed";
// Define whether an existing file with that name on the local server should be
// overwritten or not
var replaceExisting = true;
In this example, the remote file directory would end up with files named “file1_unprocessed.csv”,
“file2_unprocessed.csv”, “file3_unprocessed.csv”, and “file4_unprocessed.csv” after the operation is
complete.
// Define which FTP Connection Info policy stores the connection information to use
var ftpConnectionInfoPolicy = "SAMPLE_CONNECTION";
// Create a new FTP Client using the specified FTP Connection Info policy
var ftpClient = new FTPClient(ftpConnectionInfoPolicy);
// Define the location the file should be stored on the remote file server
var remoteDirectory = "inbound/payrollFiles";
// Specify a suffix that should be added to the end of the file names
var suffix = "_unprocessed";
// Define whether an existing file with that name on the local server should be
// overwritten or not
var replaceExisting = true;
Listing Files on a Server
The FTP API can also determine which files are located in a directory on the local or remote server,
optionally matching only names that fit a specified pattern. The following script example demonstrates the
setup for this functionality:
// Define which FTP Connection Info policy stores the connection information to use
var ftpConnectionInfoPolicy = "SAMPLE_CONNECTION";
// Create a new FTP Client using the specified FTP Connection Info policy
var ftpClient = new FTPClient(ftpConnectionInfoPolicy);
// Specify whether any directories with names that match the pattern should be
// included in the results
var includeDirectories = false;
Renaming Files
The FTP API can be used to rename files on the local or remote file servers. The following script examples
demonstrate this functionality:
// Define which FTP Connection Info policy stores the connection information to use
var ftpConnectionInfoPolicy = "SAMPLE_CONNECTION";
// Create a new FTP Client using the specified FTP Connection Info policy
var ftpClient = new FTPClient(ftpConnectionInfoPolicy);
// Define which FTP Connection Info policy stores the connection information to use
var ftpConnectionInfoPolicy = "SAMPLE_CONNECTION";
// Create a new FTP Client using the specified FTP Connection Info policy
var ftpClient = new FTPClient(ftpConnectionInfoPolicy);
// Specify the pattern to match all of the files in the directory that should be renamed
var pattern = "file_(\\d{14})\\.csv";
// Specify the suffix that should be added to all of the file names
var suffix = "_processed";
// Define which FTP Connection Info policy stores the connection information to use
var ftpConnectionInfoPolicy = "SAMPLE_CONNECTION";
// Create a new FTP Client using the specified FTP Connection Info policy
var ftpClient = new FTPClient(ftpConnectionInfoPolicy);
// Define which FTP Connection Info policy stores the connection information to use
var ftpConnectionInfoPolicy = "SAMPLE_CONNECTION";
// Create a new FTP Client using the specified FTP Connection Info policy
var ftpClient = new FTPClient(ftpConnectionInfoPolicy);
// Specify the pattern to match all of the files in the directory that should be renamed
var pattern = "file_(\\d{14})\\.csv";
// Specify the suffix that should be added to all of the file names
var suffix = "_processed";
Moving Files
In addition to moving files to/from the local server and the remote file server, the FTP API can also be used to
move files between directories on the same server (either local or remote). In this case, the original file is
removed from the directory it was originally in and placed into a different directory on the same server. The
following script examples demonstrate this functionality:
// Define which FTP Connection Info policy stores the connection information to use
var ftpConnectionInfoPolicy = "SAMPLE_CONNECTION";
// Create a new FTP Client using the specified FTP Connection Info policy
var ftpClient = new FTPClient(ftpConnectionInfoPolicy);
// Specify the name the file should have in the new directory
var newFileName = "employeeData.csv";
// Define whether an existing file with that name on the local server should be
// overwritten or not
var replaceExisting = true;
// Define which FTP Connection Info policy stores the connection information to use
var ftpConnectionInfoPolicy = "SAMPLE_CONNECTION";
// Create a new FTP Client using the specified FTP Connection Info policy
var ftpClient = new FTPClient(ftpConnectionInfoPolicy);
// Specify the pattern to match all of the files in the directory that should be moved
var pattern = "file_(\\d{14})\\.csv";
// Specify the suffix that should be added to all of the file names
var suffix = "_processed";
// Define whether an existing file with that name on the local server should be
// overwritten or not
var replaceExisting = true;
// Define which FTP Connection Info policy stores the connection information to use
var ftpConnectionInfoPolicy = "SAMPLE_CONNECTION";
// Create a new FTP Client using the specified FTP Connection Info policy
var ftpClient = new FTPClient(ftpConnectionInfoPolicy);
// Specify the name the file should have in the new directory
var newFileName = "targetFile.csv";
// Define whether an existing file with that name on the remote server should be
// overwritten or not
var replaceExisting = true;
// Define which FTP Connection Info policy stores the connection information to use
var ftpConnectionInfoPolicy = "SAMPLE_CONNECTION";
// Create a new FTP Client using the specified FTP Connection Info policy
var ftpClient = new FTPClient(ftpConnectionInfoPolicy);
// Specify the pattern to match all of the files in the directory that should be moved
var pattern = "file_(\\d{14})\\.csv";
// Specify the suffix that should be added to all of the file names
var suffix = "_processed";
// Define whether an existing file with that name on the remote server should be
// overwritten or not
var replaceExisting = true;
Copying Files
The FTP API includes functionality for creating copies of files on either the local or remote file servers. With
this behavior, the original file remains untouched in its original directory and a second copy of the file is
created in either the same or a different directory on the same server. The following script examples
demonstrate this behavior:
// Define which FTP Connection Info policy stores the connection information to use
var ftpConnectionInfoPolicy = "SAMPLE_CONNECTION";
// Create a new FTP Client using the specified FTP Connection Info policy
var ftpClient = new FTPClient(ftpConnectionInfoPolicy);
// Specify the name that the file should have in the new directory
var newFileName = existingFileName;
// Define whether an existing file with that name on the local server should be
// overwritten or not
var replaceExisting = true;
// Define which FTP Connection Info policy stores the connection information to use
var ftpConnectionInfoPolicy = "SAMPLE_CONNECTION";
// Create a new FTP Client using the specified FTP Connection Info policy
var ftpClient = new FTPClient(ftpConnectionInfoPolicy);
// Specify the pattern to match all of the files in the directory that should be copied
var pattern = "file_(\\d{14})\\.csv";
// Specify the suffix that should be added to all of the file names
var suffix = "";
// Define whether an existing file with that name on the local server should be
// overwritten or not
var replaceExisting = true;
// Define which FTP Connection Info policy stores the connection information to use
var ftpConnectionInfoPolicy = "SAMPLE_CONNECTION";
// Create a new FTP Client using the specified FTP Connection Info policy
var ftpClient = new FTPClient(ftpConnectionInfoPolicy);
// Specify the name that the file should have in the new directory
var newFileName = existingFileName;
// Define whether an existing file with that name on the remote server should be
// overwritten or not
var replaceExisting = true;
// Define which FTP Connection Info policy stores the connection information to use
var ftpConnectionInfoPolicy = "SAMPLE_CONNECTION";
// Create a new FTP Client using the specified FTP Connection Info policy
var ftpClient = new FTPClient(ftpConnectionInfoPolicy);
// Specify the pattern to match all of the files in the directory that should be copied
var pattern = "file_(\\d{14})\\.csv";
// Specify the suffix that should be added to all of the file names
var suffix = "";
// Define whether an existing file with that name on the remote server should be
// overwritten or not
var replaceExisting = true;
Deleting Files
In addition to copying or moving files, the FTP API can also be used to delete files on either the local server or
the remote file server. The following script examples demonstrate this behavior:
// Define which FTP Connection Info policy stores the connection information to use
var ftpConnectionInfoPolicy = "SAMPLE_CONNECTION";
// Create a new FTP Client using the specified FTP Connection Info policy
var ftpClient = new FTPClient(ftpConnectionInfoPolicy);
// Define which FTP Connection Info policy stores the connection information to use
var ftpConnectionInfoPolicy = "SAMPLE_CONNECTION";
// Create a new FTP Client using the specified FTP Connection Info policy
var ftpClient = new FTPClient(ftpConnectionInfoPolicy);
// Specify the pattern that should be matched to find the files to be deleted
var pattern = "old(.+)\\.csv";
// Specify whether directories with names matching the specified pattern should
// be deleted as well
var deleteDirectories = false;
// Delete all of the files that match the pattern from the indicated directory
ftpClient.deleteLocalFiles(directory, pattern, deleteDirectories);
Note: If deleting directories, a directory with a matching name must be empty in order to be deleted by this
process.
// Define which FTP Connection Info policy stores the connection information to use
var ftpConnectionInfoPolicy = "SAMPLE_CONNECTION";
// Create a new FTP Client using the specified FTP Connection Info policy
var ftpClient = new FTPClient(ftpConnectionInfoPolicy);
// Define which FTP Connection Info policy stores the connection information to use
var ftpConnectionInfoPolicy = "SAMPLE_CONNECTION";
// Create a new FTP Client using the specified FTP Connection Info policy
var ftpClient = new FTPClient(ftpConnectionInfoPolicy);
// Specify the pattern that should be matched to find the files to be deleted
var pattern = "old(.+)\\.csv";
// Specify whether directories with names matching the specified pattern should
// be deleted as well
var deleteDirectories = false;
// Delete all of the files that match the pattern from the indicated directory
ftpClient.deleteRemoteFiles(directory, pattern, deleteDirectories);
Note: If deleting directories, a directory with a matching name must be empty in order to be deleted by this
process.
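The directory management methods described in the API Reference below follow the same pattern. A
minimal sketch (the directory paths are illustrative):
// Create a new FTP Client using the specified FTP Connection Info policy
var ftpClient = new FTPClient("SAMPLE_CONNECTION");
// Create a working directory on the local server and a matching one on the remote server
ftpClient.createLocalDirectory("interface/work");
ftpClient.createRemoteDirectory("inbound/work");
// Remove both directories once processing is complete
ftpClient.deleteLocalDirectory("interface/work");
ftpClient.deleteRemoteDirectory("inbound/work");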
Troubleshooting
The job log of the script using the FTP API will contain information messages and, in the case of problems, any
error messages generated during processing. This job log should be reviewed if there are any problems
encountered while using the API.
Cannot rename local file that does not exist (FILE_NAME)
Cause: The script attempted to rename a file on the local server that does not exist.
Solution: Ensure that all files being renamed exist.
Cannot rename remote file that does not exist (FILE_NAME)
Cause: The script attempted to rename a file on the remote server that does not exist.
Solution: Ensure that all files being renamed exist.
Cannot move local file FILE_NAME. File does not exist
Cause: The script attempted to move a file on the local server that does not exist.
Solution: Ensure that all files being moved exist.
Error moving local file. File FILE_NAME in DIRECTORY cannot be moved to NEW_DIRECTORY with name
NEW_NAME as the file already exists. Either set replaceExisting = true or choose a new destination file name
Cause: The script attempted to move a file on the local server to a new location when a file with the target
name already existed in that new location.
Solution: Ensure that no files already exist with the target name in the directory the file is to be moved to,
or set replaceExisting to true to allow the API to overwrite the existing file.
Cannot move remote file FILE_NAME. File does not exist
Cause: The script attempted to move a file on the remote server that does not exist.
Solution: Ensure that all files being moved exist.
Error moving remote file. File FILE_NAME in DIRECTORY cannot be moved to NEW_DIRECTORY with name
NEW_NAME as the file already exists. Either set replaceExisting = true or choose a new destination file name
Cause: The script attempted to move a file on the remote server to a new location when a file with the
target name already existed in that new location.
Solution: Ensure that no files already exist with the target name in the directory the file is to be moved to,
or set replaceExisting to true to allow the API to overwrite the existing file.
Table 46: FTP API typical error messages, root problems, and solutions
API Reference
Knowledge of JavaScript programming is necessary to best make use of this section.
FTPClient
FTPClient(ftpConnInfoPolicy)
Creates a new FTPClient instance, pointing at the FTP server defined by the specified FTP Connection Info
policy.
remoteFileExists(remoteDir, remoteFileName)
Determines whether or not a file of the specified name exists in the indicated directory on the remote file
server.
localFileExists(localDir, localFileName)
Determines whether or not a file of the specified name exists in the indicated directory on the local server.
Note: For the file copy operations, if a file already exists with the final name of the file being copied, the
API can either overwrite that existing file or generate an error, depending on the value of replaceExisting.
deleteRemoteFile(remoteDir, remoteFileName)
Deletes the specified file from the remote file server.
deleteLocalFile(localDir, localFileName)
Deletes the specified file from the local server.
createLocalDirectory(directoryPath)
Creates a new directory on the local server with the indicated name.
deleteLocalDirectory(directoryPath)
Deletes the directory with the specified name from the local server.
createRemoteDirectory(directoryPath)
Creates a new directory on the remote file server with the indicated name.
deleteRemoteDirectory(directoryPath)
Deletes the directory with the specified name from the remote file server.
setDebug(enableDebugging)
Specifies whether debug output should be written to the job log for actions performed by the API.
getHost()
Returns the host name specified in the FTP Connection Info policy used to instantiate this object.
getPort()
Returns the port number specified in the FTP Connection Info policy used to instantiate this object.
getUserID()
Returns the user ID specified in the FTP Connection Info policy used to instantiate this object.
getProtocolType()
Returns the protocol type (either FTP, SFTP, FTPS implicit, or FTPS explicit) specified in the FTP
Connection Info policy used to instantiate this object.
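The following sketch shows one way these methods might be combined. The library include, the FTP_CONN_INFO policy name, and the file locations are illustrative assumptions rather than values from a real configuration:
//The library and policy names used here are illustrative assumptions
includeDistributedPolicy("FTP_LIBRARY");
//Point the client at the server defined by a hypothetical FTP Connection Info policy
var ftpClient = new FTPClient("FTP_CONN_INFO");
//Write debug output to the job log while developing the script
ftpClient.setDebug(true);
log.info("Connecting to " + ftpClient.getHost() + ":" + ftpClient.getPort() +
    " as " + ftpClient.getUserID() + " using " + ftpClient.getProtocolType());
//Remove a previously processed file from the remote server, if it is present
if (ftpClient.remoteFileExists("outgoing/", "export.csv")) {
    ftpClient.deleteRemoteFile("outgoing/", "export.csv");
}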
Prerequisites
To use this API, you should be familiar with the following functionality:
• Basic JavaScript coding
• Background knowledge of how the Generic Export Specification/Section policies work, which is needed
for some of the functions of the API
Components
This API consists of the following component(s):
1. The GENERIC_EXPORT_LIBRARY Distributed JavaScript library
2. The FTP_LIBRARY Distributed JavaScript library, included by GENERIC_EXPORT_LIBRARY to perform the
transfer of files to a remote server using SFTP
3. The FILE_MANAGER_API Distributed JavaScript library, also included by GENERIC_EXPORT_LIBRARY, so
that the data can be written to file
Setup
No setup is necessary for the Generic Export Library. The distributed library is automatically available within
WorkForce Time and Attendance.
Use Cases
Exporting Data to a File
The Generic Export Library allows a script to export the data from a specific table, or from the table
identified by the first Generic Export Section in the Generic Export Specification. Before exporting the
records, the API also allows you to do some basic formatting.
The following settings control the export of the records:
Property Name Required Description
fileName Y Name of the file to export the data to.
tableName Y Name of the table to export data from.
sqlCriteria N Additional SQL selection criteria.
singleRecordFunction N Function identifying processing to perform for each record in
the table
postProcessFunction N Function identifying processing to perform once all records
have been processed
encryptionAlias N String identifying which encryption key to use from the
database
Example 1: Exporting data from the table identified by the first Generic Export Section in the
Generic Export Specification
The following script example demonstrates what the implementation might look like:
includeDistributedPolicy("GENERIC_EXPORT_LIBRARY");
//Set to true to use calc changes for the specified date range
exportAPI.useCalcChangeSetForSpecifiedRange(true);
TOTALS.recordCount++;
}
TOTALS["EST_GROSS"] += parseFloat(EST_GROSS);
TOTALS["TL_QUANTITY"] += parseFloat(TL_QUANTITY);
return (H + DELIMITER +
record.DESCRIPTION + DELIMITER +
record.OTHER_STRING1 + DELIMITER +
record.OTHER_STRING2 + DELIMITER +
record.OTHER_STRING3 + DELIMITER +
record.OTHER_STRING4 + DELIMITER +
PRIM_JOB_IND + DELIMITER +
record.LD1 + DELIMITER +
record.LD8 + DELIMITER +
record.LD2 + DELIMITER +
record.LD5 + DELIMITER +
record.LD6 + DELIMITER +
record.LD4 + DELIMITER +
exportAPI.formatDate("" + record.OTHER_DATE1, STD_DATE_FORMAT, EXPORT_DATE_FORMAT) +
DELIMITER +
exportAPI.formatDate("" + record.OTHER_DATE2, STD_DATE_FORMAT, EXPORT_DATE_FORMAT) +
DELIMITER +
exportAPI.formatDate("" + record.OTHER_DATE3, STD_DATE_FORMAT, EXPORT_DATE_FORMAT) +
DELIMITER +
exportAPI.formatDate("" + record.OTHER_DATE4, STD_DATE_FORMAT, EXPORT_DATE_FORMAT) +
DELIMITER +
OFF_CYCLE + DELIMITER +
SEQ_NBR + DELIMITER +
RT_SOURCE + DELIMITER +
exportAPI.formatDate("" + record.OTHER_DATE5, STD_DATE_FORMAT, EXPORT_DATE_FORMAT) +
DELIMITER +
exportAPI.formatDate("" + record.OTHER_DATE6, STD_DATE_FORMAT, EXPORT_DATE_FORMAT) +
DELIMITER +
record.OTHER_STRING6 + DELIMITER +
record.OTHER_STRING7 + DELIMITER +
AH_AMENDED_SEQ_NBR + DELIMITER +
EST_GROSS + DELIMITER +
CURRENCY_CD + DELIMITER +
TL_QUANTITY);
}
exportAPI.exportRecord(result);
}
//Calling API function to export data from the table identified by the first Generic Export
//Section in the Generic Export Specification
exportAPI.exportDataForSingleSourceTable(fileName, null, processCurrentRecord,
writeSummaryRecord, null);
//Sql Criteria
var sqlCriteria = "CALC_BOOLEAN1 = 'F'";
//Connection Object
var connection = new Connection();
//Calling API function to export data from the table specified above
exportAPI.exportDataFromTable(connection, tableName, fileName, sqlCriteria, exportFunction,
null, null);
//Files to move
var fileName = assignmentDescription + "_PAY_EXPORT_" + today + ".csv";
var totalsFileName = assignmentDescription + "_SUMMARY_TOTALS_" + today + ".csv";
.
.
.
includeDistributedPolicy("GENERIC_EXPORT_LIBRARY");
function getQuarter(date) {
var month = date.getMonth();
//Follows USA government fiscal calendar and is one quarter ahead of the physical calendar
var quarter = "2";
if (month > 9) {
quarter = "1"
} else if (month > 6) {
quarter = "4"
} else if (month > 3) {
quarter = "3"
}
return quarter;
}
.
.
columnMap.put("CALC_STRING8", quarter);
//SQL Criteria
var sqlCriteria = "CALC_BOOLEAN1 = 'F'";
if (exportAPI.isObjectDefined(exportData)) {
//Do something with the data of the table, e.g. write it to a file
}
}
Example 1: Clear all existing data from the tables being written to by this export
includeDistributedPolicy("GENERIC_EXPORT_LIBRARY");
//Clear all existing data from the tables being written to by this export
exportAPI.clearAllSourceTables();
//SQL Criteria
var sqlCriteria = "";
Filter and stop the record from export when a specified field is zero or blank
The API automatically stops the record from being exported when the specified field is zero or blank.
includeDistributedPolicy("GENERIC_EXPORT_LIBRARY");
//If the specified field is blank, put an empty string in its place
if (exportAPI.fieldIsBlank(blankField)) {
columnMap.put("Reg Hours", "");
}
//If the specified field is zero, put an empty string in its place
if (exportAPI.fieldIsZero(zeroField)) {
columnMap.put("Hours 3 Amount", "");
}
exportAPI.filterWhenAllFieldsAreBlank(blankField);
exportAPI.filterWhenAllFieldsAreZero(zeroField);
exportAPI.filterWhenAnyFieldsAreBlank(blankField);
exportAPI.filterWhenAnyFieldsAreZero(zeroField);
//Using the setFieldGID method to set a Generated ID value for the field
exportAPI.setFieldGID("SYSTEM_RECORD_ID");
Troubleshooting
The job log of the script using the Generic Export Library will contain information messages and, in the case
of problems, any error messages generated during processing. This job log should be reviewed if there are
any problems encountered while using the API.
Error Message: No destination tables defined for Generic Export Sections. Unable to export data
Problem: No destination table was defined under Generic Export Specification -> Sections when the policy was configured.
Solution: Define the destination table if this export is going to write to a DB table.

Error Message: No Writer is defined
Problem: No writer object was defined before the record was exported.
Solution: Call exportDataFromTable(...) or exportDataForSingleSourceTable(...) before calling the exportRecord(record) function, because either of those two functions will initialize the writer object.
API Reference
GenericExportSpecAPI:
The GenericExportSpecAPI has the following methods available:
useCalcChangeSetForSpecifiedRange(useCCS)
For exports run using a specified date range, the useCCS (Boolean) value defines whether the export will
use the calc change sets for the periods that overlap with that date range.
clearAllSourceTables()
Clears all existing data from the tables being written to by this export.
getTables()
Returns the names of the tables that will be written to by this Generic Export Specification.
exportRecord(record)
Writes a single record to the export file, followed by a line break.
formatNumber(number, formatStr)
Converts the provided number into a target format.
roundNumberToNearestQuarter(number)
Rounds the specified number to the nearest quarter (e.g., 4.30 rounds to 4.25).
roundNumber(number, precision)
Rounds a number to the specified precision.
getExportParms()
Returns the export parameters associated with this particular export job.
isObjectDefined(object)
Returns false if the specified object is null or undefined, true otherwise.
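For illustration, here is a brief sketch of the number-handling methods, assuming the exportAPI object is available as in the earlier examples:
//Rounds to the nearest quarter: 4.30 is closer to 4.25 than to 4.50
var hours = exportAPI.roundNumberToNearestQuarter(4.30);
//Rounds to two decimal places of precision: 3.14159 becomes 3.14
var rate = exportAPI.roundNumber(3.14159, 2);
//Guard against null or undefined values before using them
if (exportAPI.isObjectDefined(hours)) {
    log.info("Rounded hours: " + hours);
}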
GenericExportSectionAPI:
The GenericExportSectionAPI has the following methods available:
setFieldGID(fieldName)
Used to set a field in the Generic Export Section to a new Generated ID value.
filterWhenFieldIsBlank(field)
Stops the current record from being exported if the specified field is blank.
filterWhenFieldIsZero(field)
Stops the current record from being exported if the specified field contains a zero value.
filterWhenAnyFieldsAreBlank(fields)
Stops the current record from being exported if any of the specified fields do not contain data.
filterWhenAnyFieldsAreZero(fields)
Stops the current record from being exported if any of the specified fields contains a zero value.
filterWhenAllFieldsAreBlank(fields)
Stops the current record from being exported if all the specified fields do not contain values.
filterWhenAllFieldsAreZero(fields)
Stops the current record from being exported if all the specified fields contain a zero value.
fieldIsBlank(field)
Determines if the specified field is empty.
fieldIsZero(field)
Determines if the specified field contains a zero value.
Prerequisites
To use this API, you should be familiar with the following functionality:
• Basic JavaScript coding
Components
This API consists of the following component(s):
1. The INCREMENTAL_EXPORT_API Distributed JavaScript library.
• This automatically includes the API_UTIL Distributed JavaScript Library.
2. The MatchCondition, which defines a condition to select assignments.
3. Parms, which is a JS object containing parameters used for initializing the API. Expected properties
include the following:
• exportTarget {String} Unique name for export process. Used to differentiate between
multiple incremental exports.
• enableDebugLogging {Boolean} Set to true if debug logging should be generated; otherwise
false. Defaults to false.
• exportType {String} Determines the type of the export. TIME_SHEET_DETAIL,
SCHEDULE_DETAIL, and TIME_SHEET_OUTPUT are valid values. Defaults to
TIME_SHEET_OUTPUT.
Setup
No setup is necessary for the Incremental Export API. The distributed library is automatically available within
WT&A.
Use Cases
Computing the Differences in TIME_SHEET_OUTPUT,
TIME_SHEET_DETAIL, and SCHEDULE_DETAIL Against an Assignment
Example 1: Computing differences in TIME_SHEET_OUTPUT against an assignment
includeDistributedPolicy("INCREMENTAL_EXPORT_API");
var apiParms = {
exportTarget: "Clarity Incremental Export",
enableDebugLogging: false
};
var condition = new MatchCondition("ASGNMT", "OTHER_STRING14", MatchOperator.EQUALS, "DIV17");
var differencesParms = {
startDate: WFSDate.today().addDays(-30),
endDate: WFSDate.today().addDays(999),
negateNumericFields: true,
retractOnTerm: true,
priorPeriodMode: "PRIOR_PERIOD_ON_CHANGE",
gracePeriod: 1440,
logData: true,
filter: function(record) { return (inArray(["REG","SICK"],record.pay_code)); },
orderFields: ["ID", "TIME_OFF_DATE"]
};
var api = new IncrementalExportAPI(apiParms);
api.computeDifferences(condition, differencesParms);
The following example computes the differences for a specific assignment group, limiting the comparison to specific fields:
includeDistributedPolicy("INCREMENTAL_EXPORT_API");
var apiParms = {
exportTarget: "Clarity Incremental Export",
enableDebugLogging: false
};
var condition = new MatchCondition("ASGNMT", "OTHER_STRING14", MatchOperator.EQUALS, "DIV17");
var differencesParms = {
startDate: WFSDate.today().addDays(-30),
endDate: WFSDate.today().addDays(999),
negateNumericFields: true,
retractOnTerm: true,
priorPeriodMode: "PRIOR_PERIOD_ON_CHANGE",
gracePeriod: 1440,
logData: true,
filter: function(record) { return (inArray(["REG","SICK"],record.pay_code)); },
orderFields: ["ID", "TIME_OFF_DATE"],
diffSpecificFields: true,
fieldNames: ["pay_code"]
};
var api = new IncrementalExportAPI(apiParms);
var assignmentGroup = "1038645186";
api.computeDifferencesForGroup(assignmentGroup, differencesParms);
The following example retrieves the details of the differences computed for TIME_SHEET_DETAIL:
includeDistributedPolicy("INCREMENTAL_EXPORT_API");
var apiParms = {
exportTarget: "Clarity Incremental Export",
enableDebugLogging: false,
exportType: "TIME_SHEET_DETAIL"
};
var condition = new MatchCondition("ASGNMT", "OTHER_STRING14", MatchOperator.EQUALS, "DIV17");
var differencesParms = {
startDate: WFSDate.today().addDays(-30),
endDate: WFSDate.today().addDays(999),
retractOnTerm: true,
priorPeriodMode: "PRIOR_PERIOD_ON_CHANGE",
gracePeriod: 1440,
logData: true,
filter: function(record) { return (inArray(["REG","SICK"],record.pay_code)); },
orderFields: ["ID", "TIME_OFF_DATE"]
};
var parms = {
filter: function(record) { return (inArray(["REG","SICK"],record.pay_code)); },
orderFields: ["ID", "TIME_OFF_DATE"]
};
var api = new IncrementalExportAPI(apiParms);
var batchId = api.computeDifferences(condition, differencesParms);
var exportDetails = api.getExportDetails(batchId, parms);
The following example retrieves the properties of an export batch for SCHEDULE_DETAIL:
includeDistributedPolicy("INCREMENTAL_EXPORT_API");
var apiParms = {
exportTarget: "Clarity Incremental Export",
enableDebugLogging: false,
exportType: "SCHEDULE_DETAIL"
};
var condition = new MatchCondition("ASGNMT", "OTHER_STRING14", MatchOperator.EQUALS, "DIV17");
var differencesParms = {
startDate: WFSDate.today().addDays(-30),
endDate: WFSDate.today().addDays(999),
retractOnTerm: true,
priorPeriodMode: "PRIOR_PERIOD_ON_CHANGE",
gracePeriod: 1440,
logData: true,
filter: function(record) { return (inArray(["REG","SICK"],record.pay_code)); },
orderFields: ["ID", "TIME_OFF_DATE"]
};
var api = new IncrementalExportAPI(apiParms);
var batchId = api.computeDifferences(condition, differencesParms);
api.getExportProperties(batchId);
The following example rolls back all of the differences computed by an export batch:
includeDistributedPolicy("INCREMENTAL_EXPORT_API");
var apiParms = {
exportTarget: "Clarity Incremental Export",
enableDebugLogging: false
};
var condition = new MatchCondition("ASGNMT", "OTHER_STRING14", MatchOperator.EQUALS, "DIV17");
var differencesParms = {
startDate: WFSDate.today().addDays(-30),
endDate: WFSDate.today().addDays(999),
retractOnTerm: true,
priorPeriodMode: "PRIOR_PERIOD_ON_CHANGE",
gracePeriod: 1440,
logData: true,
filter: function(record) { return (inArray(["REG","SICK"],record.pay_code)); },
orderFields: ["ID", "TIME_OFF_DATE"]
};
var api = new IncrementalExportAPI(apiParms);
var batchId = api.computeDifferences(condition, differencesParms);
api.rollbackExport(batchId);
Example 2: Rolling back a single record computed by runs of this export for
TIME_SHEET_DETAIL
includeDistributedPolicy("INCREMENTAL_EXPORT_API");
var apiParms = {
exportTarget: "Clarity Incremental Export",
enableDebugLogging: false,
exportType: "TIME_SHEET_DETAIL" };
var condition = new MatchCondition("ASGNMT", "OTHER_STRING14", MatchOperator.EQUALS, "DIV17");
var differencesParms = {
startDate: WFSDate.today().addDays(-30),
endDate: WFSDate.today().addDays(999),
retractOnTerm: true,
priorPeriodMode: "PRIOR_PERIOD_ON_CHANGE",
gracePeriod: 1440,
logData: true,
filter: function(record) { return (inArray(["REG","SICK"],record.pay_code)); },
orderFields: ["ID", "TIME_OFF_DATE"]
};
var parms = {
filter: function(record) { return (inArray(["REG","SICK"],record.pay_code)); },
orderFields: ["ID", "TIME_OFF_DATE"]
};
var api = new IncrementalExportAPI(apiParms);
var batchId = api.computeDifferences(condition, differencesParms);
var exportRecords = api.getExportDetails(batchId, parms);
for (var i = 0; i < exportRecords.length; i++) {
var record = exportRecords[i];
api.rollbackRecord(record);
}
Example 3: Rolling back a single record computed by runs of this export for
SCHEDULE_DETAIL
includeDistributedPolicy("INCREMENTAL_EXPORT_API");
var apiParms = {
exportTarget: "Clarity Incremental Export",
enableDebugLogging: false,
exportType: "SCHEDULE_DETAIL" };
var condition = new MatchCondition("ASGNMT", "OTHER_STRING14", MatchOperator.EQUALS, "DIV17");
var differencesParms = {
startDate: WFSDate.today().addDays(-30),
endDate: WFSDate.today().addDays(999),
retractOnTerm: true,
priorPeriodMode: "PRIOR_PERIOD_ON_CHANGE",
gracePeriod: 1440,
logData: true,
filter: function(record) { return (inArray(["REG","SICK"],record.pay_code)); },
orderFields: ["ID", "TIME_OFF_DATE"]
};
var parms = {
filter: function(record) { return (inArray(["REG","SICK"],record.pay_code)); },
orderFields: ["ID", "TIME_OFF_DATE"]
};
var api = new IncrementalExportAPI(apiParms);
var batchId = api.computeDifferences(condition, differencesParms);
var exportRecords = api.getExportDetails(batchId, parms);
for (var i = 0; i < exportRecords.length; i++) {
var record = exportRecords[i];
api.rollbackRecord(record);
}
The following fragments demonstrate using the personSelectionDateOption parameter with each of the three export types:
var parms = {
startDate: WFSDate.VERY_EARLY_DATE,
endDate: WFSDate.VERY_LATE_DATE,
personSelectionDateOption: Person_selection_date_option.BEGINNING_OF_CURRENT_PERIOD
};
var exportId = api.computeDifferences(condition, parms);
var parms = {
startDate: WFSDate.VERY_EARLY_DATE,
endDate: WFSDate.VERY_LATE_DATE,
personSelectionDateOption: Person_selection_date_option.BEGINNING_OF_CURRENT_PERIOD,
exportType: "TIME_SHEET_DETAIL"
};
var exportId = api.computeDifferences(condition, parms);
var parms = {
startDate: WFSDate.VERY_EARLY_DATE,
endDate: WFSDate.VERY_LATE_DATE,
personSelectionDateOption: Person_selection_date_option.BEGINNING_OF_CURRENT_PERIOD,
exportType: "SCHEDULE_DETAIL"
};
var exportId = api.computeDifferences(condition, parms);
Troubleshooting
The job log of the script using the File Manager API will include informational messages, and in the case of
problems, error messages. This job log should be reviewed if there are problems using the API.
Some common error messages, their causes, and the solution:
Error Message: No value specified for export target
Problem: The API was initialized without any parameters, so the required export target was missing.
Solution: Ensure that IncrementalExportAPI() is called with a parms object.
Problem: The API was initialized with a parms object that did not include an export target.
Solution: Ensure that the parms passed to IncrementalExportAPI() contain an export target.

Error Message: Invalid export type BAD_CHOICE_VALUE specified
Problem: The API was initialized with an invalid export type.
Solution: Valid values for export type are TIME_SHEET_OUTPUT, TIME_SHEET_DETAIL, and SCHEDULE_DETAIL.

Error Message: No value specified for export start date
Problem: computeDifferences was called without the required start date.
Solution: computeDifferences should be called with two parameters: 1) matchCondition and 2) parms, a JS object that must contain a start date.

Error Message: No value specified for export end date
Problem: End date is a required field in the parameters and was not specified.
Solution: Ensure that parms contains a specified end date.

Error Message: Incremental Export start date yyyy-MM-dd is after end date yyyy-MM-dd
Problem: The end date is not allowed to be before the start date.
Solution: Ensure that the start date is earlier than the end date.

Error Message: Value 'X' is not a valid value for choice
Problem: An invalid value was supplied for the prior period mode choice.
Solution: The priorPeriodMode property must contain a valid value.

Error Message: Negative numbers not allowed for grace period
Problem: A negative value was supplied for the grace period.
Solution: The gracePeriod property in parms must not contain a negative value.

Error Message: Incremental export filter function returned result null
Problem: The filter function returned a null value, which is not allowed.
Solution: The filter property in parms is a function which must return a value.

Error Message: Incremental export filter function returned result 5, of type class
Problem: The filter function returned a non-Boolean value, which is not allowed.
Solution: The filter function should return a Boolean value (true or false).

Error Message: Invalid ordering field BAD_FIELD specified. Field does not exist in INCR_EXP_HIST_DETAIL, TIME_SHEET_DETAIL_HIST, SCHEDULE_DETAIL_HIST
Problem: The order field array contains an invalid field name.
Solution: The order field array should contain only valid field names.

Error Message: Invalid ordering field NONEXISTEND_FIELD specified. Field does not exist in INCR_EXP_HIST_DETAIL, TIME_SHEET_DETAIL_HIST, SCHEDULE_DETAIL_HIST
Problem: The order field array contains a mix of valid and invalid field names, which is not allowed.
Solution: The order field array should contain only valid field names.

Error Message: Invalid ordering field C_CALC_STRING2 specified. Field does not exist in INCR_EXP_HIST_DETAIL, TIME_SHEET_DETAIL_HIST, SCHEDULE_DETAIL_HIST
Problem: The order field array contains a data element that is not linked to INCR_EXP_HIST_DETAIL, TIME_SHEET_DETAIL_HIST, or SCHEDULE_DETAIL_HIST.
Solution: The order field array should contain only data elements linked to those tables.

Error Message: Incorrect field name: PAY_STRING specified for the difference. Field does not exist in TIME_SHEET_OUTPUT, TIME_SHEET_DETAIL, SCHEDULE_DETAIL
Problem: The field name provided in the parameter does not map to an actual column of the TIME_SHEET_OUTPUT, TIME_SHEET_DETAIL, or SCHEDULE_DETAIL table.
Solution: Provide a correct field name.
API Reference
Knowledge of JavaScript programming is necessary to best make use of this section.
The Incremental Export API consists of the following component(s):
1. The MatchCondition, which defines a condition to select employee, assignment, or user records
a. This MatchCondition makes use of a MatchOperator which defines a comparison operation
to use in a condition for matching a specified value against the assignment.
2. The IncrementalExportAPI, which provides assorted methods to export incremental changes to
TIME_SHEET_OUTPUT, TIME_SHEET_DETAIL, and SCHEDULE_DETAIL data.
MatchOperator
EQUALS
Only data where the value in the specified field exactly matches the indicated value will be matched by the
condition.
NOT_EQUALS
Only data where the value in the specified field does not match the indicated value will be matched by the
condition.
GREATER_THAN
Only data where the value in the specified field is greater than the indicated value will be matched by the
condition. For string fields this is applied lexicographically, meaning that “3” is greater than “20”.
GREATER_THAN_OR_EQUALS
Only data where the value in the specified field exactly matches or is greater than the indicated value will be
matched by the condition. For string fields this is applied lexicographically, meaning that “3” is greater than
“20”.
LESS_THAN
Only data where the value in the specified field is less than the indicated value will be matched by the
condition. For string fields this is applied lexicographically, meaning that “20” is less than “3”.
LESS_THAN_OR_EQUALS
Only data where the value in the specified field exactly matches or is less than the indicated value will be
matched by the condition. For string fields this is applied lexicographically, meaning that “20” is less than
“3”.
IN
Only data where the value in the specified field exactly matches one of the values in the indicated array of
values will be matched by the condition.
NOT_IN
Only data where the value in the specified field does not match any of the values in the indicated array of
values will be matched by the condition.
LIKE
Only data where the value in the specified field matches the pattern defined by the indicated value will be
matched by the condition.
NOT_LIKE
Only data where the value in the specified field does not match the pattern defined by the indicated value
will be matched by the condition.
BETWEEN
Only data where the value in the specified field falls between the two values defined by the indicated array of
values (inclusive of the two endpoints) will be matched by the condition. For string fields this is applied
lexicographically, meaning that “5” is between “37” and “62”.
MatchCondition
MatchCondition(table, field, operator, values, isNotOperator)
Creates a new MatchCondition for the indicated table/field combination that matches against the specified
value(s) using the indicated MatchOperator.
If the MatchOperator specified is IN or NOT_IN, then values is expected to be an array of all the different
values to be evaluated by the “in” operation. If the MatchOperator specified is BETWEEN, then values is
expected to be an array containing two values: the starting point and ending point of the range to be
evaluated by the “between” operation. For all other MatchOperators, values is expected to be a single
value.
The isNotOperator controls whether the results of the MatchCondition should be negated. If this is set to
true, then a MatchCondition that normally evaluates to false will instead evaluate to true, and a
MatchCondition that normally evaluates to true will instead evaluate to false. This allows for conditions that
don’t have an explicit MatchOperator defined, such as “not between”, to be defined.
and(condition)
Modifies the MatchCondition to return only the intersection of its original condition and the specified
condition.
or(condition)
Modifies the MatchCondition to return the union of its original condition and the specified condition.
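For illustration, the following sketch combines conditions with and() and or(); the field names and values are assumptions carried over from the earlier examples:
//Assignments in division DIV17...
var condition = new MatchCondition("ASGNMT", "OTHER_STRING14", MatchOperator.EQUALS, "DIV17");
//...that also belong to one of two hypothetical pay groups...
condition.and(new MatchCondition("ASGNMT", "OTHER_STRING2", MatchOperator.IN, ["PG1", "PG2"]));
//...or, alternatively, any assignment whose date does NOT fall in the last year
//("not between", built by passing true for isNotOperator)
condition.or(new MatchCondition("ASGNMT", "OTHER_DATE1", MatchOperator.BETWEEN,
    [WFSDate.today().addDays(-365), WFSDate.today()], true));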
IncrementalExportAPI
IncrementalExportAPI(parms)
Creates a new instance of the Incremental Export API, using the provided settings. The available parameters
that can be defined are:
Parameter Description
exportTarget Unique name for the export process. Used to differentiate between multiple
incremental exports a customer may have, so that changes calculated in one export
do not impact the changes calculated by a different export.
enableDebugLogging True if debug logging should be generated, false if not. Defaults to false.
computeDifferences(matchCondition, parms)
Computes the differences in TIME_SHEET_OUTPUT, TIME_SHEET_DETAIL and SCHEDULE_DETAIL since the
last time this export was run for all assignment that match the provided condition, returning the batch ID for
the process that computed the differences. The available parameters that can be defined are as follows:
Parameter Description
startDate Starting date of the range that should be evaluated by the export.
gracePeriod Time in minutes before the previous export run time to use when checking for
timesheet modification. Defaults to 1440.
logData Indicates if information about the differences being computed should be
written to the job log. Defaults to true.
personSelectionDateOption Indicates what dates to filter the person on.
computeDifferencesForGroup(groupId, parms)
Computes the differences in TIME_SHEET_OUTPUT, TIME_SHEET_DETAIL, and SCHEDULE_DETAIL since the
last time this export was run for all assignments present in the specified assignment group, returning the
batch ID for the process that computed the differences. The available parameters that can be defined are as
follows:
Parameter Description
startDate Starting date of the range that should be evaluated by the export.
getExportProperties(batchId)
Returns the details about the incremental export operation specified by batch ID.
getExportDetails(batchId, parms)
Returns the collection of differences that were calculated by the specified export batch. The available
parameters that can be defined are as follows:
Parameter Description
filter Function that accepts a single argument (of type
IncrExportHistoryDetailScriptable) and returns a Boolean value; true if the
record should be kept, false if it should be filtered out.
orderFields Array of fields on the INCR_EXP_HIST_DETAIL, TIME_SHEET_DETAIL_HIST, and
SCHEDULE_DETAIL_HIST (for TIME_SHEET_OUTPUT, TIME_SHEET_DETAIL, and
SCHEDULE_DETAIL, respectively) records that should be used for ordering the
results.
rollbackExport(batchId)
Undoes all the differences computed by runs of this export after and including the specified batch ID,
allowing these records to be exported again as though they had never been exported previously.
rollbackRecord(record)
Removes the differences associated with a single record, so that its source record can be exported again in a
future export. This would be used if a single record has an error that should stop it from being exported, so
that the next run of the export picks up that same change again (presumably after the issue that caused the
error has been fixed).
Prerequisites
To use this API, you should be familiar with the following functionality:
• Basic JavaScript coding
Components
This API consists of the following component(s):
• The distributed JavaScript library policy INSTANCE_INFO_API
Setup
No setup is necessary for the Instance Info API. The distributed library is automatically available within
WorkForce Time and Attendance.
Use Cases
Reading Configuration Properties
The Instance Info API can be used to read settings from the config.xml file from within a script. These
settings correspond to the configuration properties that were defined for the instance in the build.properties
file, plus any defaults for properties that weren’t set explicitly. Some examples of uses for this information
would be for determining which environment (Prod, Test, Dev, etc.) the script is running in, or for checking
whether the script is running on an instance with the Scheduler enabled or not.
The following script example demonstrates reading configuration properties using the Instance Info API:
// Create the API object and read the whitelisted ENVIRONMENT and SCHEDULER_RUN
// properties (see the API Reference below for the full whitelist)
var api = new InstanceInfoAPI();
var environment = api.getBuildProperty("ENVIRONMENT");
var schedulerEnabled = api.getBuildProperty("SCHEDULER_RUN");
if (environment == "PROD") {
log.info("Script is running in the Production environment");
}
else {
log.info("Script is not running in the Production environment");
}
if (schedulerEnabled) {
log.info("Scheduler is enabled for this instance");
}
else {
log.info("Scheduler is not enabled for this instance");
}
Note: Only select standard configuration properties can be looked up using the Instance Info API. This is
because the rest of the information contained in those properties should not be used by scripts, and
could potentially lead to dangerous behavior or security risks.
In addition to reading the standard configuration properties, which will exist for every instance, the Instance
Info API can also be used to read custom configuration properties. These properties, defined in
custom_config.xml, allow additional customer-specific information to be associated with the instance. Some
examples of uses for this might include storing database connection information, web service end points, or
file transfer credentials.
Note: Custom configuration properties should not be used in place of environment-specific policy settings or
where the environment-specific script in the Environment policy can be used. Those options will be
much less prone to breaking when switching between environments or when upgrading.
The following script example demonstrates using the Instance Info API to read custom configuration
properties. It assumes that custom_config.xml defines a property called “EXTERNAL_CONNECTION_INFO”,
with sub-properties “USERNAME”, “PASSWORD”, “URL”, and “DRIVER”:
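A minimal sketch of what reading those properties might look like, assuming sub-properties are addressed with a dot-separated path (the exact lookup syntax may differ in your configuration):
var api = new InstanceInfoAPI();
// EXTERNAL_CONNECTION_INFO and its sub-properties are the hypothetical custom
// properties described above
var username = api.getBuildProperty("EXTERNAL_CONNECTION_INFO.USERNAME");
var password = api.getBuildProperty("EXTERNAL_CONNECTION_INFO.PASSWORD");
var url = api.getBuildProperty("EXTERNAL_CONNECTION_INFO.URL");
var driver = api.getBuildProperty("EXTERNAL_CONNECTION_INFO.DRIVER");
log.info("Connecting to " + url + " as " + username + " using driver " + driver);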
Scripts may also need to branch on the version of WorkForce Time and Attendance they are running on. The following fragment demonstrates comparing the current version against a target version:
// Look up the current version so it can be compared against a target version
var api = new InstanceInfoAPI();
var version = api.getVersion();
if (versionIsSameOrLaterThan(version, "16.1.0.2") ) {
// Execute code that applies to v16.1.0.2 and later
}
else {
// Execute code that applies to v16.1.0.1 and earlier
}
function versionIsSameOrLaterThan(version, targetVersion) {
var versionComponents = version.split(".");
var targetComponents = targetVersion.split(".");
// Compare the version against the target version one numeric component at a time
for (var i = 0; i < targetComponents.length; i++) {
var versionComponent = parseInt(versionComponents[i], 10);
var targetComponent = parseInt(targetComponents[i], 10);
// If the component is greater than the target component, we know that the
// version as a whole is later than the target version
if (versionComponent > targetComponent) {
return true;
}
// If the component is less than the target component, we know that the
// version as a whole is earlier than the target version
if (versionComponent < targetComponent) {
return false;
}
// The components are the same, so continue on and compare the next
// component
}
// All components were the same, so the version is the same as the target version
return true;
}
Note: Never compare version strings directly, as strings. String comparisons compare values lexicographically,
meaning that “16.1.0.2” would be less than “7.8.0.5”.
Sometimes the files to be processed are not stored in a dedicated directory, and may be intermingled with
other files in the same directory which should not be processed. In these cases, there is usually a naming
convention defined to allow the script to determine which files should be processed and which should be
ignored. The Instance Info API supports specifying a regular expression when looking up files, returning only
those files whose filenames match the specified pattern.
The following script example demonstrates using the Instance Info API to retrieve all files with names
matching the pattern “importData_MMDDYYYYHHMMSS.csv” in the indicated directory:
// Define the pattern the files need to match. This pattern will match all
// files with a filename of the form "importData_MMDDYYYYHHMMSS.csv", where
// "MMDDYYYYHHMMSS" represents the timestamp of when the file was created
var pattern = "importData_(\\d{14})\\.csv";
// Look up all files in the directory whose names match the pattern
var api = new InstanceInfoAPI();
var files = api.getMatchingFilesInDirectory("interface/incoming/", pattern);
// Iterate over the files that match the pattern and process them as desired
for (var i = 0; i < files.length; ++i) {
var file = files[i];
// Processing for each file would go here
}
Note: The pattern needs to be specified as a string, not as a JavaScript RegExp object. This means that care
needs to be taken with escape characters in the pattern: all backslashes that appear in the regular
expression need to be doubled in order to function correctly.
In addition to these methods to retrieve the files in a directory, the Instance Info API also includes
functionality for printing out information about the files in a directory to the job log. This information cannot
be consumed by the script, since it just consists of information messages in the log, but can be useful for
troubleshooting if a script does not appear to be accessing the expected files.
The following script example demonstrates using the Instance Info API to print out the contents of a directory
in the log:
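// A minimal sketch; the directory used here is an illustrative assumption
var api = new InstanceInfoAPI();
api.printDirectoryContents("interface/incoming/");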
An example of the output from this method in the job log is:
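The exact entries depend on the directory contents; following the TYPE FILENAME (SIZE bytes) format described in the API Reference below, a hypothetical file might be reported as:
FILE importData_01152019083000.csv (2048 bytes). Last modified on 2019-01-15 08:30:00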
// Perform conditional logic based on whether the expected SaaS directories are
// present or not
if (api.hasExpectedSaaSDirectories() ) {
log.info("The expected SaaS directories are present!");
}
else {
log.error("Unable to find the expected SaaS interface directories");
}
The following fragment demonstrates checking the permissions on a file; the directory and file name are illustrative:
// Look up the permissions for the indicated file
var api = new InstanceInfoAPI();
var filePermissions = api.getFilePermissions("interface/incoming/", "importData.csv");
if (filePermissions.canRead) {
log.info("Able to read data from the file");
}
if (filePermissions.canWrite) {
log.info("Able to write data to the file");
}
// Look up whether this instance runs on SQL Server or Oracle
var databaseType = new InstanceInfoAPI().getDatabaseType();
// Select the right version of the query to use based on the database type
var query = "";
if (databaseType == "ORACLE") {
query = "select ld1 from ld3 where length(other_string1) >= 5";
}
else if (databaseType == "SQL_SERVER") {
query = "select ld1 from ld3 where len(other_string1) >= 5";
}
Troubleshooting
The job log of the script using the Instance Info API will contain information messages and, in the case of
problems, any error messages generated during processing. This job log should be reviewed if there are any
problems encountered while using the API.
API Reference
Knowledge of JavaScript programming is necessary to best make use of this section.
InstanceInfoAPI
InstanceInfoAPI
Creates a new instance of the Instance Info API.
getBuildProperty(property)
Returns the value of the specified build property from either the config.xml or custom_config.xml files. While
all properties in custom_config.xml are accessible through this method, only the following properties in
config.xml are available to be read:
Whitelisted config.xml Properties
BASE_PORT
BO.INSTALLATION.VERSION
CLIENT.APPLICATION.NAME
ENVIRONMENT
HOST_NAME
INSTANCE_NAME_OPTIONS.INSTANCE_ID
JDBC.PRODUCT
JVM.XMS
JVM.XMX
PRIMARY_INSTANCE
SCHEDULER_RUN
BATCH_JOB_THREADS
Table 48: Properties that are allowed to be read from config.xml
getVersion()
Returns the current version of WorkForce Time and Attendance that the script is running on.
getFilesInDirectory(filePath)
Returns an array of all files in the specified directory. Does not recursively scan subdirectories for files.
getMatchingFilesInDirectory(filePath, pattern)
Returns an array of all files in the specified directory where the filename matches the indicated regular
expression. Does not recursively scan subdirectories for files.
printDirectoryContents(filePath)
Writes information about the contents of the specified directory to the job log. Messages are of the form:
TYPE FILENAME (SIZE bytes). Last modified on LAST_MODIFICATION_DTTM
getFilePermissions(filePath, fileName)
Returns the file permissions for the specified file. Returned object is a JS object with the following
properties:
Property Name Description
canRead Indicates if the file is able to be read by WorkForce Time and Attendance
canWrite Indicates if the file is able to be written to by WorkForce Time and Attendance
canExecute Indicates if the file can be executed by WorkForce Time and Attendance
Table 49: Properties available on the result returned by getFilePermissions
hasExpectedSaaSDirectories()
Returns true if both the interface/incoming/ and interface/outgoing/ directories exist and are accessible
by the instance, or false if one or both of them is not accessible.
getDatabaseType()
Identifies whether WorkForce Time and Attendance is using a SQL Server or an Oracle back end to store the
data. Return value will be either SQL_SERVER or ORACLE.
getDistributionRootDirectory()
Returns the distribution root directory path.
Prerequisites
To use this API, you should be familiar with the following functionality:
• Basic JavaScript coding
• What sorts of WorkForce Time and Attendance jobs are available and what kinds of parameters they use
Components
This API consists of the following component(s):
• The JOB_QUEUE_API Distributed JavaScript library
• The COMMON_JOB_CREATION_LIBRARY Distributed JavaScript library (automatically included as part of
the JOB_QUEUE_API library)
• The JobBuilder framework (automatically included as part of the JOB_QUEUE_API library)
Setup
No setup is necessary for the Job Queue API. The distributed library is automatically available within
WorkForce Time and Attendance.
Use Cases
Running a Single Job
The simplest use case for the Job Queue API is to run a single job from within another script. If the job to be
run conforms to the typical default settings for running that type of job, then the helper methods defined by
the Common Job Creator Library can be used to define the job. A job defined in this manner will always
cause the parent script to pause execution until the child job has completely finished.
The following example demonstrates how a simple Scheduled Script job can be defined and run from within a
script. The script will run the Scheduled Script with the policy ID “STAGE_TWO_JOB”. This job will appear on
the job status screen with a description of “Stage Two Processing Job”. The script that initiated the
Scheduled Script job will wait until the Scheduled Script completely finishes before continuing on with any
remaining code execution.
// Define the job to be run. In this case, the job being defined is a Scheduled
// Script policy named "STAGE_TWO_JOB".
var policyId = "STAGE_TWO_JOB";
var description = "Stage Two Processing Job";
var parameters = null;
var job = createScheduledScriptJob(description, policyId, parameters);
// Add the job to a new queue and start it. This script will pause execution
// until the job completes
var jobQueue = new JobQueue();
jobQueue.addJob(job);
jobQueue.executeJobs();
// Additional code to be executed in this script after the job completes would
// go here
Sometimes the default settings for a job are not sufficient for the needs of the script. For example, perhaps
the parent job should not wait for the child job to complete before continuing on but instead it should
immediately continue with further processing. The following example shows how a job can be defined with
custom settings applied:
// Define the job to be run. In this case, the job being defined is a Scheduled
// Script policy named "STAGE_TWO_JOB".
var policyId = "STAGE_TWO_JOB";
var description = "Stage Two Processing Job";
var jobType = "SCRIPT_RUNNER";
var job = new JobBuilder(jobType, description)
.buildPolicyId(policyId)
.buildWaitUntilCompleteBoolean(false)
.buildParamMap(null)
.build();
// Add the job to a new queue and start it. This script will continue with
// execution immediately
var jobQueue = new JobQueue();
jobQueue.addJob(job);
jobQueue.executeJobs();
// Additional code to be executed in this script after the job completes would
// go here
With this approach, the JobBuilder is used to construct the job. The different settings for the job are chained
together through a series of calls, and then the job is finalized by the call to build().
Note: Once a job has been built, no changes can be made to its settings. If different settings are needed a
second job must be built.
The Job Queue can also run several jobs in sequence, with each job starting automatically once the previous one completes, as in the following example:
// Define the first job that should be run. This will be a Scheduled Script job
var policyId1 = "IMPORT_STEP_2";
var description1 = "Import Step #2: Scheduled Script";
var parameters1 = null;
var job1 = createScheduledScriptJob(description1, policyId1, parameters1);
// Define the second job that should be run. This will be a SQL Employee Import
// Control job.
var policyId2 = "IMPORT_STEP_3";
var description2 = "Import Step #3: SQL Employee Import Control";
var job2 = createSQLEmployeeImportJob(description2, policyId2);
// Define the third job that should be run. This will be a Time Entry Import job.
// Both the first and second phases of the Time Entry Import will be run.
var policyId3 = "IMPORT_STEP_4";
var description3 = "Import Step #4: Time Entry Import";
var filePath = "interface/incoming/";
var fileName = "timeEntryImport.csv";
var job3 = createTimeEntryImportJob(description3, policyId3, filePath, fileName);
// Add the three job definitions into the queue, in the order in which they should
// be executed
jobQueue.addJob(job1);
jobQueue.addJob(job2);
jobQueue.addJob(job3);
// Start the first job. Once the first job completes, the second job will
// automatically begin. Once the second job completes, the third job will
// automatically begin.
jobQueue.executeJobs();
// Additional code to be executed in this script after all three of the jobs have
// completed would go here
In addition to running a sequence of jobs in order, the Job Queue can also be used to run several jobs that all
process at the same time. This can be accomplished by defining custom settings for the jobs that tell the
Queue not to wait until they are completed before continuing on, as shown in this example:
// Define the first job that should be run. This will be a Scheduled Script job
var policyId1 = "IMPORT_STEP_2";
var description1 = "Import Step #2: Scheduled Script";
var parameters1 = null;
var jobType1 = "SCRIPT_RUNNER";
var job1 = new JobBuilder(jobType1, description1)
.buildPolicyId(policyId1)
.buildParamMap(null)
.buildWaitUntilCompleteBoolean(false)
.build();
// Define the second job that should be run. This will be a SQL Employee Import
// Control job.
var policyId2 = "IMPORT_STEP_3";
var description2 = "Import Step #3: SQL Employee Import Control";
var jobType2 = "PC_PAYROLL_EMPLOYEE_IMPORT";
var job2 = new JobBuilder(jobType2, description2)
.buildPolicyId(policyId2)
.buildWaitUntilCompleteBoolean(false)
.build();
// Define the third job that should be run. This will be a Time Entry Import job.
// Both the first and second phases of the Time Entry Import will be run.
var policyId3 = "IMPORT_STEP_4";
var description3 = "Import Step #4: Time Entry Import";
var filePath = "interface/incoming/";
var fileName = "timeEntryImport.csv";
var job3 = createTimeEntryImportJob(description3, policyId3, filePath, fileName);
// Add the three job definitions into the queue, in the order in which they should
// be executed
jobQueue.addJob(job1);
jobQueue.addJob(job2);
jobQueue.addJob(job3);
// This will start the first job and then immediately start the second job, since
// the first job is defined as not waiting until completion before continuing.
// The same will happen with the second job, with the third job being started
// immediately upon beginning execution of the second job. This script will wait
// until the third job has completed before continuing on.
jobQueue.executeJobs();
// Additional code to be executed in this script after all three of the jobs have
// completed would go here
Jobs can also be run as though they were launched by a specific user, as in the following example:
// Create a new job queue to hold the jobs to be run, specifying that the jobs should be
// run by the user that launched this job
var jobQueue = new JobQueue(loggedInUserId);
// Define the job that should be run. In this case, the job being defined is a Scheduled
// Script policy named "STAGE_TWO_JOB"
var policyId = "STAGE_TWO_JOB";
var description = "Stage Two Processing Job";
var parameters = null;
var job = createScheduledScriptJob(description, policyId, parameters);
jobQueue.addJob(job);
// Start the job, running it as though it was launched by the indicated user.
jobQueue.executeJobs();
// Additional code to be executed in this script after the job completes would
// go here
Note: If the user specified is not a valid login ID for an APP_USER record, an exception will be generated. This
is especially important when using the ID of the user that launched the parent script, since if that
parent job is scheduled that user ID will be SYSTEM_SCHEDULER, which is unlikely to be a valid user ID.
Checking how many instances of a job are currently running helps prevent two jobs from conflicting with
each other if they would both be processing the same data.
The following example demonstrates counting the number of instances of a job that are running and only
executing code if that count is an expected number. The script in this example is assumed to be running
from a policy named THIS_JOB with a description of "This Job that is currently running":
// Count the number of jobs currently running that match the provided descriptions.
// The job may be running under either its policy ID or its configured description
var descriptions = ["THIS_JOB", "This Job that is currently running"];
var jobQueue = new JobQueue();
var jobCount = jobQueue.getRunningJobCount(descriptions, true);
if (jobCount > 1) {
log.info("Multiple copies of this job are already running. No action will be
taken here.");
}
else {
log.info("This is the only instance of the job that is running. Performing
actions...");
// The rest of the script logic that should happen now that we know only one
// instance of the job is running would go here
}
Note: The same job can end up with multiple different descriptions, depending on how it’s run. A job run
manually will have a description reflecting its policy ID. A job run by the scheduler will use the
description defined in the policy, or a description reflecting the policy ID if no description is defined in
the policy. And a job run by a Job Queue will use whatever description is defined in the script that is
running the Job Queue. It is often necessary to check against multiple descriptions in order to
determine how many instances of a particular job are actually running.
Troubleshooting
The job log of the script using the Job Queue API will contain information messages and, in the case of
problems, any error messages generated during processing. This job log should be reviewed if there are any
problems encountered while using the API.
Error Message: When archive is set to true, archivePath, filepath, and filename attributes must be initialized.
Problem: The definition for a job in the Job Queue specified that files should be archived after they are processed, but the file path, file name, and/or archive path attributes were not specified in that definition.
Solution: Ensure that the definition for the job where archiving is to occur specifies a file path, file name, and an archive path.

Error Message: Unable to archive file. File doesn't exist.
Problem: The definition for a job in the Job Queue specified that files should be archived after they are processed, but no file was found to be processed.
Solution: Ensure that the file referenced by the file path and file name specified in the job definition exists.

Error Message: Closing timesheets via an automated script is not supported by WorkForce Software
Problem: An attempt was made to define a job to execute the Close Timesheets operation.
Solution: Closing timesheets is not supported by the Job Queue, and must be run as a manual operation.

Error Message: Advancing Policy Profiles via an automated script is not supported by WorkForce Software
Problem: An attempt was made to define a job to execute the Advance Policy Profiles operation.
Solution: Advancing policy profiles is not supported by the Job Queue, and must be run as a manual operation.

Error Message: Must supply a reference date for a pay period option of: PAY PERIOD OPTION
Problem: A job was defined for a Calculate job using either the "period as of" or "specified period" options, but no period reference date was provided.
Solution: Ensure that the timesheet parameters being used by the Calculate job specify a valid period reference date.
Table 50: Job Queue API typical error messages, root problems, and solutions
API Reference
Knowledge of JavaScript programming is necessary to best make use of this section.
The Job Queue API consists of the following component(s):
1. The JobQueue, which defines an ordered series of jobs that should be run
2. The Common Job Creator, which provides convenience methods for creating many common types of
jobs with default settings
3. The JobBuilder, which allows for custom job definitions to be defined with custom settings when the
Common Job Creator doesn’t provide the necessary functionality
The following is a summary of the available methods and common uses:
JobQueue
JobQueue(userId)
Creates a new Job Queue. If a userId is specified, all jobs executed by this Job Queue will appear to have
been run by the indicated user.
addJob(job)
Adds a new job, using the supplied definition, to the end of the Job Queue.
executeJobs()
Executes all jobs in the Job Queue, in the order in which they were added.
clearQueue()
Removes all jobs from the Job Queue.
countJobsCurrentlyRunning(description)
Returns the number of jobs currently running that include the specified value in their description.
getRunningJobCount(descriptionArray, includeThisJob)
Returns the number of jobs currently running that match any of the descriptions specified in the provided
description array. Descriptions are evaluated as regular expressions, allowing for pattern-based matching if
desired. Whichever description is associated with the job currently running will be automatically included in
the evaluation as well if includeThisJob is specified as true.
abortJob(description)
Aborts every job currently running where the specified value is included in the job description.
getLastRunTime(description)
Returns the most recent execution start time for any job matching the specified description. The description
is evaluated using SQL’s “LIKE” syntax, so wildcard values are supported.
executeSingleJobWithoutWaiting(job)
Executes the specified job immediately and resumes script execution without waiting for the job to complete.
Returns the Job ID of the job that was started to allow for later polling of the job status.
executeJobQueuePolicy(description, policyName)
Takes a Job Queue policy name and executes it using an internal job queue. The description is used to
help identify the job more easily.
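A short sketch of these query and execution methods; the descriptions and the NIGHTLY_JOB_QUEUE policy name are illustrative assumptions:
var jobQueue = new JobQueue();
// When did a job with a matching description last start? The "%" acts as a
// SQL LIKE wildcard
var lastRun = jobQueue.getLastRunTime("Nightly Export%");
log.info("Nightly export last started: " + lastRun);
// Start a job and continue immediately; the returned job ID can be used to
// check on the job status later
var job = createScheduledScriptJob("Stage Two Processing Job", "STAGE_TWO_JOB", null);
var jobId = jobQueue.executeSingleJobWithoutWaiting(job);
// Run an existing Job Queue policy under a descriptive label
jobQueue.executeJobQueuePolicy("Nightly interface jobs", "NIGHTLY_JOB_QUEUE");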
createSQLEmployeeImportJob(description, policyId)
Defines a new job to execute a SQL Employee Import Control job with the specified policy ID and description.
When executed, the Job Queue will wait until the job is complete before continuing to the next job. Errors in
this job will not cause the Job Queue to abort without processing remaining jobs.
createUserImportJob(description)
Defines a new job to execute the User Import job with the specified description. When executed, the Job
Queue will wait until the job is complete before continuing to the next job. Errors in this job will not cause
the Job Queue to abort without processing remaining jobs.
createTimeEntryImportFromHoldingJob(description, policyId)
Defines a new job to execute a Time Entry Import job with the specified policy ID and description. This job
will only execute the second phase of the Time Entry Import process, moving data that already exists in the
TIME_ENTRY_DETAIL_IMPORT table to the timesheet or schedule. No new data will be created in
TIME_ENTRY_DETAIL_IMPORT by this job. When executed, the Job Queue will wait until the job is complete
before continuing to the next job. Errors in this job will not cause the Job Queue to abort without processing
remaining jobs.
loading it into the TIME_ENTRY_DETAIL_IMPORT table. Data will not actually be written to the timesheets or
schedules by this job. The file will not be archived after processing, and when executed the Job Queue will
wait until the job is complete before continuing to the next job. Errors in this job will not cause the Job
Queue to abort without processing remaining jobs.
createAdvancePolicyProfileJob(description, listOfPolicyProfiles)
Generates an error message. Because it is an irreversible operation, the Advance Policy Profiles job should
never be run in an automated fashion. It should always be run manually, after verification that all timesheets
to be processed are in a final and correct state.
createJobQueueJob(description, policyName)
Defines a new job to run a Job Queue policy.
createAbsenceExportJobForAssignments(parms)
Defines a new job to execute an Absence Export job using the specified parameters. The parameters
object should be specified as a JavaScript object with properties corresponding to the parameter to be
set and a value for each property corresponding to the value that should be used for that parameter.
The following parameters are supported:
Parameter Description
description The description that should appear on the Job Status screen
for the export job
policyId The name of the Absence Export policy that should be
executed
filePath (Optional) The directory that the absence export file should
be written to. If not specified, the directory specified in the
Absence Export policy being executed will be used
fileName (Optional) The name of the absence export file that should
be written. If not specified, the file name specified in the
Absence Export policy being executed will be used
asgnmtIds (Optional) Array of assignment IDs specifying the
assignments that should be exported. If not specified, all
assignments defined in the instance will be included in the
export data.
Table 51: Parameters supported by createAbsenceExportJobForAssignments
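A sketch of defining and running such a job; the policy name, directory, and file name are illustrative assumptions:
var parms = {
    description: "Nightly Absence Export",
    policyId: "ABSENCE_EXPORT_POLICY",
    filePath: "interface/outgoing/",
    fileName: "absences.csv"
};
var job = createAbsenceExportJobForAssignments(parms);
var jobQueue = new JobQueue();
jobQueue.addJob(job);
jobQueue.executeJobs();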
createAbsenceExportJobForAssignmentGroup(parms)
Defines a new job to execute an Absence Export job using the specified parameters, exporting data for
only the assignments belonging to the specified assignment group. The parameters object should be
specified as a JavaScript object with properties corresponding to the parameter to be set and a value for
each property corresponding to the value that should be used for that parameter. The following
parameters are supported:
Parameter Description
description The description that should appear on the Job Status screen
for the export job
policyId The name of the Absence Export policy that should be
executed
asgnmtGroupId The display ID of the assignment group whose assignments
should be exported
filePath (Optional) The directory that the absence export file should
be written to. If not specified, the directory specified in the
Absence Export policy being executed will be used.
fileName (Optional) The name of the absence export file that should
be written. If not specified, the file name specified in the
Absence Export policy being executed will be used
Table 52: Parameters supported by createAbsenceExportJobForAssignmentGroup
JobBuilder
JobBuilder(type, description)
Creates a new JobBuilder with the specified job type and description. Valid job types are as follows:
Job Type Description
ACT Generates notifications and evaluates document eligibility
for ACT cases
LD_IMPORT Executes a specific LD Import policy
AUTO_SCHEDULER Runs an auto-scheduling process
RECALC_ASGNMT_GROUPS Recalculates membership of automatic assignment groups
APPROVAL_DEADLINE Sends emails based on approval deadlines
SCRIPT_RUNNER Executes a specific Scheduled Script policy
APPROVAL Approves a timesheet.
ADP_DATA_EXCHANGE Executes a specific ADP Export Map policy
GENERIC_EXPORT Executes a specific Generic Export policy
ORACLE_PAYROLL Executes a specific Oracle Payroll Export policy
CSV_EMPLOYEE_IMPORT Executes a specific CSV Employee Import Control policy
PC_PAYROLL_EMPLOYEE_IMPORT Executes a specific SQL Employee Import Control policy
OFF_CYCLE_CANCEL Cancels an existing off-cycle batch that is already in process
OFF_CYCLE_RE_EXPORT Re-exports off-cycle data associated with a particular off-
cycle batch
OFF_CYCLE_CLOSE Closes an existing off-cycle batch that is already in process
OFF_CYCLE_ADP_EXPORT Exports off-cycle data using the ADP Export process
OFF_CYCLE_GENERIC_EXPORT Exports off-cycle data using the Generic Export process
OFF_CYCLE_ORACLE_PAYROLL_EXPORT Exports off-cycle data using the Oracle Payroll Export
process
TIME_ENTRY_COMMIT Executes the stage-to-actual step of a specific Time Entry
Import policy
build()
Returns an immutable representation of the job definition that can be added into a Job Queue.
buildPolicyId(policyID)
Defines which policy should be executed for jobs that run a single policy.
buildFileName(fileName)
Defines the name of the file that should be processed for jobs that read data from a file.
buildFilePath(filePath)
Defines the directory containing the file that should be processed for jobs that read data from a file.
buildProcessingParm(processingParm)
Defines the processing parameters that are needed for a particular type of job. The following table defines
which job types require processing parameters as well as which parameter class they are expecting:
Job Type Processing Parameter Class
LD_IMPORT LdImportParams
APPROVAL_DEADLINE ApprovalDeadlineParams
APPROVAL ApprovalBatchJobParams
ADP_DATA_EXCHANGE ExportPolicyParms
GENERIC_EXPORT ExportPolicyParms
ORACLE_PAYROLL ExportPolicyParms
OFF_CYCLE_CANCEL OffCycleCancelBatchParms
OFF_CYCLE_RE_EXPORT OffCycleReExportBatchParms
buildArchiveBoolean(archive)
Specifies whether the file should be archived after processing for jobs that read data from a file. When
archived, the source file for the job is renamed to include the timestamp of the archiving at the beginning of
the file name.
buildArchivePath(archivePath)
Specifies which directory the source file should be moved to when it is archived for jobs that read data from a
file. This setting only matters if file archiving is enabled using buildArchiveBoolean().
buildParamMap(paramMap)
Specifies the runtime parameters that should be passed into the job when it begins execution. Only used
when executing Scheduled Script jobs (the SCRIPT_RUNNER job type).
buildDelayTimer(delay)
Specifies the number of seconds the Job Queue should wait once it reaches this job before beginning actual
execution of the job.
buildContinueOnErrorBoolean(continueOnError)
Specifies whether the Job Queue should continue to execute later jobs in the queue if errors are reported in
the log for this job.
buildWaitUntilCompleteBoolean(waitUntilCompleteBoolean)
Specifies whether the Job Queue should wait until this job completes before starting the next job or whether
it should continue on immediately after starting this job.
buildSystemFeature(systemFeature)
Specifies the system feature needed in order to execute certain jobs. The following jobs require a system
feature to be specified:
Job Type
ADP_DATA_EXCHANGE
GENERIC_EXPORT
ORACLE_PAYROLL
CALC_TIME_SHEETS
CREATE_RETRO_TRIGGER_EVENTS
LOCK_AND_CALC_TIME_SHEETS
ORACLE_PA
UNLOCK_TIME_SHEETS
Table 55: Batch job types that require system features to be specified
buildJobTypeInt(jobTypeInt)
Specifies what type of batch job is being executed. This is used by the RECALC_ASGNMT_GROUPS,
CROSS_PERIOD_CALC, SCHEDULE_ASSIGNMENT, SCHEMA_MODIFIER, and REMOVE_BIOMETRIC_TEMPLATES
job types. The valid values here are:
Value Description
1 Interactive
2 User Batch
3 System Batch
4 Swipe Processing
Table 56: Valid job type values
buildSuperUserId(superUserId)
Specifies which superuser should be marked as having executed the job.
buildParamList(paramList)
Specifies the list of assignment parameters to be processed by the job. This should be a list of the following
types of parameter-class objects depending on job type:
Job Type Parameter Class
CROSS_PERIOD_CALC CrossPeriodCalcBatchJob.CrossPeriodCalcParameter
SCHEDULE_ASSIGNMENT ScheduleAssignmentBatchJob.ScheduleAssignmentParameter
Table 57: Job types that use parameter lists and their expected parameter classes
buildPriority(priority)
Specifies the job priority for a CROSS_PERIOD_CALC, SCHEDULE_ASSIGNMENT, or SCHEMA_MODIFIER job.
This controls which jobs take precedence in terms of execution by the Batch Job Processor.
buildWildcardBoolean(wildcardBoolean)
Specifies whether translations should be imported or exported by a TRANSLATIONS job. A value of true
means that the job should export the translations; a value of false means that the job should import them.
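As a hedged sketch, these builder methods can be combined as follows to define a Scheduled Script job. The policy name and parameter map are placeholders, and submitting the resulting job definition to a Job Queue is omitted here.
// Define a SCRIPT_RUNNER job that executes a placeholder Scheduled Script policy
var builder = new JobBuilder("SCRIPT_RUNNER", "Nightly maintenance script");
builder.buildPolicyId("NIGHTLY_MAINTENANCE_SCRIPT");  // placeholder policy name
builder.buildParamMap({ mode: "FULL" });              // runtime parameters (SCRIPT_RUNNER only)
builder.buildWaitUntilCompleteBoolean(true);          // wait for completion before the next job
builder.buildContinueOnErrorBoolean(false);           // stop the queue if this job logs errors
// Produce the immutable job definition that can be added to a Job Queue
var job = builder.build();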
KPI Chart Library API
Prerequisites
To use this API, you should be familiar with the following functionality:
• Basic JavaScript coding
Components
This API consists of the following component(s):
• The KPI_CHART_LIBRARY Distributed JavaScript Library
• The KPIUtil, KPIDbUtil, KPIChartHelper Java classes
Setup
No setup is necessary for the KPI Chart Library. The distributed library is automatically available in WT&A.
Use Cases
The best way to see the KPI Chart Library in use is to browse the standard KPIs that can be imported via the
KEY_PERFORMANCE_INDICATORS Standard Product template. The use cases in this section are mostly drawn
from these KPIs. To learn how to access template instructions or how to import templates, see the WT&A
Templates Framework Guide.
// Set the chart subtitle to the standard date range of the KPI timeframe
config.setSubtitle(getStandardDateRange(getStart(), getEnd()));
// Turn off the point markers on a line chart
turnOffLinePoints(config);
// Set the chart subtitle and define the colors used for positive and
// negative values on a column chart
config.setSubtitle(getStandardDateRange(getStart(), getEnd()));
config.getPlotOptions().getColumn()
    .setColor("#ff0000")
    .setNegativeColor("#008000");
/*
Define the logic to process the records retrieved by the query here.
The buildDataPoints(resultSet) method must be defined and must return
an array of data points created using the methods in KPI_CHART_LIBRARY
*/
function buildDataPoints(resultSet) {
var data = [];
while(resultSet.next()) {
data.push(createBasicAssignmentGroupDataPoint(resultSet.payCodeSet, resultSet.dateField,
resultSet.sumField, resultSet.asgnmt_grp));
}
return data;
}
function buildDataPoints(resultSet) {
var asgnmts = retrieveAsgnmtDataFromResultSet(resultSet);
return createDataPointForEachAsgnmtGrp(asgnmts);
}
/**
 * Takes the results of the query defined in the KPI query script, and retrieves the necessary
 * information in order to calculate the average other_number1 value over all asgnmts being processed.
 * @param {ResultSetScriptable} Result set containing results of the KPI query
 * @returns {Object} Map of asgnmt group ID to array of the other_number1 values for each
 * asgnmt in the group
 */
function retrieveAsgnmtDataFromResultSet(resultSet) {
var asgnmts = {};
while(resultSet.next()) {
if (!asgnmts[resultSet.asgnmt_grp]) {
asgnmts[resultSet.asgnmt_grp] = [];
}
asgnmts[resultSet.asgnmt_grp].push(resultSet.asgnmt_field_to_average);
}
return asgnmts;
}
/**
 * Processes through the arrays of values created by retrieveAsgnmtDataFromResultSet,
 * calculates the average value for each asgnmt group, and creates a data point for the asgnmt group
 * @param {Object} Map of asgnmt group ID to array of the other_number1 values for each asgnmt
 * in the group
 * @returns {Array} Array of data points, one for each asgnmt grp
 */
function createDataPointForEachAsgnmtGrp(asgnmts) {
var data = [];
for(var asgnmtGrp in asgnmts) {
var sum = 0;
for(var i = 0; i < asgnmts[asgnmtGrp].length; i++) {
sum += asgnmts[asgnmtGrp][i];
}
data.push(createAssignmentGroupDataPoint(getDataElementParameter("ASGNMT_FIELD_TO_AVERAGE") +
" Average", getDataElementParameter("ASGNMT_FIELD_TO_AVERAGE") + " Average", null, null,
sum/asgnmts[asgnmtGrp].length, null, asgnmtGrp));
}
return data;
}
/*
Define the query used to retrieve the records here.
The defineQuery() method must be defined and must return
a query object created using the methods in KPI_CHART_LIBRARY
*/
function defineQuery() {
    //Define the query text here. Variables can be referenced inside the <query> tags by
    //wrapping the variable name with {} brackets
    var query = <query>
        select agd.asgnmt_grp, vpcsd.rule_set as payCodeSet,
        {getDateSqlForTimeUnit("tso.work_dt")} as dateField,
        sum({getDataElementParameter("SUM_FIELD")}) as sumField
        from asgnmt_grp_detail agd, asgnmt a, asgnmt_master am, employee_periods ep,
        time_sheet_output tso, (select distinct record_key, rule_set from v_pay_code_set_detail) vpcsd
        where tso.pay_code = vpcsd.record_key
        and a.asgnmt = agd.asgnmt
        and tso.work_dt between a.eff_dt and a.end_eff_dt
        and a.assignment_status = 'A'
        and a.soft_terminated = 'F'
        and ? <= ep.pp_end and ? >= ep.pp_begin
        and am.asgnmt = a.asgnmt
        and am.asgnmt_type in (1, 2, 3)
        and am.employee = ep.employee
        and ep.asgnmt = a.asgnmt
        and tso.employee_period_version = ep.calc_emp_period_version
        and tso.transaction_type in (30, 40)
    </query>;
    //The 'group by', 'having', and 'order by' clauses must be defined separately in the below variables
    var groupByClause = "group by asgnmt_grp, rule_set, " + getDateSqlForTimeUnit("tso.work_dt");
    var havingClause = "";
    var orderByClause = "";
    //Define the parameters used in the query here. For String values, use the formatNativeStringForServer method
    var queryParams = [];
    queryParams.push(getStart());
    queryParams.push(getEnd());
    //Only modify the below line if you have changed the data scope to something other than User's Assignment Groups
    return createAssignmentGroupBasedKpiDataQueryWithDataSeries(query, groupByClause, havingClause, orderByClause,
        "agd.asgnmt_grp", "tso.work_dt", "tso.pay_code", "vpcsd", queryParams);
}
/*
Define the query used to retrieve the records here.
The defineQuery() method must be defined and must return
a query object created using the methods in KPI_CHART_LIBRARY
*/
function defineQuery() {
    //Change fields here to config:
    var sumField = "tso." + getDataElementParameter("SUM_FIELD"); //field to use for summing
    var ldField = getStringParameter("LD_FIELD"); //field containing LD value
    var dateField = "tso.work_dt"; //field containing date value
    var laborCostPayCodeSet = getPayCodeSetParameter("LABOR_COST_PAY_CODE_SET"); //Pay code set containing pay codes to use for this KPI
    var asgnmtGrpField = "agd.asgnmt_grp"; //field containing asgnmt_grp value
    //Define the query text here. Variables can be referenced inside the <query> tags by
    //wrapping the variable name with {} brackets
    var query = <query>
        select {ldField} as ldField, {getDateSqlForTimeUnit(dateField)} as dateField,
        sum({sumField}) as sumField, {asgnmtGrpField} as asgnmt_grp
        from time_sheet_output tso, asgnmt_grp_detail agd, asgnmt a, asgnmt_master am,
        employee_periods ep, (select distinct record_key, rule_set from v_pay_code_set_detail) vpcsd
        where tso.pay_code = vpcsd.record_key
        and a.asgnmt = agd.asgnmt
        and tso.work_dt between a.eff_dt and a.end_eff_dt
        and a.assignment_status = 'A'
        and a.soft_terminated = 'F'
        and ? <= ep.pp_end and ? >= ep.pp_begin
        and am.asgnmt = a.asgnmt
        and am.asgnmt_type in (1, 2, 3)
        and am.employee = ep.employee
        and ep.asgnmt = a.asgnmt
        and tso.employee_period_version = ep.calc_emp_period_version
    </query>;
    //The 'group by', 'having', and 'order by' clauses must be defined separately in the below variables
    var groupByClause = "group by " + ldField + ", " + getDateSqlForTimeUnit(dateField) + ", " + asgnmtGrpField;
    var havingClause = "";
    var orderByClause = "order by sumField desc";
    var appUserField = ""; //The field which contains the app user value (required for user data scope)
    //Define the parameters used in the query here. For String values, use the formatNativeStringForServer method
    var queryParams = [];
    queryParams.push(getStart());
    queryParams.push(getEnd());
    //Only modify the below line if you have changed the data scope to something other than User's Assignment Groups
    return createAssignmentGroupBasedKpiDataQuery(query, groupByClause, havingClause,
        orderByClause, asgnmtGrpField, dateField, queryParams);
}
function aggregateDataPoints(dataPoints) {
    //Show the top X most common LD values (set value of 'X' here)
    //Note: these two variables are defined without 'var' so the helper functions below can access them
    numberOfMostCommonLdValues = getIntegerParameter("NUMBER_OF_MOST_COMMON_LD_VALUES");
    data = [];
    //Create map from LD value name to array of records for that value
    var ldValueMap = createLdMap(dataPoints);
    //Create map from 'LD value' --> 'sum of cost for LD value across entire time frame'
    var sumMap = createSumMap(ldValueMap);
    //Create array of LD value-sum pairs, ordered from largest sum to lowest sum
    var sortedLdValues = sortLdValuesBySum(ldValueMap, sumMap);
    //For each of the first X most common LD values (i.e. LD values with the largest hours sum), create the data series for the LD value
    for(var i = 0; i < numberOfMostCommonLdValues && i < sortedLdValues.length; i++) {
        createDataSeriesForLdValue(sortedLdValues[i].ldValue, ldValueMap);
    }
    //Create "Other" series (all other data not part of the X most common LD values get grouped into a single 'Other' series)
    createOtherDataSeries(ldValueMap, sortedLdValues);
    return data;
}
//Create map from name of LD value to an array of records for that value. Each record is a pair of values (date, hours)
function createLdMap(dataPoints) {
var ldValueMap = {};
for(var i = 0; i < dataPoints.length; i++) {
addRecordToLdValueMap(dataPoints[i], ldValueMap);
}
return ldValueMap;
}
//Create map from 'LD value' --> 'sum of hours for LD value across entire time frame'
function createSumMap(ldValueMap) {
var sumMap = {};
for(var ldValue in ldValueMap) {
createSumRecordForLdValue(ldValue, ldValueMap, sumMap);
}
return sumMap;
}
//Retrieves the array of records for the specified LD value, sums up the hours, and creates a single record with the LD value and sum (and adds the record to the LD sum map)
function createSumRecordForLdValue(ldValue, ldValueMap, sumMap) {
sumMap[ldValue] = 0;
var ldArray = ldValueMap[ldValue];
//Iterate through all of the values for that LD value, and sum up the hours
for(var i = 0; i < ldArray.length; i++) {
//The new sum value is equal to the old sum value plus the hours amount for this record
sumMap[ldValue] = sumMap[ldValue] + ldArray[i].sumField;
}
}
//Uses the data in the ldValueMap and sumMap to create an array of LD value-sum pairs, ordered from largest sum to lowest sum
function sortLdValuesBySum(ldValueMap, sumMap) {
var sortedLdValues = [];
for(var ldValue in ldValueMap) {
sortedLdValues.push({"ldValue" : ldValue, "sumField" : sumMap[ldValue]});
}
//Sort the array based on the largest sum (i.e. the most common values by hours)
//Highest sum should be first in array (thus use b - a rather than other way around)
sortedLdValues.sort(function (a, b) {
var diff = b.sumField - a.sumField;
if(diff != 0) {
return diff;
} else {
//If same cost value, sort by LD value name (so we have a consistent ordering in the case of a tie)
if(b.ldValue == a.ldValue) {
return 0;
} else if (b.ldValue > a.ldValue) {
return 1;
} else {
return -1;
}
}
});
return sortedLdValues;
}
//Take all of the records for a given LD value, and create all of the data points for the chart
function createDataSeriesForLdValue(ldValue, ldValueMap) {
    //Create a data point for each date that this LD value has records
    var sumValuesByDateMap = {};
    sumRecordsForLdValueByDate(ldValue, ldValueMap, sumValuesByDateMap);
    createDataSeriesPoints(ldValue, sumValuesByDateMap);
}
//Sums the hours for the specified LD value by date, adding one entry per date into the provided map.
//(Sketch: assumes each record carries 'date' and 'sumField' values, matching the usage in
//createSumRecordForLdValue above and createDataSeriesPoints below.)
function sumRecordsForLdValueByDate(ldValue, ldValueMap, sumValuesByDateMap) {
    var records = ldValueMap[ldValue];
    for(var i = 0; i < records.length; i++) {
        var key = records[i].date.toString();
        if(!sumValuesByDateMap[key]) {
            sumValuesByDateMap[key] = {"date" : records[i].date, "sum" : 0};
        }
        sumValuesByDateMap[key].sum = sumValuesByDateMap[key].sum + records[i].sumField;
    }
    return sumValuesByDateMap;
}
//The rest of the LD values that aren't in the X most common get summed together into a single "Other" series.
//So here we take the remaining values not processed by createDataSeriesForLdValue and combine them into one series.
function createOtherDataSeries(ldValueMap, sortedLdValues) {
    var otherLdValuesMap = {};
    //Start with i = numberOfMostCommonLdValues, as we have already processed through 'numberOfMostCommonLdValues - 1'
    //in createDataSeriesForLdValue. If the number of series (# of LD values) is <= numberOfMostCommonLdValues, then
    //this loop gets skipped as i = numberOfMostCommonLdValues will already be >= sortedLdValues.length
    for(var i = numberOfMostCommonLdValues; i < sortedLdValues.length; i++) {
        var ldValue = sortedLdValues[i].ldValue;
        sumRecordsForLdValueByDate(ldValue, ldValueMap, otherLdValuesMap);
    }
    //Take all of the records for the "Other" series, and create all of the data points for the chart
    createDataSeriesPoints("Other", otherLdValuesMap);
}
function createDataSeriesPoints(ldValue, sumValuesByDateMap) {
for(var key in sumValuesByDateMap) {
//Create a data point for each date that has values
data.push(createAssignmentGroupDataPoint(ldValue, null, null,
sumValuesByDateMap[key].date, sumValuesByDateMap[key].sum, null, null));
}
}
Troubleshooting
The job log of the script using the KPI Chart Library will include informational messages, and in the case of
problems, error messages. This job log should be reviewed if there are problems using the API.
API Reference
Knowledge of JavaScript programming is necessary to best make use of this section.
getRandomColor()
Gets a random hex color. The length can be from 3 to 6 characters.
turnOffLinePoints()
Turns off actual points on a line chart.
Parameter Description
Config: Object The configuration data of the chart
defineReportUrlPlaceholder()
Defines placeholder/values to be used in report URL.
Parameter Description
placeHolderName: String The placeholder to be used in report URL
value: String or Array The value(s) to be used in report URL. If Array, reportParameterId must be
defined.
reportParameterId Only required if value is Array. Name of the report parameter that the values
are for.
formatNativeStringForServer()
Formats a NativeString to a JavaScript String.
Parameter Description
string: NativeString The string to be formatted
getPay_codeSet()
Returns the pay codes for the specified pay code set.
Parameter Description
id: String The name of the pay code set.
buildInClause()
Creates a SQL ‘IN’ clause given a list of items and a specified column name.
Parameter Description
items: Array The list of items for the ‘IN’ clause
columnName: String Name of the queried column
formatListForInClause()
Formats a list to be used in DBUtils.buildInClause() method by adding single quote (‘) around each item in the
list.
Parameter Description
list: List The list to be formatted
getDateFormatBasedOnTimeUnitConfig()
Returns date format applicable to current time unit config.
formatArrayForInClause()
Formats an array to be used in DBUtils.buildInClause() method by adding single quote (‘) around each item in
the array.
Parameter Description
array: Array The array to be formatted
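As a hedged sketch, these helpers are typically combined as follows; the pay codes are placeholders, and the exact return values are assumptions.
// Quote each item, then build a SQL 'IN' clause for the pay_code column
var quoted = formatArrayForInClause(["REG", "OT"]); // assumed to return the items wrapped in single quotes
var inClause = buildInClause(quoted, "pay_code");   // assumed to yield: pay_code in ('REG', 'OT')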
getPayCodeSetDescription()
Returns short description value for the specified pay code set.
Parameter Description
payCodeSetId: String The name of the pay code set
getEmployee()
Returns current employee from runtime context.
getEmployeeId()
Returns current employee ID from runtime context.
getAppUser()
Returns current app user from runtime context.
getAppUserId()
Returns current app user ID from runtime context.
createGlobalKpiDataQuery()
Create a KpiQuery javascript object to be used to run query.
Parameter Description
query: String Text of select/from/where clauses of query
groupByClause: String Text of group by clause of query
havingClause: String Text of having clause of query
orderByClause: String Text of order by clause of query
dateTimeField: String Name of field with dttm value
queryParams: Array Parameters for query
createAppUserBasedKpiDataQuery()
Create a KpiQuery javascript object, based on app user, to be used to run query.
Parameter Description
query: String Text of select/from/where clauses of query
groupByClause: String Text of group by clause of query
havingClause: String Text of having clause of query
orderByClause: String Text of order by clause of query
appUserField: String Name of field with app_user value
dateTimeField: String Name of field with dttm value
queryParams: Array Parameters for query
createAssignmentGroupBasedKpiDataQuery()
Create a KpiQuery javascript object, based on assignment group, to be used to run query.
Parameter Description
query: String Text of select/from/where clauses of query
groupByClause: String Text of group by clause of query
havingClause: String Text of having clause of query
orderByClause: String Text of order by clause of query
asgnmtGrpField: String Name of field with asgnmt_grp value
dateTimeField: String Name of field with dttm value
queryParams: Array Parameters for query
createGlobalKpiDataQueryWithDataSeries()
Create a KpiQuery javascript object with data series, to be used to run query.
Parameter Description
query: String Text of select/from/where clauses of query
groupByClause: String Text of group by clause of query
havingClause: String Text of having clause of query
orderByClause: String Text of order by clause of query
dateTimeField: String Name of field with dttm value
dataSeriesField: String Name of field with data series value
dataSeriesTableAlias: String The table alias used in the query to reference the source table of the data
series
queryParams: Array Parameters for query
createAppUserBasedKpiDataQueryWithDataSeries()
Create a KpiQuery javascript object with data series, based on app user, to be used to run query.
Parameter Description
query: String Text of select/from/where clauses of query
groupByClause: String Text of group by clause of query
havingClause: String Text of having clause of query
orderByClause: String Text of order by clause of query
appUserField: String Name of field with app_user value
dateTimeField: String Name of field with dttm value
dataSeriesField: String Name of field with data series value
dataSeriesTableAlias: String The table alias used in the query to reference the source table of the data
series
queryParams: Array Parameters for query
createAssignmentGroupBasedKpiDataQueryWithDataSeries()
Create a KpiQuery javascript object with data series, based on assignment group, to be used to run query.
Parameter Description
query: String Text of select/from/where clauses of query
groupByClause: String Text of group by clause of query
havingClause: String Text of having clause of query
orderByClause: String Text of order by clause of query
asgnmtGrpField: String Name of field with asgnmt_grp value
dateTimeField: String Name of field with dttm value
dataSeriesField: String Name of field with data series value
dataSeriesTableAlias: String The table alias used in the query to reference the source table of the data
series
queryParams: Array Parameters for query
createBasicGlobalCategoryDataPoint()
Create a Kpi_data_set_point to add to the KPIChartData object, for use with category points.
Parameter Description
series: String Name of data series
category: String Category of point (for category-based axes)
y: Double Y-value of point
createBasicAssignmentGroupCategoryDataPoint()
Create a Kpi_data_set_point to add to the KPIChartData object, for use with category points for assignment
group.
Parameter Description
series: String Name of data series
category: String Category of point (for category-based axes)
y: Double Y-value of point
asgnmtGroup: String Assignment group value of point
createBasicAppUserCategoryDataPoint()
Create a Kpi_data_set_point to add to the KPIChartData object, for use with category points for app user.
Parameter Description
series: String Name of data series
category: String Category of point (for category-based axes)
y: Double Y-value of point
appUser: String App user value of point
createBasicCategoryDataPoint()
Create a Kpi_data_set_point to add to the KPIChartData object, for use with category points.
Parameter Description
series: String Name of data series
category: String Category of point (for category-based axes)
y: Double Y-value of point
asgnmtGroup: String Assignment group value of point
appUser: String App user value of point
createBasicGlobalDataPoint()
Create a Kpi_data_set_point to add to the KPIChartData object.
Parameter Description
series: String Name of data series
x: Double, WFSDate, or WFSDateTime X-value of point
y: Double Y-value of point
createBasicAssignmentGroupDataPoint()
A more basic version of createAssignmentGroupDataPoint.
Parameter Description
series: String Name of data series
x: Double, WFSDate, or WFSDateTime X-value of point
y: Double Y-value of point
asgnmtGroup: String Assignment group value of point
createBasicAppUserDataPoint()
A more basic version of createAppUserDataPoint.
Parameter Description
series: String Name of data series
x: Double, WFSDate, or WFSDateTime X-value of point
y: Double Y-value of point
appUser: String App user value of point
createBasicDataPoint()
Create a Kpi_data_set_point to add to the KPIChartData object.
Parameter Description
series: String Name of data series
x: Double, WFSDate, or WFSDateTime X-value of point
y: Double Y-value of point
asgnmtGroup: String Assignment group value of point
appUser: String App user value of point
createGlobalDataPoint()
Create a Kpi_data_set_point to add to the KPIChartData object.
Parameter Description
series: String Name of data series
category: String Category of point (for category-based axes)
color: String Color of point
x: Double, WFSDate, or WFSDateTime X-value of point
y: Double Y-value of point
z: Double Z-value of point
createAssignmentGroupDataPoint()
Create a Kpi_data_set_point to add to the KPIChartData object for assignment group.
Parameter Description
series: String Name of data series
category: String Category of point (for category-based axes)
color: String Color of point
x: Double, WFSDate, or WFSDateTime X-value of point
y: Double Y-value of point
z: Double Z-value of point
asgnmtGroup: String Assignment group value of point
createAppUserDataPoint()
Create a Kpi_data_set_point to add to the KPIChartData object for app user.
Parameter Description
series: String Name of data series
category: String Category of point (for category-based axes)
color: String Color of point
x: Double, WFSDate, or WFSDateTime X-value of point
y: Double Y-value of point
z: Double Z-value of point
appUser: String App user value of point
copyDataPoint()
Creates a copy of the specified data point.
Parameter Description
point: Kpi_data_set_point The data set point to be copied
getPayCodeSetParameter()
Retrieves the pay code set with the specified ID.
Parameter Description
parameter_id: String Unique parameter identifier for pay code set
getBankSetParameter()
Retrieves the bank set with the specified ID.
Parameter Description
parameter_id: String Unique parameter identifier for bank set
getDataElementParameter()
Retrieves the data element with the specified ID.
Parameter Description
parameter_id: String Unique parameter identifier for data element
getStringParameter()
Retrieves the string with the specified ID.
Parameter Description
parameter_id: String Unique parameter identifier for string
getBooleanParameter()
Retrieves the boolean with the specified ID.
Parameter Description
parameter_id: String Unique parameter identifier for boolean
getDateParameter()
Retrieves the date with the specified ID.
Parameter Description
parameter_id: String Unique parameter identifier for date
getDateTimeParameter()
Retrieves the date time with the specified ID.
Parameter Description
parameter_id: String Unique parameter identifier for date time
getDoubleParameter()
Retrieves the double with the specified ID.
Parameter Description
parameter_id: String Unique parameter identifier for double
getIntegerParameter()
Retrieves the integer with the specified ID.
Parameter Description
parameter_id: String Unique parameter identifier for integer
getStandardDateRange()
Returns a localized date range.
Parameter Description
start: WFSDate Start date of the date range
end: WFSDate End date of the date range
getStart()
Returns the start date-time of the timeframe.
getEnd()
Returns the end date-time of the timeframe.
getLastPoint()
Returns the last data point in the timeframe.
addTimeIntervalToDateTime()
Given a date-time, adds the defined interval of time and returns the result.
Parameter Description
dateTime: WFSDateTime The date-time to add the interval to
getDateSqlForTimeUnit()
Given the name of a date or date-time field, creates the proper SQL required around the date field to group
all the records for an hour, day, week, or month together.
Parameter Description
dateField: String The field to get the date or date-time from
LD Data API
Overview and capabilities
The LD Data API lets you access Labor Distribution (LD) data from LD tables using the LdMatchCondition
defined in the API_UTIL library.
Prerequisites
To use this API, you should be familiar with the following functionality:
• Basic JavaScript coding
Components
The components of the LD Data API are as follows:
• The LD_DATA_API Distributed JavaScript library
• The LdMatchCondition defined in the API_UTIL library
Setup
No setup is necessary for the LD Data API. The distributed library is automatically available within
WT&A.
Use cases
Get an LD Record from a table using simple LdMatchCondition
// include the required libraries
includeDistributedPolicy("LD_DATA_API");
// Create the match condition (LdMatchCondition is defined in the API_UTIL library;
// constructor arguments are omitted here for brevity)
var condition = new LdMatchCondition();
// This will create an inner join for table LD8 and LD9 on LD8.LD2 = LD9.LD1
condition.addJoinCriteria(8, "LD2", 9, "LD1");
var api = new LDStoreAPI();
var result = api.lookupByMatchCondition(condition, WFSDate.today());
log.info("logging ld fields");
var logArray = [];
for (var i = 0; i < result.length; i++) {
logArray.push(result[i].ld2);
}
log.info(logArray.toString());
Troubleshooting
The job log of the script using the LD Data API will contain information messages and, in the case of
problems, any error messages generated during processing. This job log should be reviewed if there are any
problems encountered while using the API.
Some common error messages, their causes, and the solution:
Error Message: NaN received in summation
Problem: The API will not throw an error if the field is not supposed to be summed, but may return invalid values.
Solution: Make sure the field is a data type that supports the sum aggregate.
API Reference
Knowledge of JavaScript programming is necessary to best make use of this section.
The LD Data API consists of one public component:
o LDStoreAPI. It contains the following public method:
▪ lookupByMatchCondition(condition, effectiveDate), which returns the LD records that match the specified condition on the given date (as shown in the use case above)
LDStoreAPI
Creates a new instance of the LD Store API.
LD Import API
Overview and Capabilities
The LD Import API provides a mechanism for performing common LD imports.
Prerequisites
To use this API, you should be familiar with the following functionality:
• Basic JavaScript coding knowledge
• Understanding of policies and their settings
• Understanding of how the LD Import works and what it should do
• Knowledge of which methods must be called first before the import can be initiated. This
documentation will detail these methods.
Components
This API consists of the following component(s):
• The distributed JavaScript library LD_IMPORT_LIBRARY_API
Setup
No setup is necessary for the LD Import API. The distributed library is automatically available within WT&A.
Use Cases
Importing Records Using LD Import
Example 1: Importing records from a CSV file while validating the character set of a file
includeDistributedPolicy("LD_IMPORT_LIBRARY");
// mapping function is required
function mappingFunction(record) {
var obj = {
"ld1": record.getStringValue("LD1"),
"other_string2": record.getStringValue("OTHER_STRING2"),
"other_string3": record.getStringValue("OTHER_STRING3"),
"other_string4": record.getStringValue("OTHER_STRING4"),
"short_description": record.getStringValue("SHORT_DESCRIPTION"),
"eff_dt": record.getDateValue("EFF_DT", null, WFSDate.VERY_EARLY_DATE),
"end_eff_dt": record.getDateValue("END_EFF_DT", null, WFSDate.VERY_LATE_DATE),
"other_string1": record.getStringValue("OTHER_STRING1")
};
return obj;
}
var parms = {
decoderPolicy: "INT_362_LD_DECODER",
autoTrimValues: true,
sourceType: Import_source.CSV
};
var ldImportApi = new DecoderLdImportAPI(parms);
ldImportApi.setDestinationTable(42);
ldImportApi.setKeyFieldCount(2);
ldImportApi.wipeDataBeforeLoading(true);
ldImportApi.useCharacterSet("UTF-8");
ldImportApi.setEnforceCharsetEncoding(true);
ldImportApi.importDataFromFile(filePath, fileName, fieldNames);
Troubleshooting
The job log of the script using the LD Import API will contain information messages and, in the case of
problems, any error messages generated during processing. This job log should be reviewed if there are any
problems encountered while using the API.
Error Message: Destination table must be a number between 1 and <somenumber>
Problem: The number supplied to the setDestinationTable function is out of bounds
Solution: Supply a valid number

Error Message: No decoder policy specified
Problem: DecoderLdImportAPI was used but a decoder policy was not provided
Solution: Supply a decoder policy ID

Error Message: Invalid source type <sourceType> specified for Decoder. Valid options are [options]
Problem: The source type can either be CSV or FIXED_WIDTH
Solution: Supply a valid source type

Error Message: XX method on YY called
Problem: There are three types of LD Import APIs that can be initialized (WebServiceLdImportAPI, DecoderLdImportAPI, and LdImportAPI), and each supports only its own methods
Solution: N/A. Only relevant/valid methods can be called
API Reference
Knowledge of JavaScript programming is necessary to best make use of this section.
The following is a summary of the available methods and common uses:
LdImportAPI
LdImportAPI(mappingFunction: function, validationFunction: function)
Creates a new instance of the LD Import API, capable of running imports from a SQL source, CSV file, or Fixed-
Width file. Takes two functions as parameters, mappingFunction and validationFunction, where
validationFunction is optional.
DecoderLdImportAPI(params: Object)
Creates a new instance of the LD Import API, capable of running imports from a SQL source, CSV file, or Fixed-
Width file. Since it is decoder based, a mappingFunction is not supplied. Instead, takes a JS Object as the
parameter with the following properties:
{
    decoderPolicy: String,
    autoTrimValues: Boolean <optional, default = false>,
    sourceColumnOverride: String <optional, default = "SOURCE">,
    sourceType: String <optional, default = "CSV", possible values = CSV|FIXED_WIDTH>
}
WebServiceLdImportAPI(decoderPolicy: String)
Creates a new instance of the LD Import API, capable of running imports from a ResultSet using web services.
This type of LD import is used by E2G and would not normally be needed directly.
setMappingFunction(mappingFunction: function)
Only needed with DecoderLdImportAPI if there are fields that need to be mapped which the decoder does
not handle.
setValidationFunction(validationFunction: function)
Only needed with DecoderLdImportAPI if a validationFunction must be specified.
setDestinationTable(table: int)
Sets the destination LD table for the import. Must be set before the LD import can be run.
setKeyFieldCount(keyFieldCount: int)
The number of LD fields that form the key of the LD data. Defaults to 1.
setStandardDateFormat(format: String)
Sets the standard date format for all dates processed by the LD import. Default is "yyyy-MM-dd".
setLocale(locale: Locale)
Sets the locale for the date format for all dates processed by the LD import.
wipeDataBeforeLoading(doWipe: Boolean)
Sets the option that determines if the LD import will first wipe all data in the destination table before
importing. Defaults to false.
createRetroTriggers(createRetroTriggers: Boolean)
Indicates whether the LD import will create retro triggers or not. Defaults to false.
setRetroTriggerFunction(retroTriggerFunction: function)
Sets the function to use to determine which assignments should have retro triggers created when a change is
detected to an LD record. This function should accept two arguments, the LD record that caused the retro
trigger to be created and a connection to the local database, and should return an array of the assignment
IDs for the assignments that should have the retro triggers applied to them.
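A minimal sketch of such a function follows, assuming a hypothetical data model in which the changed LD value is stored on the assignment's other_string1 field; the query and field names are illustrative only.
// Return the assignment IDs that should receive retro triggers when the
// given LD record changes (illustrative lookup; adjust to your data model)
function retroTriggerFunction(ldRecord, connection) {
    var asgnmtIds = [];
    var result = new Sql(connection,
        "select asgnmt from asgnmt where other_string1 = '" + ldRecord.ld1 + "'");
    while (result.next()) {
        asgnmtIds.push(result.asgnmt);
    }
    return asgnmtIds;
}
ldImportApi.setRetroTriggerFunction(retroTriggerFunction);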
setEmployeeMatchField(employeeMatchField: String)
Sets the employee match field (required for retro triggers).
setAssignmentMatchField(assignmentMatchField: String)
Sets the assignment match field (required for retro triggers).
useJoinImport(useJoinImport: Boolean)
Indicates whether to use the ld_join or ld table for import.
setAllowExtraColumns(doAllow: Boolean)
Indicates if extra columns in the source data should be tolerated. Default is true.
useCharacterSet(charSet: String)
Defines the character set to use for the input file.
setEncrypted(encrypted: Boolean)
Indicates if the source file is encrypted.
setEnforceCharsetEncoding(enforceCharsetEncoding: Boolean)
Indicates whether the character set of the file must match the one specified via useCharacterSet(). Defaults to false.
importSingleRecord(sourceData: JSObject)
Loads data from a JavaScript object into an LD table.
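As a hedged example, a single record could be loaded as follows, reusing the field names from the use case earlier in this section; the key value is a placeholder.
// Load one LD record from a JavaScript object
ldImportApi.importSingleRecord({
    "ld1": "COST_CENTER_100",                // placeholder key value
    "short_description": "Cost center 100",
    "eff_dt": WFSDate.today(),
    "end_eff_dt": WFSDate.VERY_LATE_DATE
});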
Line Approval API
Prerequisites
To use this API, you should be familiar with the following functionality:
• Basic JavaScript coding
• How Line Approvals function in WorkForce Time and Attendance
Components
This API consists of the following component(s):
1. The distributed JavaScript library LINE_APPROVAL_API.
2. The RoutingApproval Java class.
Setup
No setup is necessary for the Line Approval API. The distributed library is automatically available within
WorkForce Time and Attendance.
Use Cases
Adding New Routings to the Existing Routing Data
The simplest use case for the Line Approval API is to just add additional routings to the existing routing data,
without changing any of those existing records. This would be used if your source data is only defining new
routings that need to be created, or if the routing data is being cleared out ahead of time through some other
means.
The following example demonstrates how new routing data can be added to the existing routing data. It
iterates through the records in a file, constructs a new RoutingApproval for each record, and then adds that
RoutingApproval into the routing data.
includeDistributedPolicy("LINE_APPROVAL_API");
// Create a new instance of the line approval API. This is done in a try-finally
// block in order to ensure that the API is cleaned up correctly when it is no
// longer needed
var api;
try {
    api = new LineApprovalAPI();
    // (For each record in the source file, construct a new RoutingApproval here;
    // the record-parsing logic is elided in this excerpt)
    // Append the new routing approval to the existing routing data
    api.importSingleRoutingApproval(routingApproval);
}
finally {
    // Clean up the API now that it is done being used
    api.close();
}
Replacing Existing Routing Data
There are two approaches to replacing existing routing data. The first approach is to replace all of the
existing routing data with the new data being imported; any existing routing approval that is not part of the
new data set will be removed. The following script example demonstrates this approach:
Example 1: Replacing all existing routing data with new data
includeDistributedPolicy("LINE_APPROVAL_API");
// Define an array to hold the routing approval data that should exist after
// the import is complete
var routingApprovals = [];
// (For each record in the source data, construct a new RoutingApproval here)
// Add the new routing approval to the array of data being tracked
routingApprovals.push(routingApproval);
// Create a new instance of the line approval API. This is done in a try-finally
// block in order to ensure that the API is cleaned up correctly when it is no
// longer needed
var api;
try {
    api = new LineApprovalAPI();
    // Use the API to replace the routing approvals. Any existing routing approval
    // not defined in the provided array will be removed.
    api.replaceAllRoutingApprovals(routingApprovals);
}
finally {
    // Clean up the API now that it is done being used
    api.close();
}
The second approach is to replace all of the existing routing data for the users with the new data being
imported, but to leave the routing data for users that aren’t in the import data set unchanged. This assumes
that the file includes data for only the users that have updates to their routing data. The following script
example demonstrates using the Line Approval API to replace the existing routing data for users that have
new data from a source file:
Example 2: Replacing routing information for just the users with new data
includeDistributedPolicy("LINE_APPROVAL_API");
// Define an array to hold the routing approval data that should exist after
// the import is complete
var routingApprovals = [];
// (For each record in the source data, construct a new RoutingApproval here)
// Add the new routing approval to the array of data being tracked
routingApprovals.push(routingApproval);
// Create a new instance of the line approval API. This is done in a try-finally
// block in order to ensure that the API is cleaned up correctly when it is no
// longer needed
var api;
try {
api = new LineApprovalAPI();
// Use the API to replace the routing approvals. Any existing routing approval
// not defined in the provided array will be removed for the users associated
// with the new routing approvals. Existing routing data for users not specified
// in the new data will not be modified.
api.replaceRoutingApprovalsForUsers(routingApprovals);
}
finally {
// Clean up the API now that it is done being used
api.close();
}
The routing data associated with a particular match policy can also be cleared entirely, as the following
example demonstrates:
// Create a new instance of the line approval API. This is done in a try-finally
// block in order to ensure that the API is cleaned up correctly when it is no
// longer needed
var api;
try {
api = new LineApprovalAPI();
// Define the name of the match policy whose routing data should be cleared
var matchPolicy = "LINE_APPROVAL";
// Clear all of the routing data associated with that match policy
api.clearAllRoutingApprovals(matchPolicy);
}
finally {
// Clean up the API now that it is done being used
api.close();
}
Troubleshooting
The job log of the script using the Line Approval API will contain information messages and, in the case of
problems, any error messages generated during processing. This job log should be reviewed if there are any
problems encountered while using the API.
Error Message: This instance of the Line Approval API is disabled and no further action can be taken with it. Please create a new instance of the API in order to continue with any further actions.
Problem: Additional calls were made to the API after close() had already been called.
Solution: Ensure that close() is not called until no further operations remain to be performed by the API.
Table 59: Line Approval API typical error messages, root problems, and solutions
API Reference
Knowledge of JavaScript programming is necessary to best make use of this section.
The Line Approval API consists of the following component(s):
1. The RoutingApproval, which defines the information for a single routing of line items for a user
2. The LineApprovalAPI, which defines operations for updating the routing approvals within WorkForce
Time and Attendance
See the contents of the LINE_APPROVAL_API policy in the Distributed JavaScript Library category for full
documentation on these methods. The following is a summary of the available methods and common uses:
RoutingApproval
RoutingApproval(parms)
Defines a new routing approval with the indicated settings. Once created, a RoutingApproval can no longer
be modified. These RoutingApprovals can then be used by the LineApprovalAPI for updating the routing
information for line items within WorkForce Time and Attendance.
The available parameter values are as follows:
Parameter Name Description
userLoginId The login ID of the user who should receive the routing approval
matchPolicy The policy ID of the match policy associated with this approval
matchValue The value that needs to match the TIME_SHEET_DETAIL_MATCH_VAL
result in order for a line to be eligible for approval
lineRole The Line Role associated with the approval rights
minApproval The minimum approval level a line item can have to be visible to the
approver. Defaults to 0 if not specified.
maxApproval The maximum approval level a line item can have to be visible to the
approver. Defaults to 99 if not specified.
nextApproval The approval level that will be assigned to the line item after it’s
approved by this user. Defaults to 100 if not specified.
rejectApproval The approval level that will be assigned to the line item if this user
rejects it. Defaults to 0 if not specified.
rightType Indicates the type of line right represented by this record. One of either
PRIMARY or BACKUP. Defaults to PRIMARY if not specified.
description A description of the rights
Table 60: Available parameters when instantiating a RoutingApproval object
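For illustration, a RoutingApproval might be constructed as in the following sketch; all of the values shown are placeholders.
var routingApproval = new RoutingApproval({
    userLoginId: "E123456",          // placeholder user login ID
    matchPolicy: "LINE_APPROVAL",    // match policy, as in the earlier example
    matchValue: "DEPT_100",          // placeholder TIME_SHEET_DETAIL_MATCH_VAL result
    lineRole: "SUPERVISOR",          // placeholder Line Role
    minApproval: 0,
    maxApproval: 99,
    nextApproval: 100,
    rightType: "PRIMARY",
    description: "Primary approver for department 100"
});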
LineApprovalAPI
LineApprovalAPI(parms)
Creates a new instance of the Line Approval API, using the specified parameters. The available parameters
that can be set are:
Parameter Name Description
connection A connection to the local WorkForce Time and Attendance database to
use for writing changes. If no connection is specified in the parameters,
the API will create its own connection to use and changes will be
automatically committed as soon as they are applied.
debug Determines if additional debug output should be written to the job log
Table 61: Available parameters when instantiating a LineApprovalAPI object
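For example, an instance that manages its own connection and writes additional debug output could be created as in this sketch.
// No connection supplied: the API creates its own connection and commits
// changes as they are applied. Debug output is enabled.
var api = new LineApprovalAPI({ debug: true });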
importSingleRoutingApproval(routingApproval)
Appends a new routing approval record to the existing routing approval data, using the provided settings.
Existing routing approval data remains unmodified.
replaceAllRoutingApprovals(routingApprovals)
Removes all of the existing routing approval data and replaces it with the data defined by the provided array
of RoutingApproval records. Even if the new data does not contain information for a particular user, the
existing routing approvals for that user will be removed.
replaceRoutingApprovalsForUsers(routingApprovals)
Removes all of the existing routing approval data and replaces it with the data defined by the provided array
of RoutingApproval records for the users with routing approvals defined in the provided array. If the new
data does not contain information for a particular user, the existing routing approvals for that user will
remain unchanged.
clearAllRoutingApprovals(matchPolicy)
Removes all of the existing routing approval information for the specified match policy.
close()
Cleans up all connections used by the API. This needs to be called, regardless of whether a connection was
supplied when instantiating the API, in order to prevent connection leaks by the script.
Person Data API
Prerequisites
To use this API, you should be familiar with the following functionality:
• Basic JavaScript coding
Components
This API consists of the following component(s):
1. The PERSON_DATA_API Distributed JavaScript library
a. This automatically includes the API_UTIL Distributed JavaScript library
2. The MatchCondition, which defines a condition to select employee, assignment, or user records.
Setup
No setup is necessary for the Person Data API. The distributed library is automatically available in WorkForce
Time and Attendance.
Use Cases
Looking up a single effective-dated employee record
This script excerpt demonstrates how the Person Data API can be used to find a single effective-dated
employee record. A match condition is defined that matches any employee with a display ID of “E284224”,
and then the API retrieves the data for the matching employee that is effective on the current system date.
includeDistributedPolicy("PERSON_DATA_API");
Note: The condition used here must match exactly zero or one employees. If more than one employee
matches the condition, an exception will be generated.
// Define the match condition to identify the employee. This condition will
// match the employee with display ID E284224
var condition = new MatchCondition("employee", "display_employee",
    MatchOperator.EQUALS, "E284224");
// (The call that retrieves the effective-dated data for the matching employee
// into 'data' is elided in this excerpt)
if (data.length === 0) {
log.info("No employee data was found for employee E284224");
}
else {
// Do something with the data, based on the needs of the script
for (var i = 0; i < data.length; ++i) {
var effDatedRecord = data[i];
log.info("Employee E284224 has a record effective from " +
effDatedRecord.eff_dt + " through " + effDatedRecord.end_eff_dt);
}
}
If the specified match condition doesn’t match any employees, then the array of records returned will have a
zero length. If more than one employee matches the condition specified, then an exception will be
generated.
// Retrieve all of the employees that match the condition on that date
var employees = personDataAPI.getAllEmployees(condition, effDt);
if (employees.length === 0) {
log.info("No employee data was found for country code USA");
}
else {
// Do something with the data, based on the needs of the script
for (var i = 0; i < employees.length; ++i) {
var employee = employees[i];
log.info("Employee " + employee.display_employee +
" has a country code of USA on " + effDt);
}
}
If the specified condition doesn’t match any employees, then the array of records returned will be empty.
// Define the match condition to identify the employee. This condition will
// match the employee with display ID E837645
var condition = new MatchCondition("employee", "display_employee",
    MatchOperator.EQUALS, "E837645");
Note: The condition used here must match exactly zero or one employees. If more than one employee
matches the condition, an exception will be generated.
// Retrieve all of the employees that match the condition on that date
// that have at least one active assignment
var employees = personDataAPI.getActiveEmployees(condition, effDt);
if (employees.length === 0) {
log.info("No active employee data was found for country code USA");
}
else {
// Do something with the data, based on the needs of the script
for (var i = 0; i < employees.length; ++i) {
var employee = employees[i];
log.info("Employee " + employee.display_employee +
If no active employees match the specified condition, then an empty array will be returned.
// Retrieve all of the employees that match the condition on that date
// that have at least one active or soft-terminated assignment
var employees = personDataAPI.getActiveOrSoftTerminatedEmployees(condition, effDt);
if (employees.length === 0) {
log.info("No active employee data was found for country code USA");
}
else {
// Do something with the data, based on the needs of the script
for (var i = 0; i < employees.length; ++i) {
var employee = employees[i];
log.info("Employee " + employee.display_employee +
" has a country code of USA on " + effDt);
}
}
If no employee with at least one active or soft-terminated assignment matches the specified condition, then
an empty array will be returned.
includeDistributedPolicy("PERSON_DATA_API");
Note: The condition used here must match exactly zero or one assignments for the indicated employee. If
more than one assignment matches the condition, an exception will be generated.
// Retrieve the assignment that matches the assignment condition, for the
// employee matching the employee condition, on that date
var assignment = personDataAPI.getAsgnmtRecord(employeeCondition,
assignmentCondition, effDt);
Note: The employee condition used here must match exactly zero or one employees and the assignment
condition must match exactly zero or one assignments belonging to the matching employee. If more
than one employee or assignment matches the conditions, an exception will be generated.
// Retrieve the full history for the assignment that matches the condition
var data = personDataAPI.getAsgnmtHistoryForEmployee(employeeId, condition,
effDt);
if (data.length === 0) {
log.info("No primary assignment was found for employee " + employeeId);
}
else {
// Do something with the data, based on the needs of the script
for (var i = 0; i < data.length; ++i) {
var assignment = data[i];
log.info("Primary assignment for employee " + employeeId +
" belongs to policy profile " + assignment.policy_profile +
" between " + assignment.eff_dt + " and " + assignment.end_eff_dt);
}
}
If no assignment belonging to the indicated employee matches the specified condition, then an empty array
will be returned.
// Retrieve the full history for the assignment that matches the condition
var data = personDataAPI.getAsgnmtHistory(employeeCondition, assignmentCondition);
if (data.length === 0) {
log.info("No primary assignment was found for the matching employee");
}
else {
// Do something with the data, based on the needs of the script
for (var i = 0; i < data.length; ++i) {
var assignment = data[i];
log.info("Primary assignment for the matching employee" +
" belongs to policy profile " + assignment.policy_profile +
" between " + assignment.eff_dt + " and " + assignment.end_eff_dt);
}
}
If no assignment belonging to the matching employees matches the specified condition, then an empty array
will be returned.
Note: The employee condition must match exactly zero or one employees. If more than one employee matches
the condition, an exception will be generated.
// Retrieve all of the assignments that match the condition on that date
var assignments = personDataAPI.getAllAsgnmts(condition, effDt);
if (assignments.length === 0) {
log.info("No assignments were found in policy profile EXEMPT on " + effDt);
}
else {
// Do something with the data, based on the needs of the script
for (var i = 0; i < assignments.length; ++i) {
var assignment = assignments[i];
log.info("Assignment with ID " + assignment.computed_match_id +
" belongs to policy profile EXEMPT on " + effDt);
}
}
If the specified condition doesn’t match any assignments, then the array of records returned will be empty.
// Retrieve the assignment master record for the assignment that matches the condition
var assignmentMaster = personDataAPI.getAsgnmtMasterForEmployee(employeeId,
condition);
Note: The condition used here must match exactly zero or one assignments for the indicated employee. If
more than one assignment matches the condition, an exception will be generated.
// Retrieve the assignment master record for the assignment that matches
// both the employee and assignment conditions
var assignmentMaster = personDataAPI.getAsgnmtMaster(employeeCondition,
assignmentCondition);
Note: The employee condition used here must match exactly zero or one employees and the assignment
condition must match exactly zero or one assignments belonging to the matching employee. If more
than one employee or assignment matches the conditions, an exception will be generated.
// Retrieve all of the assignments that match the condition and are active on that
// date
var assignments = personDataAPI.getActiveAssignments(condition, effDt);
if (assignments.length === 0) {
log.info("No assignments were found in policy profile EXEMPT on " + effDt);
}
else {
// Do something with the data, based on the needs of the script
for (var i = 0; i < assignments.length; ++i) {
var assignment = assignments[i];
log.info("Assignment with ID " + assignment.computed_match_id +
" belongs to policy profile EXEMPT on " + effDt);
}
}
If no active assignments match the condition, then an empty array will be returned.
// Retrieve all of the assignments that match the condition and are active on that date
var assignments = personDataAPI.getActiveOrSoftTerminatedAssignments(condition, effDt);
if (assignments.length === 0) {
log.info("No assignments were found in policy profile EXEMPT on " + effDt);
}
else {
// Do something with the data, based on the needs of the script
for (var i = 0; i < assignments.length; ++i) {
var assignment = assignments[i];
log.info("Assignment with ID " + assignment.computed_match_id +
" belongs to policy profile EXEMPT on " + effDt);
}
}
If no active or soft-terminated assignments match the condition, then an empty array will be returned.
// Define the match condition to identify the user. This condition will
// match the user with the login ID E766598
var condition = new MatchCondition("app_user", "login_id",
MatchOperator.EQUALS, "E766598");
// Retrieve the user information for the user matching the condition
var user = personDataAPI.getUser(condition);
Note: The condition used here must match exactly zero or one users. If more than one user matches the
condition, an exception will be generated.
// Define the match condition to identify the user. This condition will
// match all users with a login ID beginning with the letter E
var condition = new MatchCondition("app_user", "login_id",
MatchOperator.LIKE, "E%");
// Retrieve the user information for the user matching the condition
var users = personDataAPI.getAllUsers(condition);
if (users.length === 0) {
log.info("No user records found with login ID beginning with E");
}
else {
// Do something with the data, based on the needs of the script
for (var i = 0; i < users.length; ++i) {
var user = users[i];
log.info("User email address for user with login ID " + user.login_id +
" is " + user.email_address);
}
}
If the specified condition doesn’t match any users, then the array of records returned will be empty.
// Retrieve the user information for the users in the specified roles
var users = personDataAPI.getUsersForGeneralRoles(roles, effDt);
if (users.length === 0) {
log.info("No user records found with the specified roles");
}
else {
// Do something with the data, based on the needs of the script
for (var i = 0; i < users.length; ++i) {
var user = users[i];
log.info("User with login ID " + user.login_id +
" has one of the indicated roles");
}
}
If no users are found matching the specified roles, then the array of users returned will be empty.
// Retrieve the user information for the non-terminated users in the specified roles
var users = personDataAPI.getNonTerminatedUsersForGeneralRoles(roles, effDt);
if (users.length === 0) {
log.info("No non-terminated user records found with the specified roles");
}
else {
// Do something with the data, based on the needs of the script
for (var i = 0; i < users.length; ++i) {
var user = users[i];
log.info("User with login ID " + user.login_id +
" has one of the indicated roles");
}
}
If no non-terminated users are found matching the specified roles, then the array of users returned will be
empty.
// Retrieve the user information for the users in the specified group roles
// (the getUsersForGroupRoles call shown here parallels getUsersForGeneralRoles above)
var users = personDataAPI.getUsersForGroupRoles(roles, effDt);
if (users.length === 0) {
log.info("No user records found with the specified roles");
}
else {
// Do something with the data, based on the needs of the script
for (var i = 0; i < users.length; ++i) {
var user = users[i];
log.info("User with login ID " + user.login_id +
" has one of the indicated roles");
}
}
If no users are found matching the specified roles, then the array of users returned will be empty.
// Retrieve the user information for the non-terminated users in the specified roles
var users = personDataAPI.getNonTerminatedUsersForGroupRoles(roles, effDt);
if (users.length === 0) {
log.info("No non-terminated user records found with the specified roles");
}
else {
// Do something with the data, based on the needs of the script
for (var i = 0; i < users.length; ++i) {
var user = users[i];
log.info("User with login ID " + user.login_id +
" has one of the indicated roles");
}
}
If no non-terminated users are found matching the specified roles, then the array of users returned will be
empty.
Example 1: Determining the general role when user data has already been looked up
includeDistributedPolicy("PERSON_DATA_API");
// Retrieve the user information for the users in the specified roles
var generalRoles = personDataAPI.getAllGeneralRolesForUser(userId, effDt);
Note: If the specified user doesn’t exist, or doesn’t have a general role assigned, then an exception will be
generated.
Example 2: Determining the general role when the user is not yet known
includeDistributedPolicy("PERSON_DATA_API");
// Define the condition to select the user. In this case, the condition
// will select the user with login ID E884521
var condition = new MatchCondition("app_user", "login_id",
MatchOperator.EQUALS, "E884521");
// Retrieve the user information for the users in the specified roles
var generalRoles = personDataAPI.getAllGeneralRoles(condition, effDt);
Note: If the specified condition doesn’t match exactly one user, with a general role defined, then an exception
will be generated.
var queryResultCount = 0;
// (The call to getDistinctValues() that populates 'distinctValuesMethod' is
// elided in this excerpt)
// query to verify the results returned by getDistinctValues()
var query = "select distinct ACT_COMPANY from EMPLOYEE where {ts '2017-01-01 00:00:00.000'} between EFF_DT and END_EFF_DT";
var c = new Connection();
var distinctValuesQuery = [];
try {
    var result = new Sql(c, query);
    while (result.next()) {
        queryResultCount++;
        distinctValuesQuery.push(result.ACT_COMPANY);
    }
}
finally {
    // Ensure the database connection is cleaned up
    c.close();
}
distinctValuesMethod.sort();
distinctValuesQuery.sort();
if (distinctValuesMethod.length != distinctValuesQuery.length) {
    throw "Values returned by method do not match values returned by query";
}
Troubleshooting
The job log of the script using the Person Data API will include informational messages, and in the case of
problems, error messages. This job log should be reviewed if there are problems using the API.
Some common error messages, their causes, and the solution:
Error Message: Input parameter PARAMETER DESCRIPTION not provided
Cause: An API method was called without a value being provided for a required argument
Solution: Check that all calls to the API methods are supplying values for all of the required arguments

Error Message: Multiple general roles found matching specified criteria
Cause: More than one record was returned by a call to getGeneralRole
Solution: Check that the condition specified in the call to getGeneralRole returns only one user, and that that user doesn’t have overlapping general role records

Error Message: No general role defined for user matching specified criteria
Cause: No records were returned by a call to getAllGeneralRoles or getAllGeneralRolesForUser
Solution: Check that the condition or user ID matches a valid existing user and that the user is correct
API Reference
Knowledge of JavaScript programming is necessary to best make use of this section.
The Person Data API consists of the following component(s):
1. The MatchCondition, which defines a condition to select employee, assignment, or user records
a. This MatchCondition makes use of a MatchOperator which defines a comparison operation
to use in a condition for matching a specified value against the employee/assignment/user
data.
2. The PersonDataAPI, which provides assorted methods for accessing employee, assignment, or user
data.
See the contents of the PERSON_DATA_API policy in the Distributed JavaScript Library category of the Policy
Editor for full documentation on these methods. The following is a summary of the available methods and
common uses:
MatchOperator
EQUALS
Only data where the value in the specified field exactly matches the indicated value will be matched by the
condition
NOT_EQUALS
Only data where the value in the specified field does not match the indicated value will be matched by the
condition
GREATER_THAN
Only data where the value in the specified field is greater than the indicated value will be matched by the
condition. For string fields this is applied lexicographically, meaning that “3” is greater than “20”.
GREATER_THAN_OR_EQUALS
Only data where the value in the specified field exactly matches or is greater than the indicated value will be
matched by the condition. For string fields this is applied lexicographically, meaning that “3” is greater than
“20”.
LESS_THAN
Only data where the value in the specified field is less than the indicated value will be matched by the
condition. For string fields this is applied lexicographically, meaning that “20” is less than “3”.
LESS_THAN_OR_EQUALS
Only data where the value in the specified field exactly matches or is less than the indicated value will be
matched by the condition. For string fields this is applied lexicographically, meaning that “20” is less than
“3”.
IN
Only data where the value in the specified field exactly matches one of the values in the indicated array of
values will be matched by the condition.
NOT_IN
Only data where the value in the specified field does not match any of the values in the indicated array of
values will be matched by the condition.
LIKE
Only data where the value in the specified field matches the pattern defined by the indicated value will be
matched by the condition.
NOT_LIKE
Only data where the value in the specified field does not match the pattern defined by the indicated value
will be matched by the condition.
BETWEEN
Only data where the value in the specified field falls between the two values defined by the indicated array of
values (inclusive of the two endpoints) will be matched by the condition. For string fields this is applied
lexicographically, meaning that "5" is between "37" and "62".
MatchCondition
MatchCondition(table, field, operator, values, isNotOperator)
Creates a new MatchCondition for the indicated table/field combination that matches against the specified
value(s) using the indicated MatchOperator.
If the MatchOperator specified is IN or NOT_IN, then values is expected to be an array of all the different
values to be evaluated by the “in” operation. If the MatchOperator specified is BETWEEN, then values is
expected to be an array containing two values: the starting point and ending point of the range to be
evaluated by the “between” operation. For all other MatchOperators, values is expected to be a single
value.
The isNotOperator controls whether the results of the MatchCondition should be negated. If this is set to
true, then a MatchCondition that normally evaluates to false will instead evaluate to true, and a
MatchCondition that normally evaluates to true will instead evaluate to false. This allows for conditions that
don’t have an explicit MatchOperator defined, such as “not between”, to be defined.
and(condition)
Modifies the MatchCondition to return only the intersection of its original condition and the specified
condition.
or(condition)
Modifies the MatchCondition to return the union of its original condition and the specified condition.
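For example, the following sketch builds a "not between" condition using the isNotOperator flag and then intersects it with a second condition (the field values shown are illustrative):
// Match employees whose ACT_COMPANY is NOT between "A" and "M"
var condition = new MatchCondition("employee", "act_company",
    MatchOperator.BETWEEN, ["A", "M"], true);
// Intersect with a second condition so only display IDs starting with "11" match
condition.and(new MatchCondition("employee", "display_employee",
    MatchOperator.LIKE, "11%"));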
PersonDataAPI
PersonDataAPI()
Creates a new instance of the Person Data API
getEmployeeRecord(condition, effDate)
Returns a single effective-dated employee record for the employee matching the specified condition on the
indicated date.
getEmployeeHistory(condition)
Returns the full effective-dated employee history for the employee matching the specified condition.
Records will be ordered from earliest effective date to latest effective date.
getAllEmployees(matchCondition, effDate)
Returns a single effective-dated employee record for each employee matching the specified condition on the
indicated date.
getAllEmployeeHistory(matchCondition)
Returns the full effective-dated employee history for each employee matching the specified condition. For
each employee, records will be ordered from earliest effective date to latest effective date.
getEmployeeMaster(condition)
Returns a single employee master record for the employee matching the specified condition.
getActiveEmployees(matchCondition, effDate)
Returns a single effective-dated employee record for each employee matching the specified condition on the
indicated date where that employee has at least one assignment that is not in a hard-terminated or soft-
terminated state on that date.
getActiveOrSoftTerminatedEmployees(matchCondition, effDate)
Returns a single effective-dated employee record for each employee matching the specified condition on the
indicated date where that employee has at least one assignment that is not in a soft-terminated state on that
date.
getEmployeeStageRecord(condition, effDate)
Returns a single employee stage record for the employee matching the specified condition. If no effective
date is specified, the condition must match to a single record. If an effective date is specified, then the
condition must match to a single employee and the record for that employee effective on the indicated date
will be returned.
getAllEmployeeStageRecords(matchCondition, effDate)
Returns all employee stage records matching the specified condition. If an effective date is specified, only
those records matching the condition on the indicated date will be returned.
clearEmployeeStageRecords(condition)
Deletes all records from the employee stage table that match the specified condition.
clearAllEmployeeStageRecords()
Deletes all records from the employee stage table.
getAsgnmtHistoryForEmployee(employeeId, condition)
Returns the full effective-dated assignment history for the employee with the indicated ID where the
assignment matches the specified condition.
getAsgnmtHistory(employeeCondition, asgnmtCondition)
Returns the full effective-dated assignment history where the assignment matches the specified assignment
condition and the assignment belongs to the employee matching the specified employee condition. Records
will be ordered from earliest effective date to latest effective date.
getAllAsgnmts(matchCondition, effDate)
Returns the single effective-dated assignment record for all of the assignments matching the specified
condition on the indicated date.
getAllAsgnmtHistory(matchCondition)
Returns the full effective-dated assignment history for each assignment matching the specified condition.
For each assignment, records will be ordered from earliest effective date to latest effective date.
getAsgnmtMasterForEmployee(employeeId, condition)
Returns the assignment master record for the employee with the specified ID that matches the indicated
condition.
getAsgnmtMaster(employeeCondition, asgnmtCondition)
Returns the assignment master record for the assignment matching the specified assignment condition
belonging to the employee matching the specified employee condition.
getActiveAssignments(matchCondition, effDate)
Returns the single effective-dated assignment record for each assignment matching the specified condition
on the indicated date where that assignment is not in a hard-terminated or soft-terminated state.
getActiveOrSoftTerminatedAssignments(matchCondition, effDate)
Returns the single effective-dated assignment record for each assignment matching the specified condition
on the indicated date where that assignment is not in a hard-terminated state.
getAsgnmtStageRecord(asgnmtCondition, effDate)
Returns a single assignment stage record for the assignment matching the specified condition. If no effective
date is specified, the condition must match to a single record. If an effective date is specified, then the
condition must match to a single assignment and the record for that assignment effective on the indicated
date will be returned.
clearAsgnmtStageRecords(condition)
Deletes all records from the assignment stage table that match the specified condition.
clearAllAsgnmtStageRecords()
Deletes all records from the assignment stage table.
getUser(condition)
Returns the user record for the user matching the specified condition.
getAllUsers(matchCondition)
Returns the user records for all users matching the specified condition.
getUsersForGeneralRoles(generalRoleIds, effDate)
Returns the user records for every user assigned to one of the specified general roles on the indicated date.
getNonTerminatedUsersForGeneralRoles(generalRoleIds, effDate)
Returns the user records for non-terminated users assigned to one of the specified general roles on the
indicated date.
getUsersForGroupRoles(groupRoleIds, effDate)
Returns the user records for every user delegated rights to an assignment group with one of the specified
group roles on the indicated date.
getNonTerminatedUsersForGroupRoles(groupRoleIds, effDate)
Returns the user records for non-terminated users delegated rights to an assignment group with one of the
specified group roles on the indicated date.
getAllGeneralRoles(condition, effDate)
Returns all the general roles associated with the user matching the specified condition effective on the
indicated date.
getAllGeneralRolesForUser(appUser, effDate)
Returns all the general roles associated with the user with the specified internal user ID effective on the
indicated date.
isTerminatedUser(appUser, effDate)
Determines if an app user is terminated by evaluating the user's roles. This method is valid for both stackable
general role configurations and single general role configurations.
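For instance (a minimal sketch; userId and effDt are assumed to have been defined earlier in the script):
// Check whether the user is terminated as of the effective date
if (personDataAPI.isTerminatedUser(userId, effDt)) {
    log.info("User " + userId + " is terminated as of " + effDt);
}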
Policy Creation API
Prerequisites
To use this API, you should be familiar with the following functionality:
• Basic JavaScript coding
• Understanding of policies and their settings
Components
This API consists of the following component(s):
• The distributed JavaScript library POLICY_CREATION_API.
Setup
No setup is necessary for the Policy Creation API. The distributed library is automatically available within
WorkForce Time and Attendance.
Use Cases
Creating Schedule Templates
The Policy Creation API allows for the creation of new Schedule Templates and updates to existing Schedule
Templates. Since Schedule Templates are effective dated, these updates can take the form of either an in-
place update to the existing data (if the same effective date is specified) or the creation of a new
effective-dated row for the policy (if a later effective date is specified).
The following script example demonstrates using the Policy Creation API to create a new Schedule Template
policy named “M_F_8A_5P”, which defines a schedule from 8:00am – 5:00pm for Monday through Friday:
// Create a new instance of the API to use for importing policy data
var api = new PolicyCreationAPI();
// Define the details that will be included in the policy definition. In this case,
// this is a single slice running from 8:00am - 5:00pm for Monday through Friday,
// using the pay code SCHEDULED_IO.
var details = [];
for (var i = 2; i <= 6; ++i) { // 2 = Monday, 6 = Friday
// Define the settings for this detail
var detailParms = {
payCode: "SCHEDULED_IO", // The pay code for the detail
dayNumber: i, // Which day the detail is defining
startDayNumber: i, // The start time falls on the same day
endDayNumber: i, // The end time falls on the same day
details: {
start_tm: new WTime(8, 0, 0), // Starting at 8:00am
end_tm: new WTime(17, 0, 0) // Ending at 5:00pm
}
};
// Create the detail with the specified settings
var detail = new ScheduleTemplateSliceParms(detailParms);
// Add the detail to the detail array
details.push(detail);
}
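The remainder of the example is a sketch based on the ScheduleTemplateParms, PolicyDefinition, and createPolicy methods documented in the API Reference below (the description text is illustrative):
// Define the settings for the schedule template, using the details defined above
var templateParms = new ScheduleTemplateParms({
    weekStartDay: 1, // The template starts on Sunday
    details: details // The slices defined above
});
// Create the policy definition and import it
api.createPolicy(new PolicyDefinition("M_F_8A_5P", templateParms,
    "Monday-Friday 8:00am-5:00pm"));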
Creating Schedule Cycles
// Create a new instance of the API to use for importing policy data
var api = new PolicyCreationAPI();
// The order of templates in this array will define the order in which they
// occur in the cycle. (Array element 0 will be the first template in the
// cycle, etc.)
var templates = [
"WORKING_SMTW__S_8A_5P",
"WORKING_SMT__FS_8A_5P",
"WORKING_SM__TFS_8A_5P",
"WORKING_S__WTFS_8A_5P",
"WORKING___TWTF__8A_5P",
"WORKING__MTWT___8A_5P"
];
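A sketch of the import step, using the ScheduleCycleParms documented below (the policy ID and description are illustrative):
// Define the settings for the schedule cycle using the templates above
var cycleParms = new ScheduleCycleParms({ templates: templates });
// Create the policy definition and import it
api.createPolicy(new PolicyDefinition("ROTATING_8A_5P", cycleParms,
    "Rotating 8:00am-5:00pm cycle"));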
Creating Pay Codes
// Create a new instance of the API to use for importing policy data
var api = new PolicyCreationAPI();
// Define the settings for the pay code policy. Pay codes have no settings
// other than their name and description, so no additional settings need to be
// specified here.
var payCodeParms = new PayCodeParms();
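A sketch of the import step (the policy ID and description are illustrative):
// Create the policy definition and import it
api.createPolicy(new PolicyDefinition("REG", payCodeParms, "Regular hours"));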
Creating Pay Code Maps
// Create a new instance of the API to use for importing policy data
var api = new PolicyCreationAPI();
// Define an array to hold the details for the pay code map
var details = [];
// Define the details for the timesheet mappings included in the pay code map
details.push(
{
isSchedule: false, // This detail is for the timesheet
payCode: "REG", // The pay code for the detail
entryField: "HOURS", // The entry field for the pay code
payCodeValue: "REG", // The pay code value to associate with the pay code
canView: "ALL_USERS", // Set of roles allowed to view the pay code
canEdit: "ALL_USERS" // Set of roles allowed to edit the pay code
}
);
details.push(
{
isSchedule: false,
payCode: "VAC",
bankUsage: "VAC", // The bank usage policy associated with the pay code
payCodeValue: "VAC",
canView: "ALL_USERS",
canEdit: "ADMIN_USERS"
}
);
details.push(
{
isSchedule: true, // This detail is for the schedule
payCode: "SCHEDULED",
entryField: "HOURS"
}
);
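A sketch of the import step, using the PayCodeMapParms documented below (the policy ID and description are illustrative):
// Define the settings for the pay code map using the details above
var payCodeMapParms = new PayCodeMapParms({ details: details });
// Create the policy definition and import it
api.createPolicy(new PolicyDefinition("HOURLY_MAP", payCodeMapParms,
    "Hourly pay code map"));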
Creating Exceptions
The Policy Creation API allows for the creation of new Exception Codes, Exception Rules, and Exception
Triggers and updates to existing policies of those types. These types of policies are treated as a single unit by
the API, so it is not possible to import an Exception Code separately from an Exception Trigger. Since
Exception Rule and Exception Trigger policies are effective dated, these updates can take the form of either
an in-place update to the existing data (if the same effective date is specified) or the creation of a new
effective-dated row for the policy (if a later effective date is specified). The Exception Code and Exception
Trigger Master policies, which are not effective dated, will have their existing definitions completely replaced
if an update is imported.
The following script example demonstrates using the Policy Creation API to create a new Exception named
“OVER_24_HOURS_IN_A_DAY”, which will trigger if more than 24 hours are on the timesheet for a given day:
// Create a new instance of the API to use for importing policy data
var api = new PolicyCreationAPI();
// Define the settings for the exception
var exceptionSettings = {
    // The message that should appear to the user on the exceptions tab
    message: "More than 24 hours worked on a single day",
    // What the user should not be allowed to do when the exception is present
    preventUserActions: "SUBMIT"
};
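A sketch of the remaining steps, based on the ExceptionParms documented below (the context and formula values shown are illustrative and must match the configuration):
// Add the evaluation context and the formula that determines when the exception triggers
exceptionSettings.context = "TIME_SHEET_DETAIL"; // illustrative context
exceptionSettings.formula = "dailyHours > 24"; // illustrative formula
// Create the exception settings object and import the policy
var exceptionParms = new ExceptionParms(exceptionSettings);
api.createPolicy(new PolicyDefinition("OVER_24_HOURS_IN_A_DAY", exceptionParms,
    "More than 24 hours in a day"));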
Creating Bank Accrual Policies
// Create a new instance of the API to use for importing policy data
var api = new PolicyCreationAPI();
// Define an array to hold the service ranges for the bank accrual policy
var ranges = [];
// Define the settings for the bank accrual policy
var accrualSettings = {
    // Should the amount being accrued be divided by the total number of periods in
    // the year?
    divideAccrualAmounts: false
};
var bankAccrualParms = new BankAccrualParms(accrualSettings);
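A sketch of how a service range could be defined and the policy imported, using the BankAccrualServiceRangeParms documented below (the range values, formula, policy ID, and description are illustrative, and the setting that links the ranges array into the accrual settings is not shown in this fragment):
// Define a single service range covering 0 through 5 service units
ranges.push(new BankAccrualServiceRangeParms({
    rangeStart: 0,
    rangeEnd: 5,
    accrualFormula: "80" // illustrative: accrue a flat 80 hours
}));
// Create the policy definition and import it
api.createPolicy(new PolicyDefinition("VACATION_ACCRUAL", bankAccrualParms,
    "Vacation accrual"));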
Creating Data Elements
Note: The API can only be used to create customer data elements (prefixed with “C_”). If the provided policy
ID does not begin with “C_”, that prefix will be added automatically.
The following script example demonstrates using the Policy Creation API to create a new Data Element
named “C_PROJECT” with mappings in the ASGNMT and TIME_SHEET_DETAIL tables:
// Create a new instance of the API to use for importing policy data
var api = new PolicyCreationAPI();
// Define an array to hold the mapping details for the data element
var mappings = [];
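A sketch of the mapping details and import step, based on the DataElementDetailParms documented below (the variable field name is illustrative):
// Map the data element to a field in the ASGNMT and TIME_SHEET_DETAIL tables
mappings.push(new DataElementDetailParms({
    tableName: "ASGNMT", fieldName: "other_string1" }));
mappings.push(new DataElementDetailParms({
    tableName: "TIME_SHEET_DETAIL", fieldName: "other_string1" }));
// Define the settings for the data element and import the policy
var dataElementParms = new DataElementParms({ details: mappings });
api.createPolicy(new PolicyDefinition("C_PROJECT", dataElementParms, "Project"));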
Adding Pay Codes to Time Entry Groups
// Create a new instance of the API to use for updating policy data
var api = new PolicyCreationAPI();
// Specify the pay code that should be added to the time entry groups
var payCode = "REG_HOURS";
// Add the pay code to all currently-defined Time Entry Group policies
api.addPayCodeToTimeEntryGroups(payCode);
Adding Pay Codes to Mobile Time Entry Layouts
// Create a new instance of the API to use for updating policy data
var api = new PolicyCreationAPI();
// Specify the pay code that should be added to the mobile time entry layouts
var payCode = "WORKED_TIME";
// Add the pay code to all currently-defined Mobile Time Entry Layout policies
api.addPayCodeToMobileTimeEntryLayouts(payCode);
Adding Field Security to Pay Code Maps
// Create a new instance of the API to use for updating policy data
var api = new PolicyCreationAPI();
// Specify the field security policy and the pay codes it should apply to
// (values are illustrative)
var fieldSecurityPolicy = "FIELD_SECURITY_DEFAULT";
var payCodes = ["REG", "VAC"];
// Specify the policy profile containing the pay code map that should be updated
var policyProfile = "NON_EXEMPT_EMPLOYEES";
// Update the pay code map with the field security policies
api.addFieldSecurityToPayCodeMap(fieldSecurityPolicy, policyProfile, payCodes);
Troubleshooting
The job log of the script using the Policy Creation API will contain information messages and, in the case of
problems, any error messages generated during processing. This job log should be reviewed if there are any
problems encountered while using the API.
Some common error messages, their causes, and the solutions:

Error Message: Unable to create or modify policy of type POLICY_TYPE with ID POLICY_ID as part of bundle BUNDLE_NAME. It already belongs to a different bundle, OTHER_BUNDLE_NAME
Cause: A policy with the specified ID already exists and belongs to a different bundle than was specified when the API was instantiated.
Solution: Ensure that all policies being created either do not already belong to a bundle or belong to the bundle associated with the API being used to create them.

Error Message: No policy ID specified for required field FIELD_NAME
Cause: The policy being created requires a foreign key link to another table in order to be saved successfully, and no policy ID was provided for that field to create that link.
Solution: Ensure that all required fields on the policy records being imported are populated.

Error Message: Unable to create formula
Cause: The formula specified was invalid.
Solution: Ensure that the formula is valid for the context and expected return type.

Error Message: Multiple entries found for FIELD_NAME with value VALUE. FIELD_NAME is required to contain unique values on all detail records
Cause: The list of details provided for a policy included two or more details that duplicated a value in the indicated field (for instance, two details for the same table on a Data Element).
Solution: Ensure that all details provided for a policy contain unique values for fields that aren't allowed to be duplicated across the set of details.

Error Message: Policy POLICY_ID does not exist in table TABLE_NAME
Cause: No policy of the specified type already exists.
Solution: Ensure that all of the foreign key references to other policies in the policy data being imported point to valid policies that are already defined.

Error Message: Policy Set SET_ID does not exist for table TABLE_NAME
Cause: No policy set of the specified type already exists.
Solution: Ensure that all of the foreign key references to policy sets in the policy data being imported point to valid sets that are already defined.

Error Message: Effective date EFF_DATE must be on or before 3000-12-31
Cause: The effective date provided was later than 3000-12-31, which is the latest effective date allowed for policy data.
Solution: Ensure that all effective dates supplied for policies are no later than 3000-12-31.

Error Message: Time Zone ISO Code ISO_CODE is not a valid JDK ISO Code for a time zone
Cause: The ISO code specified when importing a Time Zone was not a valid ISO code.
Solution: Ensure that all of the ISO codes specified when creating Time Zone policies are valid JDK ISO codes.

Error Message: Service range from RANGE_START to RANGE_END overlaps with another service range
Cause: Two or more service ranges for a Bank Accrual Policy policy have overlapping ranges.
Solution: Ensure that the service ranges associated with a Bank Accrual Policy policy do not have any overlaps between their ranges.

Error Message: Value "VALUE" is not a valid DateChoice option. Valid values are DATE_CHOICE_OPTIONS
Cause: The value specified for a DateChoice field was not one of the allowed values.
Solution: Ensure that all DateChoice fields contain a value that is one of the valid choices listed in the error message.

Error Message: Invalid month/day combination specified. Month = MONTH, Day = DAY
Cause: The combination of month and day specified represents an invalid date (for instance, February 31st).
Solution: Ensure that all month/day combinations specified represent a valid day that exists within that month.

Error Message: Range start RANGE_START is after range end RANGE_END
Cause: The starting value for a service range was later than the ending value for the range.
Solution: Ensure that all service ranges have a starting value that is less than or equal to their ending value.

Error Message: Cannot create a Data Element with no mappings
Cause: No table details were specified for a Data Element policy.
Solution: Ensure all Data Elements being imported include at least one table mapping.

Error Message: New formula context NEW_CONTEXT does not match existing context PRIOR_CONTEXT for the Exception Trigger Master POLICY_ID. Unable to import new data
Cause: Exception data was imported to update an existing policy but used a different evaluation context for qualification than the existing data used.
Solution: Ensure that all updates to Exceptions use the same formula evaluation context that the existing policy being updated used.

Error Message: Index INDEX is not a valid index for a FIELD_TYPE field on a Pay Code Value policy. Valid indices are integers between 1 and MAX_INDEX
Cause: The index specified for the indicated field was outside of the allowed range. The valid range is between 1 and the number of variable fields of that type defined in the configuration.
Solution: Ensure that all field indices specified for the indicated field type fall within the allowed index range for that type of field.
Table 63: Policy Creation API typical error messages, root problems, and solutions
API Reference
Knowledge of JavaScript programming is necessary to best make use of this section.
The following is a summary of the available methods and common uses:
PolicyCreationAPI
PolicyCreationAPI(parms)
Creates a new instance of the Policy Creation API, using the specified settings.
The available parameters here are:
Parameter Name Description
bundle The name of the bundle that the policies being imported should be added to. If the bundle does not already exist, then a new bundle will be created with the indicated name. If no bundle is specified, then the policies will not be added to a bundle.
enableDebugging Indicates if additional debug output should be written to the job log during
processing. Defaults to false.
Table 64: Parameters available when creating a new PolicyCreationAPI
getPolicyInformation(policyType, policyName)
Returns the policy information for the policy of the specified type with the indicated name.
createPolicy(policyDefinition)
Creates a single new policy using the provided policy definition.
createPolicies(policyDefinitions)
Creates a new policy for each of the policy definitions specified in the provided array.
addPayCodeToTimeEntryGroups(payCode)
Adds the indicated Pay Code to all of the currently-defined Time Entry Group policies.
addPayCodeToMobileTimeEntryLayouts(payCode)
Adds the indicated Pay Code to all of the currently-defined Mobile Time Entry Layout policies.
PolicyDefinition
PolicyDefinition(policyId, parms, description, effDate)
Creates a new PolicyDefinition using the provided settings. The type of object specified for parms will
determine the type of policy being created. (For instance, if a ScheduleTemplateParms object is provided,
then the PolicyDefinition will reflect the definition for a Schedule Template policy). For non-effective-dated
policies, the effective date will not be used and does not need to be specified. For effective-dated policies,
the effective date will be defaulted to 1900-01-01 if not specified.
Parameter Type Description
ScheduleTemplateParms Used to create a new Schedule Template policy
ScheduleCycleParms Used to create a new Schedule Cycle policy
PayCodeParms Used to create a new Pay Code policy
PayCodeValueParms Used to create a new Pay Code Value policy
PayCodeMapParms Used to create a new Pay Code Map policy
ExceptionParms Used to create a new Exception Code, Exception Rule, and Exception Trigger
policy
BankAccrualParms Used to create a new Bank Accrual Policy policy
TimeZoneParms Used to create a new Time Zone policy
FieldSecurityParms Used to create a new Field Security Policy policy
DataElementParms Used to create a new Data Element policy
Table 65: Object classes that can be specified for the parms object
ScheduleTemplateParms
ScheduleTemplateParms(parms)
Used to create a new Schedule Template policy.
Available parameters are:
Parameter Name Description
weekStartDay The day of the week that the template starts on. 1 = Sunday, 7 = Saturday.
For biweekly templates, 8 = second Sunday, 14 = second Saturday.
details Array of details for the slices that make up the template.
biweekly Indicates if the template is a biweekly template. Defaults to false.
hideFromScreen Indicates if the template should not be visible on the schedule screen. Defaults to
false.
Table 66: Parameters available when creating a new ScheduleTemplateParms
ScheduleTemplateSliceParms
ScheduleTemplateSliceParms(parms)
Defines the details for a single time slice that makes up a Schedule Template policy.
Available parameters are:
Parameter Name Description
payCode The pay code for the slice
dayNumber Number indicating which day the slice falls on. 1 = Sunday, 7 = Saturday. For
biweekly templates, 8 = second Sunday, 14 = second Saturday.
startDayNumber Number indicating which day the beginning of the slice falls on.
endDayNumber Number indicating which day the end of the slice falls on.
hours Formula to use to compute the hours that apply for the slice.
details JS object with properties corresponding to the additional settings that should apply to the slice. Property name corresponds to the field name on the slice and the property value is the value that should be used for that field.
Table 67: Parameters available when creating a new ScheduleTemplateSliceParms
ScheduleCycleParms
ScheduleCycleParms(parms)
Used to create a new Schedule Cycle policy.
Available parameters are:
Parameter Name Description
templates Array of Schedule Template policy IDs, in the order in which they should occur within the cycle.
hideFromScreen Indicates if the template should not be visible on the schedule screen. Defaults to false.
Table 68: Parameters available when creating a new ScheduleCycleParms
PayCodeParms
PayCodeParms(parms)
Used to create a new Pay Code policy.
PayCodeValueParms
PayCodeValueParms(parms)
Used to create a new Pay Code Value policy.
Available parameters are:
Parameter Name Description
systems JS object mapping from pay_code_system field number to the value for that field
otherStrings JS object mapping from pay_code_other_string field number to the value for that field
otherNumbers JS object mapping from pay_code_other_number field number to the value for that field
amount The amount value for the Pay Code Value. Defaults to 0.
factor The factor value for the Pay Code Value. Defaults to 0.
percent The percent value for the Pay Code Value. Defaults to 0.
rate The rate value for the Pay Code Value. Defaults to 0.
Table 69: Parameters available when creating a new PayCodeValueParms
PayCodeMapParms
PayCodeMapParms(parms)
Used to create a new Pay Code Map policy.
Available parameters are:
Parameter Name Description
details Array of pay code map details for the pay codes in the pay code map.
Table 70: Parameters available when creating a new PayCodeMapParms
PayCodeMapDetailParms
PayCodeMapDetailParms(parms)
Used to define a single detail row defining the behavior for one Pay Code within a Pay Code Map.
Available parameters are:
Parameter Name Description
isSchedule True if the detail is describing a schedule mapping,
false if it is describing a timesheet mapping.
payCode The Pay Code associated with the map entry.
entryField The Field Mapping Policy defining the entry type to use for the Pay Code.
Table 71: Parameters available when creating a new PayCodeMapDetailParms
ExceptionParms
ExceptionParms(parms)
Used to create a new Exception Code policy, along with its associated Exception Rule and Exception Trigger
policies.
Available parameters are:
Parameter Name Description
severity The severity of the exception.
message The message that should be displayed by the exception.
preventUserActions Defines which actions the user is prohibited from performing if the exception is present.
requiredAction The message describing the action that needs to be taken by the user to resolve the exception.
context The formula context in which the exception should be evaluated.
formula The formula to be evaluated to determine if the exception should trigger.
canView ID of the role set identifying which roles can see the exception.
calcStage The calculation stage where the exception should trigger. Defaults to EXCEPTION_TRIGGER_001 if not specified.
Table 72: Parameters available when creating a new ExceptionParms
BankAccrualParms
BankAccrualParms(parms)
Used to create a new Bank Accrual Policy policy.
Available parameters are:
Parameter Name Description
frequency Defines how often the accrual happens.
Table 73: Parameters available when creating a new BankAccrualParms
BankAccrualServiceRangeParms
BankAccrualServiceRangeParms(parms)
Used to define a single service range detail that makes up a Bank Accrual Policy policy.
Available parameters are:
Parameter Name Description
rangeStart The minimum length of time (in the service units indicated on the corresponding Bank Accrual Policy policy) the employee needs to have worked to qualify for this service range.
rangeEnd The maximum length of time (in the service units indicated on the corresponding Bank Accrual Policy policy) the employee needs to have worked to qualify for this service range.
accrualFormula The formula to evaluate to determine the amount that should be accrued.
qualification The formula to evaluate to determine if this service range applies to an employee.
maxLimit The formula to evaluate to determine the maximum balance an employee is allowed to have.
Table 74: Parameters available when creating a new BankAccrualServiceRangeParms
TimeZoneParms
TimeZoneParms(parms)
Used to create a new Time Zone policy.
Available parameters are:
Parameter Name Description
isoCode The ISO code specifying which time zone the policy
represents.
Table 75: Parameters available when creating a new TimeZoneParms
FieldSecurityParms
FieldSecurityParms(parms)
Used to create a new Field Security Policy policy.
Available parameters are:
Parameter Name Description
fields Array of field details for the policy.
Table 76: Parameters available when creating a new FieldSecurityParms
FieldSecurityFieldParms
FieldSecurityFieldParms(parms)
Used to define a single set of field details belonging to a Field Security Policy policy.
Available parameters are:
Parameter Name Description
fieldMappingPolicy The Field Mapping Policy for the detail record.
canView The role set ID for the set of roles allowed to view the field.
canEdit The role set ID for the set of roles allowed to edit the field.
Table 77: Parameters available when creating a new FieldSecurityFieldParms
DataElementParms
DataElementParms(parms)
Used to create a new Data Element policy.
Available parameters are:
Parameter Name Description
details Array of details defining the table mappings for this Data Element.
overrideElement The distributed Data Element whose definition should be overridden by this policy.
doNotProjectAlias Indicates if the policy ID should not be projected as an alias in the tables. Defaults to false.
Table 78: Parameters available when creating a new DataElementParms
DataElementDetailParms
DataElementDetailParms(parms)
Used to define a single table mapping for a Data Element policy.
Available parameters are:
Parameter Name Description
tableName The name of the table that the mapping applies to.
fieldName The field in the table that the mapping applies to.
varFieldNo Variable field number that should be used for the mapping, if the field selected is a variable field. If not specified for a variably-numbered field, then the next available field will be automatically used.
Table 79: Parameters available when creating a new DataElementDetailParms
Policy Info API
Prerequisites
To use this API, you should be familiar with the following functionality:
• Basic JavaScript coding
• Understanding of policies and their settings
Components
This API consists of the following component(s):
• The distributed JavaScript library POLICY_INFO_API
Setup
No setup is necessary for the Policy Info API. The distributed library is automatically available within
WorkForce Time and Attendance.
Use Cases
Loading Data for a Non-Effective-Dated Policy
The Policy Info API can be used to load data for policies that are not effective dated. In these cases, no
effective date needs to be supplied in order to access the policy information.
The following examples demonstrate how the Policy Info API can be used to look up information for the Role
“EMPLOYEE_GENERAL”:
Note: For policies that don’t include an effective-dated component, getMasterInfo() and getMasterDetails()
will both return exactly the same information.
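The lookup that precedes the detail loop below would look like the following sketch (the policy type "RIGHT_GRP" for Roles is an assumption; check the Policy Editor for the exact table name):
includeDistributedPolicy("POLICY_INFO_API");
// Create a new instance of the Policy Info API
var api = new PolicyInfoAPI();
// Load the policy information for the Role EMPLOYEE_GENERAL
var policyData = api.getPolicyInformation("RIGHT_GRP", "EMPLOYEE_GENERAL");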
// Get the detail data for the policy. In this case, get the list of system features
// that are available to the role
var details = policyData.getDetailInfo("RIGHT_GRP_DETAIL");
// Iterate over the details and print out the system features that are available
for (var i = 0; i < details.length; ++i) {
log.info("System feature = " + details[i].system_feature);
log.info("Security option = " + details[i].security_option);
}
Loading Data for an Effective-Dated Policy
// Load the policy information for the Pay Code Map EXEMPT
var policyType = "PAY_CODE_MAP";
var policyName = "EXEMPT";
var policyData = api.getPolicyInformation(policyType, policyName);
Note: For policies that don’t include a non-effective-dated component, getMasterInfo() and
getMasterDetails() will both return exactly the same information.
// Load the policy information for the Pay Code Map EXEMPT
var policyType = "PAY_CODE_MAP";
var policyName = "EXEMPT";
var policyData = api.getPolicyInformation(policyType, policyName);
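The detail lookups used by the loops below would look like the following sketch (the schedule detail table PAY_CODE_MAP_SCHED_DETAIL is documented in the API Reference; the timesheet detail table name shown here is an assumption):
// Get the timesheet and schedule details for the pay code map
var timeSheetDetails = policyData.getDetailInfo("PAY_CODE_MAP_DETAIL");
var scheduleDetails = policyData.getDetailInfo("PAY_CODE_MAP_SCHED_DETAIL");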
// Iterate over the timesheet details and print out information about them
log.info("Timesheet details:");
for (var i = 0; i < timeSheetDetails.length; ++i) {
log.info("Pay code = " + timeSheetDetails[i].pay_code);
log.info("Entry field = " + timeSheetDetails[i].entry_field);
log.info("Pay code formula = " + timeSheetDetails[i].pay_code_formula);
log.info("Pay code value = " + timeSheetDetails[i].pay_code_value);
}
// Iterate over the schedule details and print out information about them
log.info("Schedule details:");
for (var i = 0; i < scheduleDetails.length; ++i) {
log.info("Pay code = " + scheduleDetails[i].pay_code);
log.info("Entry field = " + scheduleDetails[i].entry_field);
}
Example 1: Loading the non-effective-dated master data for a policy with both effective-
dated and non-effective-dated components
includeDistributedPolicy("POLICY_INFO_API");
// Create a new instance of the Policy Info API
var api = new PolicyInfoAPI();
// Load the policy information for the Exception Trigger Group EXEMPT_TRIGGERS
var policyType = "EX_TRIGGER_GRP_MASTER";
var policyName = "EXEMPT_TRIGGERS";
var policyData = api.getPolicyInformation(policyType, policyName);
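The retrieval step itself is a one-line sketch; getMasterInfo() returns the non-effective-dated portion for a policy with both components, so no as-of date is needed:
// Get the non-effective-dated master data for the policy
var masterInfo = policyData.getMasterInfo();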
Example 2: Loading the effective-dated master data for a policy with both effective-
dated and non-effective-dated components
includeDistributedPolicy("POLICY_INFO_API");
// Create a new instance of the Policy Info API
var api = new PolicyInfoAPI();
// Load the policy information for the Exception Trigger Group EXEMPT_TRIGGERS
var policyType = "EX_TRIGGER_GRP_MASTER";
var policyName = "EXEMPT_TRIGGERS";
var policyData = api.getPolicyInformation(policyType, policyName);
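The retrieval step itself (the as-of date value is illustrative):
// Define the as-of date on which to evaluate the effective-dated data
var asOfDate = new WFSDate(2019, 1, 1);
// Get the effective-dated master data for the policy as of that date
var masterDetails = policyData.getMasterDetails(asOfDate);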
Example 3: Loading detail information for a policy with both effective-dated and non-
effective-dated components
includeDistributedPolicy("POLICY_INFO_API");
// Create a new instance of the Policy Info API
var api = new PolicyInfoAPI();
// Load the policy information for the Exception Trigger Group EXEMPT_TRIGGERS
var policyType = "EX_TRIGGER_GRP_MASTER";
var policyName = "EXEMPT_TRIGGERS";
var policyData = api.getPolicyInformation(policyType, policyName);
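// Define the as-of date on which to evaluate the details (value is illustrative)
var asOfDate = new WFSDate(2019, 1, 1);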
// Get the exception triggers that are part of the trigger group
var triggers = policyData.getDetailInfo("EX_TRIGGER_GRP_DETAIL", asOfDate);
// Iterate over the triggers and print out information about them
for (var i = 0; i < triggers.length; ++i) {
log.info("Exception trigger = " + triggers[i].exception_trigger);
}
The Policy Info API loads policy data when it is instantiated and does not automatically pick up subsequent
changes; unless special action is taken, the data returned by the Policy Info API will always reflect the
configuration as it existed at the time the API was created in the script.
The loadLatestPolicies() method can be used to update the Policy Info API to take into account all of the
latest policy changes. The following script example demonstrates how this method could be used within a
script:
// Refresh the policy information so it reflects the latest policy data
api.loadLatestPolicies();
// Confirm the policy still exists before looking it up again
if (!api.exists(policyType, policyName)) {
    // throw an error here or log something
    return;
}
// Get the information for the Role policy again. This will now
// reflect any changes that were made after the API was originally
// instantiated.
policyData = api.getPolicyInformation(policyType, policyName);
This way you can make sure that every time you need to reference the policy after this point, it will exist, and
no additional try-catch blocks are needed.
Troubleshooting
The job log of the script using the Policy Info API will contain information messages and, in the case of
problems, any error messages generated during processing. This job log should be reviewed if there are any
problems encountered while using the API.
Error Message: Invalid detail type DETAIL_TYPE specified for policy type POLICY_TYPE. Valid detail types are: LIST_OF_VALID_DETAIL_TYPES
Cause: An attempt was made to access detail information of a policy using a detail type that does not exist for that policy.
Solution: Ensure that all attempts to access detail information for a policy are specifying a detail type that actually exists for that type of policy.

Error Message: An as-of date must be specified when accessing master details of effective-dated policy POLICY_NAME of type POLICY_TYPE
Cause: An attempt was made to access the master information of an effective-dated policy without having specified an as-of date to use to evaluate the policy data.
Solution: Ensure that an as-of date is always supplied by the script when attempting to look up policy master information for effective-dated policies.
Table 80: Policy Info API typical error messages, root problems, and solutions
API Reference
Knowledge of JavaScript programming is necessary to best make use of this section.
The Policy Info API defines a mechanism for requesting information about policies:
• Returned by the PolicyInfoApi are _PolicyInfo objects, which encapsulate all policy information (including master and detail information, and all effective-dated changes) for a particular policy. These objects cannot be instantiated directly.
See the contents of the POLICY_INFO_API policy in the Distributed JavaScript Library category for full
documentation on these methods. The following is a summary of the available methods and common uses:
PolicyInfoAPI
PolicyInfoAPI()
Creates a new instance of the Policy Info API.
getPolicyInformation(policyType, policyName)
Returns the policy information for the policy of the specified type with the indicated name.
loadLatestPolicies()
Refreshes the policy information in the PolicyInfoAPI to reflect the latest policy data. This needs to be called
anytime there are policy changes after the PolicyInfoAPI is created in order for its results to reflect those
policy changes.
exists(policyType, policyName)
Returns true if the specified policy exists, false otherwise.
_PolicyInfo
getPolicyType()
Returns which type of policy is represented by the data contained in this object. The policy type returned will
always reflect the table name of the top-level parent table for the policy type (e.g. for an Exception Trigger
policy, the value returned would be “EXCEPTION_TRIGGER_MASTER”).
getMasterInfo(asOfDate)
Returns the master policy information for the policy. If the policy is effective dated, this will return the policy
information that is effective on the specified as-of date.
If the policy includes both effective-dated and non-effective-dated components, this will return the
non-effective-dated portion of the policy data. If the policy only includes either effective-dated or
non-effective-dated data, then this will return the same data as getMasterDetails().
getMasterDetails(asOfDate)
Returns the master detail policy information for the policy. If the policy includes both effective-dated and
non-effective dated components, this will return the effective-dated portion of the policy data. Otherwise,
this will return exactly the same data that getMasterInfo() returns.
getDetailInfo(detailType, asOfDate)
Returns an array of details for the policy of the indicated detail type. The detail type should reflect the table
name for the type of details that should be returned (e.g. for the schedule mappings in a Pay Code Map, this
should be “PAY_CODE_MAP_SCHED_DETAIL”). If the policy is effective dated, this will return the details that
are effective on the specified as-of date.
Policy Mapping API
Prerequisites
To use this API, you should be familiar with the following functionality:
• Basic JavaScript coding
• Understanding of policies and their settings
Components
This API consists of the following component(s):
• The distributed JavaScript library POLICY_MAPPING_API
Setup
No setup is necessary for the Policy Mapping API. The distributed library is automatically available within
WorkForce Time and Attendance.
Use Cases
Determining the Holiday Policy to Be Applied to the Employee
Example 1: Determining the holiday policy to be applied to the employee
includeDistributedPolicy("POLICY_MAPPING_API");
// Create a new instance of the Policy Mapping API
var api = new PolicyMappingAPI();
// Use the policy mapping screen logic to define the holiday set
try {
assignment.holiday_set = api.evaluate("HOLIDAYS", employee, assignment);
} catch (e) {
log.warning("No specific mapping for received CALENDAR value: '" + source.CALENDAR +
"', will try to assign value as the employee's Holiday Set");
}
Troubleshooting
The job log of the script using the Policy Mapping API will contain information messages and in the case of
problems, any error messages that were generated during processing. This job log should be reviewed if
there are any problems encountered when using the API.
API Reference
Knowledge of JavaScript programming is necessary to best make use of this section.
The Policy Mapping API defines a mechanism for evaluating policy mappings to determine which policy the
given record maps to:
• When no Mapping is found, the PolicyMappingApi triggers an exception. The error is a string
detailing the nature of the problem.
See the contents of the POLICY_MAPPING_API policy in the Distributed JavaScript Library category for full
documentation on these methods. The following is a summary of the available methods and common uses:
PolicyMappingAPI
PolicyMappingAPI()
Creates a new instance of the Policy Mapping API.
Policy Set API
Prerequisites
To use this API, you should be familiar with the following functionality:
• Basic JavaScript coding
• How policy sets function within WorkForce Time and Attendance
Components
This API consists of the following component(s):
• The Distributed JavaScript Library policy POLICY_SET_API
Setup
No setup is necessary for the Policy Set API. The distributed library is automatically available within
WorkForce Time and Attendance.
Use Cases
Identifying Policies in a Policy Set
The Policy Set API allows a script to determine the policy IDs of all policies that belong to a particular set on a
given date. The following example demonstrates using the Policy Set API to look up all pay codes in the set
PAY_CODES_TO_IMPORT, rejecting any records that specify a pay code not included in the set:
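The lookup preceding this check might look like the following sketch; the method name getPoliciesInSet is hypothetical (see the POLICY_SET_API policy in the Policy Editor for the actual lookup method), and the policy type and date are illustrative:
includeDistributedPolicy("POLICY_SET_API");
// Create a new instance of the Policy Set API
var api = new PolicySetAPI();
// Identify the set to check and the type of policies it contains
var setName = "PAY_CODES_TO_IMPORT";
var policyType = "PAY_CODE";
// Look up the policy IDs in the set on the evaluation date (hypothetical method)
var policies = api.getPoliciesInSet(setName, new WFSDate(2019, 1, 1));
// sourcePayCode is assumed to come from the record being processed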
// Compare the pay code from the source against the policy set
if (inArray(policies, sourcePayCode) ) {
// The policy set contains the pay code, so do whatever processing is needed
}
else {
// The set does not contain the pay code, so display an error message
log.error("The policy set " + setName + " does not contain a " + policyType +
" policy named " + sourcePayCode);
}
Note: Policy sets are effective dated within WorkForce Time and Attendance, which means that the policies
that belong to a set can vary depending on the date being evaluated. If your script is processing
records that can span multiple dates, such as time records, be sure that you are evaluating them using
the policy set data for the correct date.
If additional information is needed about a policy belonging to a set beyond just its policy ID, then the Policy
Info API can be used to load all of the additional attributes for that policy.
Troubleshooting
The job log of the script using the Policy Set API will contain information messages and, in the case of
problems, any error messages generated during processing. This job log should be reviewed if there are any
problems encountered while using the API.
API Reference
Knowledge of JavaScript programming is necessary to best make use of this section.
PolicySetAPI
PolicySetAPI()
Creates a new instance of the Policy Set API. Policy data as it exists in the configuration at the time this is
called will be used for determining which policies belong to which sets.
loadLatestPolicies()
Updates the policy information used by the API to reflect any changes that have been made to the policies
since the API was originally instantiated. This is particularly important if the script itself is making any
changes to the policies in the configuration.
PPI Data Connector
Prerequisites
To use this API, you should be familiar with the following functionality:
• Basic JavaScript coding
Components
This API consists of the following component(s):
• The PPI_DATA_CONNECTOR Distributed JavaScript Library
Setup
No setup is necessary for the PPI_DATA_CONNECTOR. The distributed library is automatically available in
WT&A.
Use Cases
Creating a file data connector to get read/write access to a file
The following script example demonstrates how the FileConnector can be created to get read/write access to
a file.
includeDistributedPolicy("PPI_DATA_CONNECTOR");
try {
// Create data connector for a file (FILE_PATH, FILE_NAME, and characterEncoding
// are assumed to be defined earlier in the script)
var dataConnector = new FileConnector(FILE_PATH + FILE_NAME, characterEncoding);
//get data from a file through data connector
var fileData = dataConnector.getData();
log.info("Printing Data: \n" + fileData);
log.info("DataConnector read data from the file successfully.");
} catch (e) {
log.error("Error: " + e);
}
Creating an SFTP data connector to get read/write access to a file on an SFTP server
includeDistributedPolicy("PPI_DATA_CONNECTOR");
try {
// Create data connector for a SFTP server
var dataConnector = new SFTPConnector("10.110.1.66", "workforce", "mittens", "22",
"/UTF-8.csv", "UTF-8", false, "", true);
//get data from a file through data connector
var fileData = dataConnector.getData();
log.info("Printing Data: \n" + fileData);
log.info("DataConnector read data from the file successfully.");
} catch (e) {
log.error("Error: " + e);
}
try {
// Create data connector for a SFTP server
var dataConnector = new SFTPConnector("10.110.1.66", "workforce", "mittens", "22",
"/UTF-8.csv", "UTF-8", false, "", true);
//write data to a file through data connector
dataConnector.sendData("Hello World"); // write data without placeholder
// dataConnector.sendData("Hello %name%", {name: "John"}); // write data with placeholders
log.info("DataConnector writes data to the file successfully.");
} catch (e) {
log.error("Error: " + e);
}
Creating a CSV file data connector to read data from a CSV file
The following script example demonstrates how a CSVFileConnector, defined by the PPI Data Connector
policy INT_1236_2, can be retrieved with getDataConnector and used to read a CSV file from the system.
try {
// get data connector of given policy id
var dataConnector = getDataConnector("INT_1236_2");
//get data through data connector
var data = dataConnector.getData();
log.info("Printing Data: \n" + data);
log.info("DataConnector read data successfully.");
} catch (e) {
log.error("Error: " + e);
}
try {
// get data connector of given policy id
var dataConnector = getDataConnector("INT_1236_2");
//write data through data connector
dataConnector.sendData("Hello World"); // write data without placeholder
// dataConnector.sendData("Hello %name%", {name: "John"}); // write data with placeholders
log.info("DataConnector writes data to the file successfully.");
log.info("DataConnector writes data to the file successfully.");
} catch (e) {
log.error("Error: " + e);
}
Troubleshooting
The job log of the script using the PPI Data Connector API will include informational messages, and in the case
of problems, error messages. This job log should be reviewed if there are problems using the library.
The following table lists some common error messages, their causes, and the solution.
Error Message: No file path specified
Cause: No file path was specified as a parameter for the FileConnector method.
Solution: Provide the file path.

Error Message: At least one file name should be provided to the CSVFileConnector
Cause: No file name or array of file names was provided for the CSVFileConnector.
Solution: Provide at least one file name.

Error Message: File path is not specified for the CSVFileConnector
Cause: No base directory was specified for the CSVFileConnector.
Solution: Provide the base directory path from which the file(s) should be read.

Error Message: The directory does not exist at specified path
Cause: An incorrect base directory was provided for the CSVFileConnector.
Solution: Provide the correct base directory path.

Error Message: The path does not represent a directory
Cause: The base directory path provided for the CSVFileConnector is not a directory.
Solution: Provide a correct base directory path that refers to a directory.

Error Message: The directory at path (baseDir) does not have read access
Cause: The base directory provided for the CSVFileConnector doesn't have read access.
Solution: Make sure the file(s) at the provided directory have read access.

Error Message: No DB Connection Info policy specified for connecting SQLConnector
Cause: The name of the DB_CONNECTION_INFO policy was not provided, or was incorrect, for the SQLConnector.
Solution: Provide the correct name of the DB_CONNECTION_INFO policy.

Error Message: No SQL query provided for SQLConnector
Cause: The SQL query was not provided, or was null, for the SQLConnector.
Solution: Provide the SQL query.
Table 83: PPI_DATA_CONNECTOR common error messages, causes, and solutions
API Reference
Knowledge of JavaScript programming is necessary to make the best use of this section.
SFTPConnector (parameters)
Creates an instance of an SFTPConnector, an object that implements the DataConnector interface and
provides access for reading data from, and writing data to, a file on an SFTP server. Takes the parameters
defined in the table, in the order listed.
Parameter Description
host (required) The name of the sftp server
username (required) The username for authentication on the sftp server
password (required) The password for authentication on the sftp server
port (required) The port on which the sftp server runs
remoteFilePath (required) Path to the file, from the root of the sftp server, from which data needs to be read or to which data needs to be written
characterEncoding (optional) Canonical name of the character set encoding; default is UTF-8 if not specified.
encrypted (optional) Flag to indicate whether the file is encrypted or not (default false)
encryptionAlias (optional) Alias used for PGP encryption
enforceCharsetEncoding (optional) If true, the file will be validated to confirm it contains the specified character set. Defaults to false.
CSVFileConnector (parameters)
Creates an instance of a CSVFileConnector – an object that implements the DataConnector interface and
provides access for reading data from a CSV file. Takes an object as its parameter, containing the parameters
defined in the table below.
Parameter Description
baseDir (required) The file path to the source file(s)
fileNames (required) Array of strings with the source file name(s)
delimiter (optional) A single character string with the custom delimiter. Default is ","
characterSet (optional) Name of the character set to be used to read the CSV files. Default is UTF-8.
startingLine (optional) The 0-based line number from where to start reading the CSV file. Lines
before this will be ignored. Default is 0.
autoTrim (optional) If true, white spaces are removed from the beginning and end of field values.
Default is false.
archiveFiles (optional) If true, CSV files passed in fileNames parameter are archived to a directory
specified in the archivePath parameter. Default is false.
archivePath (optional) Path where files will be archived. Default is [baseDir]\archive.
enforceCharsetEncoding (optional) If true, the file will be validated to confirm it contains the specified character set. Defaults to false.
getDataConnector (policyId)
Returns the data connector for the specified PPI Data Connector policy ID.
Parameter Description
policyId (required) The ID of the PPI Data Connector policy whose data connector should be returned
Retro Trigger API
Prerequisites
To use this API, you should know the following:
• Basic JavaScript coding
Components
The components of the Retro Trigger API are as follows:
• The distributed JavaScript library RETRO_TRIGGER_API
Setup
No setup is necessary for the Retro Trigger API. The distributed library is automatically available within
WT&A.
Use cases
Example 1: Create new Retro Trigger Event
The following script example demonstrates how the Retro Trigger API can be used to create a new retro
trigger event for an assignment based on a provided employee and assignment match condition.
includeDistributedPolicy("RETRO_TRIGGER_API");
//setup parameters
var parms = {
    enableDebugging: true,
    description: "API created group"
};
//Initialize API
var retroApi = new RetroTriggerAPI(parms);
//Define match condition for an employee. This match condition will match an employee with the display ID of 1163297
var empMatchCondition = new MatchCondition(MatchTable.EMPLOYEE, "DISPLAY_EMPLOYEE", MatchOperator.EQUALS, "1163297");
//Define match condition for an assignment. This match condition will match an assignment with the computed match ID of 1163297202327-5301
var asgMatchCondition = new MatchCondition(MatchTable.ASGNMT, "COMPUTED_MATCH_ID", MatchOperator.EQUALS, "1163297202327-5301");
//Create the retro trigger event for the matched assignment here. (The creation
//method is documented in the RETRO_TRIGGER_API policy in the Policy Editor.)
//Commit all created retro trigger events. This function must be called after the creation of all retro trigger events to write them to the database.
retroApi.commitRetroTriggers();
Note: The condition used here must match exactly one employee and assignment. If more than one employee
and assignment matches the condition, an exception will be generated.
Example 2: Fetch retro trigger events within a date range
includeDistributedPolicy("RETRO_TRIGGER_API");
var parms = {
    enableDebugging: false,
    description: "API created group"
};
//Initialize API
var retroApi = new RetroTriggerAPI(parms);
//Define match condition for an employee. This match condition will match an employee with the display ID of 1163297
var empMatchCondition = new MatchCondition(MatchTable.EMPLOYEE, "DISPLAY_EMPLOYEE", MatchOperator.EQUALS, "1163297");
//Define match condition for an assignment. This match condition will match an assignment with the computed match ID of 1163297202327-5301
var asgMatchCondition = new MatchCondition(MatchTable.ASGNMT, "COMPUTED_MATCH_ID", MatchOperator.EQUALS, "1163297202327-5301");
var startDate = new WFSDate(2017,10,11); //start trigger date for date range
var endDate = new WFSDate(2017,12,31); //end trigger date for date range
var includeInactive = true; // flag to include inactive records
//Get all active/inactive retro triggers within date range for an assignment
var list = retroApi.getRetroTriggerEventsToBeProcessed(empMatchCondition, asgMatchCondition, startDate, endDate, includeInactive);
To fetch only active retro trigger events (with or without an effective trigger date), set the includeInactive flag to false before calling the fetch method:
var includeInactive = false;
//Get only active retro triggers within date range for an assignment
var list = retroApi.getRetroTriggerEventsToBeProcessed(empMatchCondition, asgMatchCondition, startDate, endDate, includeInactive);
Troubleshooting
The job log of the script using the Retro Trigger API will contain information messages and, in the case of
problems, any error messages generated during processing. This job log should be reviewed if there are any
problems encountered while using the API.
API Reference
Knowledge of JavaScript programming is necessary to best make use of this section.
The Retro Trigger API consists of two components:
1. The MatchCondition, which defines a condition to select employee, assignment, or user records.
a. This MatchCondition makes use of a MatchOperator, which defines a comparison operation to use in a condition for matching a specified value against the employee/assignment/user data.
2. The RetroTriggerAPI, which provides methods to create, update (activate/deactivate), and fetch retro trigger events.
See the contents of the RETRO_TRIGGER_API policy in the Distributed JavaScript Library category of the
Policy Editor for full documentation on these methods. The following is a summary of the available methods
and common uses:
MatchOperator
EQUALS
Only data where the value in the specified field exactly matches the indicated value will be matched by the
condition.
NOT_EQUALS
Only data where the value in the specified field does not match the indicated value will be matched by the
condition.
GREATER_THAN
Only data where the value in the specified field is greater than the indicated value will be matched by the
condition. For string fields this is applied lexicographically, meaning that “3” is greater than “20”.
GREATER_THAN_OR_EQUALS
Only data where the value in the specified field exactly matches or is greater than the indicated value will be
matched by the condition. For string fields this is applied lexicographically, meaning that “3” is greater than
“20”.
LESS_THAN
Only data where the value in the specified field is less than the indicated value will be matched by the
condition. For string fields this is applied lexicographically, meaning that “20” is less than “3”.
LESS_THAN_OR_EQUALS
Only data where the value in the specified field exactly matches or is less than the indicated value will be
matched by the condition. For string fields this is applied lexicographically, meaning that “20” is less than
“3”.
IN
Only data where the value in the specified field exactly matches one of the values in the indicated array of
values will be matched by the condition.
NOT_IN
Only data where the value in the specified field does not match any of the values in the indicated array of
values will be matched by the condition.
LIKE
Only data where the value in the specified field matches the pattern defined by the indicated value will be
matched by the condition.
NOT_LIKE
Only data where the value in the specified field does not match the pattern defined by the indicated value
will be matched by the condition.
BETWEEN
Only data where the value in the specified field falls between the two values defined by the indicated array of values (inclusive of the two endpoints) will be matched by the condition. For string fields this is applied lexicographically, meaning that “5” is between “37” and “62”.
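For example, a minimal sketch of an IN condition, reusing the EMPLOYEE table and DISPLAY_EMPLOYEE field from the use cases above (the second display ID is illustrative):
// Matches employees whose display ID is any of the listed values
var inCondition = new MatchCondition(MatchTable.EMPLOYEE, "DISPLAY_EMPLOYEE", MatchOperator.IN, ["1163297", "1163298"]);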
MatchCondition
MatchCondition(table, field, operator, values, isNotOperator)
Creates a new MatchCondition for the indicated table/field combination that matches against the specified
value(s) using the indicated MatchOperator.
If the MatchOperator specified is IN or NOT_IN, then values are expected to be an array of all the different
values to be evaluated by the “in” operation. If the MatchOperator specified is BETWEEN, then values are
expected to be an array containing two values: the starting point and ending point of the range to be
evaluated by the “between” operation. For all other MatchOperators, values are expected to be a single
value.
The isNotOperator controls whether the results of the MatchCondition should be negated. If this is set to
true, then a MatchCondition that normally evaluates to false will instead evaluate to true, and a
MatchCondition that normally evaluates to true will instead evaluate to false. This allows for conditions that
don’t have an explicit MatchOperator defined, such as “not between”, to be defined.
and(condition)
Modifies the MatchCondition to return only the intersection of its original condition and the specified
condition.
or(condition)
Modifies the MatchCondition to return the union of its original condition and the specified condition.
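As a sketch of how these pieces combine (reusing the sample employee match from the use cases above; the range endpoints are illustrative), a "not between" condition can be built with the isNotOperator flag and then widened with or():
// "Not between": negate a BETWEEN condition via the isNotOperator flag
var notBetween = new MatchCondition(MatchTable.EMPLOYEE, "DISPLAY_EMPLOYEE", MatchOperator.BETWEEN, ["1000000", "2000000"], true);
// Widen the condition to also match the sample employee
notBetween.or(new MatchCondition(MatchTable.EMPLOYEE, "DISPLAY_EMPLOYEE", MatchOperator.EQUALS, "1163297"));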
RetroTriggerAPI
RetroTriggerAPI(parms)
Creates a new instance of RetroTriggerAPI. The instance is used to create new retro trigger events with a given trigger date for an assignment based on provided employee and assignment match conditions.
commitRetroTriggers()
Commits all created retro trigger events into the retro_trigger_event table in the database.
activateRetroTrigger(retroTriggerEvent)
Activates the provided retro trigger event.
deactivateRetroTrigger(retroTriggerEvent)
Deactivates the provided retro trigger event.
deleteRetroTrigger(retroTriggerEvent)
Deletes the provided retro trigger event.
getRetroTriggerEventsToBeProcessed(employeeMatchCondition, asgnmtMatchCondition,
startDate, endDate, includeInactive)
Returns a read-only list of active/inactive unprocessed retro trigger events within a given trigger date range for an assignment, based on the provided employee and assignment match conditions.
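For example, a minimal sketch that deactivates every event returned by the fetch (list access via size()/get() is an assumption about the returned read-only list):
//Fetch all active/inactive unprocessed events in the range, then deactivate each one
var events = retroApi.getRetroTriggerEventsToBeProcessed(empMatchCondition, asgMatchCondition, startDate, endDate, true);
for (var i = 0; i < events.size(); i++) {
retroApi.deactivateRetroTrigger(events.get(i));
}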
Schedule Detail API
Prerequisites
To use this API, you should be familiar with the following functionality:
• Basic JavaScript coding
Components
This API consists of the following component(s):
• The SCHEDULE_DETAIL_API distributed JavaScript Library
• The ScheduleDetailAPI Java class
• The API_UTIL Distributed JavaScript Library
Setup
No setup is necessary for the Schedule Detail API. The distributed library is automatically available in
WorkForce Time and Attendance.
Use Cases
Getting SD data using Match Condition for assignment; Version: INITIAL
Schedule is not locked.
includeDistributedPolicy("SCHEDULE_DETAIL_API");
//"api" is assumed below to be an initialized instance of the ScheduleDetailAPI class
var queryParms = {
matchCondition: new MatchCondition(MatchTable.ASGNMT, "ASGNMT", MatchOperator.EQUALS, "1215736324"),
asgnmtEffDate: new WFSDate(2010, 05, 29),
startDate: new WFSDate(2008, 04, 21),
endDate: new WFSDate(2008, 04, 25),
version: "INITIAL"
};
var result = api.getScheduleDetailBetweenWorkDates(queryParms);
Getting SD data using Match Condition for assignment; Version: OPEN
var queryParms = {
matchCondition: new MatchCondition(MatchTable.ASGNMT, "ASGNMT", MatchOperator.EQUALS,
"1215736324"),
asgnmtEffDate: new WFSDate(2010, 05, 29),
asOfDate: new WFSDate(2008, 04, 26),
version:"OPEN"
};
var result = api.getScheduleDetailForPeriodAsOf(queryParms);
Getting SD data using Match Condition for assignment; Version: LATEST
var queryParms = {
matchCondition: new MatchCondition(MatchTable.ASGNMT, "ASGNMT", MatchOperator.EQUALS,
"1215736324"),
asgnmtEffDate: new WFSDate(2010, 05, 29),
asOfDate: new WFSDate(2008, 04, 26),
version:"LATEST"
};
var result = api.getScheduleDetailForPeriodAsOf(queryParms);
Getting SD data using Match Condition for assignment; Version: LATEST_CLOSED
var queryParms = {
matchCondition: new MatchCondition(MatchTable.ASGNMT, "ASGNMT", MatchOperator.EQUALS, "1215736324"),
asgnmtEffDate: new WFSDate(2010, 05, 29),
asOfDate: new WFSDate(2008, 04, 26),
version: "LATEST_CLOSED"
};
var result = api.getScheduleDetailForPeriodAsOf(queryParms);
Getting SD data using a pay code set and an additional SD match condition
var matchConditionParam = {
table: TimesheetMatchTable.SCHEDULE_DETAIL,
field: "HOURS",
operator: MatchOperator.EQUALS,
values: ["3"]
};
var queryParms = {
matchCondition: new MatchCondition(MatchTable.ASGNMT, "ASGNMT", MatchOperator.EQUALS, "1215736324"),
asgnmtEffDate: new WFSDate(2010, 05, 29),
asOfDate: new WFSDate(2008, 04, 26),
version: "LATEST_CLOSED",
payCodeSet: "INT_5184_TEST",
sdMatchCondition: new TimesheetMatchCondition(matchConditionParam)
};
var result = api.getScheduleDetailForPeriodAsOf(queryParms);
Troubleshooting
The job log of the script using the Schedule Detail API will include informational messages, and in the case of
problems, error messages. This job log should be reviewed if there are problems using the API.
The following table lists some common error messages, their causes, and their solutions.
Error Message: Invalid Pay Code Set provided
Cause: The API will throw an error if the name of the PayCodeSet is not valid.
Solution: Pass a valid PayCodeSet.
Error Message: Version invalid
Cause: The API will throw an error if the version is not valid.
Solution: Pass a valid version from the following: OPEN, LATEST, LATEST_CLOSED, INITIAL.
API Reference
Knowledge of JavaScript programming is necessary to best make use of this section.
getScheduleDetailForPeriodAsOf()
Performs a lookup on the SD table using the provided match condition and returns matching rows as ReadOnlyScriptables for the period containing the provided effective date. Takes an object as a parameter with the following fields:
Parameter Description
matchCondition The condition to select the assignment. The condition can be from the EMPLOYEE, EMPLOYEE_MASTER, ASGNMT, and/or ASGNMT_MASTER table
asgnmtEffDate The effective date to use for the retrieval of the assignment record
asOfDate The date to retrieve the period of schedule
version The version of schedule for which to retrieve the SD records. The 4 versions are INITIAL, LATEST, LATEST_CLOSED, OPEN
payCodeSet (optional) The pay codes for which to retrieve the SD records
sdMatchCondition (optional) The TimesheetMatchCondition for the SD table to apply additional filter criteria
getScheduleDetailBetweenWorkDates()
Performs a lookup on the SD table using the provided match condition and returns matching rows as ReadOnlyScriptables between the provided dates. Takes an object as a parameter with the following fields:
Parameter Description
matchCondition The condition to select the assignment. The condition can be from the EMPLOYEE, EMPLOYEE_MASTER, ASGNMT, and/or ASGNMT_MASTER table
asgnmtEffDate The effective date to use for the retrieval of the assignment record
startDate The start date to use for calculation of schedule
endDate The end date to use for calculation of schedule
version The version of schedule for which to retrieve the SD records. The 4 versions are INITIAL, LATEST, LATEST_CLOSED, OPEN
payCodeSet (optional) The pay codes for which to retrieve the SD records
sdMatchCondition (optional) The TimesheetMatchCondition for the SD table to apply additional filter criteria
SSO API
Definitions of Terms Used in This Section
SSO - Single sign-on (SSO) is a session/user authentication process that permits a user to enter one name and
password in order to access multiple applications.
SAML - Security Assertion Markup Language (SAML, pronounced sam-el) is an XML-based, open-standard
data format for exchanging authentication and authorization data between parties, in particular, between an
identity provider and a service provider.
SP - Service Provider – A website that hosts applications. In this case, WorkForce Software is acting as the Service Provider.
IdP - Identity Provider – A trusted provider that enables using SSO to access other websites. In this case we
are relying on the customers using the product to act as the Identity Provider.
Prerequisites
To use this API, you should be familiar with the following functionality:
• How to create policies within the Policy Editor
• Basic SAML concepts, particularly those of metadata, Service Provider (SP), and Identity Provider (IdP)
• Basic JavaScript coding (for advanced SSO scenarios)
• Basic understanding of HTTP requests (for advanced SSO scenarios)
Components
This API consists of the following component(s):
• The SSO_API JavaScript library
• One or more SAML SSO Profile policies in Policy Editor
• One or more Authentication Mechanism Policy details (or 'bands')
Setup
Make sure the following requirements are met before using this API.
Field: The IdP EntityID to expect
Description: Fill in the unique identifier (EntityID) of the Identity Provider.
Field: IdP Metadata Location Type
Description: WorkForce Time and Attendance can retrieve the Identity Provider's metadata from either a file or a public URL. Specify which option will be used here.
Field: IdP Metadata Location
Description: Specify the exact location of the Identity Provider's metadata. If 'File' was chosen, fill in the file location. For 'URL', fill in the full URL.
Field: SP EntityID
Description: Choose a unique identifier for the WorkForce Time and Attendance instance.
Table 84: Fields in SAML SSO Profile policy.
Use Cases
Accessing WorkForce Time and Attendance SP Metadata
Identity Providers (IdP) often require access to the Service Provider's (SP) metadata in order to enable SSO between the systems. A WorkForce Time and Attendance instance's metadata can be accessed via an instance URL.
First, set up a SAML SSO Profile policy as described above. Note the name of the policy.
The URL to access the metadata is in the form:
https://<instance base>/workforce/metadata/sp/<policy name>
So for the sample instance and SAML SSO Profile in the example, the URL would be:
https://company.workforcehosting.com/workforce/metadata/sp/SAMPLE_PROFILE
WorkForce recommends the use of the URL-based metadata wherever possible. If the Identity Provider
requires a file, copy the contents of the XML on the page to a file.
Next, create an Authentication Mechanism detail policy as described in Authentication Mechanism Policy Detail. Here is a basic script that can be put into the SSO Script field to do the SSO:
Example: Basic SP-initiated SSO Script
includeDistributedPolicy("SSO_API");
SsoApi.doSpSaml("SAMPLE_PROFILE");
Specify the name of the SAML SSO Policy on the second line; in this example it is the sample policy named
SAMPLE_PROFILE.
Setup is now complete. To test the process, open a web browser and go to the SSO endpoint. For the
sample instance, this would be https://company.workforcehosting.com/workforce/SSO.do
The browser should be redirected to the Identity Provider, which should prompt for authentication if
necessary. Then, the browser should be redirected to the WorkForce Time and Attendance main page.
Example: Basic IdP-initiated SSO Script
includeDistributedPolicy("SSO_API");
SsoApi.doIdpSaml("SAMPLE_PROFILE");
Specify the name of the SAML SSO Policy on the second line; in this example it is the sample policy named
SAMPLE_PROFILE.
This script goes into the SSO Script field of the Authentication Mechanism being used.
Setup is now complete. To test the process, the IdP should be set up to authenticate a user and send the
SAML to the SSO endpoint; this may require additional configuration of the IdP.
For the sample instance, the WorkForce Time and Attendance SSO endpoint would be
https://company.workforcehosting.com/workforce/SSO.do
Once the IdP is configured, users sent to WorkForce Time and Attendance should be directed to the
application main page.
Example 1: A script that converts the provided NameID identifier into upper case
includeDistributedPolicy("SSO_API");
function customUserHook(samlResponse, nameIds) {
// nameIds is assumed to be a list, as in Example 3; use the first entry as the identifier
return String(nameIds.get(0)).toUpperCase();
}
SsoApi.doSpSaml("SAMPLE_PROFILE");
Example 2: A script that strips the prefix 'WORKFORCE\' off the provided NameID identifier
includeDistributedPolicy("SSO_API");
function customUserHook(samlResponse, nameIds) {
// nameIds is assumed to be a list, as in Example 3; strip the 'WORKFORCE\' prefix from the first entry
return String(nameIds.get(0)).replace("WORKFORCE\\", "");
}
SsoApi.doSpSaml("SAMPLE_PROFILE");
Example 3: A script that extracts the value of the "EmailAddress" attribute as the
identifier
includeDistributedPolicy("SSO_API");
function customUserHook(samlResponse, nameIds) {
var attributes = samlResponse.getAttributes();
var emailAttributes = attributes['EmailAddress'];
// This is actually a list, though it likely has exactly one member, so extract the first item
var emailAttribute = emailAttributes.get(0);
// This is actually an XML element, to get a string representation, call getValue()
return emailAttribute.getValue();
}
SsoApi.doSpSaml("SAMPLE_PROFILE");
Customizing Redirection
The default behavior of the SSO API is to direct users to the main page of the application upon successful
authentication. The SSO API provides a way to specify alternate redirect pages based on custom logic.
Convenient methods have been provided for the most common redirects (the main page, Mobile,
Accessibility, and WebClock), as well as a method for sending the user to any page within WorkForce Time
and Attendance.
For example, a customRedirectHook sketch that chooses the redirect based on HTTP parameters (the 'webclock' and 'accessibility' parameter names here are illustrative):
includeDistributedPolicy("SSO_API");
function customRedirectHook(samlResponse, userId) {
// Illustrative parameters; the actual values depend on what the IdP sends
var isWebclockUser = SsoApi.getRequestParameter('webclock') == "true";
var isAccessibilityUser = SsoApi.getRequestParameter('accessibility') == "true";
if (isWebclockUser) {
SsoApi.redirectToWebclock();
} else if (isAccessibilityUser) {
SsoApi.redirectToAccessibility();
} else {
SsoApi.redirectToMain();
}
}
SsoApi.doSpSaml("SAMPLE_PROFILE");
In this case, no scripting customizations are necessary. The SSO API will attempt to automatically detect
mobile browsers and direct the user to Mobile or the main page appropriately. Note that this approach is
not compatible with any custom redirection logic; if the customRedirectHook is used, the auto-detection will
not be used.
If custom logic is still needed, the mobile decision can be driven from a value carried with the SAML exchange. For example (a sketch; the relay state carrying "true" for mobile users is an assumption):
function customRedirectHook(samlResponse, userId) {
// Assumes the relay state was set to "true" for mobile users
var isMobile = samlResponse.getRelayState();
if (isMobile == "true") {
SsoApi.redirectToMobile();
} else {
SsoApi.redirectToMain();
}
}
SsoApi.doIdpSaml("SAMPLE_PROFILE");
Alternatively, an HTTP parameter can drive the same decision (a sketch; the 'mobile' parameter name is illustrative):
function customRedirectHook(samlResponse, userId) {
// Assumes the IdP sends an HTTP parameter named 'mobile'
var mobileParam = SsoApi.getRequestParameter('mobile');
if (mobileParam == "true") {
SsoApi.redirectToMobile();
} else {
SsoApi.redirectToMain();
}
}
SsoApi.doIdpSaml("SAMPLE_PROFILE");
Persisting Data
In order to implement custom behavior, it may be necessary to retain information across an SP-initiated SSO
request/response cycle. The most common reason for this is when HTTP parameters are used to specify
information. Normally, this information would be lost during the SAML request and response.
For example, the user might include an HTTP Parameter named 'accessibility', which will have a 'true' value if
the user should be directed to the Accessibility page upon login.
includeDistributedPolicy("SSO_API");
function preRequestHook(samlRequest) {
// Retrieve the parameter value
var accessibilityParameter = SsoApi.getRequestParameter('accessibility');
// Persist it
samlRequest.persistProperty('accessibility', accessibilityParameter);
}
function customRedirectHook(samlResponse, userId) {
// Retrieve the persisted property when processing the response
var persistedParameter = samlResponse.getPersistedProperties()['accessibility'];
if (persistedParameter == "true") {
SsoApi.redirectToAccessibility();
} else {
SsoApi.redirectToMain();
}
}
SsoApi.doSpSaml("SAMPLE_PROFILE");
Example 1: The IdP sends a single proxyId which does not require custom processing
This is the case where no changes to the SSO script would be required. The script could look as simple as the basic IdP-initiated script.
includeDistributedPolicy("SSO_API");
SsoApi.doIdpSaml("SAMPLE_PROFILE");
Example 2: Strip 'WORKFORCE\' off the provided NameID and Proxy ID if provided. If no Proxy ID is provided, proceed as a normal SSO login
includeDistributedPolicy("SSO_API");
function customUserHook(samlResponse, nameIds) {
// Strip the 'WORKFORCE\' prefix from the NameID (nameIds assumed to be a list, as in the earlier examples)
return String(nameIds.get(0)).replace("WORKFORCE\\", "");
}
function customProxyUserHook(samlResponse, proxyIds) {
// Returning null makes the system attempt a normal SSO login instead of a proxy login
if (proxyIds == null || proxyIds.size() == 0) {
return null;
}
return String(proxyIds.get(0)).replace("WORKFORCE\\", "");
}
SsoApi.doIdpSaml("SAMPLE_PROFILE");
Troubleshooting
There are two sources of information for problems encountered with getting SAML SSO working:
• The WorkForce Time and Attendance server_log.txt file
• The WorkForce Time and Attendance sso_log.txt file
Usually, messages about an error or problem can be found in one or both of these logs, depending on the
nature of the error and where in the process it occurred.
The sso_log.txt file will contain the output of the debugHttpRequest, printSamlResponse, and debug
methods of the SSO API. It will also contain the logging generated by use of the "Enable Debug Logging"
option of the SAML SSO Profile policy.
The server_log.txt file will contain all other messages, such as those about errors processing the metadata
file or URL, errors in the script itself, or errors finding a user.
Error Message: Non-ok status code 404 returned from remote metadata source http://example.com
Problem: WorkForce Time and Attendance cannot retrieve the IdP metadata via URL because the URL is not found.
Solution: Make sure the URL entered in the SAML SSO Profile Policy is correct and publicly accessible.
Error Message: Error retrieving metadata from https://example.com – No trusted certificate found
Problem: WorkForce Time and Attendance cannot retrieve the IdP metadata via URL because the SSL connection cannot be made.
Solution: Add the SSL certificate of the metadata's URL to WorkForce Time and Attendance via the Certificate Administration page. Note that this needs to be done on each instance in a load-balanced situation, and that it also needs to be redone on every upgrade.
Error Message: Expected metadata file not found at idp_metadata.xml
Problem: WorkForce Time and Attendance cannot read the IdP metadata from a file.
Solution: Ensure the file is in the correct place and accessible to WorkForce Time and Attendance.
Error Message: Signature validation failed
Problem: The metadata has expired.
Solution: Check the expiration date of the metadata and its certificate.
Error Message: Signature validation failed
Problem: The signature on the SAML does not match the metadata.
Solution: Make sure the correct metadata is being used.
Error Message: No valid nameIds were found in the SAML Response - a custom user hook is required
Problem: WorkForce Time and Attendance cannot find a user identifier in the SAML.
Solution: Inspect the SAML to ensure the IdP is sending an identifier in the expected element – most often, a NameID – unless you have implemented custom user logic.
Error Message: Employee not found for EXTERNAL_ID = 'SAMPLE_USER'
Problem: WorkForce Time and Attendance cannot find a user to authenticate.
Solution: Check that the match table and match field on the Authentication Mechanism policy hold the identifier from the IdP.
Error Message: Error validating SAML info: Current time Time X is not between Time A and Time B
Problem: Clock drift between the SP and IdP means the SAML is invalid.
Solution: Use the "Assertion time window" field on the SAML SSO Policy to allow for drift.
Table 86: SSO API common error messages, causes, and solutions
API Reference
Knowledge of JavaScript programming is necessary to best make use of this section.
The SSO API consists of the following component(s):
1. The SsoApi namespace, which provides convenience methods for common SSO operations
a. This namespace uses a SamlRequest, which represents a SAML request sent to the IdP.
b. This namespace uses a SamlResponse, which represents a SAML response received from the
IdP.
2. One or more SAML SSO Profile policies. There is typically one per Identity Provider.
3. One or more Authentication Mechanism Policies. There is typically one per Identity Provider, and these will generally contain the SSO Scripts with the logic of the SSO process.
Note that in almost all cases, direct creation of the SamlRequest or SamlResponse will not be necessary; the
API will create them, and their behavior can be customized through optional functions that you can write.
The following is a summary of the available methods and common uses.
SsoApi.debug(message)
Output the provided string message to the sso_log.txt file. May be useful for debugging.
SsoApi.printSamlResponse()
Output the raw XML of the SAML received by the script. If no SAML response was received, that will be
indicated instead. May be useful for debugging.
SsoApi.isSsoRequest()
Returns true if the HTTP request was sent to the WorkForce Time and Attendance SSO endpoint. Usually
used to control whether the script should run.
SsoApi.isMobileAgent()
Returns true if WorkForce Time and Attendance could detect the user is signing in on a mobile device. Often
used to redirect the user to the Mobile page. Note that WorkForce Time and Attendance will by default
attempt to automatically redirect mobile users based on this unless a custom redirection method is
implemented.
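For example, a minimal customRedirectHook sketch built on this method (all calls shown are documented in this reference):
function customRedirectHook(samlResponse, userId) {
// Send detected mobile users to Mobile; everyone else to the main page
if (SsoApi.isMobileAgent()) {
SsoApi.redirectToMobile();
} else {
SsoApi.redirectToMain();
}
}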
SsoApi.isSamlResponseAvailable()
Returns true if a SAMLResponse was available in the request. May be useful for debugging.
SsoApi.getRequestParameter(parameterName)
Extract an HTTP parameter from the HTTP request. Often used to allow the IdP to send additional
information that can affect the behavior of the SSO script
SsoApi.redirectToMobile()
Instruct the SSO script to direct the user to Mobile. Often used if customizing the redirect logic in the script.
SsoApi.redirectToAccessibility()
Instruct the SSO script to direct the user to Accessibility. Often used if customizing the redirect logic in the
script.
SsoApi.redirectToWebclock()
Instruct the SSO script to direct the user to WebClock. Often used if customizing the redirect logic in the
script.
SsoApi.redirectToMain()
Instruct the SSO script to direct the user to the application's main page. Often used if customizing the
redirect logic in the script.
SsoApi.redirectTo(target)
Instruct the SSO script to direct the user to the indicated target. Often used if customizing the redirect logic
in the script.
SsoApi.fail(message)
Instruct the SSO script to fail authentication and display the provided message. Rarely used.
SsoApi.doSpSaml(ssoPolicy)
Instruct the SSO script to run SP-initiated SAML, using the settings in the specified SAML SSO Policy
SsoApi.doIdpSaml(ssoPolicy)
Instruct the SSO script to run IdP-initiated SAML, using the settings in the specified SAML SSO Policy
The SamlRequest class provides the following methods:
SamlRequest(ssoPolicy)
Create a SAML Authentication Request suitable for sending to the IdP, using the settings in the specified
SAML SSO policy
setRelayState(relayState)
Set the SAML RelayState. The IdP will retransmit this value back. Could be used to retain information from the request to make it available when the response is received.
persistProperty(propertyName, propertyValue)
Store a name/value pair in the request, which will be available when processing the response. Preferred to
relay state for retaining information during the request to make it available when the response is received.
dispatchRedirect()
Instruct the SSO Script to dispatch the Authentication request via HTTP Redirect.
dispatchPost()
Instruct the SSO Script to dispatch the Authentication request via HTTP POST. Automatically done if
SsoApi.doSpSaml() is used.
The SamlResponse class provides the following methods:
getRelayState()
Get the relay state that arrived with the response
getNameIds()
Get the NameIds that the IdP sent in the response. Usually there will only be one.
getPersistedProperties()
Retrieve any properties that were persisted using the persistProperty() method on SamlRequest, when
processing the response to that request. In IdP-initiated SAML, this will be an empty object.
getAssertions()
ADVANCED USE ONLY - provides direct access to the SAML Assertions in the Response. Should only be used
in very complex SSO script scenarios
getAttributes()
ADVANCED USE ONLY - provides direct access to the SAML Attributes in the Response. Should only be used
in very complex SSO script scenarios.
The following optional hook functions can be implemented in the SSO script:
preRequestHook(samlRequest)
Called before the SAML request is dispatched. This is the place where the SamlRequest methods
setRelayState() and persistProperty() would be called. No return value.
postRequestHook()
Called after the SAML request has been dispatched but before the script completes. Very rarely used. No
return value.
preResponseHook()
Called before the Saml Response is processed. Most common use would be for logging debugging
information about the http request or saml response, using SsoApi.debugHttpRequest(),
SsoApi.debug(message), or SsoApi.printSamlResponse(). No return value.
responseHook(samlResponse)
Called after the Saml Response has been processed and validated, but before any values are extracted. Very
rarely used. No return value.
customUserHook(samlResponse, nameIds)
Called instead of the default user determination logic. The SamlResponse and the nameIds present in the
SAML are provided. Must return a string which will be used to look up the user based on the Authentication
Mechanism's match field. Must be implemented if zero or more than one NameId is provided by the IdP;
optional otherwise.
customProxyUserHook(samlResponse, proxyIds)
Called instead of the default proxy user determination logic. The SamlResponse and the proxyIds present in
the SAML are provided. Must return either null or a string which will be used to look up the proxy user based
on the Authentication Mechanism's match field. If the function returns null, the system will attempt a normal
SSO login, not an SSO proxy login.
customRedirectHook(samlResponse, userId)
Called instead of the default redirect logic - the default logic is to try to determine mobile users based on the
User-Agent header, and send mobile users to Mobile, all other users to Desktop. If customUserHook is
implemented, userId will be the value returned by that function. Otherwise it will be the NameId provided by
the IdP. Usually calls the SsoApi.redirectToMain(), SsoApi.redirectToMobile(),
SsoApi.redirectToAccessibility(), SsoApi.redirectToWebclock(), or SsoApi.redirect(target) methods in
conditional logic. Will often use the relay state (samlResponse.getRelayState()) or http parameters
(SsoApi.getRequestParameter(paramName)) to drive the conditional logic. Less commonly, may use the
SsoApi.fail(message) method to fail authentication. No return value.
postResponseHook(samlResponse, userId)
Called just before the script completes. Very rarely used. No return value.
Swipe Import API
Prerequisites
To use this API, you should be familiar with the following functionality:
• Basic JavaScript coding
Components
This API consists of the following component(s):
• The Distributed JavaScript Library SWIPE_IMPORT_API
Setup
No setup is necessary for the Swipe Import API. The distributed library is automatically available within
WorkForce Time and Attendance.
Use Cases
Importing Swipes
The Swipe Import API allows for swipes to be imported from any data source into timesheets. It is the calling
script’s responsibility to handle reading the data from the data source and formatting it into JS objects that
match the structure of the SWIPE_IMPORT_STAGING_TABLE table. The Swipe Import API can then take that
formatted data and post it to timesheets.
includeDistributedPolicy("SWIPE_IMPORT_API");
// Initialize the API ("dataSource" below stands in for any source of swipe data)
var api = new SwipeImportAPI();
// Iterate over the source data, creating new swipe records and adding them to the API
while (dataSource.next()) {
var swipe = {
id_type1: "DISPLAY_EMPLOYEE",
id_data: dataSource.employeeId,
transaction_dttm: dataSource.timeOfSwipeEvent,
pay_code: dataSource.pay_code,
event_type: dataSource.swipeType
};
api.addSwipe(swipe);
}
// Once all of the swipes have been added, process them to the timesheets
api.processSwipes();
Note: A record will be created in the SWIPE_IMPORT_STAGING_TABLE table for all swipes processed by addSwipe(). This allows for auditing of which swipes have been processed by the API.
// Process all of the swipes from the staging table that have not yet been processed
api.processSwipesFromStagingTable();
Troubleshooting
The job log of the script using the Swipe Import API will contain information messages and, in the case of
problems, any error messages generated during processing. This job log should be reviewed if there are any
problems encountered while using the API.
Error Message: No badge group policy exists with ID BADGE_GROUP_ID
Cause: The badge group being used to evaluate the match ID for the swipe does not correspond to a Badge Group policy defined in the configuration.
Solution: Ensure that all swipes that specify badge groups, as well as the default badge group called out when initializing the API, specify values that match the name of a Badge Group policy in the configuration.
Error Message: Unable to determine badge group for swipe with badge ID BADGE_ID
Cause: A swipe using badge-based matching was unable to determine which badge group the swipe belonged to.
Solution: Ensure that a badge record matches the ID specified on all swipes being processed using badge-based mapping.
Error Message: Invalid value VALUE specified for field FIELD. Valid options are OPTIONS
Cause: Swipe data was specified in addSwipe that included an invalid value for the indicated field.
Solution: Ensure that the values entered on all swipes for the specified field are one of the options listed in the error message.
Table 87: Swipe Import API typical error messages, root problems, and solutions
API Reference
Knowledge of JavaScript programming is necessary to best make use of this section.
SwipeImportAPI
Creates a new instance of the Swipe Import API
addSwipe(swipe)
Adds the specified swipe data to the set of swipes to be processed by this object. Also creates a record in
SWIPE_IMPORT_STAGING_TABLE for the swipe data.
processSwipes()
Sorts the swipe data by employee and transaction time (from earliest to latest), and processes each of the
swipes to import the data onto the employees’ timesheets.
processSwipesFromStagingTable()
Loads all of the records in SWIPE_IMPORT_STAGING_TABLE where PROCESSED is false and
IMPORT_JOB_NAME is not set into memory, as though they had been added by calls to addSwipe(), and then
performs the same actions as processSwipes().
getAllUnprocessedSwipeStagingRecords()
Fetches all unprocessed swipes present in the SWIPE_IMPORT_STAGING_TABLE. A swipe is considered unprocessed when its PROCESSED field is false.
Time Entry Import API
Prerequisites
To use this API, you should be familiar with the following functionality:
• Basic JavaScript coding
• WorkForce Time and Attendance Time Entry and Schedule
Components
This API consists of the following component(s):
• The TIME_ENTRY_IMPORT_API distributed JavaScript Library
• The API_UTIL distributed JavaScript Library
Setup
No setup is necessary for the Time Entry Import API. The distributed library is automatically available in
WT&A.
Use Cases
Printing Available Parameters for API Configuration
This script excerpt demonstrates the available configuration parameters of the API and how to use them correctly.
includeDistributedPolicy("TIME_ENTRY_IMPORT_API");
var parms = {
// Specifies how the data should be mapped from the source result set to the
//TIME_ENTRY_DETAIL_IMPORT records being imported.
// Valid options are DECODER or FUNCTION. (Values are not case sensitive.)
mappingType: "FUNCTION",
// Specifies the field on the ASGNMT record that the value being mapped to
// TIME_ENTRY_DETAIL_IMPORT.ASGNMT_MATCH_FLD_VALUE is expected to match against to determine
//which assignment to import the time record to.
matchField: "computed_match_id",
// Specifies an array of field names that are expected to be included in the source result set
//data. If all of the listed fields are not found, no data will be imported. Additional
//fields not specified in the array are allowed to be included.
fieldNames: "EMPLOYEE_ID,DATE,START_TIME,END_TIME,DURATION,PAY_CODE,HOME_DEPT,JOB_CODE,COST_CENTER,CLIE_WORK,CLIE_CONV,CLIE_POSTN,AH_DEPTID,AH_JOBCODE,DEPTID2,JOBCODE2,AH_EMP_SLOT,AH_LOCATION,AH_RPT_SLOT,AH_RPT_LOCATION".split(","),
// Specifies the date format that all date strings are expected to be provided in
dateFormat: "MM/dd/yyyy",
// Specifies the date-time format that all date-time strings are expected to be provided in
dateTimeFormat: "MM/dd/yyyy HH:mm",
// Specifies the pay code set to use to limit which pay codes are allowed to be imported.
//If a pay code set is defined, only pay codes in that set can be imported. If set to null or
//an empty string, all pay codes will be allowed to import successfully.
payCodeSet: "",
// Specifies whether time sheets should be automatically amended when a time record with a
//work date that falls in a closed period is imported. If set to false, records with work
//dates in a closed period will error out.
amendTimeSheets: false,
// Specifies whether amended time sheets that have already been approved should be
// automatically unapproved when a time record with a work date that falls in that period is
// imported. If amendTimeSheets is set to false, then this option will have no effect.
unapproveAmendedPeriods: false,
// Specifies the login_id of the user to use for generating unapprovals when
//unapproveAmendedPeriods is set to true.
approver: "WORKFORCE",
// Specifies whether additional debug information should be written to the job log.
enableDebugLogging: false
};
function mappingFunction(record) {
return {
"employee_lookup_id": record.display_employee,
"asgnmt_match_fld_value": record.asgnmt,
"work_dt": record.work_dt,
"start_dttm": record.work_dt + " 09:00:00",
"end_dttm": record.work_dt + " 17:00:00",
"pay_code": "REG",
"hours": "8",
"OTHER_TEXT10": "TT28437_2",
"comments": "TT28437_2"
};
}
/**
 * This function is executed after the data mappings are complete, and allows for custom validation logic
 * to be applied to each record to determine if it should be imported or not. This function also allows
 * for any final modifications to be made to the data mappings.
 *
 * @param {AssignmentInfo} asgnmtInfo information for the assignment the record is being imported for
 * @param {Entry_type} entryType the entry type associated with the record
 * @param {Object} record scriptable object containing the values mapped to the TIME_ENTRY_DETAIL_IMPORT table
 * @returns {String[]} array of error messages generated for this record
 */
function validationFunction(asgnmtInfo, entryType, record) {
var errors = [];
if (isBlank(record.EMPLOYEE_LOOKUP_ID)) {
errors.push("EMPID is required");
}
return errors;
}
var parms = {
// Specifies how the data should be mapped from the source result set to the
//TIME_ENTRY_DETAIL_IMPORT records being imported.
// Valid options are DECODER or FUNCTION. (Values are not case sensitive.)
mappingType: "DECODER",
// Specifies the field in the decoder that identifies the source fields in the result set.
decoderSourceCol: "SOURCE",
// Specifies the field in the decoder that identifies the target fields to write the data to.
decoderDestCol: "TIME_ENTRY_DETAIL_IMPORT",
// Specifies the pay code set to use to limit which pay codes are allowed to be imported.
//If a pay code set is defined, only pay codes in that set can be imported. If set to null or
//an empty string, all pay codes will be allowed to import successfully.
payCodeSet: "",
// Specifies whether time sheets should be automatically amended when a time record with a
//work date that falls in a closed period is imported. If set to false, records with work
//dates in a closed period will error out.
amendTimeSheets: false,
// Specifies whether amended time sheets that have already been approved should be
// automatically unapproved when a time record with a work date that falls in that period is
// imported. If amendTimeSheets is set to false, then this option will have no effect.
unapproveAmendedPeriods: false,
// Specifies the login_id of the user to use for generating unapprovals when
//unapproveAmendedPeriods is set to true.
approver: "WORKFORCE",
// Specifies whether additional debug information should be written to the job log.
enableDebugLogging: false
};
/**
 * This function is executed after the data mappings are complete, and allows for custom validation logic
 * to be applied to each record to determine if it should be imported or not. This function also allows
 * for any final modifications to be made to the data mappings.
 *
 * @param {AssignmentInfo} asgnmtInfo information for the assignment the record is being imported for
 * @param {Entry_type} entryType the entry type associated with the record
 * @param {Object} record scriptable object containing the values mapped to the TIME_ENTRY_DETAIL_IMPORT table
 * @returns {String[]} array of error messages generated for this record
 */
function validationFunction(asgnmtInfo, entryType, record) {
var errors = [];
if (isBlank(record.EMPLOYEE_LOOKUP_ID)) {
errors.push("EMPID is required");
}
return errors;
}
var parms = {
// Specifies how the data should be mapped from the source result set to the
//TIME_ENTRY_DETAIL_IMPORT records being imported.
// Valid options are DECODER or FUNCTION. (Values are not case sensitive.)
mappingType: "FUNCTION",
// Specifies the field on the ASGNMT record that the value being mapped to
// TIME_ENTRY_DETAIL_IMPORT.ASGNMT_MATCH_FLD_VALUE is expected to match against to determine
//which assignment to import the time record to.
matchField: "computed_match_id",
// Specifies an array of field names that are expected to be included in the source result set
//data. If all of the listed fields are not found, no data will be imported. Additional
//fields not specified in the array are allowed to be included.
fieldNames: "EMPLOYEE_ID,DATE,START_TIME,END_TIME,DURATION,PAY_CODE,HOME_DEPT,JOB_CODE,COST_CENTER,CLIE_WORK,CLIE_CONV,CLIE_POSTN,AH_DEPTID,AH_JOBCODE,DEPTID2,JOBCODE2,AH_EMP_SLOT,AH_LOCATION,AH_RPT_SLOT,AH_RPT_LOCATION".split(","),
// Specifies the date format that all date strings are expected to be provided in
dateFormat: "MM/dd/yyyy",
// Specifies the date-time format that all date-time strings are expected to be provided in
dateTimeFormat: "MM/dd/yyyy HH:mm",
// Specifies the pay code set to use to limit which pay codes are allowed to be imported.
//If a pay code set is defined, only pay codes in that set can be imported. If set to null or
//an empty string, all pay codes will be allowed to import successfully.
payCodeSet: "",
// Specifies whether time sheets should be automatically amended when a time record with a
//work date that falls in a closed period is imported. If set to false, records with work
//dates in a closed period will error out.
amendTimeSheets: false,
// Specifies whether amended time sheets that have already been approved should be
// automatically unapproved when a time record with a work date that falls in that period is
// imported. If amendTimeSheets is set to false, then this option will have no effect.
unapproveAmendedPeriods: false,
// Specifies the login_id of the user to use for generating unapprovals when
//unapproveAmendedPeriods is set to true.
approver: "WORKFORCE",
// Specifies whether additional debug information should be written to the job log.
enableDebugLogging: false
};
function mappingFunction(record) {
var results = [];
var workDt = WFSDate.today();
results.push({
"employee_lookup_id": record.display_employee,
"asgnmt_match_fld_value": record.asgnmt,
"work_dt": workDt,
"pay_code": "REG",
"hours": "8",
"OTHER_TEXT10": "TT28437_3",
"comments": "TT28437_3",
"Extra_field":"extra_value"
});
results.push({
"employee_lookup_id": record.display_employee,
"asgnmt_match_fld_value": record.asgnmt,
"work_dt": workDt.addDays(1),
"pay_code": "REG",
"hours": "10",
"OTHER_TEXT10": "TT28437_3",
"comments": "TT28437_3",
"Nonexistent_field1": "valueA",
"Nonexistent_field2": "valueB"
});
return results;
}
/**
 * This function is executed after the data mappings are complete, and allows for custom validation logic
 * to be applied to each record to determine if it should be imported or not. This function also allows
 * for any final modifications to be made to the data mappings.
 *
 * @param {AssignmentInfo} asgnmtInfo information for the assignment the record is being imported for
 * @param {Entry_type} entryType the entry type associated with the record
 * @param {Object} record scriptable object containing the values mapped to the TIME_ENTRY_DETAIL_IMPORT table
 * @returns {String[]} array of error messages generated for this record
 */
function validationFunction(asgnmtInfo, entryType, record) {
var errors = [];
if (isBlank(record.EMPLOYEE_LOOKUP_ID)) {
errors.push("EMPID is required");
}
return errors;
}
Example 4: Import multiple time records using multiple calls to the single-record creation function
Imports an array of time records from a single source data record to the TIME_ENTRY_DETAIL_IMPORT staging table.
includeDistributedPolicy("TIME_ENTRY_IMPORT_API");
var parms = {
// Specifies how the data should be mapped from the source result set to the
//TIME_ENTRY_DETAIL_IMPORT records being imported.
// Valid options are DECODER or FUNCTION. (Values are not case sensitive.)
mappingType: "FUNCTION",
// Specifies the field on the ASGNMT record that the value being mapped to
// TIME_ENTRY_DETAIL_IMPORT.ASGNMT_MATCH_FLD_VALUE is expected to match against to determine
//which assignment to import the time record to.
matchField: "computed_match_id",
// Specifies an array of field names that are expected to be included in the source result set
//data. If all of the listed fields are not found, no data will be imported. Additional
//fields not specified in the array are allowed to be included.
fieldNames: "EMPLOYEE_ID,DATE,START_TIME,END_TIME,DURATION,PAY_CODE,HOME_DEPT,JOB_CODE,COST_CENTER,CLIE_WORK,CLIE_CONV,CLIE_POSTN,AH_DEPTID,AH_JOBCODE,DEPTID2,JOBCODE2,AH_EMP_SLOT,AH_LOCATION,AH_RPT_SLOT,AH_RPT_LOCATION".split(","),
// Specifies the date format that all date strings are expected to be provided in
dateFormat: "MM/dd/yyyy",
// Specifies the date-time format that all date-time strings are expected to be provided in
dateTimeFormat: "MM/dd/yyyy HH:mm",
// Specifies the pay code set to use to limit which pay codes are allowed to be imported.
//If a pay code set is defined, only pay codes in that set can be imported. If set to null or
//an empty string, all pay codes will be allowed to import successfully.
payCodeSet: "",
// Specifies whether time sheets should be automatically amended when a time record with a
//work date that falls in a closed period is imported. If set to false, records with work
//dates in a closed period will error out.
amendTimeSheets: false,
// Specifies whether amended time sheets that have already been approved should be
// automatically unapproved when a time record with a work date that falls in that period is
// imported. If amendTimeSheets is set to false, then this option will have no effect.
unapproveAmendedPeriods: false,
// Specifies the login_id of the user to use for generating unapprovals when
//unapproveAmendedPeriods is set to true.
approver: "WORKFORCE",
// Specifies whether additional debug information should be written to the job log.
enableDebugLogging: false
};
function mappingFunction1(record) {
return {
"employee_lookup_id": record.display_employee,
"asgnmt_match_fld_value": record.asgnmt,
"work_dt": record.work_dt,
"start_dttm": record.work_dt + " 09:00:00",
"end_dttm": record.work_dt + " 17:00:00",
"pay_code": "REG",
"hours": "8",
"OTHER_TEXT10": "TT28437_2",
"comments": "TT28437_2"
};
}
function mappingFunction2(record) {
return {
"employee_lookup_id": record.display_employee,
"asgnmt_match_fld_value": record.asgnmt,
"work_dt": Packages.com.workforcesoftware.Util.DateTime.WDate.valueOf(record.work_dt,
DATE_FMT).addDays(1),
"pay_code": "REG",
"hours": "6",
"OTHER_TEXT10": "TT28437_2",
"comments": "TT28437_2"
};
}
function mappingFunction3(record) {
return {
"employee_lookup_id": record.display_employee,
"asgnmt_match_fld_value": record.asgnmt,
"work_dt": Packages.com.workforcesoftware.Util.DateTime.WDate.valueOf(record.work_dt,
DATE_FMT).addDays(2),
"pay_code": "REG",
"hours": "4",
"OTHER_TEXT10": "TT28437_2",
"comments": "TT28437_2"
};
}
/**
 * This function is executed after the data mappings are complete, and allows for custom validation logic
 * to be applied to each record to determine if it should be imported or not. This function also allows
 * for any final modifications to be made to the data mappings.
 *
 * @param {AssignmentInfo} asgnmtInfo information for the assignment the record is being imported for
 * @param {Entry_type} entryType the entry type associated with the record
 * @param {Object} record scriptable object containing the values mapped to the TIME_ENTRY_DETAIL_IMPORT table
 * @returns {String[]} array of error messages generated for this record
 */
function validationFunction(asgnmtInfo, entryType, record) {
var errors = [];
if (isBlank(record.EMPLOYEE_LOOKUP_ID)) {
errors.push("EMPID is required");
}
return errors;
}
var parms = {
// Specifies whether new records will be allowed to be imported on days that already contain manually-edited
// time records. If this is set to true, any records in the import record set that fall on days containing
// manually-edited time will be filtered out and the existing records will be left untouched. Otherwise,
// manually-edited time will be subject to the indicated time record replacement behavior.
dontImportOnManuallyEditedDays: false,
// Specifies whether or not time records will be allowed to be imported prior to the current period. If set
// to false, any records with a work date prior to the current period will be filtered out.
amendTimeSheets: false,
// Specifies the login_id of the user to use for approving any amended time sheets with new time being imported.
approver: "WORKFORCE",
// Specifies the approval level to assign to amended time sheets being approved automatically
approvalLevel: 5,
// Indicates if the API logging should use the expanded name (including the employee first and last name),
// or if it should only output the internal ID numbers
useExpandedAsgnmtName: false,
// Specifies whether additional debug information should be written to the job log.
enableDebugLogging: false
};
var parms = {
// Specifies whether new records will be allowed to be imported on days that already contain manually-edited
// time records. If this is set to true, any records in the import record set that fall on days containing
// manually-edited time will be filtered out and the existing records will be left untouched. Otherwise,
// manually-edited time will be subject to the indicated time record replacement behavior.
dontImportOnManuallyEditedDays: false,
// Specifies whether or not time records will be allowed to be imported prior to the current period. If set
// to false, any records with a work date prior to the current period will be filtered out.
amendTimeSheets: false,
// Specifies the login_id of the user to use for approving any amended time sheets with new time being imported.
approver: "WORKFORCE",
// Specifies the approval level to assign to amended time sheets being approved automatically
approvalLevel: 5,
// Indicates if the API logging should use the expanded name (including the employee first and last name),
// or if it should only output the internal ID numbers
useExpandedAsgnmtName: false,
// Specifies whether additional debug information should be written to the job log.
enableDebugLogging: false
};
teiAPI.replaceExistingRecords(replacementFunction);
var parms = {
// Specifies whether new records will be allowed to be imported on days that already contain manually-edited
// time records. If this is set to true, any records in the import record set that fall on days containing
// manually-edited time will be filtered out and the existing records will be left untouched. Otherwise,
// manually-edited time will be subject to the indicated time record replacement behavior.
dontImportOnManuallyEditedDays: false,
// Specifies whether or not time records will be allowed to be imported prior to the current period. If set
// to false, any records with a work date prior to the current period will be filtered out.
amendTimeSheets: false,
// Specifies the login_id of the user to use for approving any amended time sheets with new time being imported.
approver: "WORKFORCE",
// Specifies the approval level to assign to amended time sheets being approved automatically
approvalLevel: 5,
// Indicates if the API logging should use the expanded name (including the employee first and last name),
// or if it should only output the internal ID numbers
useExpandedAsgnmtName: false,
// Specifies whether additional debug information should be written to the job log.
enableDebugLogging: false
};
First, set up the API in the "Commit Setup Script" section of the Time Entry Import policy as follows:
includeDistributedPolicy("TIME_ENTRY_IMPORT_API");
var parms = {
// Specifies whether new records will be allowed to be imported on days that already contain manually-edited
// time records. If this is set to true, any records in the import record set that fall on days containing
// manually-edited time will be filtered out and the existing records will be left untouched. Otherwise,
// manually-edited time will be subject to the indicated time record replacement behavior.
dontImportOnManuallyEditedDays: false,
// Specifies whether or not time records will be allowed to be imported prior to the current period. If set
// to false, any records with a work date prior to the current period will be filtered out.
amendTimeSheets: true,
// Specifies the login_id of the user to use for approving any amended time sheets with new time being imported.
approver: "WORKFORCE",
// Specifies the approval level to assign to amended time sheets being approved automatically
approvalLevel: 5,
// Indicates if the API logging should use the expanded name (including the employee first and last name),
// or if it should only output the internal ID numbers
useExpandedAsgnmtName: false,
// Specifies whether additional debug information should be written to the job log.
enableDebugLogging: false
};
var parms = {
// Specifies whether new records will be allowed to be imported on days that already contain manually-edited
// time records. If this is set to true, any records in the import record set that fall on days containing
// manually-edited time will be filtered out and the existing records will be left untouched. Otherwise,
// manually-edited time will be subject to the indicated time record replacement behavior.
dontImportOnManuallyEditedDays: false,
// Specifies whether or not time records will be allowed to be imported prior to the current period. If set
// to false, any records with a work date prior to the current period will be filtered out.
amendTimeSheets: true,
// Specifies the login_id of the user to use for approving any amended time sheets with new time being imported.
approver: "WORKFORCE",
// Specifies the approval level to assign to amended time sheets being approved automatically
approvalLevel: 5,
// Indicates if the API logging should use the expanded name (including the employee first and last name),
// or if it should only output the internal ID numbers
useExpandedAsgnmtName: false,
// Specifies whether additional debug information should be written to the job log.
enableDebugLogging: false,
// The field on the time record containing the indicator used to identify deletion-only records
deleteField: "OTHER_TEXT1"
};
var parms = {
// Match strings to match existing time records against. This overrides the default behavior of
// using the match values defined on the records being imported
timeRecordIDStrings: ["TT28437_3"],
// Specifies whether new records will be allowed to be imported on days that already contain manually-edited
// time records. If this is set to true, any records in the import record set that fall on days containing
// manually-edited time will be filtered out and the existing records will be left untouched. Otherwise,
// manually-edited time will be subject to the indicated time record replacement behavior.
dontImportOnManuallyEditedDays: false,
// Specifies whether or not time records will be allowed to be imported prior to the current period. If set
// to false, any records with a work date prior to the current period will be filtered out.
amendTimeSheets: false,
// Specifies the login_id of the user to use for approving any amended time sheets with new time being imported.
approver: "WORKFORCE",
// Specifies the approval level to assign to amended time sheets being approved automatically
approvalLevel: 5,
// Indicates if the API logging should use the expanded name (including the employee first and last name),
// or if it should only output the internal ID numbers
useExpandedAsgnmtName: false,
// Specifies whether additional debug information should be written to the job log.
enableDebugLogging: false
};
includeDistributedPolicy("TIME_ENTRY_IMPORT_API");
var parms = {
// Specifies the field on the ASGNMT record that the value being mapped to
// TIME_ENTRY_DETAIL_IMPORT.ASGNMT_MATCH_FLD_VALUE is expected to match against to determine
which assignment to
// import the time record to.
matchField: "COMPUTED_MATCH_ID",
// Specifies the name of the Time Entry Import policy that should be kicked off
automatically to write records to the
// TIME_SHEET_DETAIL table. If there is only one Time Entry Import policy to be run, this
should be the exact name of
// that policy. If multiple Time Entry Import policies should be run, those policies should
be named <policy name>1,
// <policy name>2, and so on (for example TIME_IMPORT_1, TIME_IMPORT_2, etc.) and this
should just specify the
// <policy name> portion of that identifier, i.e. "TIME_IMPORT_". If this is set to an
empty string, no jobs will be
// run to import TIME_SHEET_DETAIL records.
timeImportPolicy: "TIME_IMPORT_POLICY",
// Specifies the number of different Time Entry Import policies that should be used to write
records to the
// TIME_SHEET_DETAIL table. The assignments that time records belong to will automatically
be distributed evenly
// across these different import jobs, so that all of the records being imported for a given
assignment will be
// processed during the same import job.
timeImportCount: 1,
// Specifies the name of the Time Entry Import policy that should be kicked off
automatically to write records to the
// SCHEDULE_DETAIL table. If there is only one Time Entry Import policy to run, this should
be the exact name of that
// policy. If multiple Time Entry Import policies should be run, those policies should be
named <policy name>1,
// <policy name>2, and so on (for example SCHEDULE_IMPORT_1, SCHEDULE_IMPORT_2, etc.) and
this should just specify the
// <policy name> portion of that identifier, i.e. "SCHEDULE_IMPORT_". If this is set to an
empty string, no jobs will
// be run to import SCHEDULE_DETAIL records.
scheduleImportPolicy: "",
// Specifies the number of different Time Entry Import policies that should be used to write
records to the
// SCHEDULE_DETAIL table. The assignments that time records belong to will automatically be
distributed evenly
// across these different import jobs, so that all of the records being imported for a given
assignment will be
// processed during the same import job.
scheduleImportCount: 1,
// Specifies the date format that all date strings are expected to be provided in.
dateFormat: "MM/dd/yyyy",
// Specifies the date-time format that all date-time strings are expected to be provided in.
dateTimeFormat: "MM/dd/yyyy HH:mm:ss",
// Specifies the pay code set to use to limit which pay codes are allowed to be imported.
If a pay code set is
// defined, only pay codes in that set can be imported. If set to null or an empty string,
all pay codes will be
// allowed to import successfully.
payCodeSet: "",
// Specifies whether time sheets should be automatically amended when a time record with a
work date that falls in a
// closed period is imported. If set to false, records with work dates in a closed period
will error out.
amendTimeSheets: false,
// Specifies whether amended time sheets that have already been approved should be
automatically unapproved when a
// time record with a work date that falls in that period is imported. If
amendTimeSheetsIfNecessary is set to false,
// then this option will have no effect.
unapproveAmendedPeriods: false,
// Specifies the login_id of the user to use for generating unapprovals when
unapproveAmendedPeriods is set to true.
approver: "WORKFORCE",
// Specifies whether additional debug information should be written to the job log.
enableDebugLogging: false,
/**
* Uses the data in the source data record to construct a JavaScript object with properties
corresponding to the fields
* in the TIME_ENTRY_DETAIL_IMPORT table and add that object to the list of records to be
processed.
*
* @param {TimeImportControllerAPI} teiAPI the import API
* @param {ResultSet} source the current source data record
*/
function importRecord(teiAPI, source) {
// Example mapping:
var record = {
employee_lookup_id: source.EMPLOYEE_ID,
asgnmt_match_fld_value: source.EMPLOYEE_ID,
work_dt: source.WORK_DATE,
hours: source.HOURS,
pay_code: source.PAY_CODE,
other_text10: "Time Import"
};
// Add the record to the set of records being imported. The second parameter specifies
whether the record being
// added should map to TIME_SHEET_DETAIL or to SCHEDULE_DETAIL. Valid options are
TIME_ENTRY or SCHEDULE. (Case does
// not matter.)
teiAPI.addRecord(record, "TIME_ENTRY", validationFunction); //with validation function
// teiAPI.addRecord(record, "TIME_ENTRY"); //without validation function
/**
* This function is executed after the data mappings are complete, and allows for custom
validation logic to be applied
* to each record to determine if it should be imported or not. This function also allows for
any final modifications
* to be made to the data mappings.
*
* @param {AssignmentInfo} asgnmtInfo information for the assignment the record is being
imported for
* @param {Entry_type} entryType the entry type associated with the record
* @param {Object} record scriptable object containing the values mapped to the
TIME_ENTRY_DETAIL_IMPORT table
* @returns {String[]} array of error messages generated for this record
*/
function validationFunction(asgnmtInfo, entryType, record) {
return [];
function main() {
try {
// Initialize the API with the parameters defined above
var teiAPI = new TimeImportControllerAPI(parms);
// Commit the records to the database and launch the additional job(s) needed to finish
importing the records
teiAPI.runTimeImportJobs();
}
catch (e) {
log.error("Error: " + e);
}
}
main();
includeDistributedPolicy("TIME_ENTRY_IMPORT_API");
var parms = {
// Specifies the field on the ASGNMT record that the value being mapped to
// TIME_ENTRY_DETAIL_IMPORT.ASGNMT_MATCH_FLD_VALUE is expected to match against to determine
which assignment to
// import the time record to.
matchField: "COMPUTED_MATCH_ID",
// Specifies the name of the Time Entry Import policy that should be kicked off
automatically to write records to the
// TIME_SHEET_DETAIL table. If there is only one Time Entry Import policy to be run, this
should be the exact name of
// that policy. If multiple Time Entry Import policies should be run, those policies should
be named <policy name>1,
// <policy name>2, and so on (for example TIME_IMPORT_1, TIME_IMPORT_2, etc.) and this
should just specify the
// <policy name> portion of that identifier, i.e. "TIME_IMPORT_". If this is set to an
empty string, no jobs will be
// run to import TIME_SHEET_DETAIL records.
timeImportPolicy: "TIME_IMPORT_POLICY",
// Specifies the number of different Time Entry Import policies that should be used to write
records to the
// TIME_SHEET_DETAIL table. The assignments that time records belong to will automatically
be distributed evenly
// across these different import jobs, so that all of the records being imported for a given
assignment will be
// processed during the same import job.
timeImportCount: 2,
// Specifies the name of the Time Entry Import policy that should be kicked off
automatically to write records to the
// SCHEDULE_DETAIL table. If there is only one Time Entry Import policy to run, this should
be the exact name of that
// policy. If multiple Time Entry Import policies should be run, those policies should be
named <policy name>1,
// <policy name>2, and so on (for example SCHEDULE_IMPORT_1, SCHEDULE_IMPORT_2, etc.) and
this should just specify the
// <policy name> portion of that identifier, i.e. "SCHEDULE_IMPORT_". If this is set to an
empty string, no jobs will
// be run to import SCHEDULE_DETAIL records.
scheduleImportPolicy: "",
// Specifies the number of different Time Entry Import policies that should be used to write
records to the
// SCHEDULE_DETAIL table. The assignments that time records belong to will automatically be
distributed evenly
// across these different import jobs, so that all of the records being imported for a given
assignment will be
// processed during the same import job.
scheduleImportCount: 1,
// Specifies the maximum number of time entry import jobs launched by this script that are
allowed to run at one time.
// If more time entry import jobs than this number are intended to be launched by this job,
only this number will be
// started initially and then as each of those jobs finishes the next one will be started
automatically. If this is
// set to zero, all jobs will be run simultaneously.
maximumSimultaneousJobs: 5,
// Specifies the date format that all date strings are expected to be provided in.
dateFormat: "MM/dd/yyyy",
// Specifies the date-time format that all date-time strings are expected to be provided in.
dateTimeFormat: "MM/dd/yyyy HH:mm:ss",
// Specifies the pay code set to use to limit which pay codes are allowed to be imported.
If a pay code set is
// defined, only pay codes in that set can be imported. If set to null or an empty string,
all pay codes will be
// allowed to import successfully.
payCodeSet: "",
// Specifies whether time sheets should be automatically amended when a time record with a
work date that falls in a
// closed period is imported. If set to false, records with work dates in a closed period
will error out.
amendTimeSheets: false,
// Specifies whether amended time sheets that have already been approved should be
automatically unapproved when a
// time record with a work date that falls in that period is imported. If
amendTimeSheetsIfNecessary is set to false,
// then this option will have no effect.
unapproveAmendedPeriods: false,
// Specifies the login_id of the user to use for generating unapprovals when
unapproveAmendedPeriods is set to true.
approver: "WORKFORCE",
// Specifies whether additional debug information should be written to the job log.
enableDebugLogging: false
};
/**
* Uses the data in the source data record to construct a JavaScript object with properties
corresponding to the fields
* in the TIME_ENTRY_DETAIL_IMPORT table and add that object to the list of records to be
processed.
*
* @param {TimeImportControllerAPI} teiAPI the import API
* @param {ResultSet} source the current source data record
*/
function importRecord(teiAPI, source) {
// Example mapping:
var record = {
employee_lookup_id: source.EMPLOYEE_ID,
asgnmt_match_fld_value: source.EMPLOYEE_ID,
work_dt: source.WORK_DATE,
hours: source.HOURS,
pay_code: source.PAY_CODE,
other_text10: "Time Import"
};
// Add the record to the set of records being imported. The second parameter specifies
whether the record being
// added should map to TIME_SHEET_DETAIL or to SCHEDULE_DETAIL. Valid options are
TIME_ENTRY or SCHEDULE. (Case does
// not matter.)
teiAPI.addRecord(record, "TIME_ENTRY", validationFunction); //with validation function
// teiAPI.addRecord(record, "TIME_ENTRY"); //without validation function
}
function main() {
try {
// Initialize the API with the parameters defined above
var teiAPI = new TimeImportControllerAPI(parms);
// Commit the records to the database and launch the additional job(s) needed to finish
importing the records
teiAPI.runTimeImportJobs();
}
catch (e) {
log.error("Error: " + e);
}
}
/**
* This function is executed after the data mappings are complete, and allows for custom
validation logic to be applied
* to each record to determine if it should be imported or not. This function also allows for
any final modifications
* to be made to the data mappings.
*
* @param {AssignmentInfo} asgnmtInfo information for the assignment the record is being
imported for
* @param {Entry_type} entryType the entry type associated with the record
* @param {Object} record scriptable object containing the values mapped to the
TIME_ENTRY_DETAIL_IMPORT table
* @returns {String[]} array of error messages generated for this record
*/
function validationFunction(asgnmtInfo, entryType, record) {
return [];
}
main();
includeDistributedPolicy("TIME_ENTRY_IMPORT_API");
var parms = {
// Specifies the field on the ASGNMT record that the value being mapped to
// TIME_ENTRY_DETAIL_IMPORT.ASGNMT_MATCH_FLD_VALUE is expected to match against to determine
which assignment to
// import the time record to.
matchField: "COMPUTED_MATCH_ID",
// Specifies the name of the Time Entry Import policy that should be kicked off
automatically to write records to the
// TIME_SHEET_DETAIL table. If there is only one Time Entry Import policy to be run, this
should be the exact name of
// that policy. If multiple Time Entry Import policies should be run, those policies should
be named <policy name>1,
// <policy name>2, and so on (for example TIME_IMPORT_1, TIME_IMPORT_2, etc.) and this
should just specify the
// <policy name> portion of that identifier, i.e. "TIME_IMPORT_". If this is set to an
empty string, no jobs will be
// run to import TIME_SHEET_DETAIL records.
timeImportPolicy: "",
// Specifies the number of different Time Entry Import policies that should be used to write
records to the
// TIME_SHEET_DETAIL table. The assignments that time records belong to will automatically
be distributed evenly
// across these different import jobs, so that all of the records being imported for a given
assignment will be
// processed during the same import job.
timeImportCount: 1,
// Specifies the name of the Time Entry Import policy that should be kicked off
automatically to write records to the
// SCHEDULE_DETAIL table. If there is only one Time Entry Import policy to run, this should
be the exact name of that
// policy. If multiple Time Entry Import policies should be run, those policies should be
named <policy name>1,
// <policy name>2, and so on (for example SCHEDULE_IMPORT_1, SCHEDULE_IMPORT_2, etc.) and
this should just specify the
// <policy name> portion of that identifier, i.e. "SCHEDULE_IMPORT_". If this is set to an
empty string, no jobs will
// be run to import SCHEDULE_DETAIL records.
scheduleImportPolicy: " TIME_IMPORT_POLICY ",
// Specifies the number of different Time Entry Import policies that should be used to write
records to the
// SCHEDULE_DETAIL table. The assignments that time records belong to will automatically be
distributed evenly
// across these different import jobs, so that all of the records being imported for a given
assignment will be
// processed during the same import job.
scheduleImportCount: 1,
// Specifies the date format that all date strings are expected to be provided in.
dateFormat: "MM/dd/yyyy",
// Specifies the date-time format that all date-time strings are expected to be provided in.
dateTimeFormat: "MM/dd/yyyy HH:mm:ss",
// Specifies the pay code set to use to limit which pay codes are allowed to be imported.
If a pay code set is
// defined, only pay codes in that set can be imported. If set to null or an empty string,
all pay codes will be
// allowed to import successfully.
payCodeSet: "",
// Specifies whether time sheets should be automatically amended when a time record with a
work date that falls in a
// closed period is imported. If set to false, records with work dates in a closed period
will error out.
amendTimeSheets: false,
// Specifies whether amended time sheets that have already been approved should be
automatically unapproved when a
// time record with a work date that falls in that period is imported. If
amendTimeSheetsIfNecessary is set to false,
// then this option will have no effect.
unapproveAmendedPeriods: false,
// Specifies the login_id of the user to use for generating unapprovals when
unapproveAmendedPeriods is set to true.
approver: "WORKFORCE",
// Specifies whether additional debug information should be written to the job log.
enableDebugLogging: false
};
/**
* Uses the data in the source data record to construct a JavaScript object with properties
corresponding to the fields
* in the TIME_ENTRY_DETAIL_IMPORT table and add that object to the list of records to be
processed.
*
* @param {TimeImportControllerAPI} teiAPI the import API
* @param {ResultSet} source the current source data record
*/
function importRecord(teiAPI, source) {
// Example mapping:
var record = {
employee_lookup_id: source.EMPLOYEE_ID,
asgnmt_match_fld_value: source.EMPLOYEE_ID,
work_dt: source.WORK_DATE,
hours: source.HOURS,
pay_code: source.PAY_CODE,
other_text10: "Time Import"
};
// Add the record to the set of records being imported. The second parameter specifies
whether the record being
// added should map to TIME_SHEET_DETAIL or to SCHEDULE_DETAIL. Valid options are
TIME_ENTRY or SCHEDULE. (Case does
// not matter.)
teiAPI.addRecord(record, "TIME_ENTRY", validationFunction); //with validation function
// teiAPI.addRecord(record, "SCHEDULE "); //without validation function
/**
* This function is executed after the data mappings are complete, and allows for custom
validation logic to be applied
* to each record to determine if it should be imported or not. This function also allows for
any final modifications
* to be made to the data mappings.
*
* @param {AssignmentInfo} asgnmtInfo information for the assignment the record is being
imported for
* @param {Entry_type} entryType the entry type associated with the record
* @param {Object} record scriptable object containing the values mapped to the
TIME_ENTRY_DETAIL_IMPORT table
* @returns {String[]} array of error messages generated for this record
*/
function validationFunction(asgnmtInfo, entryType, record) {
return [];
}
function main() {
try {
// Initialize the API with the parameters defined above
var teiAPI = new TimeImportControllerAPI(parms);
// Commit the records to the database and launch the additional job(s) needed to finish
importing the records
teiAPI.runTimeImportJobs();
}
catch (e) {
log.error("Error: " + e);
}
}
main();
Troubleshooting
The job log of the script using the Time Entry Import API will include informational messages, and in the case
of problems, error messages. This job log should be reviewed if there are problems using the API.
The following table lists some common error messages, their causes, and the solution.
Error Message: teiApiJavaWrapper is a reserved name when using this API. Please modify your script not to define any objects with this name.
Cause: An object is defined with the name teiApiJavaWrapper, and this object name is reserved for API use.
Solution: Do not define any object with the name teiApiJavaWrapper.

Error Message: startDate and endDate are required for the replace existing records in specified date range option
Cause: Start and end dates are not provided for the replaceExistingRecordsInSpecifiedDateRange method.
Solution: Provide start and end dates.

Error Message: startDate and endDate must be instances of WFSDate
Cause: The start and end dates are instances of WDate instead of WFSDate.
Solution: Pass startDate and endDate as instances of WFSDate.

Error Message: Invalid type specified in TimeImportControllerAPI.addRecord() Valid options are TIME_ENTRY or SCHEDULE
Cause: An invalid type is specified in the second parameter of TimeImportControllerAPI.addRecord().
Solution: Provide a valid type of TIME_ENTRY or SCHEDULE.

Error Message: Unable to process record: No mapping type defined
Cause: The mapping type is not defined in the API parameters.
Solution: Define the mapping type as either Function or Decoder.
API Reference
Knowledge of JavaScript programming is necessary to make the best use of this section.
The Time Entry Import API consists of the following components:
1. TimeEntryImportAPI
2. TimeEntryImportCommitAPI
3. TimeImportControllerAPI
TimeEntryImportAPI
Used within the Script Text portion of a Time Entry Import policy, this API provides methods to import
time records. Before importing any records, the API validates that the Time Entry Import policy is
configured correctly for use with this functionality, and stops any records from being processed if the
configuration is incorrect. The following methods are available for TimeEntryImportAPI.
TimeEntryImportAPI (parms)
Creates an instance of TimeEntryImportAPI.
Parameter Description
parms JavaScript object containing the configuration settings to use with this API
Note: This method cannot be used with the API set to operate on a decoder
printAvailableAPIParms ()
Prints a list of the parameters that are available for this API, along with the expected data types, to the log.
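For reference, here is a minimal sketch of creating the API within the Script Text section and logging its supported parameters; the contents of the parms object are assumed to follow the settings shown in the earlier examples:
includeDistributedPolicy("TIME_ENTRY_IMPORT_API");
// parms is assumed to be a configuration object like those documented above
var api = new TimeEntryImportAPI(parms);
// Write the list of supported parameters and their expected data types to the job log
api.printAvailableAPIParms();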
TimeEntryImportCommitAPI
This API provides methods for reconciling records being imported with the records that already exist on the
time sheet or schedule. It also provides functionality for approving any amended timesheets that have had
time imported into them during the import and for generating exception notifications for periods containing
imported time. The following methods are available for TimeEntryImportCommitAPI.
TimeEntryImportCommitAPI (parms)
Creates an instance of TimeEntryImportCommitAPI.
Parameter Description
parms JavaScript object containing the configuration settings to use with this API
Note: This API should be instantiated in the Commit Setup script section of the Time Entry Import policy.
replaceExistingRecordsOnDay ()
Removes existing records from the timesheet/schedule for only the days with new time being imported for
the current assignment being processed.
Note: This method should be called within the Commit Employee script section of the Time Entry Import
policy.
replaceExistingRecordsInPeriod ()
Removes existing records from the timesheet/schedule for all days in the periods with new time being
imported for the current assignment being processed.
Note: This method should be called within the Commit Employee script section of the Time Entry Import
policy.
replaceExistingRecordsInDateRange ()
Removes existing records from the timesheet/schedule for all days in between the earliest date and the
latest date that time is being imported on for the current assignment being processed.
Note: This method should be called within the Commit Employee script section of the Time Entry Import
policy.
replaceExistingRecordsInWeek ()
Removes existing records from the timesheet/schedule for all days that fall in weeks with new time being
imported on for the current assignment being processed.
Note: This method should be called within the Commit Employee script section of the Time Entry Import
policy.
replaceExistingRecords (replacementFunction)
Allows for custom removal behavior to be defined to control the removal of existing records from the
timesheet/schedule.
Parameter Description
replacementFunction The replacement function to use. To remove an existing record within this
function, set that record to null within the provided array. Likewise, to
prevent a record in the import set from being imported, set it to null within its
array. This function should include the following parameters:
(1) AssignmentInfo for the assignment being processed
(2) the array of records being imported for this assignment
(3) the array of existing records for this assignment that fall within the
periods with new records being imported
Note: This method should be called within the Commit Employee script section of the Time Entry Import
policy.
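As an illustration, here is a minimal sketch of a custom replacement function. The assumption that each record exposes a pay_code property follows the import examples above, but is not guaranteed by this section:
var commitAPI = new TimeEntryImportCommitAPI(parms);
commitAPI.replaceExistingRecords(function(asgnmtInfo, importedRecords, existingRecords) {
    // Remove only those existing records that share a pay code with a record being
    // imported; all other existing records are left on the timesheet.
    for (var i = 0; i < existingRecords.length; i++) {
        for (var j = 0; j < importedRecords.length; j++) {
            if (existingRecords[i] != null && importedRecords[j] != null
                    && existingRecords[i].pay_code == importedRecords[j].pay_code) {
                existingRecords[i] = null; // a null entry marks the record for removal
            }
        }
    }
});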
approveAmendedTimeSheets ()
Approves the timesheets for all amended periods that had time imported during this import.
Note: This method should be called within the Commit Cleanup script section of the Time Entry Import policy.
sendExceptionNotifications ()
Sends out notifications for any exceptions that were generated based on the time records that were
imported.
Note: This method should be called within the Commit Cleanup script section of the Time Entry Import policy.
printAvailableAPIParms ()
Prints a list of the parameters that are available for this API, along with the expected data types, to the log.
TimeImportControllerAPI
This API defines methods for creating TIME_ENTRY_DETAIL_IMPORT records and loading them into that table
from within a controller script. Throttling controls are provided to allow the script to control how many
different jobs the records will be divided between, and how many of those jobs can run simultaneously.
The following methods are available for TimeImportControllerAPI.
TimeImportControllerAPI (parms)
Create an instance of TimeImportControllerAPI.
Parameter Description
parms JavaScript object containing the configuration settings to use with this API
runTimeImportJobs (connection)
Commits the records to the database and runs the time entry and schedule import jobs that have been
defined to load the records from the time_entry_detail_import table into their final destinations.
Parameter Description
connection connection to the local database to use for committing records and running
additional jobs.
printAvailableAPIParms ()
Prints a list of the parameters that are available for this API, along with the expected data types.
Prerequisites
To use this API, you should be familiar with the following functionality:
• Basic JavaScript coding
Components
This API consists of the following component(s):
• The Distributed JavaScript Library TIME_OFF_REQUEST_API
Setup
No setup is necessary for the Time Off Request API. The distributed library is automatically available within
WorkForce Time and Attendance.
Use Cases
Creating a New Time Off Request
The Time Off Request API allows for the creation of a new time off request. This would typically be done
when an external system is serving as the system of record for time off request entry, in order to synchronize
the time off request data between the two systems.
// Define the details for the time off request. Each detail represents a single slice
// of time that should be included in the time off request. In this example, three
// slices of time will be included in the time off request.
var details = [
{
work_dt: convertWDate("2017-04-17"),
pay_code: "VAC"
},
{
work_dt: convertWDate("2017-04-18").
pay_code: "VAC"
},
{
work_dt: convertWDate("2017-04-19"),
pay_code: "VAC"
}
];
// Define the time off request to be created. The assignment ID could come from
// a source data file or some similar lookup
var timeOffRequest = {
asgnmt: "1234567890",
details: details,
comments: "I'm going to be in Florida for three days"
};
Note: The start and end date for the time off request will be determined by the earliest and latest date of the
details for the time off request
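With the request defined, creating it is a single call. A minimal sketch, assuming the TimeOffRequestAPI constructor described in the API Reference below takes no arguments:
includeDistributedPolicy("TIME_OFF_REQUEST_API");
var torAPI = new TimeOffRequestAPI();
// createTimeOffRequest returns the ID of the newly created time off request
var torId = torAPI.createTimeOffRequest(timeOffRequest);
log.info("Created time off request " + torId);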
// Execute some process to determine the internal ID of the time off request
// to be looked up
var torId = getTimeOffRequestId();
// Execute some process to determine the external ID of the time off request
// to be looked up
var torId = getTimeOffRequestId();
// Execute some process to determine the internal ID of the time off request to be
// deleted
var torId = getTimeOffRequestId();
// Execute some process to determine the external ID of the time off request to be
// deleted
var torId = getTimeOffRequestId();
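Each ID determined above would then be passed to the corresponding API method. A minimal sketch, reusing torAPI from the creation example; the "PENDING" status value is an illustrative assumption:
var tor = torAPI.getTimeOffRequest(torId);                    // look up by internal ID
var torByExt = torAPI.getTorByExternalId(torId);              // look up by external ID
var pending = torAPI.getTimeOffRequestsByStatus(["PENDING"]); // look up by status
torAPI.deleteTimeOffRequest(torId);                           // delete by internal ID
torAPI.deleteTorByExternalId(torId);                          // delete by external ID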
// Define the new details that should be used for the time off request
var details = [
{
work_dt: convertWDate("2017-05-11"),
pay_code: "SICK"
},
{
work_dt: convertWDate("2017-05-12"),
pay_code: "SICK"
}
];
Note: If the work date of the earliest or latest detail in the time off request changes, the start and/or end
dates of the time off request will be automatically updated to reflect the range spanned by the details.
Note: Time Off Requests cannot be changed from every status to every other status. For instance, an
approved time off request cannot be changed back to a pending status. The API enforces the same
rules regarding status changes that would be enforced by the time off request screens in the
application.
Note: The Time Off Request API cannot be used to change which employee a time off request belongs to. If
the new assignment ID specified is for a different employee than the original assignment belonged to,
an error will be generated.
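A minimal sketch of applying the new details and the related update methods to a loaded request; the status string and the replacement assignment ID are illustrative assumptions:
var tor = torAPI.getTimeOffRequest(torId);
// Replace the existing details; start/end dates update automatically if the range changes
tor.updateDetails(details);
// Move the request through an allowed status transition
tor.changeStatus("APPROVED", "Approved via integration script");
// Reassign the request to another assignment belonging to the same employee
tor.changeOwner("1234567891");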
Troubleshooting
The job log of the script using the Time Off Request API will contain informational messages and, in the case of
problems, any error messages generated during processing. This job log should be reviewed if there are any
problems encountered while using the API.
Error Message: No user found for assignment ASGNMT_ID
Cause: The assignment specified when creating a new time off request does not have a user record associated with it.
Solution: Ensure that all assignments with time off requests being created have a corresponding user record.

Error Message: Error creating time off request: ERROR_MESSAGE
Cause: There was an error creating the time off request due to system-level validation of the time off request information provided.
Solution: Read the accompanying error message and take action as necessary to resolve it.

Error Message: No ACT Case with ID ACT_CASE_ID exists
Cause: An ACT Case ID was specified when creating a new time off request, but no ACT Case with that ID currently exists.
Solution: Ensure that all ACT Cases specified for time off requests exist.

Error Message: ACT Case ACT_CASE_ID is not valid for employee with ID EMPLOYEE_ID. It is assigned to a different employee, OTHER_EMPLOYEE_ID
Cause: The ACT Case ID specified when creating a new time off request belongs to a different employee than the employee that owns the assignment specified for the time off request.
Solution: Ensure that the ACT Case specified for each time off request matches the employee that the time off request will belong to.

Error Message: Only intermittent cases can be associated with time off requests. ACT Case ACT_CASE_ID is a CASE_TYPE case
Cause: The ACT Case ID specified when creating a new time off request did not match an intermittent ACT Case.
Solution: Ensure that all ACT Cases specified for time off requests match intermittent ACT Cases.

Error Message: Unable to load time off request data: No internal ID specified
Cause: A call was made to getTimeOffRequest, but no time off request ID was specified.
Solution: Ensure that a time off request ID is always specified when looking up time off requests.

Error Message: Unable to load time off request data: No external ID specified
Cause: A call was made to getTorByExternalId, but no external ID was specified.
Solution: Ensure that an external ID is always specified when looking up time off requests by external ID.

Error Message: Unable to load time off request data: No status(es) specified
Cause: A call was made to getTimeOffRequestsByStatus, but no statuses were specified.
Solution: Ensure that at least one status is always specified when looking up time off requests by status.

Error Message: Unable to delete time off request: no time off request ID specified
Cause: A call was made to deleteTimeOffRequest, but no time off request ID was specified.
Solution: Ensure that a time off request ID is always specified when deleting a time off request.

Error Message: Unable to delete time off request: no external ID specified
Cause: A call was made to deleteTorByExternalId, but no external ID was specified.
Solution: Ensure that an external ID is always specified when deleting time off requests by external ID.

Error Message: No work date specified on time off request detail
Cause: A detail was specified when creating a new time off request that did not include a work date value.
Solution: Ensure that all details specified for a time off request include a work date.

Error Message: No pay code specified on time off request detail
Cause: A detail was specified when creating a new time off request that did not include a pay code.
Solution: Ensure that all details specified for a time off request include a pay code.

Error Message: No details specified for Time Off Request
Cause: No details were specified when creating a new time off request.
Solution: Ensure that at least one detail record is specified when creating a new time off request.

Error Message: Unable to update TOR details: no details specified
Cause: An attempt was made to update the details for a time off request without any new details being specified.
Solution: Ensure that at least one detail record is specified when updating the details for a time off request.

Error Message: Unable to change the owner of a time off request with a status of STATUS
Cause: An attempt was made to change the owner of a time off request when that request was already in a status that does not allow for the owner to be changed.
Solution: Ensure that the time off request is in a status that allows for the owner to be changed before trying to change the owner.

Error Message: Time off requests can not be assigned to aggregate assignments
Cause: An attempt was made to change the owner of a time off request to an aggregate assignment.
Solution: Ensure that only single or component assignments are targeted as the new owner for a time off request when changing its owner.

Error Message: Cannot use changeOwner to change the employee a time off request belongs to. Employee would have changed from ORIGINAL_EMPLOYEE to NEW_EMPLOYEE
Cause: An attempt was made to change the owner of a time off request so that a different employee owns it.
Solution: Ensure that calls to changeOwner are only being used to change which assignment owns a time off request within a given employee's assignments.
Table 88: Time Off Request API typical error messages, root problems, and solutions
API Reference
Knowledge of JavaScript programming is necessary to make the best use of this section.
TimeOffRequestAPI
Creates a new instance of the Time Off Request API
createTimeOffRequest(parms)
Creates a new time off request with the specified parameters, returning the ID of the newly created time off
request.
Available parameters are:
Parameter Name Description
asgnmt The assignment that the time off request belongs to
details Array of JS objects containing the details for the
time off request. Each detail object is expected to
have properties corresponding to the values for the
fields on the detail that should be created
externalId The ID from an external system to identify the time
off request. If no value is provided this ID will be
left blank
actCase The internal ACT Case ID for the ACT Case the time
off request should be associated with. If no value is
provided then the time off request will not be
associated with an ACT Case.
getTimeOffRequest(id)
Returns the time off request data for the time off request with the specified internal time off request ID.
getTorByExternalId(id)
Returns the time off request data for the time off request with the specified external ID value.
getTimeOffRequestsByStatus(statuses, asgnmts)
Returns an array of data for all time off requests that are currently in one of the specified statuses. An array
of assignments can optionally be specified, in which case only time off requests belonging to one of the
specified assignments will be returned.
deleteTimeOffRequest(id)
Deletes the time off request (and any associated time records that belong to it) with the specified internal
time off request ID.
deleteTorByExternalId(id)
Deletes the time off request (and any associated time records that belong to it) with the specified external ID
value.
TORScriptable
JavaScript object representing the data for a time off request. These will be returned by any of the methods
of the TimeOffRequestAPI that return time off request data.
getDetails()
Returns an array containing the information for the detail records that make up this time off request.
updateDetails(newDetails)
Replaces the existing details for the time off request with the new details specified. If the date range covered
by the new details has a different beginning or ending date, then the corresponding dates on the time off
request itself will also be updated.
changeStatus(newStatus, comments)
Update the status of the time off request to the specified status. The indicated comments will be associated
with that status change.
changeOwner(asgnmtId)
Switches the time off request to belong to the indicated assignment. Any time records associated with
the time off request will be moved as well. If the new assignment belongs to a different employee than
the time off request was originally associated with, this will generate an error.
Prerequisites
To use this API, you should be familiar with the following functionality:
• Basic JavaScript coding
Components
This API consists of the following component(s):
• The TIMESHEET_EXCEPTION_API distributed JavaScript Library
Setup
No setup is necessary for the TIMESHEET_EXCEPTION_API. The distributed library is automatically available
in WT&A.
Use Cases
Get Time Sheet Exceptions
The following script example demonstrates how time sheet exceptions can be retrieved by providing
the allowed parameters.
// Set the desired parameters and remove the rest. All parameters are optional.
var params = {
exceptionCodeSet: "FM_AS_EXCEPTIONS",
severity: "INFO_NO_ACTION",
exceptionSource: "TIME_SHEET_DETAIL_SPLIT",
placeholder: "LD1",
asOfDate: new WFSDate(2004, 08, 29),
matchCondition: new MatchCondition(MatchTable.ASGNMT, "ASGNMT", MatchOperator.EQUALS,
1211617644),
asgnmtEffDate: new WFSDate(2012, 06, 30),
exceptionDate: new WFSDate(2004, 08, 30)
};
// get exceptions using the parameters defined above
var timesheetExceptionAPI = new TimesheetExceptionAPI({ isDebug: false });
var exceptions = timesheetExceptionAPI.getExceptions(params);
log.info(exceptions.length);
Example 1: Find time sheet exceptions in a given date range without optional
parameters.
Retrieve time sheet exceptions within a date range.
includeDistributedPolicy("TIMESHEET_EXCEPTION_API");
// get exceptions
var exceptions = timesheetExceptionAPI.getExceptionsByDateRange(params);
log.info(exceptions.length);
Example 2: Find time sheet exceptions in a given date range with optional
parameters.
Retrieve time sheet exceptions within a date range, applying optional parameters.
includeDistributedPolicy("TIMESHEET_EXCEPTION_API");
// get exceptions
var exceptions = timesheetExceptionAPI.getExceptionsByDateRange(params);
log.info(exceptions.length);
Troubleshooting
The job log of the script using the TIMESHEET_EXCEPTION_API will include informational messages, and in
the case of problems, error messages. This job log should be reviewed if there are problems using the API.
The following table lists some common error messages, their causes, and the solution.
Error Message: Exception start date not provided
Cause: The required start date parameter is missing or null in the method getExceptionsByDateRange.
Solution: Provide the required parameter for the method.

Error Message: Exception end date not provided
Cause: The required end date parameter is missing or null in the method getExceptionsByDateRange.
Solution: Provide the required parameter for the method.

Error Message: Specified end date is before start date
Cause: The specified end date is before the start date in the method getExceptionsByDateRange.
Solution: Specify correct dates.
Table 90: Timesheet Exception API common error messages, causes, and solutions
API Reference
Knowledge of JavaScript programming is necessary to make the best use of this section.
TimesheetExceptionAPI
The following methods are available for TimesheetExceptionAPI.
TimesheetExceptionAPI (params)
Constructor to create a new instance of the API. Takes an object as a parameter, which may include the
parameter defined in the table below.
Parameter Description
isDebug True if debug content should be written to the log, false if not
getExceptions (params)
Retrieves time sheet exceptions based on the provided parameters. All parameters are optional: the
method filters the results according to whichever parameters are provided, and retrieves all time sheet
exceptions from the database if no parameters are provided. Takes an object as a parameter, which
includes the parameters defined in the table below.
Parameter Description
exceptionCodeSet Exception Codes for which Exceptions need to be retrieved
severity Retrieve exceptions for specified severity
exceptionSource Filter on exception source
placeholder Time sheet exception value Placeholder
asOfDate Filter on Employee Period Start and End date
matchCondition The condition to select the assignment
asgnmtEffDate The effective date to use for the retrieval of the assignment record
exceptionDate Filter on the exception date
getExceptionsByDateRange (params)
Retrieves time sheet exceptions based on a date range. This method has required and optional
parameters; the optional parameters add additional criteria to narrow down the desired time sheet
exceptions. Takes an object as a parameter, which includes the parameters defined in the table below.
Parameter Description
exceptionStartDate Filter on Exception Start date range
exceptionEndDate Filter on Exception End date range
exceptionCodeSet Exception Codes for which Exceptions need to be retrieved
severity Retrieve exceptions for specified severity
exceptionSource Filter on exception source
placeholder Time sheet exception value Placeholder
asOfDate Filter on Employee Period Start and End date
matchCondition The condition to select the assignment
asgnmtEffDate The effective date to use for the retrieval of the assignment record
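A minimal sketch of a call with the two required date parameters plus one optional filter; the severity value is copied from the earlier example and the dates are illustrative:
includeDistributedPolicy("TIMESHEET_EXCEPTION_API");
var timesheetExceptionAPI = new TimesheetExceptionAPI({ isDebug: false });
var params = {
    exceptionStartDate: new WFSDate(2018, 1, 1), // required
    exceptionEndDate: new WFSDate(2018, 1, 31),  // required
    severity: "INFO_NO_ACTION"                   // optional filter
};
var exceptions = timesheetExceptionAPI.getExceptionsByDateRange(params);
log.info(exceptions.length);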
Prerequisites
To use this API, you should be familiar with the following functionality:
• Basic JavaScript coding
Components
This API consists of the following component(s):
1. The TIMESHEET_OPERATIONS_API Distributed JavaScript library
a. This automatically includes the API_UTIL Distributed JavaScript library
2. The MatchCondition which defines a condition to select employee, assignment, or user records
Setup
No setup is necessary for the Timesheet Operations API. The distributed library is automatically available
within WorkForce Time and Attendance.
Use Cases
Determining Existing Timesheet Status
The Timesheet Operations API allows a script to lookup the current state of a timesheet. This is useful if
certain actions should only be taken when a timesheet is in a certain state (for instance, swipes should only
be posted to an unapproved timesheet). This allows the script to conditionally change its behavior based on
the state of the timesheet. It also gives the script an opportunity to change the state of the timesheet before
continuing with any further processing.
// Get the status of the timesheet for the period containing the as-of date
var status = api.getTimesheetStatus(condition, asOfDate);
Note: If the condition specified matches more than one assignment, an exception will be generated
// Define the date that we're interested in knowing the period boundaries for
var asOfDate = new WFSDate(2016, 7, 21);
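Putting the pieces together, here is a minimal sketch; it assumes the TimesheetOpsAPI constructor takes no arguments and that currentState is returned as a string:
includeDistributedPolicy("TIMESHEET_OPERATIONS_API");
var api = new TimesheetOpsAPI();
var condition = new MatchCondition("employee", "display_employee",
    MatchOperator.EQUALS, "E156247");
var asOfDate = new WFSDate(2016, 7, 21);
var status = api.getTimesheetStatus(condition, asOfDate);
// Only continue posting swipes while the timesheet is still open
if (status.currentState == "OPEN") {
    // ... post the swipes ...
}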
Approving Timesheets
The Timesheet Operations API can be used to assign a new approval to a timesheet or a set of timesheets.
This will act as though a manager with the indicated approval level had directly approved those timesheets.
Note: The API cannot be used to reduce the approval level of a timesheet while leaving it in an approved state
(e.g. changing the approval level from 5 to 3). If the existing approval level for the timesheet is greater
than the specified approval level, an exception will be generated. In order to apply a lower approval
level, the timesheet must first be unapproved and then approved again using the new, lower approval
level.
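A minimal sketch follows. The approveTimesheet and approveTimesheetForGroup methods appear in the Troubleshooting table below, but their exact signatures are not documented in this section, so the argument order shown (condition or group display ID, as-of date, approval level) and the group display ID are assumptions:
// Approve the period containing asOfDate at approval level 5 for the matched assignments
api.approveTimesheet(condition, asOfDate, 5);
// Or approve every assignment in an assignment group (hypothetical group display ID)
api.approveTimesheetForGroup("STORE_42_ASGNMTS", asOfDate, 5);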
Unapproving Timesheets
The Timesheet Operations API can be used to reset a timesheet’s approval level back to zero. This will return
the timesheet back to its original unapproved, unsubmitted state, allowing new time entries to be added to
it. One common scenario where this may be desired is to allow swipes to post if there is a chance the
employee or a manager may have already increased the approval level of the timesheet.
Note: Timesheets that have already been locked or closed cannot be unapproved. Attempting to unapprove
a timesheet in one of these states will cause an exception to be generated.
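A minimal sketch using the documented methods; the group display ID is a hypothetical example:
// Reset the approval level back to zero so new entries (e.g. swipes) can post
api.unapproveTimesheet(condition, asOfDate);
// Or unapprove every assignment in an assignment group
api.unapproveTimesheetForGroup("STORE_42_ASGNMTS", asOfDate);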
Amending a Timesheet
The Timesheet Operations API can be used to amend a timesheet. This functions just like clicking on the
“Amend” button on the timesheet, and puts a closed timesheet in a state where it can be modified again. As
with any other amendments, timesheets that are amended using this process will be included during the
current period’s end-of-period processing once they have been approved.
// Define the condition to select the employee whose timesheet should be amended
var condition = new MatchCondition("employee", "display_employee",
MatchOperator.EQUALS, "E156247");
Note: The condition specified to select the timesheet to amend must select only a single assignment. If more
than one assignment matches the specified condition, an exception will be generated.
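Continuing the example, a minimal sketch; the as-of date is illustrative:
// Amend the closed period containing the indicated date for the matched assignment
var asOfDate = new WFSDate(2016, 6, 15);
api.amendTimesheet(condition, asOfDate);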
Calculating Timesheets
The Timesheet Operations API includes support for calculating timesheets. Some common scenarios where
this functionality might be desirable are:
• Scripting a nightly calculate process, if WorkForce Time and Attendance’s native support for that is
insufficient
• Calculating a range of future periods (e.g. to ensure balances are populated in the database for reports)
// Use the first occurrence of the template as the starting point for the cycle
var sequenceNumber = sequenceNumbers[0];
includeDistributedPolicy("TIMESHEET_OPERATIONS_API");
Troubleshooting
The job log of the script using the Timesheet Operations API will contain informational messages and, in the
case of problems, any error messages generated during processing. This job log should be reviewed if there
are any problems encountered while using the API.
Error Message: No Matching assignment found for Amend operation
Cause: The match condition specified for a call to amendTimesheet did not match any assignments.
Solution: Ensure that the match condition being used correctly identifies a single assignment that should be amended, and then ensure that the employee/assignment data for the matching assignment is correct.

Error Message: No Matching assignment(s) found for Calculate current period operation
Cause: The match condition specified for a call to calculateCurrentPeriod did not match any assignments.
Solution: Ensure that the match condition being used correctly identifies one or more assignments that should be calculated, and then ensure that the employee/assignment data for the matching assignments is correct.

Error Message: Could not find any assignment group with display_id GROUP_ID
Cause: An assignment group ID was specified for one of the methods that operates on an assignment group, but no group with that ID currently exists.
Solution: Ensure that the expected assignment group exists and that the right ID for that group is being used by the script.

Error Message: No Matching assignment found for Get Timesheet Status operation
Cause: The match condition specified for a call to getTimesheetStatus did not match any assignments.
Solution: Ensure that the match condition being used correctly identifies a single assignment, and then ensure that the employee/assignment data for the matching assignment is correct.

Error Message: No Matching assignment(s) found for Assign Schedule operation
Cause: The match condition specified for a call to assignScheduleCycle did not match any assignments, or the assignment group specified for a call to assignScheduleCycleForGroup was empty.
Solution: Ensure that the match condition being used correctly identifies one or more assignments that should have the schedule cycle assigned, and then ensure that the employee/assignment data for the matching assignments is correct, or verify that the correct assignment group is specified and contains a non-zero number of assignments.

Error Message: More than one assignment exist that match the condition
Cause: The match condition specified for a call to amendTimesheet matched more than one assignment.
Solution: Ensure that the match condition being used correctly identifies a single assignment that should be amended.

Error Message: No Matching assignment(s) found for Approve operation
Cause: The match condition specified for a call to approveTimesheet didn't match any assignments, or the assignment group specified for a call to approveTimesheetForGroup was empty.
Solution: Ensure that the match condition being used correctly identifies one or more assignments that should be approved, or verify that the correct assignment group is specified and contains a non-zero number of assignments.

Error Message: Assignment ASSIGNMENT_ID does not have editable timesheet defined for date: DATE
Cause: An editable timesheet does not already exist for the period containing the specified date. This generally means an approval event is being applied to a period that has never been calculated before.
Solution: Ensure that the correct assignment and date are being used, and then make sure that the period being approved has been calculated before attempting to apply the approval event.

Error Message: No Matching assignment(s) found for Unapprove operation
Cause: The match condition specified for a call to unapproveTimesheet didn't match any assignments, or the assignment group specified for a call to unapproveTimesheetForGroup was empty.
Solution: Ensure that the match condition being used correctly identifies one or more assignments that should be unapproved, or verify that the correct assignment group is specified and contains a non-zero number of assignments.

Error Message: No Matching assignment(s) found for Calculate timesheet operation
Cause: The match condition specified for a call to calculateTimesheets didn't match any assignments, or the assignment group specified for a call to calculateTimesheetsForGroup was empty.
Solution: Ensure that the match condition being used correctly identifies one or more assignments that should be calculated, or verify that the correct assignment group is specified and contains a non-zero number of assignments.

Error Message: Approval Level must be greater than 0 and less than 99
Cause: An approval level was specified that was either negative or greater than 98, which are not valid approval level values.
Solution: Ensure that an approval level between 0 and 98 is specified when approving timesheets.

Error Message: Timesheet for assignment ASSIGNMENT_ID has already a higher approval level of value: APPROVAL_LEVEL
Cause: An approval level was specified that is lower than the existing approval level when approving a timesheet.
Solution: Ensure that the approval level being provided is the same as or higher than the approval level that already exists for the timesheet when approving it.

Error Message: Timesheet for assignment ASSIGNMENT_ID is already locked for date DATE
Cause: An attempt was made to unapprove a timesheet that is currently in a locked state.
Solution: Ensure that the timesheets being unapproved are open for editing and not currently in a locked state before unapproving.

Error Message: Timesheet for assignment ASSIGNMENT_ID is already closed for date DATE
Cause: An attempt was made to unapprove a timesheet that is currently in a closed state.
Solution: Ensure that the timesheets being unapproved are open for editing before unapproving.

Error Message: SEQUENCE_NUMBER is not a valid sequence number for Schedule cycle CYCLE_ID
Cause: The sequence number specified is either too low or too high for the specified schedule cycle, based on the number of templates that currently make up that cycle.
Solution: Ensure that the sequence number specified matches a valid sequence number for the schedule cycle.
API Reference
Knowledge of JavaScript programming is necessary to make the best use of this section.
The Timesheet Operations API consists of the following component(s):
1. The MatchCondition, which defines a condition to select employee, assignment, or user records
a. This MatchCondition makes use of a MatchOperator which defines a comparison operation
to use in a condition for matching a specified value against the employee/assignment/user
data.
2. The TimesheetOpsAPI, which provides assorted methods for manipulating timesheets and retrieving
information about their status.
The following is a summary of the available methods and common uses:
MatchOperator
EQUALS
Only data where the value in the specified field exactly matches the indicated value will be matched by the
condition
NOT_EQUALS
Only data where the value in the specified field does not match the indicated value will be matched by the
condition
GREATER_THAN
Only data where the value in the specified field is greater than the indicated value will be matched by the
condition. For string fields this is applied lexicographically, meaning that “3” is greater than “20”.
GREATER_THAN_OR_EQUALS
Only data where the value in the specified field exactly matches or is greater than the indicated value will be
matched by the condition. For string fields this is applied lexicographically, meaning that “3” is greater than
“20”.
LESS_THAN
Only data where the value in the specified field is less than the indicated value will be matched by the
condition. For string fields this is applied lexicographically, meaning that “20” is less than “3”.
LESS_THAN_OR_EQUALS
Only data where the value in the specified field exactly matches or is less than the indicated value will be
matched by the condition. For string fields this is applied lexicographically, meaning that “20” is less than
“3”.
IN
Only data where the value in the specified field exactly matches one of the values in the indicated array of
values will be matched by the condition.
NOT_IN
Only data where the value in the specified field does not match any of the values in the indicated array of
values will be matched by the condition.
LIKE
Only data where the value in the specified field matches the pattern defined by the indicated value will be
matched by the condition.
NOT_LIKE
Only data where the value in the specified field does not match the pattern defined by the indicated value
will be matched by the condition.
BETWEEN
Only data where the value in the specified falls between the two values defined by the indicated array of
values (inclusive of the two endpoints) will be matched by the condition. For string fields this is applied
lexicographically, meaning that “5” is between “37” and “62”.
MatchCondition
MatchCondition(table, field, operator, values, isNotOperator)
Creates a new MatchCondition for the indicated table/field combination that matches against the specified
value(s) using the indicated MatchOperator.
If the MatchOperator specified is IN or NOT_IN, then values is expected to be an array of all the different
values to be evaluated by the “in” operation. If the MatchOperator specified is BETWEEN, then values is
expected to be an array containing two values: the starting point and ending point of the range to be
evaluated by the “between” operation. For all other MatchOperators, values is expected to be a single
value.
The isNotOperator controls whether the results of the MatchCondition should be negated. If this is set to
true, then a MatchCondition that normally evaluates to false will instead evaluate to true, and a
MatchCondition that normally evaluates to true will instead evaluate to false. This allows for conditions that
don’t have an explicit MatchOperator defined, such as “not between”, to be defined.
and(condition)
Modifies the MatchCondition to return only the intersection of its original condition and the specified
condition.
or(condition)
Modifies the MatchCondition to return the union of its original condition and the specified condition.
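A minimal sketch of combining conditions; the field names and values here are illustrative assumptions, not documented fields:
// Match assignments in a (hypothetical) policy profile field
var condition = new MatchCondition(MatchTable.ASGNMT, "POLICY_PROFILE",
    MatchOperator.EQUALS, "US_HOURLY");
// Narrow the match to two locations (IN takes an array of values)
condition.and(new MatchCondition(MatchTable.ASGNMT, "OTHER_STRING1",
    MatchOperator.IN, ["PLANT_1", "PLANT_2"]));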
TimesheetOpsAPI
Creates a new instance of the Timesheet Operations API
unapproveTimesheet(condition, asOfDate)
Unapproves the timesheets for the period containing the indicated as-of date for assignments matching the
specified condition by resetting their approval level to zero.
unapproveTimesheetForGroup(displayId, asOfDate)
Unapproves the timesheets for the period containing the indicated as-of date for all assignments belonging
to the assignment group with the specified display ID by resetting their approval level to zero.
amendTimesheet(condition, asOfDate)
Creates a new amended version of the timesheet for the period containing the indicated as-of date for the
assignment matching the specified condition.
calculateCurrentPeriod(condition)
Calculates the current period timesheet for all assignments matching the specified condition.
calculateCurrentPeriodForGroup(displayId)
Calculates the current period timesheet for all assignments belonging to the assignment group with the
specified display ID.
getTimesheetStatus(condition, asOfDate)
Returns information about the timesheet for the period containing the indicated as-of date for the
assignment matching the specified condition. Information returned includes:
▪ approvalLevel: the current overall approval level of the timesheet
▪ mgrApprovalLevel: the current manager approval level of the timesheet
▪ isEmployeeApproved: true if the employee submitted the timesheet, false if not
▪ isAmendment: true if the timesheet is an amended timesheet, false if not
▪ currentState: one of OPEN, APPROVED, LOCKED, or CLOSED, based on the approval level
getCurrentPeriodDates(policyProfile)
Returns the beginning and ending dates of the current period for the specified Policy Profile (string).
getPeriodDates(policyProfile, asOfDate)
Returns the beginning and ending dates for the period containing the indicated as-of date for the specified
Policy Profile. Information returned includes:
▪ ppBegin: the beginning date of the period containing the as-of date
▪ ppEnd: the ending date of the period containing the as-of date
getSequenceNumbers(cycle, template)
Returns an array of all of the different sequence numbers within the indicated schedule cycle where the
specified schedule template occurs. If the template does not occur within the schedule cycle, the array
returned will be empty.
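The following is a minimal sketch that exercises several of these methods together. The distributed library
name TIMESHEET_OPS_API and the dates shown are assumptions for illustration only.
includeDistributedPolicy("TIMESHEET_OPS_API"); // library name assumed
var api = new TimesheetOpsAPI();
var condition = new MatchCondition(MatchTable.ASGNMT, "ASGNMT", MatchOperator.EQUALS, "1215736324");
var asOfDate = new WFSDate(2018, 6, 15); // illustrative date
var status = api.getTimesheetStatus(condition, asOfDate);
// Reset the approval level to zero before amending, if the timesheet is already approved
if (status.currentState == "APPROVED") {
    api.unapproveTimesheet(condition, asOfDate);
}
api.amendTimesheet(condition, asOfDate);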
Timesheet Output API
Prerequisites
To use this API, you should be familiar with the following functionality:
• Basic JavaScript coding
Components
This API consists of the following component(s):
• The TIMESHEET_OUTPUT_API Distributed JavaScript Library.
• The TimesheetOutputAPI Java class.
• The API_UTIL Distributed JavaScript Library.
Setup
No setup is necessary for the Timesheet Output API. The distributed library is automatically available in
WT&A.
Use Cases
Getting TSO data using a MatchCondition for an assignment. Version: INITIAL. The timesheet is not locked.
includeDistributedPolicy("TIMESHEET_OUTPUT_API");
var constructParms = {
runCalculate: true,
minApprovalLevel: 0
};
var api = new TimesheetOutputAPI(constructParms);
var queryParms = {
matchCondition: new MatchCondition(MatchTable.ASGNMT, "ASGNMT", MatchOperator.EQUALS,
"1215736324"),
asgnmtEffDate: new WFSDate(2010, 05, 29),
startDate: new WFSDate(2008, 04, 21),
endDate: new WFSDate(2008, 04, 25),
version:"INITIAL"
};
var result = api.getTimesheetOutputListBetweenWorkDates(queryParms);
var queryParms = {
matchCondition: new MatchCondition(MatchTable.ASGNMT, "ASGNMT", MatchOperator.EQUALS,
"1215736324"),
asgnmtEffDate: new WFSDate(2010, 05, 29),
asOfDate: new WFSDate(2008, 04, 26),
version:"OPEN"
};
var result = api.getTimesheetOutputListForPeriodAsOf(queryParms);
var queryParms = {
matchCondition: new MatchCondition(MatchTable.ASGNMT, "ASGNMT", MatchOperator.EQUALS,
"1215736324"),
asgnmtEffDate: new WFSDate(2010, 05, 29),
asOfDate: new WFSDate(2008, 04, 26),
version:"LATEST"
};
var result = api.getTimesheetOutputListForPeriodAsOf(queryParms);
var queryParms = {
matchCondition: new MatchCondition(MatchTable.ASGNMT, "ASGNMT", MatchOperator.EQUALS,
"1215736324"),
asgnmtEffDate: new WFSDate(2010, 05, 29),
asOfDate: new WFSDate(2008, 04, 26),
version:" LATEST_CLOSED"
};
var result = api.getTimesheetOutputListForPeriodAsOf(queryParms);
var queryParms = {
matchCondition: new MatchCondition(MatchTable.ASGNMT, "ASGNMT", MatchOperator.EQUALS,
"1215736324"),
asgnmtEffDate: new WFSDate(2010, 05, 29),
asOfDate: new WFSDate(2008, 04, 26),
version: "LATEST_CLOSED",
payCodeSet: "INT_5184_TEST"
};
var result = api.getTimesheetOutputListForPeriodAsOf(queryParms);
var matchConditionParam = {
table: TimesheetMatchTable.TIME_SHEET_OUTPUT,
field: "HOURS",
operator: MatchOperator.EQUALS,
values: ["3"]
};
var queryParms = {
matchCondition: new MatchCondition(MatchTable.ASGNMT, "ASGNMT", MatchOperator.EQUALS,
"1215736324"),
asgnmtEffDate: new WFSDate(2010, 05, 29),
asOfDate: new WFSDate(2008, 04, 26),
version: "LATEST_CLOSED",
payCodeSet: "INT_5184_TEST",
timesheetMatchCondition: new TimesheetMatchCondition(matchConditionParam)
};
var result = api.getTimesheetOutputListForPeriodAsOf(queryParms);
Troubleshooting
The job log of the script using the Timesheet Output API will include informational messages, and in the case
of problems, error messages. This job log should be reviewed if there are problems using the API.
The following lists some common error messages, their causes, and their solutions.
Error Message: Invalid Pay Code Set provided
Problem: The API will throw an error if the name of the PayCodeSet is not valid.
Solution: Pass a valid PayCodeSet.

Error Message: Version invalid
Problem: The API will throw an error if the version is not valid.
Solution: Pass a valid version from the following: OPEN, LATEST, LATEST_CLOSED, INITIAL.
API Reference
Knowledge of JavaScript programming is necessary to best make use of this section.
getTimesheetOutputListForPeriodAsOf()
Performs a lookup on the TSO table using the provided match condition and returns the matching rows as
ReadOnlyScriptables for the period containing the provided as-of date. Takes an object as its parameter, with
the following fields:
Parameter Description
matchCondition The condition to select the assignment. The condition can be from the EMPLOYEE,
EMPLOYEE_MASTER, ASGNMT, and/or ASGNMT_MASTER tables.
asgnmtEffDate The effective date to use for the retrieval of the assignment record.
asOfDate The date to retrieve the period of timesheet.
version The version of the timesheet for which to retrieve the TSO records. The four
versions are INITIAL, LATEST, LATEST_CLOSED, and OPEN.
payCodeSet (optional) The pay codes for which to retrieve the TSO records. If a payCodeSet
containing zero pay codes is passed, zero records are returned.
timesheetMatchCondition (optional) The TimesheetMatchCondition used to apply additional filter
criteria to the TSO table.
getTimesheetOutputListBetweenWorkDates()
Performs a lookup on the TSO table using the provided match condition and returns the matching rows as
ReadOnlyScriptables between the provided dates. Takes an object as its parameter, with the following fields:
Parameter Description
matchCondition The condition to select the assignment. The condition can be from the EMPLOYEE,
EMPLOYEE_MASTER, ASGNMT, and/or ASGNMT_MASTER tables.
asgnmtEffDate The effective date to use for the retrieval of the assignment record.
startDate The starting work date of the range to retrieve.
endDate The ending work date of the range to retrieve.
version The version of the timesheet for which to retrieve the TSO records: INITIAL,
LATEST, LATEST_CLOSED, or OPEN.
XML Reader API
Prerequisites
To use this API, you should know the following:
• Basic JavaScript coding
Components
The components of the XML Reader API are as follows:
• The XML_READER_API Distributed JavaScript Library
Setup
No setup is necessary for the Xml Reader API. The distributed library is automatically available within
WT&A.
Use Cases
Read a flat XML File into an XMLResultSet
// including the distributed API
includeDistributedPolicy('XML_READER_API');
// creating the parameters object
var parameters = {
    filePath: "workforce/RegTests/ParserValidation/xmlreaderwriterapi_expected.xml",
    fieldsAsAttributes: true
};
var xr = new XMLFlatFileReader(parameters);
// looping through all the records in the XML
while (xr.next()) {
    log.info(xr.getString("name"));
}
// xmlScriptable below is assumed to be the root node returned by XMLReader.parseFromFile (see the sketch that follows)
var children = xmlScriptable.getValue(); // this can be either a string or a list of XmlScriptables
children[0].getValue(); // and so on, until you reach the leaf node
xmlScriptable.getNamespaceUri(); // returns the namespace URI of the node
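The following sketch ties the excerpt above together. The exact signature of parseFromFile is not documented
in this guide, so the single file-path argument and the XmlScriptable return value are assumptions for
illustration.
includeDistributedPolicy('XML_READER_API');
// Assumption: parseFromFile accepts a file path and returns the root XmlScriptable
var xmlScriptable = XMLReader.parseFromFile("workforce/RegTests/ParserValidation/xmlreaderwriterapi_expected.xml");
var children = xmlScriptable.getValue(); // either a string (leaf) or a list of XmlScriptables
children[0].getValue(); // descend one level; repeat until a leaf (string) is reached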
Troubleshooting
The job log of the script using the XML Reader API will contain information messages and, in the case of
problems, any error messages generated during processing. This job log should be reviewed if there are any
problems encountered while using the API.
API Reference
Knowledge of JavaScript programming is necessary to best make use of this section.
The XML Reader API consists of the following public components:
o XMLFlatFileReader. It contains the following public methods:
▪ next()
▪ getString("fieldname")
o XMLReader. It contains the following public static methods:
▪ parseFromFile
next()
Sets the cursor to the next record in XMLReader object and returns true. Returns false if there is no next row.
XML Writer API
Prerequisites
To use this API, you should know the following:
• Basic JavaScript coding
Components
The components of the XML Writer API are as follows:
• The XML_WRITER_API Distributed JavaScript Library
Setup
No setup is necessary for the XML Writer API. The distributed library is automatically available within
WT&A.
Use Cases
Create and write a hierarchy of XmlScriptables into an XML File
// include the distributed API
includeDistributedPolicy('XML_WRITER_API');
var filePath = "workforce/RegTests/ParserValidation/xmlreaderwriterapi_generated.xml";
// writing the XmlScriptable hierarchy (rootNode) to the file
XMLWriterAPI.writeXmlScriptable(rootNode, filePath);
// writing the hierarchy to the file, encrypted with the provided alias
XMLWriterAPI.writeXmlScriptableEncrypted(rootNode, filePath, alias);
Troubleshooting
The job log of the script using the XML Writer API will contain information messages and, in the case of
problems, any error messages generated during processing. This job log should be reviewed if there are any
problems encountered while using the API.
Error Message: java.lang.IllegalArgumentException: No encryption alias provided
Problem: No encryption alias is provided in the method writeXmlScriptableEncrypted.
Solution: Provide a valid encryption alias.

Error Message: com.workforcesoftware.Util.pgp.NoPublicKeyFound: A PGP key with the identifier 'VeryBadAlias' was not found in the system
Problem: An invalid alias is provided in the method writeXmlScriptableEncrypted.
Solution: Provide a valid encryption alias.
API Reference
Knowledge of JavaScript programming is necessary to best make use of this section.
The XML Writer API does not need to be initialized. It consists of the following component(s):
1. writeXmlScriptable(xmlScriptable: XmlScriptable, filePath: string, encoding: string), which writes the
provided xml scriptable object’s string representation to the provided filePath.
2. writeXmlScriptableEncrypted(xmlScriptable: XmlScriptable, filePath: string, encryptionAlias: string,
encoding: string), which writes the provided xml scriptable object’s string representation to the
provided filePath, encrypting it with the provided alias.
3. XmlScriptable, which is a representation of a single XML Node/Element. It may or may not contain
other XmlScriptables as children (just like a normal xml node would). It can be initialized from
anywhere in JS using the following:
var varName = new XmlScriptable("scriptableName");
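As a minimal end-to-end sketch (the output path is hypothetical, and no child nodes are attached because the
child-attachment methods are not documented in this guide):
includeDistributedPolicy('XML_WRITER_API');
var rootNode = new XmlScriptable("employees"); // node name is illustrative
XMLWriterAPI.writeXmlScriptable(rootNode, "workforce/outbound/employees.xml", "UTF-8"); // hypothetical path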
Appendix: API_UTIL
Overview and Capabilities
The API_UTIL provides different types of utilities common to multiple APIs that can be used in scripts, such as
String, Date, Database, Log, and EDAI (EmpCenter Data Access Interface).
Note: This document provides information only on the EDAI component of API_UTIL. For additional
information about API_UTIL, contact your WorkForce Software representative.
Prerequisites
To use this API, you should be familiar with the following functionality:
• Basic JavaScript coding
Components
This API consists of the following component(s):
• The API_UTIL distributed JavaScript Library
Setup
No setup is necessary for the API_UTIL. The distributed library is automatically available in WorkForce Time
and Attendance.
Use Cases
Access and modify timesheet slices
These script excerpts demonstrate how the filter can be applied to the list of timesheet slices to get specific
slices and then how to modify them if needed.
// Once you have the list of slices, a filter can be applied to get the filtered slices.
var daySlices = timesheetSlices.filter(dayFilter);
// Select the slice to modify from the filtered list (selection logic omitted in this excerpt)
var daySlice = daySlices[0];
if (daySlice != null) {
    // update this timesheet slice
    var copiedDaySlice = daySlice.copy();
    copiedDaySlice.start_dttm = copiedDaySlice.start_dttm.addHours(1);
    copiedDaySlice.end_dttm = copiedDaySlice.end_dttm.addHours(1.5);
    copiedDaySlice.hours = 4.5;
    timesheet.updateSlice(copiedDaySlice);
}
// Once you have the list of slices, a filter can be applied to get the filtered slices.
var filteredScheduleSlices = scheduleSlices.filter(scheduleSlicesFilter);
// Select the slice to modify from the filtered list (selection logic omitted in this excerpt)
var scheduleSlice = filteredScheduleSlices[0];
if (scheduleSlice != null) {
    // update this schedule slice
    var copiedScheduleSlice = scheduleSlice.copy();
    copiedScheduleSlice.start_dttm = copiedScheduleSlice.start_dttm.addHours(1);
    copiedScheduleSlice.end_dttm = copiedScheduleSlice.end_dttm.addHours(1.5);
    copiedScheduleSlice.hours = 4.5;
    schedule.updateSlice(copiedScheduleSlice);
}
// Once you have the list of records, a filter can be applied to get the filtered records.
var filteredTimeSheetOutputs = timeSheetOutputs.filter(timeSheetOutputFilter);
includeDistributedPolicy("API_UTIL");
// Once you have the list of records, a filter can be applied to get the filtered assignments.
var filteredAssignments = assignments.filter(assignmentFilter);
function getAssignment() {
    // logic to get the actual assignment relevant to this script
    return "12318723";
}
Troubleshooting
The job log of the script using the API UTIL will include informational messages, and in the case of problems,
error messages. This job log should be reviewed if there are problems using the API.
The following table lists some common error messages, their causes, and the solution.
API Reference
Knowledge of JavaScript programming is necessary to best make use of this section.
The API_UTIL consists of many components. This document provides details only on the following
component:
• The EDAIUtil, which defines EDAI utility functions for creating conforming transformation and
filter implementations.
Note: For information about other components of API_UTIL, contact your WorkForce Software representative.
EDAIUtil
The EmpCenter Data Access Interface (EDAI) is used to access employee, assignment, and timesheet data.
The following functions are available in EDAIUtil.
createAssignmentFilter(keepFunction)
Creates an assignment filter implementation with the specified keep function.
Parameter Description
keepFunction Function that determines which assignment records are kept by the filter
createScheduleDetailFilter(keepFunction)
Creates a schedule_detail filter implementation with the specified keep function.
Parameter Description
keepFunction Function that determines which schedule_detail records are kept by the filter
createTimeSheetDetailSplitFilter(keepFunction)
Creates a time_sheet_detail_split filter implementation with the specified keep function.
Parameter Description
keepFunction Function that determines which time_sheet_detail_split records are kept by the filter
createTimeSheetOutputFilter(keepFunction)
Creates a time_sheet_output filter implementation with the specified keep function.
Parameter Description
keepFunction Function that determines which time_sheet_output records are kept by the filter
createScheduleDetailTransformer(updateFunction)
Creates a schedule_detail transformation implementation with the specified update function.
Parameter Description
updateFunction Function that specifies the logic to update schedule_detail records
createTimeSheetDetailSplitTransformer(updateFunction)
Creates a time_sheet_detail_split transformation implementation with the specified update function.
Parameter Description
updateFunction Function that specifies the logic to update time_sheet_detail_split records
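For illustration, the following sketch creates a filter and a transformer. The record field names used inside the
keep and update functions are assumptions, and the assignments list is presumed to have been obtained
elsewhere in the script.
includeDistributedPolicy("API_UTIL");
// Keep only assignments matching a particular policy profile (field name assumed)
var assignmentFilter = EDAIUtil.createAssignmentFilter(function(record) {
    return record.policy_profile == "HOURLY";
});
var filteredAssignments = assignments.filter(assignmentFilter);
// Shift each schedule_detail record forward by one hour (field names taken from the slice examples above)
var scheduleTransformer = EDAIUtil.createScheduleDetailTransformer(function(record) {
    record.start_dttm = record.start_dttm.addHours(1);
    record.end_dttm = record.end_dttm.addHours(1);
});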
Appendix: JS_UTIL
Overview and Capabilities
The JS_UTIL API provides different types of lookup tables and utility methods that can be used in scripts. You
can use the lookup table and effective-dated lookup table objects to hold data as key/value pairs. The
following types of utilities are also available:
• converting dates/times
• getting/converting time zones
• trimming or padding strings
• rounding numbers
• validating/renaming files
• archiving files
Note: JS_UTIL is the latest version of this API that should be used in scripts. Previously, it was named
EMPLOYEE_IMPORT_UTIL. Although EMPLOYEE_IMPORT_UTIL has been retained for backward
compatibility purposes, its code has been moved into JS_UTIL, and it now simply includes the JS_UTIL
distributed library.
Prerequisites
To use this API, you should know:
• Basic JavaScript coding.
Components
This API consists of the following component(s):
• JS_UTIL distributed JavaScript Library
Setup
No setup is necessary for JS_UTIL API. The distributed library is automatically available within WorkForce
Time and Attendance.
Use Cases
LookupTable
The following example shows how to use the LookupTable.
includeDistributedPolicy("JS_UTIL");
EffDatedLookupTable
The following example shows how to use the EffDatedLookupTable.
includeDistributedPolicy("JS_UTIL");
// fetching the value mapped against the key for asOfDate in EffDatedLookupTable
var value = map.get(emp_01, "2017-02-01"); // returns asgnmt_01
if (value == asgnmt_01) {
log.info("Assignment: " + asgnmt_01 + " for Employee: " + emp_01 );
}
else {
log.error("Unexpected asgnmt value: " + value);
}
Utility Functions
The following example shows how to use the different utility functions defined in JS_UTIL.
includeDistributedPolicy("JS_UTIL");
if (timeZone != "Africa/Addis_Ababa") {
log.error("Incorrect timezone name: " + timeZone);
}
The following example provides a way of creating and using a connection scriptable with the help of a method
available in the JS_UTIL distributed library.
includeDistributedPolicy("JS_UTIL");
// The policy name of the "DB Connection Info" policy to the source data
var DB_CONNECTION_POLICY = "EMPCENTER_CONNECTION_1";
var connectionScriptable;
var sql;
try {
// Returns a connection scriptable to the corresponding database
connectionScriptable = createSourceConnection(DB_CONNECTION_POLICY);
var query = "Select TOP(10) DISPLAY_EMPLOYEE, FIRST_NAME from EMPLOYEE";
// Creates a Sql scriptable
sql = new Sql(connectionScriptable, query);
while (sql.next()) {
log.info("EmpId: " + sql.DISPLAY_EMPLOYEE + " First Name: " + sql.FIRST_NAME);
}
}
catch (e) {
log.error("Error while execution: " + e);
}
finally {
if (connectionScriptable) {
connectionScriptable.closeStatements();
connectionScriptable.close();
}
if (sql) {
sql.close();
}
}
The following example shows how to use methods related to file processing.
includeDistributedPolicy("JS_UTIL");
// Archives the specified file in the specified folder. The original file will no longer
exist.
var archivedPath = filePath + "Archive/";
if (archiveFile(file, archivedPath)) {
log.info("Successfully archived " + file.getName() + " in " + archivedPath);
}
else {
log.info("Archiving failed for " + file.getName());
}
// Creates two different file objects for comparison of their modification date.
var file1 = new java.io.File(filePath + "File1.csv");
var file2 = new java.io.File(filePath + "File2.csv");
var difference = compareFilesByModificationDate(file1, file2);
if (difference < 0) {
log.info(file2.getName() + " is modified later.");
}
else if (difference > 0) {
log.info(file1.getName() + " is modified later.");
}
else {
log.info("Same modification date.");
}
Troubleshooting
The job log of the script using the JS_UTIL distributed library will contain information messages and, in the case of
problems, any error messages generated during processing. This job log should be reviewed if there are any
problems encountered while using the API.
Error Message: Directory <directoryPath> does not exist
Cause: {IOException} The file denoted by the abstract directory path in listItemsInDirectory does not exist or is not a directory.
Solution: Provide a valid directory path.

Error Message: Unable to rename file: file doesn’t exist
Cause: The file to be renamed does not exist.
Solution: Provide an appropriate file name and directory path.

Error Message: Unparseable WDate: <date>
Cause: ParseException
Solution: Provide an appropriate date string in convertWDate.

Error Message: Can’t parse <date> in these formats
Cause: ParseException
Solution: Provide an appropriate date and format string in convertWDateTime/convertWTime.
API Reference
LookupTable
The LookupTable is a map that contains key/value pairs and has the following methods available:
set(key, object)
Sets the value of the object against the key in the lookup table. The object can be an instance of an array, a
string, or a primitive.
Parameter Description
key Object or String
object Object or String
containsKey(key)
Checks the lookup table to determine if it contains the key and returns a Boolean.
Parameter Description
key Object or String
get(key)
Returns the object mapped against the key in the lookup table.
Parameter Description
key Object or String
setAll(keys, objects)
Sets the values of objects against the array of keys provided in the lookup table. If objects is an array, then
each key in the keys array is mapped to the value at the same index in the objects array.
Parameter Description
keys Array
objects Array, Object or String
getKeyArray()
Returns a list of keys contained in the lookup table.
clear()
Clears the data contained in the lookup table.
EffDatedLookupTable
The EffDatedLookupTable has the following methods available:
get(key, asOfDate)
Returns a value mapped against the key on asOfDate in the EffDatedLookupTable, or null if no key is found.
Parameter Description
key Object or String
asOfDate String or WFSDate
containsKey(key, asOfDate)
Returns a Boolean true if the EffDatedLookupTable contains the key on asOfDate, false otherwise.
Parameter Description
key Object or String
asOfDate String or WFSDate
Utility Functions
isValidISO4217CurrencyCode(currencyCode)
Returns a Boolean after validating that the string currencyCode is a valid ISO 4217 3-letter currency code.
Parameter Description
currencyCode Currency code to be validated
convertWDate(date, format)
Converts the date string into a WDate object and returns it, using the specified date format string. If the date
format is not provided, the standard date format yyyy-MM-dd is used.
Parameter Description
date Date string to be converted
format String date format
convertWDateTime(date, format)
Converts the date-time string into a WDateTime object and returns it, using the specified date-time format
string. If the date-time format is not provided, the standard date-time format yyyy-MM-dd HH:mm:ss is used.
Parameter Description
date Date-time string to be converted
format String date-time format
convertWTime(date, format)
Converts the time string into a WTime object and returns it, using the specified time format string. If the time
format is not provided, the standard time format HH:mm:ss is used.
Parameter Description
date Time string to be converted
format String time format
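For example (the values are illustrative; the format strings follow the Java date-format conventions used
above):
var d = convertWDate("2018-06-15"); // uses the default yyyy-MM-dd format
var dt = convertWDateTime("15/06/2018 08:30:00", "dd/MM/yyyy HH:mm:ss");
var t = convertWTime("08:30:00"); // uses the default HH:mm:ss format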
Parameter Description
local Policy ID of local timezone
getTZName(offset, useDST)
Gets the first Java time-zone name for the given time-zone offset, measured in hours.
Parameter Description
offset Size of the offset in hours
useDST Boolean value indicating whether Daylight Saving Time should be
considered
LTrim(s)
Trims white space from the left side of the string.
Parameter Description
s String to be trimmed
RTrim(s)
Trims white space from the right side of the string.
Parameter Description
s String to be trimmed
Trim(s)
Trims white space present in the string.
Parameter Description
s String to be trimmed
Parameter Description
str String to be padded
padChar Character that is to be added to the string
length Final length of string after padding
max()
Returns the greatest of the specified values, eliminating null elements from consideration.
min()
Returns the least of the specified values, eliminating null elements from consideration.
isBlank(str)
Returns a Boolean value indicating whether the specified string is empty or null.
Parameter Description
str String to be evaluated
isUndefined(arg)
Returns true if arg is undefined or null, false otherwise.
Parameter Description
arg Object to be evaluated
inArray(array, value)
Returns true if value is present in the provided array, false otherwise.
Parameter Description
array Array
value Object of specified array type
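A small sketch combining the validation helpers above; the values are illustrative.
includeDistributedPolicy("JS_UTIL");
var code = "USD";
if (!isBlank(code) && isValidISO4217CurrencyCode(code)) {
    log.info("Valid currency code: " + code);
}
if (inArray(["USD", "EUR", "GBP"], code)) {
    log.info(code + " is supported");
}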
createSourceConnection(id)
Creates and returns a connection scriptable for the database specified by the indicated Database Connection
Info policy.
Parameter Description
id Policy ID of Db_connection_info policy
roundNumber(num, dec)
Rounds the provided number to the specified number of decimal places and returns the result.
Parameter Description
num Number to round
dec Decimal precision number
roundNumberToQuarter(value)
Rounds the provided value to the nearest quarter.
Parameter Description
value Number to round
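Assuming conventional rounding behavior, for example:
roundNumber(4.5678, 2); // returns 4.57
roundNumberToQuarter(4.3); // returns 4.25, the nearest quarter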
fileExists(file)
Returns true if the file exists and is a file (i.e., not a directory), false otherwise.
Parameter Description
file java.io.File object
listItemsInDirectory(directoryPath, includeDirectories)
Returns a list of all the files/directories contained in the specified folder.
Parameter Description
directoryPath Path to the directory whose files will be listed
includeDirectories If this Boolean flag is set to true, directories inside the
specified directory will be included. If set to false, only files
will be returned.
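A short sketch, assuming the returned list can be indexed like an array of java.io.File objects and using a
hypothetical directory path:
includeDistributedPolicy("JS_UTIL");
var items = listItemsInDirectory("workforce/inbound/", false); // files only, no subdirectories
for (var i = 0; i < items.length; i++) {
    if (fileExists(items[i])) {
        log.info("Found file: " + items[i].getName());
    }
}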
archiveFile(file, archivePath)
Archives the specified file. Archiving means that a timestamp is prepended to the file name, and the file is
then moved to the location specified by archivePath. The original file will no longer exist.
Parameter Description
file java.io.File object to archive
archivePath The directory the file should be archived in
compareFilesByModificationDate(file1, file2)
Compares specified file objects based on their last modification date. Returns 1 if file1 is modified later than
file2, -1 if file1 is modified earlier, or 0 if both files have the same modification date-time. Both files are
java.io.File objects.
getCurrentLoggedInUser()
Returns the currently logged-in user, based on the current runtime context.