Server Edition
(DS324SVR)
Lab Exercises
1. Click Import from the main toolbar and choose DataStage Components.
2. In the Import from file: box, browse to the location of the DS324SvrProjectFiles.dsx file. It will be located in the directory where the data files were copied.
3. Make sure that the Import All radio button is selected and the Perform Usage Analysis box is checked.
4. Click OK.
1. Select the Jobs branch on the left hand pane of the window.
2. Select the ClearWarehouseTables job from the right hand pane.
3. Drag the job on top of the Warehouse Utilities category in the left hand pane.
4. Select all of the remaining jobs on the right hand pane.
5. Drag the selection to the DS324 Warehouse category on the left hand pane.
3. Right click on the Employees table definition on the right hand pane and click
Usage Analysis.
4. Which objects are referenced by this table definition?
___lkpEmployeeBonus___________________________________
___seqRecentEmployees__________________________________
Fig. 4-1
3. Replace <enter parameter for file path here> with #FILE_DIR# in the File name field (an example of how the parameter resolves appears after this list).
4. Click on the Columns tab.
5. Click Load.
6. Expand DS324Files under the Sequential branch and select the
Employees table definition.
7. Click OK.
8. Click View Data. If an error message appears, note the error and check the
setup of the stage. Then try to view data again.
9. Click OK to accept the parameter values.
10. Verify the data is readable.
11. Click Close.
12. Click OK to return to the stage properties.
13. Click OK to accept the changes to the stage and return to the canvas.
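The #FILE_DIR# job parameter in step 3 is substituted into the File name field each time the job runs. As an illustration only (the directory and file name shown are assumptions; use the values on your system), if FILE_DIR is set to C:\DS324Files and the stage reads the Employees file, the field and its run-time value would be:

    File name entered in the stage:    #FILE_DIR#\Employees.txt
    File name resolved at run time:    C:\DS324Files\Employees.txt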
___________________________
1. Click Tools from the main toolbar and select Run Director.
2. Select the hashCreateJOBS job in the right hand pane.
3. Click the Run button on the Standard toolbar.
4. Supply values for any empty parameters.
5. Click Run in the Job Run Options dialog box.
1. Select hashCreateJOBS.
2. Click Tools from the Main Toolbar and select New Monitor.
3. Expand xFilter and note the number of records that were loaded into the
target file.
_____________________
4. Click Close.
_____________________
1. Click the Start button on the Windows taskbar and click Run…
2. In the Open box, type odbcad32.exe.
3. Click OK. The ODBC Data Source Administrator dialog box appears.
4. Click on the System DSN tab.
5. Click Add…
6. Select the Microsoft Access Driver from the list and click Finish.
7. Enter DS324 as the Data Source Name.
8. Click Select…
9. Browse to the DS324Files directory and select ds324.mdb as the database.
10. Click OK to accept the new database.
11. Click OK to accept the new data source.
12. Click OK to close the ODBC Administrator.
1. Click the New icon on the Standard Toolbar to create a new object.
2. Select New Server Job.
3. Click OK.
4. Click on the Favorites category of the Palette.
5. Drag the ODBC stage to the canvas twice, once for a source and once for a target.
6. Drag the Transformer stage to the canvas.
7. Either use the link stage in the General category of the Palette or right click on the source stage and drag a link to the Transformer. Then create a link from the Transformer to the target stage.
8. Label the stages and links as shown in Fig. 6-1:
Fig. 6-1
Fig. 6-2
Derivation: sum(bonus)
SQL type: Double
Length: 15
_________________________________________________
1. View the raw data files on the operating system and note the layout of
the files.
2. If not already logged into the DataStage Designer, login.
3. Open the srcTemplate job.
4. Open the source Sequential stage.
5. Go to the Output tab.
6. Modify the format to match the file layout.
7. Click Save on the Standard toolbar.
8. Close the job.
Fig. 7-1
Job: srcProducts
Raw data directory: \DS324Files\Raw
Job: srcOrders
Raw data directory: \DS324Files\Raw
Raw data file name: ORDERS.980506.D01
Table definition: COBOL FD\ORDER_DETAIL
Source file directory: \DS324Files\Source
Source file name: Orders.src
Table definition: DS324Files\Orders
Fig. 8-1
7. Verify on the Options page that the file type to be created is Type 30 (Dynamic).
8. Verify that write caching is enabled.
9. Verify that a key column has been selected and is valid.
10. Repeat steps 4-9 for the other hashed files.
11. Save, compile, and run the job.
12. Verify that no records were loaded into any of the hashed files, but that the files exist. You can determine that the files were created by right clicking on the appropriate hashed file stage and clicking View <link name> data. If the file exists but has no records, you will see ‘Data source empty.’ This is perfectly acceptable. If the file does not exist, an error message will appear noting that the engine could not locate the file.
Fig. 9-1
1. In the source stage of the trnCustomers job, load the Customers table
definition. Modify the format in the stage if necessary.
2. View the data, providing an appropriate Source file directory if one is not present.
3. Close the Properties page.
1. Use the drop-down lists to select the project name and hashed file name in
each hashed file stage.
2. Load the HA_SKEY_CUSTOMERS table definition into both hashed file
stages.
Fig. 9-2
_____91____ Inserted
_______0__ Updated
Fig. 9-3
8. Select the project and hashed file name from the drop-down lists and load the
table definitions for the new hashed file stages (HA_SKEY_CUSTOMERS,
HA_SKEY_PRODUCTS, HA_SKEY_TIMES).
9. Set up the HA_SKEY_ORDERS and HA_SKEY_ORDERS2 hashed file stages and the ODBC target stage as you did in the trnCustomers and trnProducts jobs.
10. Load the Orders sequential file table definition for the rejects output file stage
and the source input file stage. In addition to loading the table definition in
the source file stage, you must also modify the file name and format of the
file. Removing the link from the stage removed the previously saved
information.
11. Open the xfmFilter transformer.
12. Set up the lookup expressions by mapping the columns from the stream input link that correspond to the key columns in the hashed file lookup links.
13. Set the constraints to reflect a valid lookup in each hashed file stage. Don’t forget to connect the conditions with an ‘And’, not an ‘Or’ (see the constraint sketch after this list).
14. Use the ODBC\Orders table definition for the columns in the
ValidOrdersOut link. All of the columns will be used except the OrderKey
column—you can delete it. The remaining columns will have a corresponding
column in the input links, with the exception of the SalesAmount column.
The derivation for that column comes from the Quantity, UnitPrice, and
Discount columns. It should look similar to the following expression:
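(Assuming the stream input link is named OrdersIn; use the actual link name in your job.)

    OrdersIn.UnitPrice * OrdersIn.Quantity * (1 - OrdersIn.Discount)

For the constraints in step 13, each hashed file reference link exposes a NOTFOUND link variable that is true when the lookup fails. Assuming reference link names of CustomersRef, ProductsRef, and TimesRef (illustrative names only), the constraint on the ValidOrdersOut link would look similar to:

    Not(CustomersRef.NOTFOUND) And Not(ProductsRef.NOTFOUND) And Not(TimesRef.NOTFOUND)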
_________ Inserted
_________ Updated
Fig. 10-1
3. Define the source file stage using Orders.src as the file name. Also define
the format of the file and load the column definitions.
4. Open the xfmOrderDetails transformer and create the following columns in
the ShippedOrdersOut output link:
5. Create three new stage variables to hold the date values. Create derivations for each of the dates that will convert the date strings into the internal day number format (see the sketch after this list).
6. Define a constraint to send records with a valid ShippedDate to the
ShippedOrdersOut link. Reject any records that do not have a ship date.
7. Utilize the stage variables to create the proper derivations for each of the
output columns based on the rules listed in the table above.
8. Map all of the input columns directly to the UnshippedOrdersOut link.
These records can be processed by another job.
9. Name the output tables in the ODBC stage SHIPPED_ORDERS and UNSHIPPED_ORDERS. On each input link in the ODBC stage, have the stage create the table before the records are loaded.
10. Save, compile, and run the job.
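A sketch of the stage variable and constraint expressions for steps 5 and 6, assuming the stream input link is named OrdersIn and that ShippedDate arrives as a string such as 1998-05-06 (both are assumptions; use the actual link name and make the Iconv conversion code match the date format you noted in the raw data):

    Stage variable svShippedDate:     Iconv(OrdersIn.ShippedDate, "D-YMD[4,2,2]")
    Constraint on ShippedOrdersOut:   Len(Trim(OrdersIn.ShippedDate)) > 0

Iconv with a "D" conversion code returns the date as an internal day number; the OrderDate and RequiredDate stage variables follow the same pattern, and the output column derivations can then reformat the values with Oconv as required.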
1. Open the Orders.src file in a text editor or a data browser within a job.
2. Note the format of the three date columns:
OrderDate ________________________
RequiredDate _____________________
ShippedDate _______________________
3. Expand the Transforms branch of the repository. Expand the SDK category
and browse through the Date categories. Locate possible transforms that
can read dates in the form of the three dates above. Pay particular attention
to transforms that can output a date in an ODBC format. Note the transforms
that may be suitable for the dates.
OrderDate ________________________
RequiredDate _____________________
ShippedDate _______________________
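The SDK date transforms are generally built from Iconv/Oconv expressions, so it can help to know the equivalent hand-written form. As an illustration only (InputLink and the input format are assumptions; match the conversion code to the formats you recorded above), converting a date string such as 05/06/1998 to the ODBC format YYYY-MM-DD could look like:

    Oconv(Iconv(InputLink.OrderDate, "D/MDY[2,2,4]"), "D-YMD[4,2,2]")

Iconv parses the incoming string into the internal day number, and Oconv writes it back out in the ODBC-friendly format.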
Fig. 12-1
5. Open the properties page of the IPC stage and go to the Outputs page.
6. On the columns page, load the ODBC\Orders table definition.
7. Open the xfmMap stage and re-map the columns if necessary.
8. Save, compile and run the job.
9. Compare the run times of this job with the run times of the
trnProductsRowBuff job.
3. Link the link partitioner stage to the transformer stages. Link the transformers to the link collector stage. Make a new link from the link collector stage to the xfmMap stage.
4. Rename the new stages and links as shown in Fig. 12-2.
Fig. 12-2
5. Beginning with the link partitioner stage, propagate the columns along each of the links. Finish by propagating the columns from the link collector stage to the xfmMap stage.
6. Open the properties page of the link partitioner stage and go to the Outputs
page.
7. On the columns page, load the ODBC\Orders table definition.
8. Open the xfmMap stage and re-map the columns if necessary.
9. Save, compile and run the job.
10. Compare the run times of this job with the run times of the
trnOrdersRowBuff job.
Fig. 13-1
Fig. 13-2
Fig. 13-3
Fig. 13-4
Fig. 13-5
Fig. 13-7