
INFORMATICA INTERVIEW QUESTIONS

1. Informatica - Why do we use lookup transformations?


QUESTION #1 Lookup transformations can access data from relational tables
that are not sources in the mapping. With a Lookup transformation, we can
accomplish the following tasks:
Get a related value - get the Employee Name from the Employee table based on the
Employee ID. Perform a calculation.
Update slowly changing dimension tables - we can use an unconnected Lookup
transformation to determine whether a record already exists in the target or not.
January 19, 2006 01:12:33 #1
sithusithu Member Since: December 2005 Contribution: 161
RE: Why we use lookup transformations?
=======================================
Nice question. If we don't have a lookup, our data warehouse will have more
unwanted duplicates.
Use a Lookup transformation in your mapping to look up data in a relational table,
view, or synonym.
Import a lookup definition from any relational database to which both the
Informatica Client and Server can connect. You can use multiple Lookup
transformations in a mapping.
Cheers
Sithu
file:///C|/Perl/bin/result.html (1 of 363)4/1/2009 7:50:58 PM
=======================================
Lookup transformations are used to search data from relational tables/flat files
that are not used in the mapping.
Types of Lookup:
1. Connected Lookup
2. UnConnected Lookup
=======================================
The main use of a lookup is to get a related value, either from relational sources or
flat files.
=======================================
The following are reasons for using lookups:
1) We use Lookup transformations that query the largest amounts of data to
improve overall performance. By doing that we can reduce the number of lookups
on the same table.
2) If a mapping contains Lookup transformations, we will enable lookup caching
if this option is not enabled.
We will use a persistent cache to improve performance of the lookup whenever
possible.
We will explore the possibility of using concurrent caches to improve session
performance.
We will use the Lookup SQL Override option to add a WHERE clause to the
default SQL statement if one is not defined.
We will add an ORDER BY clause to the lookup SQL statement if no order by is
defined.
We will use the SQL override to suppress the default ORDER BY statement and
enter an override ORDER BY with fewer columns.
Indexing the lookup table: we can improve performance for the following types of lookups:
For cached lookups, we will index the lookup table using the columns in the
lookup ORDER BY statement.
For uncached lookups, we will index the lookup table using the columns in the
lookup WHERE condition.
3) In some cases we use a lookup instead of a Joiner, as a lookup is faster than a
joiner in some cases, e.g. when the lookup contains only the master data.
4) Lookups also help in performance tuning of the mappings.
=======================================
A Lookup transformation is like a set of reference data for the target table. For
example, suppose you are travelling by auto rickshaw. In the morning the driver
shows you a card saying that from today onwards there is a hike in petrol, so you
have to pay more. The card he shows is a set of reference data for his customers.
The lookup transformation works the same way.
These are of 2 types:
a) Connected Lookup
b) Unconnected Lookup
A connected lookup is connected in a single pipeline from a source to a target,
whereas an unconnected lookup is isolated within the mapping and is called with
the help of an Expression transformation.
=======================================
Lookup transformations are used to:
Get a related value
Update slowly changing dimensions
Calculate expressions
=======================================
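The "get a related value" idea above can be sketched outside Informatica as a plain dictionary lookup. This is an illustrative Python analogy, not PowerCenter syntax; the table contents and column names are made up:

```python
# Illustrative analogy: a Lookup transformation fetches a related value
# (e.g. employee name) for each source row, keyed on EMPLOYEE_ID.

# In-memory "lookup table" standing in for the EMPLOYEE relational table.
employee_lookup = {
    101: "Alice",
    102: "Bob",
}

# Source rows flowing through the mapping.
source_rows = [
    {"EMPLOYEE_ID": 101, "SALES": 500},
    {"EMPLOYEE_ID": 103, "SALES": 250},
]

def apply_lookup(rows, lookup):
    """Attach EMPLOYEE_NAME to each row; unmatched keys get None,
    mirroring a lookup that returns NULL when no row matches."""
    out = []
    for row in rows:
        enriched = dict(row)
        enriched["EMPLOYEE_NAME"] = lookup.get(row["EMPLOYEE_ID"])
        out.append(enriched)
    return out

result = apply_lookup(source_rows, employee_lookup)
```

The caching discussion above amounts to keeping `employee_lookup` in memory for the whole run instead of querying the database once per row.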
2. Informatica - While importing the relational source definition
from the database, what metadata of the source is imported?
QUESTION #2 Source name
Database location
Column names
Datatypes
Key constraints
September 28, 2006 06:30:08 #1
srinvas vadlakonda
RE: While importing the relational source defintion fr...
=======================================
Source name, data types, key constraints, database location.
=======================================
Relational sources are tables, views, and synonyms. Source name, database location,
column name, datatype, key constraints. For synonyms you will have to manually
create the constraints.
=======================================
3. Informatica - How many ways can you update a relational
source definition and what are they?
QUESTION #3 Two ways
1. Edit the definition
2. Reimport the definition
January 30, 2006 04:59:06 #1
gazulas Member Since: January 2006 Contribution: 17
RE: How many ways you can update a relational source d...
=======================================
We can do it in 2 ways:
1) by reimporting the source definition
2) by editing the source definition
=======================================
4. Informatica - Where should you place the flat file to import the
flat file definition into the Designer?
QUESTION #4 Place it in the local folder
December 13, 2005 08:42:59 #1
rishi
RE: Where should U place the flat file to import the f...
=======================================
There is no such restriction on where to place the source file. From a performance
point of view it is better to place the file in the server's local src folder. If you need
the path, please check the server properties available in Workflow Manager.
It doesn't mean we cannot place it in any other folder; if we place it in the server
src folder, it will be selected by default at session creation time.
=======================================
file must be in a directory local to the client machine.
=======================================
Basically the flat file should be stored in the src folder in the Informatica server
folder.
Logically it should pick up the file from any location, but it gives an error of
invalid identifier or is not able to read the first row.
So it is better to keep the file in the src folder, which is already created when
Informatica is installed.
=======================================
We can place the source file anywhere on the network, but it will take more time to
fetch data from the source file. If the source file is present in the server's srcfile
folder, it will fetch data from the source much faster (reportedly up to 25 times
faster).
=======================================
5. Informatica - To provide support for mainframe source data,
which files are used as source definitions?
QUESTION #5 COBOL files
October 07, 2005 11:49:42 #1
Shaks Krishnamurthy
RE: To provide support for Mainframes source data,whic...
=======================================
COBOL Copy-book files
=======================================
Mainframe files are used as VSAM files in Informatica by using the Normalizer
transformation.
=======================================
6. Informatica - Which transformation do you need when using
COBOL sources as source definitions?
QUESTION #6 The Normalizer transformation, which is used to normalize the data,
since COBOL sources often consist of denormalized data.
Submitted by: sithusithu
Normalizer transformation
Cheers,
Sithu
Above answer was rated as good by the following members:
ramonasiraj
=======================================
Normalizer transformation
Cheers
Sithu
=======================================
The Normalizer transformation, which is used to normalize the data
=======================================
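The denormalization the Normalizer undoes can be sketched as follows. This is an illustrative Python analogy of a COBOL record with a repeating group (OCCURS), not PowerCenter syntax; the field names are hypothetical:

```python
# A COBOL-style record with a repeating group is denormalized: one
# record holds four quarterly sales figures. The Normalizer emits one
# normalized output row per occurrence.

denormalized = {"ACCOUNT": "A1", "QTR_SALES": [100, 200, 300, 400]}

def normalize(record):
    """Emit one row per repeated value, like a Normalizer transformation
    turning OCCURS entries into separate rows with a generated index."""
    return [
        {"ACCOUNT": record["ACCOUNT"], "QUARTER": i + 1, "SALES": s}
        for i, s in enumerate(record["QTR_SALES"])
    ]

rows = normalize(denormalized)  # 4 rows out of 1 denormalized record
```

The generated `QUARTER` index plays the role of the Normalizer's generated column ID for the repeating group.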
7. Informatica - How can you create or import a flat file definition
into the Warehouse Designer?
QUESTION #7 You cannot create or import a flat file definition into the Warehouse
Designer directly. Instead you must analyze the file in the Source Analyzer, then
drag it into the Warehouse Designer. When you drag the flat file source definition
into the Warehouse Designer workspace, the Warehouse Designer creates a
relational target definition, not a file definition. If you want to load to a file,
configure the session to write to a flat file. When the Informatica server runs the
session, it creates and loads the flat file.
August 22, 2005 03:23:12 #1
Praveen
RE: How can U create or import flat file definition in to the warehouse
designer?
=======================================
You can create a flat file definition in the Warehouse Designer. In the Warehouse
Designer you can create a new target: select the type as flat file. Save it, and you
can enter various columns for that created target by editing its properties. Once the
target is created, save it. You can import it from the Mapping Designer.
=======================================
Yes, you can import a flat file directly into the Warehouse Designer. This way it
will import the field definitions directly.
=======================================
1) Manually create the flat file target definition in the Warehouse Designer.
2) Create a target definition from a source definition. This is done by dropping a
source definition in the Warehouse Designer.
3) Import a flat file definition using the flat file wizard (the file must be local to the
client machine).
=======================================
While creating flat files manually, we drag and drop the structure from the Source
Qualifier if the structure we need is the same as the source. For this we need to
check in the source and then drag and drop it into the flat file; if not, all the
columns in the source will be changed to primary keys.
=======================================
8. Informatica - What is a mapplet?
QUESTION #8 A mapplet is a set of transformations that you build in the Mapplet
Designer and can use in multiple mappings.
December 08, 2005 23:38:47 #1
phani
RE: What is the maplet?
=======================================
For example: suppose we have several fact tables that require a series of dimension
keys. Then we can create a mapplet which contains a series of Lookup
transformations to find each dimension key, and use it in each fact table mapping
instead of creating the same lookup logic in each mapping.
=======================================
Part(sub set) of the Mapping is known as Mapplet
Cheers
Sithu
=======================================
A set of transformations whose logic can be reused.
=======================================
A mapplet should have a mapplet Input transformation which receives input values,
and an Output transformation which passes the final modified data back to the
mapping. When the mapplet is displayed within the mapping, only the input &
output ports are displayed, so that the internal logic is hidden from the end user's
point of view.
=======================================
A reusable mapping is known as a mapplet, and reusable transformations can be
used within a mapplet.
=======================================
A mapplet is reusable business logic which can be used in mappings.
=======================================
A mapplet is a reusable object which contains one or more transformations, used to
populate the data from source to target based on the business logic. We can use the
same logic in different mappings without creating the mapping again.
=======================================
Mapplets are created in the Mapplet Designer.
=======================================
A mapplet is a reusable object that represents a set of transformations. Mapplets can
be designed using the Mapplet Designer in Informatica PowerCenter.
=======================================
Basically a mapplet is a subset of a mapping, in which we can keep the logic for
each dimension key created individually and reuse it. If we want a series of
dimension keys in the final fact table, we use the mapplet in the Mapping Designer.
=======================================
9. Informatica - What is a transformation?
QUESTION #9 It is a repository object that generates, modifies, or passes data.
November 23, 2005 16:06:23 #1
sir
RE: what is a transforamation?
=======================================
A transformation is a repository object that passes data to the next stage (i.e., to the
next transformation or target) with or without modifying the data.
=======================================
It is a process of converting given input to desired output.
=======================================
A set of operations.
Cheers
Sithu
=======================================
A transformation is a repository object for converting a given input to the desired
output. It can generate, modify, and pass data.
=======================================
A transformation is a repository object that generates, modifies, or passes data.
The Designer provides a set of transformations that perform specific functions.
For example, an Aggregator transformation performs calculations on groups of data.
=======================================
10. Informatica - What are the Designer tools for creating
transformations?
QUESTION #10 Mapping Designer
Transformation Developer
Mapplet Designer
February 21, 2007 05:29:40 #1
MANOJ KUMAR PANIGRAHI
RE: What r the designer tools for creating tranformati...
=======================================
There are 2 types of tools used for creating transformations:
Mapping Designer
Mapplet Designer
=======================================
Mapping Designer
Mapplet Designer
Transformation Developer - for reusable transformations
=======================================
11. Informatica - What are the active and passive
transformations?
QUESTION #11 An active transformation can change the number of rows that
pass through it. A passive transformation does not change the number of rows
that pass through it.
January 24, 2006 03:32:14 #1
sithusithu Member Since: December 2005 Contribution: 161
RE: What r the active and passive transforamtions?
=======================================
Transformations can be active or passive. An active transformation can change the
number of rows that pass through it, such as a Filter transformation that removes
rows that do not meet the filter condition.
A passive transformation does not change the number of rows that pass through it,
such as an Expression transformation that performs a calculation on data and
passes all rows through the transformation.
Cheers
Sithu
=======================================
Active transformation: a transformation which changes the number of rows when
data is flowing from source to target.
Passive transformation: a transformation which does not change the number of
rows when the data is flowing from source to target.
=======================================
12. Informatica - What are the connected and unconnected
transformations?
QUESTION #12 An unconnected transformation is not connected to other
transformations in the mapping. A connected transformation is connected to
other transformations in the mapping.
August 22, 2005 03:26:32 #1
Praveen
RE: What r the connected or unconnected transforamations?
=======================================
An unconnected transformation can't be connected to another transformation, but it
can be called inside another transformation.
=======================================
Here is the deal:
a connected transformation is a part of your data flow in the pipeline, while an
unconnected transformation is not. It is much like calling a program by name
versus by reference.
Use unconnected transformations when you want to call the same transformation
many times in a single mapping.
=======================================
In addition to the first answer: unconnected transformations are not directly
connected and can be called from many other transformations. If you are using a
transformation several times, use an unconnected one; you get better performance.
=======================================
Connected transformation:
A transformation which participates in the mapping data flow. A connected
transformation can receive multiple inputs and provide multiple outputs.
Unconnected transformation:
An unconnected transformation does not participate in the mapping data flow.
It can receive multiple inputs but provides a single output.
Thanks
Rekha
=======================================
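The connected/unconnected distinction above can be sketched in plain Python. This is an analogy, not PowerCenter syntax; in real mappings an unconnected lookup is invoked from an expression with `:LKP.lookup_name(...)`, and the names below are made up:

```python
# Analogy: a connected lookup sits in the pipeline and can feed several
# output ports; an unconnected lookup is invoked on demand, like a
# function call, and returns exactly one value.

EMPLOYEES = {7: ("Grace", "Sales")}

def unconnected_lookup(emp_id):
    """Called only when needed (e.g. from an expression);
    returns a single value."""
    name, _dept = EMPLOYEES.get(emp_id, (None, None))
    return name

def connected_lookup(row):
    """Part of the data flow; can return multiple output ports
    (here both NAME and DEPT) attached to the passing row."""
    name, dept = EMPLOYEES.get(row["EMP_ID"], (None, None))
    return {**row, "NAME": name, "DEPT": dept}

enriched = connected_lookup({"EMP_ID": 7})  # adds NAME and DEPT
just_name = unconnected_lookup(7)           # single return value
```

Calling `unconnected_lookup` from several places mirrors reusing one unconnected lookup many times in a single mapping.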
13. Informatica - How many ways can you create ports?
QUESTION #13 Two ways
1. Drag the port from another transformation
2. Click the Add button on the Ports tab.
September 28, 2006 06:31:21 #1
srinivas.vadlakonda
RE: How many ways u create ports?
=======================================
Two ways
1. Drag the port from another transformation
2. Click the Add button on the Ports tab.
=======================================
We can also copy and paste the ports in the Ports tab.
=======================================
14. Informatica - What are reusable transformations?
QUESTION #14 Reusable transformations can be used in multiple mappings.
When you need to incorporate such a transformation into a mapping, you add an
instance of it to the mapping. Later, if you change the definition of the
transformation, all instances of it inherit the changes. Since the instance of a
reusable transformation is a pointer to that transformation, you can change the
transformation in the Transformation Developer and its instances automatically
reflect these changes. This feature can save you a great deal of work.
Submitted by: sithusithu
A transformation that can be reused is known as a reusable transformation.
You can design one using 2 methods:
1. using the Transformation Developer
2. create a normal one and promote it to reusable
Cheers
Sithu
Above answer was rated as good by the following members:
ramonasiraj
=======================================
A transformation that can be reused is known as a reusable transformation.
You can design one using 2 methods:
1. using the Transformation Developer
2. create a normal one and promote it to reusable
Cheers
Sithu
=======================================
Hi to all friends out there,
the transformation that can be reused is called a reusable transformation.
As the property suggests, it is meant to be reused. We can do this in two different
ways:
1) by creating a normal transformation and making it reusable by ticking the
checkbox in the properties of the edit transformation dialog.
2) by using the Transformation Developer: whatever transformation is developed
there is reusable, and it can be used in the Mapping Designer, where we can further
change its properties as per our requirement.
=======================================
1. A reusable transformation can be used in multiple mappings.
2. The Designer stores each reusable transformation as metadata, separate from
any mappings that use the transformation.
3. Every reusable transformation falls within a category of transformations
available in the Designer.
4. An External Procedure transformation can only be created as a reusable
transformation.
=======================================
15. Informatica - What are the methods for creating reusable
transformations?
QUESTION #15 Two methods
1. Design it in the Transformation Developer.
2. Promote a standard transformation from the Mapping Designer. After you add a
transformation to the mapping, you can promote it to the status of reusable
transformation.
Once you promote a standard transformation to reusable status, you can demote it
to a standard transformation at any time.
If you change the properties of a reusable transformation in a mapping, you can
revert to the original reusable transformation properties by clicking the Revert
button.
September 12, 2005 12:22:21 #1
Praveen Vasudev
RE: methods for creating reusable transforamtions?
=======================================
PLEASE THINK TWICE BEFORE YOU POST AN ANSWER.
Answer: Two methods
1. Design it in the Transformation Developer; by default it is a reusable
transformation.
2. Promote a standard transformation from the Mapping Designer. After you add a
transformation to the mapping, you can promote it to the status of reusable
transformation.
Once you promote a standard transformation to reusable status, you CANNOT
demote it to a standard transformation.
If you change the properties of a reusable transformation in a mapping, you can
revert to the original reusable transformation properties by clicking the Revert
button.
=======================================
You can design using 2 methods
1. using transformation developer
2. create normal one and promote it to reusable
Cheers
Sithu
=======================================
16. Informatica - What are the unsupported repository objects for a
mapplet?
QUESTION #16 COBOL source definitions
Joiner transformations
Normalizer transformations
Non-reusable Sequence Generator transformations
Pre- or post-session stored procedures
Target definitions
PowerMart 3.5-style LOOKUP functions
XML source definitions
IBM MQ source definitions
January 19, 2006 04:23:12 #1
sithusithu Member Since: December 2005 Contribution: 161
RE: What r the unsupported repository objects for a ma...
=======================================
- Source definitions. Definitions of database objects (tables, views, synonyms) or
files that provide source data.
- Target definitions. Definitions of database objects or files that contain the target
data.
- Multi-dimensional metadata. Target definitions that are configured as cubes and
dimensions.
- Mappings. A set of source and target definitions along with transformations
containing business logic that you build into the transformation. These are the
instructions that the Informatica Server uses to transform and move data.
- Reusable transformations. Transformations that you can use in multiple
mappings.
- Mapplets. A set of transformations that you can use in multiple mappings.
- Sessions and workflows. Sessions and workflows store information about how
and when the Informatica Server moves data. A workflow is a set of instructions
that describes how and when to run tasks related to extracting, transforming, and
loading data. A session is a type of task that you can put in a workflow. Each
session corresponds to a single mapping.
Cheers
Sithu
=======================================
Hi,
The following answer is from the Informatica Help documentation.
You cannot include the following objects in a mapplet:
- Normalizer transformations
- COBOL sources
- XML Source Qualifier transformations
- XML sources
- Target definitions
- Pre- and post-session stored procedures
- Other mapplets
Shivaji Thaneru
=======================================
Normalizer, XML Source Qualifier, and COBOL sources cannot be used.
=======================================
- Normalizer transformations
- COBOL sources
- XML Source Qualifier transformations
- XML sources
- Target definitions
- Pre- and post-session stored procedures
- Other mapplets
- PowerMart 3.5-style LOOKUP functions
- Non-reusable Sequence Generator transformations
=======================================
17. Informatica - What are mapping parameters and mapping
variables?
QUESTION #17 A mapping parameter represents a constant value that you can
define before running a session. A mapping parameter retains the same value
throughout the entire session.
When you use a mapping parameter, you declare and use the parameter in a
mapping or mapplet, then define the value of the parameter in a parameter file for
the session.
Unlike a mapping parameter, a mapping variable represents a value that can
change throughout the session. The Informatica server saves the value of a
mapping variable to the repository at the end of the session run and uses that value
the next time you run the session.
September 12, 2005 12:30:13 #1
Praveen Vasudev
RE: mapping variables
=======================================
Please refer to the documentation for more understanding.
Mapping variables have two identities: start value and current value.
Start value = current value (when the session starts executing the underlying
mapping).
Start value <> current value (while the session is in progress and the variable value
changes on one or more occasions).
The current value at the end of the session is nothing but the start value for the
subsequent run of the same session.
=======================================
You can use mapping parameters and variables in the SQL query, user-defined
join, and source filter of a Source Qualifier transformation. You can also use the
system variable $$$SessStartTime.
The Informatica Server first generates an SQL query and scans the query to replace
each mapping parameter or variable with its start value. Then it executes the query
on the source database.
Cheers
Sithu
=======================================
Mapping parameter represents a constant value defined before mapping run.
Mapping reusability can be achieved by using mapping parameters.
Mapping variable represents a value that can be changed during the mapping run.
Mapping variable can be used in incremental loading process.
=======================================
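The mapping-variable behaviour described above (the server persists the value at the end of a run and uses it as the start value of the next run, which is what makes incremental loading work) can be sketched as follows. This is an illustrative Python analogy; the file-based "repository" and the `$$LAST_ID` variable name are stand-ins, not Informatica internals:

```python
# Sketch: a mapping variable's current value is saved at session end and
# becomes the start value of the next session run (incremental loading).

import json
import os
import tempfile

# Stand-in for the repository where variable values are persisted.
REPO = os.path.join(tempfile.gettempdir(), "mapping_vars.json")

def run_session(new_rows):
    """Load the saved $$LAST_ID, process only rows beyond it,
    then persist the new maximum (a SETMAXVARIABLE-style update)."""
    try:
        with open(REPO) as f:
            last_id = json.load(f)["$$LAST_ID"]
    except FileNotFoundError:
        last_id = 0                      # initial value of the variable
    loaded = [r for r in new_rows if r > last_id]
    if loaded:
        last_id = max(loaded)
    with open(REPO, "w") as f:
        json.dump({"$$LAST_ID": last_id}, f)
    return loaded

if os.path.exists(REPO):                 # start from a clean repository
    os.remove(REPO)
first = run_session([1, 2, 3])           # all rows are new
second = run_session([2, 3, 4])          # only row 4 is beyond the saved value
```

A mapping parameter, by contrast, would behave like a constant read once from a parameter file and never written back.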
18. Informatica - Can you use the mapping parameters
or variables created in one mapping in another mapping?
QUESTION #18 No.
We can use mapping parameters or variables in any transformation of the same
mapping or mapplet in which we have created the mapping parameters or
variables.
Submitted by: Ray
NO. You might want to use a workflow parameter/variable if you want it to be
visible with other
mappings/sessions
Above answer was rated as good by the following members:
ramonasiraj
=======================================
NO. You might want to use a workflow parameter/variable if you want it to be
visible with other
mappings/sessions
=======================================
Hi,
The following sentences are extracted from the Informatica help as-is; they support
the two answers above.
After you create a parameter, you can use it in the Expression Editor of any
transformation in a mapping or mapplet. You can also use it in Source Qualifier
transformations and reusable transformations.
Shivaji Thaneru
=======================================
I differ on this; we can use global variables in sessions as well as in mappings. This
provision is provided in Informatica 7.1.x versions; I have used it. Please check this
in the properties.
Regards
-Vaibhav
=======================================
Hi,
thanks Shivaji, but the statement does not completely answer the question.
A mapping parameter can be used in a reusable transformation, but does it mean
you can use the mapping parameter wherever the instances of the reusable
transformation are used?
=======================================
The scope of a mapping variable is the mapping in which it is defined. A variable
Var1 defined in
mapping Map1 can only be used in Map1. You cannot use it in another mapping
say Map2.
=======================================
19. Informatica - Can you use the mapping parameters or variables
created in one mapping in any other reusable transformation?
QUESTION #19 Yes, because a reusable transformation is not contained within any
mapplet or mapping.
February 02, 2007 17:06:04 #1
mahesh4346 Member Since: January 2007 Contribution: 6
RE: Can u use the maping parameters or variables creat...
=======================================
But when one can't use the mapping parameters and variables of one mapping in
another mapping, how can they be used in a reusable transformation, when
reusable transformations themselves can be used among multiple mappings? So I
think one can't use mapping parameters and variables in reusable transformations.
Please correct me if I am wrong.
=======================================
Hi, you can use the mapping parameters or variables in a reusable transformation.
When you use the transformation in a mapping, during execution of the session it
validates whether the mapping parameter used in the transformation is defined for
this mapping or not. If not, the session fails.
=======================================
20. Informatica - How can you improve session performance in
an Aggregator transformation?
QUESTION #20 Use sorted input.
September 12, 2005 12:34:09 #1
Praveen Vasudev
RE:
=======================================
Use sorted input:
1. Use a Sorter transformation before the Aggregator.
2. Do not forget to check the Sorted Input option on the Aggregator, which tells
the Aggregator that the input is sorted on the same keys as the group by.
The key order is also very important.
=======================================
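The benefit of sorted input can be sketched outside Informatica: when rows arrive already sorted on the group-by key (as a Sorter transformation would deliver them), an aggregator only ever has to buffer the rows of the current group instead of caching every group at once. A minimal hypothetical Python sketch, not Informatica code:

```python
from itertools import groupby

def sorted_sum(rows, key_col, val_col):
    # Rows are assumed pre-sorted on the group-by key, so only the
    # rows of the current group are ever buffered -- no large
    # aggregate cache is needed.
    result = {}
    for key, group in groupby(rows, key=lambda r: r[key_col]):
        result[key] = sum(r[val_col] for r in group)
    return result

rows = [("a", 1), ("a", 3), ("b", 2)]   # already sorted on column 0
totals = sorted_sum(rows, 0, 1)          # {"a": 4, "b": 2}
```

With unsorted input the same aggregation would have to keep every group's running value cached until the last row is read.
=======================================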
hi
You can use the following guidelines to optimize the performance of an
Aggregator transformation.
Use sorted input to decrease the use of aggregate caches.
Sorted input reduces the amount of data cached during the session and improves
session performance.
Use this option with the Sorter transformation to pass sorted data to the Aggregator
transformation.
Limit connected input/output or output ports.
Limit the number of connected input/output or output ports to reduce the amount of
data the Aggregator
transformation stores in the data cache.
Filter before aggregating.
If you use a Filter transformation in the mapping place the transformation before
the Aggregator
transformation to reduce unnecessary aggregation.
Shivaji T
=======================================
Following are the 3 ways with which we can improve the session performance:-
a) Use sorted input to decrease the use of aggregate caches.
b) Limit connected input/output or output ports
c) Filter before aggregating (if you are using any filter condition)
=======================================
Performance can also be improved by using incremental aggregation, because it
passes only the new data to the mapping and uses historical data to perform the
aggregation.
=======================================
To improve session performance with an aggregator transformation, enable the
session option Incremental Aggregation.
=======================================
-Use sorted input to decrease the use of aggregate caches.
-Limit connected input/output or output ports.
Limit the number of connected input/output or output ports to reduce the amount of
data the Aggregator
transformation stores in the data cache.
-Filter the data before aggregating it.
=======================================
21.Informatica - What is the aggregate cache in the
aggregator
transformation?
QUESTION #21 The aggregator stores data in the aggregate
cache until it
completes the aggregate calculations. When you run a session that uses
an aggregator
transformation, the Informatica server creates index and data
caches in memory
to process the transformation. If the Informatica server requires
more space, it
stores overflow values in cache files.
January 19, 2006 05:00:00 #1
sithusithu Member Since: December 2005 Contribution: 161
RE: What is aggregate cache in aggregator transforamti...
=======================================
When you run a workflow that uses an Aggregator transformation the Informatica
Server creates index
and data caches in memory to process the transformation. If the Informatica Server
requires more space
it stores overflow values in cache files.
Cheers
Sithu
=======================================
Aggregate cache contains data values while aggregate calculations are being
performed. Aggregate
cache is made up of index cache and data cache. Index cache contains group values
and data cache
consists of row values.
=======================================
When the server runs a session with an aggregator transformation, it stores data
in memory until it completes the aggregation.
When you partition a source, the server creates one memory cache and one disk
cache for each partition. It routes the data from one partition to another based
on the group key values of the transformation.
=======================================
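The index/data cache split described above can be illustrated with a small sketch (hypothetical Python, not how the server is actually implemented): group (condition) values go into an index structure, while the accumulating row values live in a separate data structure, and both grow as unsorted rows stream in.

```python
def cached_sum(rows, key_col, val_col):
    index_cache = {}   # group (condition) values -> slot in the data cache
    data_cache = []    # accumulated output values, one slot per group
    for row in rows:
        key = row[key_col]
        if key not in index_cache:        # first row of a new group
            index_cache[key] = len(data_cache)
            data_cache.append(0)
        data_cache[index_cache[key]] += row[val_col]
    return {key: data_cache[slot] for key, slot in index_cache.items()}

cached_sum([("b", 2), ("a", 1), ("a", 3)], 0, 1)   # unsorted input is fine
```

When both caches outgrow memory, the real server spills overflow values to cache files on disk.
=======================================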
22.Informatica - What are the differences between the joiner
transformation and the source qualifier transformation?
QUESTION #22 You can join heterogeneous data sources with a Joiner
transformation,
which we cannot achieve with a Source Qualifier transformation.
You need matching keys to join two relational sources in a Source
Qualifier
transformation, whereas you do not need matching keys to join
two sources with a Joiner.
Two relational sources must come from the same data source in a
Source Qualifier; with a
Joiner you can join relational sources coming from different data
sources.
January 27, 2006 01:45:56 #1
sithusithu Member Since: December 2005 Contribution: 161
RE: What r the diffrence between joiner transformation...
=======================================
Source Qualifier: homogeneous sources
Joiner: heterogeneous sources
Cheers
Sithu
=======================================
Hi
The Source Qualifier transformation provides an alternate way to filter rows.
Rather than filtering rows
from within a mapping the Source Qualifier transformation filters rows when read
from a source. The
main difference is that the source qualifier limits the row set extracted from a
source while the Filter
transformation limits the row set sent to a target. Since a source qualifier reduces
the number of rows
used throughout the mapping it provides better performance.
However the Source Qualifier transformation only lets you filter rows from
relational sources while the
Filter transformation filters rows from any type of source. Also note that since it
runs in the database
you must make sure that the filter condition in the Source Qualifier transformation
only uses standard
SQL.
Shivaji Thaneru
=======================================
Hi, as per my knowledge you need matching keys to join two relational sources
both in the Source Qualifier and in the Joiner transformation. But the
difference is that in the Source Qualifier both keys must have a primary
key - foreign key relationship, whereas in the Joiner transformation that is not needed.
=======================================
The Source Qualifier is used for reading data from the database, whereas the
Joiner transformation is used for joining two data tables.
The Source Qualifier can also be used to join two tables, but on the condition
that both tables are from the same relational database and have a primary key
with the same data structure.
Using a Joiner we can join data from two heterogeneous sources, such as two flat
files, or one relational source and one flat file.
=======================================
23.Informatica - In which conditions can we not use the
joiner
transformation (limitations of the joiner
transformation)?
QUESTION #23 Both pipelines begin with the same original data
source.
Both input pipelines originate from the same Source Qualifier
transformation.
Both input pipelines originate from the same Normalizer
transformation.
Both input pipelines originate from the same Joiner
transformation.
Either input pipeline contains an Update Strategy
transformation.
Either input pipeline contains a connected or unconnected
Sequence Generator
transformation.
January 25, 2006 12:18:35 #1
Surendra
RE: In which condtions we can not use joiner transform...
=======================================
This is no longer valid in version 7.2
Now we can use a joiner even if the data is coming from the same source.
SK
=======================================
You cannot use a Joiner transformation in the following situations(according to
infa 7.1):
Either input pipeline contains an Update Strategy transformation.
You connect a Sequence Generator transformation directly before the Joiner
transformation.
=======================================
I don't understand the second one which says we have a sequence generator?
Please can you explain on
that one?
=======================================
Can you please let me know the correct and clear answer for Limitations of joiner
transformation?
swapna
=======================================
You cannot use a Joiner transformation in the following situation (according to
Informatica 7.1): when you connect a Sequence Generator transformation directly
before the Joiner transformation.
For more information check the Informatica 7.1 manual.
=======================================
What about join conditions? Can we have a != condition in a joiner?
=======================================
No, in a Joiner transformation you can only use equal to (=) as a join condition.
Any other sort of comparison operator is not allowed:
>, <, !=, <> etc. are not allowed as a join condition.
Utsav
=======================================
Yes, the Joiner only supports the equality condition.
The Joiner transformation does not match null values. For example if both
EMP_ID1 and EMP_ID2
from the example above contain a row with a null value the PowerCenter Server
does not consider them
a match and does not join the two rows. To join rows with null values you can
replace null input with
default values and then join on the default values.
=======================================
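Since the Joiner never matches NULL to NULL, the workaround mentioned above is to substitute the same default value on both inputs before joining. A hypothetical sketch (the sentinel value -1 is an assumption, not anything prescribed by Informatica):

```python
SENTINEL = -1  # hypothetical default agreed for both master and detail

def null_safe(value):
    # A NULL key never matches another NULL in the Joiner, so both
    # pipelines replace NULL with a shared default before the join.
    return SENTINEL if value is None else value

keys_match = null_safe(None) == null_safe(None)   # now they can join
```

The chosen sentinel must not collide with a legitimate key value in either source.
=======================================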
We cannot use a Joiner transformation in the following two conditions:
1. When our data comes through an Update Strategy transformation; in other
words, after an Update Strategy we cannot add a Joiner transformation.
2. We cannot connect a Sequence Generator transformation directly before the
Joiner transformation.
=======================================
24.Informatica - What are the settings that you use to
configure the
joiner transformation?
QUESTION #24 Master and detail source
Type of join
Condition of the join
Submitted by: sithusithu
l Master and detail source
l Type of join
l Condition of the join
the Joiner transformation supports the following join types, which you set in the
Properties tab:
l Normal (Default)
l Master Outer
l Detail Outer
l Full Outer
Cheers,
Sithu
=======================================
There are a number of properties that you use to configure a Joiner
transformation:
1) CASE SENSITIVE STRING COMPARISON: to join strings on a case-sensitive basis.
2) WORKING DIRECTORY: where to create the caches.
3) JOIN CONDITION: e.g. join on a.s = v.n.
4) JOIN TYPE: normal, master outer, detail outer or full outer.
5) NULL ORDERING IN MASTER
6) NULL ORDERING IN DETAIL
7) TRACING LEVEL: level of detail about the operations.
8) INDEX CACHE: stores the group values of the input, if any.
9) DATA CACHE: stores the value of each row of data.
10) SORTED INPUT: a check box to select if the input to the Joiner is sorted.
11) TRANSFORMATION SCOPE: the data taken into consideration (Transaction or
All Input). Use Transaction if a row depends only on rows processed in the same
transaction, for example a Joiner using the same source in the pipeline, so the
data is within scope. Use All Input if a row depends on other incoming data,
for example a lookup, or when a dynamic cache is enabled and the transformation
has to process the other incoming data.
=======================================
25.Informatica - What are the join types in the joiner
transformation?
QUESTION #25 Normal (default)
Master outer
Detail outer
Full outer
September 12, 2005 12:38:39 #1
Praveen Vasudev
RE:
=======================================
Normal (Default) -- only matching rows from both master and detail
Master outer -- all detail rows and only matching rows from master
Detail outer -- all master rows and only matching rows from detail
Full outer -- all rows from both master and detail ( matching or non matching)
=======================================
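The four join types above can be sketched in plain Python (a hypothetical illustration of the semantics, not Informatica code): the master rows are indexed first, then the detail rows are streamed past the index, and unmatched rows are kept or dropped depending on the join type.

```python
def joiner(master, detail, join_type="normal"):
    # Index the master rows on the join key (first element of each row).
    index = {}
    for m in master:
        index.setdefault(m[0], []).append(m)
    out, matched = [], set()
    for d in detail:
        if d[0] in index:                       # matching rows: all types
            matched.add(d[0])
            out += [(m, d) for m in index[d[0]]]
        elif join_type in ("master outer", "full outer"):
            out.append((None, d))               # keep unmatched detail rows
    if join_type in ("detail outer", "full outer"):
        for key, ms in index.items():
            if key not in matched:
                out += [(m, None) for m in ms]  # keep unmatched master rows
    return out

master = [(1, "M1"), (2, "M2")]
detail = [(2, "D2"), (3, "D3")]
joiner(master, detail, "full outer")
# -> [((2, 'M2'), (2, 'D2')), (None, (3, 'D3')), ((1, 'M1'), None)]
```

With "normal" only the matching pair survives; "master outer" additionally keeps the unmatched detail row, and "detail outer" the unmatched master row.
=======================================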
Follow these steps:
1. In the Mapping Designer choose Transformation-Create. Select the Joiner
transformation. Enter a name, click OK.
The naming convention for Joiner transformations is JNR_TransformationName.
Enter a description for the transformation. This description appears in the
Repository Manager, making it easier for you or others to understand or
remember what the transformation does.
The Designer creates the Joiner transformation. Keep in mind that you cannot
use a Sequence Generator or Update Strategy transformation as a source to a
Joiner transformation.
2. Drag all the desired input/output ports from the first source into the
Joiner transformation.
The Designer creates input/output ports for the source fields in the Joiner as
detail fields by default. You can edit this property later.
3. Select and drag all the desired input/output ports from the second source
into the Joiner transformation.
The Designer configures the second set of source fields as master fields by
default.
4. Double-click the title bar of the Joiner transformation to open the Edit
Transformations dialog box.
5. Select the Ports tab.
6. Click any box in the M column to switch the master/detail relationship for
the sources. Change the master/detail relationship if necessary by selecting
the master source in the M column.
Tip: Designating the source with fewer unique records as master increases
performance during a join.
7. Add default values for specific ports as necessary.
Certain ports are likely to contain NULL values, since the fields in one of the
sources may be empty. You can specify a default value if the target database
does not handle NULLs.
8. Select the Condition tab and set the condition.
9. Click the Add button to add a condition. You can add multiple conditions.
The master and detail ports must have matching datatypes. The Joiner
transformation only supports equivalent (=) joins.
10. Select the Properties tab and enter any additional settings for the
transformation.
11. Click OK.
12. Choose Repository-Save to save changes to the mapping.
Cheers
Sithu
=======================================
26.Informatica - What are the joiner caches?
QUESTION #26 When a Joiner transformation occurs in a
session, the
Informatica Server reads all the records from the master source
and builds index
and data caches based on the master rows.
After building the caches, the Joiner transformation reads records
from the detail
source and performs the joins.
Submitted by: bneha15
For version 7.x and above :
When the PowerCenter Server processes a Joiner transformation, it reads rows
from both sources
concurrently and builds the index and data cache based on the master rows. The
PowerCenter Server
then performs the join based on the detail source data and the cache data. To
improve performance for
an unsorted Joiner transformation, use the source with fewer rows as the master
source. To improve
performance for a sorted Joiner transformation, use the source with fewer duplicate
key values as the
master.
=======================================
From a performance perspective, always make the smaller of the two joining
tables the master.
=======================================
Specifies the directory used to cache master records and the index to these records.
By default the
cached files are created in a directory specified by the server variable
$PMCacheDir. If you override the
directory make sure the directory exists and contains enough disk space for the
cache files. The
directory can be a mapped or mounted drive.
Cheers
Sithu
=======================================
27.Informatica - What is the lookup transformation?
QUESTION #27 Use a Lookup transformation in your mapping to
look up data in a
relational table, view or synonym.
The Informatica server queries the lookup table based on the lookup
ports in the
transformation. It compares the lookup transformation port
values to the lookup
table column values based on the lookup condition.
December 09, 2005 00:06:38 #1
phani
RE: what is the look up transformation?
=======================================
Using it we can access data from a relational table which is not a source in the
mapping.
For example, suppose the source contains only Empno but we want Empname also in
the mapping. Then instead of adding another table which contains Empname as a
source, we can look up the table and get the Empname in the target.
=======================================
In DecisionStream a lookup is a simple single-level reference structure with no
parent/child
relationships. Use a lookup when you have a set of reference members that you do
not need to organize
hierarchically. HTH
=======================================
Use a Lookup transformation in your mapping to look up data in a relational table
view or synonym.
Import a lookup definition from any relational database to which both the
Informatica Client and Server
can connect. You can use multiple Lookup transformations in a mapping.
Cheers
Sithu
=======================================
Lookup transformation in a mapping is used to look up data in a flat file or a
relational table view or
synonym. You can import a lookup definition from any flat file or relational
database to which both the
PowerCenter Client and Server can connect. You can use multiple Lookup
transformations in a
mapping.
I hope this would be helpful for you.
Cheers
Sridhar
=======================================
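The classic use described above, fetching a related value such as the employee name for each source row keyed on the employee ID, can be sketched as follows (hypothetical Python; the table contents are made up for illustration):

```python
# Hypothetical cached lookup table: Empno -> Empname.
emp_lookup = {101: "Alice", 102: "Bob"}

def enrich(source_rows):
    # For each source row (empno, sales), fetch the related Empname
    # instead of adding the employee table as a second source.
    return [(empno, sales, emp_lookup.get(empno))
            for empno, sales in source_rows]

enrich([(101, 500), (103, 250)])
# -> [(101, 500, 'Alice'), (103, 250, None)]
```

A row whose key is absent from the lookup gets no related value, mirroring a lookup miss.
=======================================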
28.Informatica - Why use the lookup transformation?
QUESTION #28 To perform the following tasks.
Get a related value. For example, if your source table includes
employee ID, but
you want to include the employee name in your target table to
make your
summary data easier to read.
Perform a calculation. Many normalized tables include values
used in a
calculation, such as gross sales per invoice or sales tax, but not
the calculated
value (such as net sales).
Update slowly changing dimension tables. You can use a Lookup
transformation
to determine whether records already exist in the target.
August 21, 2006 22:26:47 #1
samba
RE: Why use the lookup transformation ?
=======================================
A lookup is nothing but a lookup on a table, view, synonym or flat file.
By using a lookup we can get a related value with a join condition and perform
calculations.
There are two types of lookups:
1) Connected
2) Unconnected
A connected lookup is within the pipeline only, but an unconnected lookup is not
connected to the pipeline.
An unconnected lookup returns a single column value only.
Let me know if you want any additional information.
cheers
samba
=======================================
Hey, with regard to lookups: is there a dynamic lookup and a static lookup? If
so, how do you set them? And is there a combination of dynamic connected lookups
and static unconnected lookups?
=======================================
Lookup has two types: connected and unconnected. Usually we use a lookup to get
a related value from a table. It has an input port, output port, lookup port and
return port, where the lookup port looks up the corresponding column for the
value and the return port returns the value. We usually use it when there are no
columns in common.
=======================================
For maintaining the slowly changing dimensions.
=======================================
Hi,
the answer to your question is yes.
There are 2 types of lookups: dynamic and normal (which you have termed static).
To configure, just double-click on the lookup transformation and go to the
Properties tab.
There will be an option - dynamic lookup cache. Select that.
If you do not select this option then the lookup is merely a normal lookup.
Please let me know if there are any questions.
Thanks.
=======================================
29.Informatica - What are the types of lookup?
QUESTION #29 Connected and unconnected
November 08, 2005 18:44:53 #1
swati
RE: What r the types of lookup?
=======================================
i) Connected
ii) Unconnected
iii) Cached
iv) Uncached
=======================================
1. Connected lookup
2. Unconnected lookup
Caches:
1. Persistent cache
2. Re-cache from database
3. Static cache
4. Dynamic cache
5. Shared cache
Cheers
Sithu
=======================================
Hello boss/madam,
there are only two types of lookup:
1) Connected lookup
2) Unconnected lookup.
I don't understand why people are specifying the cache types; I want to know
whether nowadays caches are also taken into this category of lookup.
If yes, do specify it on the answer list.
Thank you
=======================================
30.Informatica - Differences between connected and
unconnected
lookup?
QUESTION #30
Connected lookup: receives input values directly from the pipeline.
Unconnected lookup: receives input values from the result of a :LKP expression
in another transformation.
Connected lookup: you can use a dynamic or static cache.
Unconnected lookup: you can use a static cache.
Connected lookup: the cache includes all lookup columns used in the mapping.
Unconnected lookup: the cache includes all lookup output ports in the lookup
condition and the lookup/return port.
Connected lookup: supports user-defined default values.
Unconnected lookup: does not support user-defined default values.
February 03, 2006 03:25:15 #1
Prasanna
RE: Differences between connected and unconnected look...
=======================================
In addition:
A connected lookup can return/pass multiple columns of data, whereas an
unconnected lookup can return only one port.
=======================================
In addition to this: In Connected lookup if the condition is not satisfied it returns
'0'. In UnConnected
lookup if the condition is not satisfied it returns 'NULL'.
=======================================
Hi,
Differences between connected and unconnected lookups:
Connected lookup: receives input values directly from the pipeline.
Unconnected lookup: receives input values from the result of a :LKP expression
in another transformation.
Connected: you can use a dynamic or static cache.
Unconnected: you can use a static cache.
Connected: the cache includes all lookup columns used in the mapping (that is,
lookup source columns included in the lookup condition and lookup source
columns linked as output ports to other transformations).
Unconnected: the cache includes all lookup/output ports in the lookup condition
and the lookup/return port.
Connected: can return multiple columns from the same row or insert into the
dynamic lookup cache.
Unconnected: designate one return port (R); returns one column from each row.
Connected: if there is no match for the lookup condition, the PowerCenter
Server returns the default value for all output ports. If you configure dynamic
caching, the PowerCenter Server inserts rows into the cache or leaves it
unchanged.
Unconnected: if there is no match for the lookup condition, the PowerCenter
Server returns NULL.
Connected: if there is a match for the lookup condition, the PowerCenter Server
returns the result of the lookup condition for all lookup/output ports. If you
configure dynamic caching, the PowerCenter Server either updates the row in the
cache or leaves the row unchanged.
Unconnected: if there is a match for the lookup condition, the PowerCenter
Server returns the result of the lookup condition into the return port.
Connected: passes multiple output values to another transformation. Link
lookup/output ports to another transformation.
Unconnected: passes one output value to another transformation. The
lookup/output/return port passes the value to the transformation calling the
:LKP expression.
Connected: supports user-defined default values.
Unconnected: does not support user-defined default values.
Shivaji Thaneru
=======================================
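One row of the comparison above, the behaviour on a lookup miss, can be sketched like this (hypothetical Python; the string 'UNKNOWN' stands in for a configured port default value):

```python
LOOKUP_CACHE = {101: "Alice"}   # hypothetical cached lookup table

def connected_lookup(key, default="UNKNOWN"):
    # On a miss, a connected lookup returns the port's default value.
    return LOOKUP_CACHE.get(key, default)

def unconnected_lookup(key):
    # On a miss, an unconnected lookup returns NULL (None here).
    return LOOKUP_CACHE.get(key)

connected_lookup(999)     # 'UNKNOWN'
unconnected_lookup(999)   # None
```

On a hit, both variants return the looked-up value; only the miss behaviour differs.
=======================================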
31.Informatica - What is meant by lookup caches?
QUESTION #31 The Informatica server builds a cache in memory
when it
processes the first row of data in a cached lookup
transformation. It allocates
memory for the cache based on the amount you configure in the
transformation or
session properties. The Informatica server stores condition values
in the index
cache and output values in the data cache.
September 28, 2006 06:34:33 #1
srinivas vadlakonda
RE: what is meant by lookup caches?
=======================================
lookup cache is the temporary memory that is created by the informatica server to
hold the lookup data
and to perform the lookup conditions
=======================================
A lookup cache is a temporary memory area created by the Informatica Server,
which stores the lookup data based on certain conditions. The caches are of
four types: 1) Persistent 2) Dynamic 3) Static and 4) Shared cache.
=======================================
32.Informatica - What are the types of lookup caches?
QUESTION #32 Persistent cache: you can save the lookup cache
files and reuse
them the next time the Informatica server processes a lookup
transformation
configured to use the cache.
Recache from database: if the persistent cache is not
synchronized with the
lookup table, you can configure the lookup transformation to
rebuild the lookup
cache.
Static cache: you can configure a static or read-only cache for any
lookup table. By
default the Informatica server creates a static cache. It caches the
lookup table and
lookup values in the cache for each row that comes into the
transformation. When
the lookup condition is true, the Informatica server does not
update the cache
while it processes the lookup transformation.
Dynamic cache: if you want to cache the target table and insert new
rows into the
cache and the target, you can create a lookup transformation to use
a dynamic cache.
The Informatica server dynamically inserts data into the target
table.
Shared cache: you can share the lookup cache between multiple
transformations. You can
share an unnamed cache between transformations in the same
mapping.
December 13, 2005 06:02:36 #1
Sithu
RE: What r the types of lookup caches?
=======================================
Cache
1. Static cache
2. Dynamic cache
3. Persistent cache
Sithu
=======================================
There are three types of cache, namely dynamic cache, static cache and persistent cache.
Cheers
Sithu
=======================================
Dynamic cache
Persistent cache
Recache
Shared cache
=======================================
Hi, could anyone get me information on where you would use these caches for
lookups and how you set them?
thanks
infoseeker
=======================================
There are 4 types of lookup cache -
persistent, recache, static & dynamic.
Bye
Stephen
=======================================
Types of Caches are :
1) Dynamic Cache
2) Static Cache
3) Persistent Cache
4) Shared Cache
5) Unshared Cache
=======================================
There are five types of caches, such as
static cache,
dynamic cache,
persistent cache,
shared cache, etc.
=======================================
33.Informatica - Difference between static cache and
dynamic
cache
QUESTION #33
Static cache: you cannot insert or update the cache.
Dynamic cache: you can insert rows into the cache as you pass them to the
target.
Static cache: the Informatica server returns a value from the lookup
table or cache when the condition is true. When the
condition is not true, the Informatica server returns the
default value for connected transformations and NULL for
unconnected transformations.
Dynamic cache: the Informatica server inserts rows
into the cache when the condition is false.
This indicates that the row is not
in the cache or target table. You can
pass these rows to the target table.
Submitted by: vp
lets say for example your lookup table is your target table. So when you create the
Lookup selecting the
dynamic cache what It does is it will lookup values and if there is no match it will
insert the row in both
the target and the lookup cache (hence the word dynamic cache it builds up as you
go along), or if there
is a match it will update the row in the target. On the other hand Static caches dont
get updated when
you do a lookup.
=======================================
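The difference can be sketched as two tiny functions (hypothetical Python, not Informatica code): a static cache is read-only during the session, while a dynamic cache inserts the row on a miss so that later rows see it.

```python
def static_lookup(cache, key):
    # Static cache: a miss never changes the cache.
    return cache.get(key)

def dynamic_lookup(cache, key, row):
    # Dynamic cache: on a miss, insert the row into the cache (it
    # would also be flagged for insertion into the target).
    if key in cache:
        return "update"        # row already known -> update/leave unchanged
    cache[key] = row
    return "insert"            # new row -> insert into cache and target

cache = {}
dynamic_lookup(cache, 1, "row1")   # 'insert' -- the cache grows
dynamic_lookup(cache, 1, "row1")   # 'update' -- second time it is a hit
```

This is why a dynamic cache is typically used when the lookup table is also the target table.
=======================================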
34.Informatica - Which transformation should we
use to
normalize COBOL and relational sources?
QUESTION #34 The Normalizer transformation.
When you drag a COBOL source into the Mapping Designer
workspace, the
Normalizer transformation automatically appears, creating input
and output
ports for every column in the source.
January 19, 2006 01:08:06 #1
sithusithu Member Since: December 2005 Contribution: 161
RE: Which transformation should we use to normalize th...
=======================================
The Normalizer transformation normalizes records from COBOL and relational
sources allowing you to
organize the data according to your own needs. A Normalizer transformation can
appear anywhere in a
data flow when you normalize a relational source. Use a Normalizer
transformation instead of the
Source Qualifier transformation when you normalize a COBOL source. When you
drag a COBOL
source into the Mapping Designer workspace the Normalizer transformation
automatically appears
creating input and output ports for every column in the source
Cheers
Sithu
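As a rough illustration of what normalization does, here is a Python sketch that flattens a record with COBOL OCCURS-style repeating columns (QTY1..QTY3) into one row per occurrence. The column names and the GCID-style generated id are invented for the example:

```python
def normalize(row, fixed_cols, repeating_cols):
    """Turn one denormalized record with repeating columns into several
    normalized rows, one per occurrence, tagging each with a generated
    occurrence id (similar in spirit to the Normalizer's GCID ports)."""
    out = []
    for i, col in enumerate(repeating_cols, start=1):
        normalized = {c: row[c] for c in fixed_cols}  # copy the fixed fields
        normalized["QTY"] = row[col]                  # one occurrence value
        normalized["GCID"] = i                        # generated occurrence id
        out.append(normalized)
    return out
```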
=======================================
35.Informatica - How the informatica server sorts the
string
values in Ranktransformation?
QUESTION #35 When the Informatica Server runs in the ASCII
data movement mode, it sorts session data using a binary sort
order. If you configure the session to use a binary sort order,
the Informatica Server calculates the binary value of each
string and returns the specified number of rows with the highest
binary values for the string.
December 09, 2005 00:25:27 #1
phani
RE: How the informatica server sorts the string values...
=======================================
When the Informatica Server runs in UNICODE data movement mode, it uses the sort order configured
in session properties.
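The effect of a binary sort order on string ranking can be sketched in Python: encoding each string to bytes mimics ranking by binary value, which is why uppercase letters sort before lowercase ones in ASCII mode.

```python
def top_n_binary(strings, n):
    """Return the n strings with the highest binary (byte) values,
    mimicking how a Rank transformation orders strings in ASCII mode."""
    return sorted(strings, key=lambda s: s.encode("ascii"), reverse=True)[:n]
```

For example, "Banana" ranks below "apple" because the byte value of 'B' (0x42) is lower than that of 'a' (0x61).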
=======================================
36.Informatica - What is the Rankindex in
Ranktransformation?
QUESTION #36 The Designer automatically creates a
RANKINDEX port for
each Rank transformation. The Informatica Server uses the Rank
Index port to
store the ranking position for each record in a group. For
example, if you create a
Rank transformation that ranks the top 5 salespersons for each
quarter, the rank
index numbers the salespeople from 1 to 5:
January 12, 2006 04:41:57 #1
sithusithu Member Since: December 2005 Contribution: 161
RE: What is the Rankindex in Ranktransformation?
=======================================
The port on which you want to generate the rank is known as the rank port; the values generated for it are known as the rank index.
Cheers
Sithu
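As a rough sketch of the RANKINDEX behaviour, here is a Python version of "top N per group" that numbers each group's rows from 1 upward, like the top-5-salespeople-per-quarter example in the question. The field names are invented for the example:

```python
from collections import defaultdict

def rank_with_index(rows, group_key, rank_key, top=5):
    """Return the top rows per group, each tagged with a RANKINDEX
    that numbers the ranking position within its group from 1 to top."""
    groups = defaultdict(list)
    for row in rows:
        groups[row[group_key]].append(row)   # bucket rows by group
    out = []
    for members in groups.values():
        members.sort(key=lambda r: r[rank_key], reverse=True)
        for i, row in enumerate(members[:top], start=1):
            out.append({**row, "RANKINDEX": i})  # 1-based position in group
    return out
```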
=======================================
37.Informatica - What is the Router transformation?
QUESTION #37 A Router transformation is similar to a Filter
transformation
because both transformations allow you to use a condition to
test data.
However, a Filter transformation tests data for one condition
and drops the rows
of data that do not meet the condition. A Router transformation
tests data for
one or more conditions and gives you the option to route rows of
data that do
not meet any of the conditions to a default output group.
If you need to test the same input data based on multiple
conditions, use a
Router Transformation in a mapping instead of creating multiple
Filter
transformations to perform the same task.
January 19, 2006 04:46:42 #1
sithusithu Member Since: December 2005 Contribution: 161
RE: What is the Router transformation?
=======================================
A Router transformation is similar to a Filter transformation because both
transformations allow you
to use a condition to test data. A Filter transformation tests data for one condition
and drops the rows of
data that do not meet the condition. However a Router transformation tests data for
one or more
conditions and gives you the option to route rows of data that do not meet any of
the conditions to a
default output group.
Cheers
Sithu
=======================================
Note: I think the definition and purpose of the Router transformation given by sithusithu and sithu is not fully clear, where they mention
<A Router transformation tests data for one or more conditions>
sorry sithu and sithusithu,
but I want to clarify that in a Filter transformation we can also combine several conditions, e.g. empno=1234 AND sal>25000 (2 conditions).
Actual Purposes of Router Transformation are:-
1. Similar as filter transformation to sort the source data according to the condition
applied.
2. When we want to load data into different target tables from same source but
with different condition
as per target tables requirement.
e.g. From the emp table we want to load data into three (3) different target tables: T1 (where deptno=10), T2 (where deptno=20) and T3 (where deptno=30).
For this, if we use Filter transformations we need three (3) of them, so instead of three (3) Filter transformations we will use only one (1) Router transformation.
Advantages:-
1. Better performance, because the Informatica Server processes the input data only once with a Router transformation, instead of three times as with Filter transformations.
2. Less complexity, because we use only one Router transformation instead of multiple Filter transformations.
Router Transformation is :- Active and Connected.
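The single-pass routing described above can be sketched in Python. The group names and conditions are invented for the example; note that a row which satisfies several group conditions is copied to each matching group, and rows matching none fall into the default group:

```python
def route(rows, groups):
    """One pass over the input, like a Router transformation: each row is
    tested against every user-defined group condition and copied to each
    group it satisfies; rows satisfying none go to the DEFAULT group."""
    out = {name: [] for name in groups}
    out["DEFAULT"] = []
    for row in rows:
        matched = False
        for name, cond in groups.items():
            if cond(row):
                out[name].append(row)   # row can land in several groups
                matched = True
        if not matched:
            out["DEFAULT"].append(row)  # met no condition at all
    return out
```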
=======================================
38.Informatica - What r the types of groups in
Router
transformation?
QUESTION #38 Input group and Output group.
The Designer copies property information from the input ports of the input group
to create a set of output ports for each output group.
Two types of output groups:
User-defined groups
Default group
You cannot modify or delete default groups.
December 09, 2005 00:35:44 #1
phani
RE: What r the types of groups in Router transformatio...
=======================================
The input group contains the data coming from the source. We can create as many user-defined groups as required, one for each condition we want to specify. The default group contains all the rows of data that don't satisfy the condition of any group.
=======================================
A Router transformation has the following types of groups:
l Input
l Output
Input Group
The Designer copies property information from the input ports of the input group
to create a set of
output ports for each output group.
Output Groups
There are two types of output groups:
l User-defined groups
l Default group
You cannot modify or delete output ports or their properties.
Cheers
Sithu
=======================================
39.Informatica - Why we use stored procedure
transformation?
QUESTION #39 For populating and maintaining databases.
January 19, 2006 04:41:34 #1
sithusithu Member Since: December 2005 Contribution: 161
RE: Why we use stored procedure transformation?
=======================================
A Stored Procedure transformation is an important tool for populating and
maintaining databases.
Database administrators create stored procedures to automate time-consuming
tasks that are too
complicated for standard SQL statements.
Cheers
Sithu
=======================================
You might use stored procedures to do the following tasks:
l Check the status of a target database before loading data into it.
l Determine if enough space exists in a database.
l Perform a specialized calculation.
l Drop and recreate indexes.
Shivaji Thaneru
=======================================
we use a stored procedure transformation to execute a stored procedure which in
turn might do the
above things in a database and more.
=======================================
can you give me a real time scenario please?
=======================================
40.Informatica - What is source qualifier
transformation?
QUESTION #40 When you add a relational or a flat file source
definition to a mapping, you need to connect it to
a Source Qualifier transformation. The Source Qualifier
transformation represents the records
that the Informatica Server reads when it runs a session.
Submitted by: Rama Rao B.
The Source Qualifier is also represented as a table; it acts as an intermediary between the source and target metadata, and it also generates the SQL used when mapping between the source and target metadata.
Thanks,
Rama Rao
Above answer was rated as good by the following members:
him.life
=======================================
When you add a relational or a flat file source definition to a mapping you need to
connect it to a
Source Qualifier transformation. The Source Qualifier represents the rows that the
Informatica Server
reads when it executes a session.
l Join data originating from the same source database. You can join two or more
tables with
primary-foreign key relationships by linking the sources to one Source Qualifier.
l Filter records when the Informatica Server reads source data. If you include a
filter condition the
Informatica Server adds a WHERE clause to the default query.
l Specify an outer join rather than the default inner join. If you include a user-
defined join the
Informatica Server replaces the join information specified by the metadata in the
SQL query.
l Specify sorted ports. If you specify a number for sorted ports the Informatica
Server adds an
ORDER BY clause to the default SQL query.
l Select only distinct values from the source. If you choose Select Distinct the
Informatica Server
adds a SELECT DISTINCT statement to the default SQL query.
l Create a custom query to issue a special SELECT statement for the
Informatica Server to read
source data. For example you might use a custom query to perform aggregate
calculations or execute a
stored procedure.
Cheers
Sithu
=======================================
When you add a relational or a flat file source definition to a mapping you need to
connect it to a
Source Qualifier transformation. The Source Qualifier represents the rows that the
Informatica Server
reads when it executes a session.
Cheers
Sithu
=======================================
Def: the transformation that converts the source (relational or flat file) datatypes to Informatica datatypes, so it works as an intermediary between the source and the Informatica Server.
Tasks performed by qualifier transformation:-
1. Join data originating from the same source database.
2. Filter records when the Informatica Server reads source data.
3. Specify an outer join rather than the default inner join.
4. Specify sorted ports.
5. Select only distinct values from the source.
6. Create a custom query to issue a special SELECT statement for the Informatica
Server to read source
data.
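As a rough sketch of how several of these tasks shape the generated SQL, here is a hypothetical query builder in Python. It only mimics how the filter condition, sorted ports, and Select Distinct options alter the default query; it is not how the server actually builds the query internally:

```python
def build_sq_query(table, ports, filter_cond=None, sorted_ports=0, distinct=False):
    """Assemble a default-style SELECT the way a Source Qualifier would:
    a filter condition becomes a WHERE clause, N sorted ports become an
    ORDER BY on the first N ports, and Select Distinct adds DISTINCT."""
    select = "SELECT DISTINCT" if distinct else "SELECT"
    sql = f"{select} {', '.join(ports)} FROM {table}"
    if filter_cond:
        sql += f" WHERE {filter_cond}"            # source filter
    if sorted_ports:
        sql += " ORDER BY " + ", ".join(ports[:sorted_ports])  # sorted ports
    return sql
```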
=======================================
The Source Qualifier transformation is the beginning of the pipeline of any mapping; its main purpose is to read the data from the relational or flat file source and pass it into the mapping, so that the data can flow on to the other transformations.
A Source Qualifier comes with every source definition when the source is a relational database; it fires a SELECT statement on the source database. Without the Source Qualifier your mapping will be invalid and you cannot define the pipeline to the other instances.
If the source is COBOL, then that source definition gets a Normalizer transformation instead of a Source Qualifier.
=======================================
41.Informatica - What r the tasks that source
qualifier performs?
QUESTION #41 Join data originating from the same source database.
Filter records when the Informatica Server reads source data.
Specify an outer join rather than the default inner join.
Specify sorted records.
Select only distinct values from the source.
Create a custom query to issue a special SELECT statement for the Informatica Server to read source data.
January 24, 2006 03:42:08 #1
sithusithu Member Since: December 2005 Contribution: 161
RE: What r the tasks that source qualifier performs?
=======================================
l Join data originating from the same source database. You can join two or more
tables with
primary-foreign key relationships by linking the sources to one Source Qualifier.
l Filter records when the Informatica Server reads source data. If you include a
filter condition the
Informatica Server adds a WHERE clause to the default query.
l Specify an outer join rather than the default inner join. If you include a user-
defined join the
Informatica Server replaces the join information specified by the metadata in the
SQL query.
l Specify sorted ports. If you specify a number for sorted ports the Informatica
Server adds an
ORDER BY clause to the default SQL query.
l Select only distinct values from the source. If you choose Select Distinct the
Informatica Server
adds a SELECT DISTINCT statement to the default SQL query.
l Create a custom query to issue a special SELECT statement for the
Informatica Server to read
source data. For example you might use a custom query to perform aggregate
calculations or execute a
stored procedure.
Cheers
Sithu
=======================================
42.Informatica - What is the target load order?
QUESTION #42 You specify the target load order based on the source qualifiers in a mapping. If you have multiple source qualifiers connected to multiple targets, you can designate the order in which the Informatica Server loads data into the targets.
March 01, 2006 14:27:34 #1
saritha
RE: What is the target load order?
=======================================
A target load order group is the collection of source qualifiers transformations and
targets linked
together in a mapping.
=======================================
43.Informatica - What is the default join that source
qualifier
provides?
QUESTION #43 Inner equi join.
January 24, 2006 03:40:28 #1
sithusithu Member Since: December 2005 Contribution: 161
RE: What is the default join that source qualifier pro...
=======================================
The Joiner transformation supports the following join types which you set in the
Properties tab:
l Normal (Default)
l Master Outer
l Detail Outer
l Full Outer
Cheers
Sithu
=======================================
Equijoin on a key common to the sources drawn by the SQ.
=======================================
44.Informatica - What r the basic needs to join two
sources in a
source qualifier?
QUESTION #44 The two sources should have a primary and foreign key relationship.
The two sources should have matching data types.
December 14, 2005 10:32:44 #1
rishi
RE: What r the basic needs to join two sources in a so...
=======================================
Both tables should have a common field with the same datatype.
It is not necessary that they follow a primary/foreign key relationship, but if such a relationship exists it will help from a performance point of view.
=======================================
Also, if you are using a lookup in your mapping and the lookup table is small, then try to do that lookup as a join in the Source Qualifier to improve performance.
Regards
SK
=======================================
Both the sources must be from same database.
=======================================
45.Informatica - what is update strategy
transformation ?
QUESTION #45 This transformation is used to maintain history data, or just the most recent changes, in the target table.
January 19, 2006 04:33:23 #1
sithusithu Member Since: December 2005 Contribution: 161
RE: what is update strategy transformation ?
=======================================
The model you choose constitutes your update strategy how to handle changes to
existing rows. In
PowerCenter and PowerMart you set your update strategy at two different levels:
l Within a session. When you configure a session you can instruct the Informatica
Server to
either treat all rows in the same way (for example treat all rows as inserts) or use
instructions
coded into the session mapping to flag rows for different database operations.
l Within a mapping. Within a mapping you use the Update Strategy transformation
to flag rows
for insert delete update or reject.
Chrees
Sithu
=======================================
The Update Strategy transformation is used for flagging records for insert, update, delete and reject.
In Informatica PowerCenter you can develop an update strategy at two levels:
use an Update Strategy transformation in the mapping design
use the target table options in the session
The target table options are:
Insert
Update
Delete
Update as Insert
Update else Insert
Thanks
Rekha
=======================================
46.Informatica - What is the default source option
for update
stratgey transformation?
QUESTION #46 Data driven.
March 28, 2006 05:03:53 #1
Gyaneshwar
RE: What is the default source option for update strat...
=======================================
DATA DRIVEN
=======================================
47.Informatica - What is Datadriven?
QUESTION #47 The Informatica Server follows instructions
coded into Update Strategy transformations within the session
mapping to determine how to flag records for insert, update,
delete or reject. If you do not choose the Data Driven option,
the Informatica Server ignores all Update Strategy
transformations in the mapping.
January 19, 2006 04:36:22 #1
sithusithu Member Since: December 2005 Contribution: 161
RE: What is Datadriven?
=======================================
The Informatica Server follows instructions coded into Update Strategy
transformations within the
session mapping to determine how to flag rows for insert delete update or reject.
If the mapping for the session contains an Update Strategy transformation this field
is marked Data
Driven by default.
Cheers
Sithu
=======================================
When the Data Driven option is selected in the session properties, the server will consider the update strategy
(DD_UPDATE, DD_INSERT, DD_DELETE, DD_REJECT) used in the mapping and not the options
selected in the session properties.
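A sketch of data-driven flagging in Python: the DD_* numeric codes below match the constants used in an Update Strategy expression, while the business rule itself (reject negative amounts, update existing keys, insert new ones) is invented for the example:

```python
# Numeric codes for row flags, as used in Update Strategy expressions.
DD_INSERT, DD_UPDATE, DD_DELETE, DD_REJECT = 0, 1, 2, 3

def flag_row(row, existing_keys):
    """Flag one source row the way an Update Strategy expression might
    (example rule only: reject bad data, update known keys, insert new)."""
    if row.get("amount", 0) < 0:
        return DD_REJECT          # bad data never reaches the target
    if row["id"] in existing_keys:
        return DD_UPDATE          # row already exists in the target
    return DD_INSERT              # brand-new row
```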
=======================================
48.Informatica - What r the options in the target
session of update
strategy transsformatioin?
QUESTION #48 Insert
Delete
Update
Update as update
Update as insert
Update else insert
Truncate table
February 03, 2006 03:46:07 #1
Prasanna
RE: What r the options in the target session of update...
=======================================
Update as Insert:
This option specifies that all update records from the source are flagged as inserts into the target. In other words, instead of updating the records in the target, they are inserted as new records.
Update else Insert:
This option enables Informatica to flag records either for update, if they already exist, or for insert, if they are new records from the source.
=======================================
49.Informatica - What r the types of maping wizards
that r to be
provided in Informatica?
QUESTION #49 The Designer provides two mapping wizards to
help you create
mappings quickly and easily. Both wizards are designed to create
mappings for
loading and maintaining star schemas, a series of dimensions
related to a central
fact table.
Getting Started Wizard. Creates mappings to load static fact and
dimension
tables, as well as slowly growing dimension tables.
Slowly Changing Dimensions Wizard. Creates mappings to load
slowly
changing dimension tables based on the amount of historical
dimension data you
want to keep and the method you choose to handle historical
dimension data.
January 09, 2006 02:43:25 #1
sithusithu Member Since: December 2005 Contribution: 161
RE: What r the types of maping wizards that r to be pr...
=======================================
Simple Pass Through
Slowly Growing Target
Slowly Changing Dimension:
Type 1 - most recent values
Type 2 - full history (version, flag or date)
Type 3 - current and one previous
=======================================
Inf designer :
Mapping -> wizards --> 1) Getting started -->Simple pass through mapping
-->Slowly growing target
2) slowly changing dimensions---> SCD 1 (only recent values)
--->SCD 2(HISTORY using flag or version or time)
--->SCD 3(just recent values)
one important point is dimensions are 2 types
1)slowly growing targets
2)slowly changing dimensions.
=======================================
50.Informatica - What r the types of maping in
Getting Started
Wizard?
QUESTION #50 Simple Pass Through mapping:
Loads a static fact or dimension table by inserting all rows. Use
this mapping when you want to drop all existing data from your
table before loading new data.
Slowly Growing Target:
Loads a slowly growing fact or dimension table by inserting new
rows. Use this mapping to load new data when existing data does
not require updates.
January 09, 2006 02:46:25 #1
sithusithu Member Since: December 2005 Contribution: 161
RE: What r the types of maping in Getting Started Wiza...
=======================================
1. Simple Pass Through
2. Slowly Growing Target
Cheers Sithu
=======================================
51.Informatica - What r the mapings that we use for
slowly
changing dimension table?
QUESTION #51 Type 1: Rows containing changes to existing
dimensions are updated in the target by overwriting the existing
dimension. In the Type 1 Dimension mapping, all rows contain
current dimension data.
Use the Type 1 Dimension mapping to update a slowly changing
dimension table when you do not need to keep any previous
versions of dimensions in the table.
Type 2: The Type 2 Dimension Data mapping inserts both new
and changed dimensions into the target. Changes are tracked in
the target table by versioning the primary key and creating a
version number for each dimension in the table.
Use the Type 2 Dimension/Version Data mapping to update a
slowly changing dimension table when you want to keep a full
history of dimension data in the table. Version numbers and
versioned primary keys track the order of changes to each
dimension.
Type 3: The Type 3 Dimension mapping filters source rows based
on user-defined comparisons and inserts only those found to be
new dimensions into the target. Rows containing changes to
existing dimensions are updated in the target. When updating an
existing dimension, the Informatica Server saves existing data in
different columns of the same row and replaces the existing data
with the updates.
June 03, 2006 09:39:20 #1
mamatha
RE: What r the mapings that we use for slowly changing...
=======================================
hello sir
i want whole information on slowly changing dimension.and also want project on
slowly changing
dimension in informatica.
Thanking you sir
mamatha.
=======================================
1. Update Strategy transformation
2. Lookup transformation.
=======================================
file:///C|/Perl/bin/result.html (62 of 363)4/1/2009 7:50:58 PM
file:///C|/Perl/bin/result.html
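The Type 2 version-number flavour described in the question can be sketched in Python (column names are invented for the example): a changed dimension is inserted as a new row with the version bumped, so the full history is kept.

```python
def scd2_apply(dim_rows, incoming):
    """Minimal SCD Type 2 (version-number flavor) sketch: an unseen key
    starts at version 0; a changed value is inserted as a new row with
    the version incremented; an unchanged value does nothing."""
    for key, value in incoming:
        versions = [r for r in dim_rows if r["key"] == key]
        if not versions:
            dim_rows.append({"key": key, "value": value, "version": 0})
        else:
            current = max(versions, key=lambda r: r["version"])
            if current["value"] != value:      # changed -> keep full history
                dim_rows.append({"key": key, "value": value,
                                 "version": current["version"] + 1})
    return dim_rows
```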
=======================================
SCD:
Source to SQ - 1 mapping
SQ to LKP - 2 mapping
SQ_LKP to EXP - 3 Mapping
EXP to FTR - 4 Mapping
FTR to UPD - 5 Mapping
UPD to TGT - 6 Mapping
SQGen to TGT - 7 Mapping.
I think these are the 7 mappings used for SCD in general.
For Type 1: the mappings are doubled, one for insert and another for update, 14 in total.
For Type 2: the mappings are tripled: one for insert, a 2nd for update and a 3rd to keep the old row (this is where the history is stored).
For Type 3: they are doubled, to insert one row and also to populate an extra column that keeps the previous data.
Cheers
Prasath
=======================================
52.Informatica - What r the different types of Type2
dimension
maping?
QUESTION #52 Type 2 Dimension/Version Data mapping: in this
mapping an updated dimension from the source gets inserted into
the target along with a new version number, and a newly added
dimension in the source is inserted into the target with a primary
key.
Type 2 Dimension/Flag Current mapping: this mapping is also
used for slowly changing dimensions. In addition, it creates a flag
value for changed or new dimensions. The flag indicates whether
the dimension is new or newly updated. Recent dimensions are
saved with the current flag value 1, and updated (historical)
dimensions are saved with the value 0.
Type 2 Dimension/Effective Date Range mapping: this is another
flavour of the Type 2 mapping used for slowly changing
dimensions. This mapping also inserts both new and changed
dimensions into the target, and changes are tracked by the
effective date range for each version of each dimension.
January 04, 2006 05:31:39 #1
sithusithu Member Since: December 2005 Contribution: 161
RE: What r the different types of Type2 dimension mapi...
=======================================
Type2
1. Version number
2. Flag
3.Date
Cheers
Sithu
=======================================
53.Informatica - How can u recognise whether or not
the newly
added rows in the source r gets insert in the target ?
QUESTION #53 In the Type 2 mapping we have three options to
recognise the newly added rows:
Version number
Flag value
Effective date range
December 14, 2005 10:43:31 #1
rishi
RE: How can u recognise whether or not the newly added...
=======================================
If it is a Type 2 dimension the above answer is fine, but if you want to get the info on all the insert statements and updates you need to use the session log file, configured for verbose tracing.
You will get the complete set of data: which record was inserted and which was not.
=======================================
Just use the Debugger to watch how the data moves from source to target; it will show how many new rows get inserted and how many are updated.
=======================================
54.Informatica - What r two types of processes that
informatica
runs the session?
QUESTION #54 Load Manager process: starts the session,
creates the DTM process, and sends post-session email when the
session completes.
DTM process: creates threads to initialize the session, read,
write, and transform data, and handle pre- and post-session
operations.
September 17, 2007 08:17:02 #1
rasmi Member Since: June 2007 Contribution: 20
RE: What r two types of processes that informatica run...
=======================================
When the workflow starts to run, the Informatica Server process starts.
There are two processes: the Load Manager process and the DTM process.
The Load Manager process has the following tasks:
1. lock the workflow and read the properties of the workflow
2. create the workflow log file
3. start all tasks in the workflow except sessions and worklets
4. start the DTM process
5. send the post-session email when the DTM terminates abnormally
The DTM process is involved in the following tasks:
1. read session properties
2. create the session log file
3. create threads such as the master thread, and read, write and transformation threads
4. send post-session email
5. run the pre- and post-session shell commands
6. run the pre- and post-session stored procedures
=======================================
55.Informatica - Can you generate reports in Informatica?
QUESTION #55 Yes. By using the Metadata Reporter we can generate reports in Informatica.
January 19, 2006 05:05:46 #1
sithusithu Member Since: December 2005 Contribution: 161
RE: Can u generate reports in Informatcia?
=======================================
It is an ETL tool, so you cannot build business reports from it, but you can generate a metadata report, which is not meant to be used for business analysis.
Cheers
Sithu
=======================================
Can you please tell me how to generate metadata reports?
=======================================
56.Informatica - Define mapping and session?
QUESTION #56 Mapping: it is a set of source and target definitions linked by transformation objects that define the rules for transformation.
Session: it is a set of instructions that describes how and when to move data from sources to targets.
December 04, 2006 15:07:09 #1
Pavani
RE: Define maping and sessions?
=======================================
Mapping:
A set of source and target definitions linked by different transformations that define the rules for data transformation.
Session:
A session identifies the mapping created within the Mapping Designer, and identification of the mapping by the Informatica server is done with the help of the session.
-Pavani.
=======================================
57.Informatica - Which tool do you use to create and manage sessions and batches, and to monitor and stop the Informatica server?
QUESTION #57 Informatica Server Manager.
May 16, 2006 12:55:46 #1
Leninformatica
RE: Which tool U use to create and manage sessions and...
=======================================
Informatica Workflow Manager and Informatica Workflow Monitor
=======================================
58.Informatica - Why do we use partitioning of the session in Informatica?
QUESTION #58 Partitioning improves session performance by reducing the time taken to read the source and load the data into the target.
September 30, 2005 00:26:04 #1
khadarbasha Member Since: September 2005 Contribution: 2
RE: Why we use partitioning the session in informatica...
=======================================
Performance can be improved by processing data in parallel in a single session by creating multiple partitions of the pipeline.
The Informatica server can achieve high performance by partitioning the pipeline and performing the extract, transformation and load for each partition in parallel.
=======================================
59.Informatica - How does the Informatica server increase session performance through partitioning the source?
QUESTION #59 For relational sources the Informatica server creates multiple connections, one for each partition of a single source, and extracts a separate range of data through each connection. The Informatica server reads multiple partitions of a single source concurrently. Similarly, for loading, the Informatica server creates multiple connections to the target and loads partitions of data concurrently.
For XML and file sources, the Informatica server reads multiple files concurrently. For loading the data, the Informatica server creates a separate file for each partition of a source file. You can choose to merge the targets.
February 13, 2006 08:00:53 #1
durga
RE: How the informatica server increases the session p...
=======================================
fine explanation
=======================================
60.Informatica - What are the tasks that the Load Manager process will do?
QUESTION #60 Manages session and batch scheduling: when you start the Informatica server, the Load Manager launches and queries the repository for a list of sessions configured to run on the Informatica server. When you configure the session, the Load Manager maintains a list of sessions and session start times. When you start a session, the Load Manager fetches the session information from the repository to perform validations and verifications prior to starting the DTM process.
Locking and reading the session: when the Informatica server starts a session, the Load Manager locks the session in the repository. Locking prevents you from starting the session again while it is running.
Reading the parameter file: if the session uses a parameter file, the Load Manager reads the parameter file and verifies that the session-level parameters are declared in the file.
Verifies permissions and privileges: when the session starts, the Load Manager checks whether or not the user has the privileges to run the session.
Creating log files: the Load Manager creates the log file containing the status of the session.
August 17, 2005 02:08:34 #1
AnjiReddy
RE: What r the tasks that Loadmanger process will do?
=======================================
How can you determine whether the Informatica server is running or not, without using the Event Viewer, by using a shell command? I would appreciate a solution for this one. Feel free to mail me at puli.reddy@gmail.com
=======================================
61.Informatica - What are the different threads in the DTM process?
QUESTION #61 Master thread: creates and manages all other threads.
Mapping thread: one mapping thread is created for each session; it fetches session and mapping information.
Pre- and post-session threads: created to perform pre- and post-session operations.
Reader thread: one thread is created for each partition of a source; it reads data from the source.
Writer thread: created to load data to the target.
Transformation thread: created to transform data.
October 12, 2006 00:56:46 #1
Killer
RE: What r the different threads in DTM process?
=======================================
Yupz this make sense ! )
=======================================
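The thread layout described above can be mimicked with a toy pipeline; the queues stand in for the buffers between the reader, transformation and writer threads. This is not Informatica code, just an illustration of the producer-consumer structure:

```python
# Sketch of the DTM's reader -> transformation -> writer thread layout,
# using queues to mimic the buffers between pipeline stages.
import queue
import threading

source = list(range(10))          # stand-in for source rows
buf_rt = queue.Queue()            # reader -> transformer buffer
buf_tw = queue.Queue()            # transformer -> writer buffer
target = []

def reader():
    for row in source:
        buf_rt.put(row)
    buf_rt.put(None)              # end-of-data marker

def transformer():
    while (row := buf_rt.get()) is not None:
        buf_tw.put(row * 10)      # stand-in transformation logic
    buf_tw.put(None)

def writer():
    while (row := buf_tw.get()) is not None:
        target.append(row)

threads = [threading.Thread(target=f) for f in (reader, transformer, writer)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert target == [r * 10 for r in source]
```

All three stages run concurrently, which is why each pipeline stage gets its own thread in the DTM.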
62.Informatica - Can you copy a session to a different folder or repository?
QUESTION #62 Yes. By using the Copy Session wizard you can copy a session into a different folder or repository, but that target folder or repository should contain the mapping of that session.
If the target folder or repository does not have the mapping of the session being copied, you have to copy that mapping first before you copy the session.
February 03, 2006 03:56:14 #1
Prasanna
RE: Can u copy the session to a different folder or re...
=======================================
In addition, you can copy the workflow from the Repository Manager. This will automatically copy the mapping, associated sources, targets and session to the target folder.
=======================================
63.Informatica - What is a batch, and what are the types of batches?
QUESTION #63 A grouping of sessions is known as a batch. Batches are of two types:
Sequential: runs sessions one after the other.
Concurrent: runs sessions at the same time.
If you have sessions with source-target dependencies you have to go for a sequential batch to start the sessions one after another. If you have several independent sessions you can use concurrent batches, which run all the sessions at the same time.
February 15, 2006 13:13:03 #1
sangroover
RE: What is batch and describe about types of batches?...
=======================================
Batch--- is a group of any thing
Different batches ----Different groups of different things.
=======================================
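The difference between the two batch types can be sketched with a thread pool; the session names and sleep durations below are made up, with the sleep standing in for each session's work:

```python
# Sketch: sequential vs concurrent batch execution.
import time
from concurrent.futures import ThreadPoolExecutor

def run_session(name, seconds=0.05):
    time.sleep(seconds)           # stand-in for the session's work
    return name

sessions = ["s_stage", "s_dim", "s_fact"]

# Sequential batch: one after the other (needed when s_fact depends on s_dim).
t0 = time.perf_counter()
seq_result = [run_session(s) for s in sessions]
seq_time = time.perf_counter() - t0

# Concurrent batch: independent sessions run at the same time.
t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=len(sessions)) as pool:
    conc_result = list(pool.map(run_session, sessions))
conc_time = time.perf_counter() - t0

assert seq_result == conc_result == sessions
assert conc_time < seq_time       # concurrency pays off for independent work
```

The timing difference is the whole point: concurrent batches overlap the work of independent sessions, while sequential batches respect dependencies at the cost of total run time.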
64.Informatica - Can you copy batches?
QUESTION #64 No.
December 16, 2007 14:26:30 #1
dl_mstr Member Since: November 2007 Contribution: 26
RE: Can u copy the batches?
=======================================
Yes I think workflows can be copied from one folder/repository to another
=======================================
It should definitely be yes.
Without that:
1. The migration of workflows from dev to test and to production would not make any sense.
2. For similar logic we would have to repeat the same cumbersome job.
There might be some limitations while copying batches, such as not being able to copy the overwritten properties of the workflow.
=======================================
There is a slight correction to the above answer:
we might not be able to copy the overridden (written as "overwritten" in the above answer) properties.
=======================================
65.Informatica - When does the Informatica server mark a batch as failed?
QUESTION #65 If one of the sessions is configured to "run if previous completes" and that previous session fails.
April 01, 2008 06:05:46 #1
Vani_AT Member Since: December 2007 Contribution: 16
RE: When the informatica server marks that a batch is failed?
=======================================
A batch fails when the sessions in the workflow are checked with the property "Fail parent if this task fails" and any of the sessions in the sequential batch fails.
=======================================
66.Informatica - What are the different options used to configure sequential batches?
QUESTION #66 Two options:
Run the session only if the previous session completes successfully.
Always run the session.
August 16, 2007 13:17:03 #1
rasmi Member Since: June 2007 Contribution: 20
RE: What r the different options used to configure the...
=======================================
Hi, where do we have to specify these options?
=======================================
You have to specify those options in the Workflow Designer. You can double-click the link which connects two sessions and there define the condition that the previous task's status equals SUCCEEDED; then only the next session runs. You can also edit the session and check 'Fail parent if this task fails', which means it will mark the workflow as failed. If the workflow is failed it won't run the remaining sessions.
=======================================
I would like to make a small correction to the above answer.
Even if a session fails with the above property set, all the following sessions of the workflow still run and succeed, depending on the validity/correctness of the individual sessions.
The only difference this property makes is that it marks the workflow as failed.
=======================================
67.Informatica - In a sequential batch, can you run a session if the previous session fails?
QUESTION #67 Yes, by setting the option "always runs the session".
June 26, 2008 01:03:25 #1
prade Member Since: May 2008 Contribution: 6
RE: In a sequential batch can u run the session if previous session fails?
=======================================
Yes, you can.
Start ---L1---> S1 ---L2---> S2
Suppose S1 fails and we still want to run S2; then set the L2 link condition to $S1.Status = FAILED OR $S1.Status = SUCCEEDED.
=======================================
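The link-condition idea above can be reduced to a tiny sketch; the status strings and the should_run_next helper are illustrative, not part of any Informatica API:

```python
# Sketch of how a workflow link condition such as
# "$S1.Status = FAILED OR $S1.Status = SUCCEEDED" gates the next session.
def should_run_next(prev_status, allowed=("SUCCEEDED",)):
    """Return True when the link condition on the previous task holds."""
    return prev_status in allowed

# Default link: run S2 only if S1 succeeded.
assert should_run_next("SUCCEEDED") is True
assert should_run_next("FAILED") is False

# "Run regardless" link: S2 runs whether S1 failed or succeeded.
always = ("SUCCEEDED", "FAILED")
assert should_run_next("FAILED", allowed=always) is True
```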
68.Informatica - Can you start a batch within a batch?
QUESTION #68 You cannot. If you want to start a batch that resides in a batch, create a new independent batch and copy the necessary sessions into the new batch.
February 15, 2006 13:13:47 #1
sangroover
RE: Can u start a batches with in a batch?
=======================================
Logically, yes.
=======================================
Logically, yes; we can create worklets and call the batch.
=======================================
Logically, yes. I have not worked with worklets, but just as we can start a single session within a workflow, we should similarly be able to start a worklet within a workflow.
=======================================
69.Informatica - Can you start a session inside a batch individually?
QUESTION #69 We can start a required session individually only in the case of a sequential batch; in the case of a concurrent batch we cannot do so.
April 01, 2008 06:28:12 #1
Vani_AT Member Since: December 2007 Contribution: 16
RE: Can u start a session inside a batch idividually?
=======================================
Yes we can do this in any case. Sequential or concurrent doesn't matter.
Ther is no absolute concurrent workflow. Every workflow starts with a "start" task
and hence the
workflow is a hybrid.
=======================================
Yes, we can.
=======================================
70.Informatica - How can you stop a batch?
QUESTION #70 By using the Server Manager or pmcmd.
May 28, 2007 04:38:06 #1
VEMBURAJ.P
RE: How can u stop a batch?
=======================================
By using a menu command or pmcmd.
=======================================
In the Workflow Monitor:
1. Click on the workflow name
2. Click on Stop
=======================================
71.Informatica - What are the session parameters?
QUESTION #71
Session parameters are like mapping parameters: they represent values you might want to change between sessions, such as database connections or source files.
The Server Manager also allows you to create user-defined session parameters. The following are user-defined session parameters:
Database connections
Source file name: use this parameter when you want to change the name or location of the session source file between session runs.
Target file name: use this parameter when you want to change the name or location of the session target file between session runs.
Reject file name: use this parameter when you want to change the name or location of the session reject files between session runs.
April 01, 2008 06:40:17 #1
Vani_AT Member Since: December 2007 Contribution: 16
RE: What r the session parameters?
=======================================
In addition to this, we provide the lookup file name.
The values for these variables are provided in the parameter file, and the parameters start with a $$ (double dollar symbol). There is a predefined format for specifying a session parameter; for example, if we want to use a parameter for a source file then it must be prefixed with $$SrcFile_<any string, optional>.
=======================================
A small correction: a session parameter starts with a single dollar symbol; the double dollar symbol is used for mapping parameters.
=======================================
72.Informatica - What is a parameter file?
QUESTION #72 A parameter file defines the values for parameters and variables used in a session. A parameter file is a file created with a text editor such as WordPad or Notepad.
You can define the following values in a parameter file:
Mapping parameters
Mapping variables
Session parameters
January 19, 2006 01:27:04 #1
sithusithu Member Since: December 2005 Contribution: 161
RE: What is parameter file?
=======================================
When you start a workflow you can optionally enter the directory and name of a
parameter file. The
Informatica Server runs the workflow using the parameters in the file you specify.
For UNIX shell users enclose the parameter file name in single quotes:
-paramfile '$PMRootDir/myfile.txt'
For Windows command prompt users the parameter file name cannot have
beginning or trailing spaces.
If the name includes spaces enclose the file name in double quotes:
-paramfile $PMRootDir\my file.txt
Note: When you write a pmcmd command that includes a parameter file located on
another machine
use the backslash (\) with the dollar sign ($). This ensures that the machine where
the variable is defined
expands the server variable.
pmcmd startworkflow -uv USERNAME -pv PASSWORD -s SALES:6258 -f east -w wSalesAvg -paramfile '\$PMRootDir/myfile.txt'
Cheers
Sithu
=======================================
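As a rough sketch, a parameter file is plain text grouped under headings; in PowerCenter a heading typically names the folder, workflow and session the values apply to. The folder, workflow, session and parameter names below are made up for illustration:

```
[Global]
$PMSessionLogCount=5

[SalesFolder.WF:wSalesAvg.ST:s_load_sales]
$DBConnection_Source=ORA_SRC_DEV
$InputFile_Sales=/data/in/sales.dat
$$LoadDate=01/01/2009
```

Note the prefixes: the session parameters ($DBConnection_Source, $InputFile_Sales) carry a single dollar, while the mapping parameter $$LoadDate carries a double dollar, matching the correction given under question 71.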
73.Informatica - What is the difference between partitioning of relational targets and partitioning of file targets?
QUESTION #73 If you partition a session with a relational target, the Informatica server creates multiple connections to the target database to write target data concurrently. If you partition a session with a file target, the Informatica server creates one target file for each partition. You can configure session properties to merge these target files.
June 13, 2006 19:10:59 #1
UmaBojja
RE: What is difference between partioning of relatonal...
=======================================
Partitioning can be done on both relational targets and flat files.
Informatica supports the following partition types:
1. Database partitioning
2. Round-robin
3. Pass-through
4. Hash-key partitioning
5. Key-range partitioning
All of these are applicable for relational targets; for flat files, only database partitioning is not applicable.
Informatica supports N-way partitioning. You can just specify the name of the target file and create the partitions; the rest will be taken care of by the Informatica session.
=======================================
Could you please tell me how we can partition the session and the target?
=======================================
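Two of the partition types listed above, round-robin and hash-key, can be illustrated on in-memory rows; the column names and partition count are made up:

```python
# Sketch: round-robin deals rows out evenly across partitions, while
# hash-key keeps all rows with the same key in the same partition.
N_PARTITIONS = 3
rows = [{"order_id": i, "cust": f"c{i % 5}"} for i in range(12)]

# Round-robin: row i goes to partition i mod N.
round_robin = [rows[i::N_PARTITIONS] for i in range(N_PARTITIONS)]

# Hash-key: partition chosen by hashing the key column.
hash_key = [[] for _ in range(N_PARTITIONS)]
for row in rows:
    hash_key[hash(row["cust"]) % N_PARTITIONS].append(row)

# Every row lands in exactly one partition either way.
assert sum(len(p) for p in round_robin) == len(rows)
assert sum(len(p) for p in hash_key) == len(rows)

# With hash-key, rows sharing a key always share a partition.
for i, part in enumerate(hash_key):
    for row in part:
        assert hash(row["cust"]) % N_PARTITIONS == i
```

Round-robin balances load evenly; hash-key trades perfect balance for key locality, which matters when a downstream step (such as an aggregation) needs all rows for a key in one place.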
74.Informatica - Performance tuning in Informatica?
QUESTION #74 The goal of performance tuning is to optimize session performance so that sessions run during the available load window for the Informatica server. Increase session performance as follows:
The performance of the Informatica server is related to network connections. Data generally moves across a network at less than 1 MB per second, whereas a local disk moves data five to twenty times faster. Thus network connections often affect session performance, so avoid network connections where possible.
Flat files: if your flat files are stored on a machine other than the Informatica server, move those files to the machine running the Informatica server.
Relational data sources: minimize the connections between sources, targets and the Informatica server to improve session performance. Moving the target database onto the server system may improve session performance.
Staging areas: if you use staging areas, you force the Informatica server to perform multiple data passes. Removing staging areas may improve session performance.
You can run multiple Informatica servers against the same repository. Distributing the session load across multiple Informatica servers may improve session performance.
Running the Informatica server in ASCII data movement mode improves session performance, because ASCII data movement mode stores a character value in one byte, while Unicode mode takes 2 bytes to store a character.
If a session joins multiple source tables in one Source Qualifier, optimizing the query may improve performance. Also, single-table SELECT statements with an ORDER BY or GROUP BY clause may benefit from optimization such as adding indexes.
We can improve session performance by configuring the network packet size, which controls how much data crosses the network at one time. To do this, go to the Server Manager and choose Server Configure Database Connections.
If your target has key constraints and indexes, they slow the loading of data. To improve session performance in this case, drop the constraints and indexes before you run the session and rebuild them after the session completes.
Running parallel sessions by using concurrent batches also reduces the time to load the data, so concurrent batches may also increase session performance.
Partitioning the session improves session performance by creating multiple connections to sources and targets and loading data in parallel pipelines.
In some cases, if a session contains an Aggregator transformation, you can use incremental aggregation to improve session performance.
Avoid transformation errors to improve session performance.
If the session contains a Lookup transformation, you can improve session performance by enabling the lookup cache.
If your session contains a Filter transformation, create that Filter transformation nearer to the sources, or use a filter condition in the Source Qualifier.
Aggregator, Rank and Joiner transformations may often decrease session performance because they must group data before processing it. To improve session performance in this case, use the sorted ports option.
January 05, 2007 10:20:40 #1
Infoseek Member Since: January 2007 Contribution: 4
RE: Performance tuning in Informatica?
=======================================
Thanks for your answer above. I would like to know how we can partition a big flat data file of around 1 GB, and what options to set for the same in PowerCenter v7.x.
=======================================
Hey, thank you so much for the information. That was one of the best answers I have read on this website: descriptive yet to the point, and highly useful in the real world. I appreciate your effort.
=======================================
75.Informatica - What is the difference between a mapplet and a reusable transformation?
QUESTION #75 A mapplet consists of a set of transformations that is reusable; a reusable transformation is a single transformation that can be reused.
If you create variables or parameters in a mapplet, they cannot be used in another mapping or mapplet. Unlike these, the variables created in a reusable transformation can be used in any other mapping or mapplet.
We cannot include source definitions in reusable transformations, but we can add sources to a mapplet.
The whole transformation logic is hidden in the case of a mapplet, but it is transparent in the case of a reusable transformation.
We can't use COBOL Source Qualifier, Joiner or Normalizer transformations in a mapplet, whereas we can make them reusable transformations.
January 19, 2006 01:15:34 #1
sithusithu Member Since: December 2005 Contribution: 161
RE: What is difference between maplet and reusable tra...
=======================================
Mapplet: one or more transformations.
Reusable transformation: only one transformation.
Cheers
Sithu
=======================================
A mapplet is a group of reusable transformations. The main purpose of using a mapplet is to hide the logic from the end user's point of view. It works like a function in the C language: we can use it any number of times. It is a reusable object.
A reusable transformation is a single transformation.
=======================================
76.Informatica - Define the Informatica repository?
QUESTION #76 The Informatica repository is a relational database that stores information, or metadata, used by the Informatica server and client tools.
Metadata can include information such as mappings describing how to transform source data, sessions indicating when you want the Informatica server to perform the transformations, and connect strings for sources and targets.
The repository also stores administrative information such as usernames and passwords, permissions and privileges, and product version.
Use the Repository Manager to create the repository. The Repository Manager connects to the repository database and runs the code needed to create the repository tables. These tables store metadata in the specific format the Informatica server and client tools use.
January 10, 2006 06:49:01 #1
sithusithu Member Since: December 2005 Contribution: 161
RE: Define informatica repository?
=======================================
Informatica repository: the Informatica repository is at the center of the Informatica suite. You create a set of metadata tables within the repository database that the Informatica applications and tools access. The Informatica client and server access the repository to save and retrieve metadata.
Cheers
Sithu
=======================================
77.Informatica - What are the types of metadata stored in the repository?
QUESTION #77
The following are the types of metadata stored in the repository:
Database connections
Global objects
Mappings
Mapplets
Multidimensional metadata
Reusable transformations
Sessions and batches
Shortcuts
Source definitions
Target definitions
Transformations
January 19, 2006 01:40:54 #1
sithusithu Member Since: December 2005 Contribution: 161
RE: What r the types of metadata that stores in reposi...
=======================================
- Source definitions. Definitions of database objects (tables, views, synonyms) or files that provide source data.
- Target definitions. Definitions of database objects or files that contain the target data.
- Multi-dimensional metadata. Target definitions that are configured as cubes and dimensions.
- Mappings. A set of source and target definitions along with transformations containing business logic that you build into the transformation. These are the instructions that the Informatica Server uses to transform and move data.
- Reusable transformations. Transformations that you can use in multiple mappings.
- Mapplets. A set of transformations that you can use in multiple mappings.
- Sessions and workflows. Sessions and workflows store information about how and when the Informatica Server moves data. A workflow is a set of instructions that describes how and when to run tasks related to extracting, transforming and loading data. A session is a type of task that you can put in a workflow. Each session corresponds to a single mapping.
Cheers
Sithu
=======================================
78.Informatica - What is the PowerCenter repository?
QUESTION #78 The PowerCenter repository allows you to share metadata across repositories to create a data mart domain. In a data mart domain, you can create a single global repository to store metadata used across an enterprise, and a number of local repositories to share the global metadata as needed.
January 19, 2006 01:44:05 #1
sithusithu Member Since: December 2005 Contribution: 161
RE: What is power center repository?
=======================================
- Standalone repository. A repository that functions individually, unrelated and unconnected to other repositories.
- Global repository. (PowerCenter only.) The centralized repository in a domain, a group of connected repositories. Each domain can contain one global repository. The global repository can contain common objects to be shared throughout the domain through global shortcuts.
- Local repository. (PowerCenter only.) A repository within a domain that is not the global repository. Each local repository in the domain can connect to the global repository and use objects in its shared folders.
Cheers
Sithu
=======================================
79.Informatica - How can you work with a remote database in Informatica? Did you work directly by using remote connections?
QUESTION #79 To work with a remote data source you need to connect to it with remote connections, but it is not preferable to work with that remote source directly through remote connections. Instead, bring that source onto the local machine where the Informatica server resides. If you work directly with the remote source, session performance decreases because less data passes across the network in a given time.
January 27, 2006 02:18:13 #1
sithusithu Member Since: December 2005 Contribution: 161
RE: How can u work with remote database in informatica...
=======================================
You can work with a remote database, but you have to configure an FTP connection with details such as:
IP address
User authentication
Cheers
Sithu
=======================================
80.Informatica - What is tracing level and what r the
types of
tracing level?
QUESTION #80 Tracing level represents the amount of
information that
informatcia server writes in a log file.
Types of tracing level
Normal
Verbose
Verbose init
Verbose data
No best answer available. Please pick the good answer available or submit your
answer.
April 16, 2007 03:19:08 #1
Minnu
RE: What is tracing level and what r the types of trac...
=======================================
Its
1) Terse
2) Normal
3) Verbose Init
4) Verbose data
=======================================
81.Informatica - If a session fails after loading of
10,000 records in
to the target.How can u load the records from
QUESTION #81 As explained above informatcia server has 3
methods to
recovering the sessions.Use performing recovery to load the
records from where
the session fails.
No best answer available. Please pick the good answer available or submit your
answer.
April 08, 2007 11:17:03 #1
ggk.krishna Member Since: February 2007 Contribution: 12
RE: If a session fails after loading of 10,000 records...
=======================================
Hi
We can restart the session using the session recovery option in Workflow Manager or Workflow Monitor. The loading then starts from the 10,001st row.
If you start the session normally, it starts from the 1st row.
If you define the target load type as "Bulk", session recovery is not possible.
=======================================
82.Informatica - If i done any modifications for my
table in back
end does it reflect in informatca warehouse or mapi
QUESTION #82 No. Informatica is not at all concerned with the back-end
database; it displays the information stored in the repository. If you want
back-end changes reflected on the Informatica screens, you have to import
the definitions from the back end again over a valid connection, and you
have to replace the existing definitions with the imported ones.
No best answer available. Please pick the good answer available or submit your
answer.
August 04, 2006 05:35:19 #1
vidyanand
RE: If i done any modifications for my table in back e...
=======================================
Yes, it will be reflected once you refresh the mapping again.
=======================================
It does matter if you have a SQL override, say in the Source Qualifier or in a Lookup where you
override the default SQL. If you then make a change to the underlying table in the database that makes
the override SQL incorrect for the modified table, the session will fail.
For example, if you rename a column that appears in the SQL override statement, the session will fail.
But if you add a column to the underlying table after the last column, the SQL statement in the
override will still be valid. If you change the size of columns the SQL will still be valid, although
you may get truncation of data if the database column is larger (more characters) than the port in the
Source Qualifier or a subsequent transformation.
=======================================
83.Informatica - After draging the ports of three
sources(sql server,
oracle,informix) to a single source qualifier, c
QUESTION #83 No. Unless you join those three sources in the source
qualifier, you cannot map them directly.
No best answer available. Please pick the good answer available or submit your
answer.
December 14, 2005 10:37:10 #1
rishi
RE: After draging the ports of three sources(sql serve...
=======================================
If you drag three heterogeneous sources and populate them to the target without any join, you get a
Cartesian product. If you don't use a join, not only different sources but even homogeneous sources
will show the same error.
If you don't want to use joins at the source qualifier level, you can add joins separately.
=======================================
Yes it possible...
=======================================
I don't think dragging three heterogeneous sources into a single source qualifier is valid.
Whenever we drag multiple sources into the same source qualifier:
1. There must be a joining key between the tables.
2. The SQL needs to be executed in the database to join the three tables.
To use a single source qualifier for multiple sources, the data source for all the sources should be
the same. For a heterogeneous join, a Joiner transformation has to be used.
The first part of the question itself is not possible.
=======================================
Sources from heterogeneous databases cannot be pulled into a single source qualifier. They can only be
joined using a Joiner, and then written to the target.
=======================================
84.Informatica - What is Data cleansing..?
QUESTION #84 The process of finding and removing or
correcting data that is
incorrect, out-of-date, redundant, incomplete, or formatted
incorrectly.
No best answer available. Please pick the good answer available or submit your
answer.
April 27, 2005 11:12:05 #1
neetha
RE: What is Data cleansing..?
=======================================
Data cleansing is a two step process including DETECTION and then
CORRECTION of errors in a
data set.
=======================================
This is nothing but polishing of data. For example, one subsystem may store Gender as M and F while
another stores it as MALE and FEMALE; we need to clean this data before it is added to the data
warehouse. Another typical example is addresses: the subsystems maintaining customer addresses can
all differ, so we might need an address-cleansing tool to get the customers' addresses into a clean
and consistent form.
=======================================
Data cleansing means removing inconsistent data and transferring the data in the correct way and
correct manner.
=======================================
It means the process of removing data inconsistencies and reducing data inaccuracies.
=======================================
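The Gender (M/F vs MALE/FEMALE) example above can be sketched in code. The mapping table and function below are illustrative only; they are not part of any Informatica API.

```python
# Minimal data-cleansing sketch: standardize inconsistent Gender codes
# coming from different source systems before they reach the warehouse.
# The mapping below is an assumption for illustration.
GENDER_MAP = {
    "M": "MALE",
    "F": "FEMALE",
    "MALE": "MALE",
    "FEMALE": "FEMALE",
}

def cleanse_gender(value):
    """Return the standardized gender code, or None for unknown values."""
    if value is None:
        return None
    return GENDER_MAP.get(value.strip().upper())

rows = [{"id": 1, "gender": "M"}, {"id": 2, "gender": "female"}, {"id": 3, "gender": "?"}]
# unknown codes come back as None so they can be routed to a reject file
cleansed = [{**r, "gender": cleanse_gender(r["gender"])} for r in rows]
```

In practice this kind of standardization would live in an Expression transformation or a dedicated cleansing tool; the sketch only shows the idea.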
85.Informatica - how can we partition a session in
Informatica?
QUESTION #85
No best answer available. Please pick the good answer available
or submit your
answer.
July 08, 2005 18:12:42 #1
Kevin B
RE: how can we partition a session in Informatica?
The Informatica PowerCenter Partitioning option optimizes parallel processing on
multiprocessor
hardware by providing a thread-based architecture and built-in data partitioning.
GUI-based tools reduce the development effort necessary to create data partitions
and streamline
ongoing troubleshooting and performance tuning tasks while ensuring data
integrity throughout the
execution process. As the amount of data within an organization expands and real-time demand for
information grows, the PowerCenter Partitioning option enables hardware and applications to provide
outstanding performance and jointly scale to handle large volumes of data and users.
=======================================
86.Informatica - what is a time dimension? give an
example.
QUESTION #86
No best answer available. Please pick the good answer available
or submit your
answer.
August 04, 2005 07:48:36 #1
Sakthi
RE: what is a time dimension? give an example.
The time dimension is one of the most important dimensions in a data warehouse. Whenever you generate
a report, you access the data through the time dimension.
e.g. a time dimension
Fields: date key, full date, day of week, day, month, quarter, fiscal year
=======================================
In a relational data model, for normalization purposes, year lookup, quarter lookup, month lookup and
week lookup are not merged into a single table. In dimensional data modeling (star schema) these
tables would be merged into a single table called the TIME DIMENSION, for performance and for slicing
data.
This dimension helps to find the sales done on a daily, weekly, monthly and yearly basis. We can have
a trend analysis by comparing this year's sales with the previous year's, or this week's sales with
the previous week's.
=======================================
A TIME DIMENSION is a table that contains the detail information of the time at which a particular
'transaction' or 'sale' (event) has taken place.
The TIME DIMENSION has the details of
DAY, WEEK, MONTH, QUARTER, YEAR
=======================================
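The DAY/WEEK/MONTH/QUARTER/YEAR attributes above can be derived for any calendar date with the standard library; the field names below are illustrative.

```python
# Sketch of deriving TIME DIMENSION attributes (day, week, month,
# quarter, year) for a given calendar date. Field names are assumptions
# for illustration; real time dimensions carry many more attributes.
from datetime import date

def time_dimension_row(d):
    return {
        "date_key": int(d.strftime("%Y%m%d")),   # surrogate key, e.g. 20090401
        "full_date": d.isoformat(),
        "day_of_week": d.strftime("%A"),
        "week_of_year": int(d.strftime("%W")),
        "month": d.month,
        "quarter": (d.month - 1) // 3 + 1,        # 1..4
        "year": d.year,
    }

row = time_dimension_row(date(2009, 4, 1))
```

A loader would call this once per calendar day to populate the dimension table in advance.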
87.Informatica - Diff between informatica repositry
server &
informatica server
QUESTION #87
No best answer available. Please pick the good answer available
or submit your
answer.
August 11, 2005 02:05:13 #1
Nagi R Anumandla
RE: Diff between informatica repositry server & informatica server
Informatica Repository Server: manages connections to the repository from client applications.
Informatica Server: extracts the source data, performs the data transformations, and loads the
transformed data into the target.
=======================================
88.Informatica - Explain the informatica
Architecture in detail
QUESTION #88
No best answer available. Please pick the good answer available
or submit your
answer.
January 13, 2006 07:47:31 #1
chiranth Member Since: December 2005 Contribution: 1
RE: Explain the informatica Architecture in detail...
The Informatica server connects to the source and target data using native
ODBC drivers; it also connects to the repository for running sessions and retrieving metadata
information.
source ------> informatica server ---------> target
                        |
                   REPOSITORY
=======================================
repository <----> repository server (administered by the repository server administrator)
source ----> informatica server ----> target
client tools: Designer, Workflow Manager, Workflow Monitor
=======================================
89.Informatica - Discuss the advantages &
Disadvantages of star
& snowflake schema?
QUESTION #89
No best answer available. Please pick the good answer available
or submit your
answer.
August 25, 2005 02:24:19 #1
prasad nallapati
RE: Discuss the advantages & Disadvantages of star & snowflake schema?
In a star schema every dimension will have a primary key.
In a star schema a dimension table will not have any parent table, whereas in a snowflake schema a
dimension table will have one or more parent tables.
Hierarchies for the dimensions are stored in the dimensional table itself in a star schema, whereas
hierarchies are broken into separate tables in a snowflake schema. These hierarchies help to drill
down the data from the topmost to the lowermost levels.
=======================================
In a STAR schema there is no relation between any two dimension tables whereas
in a SNOWFLAKE
schema there is a possible relation between the dimension tables.
=======================================
A star schema consists of a single fact table surrounded by dimension tables. In a snowflake schema
the dimension tables are connected to sub-dimension tables.
In a star schema the dimension tables are denormalized; in a snowflake schema the dimension tables
are normalized.
A star schema is used for report generation; a snowflake schema is used for cubes.
The advantage of a snowflake schema is that the normalized tables are easier to maintain, and it also
saves storage space.
The disadvantage of a snowflake schema is that it reduces the effectiveness of navigation across the
tables due to the large number of joins between them.
=======================================
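The star vs snowflake structure described above can be sketched with an in-memory SQLite database; the table and column names are made up for illustration.

```python
# Star: the fact joins one denormalized dimension (category flattened in).
# Snowflake: the dimension is normalized, so reaching the category
# attribute costs an extra join. All names here are hypothetical.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
-- snowflake: product dimension normalized into a sub-dimension
CREATE TABLE category_dim (cat_id INT, cat_name TEXT);
CREATE TABLE product_dim (prod_id INT, prod_name TEXT, cat_id INT);
-- star: the same dimension denormalized
CREATE TABLE product_dim_star (prod_id INT, prod_name TEXT, cat_name TEXT);
CREATE TABLE sales_fact (prod_id INT, amount INT);
INSERT INTO category_dim VALUES (10, 'Books');
INSERT INTO product_dim VALUES (1, 'SQL Guide', 10);
INSERT INTO product_dim_star VALUES (1, 'SQL Guide', 'Books');
INSERT INTO sales_fact VALUES (1, 500);
""")

# snowflake query: two joins to reach the category attribute
snowflake = con.execute("""
    SELECT c.cat_name, SUM(f.amount) FROM sales_fact f
    JOIN product_dim p ON f.prod_id = p.prod_id
    JOIN category_dim c ON p.cat_id = c.cat_id
    GROUP BY c.cat_name
""").fetchall()

# star query: one join, same answer
star = con.execute("""
    SELECT p.cat_name, SUM(f.amount) FROM sales_fact f
    JOIN product_dim_star p ON f.prod_id = p.prod_id
    GROUP BY p.cat_name
""").fetchall()
```

Both queries return the same result; the snowflake form simply trades an extra join for normalized, easier-to-maintain dimension tables.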
It depends upon the client and which schema they are following, whether snowflake or star.
=======================================
Snowflakes are an addition to the Kimball dimensional system to enable that system to handle
hierarchical data. When Kimball proposed the dimensional data warehouse, it was not at first
recognized that hierarchical data could not be stored.
Commonly, every attempt is made not to use snowflakes by flattening hierarchies, but when this is
not possible or practical the snowflake design solves the problem.
Snowflake tables are often called "outliers" by data modelers because they must share a key with a
dimension that directly connects to a fact table.
SCD2 can have "outliers", but these are very difficult to instantiate.
90.Informatica - What are the main advantages and
purpose of
using Normalizer Transformation in Informatica?
QUESTION #90
No best answer available. Please pick the good answer available
or submit your
answer.
August 25, 2005 02:27:10 #1
prasad nallapati
RE: What are main advantages and purpose of using Normalizer
Transformation in Informatica?
The Normalizer transformation is used mainly with COBOL sources, where most of the time the data is
stored in denormalized format. A Normalizer transformation can also be used to create multiple rows
from a single row of data.
=======================================
Hi. By using the Normalizer transformation we can convert rows into columns and columns into rows,
and we can also collect multiple rows from one row.
=======================================
vamshidhar
How do you convert rows to columns in a Normalizer? Could you explain?
Normally it is used to convert columns to rows, but for converting rows to columns we need an
Aggregator and an Expression, and a little coding effort is needed. Denormalization is not possible
with a Normalizer transformation.
=======================================
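The "multiple rows from a single row" behavior of the Normalizer described above can be sketched outside Informatica; the repeating column names below are hypothetical.

```python
# Sketch of what a Normalizer does with a denormalized COBOL-style
# record: repeating columns (q1..q4 sales) become one output row per
# occurrence. Column names are assumptions for illustration.
def normalize(record, repeating_cols):
    base = {k: v for k, v in record.items() if k not in repeating_cols}
    for occurs, col in enumerate(repeating_cols, start=1):
        # GCID-style occurrence index plus the repeated value
        yield {**base, "occurs": occurs, "value": record[col]}

record = {"store": "S1", "q1": 100, "q2": 150, "q3": 120, "q4": 200}
rows = list(normalize(record, ["q1", "q2", "q3", "q4"]))
# one input row has become four output rows, one per quarter
```

Going the other way (rows back to columns) needs grouping and pivoting, which is why the answer above reaches for an Aggregator and Expression instead.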
91.Informatica - How to read rejected data or bad
data from bad
file and reload it to target?
QUESTION #91
No best answer available. Please pick the good answer available
or submit your
answer.
October 04, 2005 23:16:28 #1
ravi kumar guturi
RE: How to read rejected data or bad data from bad fil...
Correct the rejected data and send it to the target relational tables using the reject loader
utility. Find the rejected data by using the column indicator and row indicator in the bad file.
=======================================
Design a trap to a file or table by the use of a filter transformation or a router
transformation. Router
works well for this.
=======================================
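The bad-file reading described above can be sketched outside Informatica. The exact reject-file layout varies, so the field positions assumed below (row indicator first, then alternating column indicators and data) are illustrative only.

```python
# Hypothetical sketch of parsing a reject (.bad) file: the first field
# of each line is taken as the row indicator, and the rest as column
# indicators and data. This layout is an assumption for illustration.
def parse_bad_file(lines):
    rows = []
    for line in lines:
        fields = line.rstrip("\n").split(",")
        rows.append({"row_indicator": fields[0], "data": fields[1:]})
    return rows

# two made-up reject lines
sample = ["0,D,1001,D,Smith\n", "1,D,1002,N,\n"]
parsed = parse_bad_file(sample)
```

Once parsed, the corrected rows would be written to a clean file and reloaded through a normal session.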
92.Informatica - How do you transfer the data from
data
warehouse to flat file?
QUESTION #92
No best answer available. Please pick the good answer available
or submit your
answer.
November 09, 2005 17:02:07 #1
paul luthra
RE: How do you transfert the data from data wareh...
You can write a mapping with the flat file as a target using a
DUMMY_CONNECTION. A flat file
target is built by pulling a source into target space using Warehouse Designer tool.
=======================================
93.Informatica - At the max how many
transformations can be used
in a mapping?
QUESTION #93
No best answer available. Please pick the good answer available
or submit your
answer.
September 27, 2005 12:32:26 #1
sangeetha
RE: At the max how many tranformations can be us in a ...
n number of transformations
=======================================
There are 22 transformations: Expression, Joiner, Aggregator, Router, Stored Procedure, etc. You can
find them on the Informatica transformation toolbar.
=======================================
In a mapping we can use any number of transformations, depending on the project and the particular
transformations required.
=======================================
There is no such limitation on the number of transformations. But from a performance point of view,
using too many transformations will reduce the session performance.
My idea is that if more transformations are needed in a mapping, it is better to go for a stored
procedure.
=======================================
Always remember when designing a mapping: less for more
design with the least number of transformations that can do the most jobs.
=======================================
94.Informatica - What is the difference between
Normal load and
Bulk load?
QUESTION #94
No best answer available. Please pick the good answer available
or submit your
answer.
September 09, 2005 09:01:19 #1
suresh
RE: What is the difference between Normal load and Bulk load?
what is the difference between powermart and power center?
=======================================
when we go for unconnected lookup transformation?
=======================================
Bulk load is faster than normal load. In the case of bulk load the Informatica server bypasses the
database log file, so we cannot roll back the transactions. Bulk load is also called direct loading.
=======================================
Normal Load: normal load writes information to the database log file, so that if any recovery is
needed it will be helpful. When the source file is a text file and we are loading data to a table,
in such cases we should use normal load only, else the session will fail.
Bulk Mode: bulk load does not write information to the database log file, so if any recovery is
needed we can't do anything in such cases.
Comparatively, bulk load is pretty much faster than normal load.
=======================================
Rule of thumb
For small number of rows use Normal load
For volume of data use bulk load
=======================================
Also remember that not all databases support bulk loading, and bulk loading fails a session if your
mapping has primary keys.
=======================================
95.Informatica - what is a junk dimension
QUESTION #95
No best answer available. Please pick the good answer available
or submit your
answer.
October 17, 2005 06:22:53 #1
prasad Nallapati
RE: what is a junk dimension
A junk dimension is a collection of random transactional codes flags and/or text
attributes that are
unrelated to any particular dimension. The junk dimension is simply a structure
that provides a
convenient place to store the junk attributes. A good example would be a trade fact
in a company that
brokers equity trades.
=======================================
A junk dimension is used for constraining queries based on text and flag values.
Sometimes a few attributes are discarded from the major dimensions; these discarded attributes are
all kept in one place, which is called the junk dimension.
=======================================
Junk dimensions are particularly useful in a snowflake schema, and are one of the reasons why
snowflake is sometimes preferred over the star schema.
There are dimensions that are frequently updated. So, from the base set of already existing
dimensions, we pull out the attributes that are frequently updated and put them into a separate
table.
This dimension table is called the junk dimension.
=======================================
96.Informatica - can we lookup a table from a source
qualifier
transformation-unconnected lookup
QUESTION #96
No best answer available. Please pick the good answer available
or submit your
answer.
November 22, 2005 01:29:07 #1
nandam Member Since: November 2005 Contribution: 1
RE: can we lookup a table from a source qualifer trans...
no we cant lookup data
=======================================
No, we can't. I will explain why.
1) Unless you assign the output of the source qualifier to another transformation or to the target,
there is no way it will include the field in the query.
2) The source qualifier doesn't have any variable fields to utilize in an expression.
No, it's not possible. The source qualifier doesn't have any variable fields to utilize in an
expression.
=======================================
97.Informatica - how to get the first 100 rows from
the flat file
into the target?
QUESTION #97
No best answer available. Please pick the good answer available
or submit your
answer.
October 04, 2005 01:07:33 #1
ravi
RE: how to get the first 100 rows from the flat file i...
by using a shell script
=======================================
Please check this one:
task -----> (link) -----> session (in Workflow Manager)
Double-click on the link and set $$source success rows (a parameter in session variables) to 100;
the session should then stop automatically.
=======================================
I'd copy the first 100 records to a new file and load that.
Just add this Unix command in session properties --> Components --> Pre-session Command:
head -100 <source file path> > <new file name>
Mention the new file name and path in the Session --> Source properties.
=======================================
1. Use test download option if you want to use it for testing.
2. Put counter/sequence generator in mapping and perform it.
Hope it helps.
=======================================
Use a sequence generator with its reset property set, and then use a filter with the condition
NEXTVAL <= 100.
=======================================
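The head -100 pre-session idea above can also be sketched in Python; the file names and row format below are illustrative.

```python
# Sketch of the pre-session "head -100" trick in pure Python: copy the
# first n records of a source flat file into a new file, which the
# session then reads. Paths and contents are made up for illustration.
import os
import tempfile
from itertools import islice

def head(src_path, dst_path, n=100):
    with open(src_path) as src, open(dst_path, "w") as dst:
        for line in islice(src, n):        # stop after n lines
            dst.write(line)

# demo: a 150-row source file trimmed to its first 100 rows
workdir = tempfile.mkdtemp()
src_file = os.path.join(workdir, "source.txt")
dst_file = os.path.join(workdir, "first100.txt")
with open(src_file, "w") as f:
    f.write("".join(f"row{i}\n" for i in range(150)))
head(src_file, dst_file, 100)
with open(dst_file) as f:
    first100 = f.readlines()
```

This mirrors what `head -100 source > first100` does in the pre-session command; the session source is then pointed at the trimmed file.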
98.Informatica - can we modify the data in flat file?
QUESTION #98
No best answer available. Please pick the good answer available
or submit your
answer.
October 04, 2005 23:19:25 #1
ravikumar guturi
RE: can we modify the data in flat file?
can't
=======================================
Manually Yes.
=======================================
yes
=======================================
yes
=======================================
Can you please explain how to do this without modifying the flat file manually?
=======================================
Just open the text file with Notepad and change whatever you want (but the datatype should remain
the same).
Cheers
sithu
=======================================
yes by open a text file and edit
=======================================
You can generate a flat file with a program, meaning not manually.
=======================================
Let's not discuss manually modifying the data of a flat file. Let's assume that the target is a flat
file, and I want to update the data in the flat-file target based on the input source rows. Like we
use the update strategy / target properties for updates in the case of relational targets, do we
have any options in the session or mapping to perform a similar task for a flat-file target?
I have heard about the append option in INFA 8.x. This may be helpful for
incremental load in the flat
file.
But this is not a workaround for updating the rows.
Please post your views.
=======================================
You can modify the flat file using shell scripting in unix ( awk grep sed ).
Hope this helps.
=======================================
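The shell-scripting idea above (awk/grep/sed) can be mirrored in Python; the file layout and values below are illustrative.

```python
# Sketch of a sed-style edit on a flat file from Python: rewrite every
# occurrence of one value with another while keeping the datatype
# (string) the same. Paths and values are assumptions for illustration.
import os
import tempfile

def replace_in_file(path, old, new):
    with open(path) as f:
        text = f.read()
    with open(path, "w") as f:
        f.write(text.replace(old, new))

# demo: standardize a gender code inside a comma-delimited flat file
flat = os.path.join(tempfile.mkdtemp(), "flat.txt")
with open(flat, "w") as f:
    f.write("1,M\n2,F\n")
replace_in_file(flat, ",M", ",MALE")
with open(flat) as f:
    edited = f.read()
```

This is equivalent to `sed -i 's/,M$/,MALE/' flat.txt`; for large files a line-by-line rewrite would be preferred over reading the whole file into memory.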
99.Informatica - difference between summary filter
and details
filter?
QUESTION #99
No best answer available. Please pick the good answer available
or submit your
answer.
December 01, 2005 09:52:34 #1
renuka
RE: difference between summary filter and details filt...
Hi
Summary Filter --- can be applied to a group of records that contain common values.
Detail Filter --- can be applied to each and every record in a database.
=======================================
Will it be correct to say that
Summary Filter ----> HAVING Clause in SQL
Details Filter ------> WHERE Clause in SQL
=======================================
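The HAVING vs WHERE distinction above can be sketched with an in-memory SQLite table; the table and column names are made up for illustration.

```python
# Detail filter = WHERE (applied to each record before grouping);
# summary filter = HAVING (applied to each group after aggregation).
# Table and column names are hypothetical.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, amount INT)")
con.executemany("INSERT INTO sales VALUES (?, ?)",
                [("N", 10), ("N", 20), ("S", 5), ("S", 1)])

# detail filter: drops the individual row ("S", 1) before grouping
detail = con.execute(
    "SELECT region, SUM(amount) FROM sales WHERE amount > 4 GROUP BY region"
).fetchall()

# summary filter: drops the whole S group because its total (6) <= 10
summary = con.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region HAVING SUM(amount) > 10"
).fetchall()
```

The WHERE query still returns both regions (with the small S row excluded from the sum), while the HAVING query keeps only the N group.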
100.Informatica - what are the difference between
view and
materialized view?
QUESTION #100
No best answer available. Please pick the good answer available
or submit your
answer.
September 30, 2005 00:34:20 #1
khadarbasha Member Since: September 2005 Contribution: 2
RE: what are the difference between view and materiali...
Materialized views are schema objects that can be used to summarize, precompute, replicate and
distribute data, e.g. to construct a data warehouse.
A materialized view provides indirect access to table data by storing the results of a query in a
separate schema object, unlike an ordinary view, which does not take up any storage space or contain
any data.
=======================================
A view is a tailored representation of data: it accesses data from existing tables, has a logical
structure, and does not occupy storage space.
A materialized view stores precalculated data: it has a physical structure and occupies storage
space.
=======================================
Difference between a view and a materialized view:
if you update or insert through a view, the corresponding table is affected, but such changes will
not affect a materialized view until it is refreshed.
=======================================
Materialized views store copies of data or aggregations. They can be used to replicate all or part
of a single table, or to replicate the result of a query against multiple tables; refreshes of the
replicated data can be done automatically by the database at time intervals.
=======================================
A view always derives the changes made to its master table after the view is created.
A materialized view does not immediately carry a change done to its master table; it has to be
refreshed to pick the change up.
=======================================
View: the SELECT query is stored in the DB. Whenever you select from the view, the stored query is
executed; effectively you are calling the stored query. When you want to use a query repeatedly, or
for complex queries, we store the query in the DB using a view.
A materialized view stores the data as well, like a table, so storage parameters are required.
=======================================
A view is just a stored query and has no physical part. Once a view is instantiated
performance can be
quite good until it is aged out of the cache. A materialized view has a physical
table associated with it; it
doesn't have to resolve the query each time it is queried. Depending on how large a
result set and how
complex the query a materialized view should perform better.
=======================================
In a materialized view we can't perform DML operations, but we can in a simple (updatable) view.
=======================================
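The refresh behavior discussed above can be sketched in Python with SQLite. SQLite has no materialized views, so a snapshot table stands in for one here; that substitution is an assumption for illustration only.

```python
# A view re-runs its stored query, so it reflects base-table changes;
# a materialized view is a stored result set that goes stale until
# refreshed. A plain table simulates the materialized view below.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x INT)")
con.execute("INSERT INTO t VALUES (1)")
con.execute("CREATE VIEW v AS SELECT SUM(x) FROM t")
# "materialized view": the query result copied into its own table
con.execute("CREATE TABLE mv AS SELECT SUM(x) AS s FROM t")

con.execute("INSERT INTO t VALUES (2)")   # change the master table
view_result = con.execute("SELECT * FROM v").fetchone()[0]  # view sees it: 3
mv_result = con.execute("SELECT * FROM mv").fetchone()[0]   # stale copy: 1

# refresh the "materialized view" to pick up the change
con.execute("DELETE FROM mv")
con.execute("INSERT INTO mv SELECT SUM(x) FROM t")
refreshed = con.execute("SELECT * FROM mv").fetchone()[0]   # now 3
```

In a database with real materialized views (e.g. Oracle), the manual delete/insert step would be replaced by a REFRESH, possibly scheduled at time intervals as noted above.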
101.Informatica - What is the difference between
summary filter
and detail filter
QUESTION #101
No best answer available. Please pick the good answer available
or submit your
answer.
November 23, 2005 15:07:50 #1
sir
RE: what is the difference between summary filter and ...
A summary filter can be applied to a group of rows that contain a common value, whereas detail
filters can be applied to each and every record of the database.
=======================================
102.Informatica - Compare Data Warehousing Top-
Down
approach with Bottom-up approach
QUESTION #102
No best answer available. Please pick the good answer available
or submit your
answer.
October 04, 2005 12:22:13 #1
ravi kumar guturi
RE: in datawarehousing approach(top/down) or (bottom/u...
The bottom-up approach is the best because in a 3-tier architecture the data tier is the bottom one.
=======================================
Top/down approach is better in datawarehousing
=======================================
Rajesh: Bottom/Up approach is better
=======================================
At the time of software integration bottom-up is good, but at implementation time top-down is good.
=======================================
top down
ODS-->ETL-->Datawarehouse-->Datamart-->OLAP
Bottom up
ODS-->ETL-->Datamart-->Datawarehouse-->OLAP
Cheers
Sithu
=======================================
In the top-down approach: first we have to build the data warehouse, then we build the data marts.
This needs more cross-functional skills, is a time-taking process, and is also costly.
In the bottom-up approach: first we build the data marts, then the data warehouse. The data mart
that is built first remains as a proof of concept for the others. Less time as compared to the above,
and less cost.
=======================================
Nothing is wrong with any of these approaches. It all depends on your business requirements and what
is already in place at your company. Lots of folks have a hybrid approach. For more info read
Kimball vs Inmon.
=======================================
103.Informatica - Discuss which is better among
incremental load,
Normal Load and Bulk load
QUESTION #103
No best answer available. Please pick the good answer available
or submit your
answer.
October 20, 2005 03:06:53 #1
ravi guturi
RE: incremental loading ? normal load and bulk load?
normal load is the best.
=======================================
Normal is always preferred over bulk.
=======================================
It depends on the requirement. Otherwise incremental load can be better, as it takes only the data
that is not already present in the target.
=======================================
If the database supports the bulk load option from Informatica, then using BULK LOAD for initially
loading the tables is recommended.
Depending upon the requirement we should choose between normal and incremental loading strategies.
=======================================
Normal Loading is Better
=======================================
Rajesh:Normal load is Better
=======================================
If supported by the database, bulk load can do the loading faster than normal load. (The incremental
load concept is different; don't merge it with bulk load vs normal load.)
=======================================
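The incremental-load idea discussed above can be sketched as a simple high-water-mark filter; the column name and structure below are assumptions for illustration.

```python
# Sketch of an incremental load: pull only source rows added since the
# last successful load, tracked by a stored high-water mark (here an
# increasing id; a timestamp works the same way). Names are made up.
def incremental_extract(source_rows, last_loaded_id):
    """Return rows whose id is greater than the stored high-water mark."""
    return [r for r in source_rows if r["id"] > last_loaded_id]

source = [{"id": 1}, {"id": 2}, {"id": 3}]
# rows 1 and 2 were loaded last time, so only row 3 moves this run
delta = incremental_extract(source, last_loaded_id=2)
```

After a successful run, the high-water mark would be updated to the maximum id just loaded, so the next run again moves only the new rows.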
104.Informatica - What is the difference between
connected and
unconnected stored procedures.
QUESTION #104
No best answer available. Please pick the good answer available
or submit your
answer.
September 25, 2005 20:02:14 #1
sangeetha
RE: What is the difference between connected and uncon...
Unconnected:
The unconnected Stored Procedure transformation is not connected directly to the
flow of the mapping.
It either runs before or after the session or is called by an expression in another
transformation in the
mapping.
connected:
The flow of data through a mapping in connected mode also passes through the
Stored Procedure
transformation. All data entering the transformation through the input ports affects
the stored procedure.
You should use a connected Stored Procedure transformation when you need data
from an input port
sent as an input parameter to the stored procedure or the results of a stored
procedure sent as an output
parameter to another transformation.
=======================================
Run a stored procedure before or after your session. (Unconnected)
Run a stored procedure once during your mapping, such as pre- or post-session. (Unconnected)
Run a stored procedure every time a row passes through the Stored Procedure transformation. (Connected or Unconnected)
Run a stored procedure based on data that passes through the mapping, such as when a specific port does not contain a null value. (Unconnected)
Pass parameters to the stored procedure and receive a single output parameter. (Connected or Unconnected)
Pass parameters to the stored procedure and receive multiple output parameters. (Connected or Unconnected)
Note: To get multiple output parameters from an unconnected Stored Procedure transformation you must create variables for each output parameter. For details see Calling a Stored Procedure From an Expression.
Run nested stored procedures. (Unconnected)
Call multiple times within a mapping. (Unconnected)
Cheers
Sithu
=======================================
105.Informatica - Differences between Informatica
6.2 and Informatica 7.0. Yours sincerely, Rushi.
QUESTION #105
No best answer available. Please pick the good answer available
or submit your
answer.
October 04, 2005 01:17:06 #1
ravi
RE: Differences between Informatica 6.2 and Informati...
Version 7.0 introduced the Custom transformation and Union transformation, and also the
flat file lookup condition.
=======================================
Features in 7.1 are:
1. Union and Custom transformations
2. Lookup on flat files
3. Grid servers working on different operating systems can coexist on the same server
4. We can use pmcmdrep
5. We can export independent and dependent repository objects
6. We can move mappings into any web application
7. Version controlling
8. Data profiling
=======================================
Can someone guide me on
1) Data profiling.
2) Exporting independent and dependent objects.
Thanks in advance.
-Azhar
=======================================
106.Informatica - whats the diff between
Informatica powercenter
server, repositoryserver and repository?
QUESTION #106 The PowerCenter Server contains the scheduled runs,
i.e. the times at which
data should load from source to target.
The repository contains all the definitions of the mappings done in the
Designer.
No best answer available. Please pick the good answer available or submit your
answer.
November 08, 2005 01:59:02 #1
Gokulnath_J Member Since: November 2005 Contribution: 3
RE: whats the diff between Informatica powercenter ser...
=======================================
The repository is a database in which all Informatica components are stored in the form
of tables. The Repository Server controls the repository and maintains data integrity and
consistency across the repository when multiple users use Informatica. The PowerCenter
Server/Informatica Server is responsible for execution of the components (sessions)
stored in the repository.
=======================================
hi
The repository is nothing but a set of tables created in a DB; it stores all metadata of the
Informatica objects.
The Repository Server is the one which communicates with the repository, i.e. the DB. All the
metadata is retrieved from the DB through the Repository Server. All the client tools
communicate with the DB through the Repository Server.
The Informatica server is the one responsible for running the workflow tasks etc. The
Informatica server also communicates with the DB through the Repository Server.
=======================================
PowerCenter Server: the PowerCenter Server does the extraction from the source and loads
it into the target.
Repository Server: it takes care of the connection between the PowerCenter client and the
repository.
Repository: it is the place where all the metadata information is stored. The Repository
Server and PowerCenter Server access the repository for managing the data.
=======================================
107.Informatica - how to create the staging area in
your database
QUESTION #107 The client has a database; through that database
you get all the sources.
No best answer available. Please pick the good answer available or submit your
answer.
November 02, 2005 11:56:42 #1
Chandran
RE: how to create the staging area in your database
=======================================
If you have defined all the staging tables as targets, use the option Targets --> Generate
SQL in the Warehouse Designer.
=======================================
A staging area in a DW is used as a temporary space to hold all the records from
the source system. So, more or less, it should be an exact replica of the source systems,
except for the load strategy, where we use truncate-and-reload options.
So create it using the same layout as in your source tables, or using the Generate SQL
option in the Warehouse Designer tab.
=======================================
Creating the staging tables/area is the work of the data modeller/DBA. Just like
'create table <tablename> ...', the tables will be created. They will have some name to
identify them as staging, like dwc_tmp_asset_eval;
tmp indicates temporary tables, which are nothing but staging.
=======================================
108.Informatica - what do the Expression and Filter
transformations do in the Informatica Slowly Growing
Target wizard?
QUESTION #108
No best answer available. Please pick the good answer available
or submit your
answer.
November 02, 2005 23:10:06 #1
sivapreddy Member Since: November 2005 Contribution: 1
RE: what does the expression n filter transformations ...
dontknow
=======================================
The EXPRESSION transformation detects and flags the rows from the source.
The Filter transformation filters out the rows that are not flagged and passes the flagged
rows to the Update Strategy transformation.
=======================================
The Expression transformation finds whether the primary key exists or not and calculates a new flag.
Based on that new flag, the Filter transformation filters the data.
Cheers
Sithu
=======================================
You can use the Expression transformation to calculate values in a single row
before you write to the
target. For example you might need to adjust employee salaries concatenate first
and last names or
convert strings to numbers.
=======================================
109.Informatica - Briefly explian the Versioning
Concept in
Power Center 7.1.
QUESTION #109
No best answer available. Please pick the good answer available
or submit your
answer.
November 29, 2005 11:29:11 #1
Manoj Kumar Panigrahi
RE: Briefly explian the Versioning Concept in Power Ce...
In power center 7.1 use 9 Tem server i.e add in Look up. But in power center 6.x
use only 8 tem server.
and add 5 transformation . in 6.x anly 17 transformation but 7.x use 22
transformation.
=======================================
Hi Manoj,
I appreciate your response, but can you be a bit clearer?
Thanks,
Sri
=======================================
When you create a version of a folder referenced by shortcuts all shortcuts
continue to reference their
original object in the original version. They do not automatically update to the
current folder version.
For example if you have a shortcut to a source definition in the Marketing folder
version 1.0.0 then you
create a new folder version 1.5.0 the shortcut continues to point to the source
definition in version 1.0.0.
Maintaining versions of shared folders can result in shortcuts pointing to different
versions of the
folder. Though shortcuts to different versions do not affect the server they might
prove more difficult to
maintain. To avoid this you can recreate shortcuts pointing to earlier versions but
this solution is not
practical for much-used objects. Therefore when possible do not version folders
referenced by shortcuts.
Cheers
Sithu
=======================================
110.Informatica - How to join two tables without
using the Joiner
Transformation.
QUESTION #110
No best answer available. Please pick the good answer available
or submit your
answer.
December 01, 2005 07:49:58 #1
thiyagarajanc Member Since: November 2005 Contribution: 4
RE: How to join two tables without using the Joiner Tr...
It's possible to join two or more tables by using the Source Qualifier, provided the tables
have a relationship.
When you drag and drop the tables, you will get a source qualifier for each table. Delete
all the source qualifiers and add a common source qualifier for all. Right-click on the
source qualifier and you will find EDIT; click on it. Click on the Properties tab; there you
will find the SQL query, in which you can write your SQL.
=======================================
The Joiner transformation requires two input transformations from two separate
pipelines. An input
transformation is any transformation connected to the input ports of the current
transformation.
Cheers
Sithu
=======================================
It can be done using the Source Qualifier, but there are some limitations.
Cheers
Sithu
=======================================
The Joiner transformation is used to join n (n > 1) tables from the same or different
databases, but the Source Qualifier transformation is used to join n tables from the same
database only.
=======================================
Simple:
in the session properties there is a user-defined join option; by using this we can join
without a Joiner.
=======================================
Use the Source Qualifier transformation to join tables on the SAME database. Under its
Properties tab you can specify the user-defined join. Any select statement you can run on
a database you can also do in the Source Qualifier.
Note: you can only join 2 tables with the Joiner transformation, but with it you can join
two tables from different databases.
Cheers
Ray Anthony
=======================================
hi
You can join 2 RDBMS sources of the same database using a SQ by specifying user-defined
joins.
You can also join two tables of the same kind using a lookup.
=======================================
111.Informatica - Identifying bottlenecks in various
components
of Informatica and resolving them.
QUESTION #111
No best answer available. Please pick the good answer available
or submit your
answer.
December 20, 2005 08:13:47 #1
kalyan
RE: Identifying bottlenecks in various components of I...
hi
The best way to find out bottlenecks is to write to a flat file and see where the bottleneck
is.
kalyan
=======================================
112.Informatica - How do you decide whether you
need ti do
aggregations at database level or at Informatica
level?
QUESTION #112
No best answer available. Please pick the good answer available
or submit your
answer.
December 05, 2005 04:45:35 #1
Rishi
RE: How do you decide whether you need ti do aggregati...
It depends upon your requirement only. If you have a good processing database, you can
create the aggregation table or view at the database level; otherwise it is better to use
Informatica. Here I'm explaining why we need to use Informatica.
Whatever it may be, Informatica is a third-party tool, so it will take more time to process
aggregation compared to the database. But Informatica has an option called incremental
aggregation which will help you update the current values with current values + new
values. There is no need to process the entire set of values again and again, unless
somebody has deleted the cache files. If that happened, total aggregation needs to be
executed in Informatica as well.
In the database we don't have an incremental aggregation facility.
=======================================
hi
See, Informatica is basically an integration tool. It all depends on the source you have and
your requirement. If you have an EMS queue, a flat file, or any source other than an
RDBMS, you need Informatica to do any kind of aggregate functions.
If your source is an RDBMS, you are not only doing the aggregation using Informatica,
right? There will be a business logic behind it, and you need to do some other things like
looking up against some table, or joining the aggregate result with the actual source, etc.
If in Informatica you are asking whether to do it at the mapping level or at the DB level,
then fine: it is always better to do the aggregation at the DB level by using a SQL override
in the SQ, if aggregation is the only main purpose of your mapping. It definitely improves
the performance.
=======================================
113.Informatica - Source table has 1000 rows. In
session
configuration --- target Load option-- \
QUESTION #113
No best answer available. Please pick the good answer available
or submit your
answer.
April 10, 2007 23:51:51 #1
Surendra Kumar
RE: Source table has 1000 rows. In session configurati...
If you use bulk mode then it will be fast to load.
=======================================
If your database supports the bulk option, then choose the bulk option; otherwise go for
the normal option.
=======================================
114.Informatica - what is the procedure to write the
query to list
the highest salary of three employees?
QUESTION #114
No best answer available. Please pick the good answer available
or submit your
answer.
December 01, 2005 07:31:41 #1
thiyagarajanc Member Since: November 2005 Contribution: 4
RE: what is the procedure to write the query to list t...
SELECT sal
FROM (SELECT sal FROM emp ORDER BY sal DESC)
WHERE ROWNUM <= 3;
=======================================
SELECT sal
FROM (SELECT sal FROM my_table ORDER BY sal DESC)
WHERE ROWNUM < 4;
=======================================
hi
There is a MAX function in Informatica; use it.
kalyan
=======================================
since this is informatica.. you might as well use the Rank transformation. check out
the help file on how
to use it.
Cheers
Ray Anthony
=======================================
select max(sal) from emp;
=======================================
The following are the queries to find out the top three salaries.
In ORACLE (using the emp table):
select * from emp e where 3 > (select count(*) from emp where sal > e.sal)
order by sal desc;
In SQL Server (using the emp table):
select top 3 sal from emp order by sal desc;
You can write the query as follows:
SQL> select * from (select ename, sal from emp order by sal desc)
where rownum <= 3;
=======================================
115.Informatica - which objects are required by the
debugger to
create a valid debug session?
QUESTION #115
No best answer available. Please pick the good answer available
or submit your
answer.
December 05, 2005 03:15:32 #1
Rishi
RE: which objects are required by the debugger to crea...
Initially, the session should be a valid session.
Sources, targets, lookups and expressions should be available, and a minimum of 1
breakpoint should be set for the debugger to debug your session.
=======================================
hi
We can create a valid debug session even without a single breakpoint. But we
have to give valid database connection details for sources, targets and lookups used in the
mapping, and it should contain valid mapplets (if any are in the mapping).
=======================================
The Informatica server must be running.
=======================================
The Informatica Server object is a must.
Cheers
Sithu
=======================================
116.Informatica - What is the limit to the number of
sources and
targets you can have in a mapping
QUESTION #116
No best answer available. Please pick the good answer available
or submit your
answer.
December 05, 2005 03:21:24 #1
Rishi
RE: What is the limit to the number of sources and tar...
As per my knowledge there is no such restriction on the number of sources or targets
inside a mapping.
The question is: if you make N tables participate at a time in processing, what is the
position of your database? From an organization's point of view, it is never encouraged to
use N tables at a time, as it reduces database and Informatica server performance.
=======================================
The restriction is only on the database side: how many concurrent threads are you
allowed to run on the DB server?
=======================================
There is one formula:
no. of blocks = 0.9 * (DTM buffer size / block size) * no. of partitions
where the no. of blocks needed = (sources + targets) * 2
=======================================
117.Informatica - What is difference between IIF and
DECODE
function
QUESTION #117
No best answer available. Please pick the good answer available
or submit your
answer.
December 16, 2005 10:27:07 #1
VJ
RE: What is difference between IIF and DECODE function...
You can use nested IIF statements to test multiple conditions. The following
example tests for various conditions and returns 0 if sales is zero or negative:
IIF( SALES > 0, IIF( SALES < 50, SALARY1, IIF( SALES < 100, SALARY2, IIF(
SALES < 200, SALARY3, BONUS))), 0 )
You can use DECODE instead of IIF in many cases. DECODE may improve
readability. The following shows how you can use DECODE instead of IIF:
DECODE( TRUE,
SALES > 0 and SALES < 50, SALARY1,
SALES > 49 AND SALES < 100, SALARY2,
SALES > 99 AND SALES < 200, SALARY3,
SALES > 199, BONUS)
=======================================
You can use DECODE in condition columns as well, whereas we can't use IIF there
(though you can use CASE). Also, retrieving data using DECODE is quicker.
=======================================
Decode can be used in select statement whereas IIF cannot be used.
=======================================
118.Informatica - What are variable ports and list
two situations
when they can be used?
QUESTION #118
No best answer available. Please pick the good answer available
or submit your
answer.
December 19, 2005 20:41:34 #1
Rajesh
RE: What are variable ports and list two situations wh...
We have mainly three ports: input, output, and variable ports. An input port represents
data flowing into the transformation. An output port is used when data is mapped to the
next transformation. A variable port is used when mathematical calculations are required.
If there is anything to add, I will be more than happy if you can share.
=======================================
You can also use them, for example, for price and quantity; with total as a variable, we
can make a sum of the total amount by giving SUM(total_amt).
=======================================
For example, if you are trying to calculate a bonus from the emp table:
BONUS = SAL * 0.2
TOTALSAL = SAL + COMM + BONUS
=======================================
A variable port is used to break a complex expression into simpler ones,
and it is also used to store intermediate values.
=======================================
Variable Ports usually carry intermediate data (values) and can be used in
Expression transformation.
=======================================
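The bonus example above shows the key property of variable ports: they are evaluated in order, and later ports can reuse them as intermediate values. A rough Python analogue of that expression, with hypothetical inputs:

```python
def expression_transform(sal, comm):
    # Variable port: computed once, holds the intermediate value.
    bonus = sal * 0.2
    # Output port: reuses the variable port instead of repeating sal * 0.2.
    totalsal = sal + comm + bonus
    return totalsal

print(expression_transform(1000, 100))  # 1300.0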
119.Informatica - How does the server recognise the
source and
target databases?
QUESTION #119
No best answer available. Please pick the good answer available
or submit your
answer.
January 01, 2006 00:53:24 #1
reddeppa
RE: How does the server recognise the source and targe...
By using an ODBC connection if it is relational; if it is a flat file, an FTP connection. We
can make sure of the connections in the properties of the session, for both sources and
targets.
=======================================
120.Informatica - How to retrive the records from a
rejected file.
explane with syntax or example
QUESTION #120
No best answer available. Please pick the good answer available
or submit your
answer.
January 01, 2006 00:51:13 #1
reddeppa
RE: How to retrive the records from a rejected file. e...
There is one utility called the reject loader, where we can find the rejected records and
are able to refine and reload the rejected records.
=======================================
Yes. Every time you run the session one reject file will be created, and all the rejected
records will be there in the reject file. You can modify the records, correct the things in
the records, and load them to the target directly from the reject file using the reject
loader. If I am wrong, please correct me. kumar.V
=======================================
Can you explain how to load rejected rows through Informatica?
=======================================
During the execution of a workflow, all the rejected rows will be stored in bad
files (where your Informatica server is installed; C:\Program Files\Informatica PowerCenter
7.1\Server). These bad files can be imported as a flat file source, and then through a direct
mapping we can load these files in the desired format.
=======================================
121.Informatica - How to lookup the data on
multiple tables.
QUESTION #121
No best answer available. Please pick the good answer available
or submit your
answer.
January 05, 2006 12:05:06 #1
reddeppa
How to lookup the data on multiple tabels.
By using a SQL override we can look up the data on multiple tables. See it in the
Properties tab.
=======================================
Hi,
thanks for your response. But my question is:
I have two source or target tables, and I want to look up those two source or target
tables. How can I? Is it possible with a SQL override?
=======================================
just check with import option
=======================================
If you want to look up data on multiple tables at a time, you can do one thing: join the
tables which you want, then look up that joined table. Informatica provides lookups on
joined tables; hats off to Informatica.
=======================================
Hi,
you can do it.
When you create the lookup transformation, INFA asks for a table name, so you can
choose either source, target, import or skip.
So click skip, and then use the SQL override property in the Properties tab to join two
tables for the lookup.
=======================================
file:///C|/Perl/bin/result.html (129 of 363)4/1/2009 7:50:58 PM
file:///C|/Perl/bin/result.html
Join the two sources by using the Joiner transformation and then apply a lookup on the
resulting table.
=======================================
hi
Whatever my friends have answered earlier is correct. To be more specific:
if the two tables are relational, then you can use the SQL lookup override option to join
the two tables in the lookup properties. You cannot join a flat file and a relational table.
E.g.: the lookup default query will be 'select <lookup table column names> from
<lookup_table>'. You can now continue this query: add the column names of the 2nd
table with the qualifier and a where clause. If you want to use an order by, then use -- at
the end of the order by.
Hope this is clearer.
=======================================
122.Informatica - What is the procedure to load the
fact table.Give
in detail?
QUESTION #122
No best answer available. Please pick the good answer available
or submit your
answer.
January 19, 2006 14:26:22 #1
Guest
RE: What is the procedure to load the fact table.Give ...
Based on the requirements for your fact table, choose the sources and data, and
transform them based on your business needs. For the fact table you need a primary key,
so use a Sequence Generator transformation to generate a unique key and pipe it to the
target (fact) table along with the foreign keys from the source tables.
Please correct me if I am wrong.
=======================================
We use the 2 wizards (i.e. the Getting Started wizard and the Slowly Changing Dimension
wizard) to load the fact and dimension tables. By using these 2 wizards we can create
different types of mappings according to the business requirements and load into the star
schemas (fact and dimension tables).
=======================================
First the dimension tables need to be loaded; then, according to the specifications, the
fact tables should be loaded. Don't think that fact tables are different in the case of
loading; it is a general mapping, as we do for other tables. The specifications will play an
important role in loading the fact.
=======================================
hi
Usually source records are looked up against the records in the dimension table. DIM
tables are called lookup or reference tables; all the possible values are stored in the DIM
table. E.g. for product, all the existing prod_ids will be in the DIM table. When data from
the source is looked up against the DIM table, the corresponding keys are sent to the fact
table. This is not a fixed rule to be followed; it may vary as per your requirements and the
methods you follow. Sometimes only the existence check will be done and the prod_id
itself will be sent to the fact.
=======================================
123.Informatica - What is the use of incremental
aggregation?
Explain me in brief with an example.
QUESTION #123
No best answer available. Please pick the good answer available
or submit your
answer.
January 29, 2006 11:58:51 #1
gazulas Member Since: January 2006 Contribution: 17
RE: What is the use of incremental aggregation? Explai...
It's a session option. When the Informatica server performs incremental aggregation, it
passes new source data through the mapping and uses historical cache data to perform
new aggregation calculations incrementally. We use it for performance.
=======================================
Incremental aggregation is in the session properties. Say I have 500 records in my source
and then I get 300 more records. If you are not using incremental aggregation, whatever
calculations were applied to the 500 records will be done again on all 500 + 300 records.
If you are using incremental aggregation, the calculation will be done only on the new
records (300). Due to this, performance will increase.
=======================================
124.Informatica - How to delete duplicate rows in
a flat file source; is there
any option in Informatica?
QUESTION #124 Submitted by: gazulas
Use a Sorter transformation; in it you will have a "distinct"
option. Make use of
it.
Above answer was rated as good by the following members:
sn3508
Use a Sorter transformation; in it you will have a distinct option. Make use of it.
=======================================
hi
You can use a dynamic lookup, an aggregator, or a sorter for doing this.
=======================================
Instead, we can use a 'select distinct' query in the Source Qualifier of the source flat file.
Correct me if I am wrong.
wrong.
=======================================
You cannot write a SQL override for a flat file.
=======================================
125.Informatica - how to use mapping parameters
and what is
their use
QUESTION #125
No best answer available. Please pick the good answer available
or submit your
answer.
January 29, 2006 11:47:14 #1
gazulas Member Since: January 2006 Contribution: 17
RE: how to use mapping parameters and what is their us...
Click Here to view complete document
In the Designer you will find the mapping parameters and variables options; you can
assign a value to them in the Designer. Coming to their uses: suppose you are doing
incremental extractions daily, and suppose your source system contains a day column.
Then every day you would have to go to that mapping and change the day so that the
particular data is extracted; if we do that, it will be like a layman's work. There comes the
concept of mapping parameters and variables. Once you assign a value to a mapping
variable, it will change between sessions.
=======================================
Mapping parameters and variables make the use of mappings more flexible, and they also
avoid creating multiple mappings. They help in adding incremental data. Mapping
parameters and variables have to be created in the Mapping Designer by choosing the
menu option Mappings ----> Parameters and Variables, then entering the name for the
variable or parameter (it has to be preceded by $$), and choosing the type as parameter/
variable and a datatype. Once defined, the variable/parameter can be used in any
expression, for example in the SQ transformation in the Source Filter properties tab: just
enter the filter condition. Finally, create a parameter file to assign the value for the
variable/parameter and configure the session properties. However, the final step is
optional: if the parameter is not present, it uses the initial value which is
assigned at the time of creating the variable.
=======================================
126.Informatica - Can we use aggregator/active
transformation
after update strategy transformation
QUESTION #126
No best answer available. Please pick the good answer available
or submit your
answer.
January 30, 2006 05:06:08 #1
jawahar
RE: Can we use aggregator/active transformation after ...
We can use it, but the update flag will not remain. We can use a passive transformation,
though.
=======================================
I guess no; as per my knowledge the Update Strategy should be placed just before the target.
=======================================
You can use an Aggregator after an Update Strategy, but there is a catch: once you perform the update strategy, say you have flagged some rows to be deleted, and you then run all rows through the Aggregator, with a SUM function the deleted rows will still be subtracted in that aggregation.
=======================================
127.Informatica - Why are dimension tables denormalized in nature?
QUESTION #127
January 31, 2006 04:05:43 #1
Rahman
RE: why dimenstion tables are denormalized in nature ?...
Because in data warehousing historical data should be maintained. Maintaining historical data means, for example, keeping an employee's details for both where he worked previously and where he works now in one table. If you enforce a plain primary key, it won't allow multiple records with the same employee ID, so we use surrogate keys (e.g., an Oracle sequence for the key column) to achieve history. Since all dimensions maintain historical data in this way, they end up denormalized: not exact duplicate records, but additional records for the same employee number are maintained in the table.
=======================================
Dear Rahman, thanks for your response. First, a request to all users of this site: please give answers only if you are confident about them, and refer to the manual if unsure. If we give wrong answers, many people who don't know the answer will take them as correct and may fail in an interview; the site must be helpful to others, so please keep that in mind.
Regarding why dimension tables are denormalized in nature, I discussed this with my project manager, and what he told me is: the attributes in a dimension table are used over and over again in queries. For efficient query performance it is best if the query picks up an attribute from the dimension table and goes directly to the fact table, not through intermediate tables. If we normalized the dimension table, we would create such intermediate tables, and that would not be efficient.
=======================================
Yes, what your manager told you is correct. Apart from this, we maintain hierarchies in these tables, and maintaining a hierarchy is pretty important in the DWH environment. For example, if a child table and a parent table are kept separately, one has to join or query both tables every time to get the parent-child relation; if both child and parent are in the same table, we can always refer to them immediately. Similarly, for a hierarchy like county > city > state > territory > division > region > nation, having different tables for each level would waste database space and force us to query all of those tables every time. That is why we maintain hierarchies in dimension tables, and based on the business we decide whether to maintain them in the same table or in different tables.
=======================================
Hello everyone,
I don't know the answer to this question, but I have to ask: how can we say that the dimension table is denormalized, when in a snowflake schema we normalize all the dimension tables?
What would be your comment on this?
=======================================
I am a beginner to DW, but as far as I know, fact tables are denormalized and dimension tables are normalized. If I am wrong, please correct me.
=======================================
De-normalization is basically the concept of keeping all of a dimension's hierarchies in a single dimension table. This causes fewer joins while retrieving data from dimensions, and hence faster data retrieval. This is why dimensions in OLAP systems are denormalized.
=======================================
128.Informatica - In a sequential batch how can we stop a single session?
QUESTION #128 Submitted by: prasadns26
hi,
we can stop it using the pmcmd command, or in the Workflow Monitor right-click on that particular session and select Stop; this will stop the current session and the sessions after it.
Above answer was rated as good by the following members:
sn3508
We have a task called Event-Wait; using that we can stop, and we start again using Event-Raise. This is as per my knowledge.
=======================================
hi
we can stop it using the pmcmd command, or in the Workflow Monitor right-click on that particular session and select Stop; this will stop the current session and the sessions after it.
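As an illustration, a pmcmd invocation to stop one task might look like the sketch below (the service, domain, user, folder, workflow, and session names are hypothetical, and flag names vary by PowerCenter version):

```
pmcmd stoptask -sv IntSvc_Dev -d Domain_Dev -u Administrator -p secret -f MyFolder -w wf_daily_load s_m_load_customers
```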
=======================================
129.Informatica - How do you handle decimal places while importing a flat file into Informatica?
QUESTION #129
February 11, 2006 20:44:03 #1
rajendar
RE: How do you handle decimal places while importing a...
While importing data from a flat file in Informatica, the import wizard will ask for the precision; just enter it there.
=======================================
While importing the flat file, the Flat File Wizard helps configure the properties of the file: select the numeric column and enter the precision value and the scale. Precision includes the scale; for example, if the number is 98888.654, enter precision 8, scale 3, and width 10 for a fixed-width flat file.
=======================================
You can also handle that in the Source Analyzer window: go to the ports of the flat file definition and change the precision and scale.
=======================================
hi
While importing the flat file definition, just specify the scale for the numeric datatype. In the mapping, the flat file source supports only the number datatype (no decimal or integer). The Source Qualifier associated with that source will have a decimal datatype for that number port:
source number-datatype port -> SQ decimal-datatype port. Integer is not supported; hence decimal takes care of it.
=======================================
130.Informatica - If you have four lookup tables in the workflow, how do you troubleshoot to improve performance?
QUESTION #130
February 10, 2006 15:51:01 #1
swapna
Use shared cache
When a workflow has multiple lookup tables, use a shared cache.
=======================================
There are many ways to improve a mapping that has multiple lookups:
1) We can create an index on the lookup table if we have permissions (staging area).
2) Divide the lookup mapping into two: (a) dedicate one to inserts (source -> target for new rows; only new rows come into the mapping, so the process is fast); (b) dedicate the second one to updates (source -> target for existing rows; only rows that already exist come into the mapping).
3) We can increase the cache size of the lookup.
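For point 1, such an index on the lookup condition columns can be sketched as follows (the table and column names are hypothetical):

```sql
-- Index the columns used in the lookup condition so the SQL the
-- session issues against the lookup table can avoid a full scan.
CREATE INDEX idx_customer_lkp ON customer_dim (customer_id, effective_date);
```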
=======================================
131.Informatica - How do I import VSAM files from source to target? Do I need a special plugin?
QUESTION #131
February 13, 2006 08:52:18 #1
swati
RE: How do I import VSAM files from source to target. ...
As far as I know, use the PowerExchange tool to convert the VSAM file to Oracle tables, then do the mapping as usual to the target table.
=======================================
Hi,
In the Mapping Designer we have a direct option to import files from VSAM. Navigation: Sources > Import from File > File from COBOL.
Thanks,
Farid Khan Pathan.
=======================================
Yes, you will need PowerExchange. With that product you can read from and write to VSAM. I have used it to read VSAM from one mainframe platform and write to a different platform, and have worked on KSDS and ESDS file types. You will need the PowerExchange client on your platform and a PowerExchange listener on each of the mainframe platforms that you wish to work on.
=======================================
PowerExchange does not need to copy your VSAM file to Oracle unless you want it to; it can do a direct read/write to VSAM.
=======================================
132.Informatica - Differences between Normalization and the Normalizer transformation.
QUESTION #132
March 08, 2006 06:03:58 #1
ravi kumar guturi
RE: Differences between Normalizer and Normalizer tran...
Normalizer: a transformation mainly used for COBOL sources; it changes rows into columns and columns into rows.
Normalization: to remove redundancy and inconsistency.
=======================================
133.Informatica - What is an IQD file?
QUESTION #133
February 27, 2006 05:41:52 #1
sathyanath.gopi Member Since: February 2006 Contribution: 2
RE: What is IQD file?
IQD file stands for Impromptu Query Definition. This file is mainly used with the Cognos Impromptu tool: after creating a report (IMR), we save the IMR as an IQD file, which is then used while creating a cube in PowerPlay Transformer (selecting Impromptu Query Definition as the data source type).
=======================================
IQD file stands for Impromptu Query Definition. This file is used for creating cubes in Cognos PowerPlay Transformer.
=======================================
134.Informatica - What is data merging, data cleansing, sampling?
QUESTION #134
March 08, 2006 06:01:26 #1
ravi kumar guturi
RE: What is data merging, data cleansing, sampling?
Simply put:
Cleansing: to identify and remove redundancy and inconsistency.
Sampling: to send just a sample of the data from source to target.
=======================================
135.Informatica - How to import an Oracle sequence into Informatica.
QUESTION #135
February 15, 2006 05:39:04 #1
sunil kumar
RE: How to import oracle sequence into Informatica.
Create a procedure and reference the sequence inside the procedure; finally, call the procedure in Informatica with the help of a Stored Procedure transformation. If there is still any problem, please contact me. Thanks, good luck!
=======================================
Hi Sunil,
I have a problem with this. Can you tell me a procedure to generate sequence numbers in SQL? If I give n employees, it should generate the sequence, and I want to use those numbers in Informatica, via a stored procedure, and load them into the target.
Can you help me with this?
Thanks.
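One possible sketch of what is described above, assuming Oracle (the sequence and function names are hypothetical); the function can then be called from a Stored Procedure transformation, one call per row:

```sql
CREATE SEQUENCE emp_seq START WITH 1 INCREMENT BY 1;

-- Wrapper routine returning the next sequence value;
-- Informatica invokes it via a Stored Procedure transformation.
CREATE OR REPLACE FUNCTION get_emp_seq RETURN NUMBER
IS
  v_next NUMBER;
BEGIN
  SELECT emp_seq.NEXTVAL INTO v_next FROM dual;
  RETURN v_next;
END;
/
```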
=======================================
136.Informatica - Without using an Update Strategy and session options, how can we update our target table?
QUESTION #136
February 14, 2006 23:48:35 #1
Saritha
RE: With out using Updatestretagy and sessons options,...
You can do this by using an update override in the target properties.
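A sketch of what such a target update override can look like (the table and port names are hypothetical; :TU references the ports of the target definition):

```sql
UPDATE t_employee
SET    ename = :TU.ENAME,
       sal   = :TU.SAL
WHERE  empno = :TU.EMPNO
```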
=======================================
using update override in target option.
=======================================
In the session properties there are target options such as:
insert
update
insert as update
update as update
By using these we can easily solve it.
=======================================
By default all rows in the session are set with the insert flag. You can change this in the session's general properties: Treat source rows as: Update. All incoming rows will then carry the update flag, and you can update the rows in the target table.
=======================================
hi
If your database is Teradata, we can do it with a TPump or MLoad external loader.
The update override in the target properties is used basically for updating the target table based on a non-key column, e.g., update by ENAME, which is not a key column in the EMP table. But if you use an Update Strategy or session-level properties, the target necessarily should have a primary key.
=======================================
file:///C|/Perl/bin/result.html (143 of 363)4/1/2009 7:50:58 PM
file:///C|/Perl/bin/result.html
137.Informatica - Two relational tables are connected to one Source Qualifier; what possible errors can be thrown?
QUESTION #137
February 18, 2006 08:45:49 #1
geek_78 Member Since: February 2006 Contribution: 1
RE: Two relational tables are connected to SQ Trans,wh...
We can connect two relational tables to one Source Qualifier transformation; no errors will occur.
Regards,
R. Karthikeyan
=======================================
The only two requirements, as far as I know:
1. Both tables should have a primary key / foreign key relationship.
2. Both tables should be available in the same schema or the same database.
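When both conditions hold, the Source Qualifier generates a default join between the two tables; using the classic EMP/DEPT tables for illustration, the generated SQL is roughly of this shape:

```sql
SELECT EMP.EMPNO, EMP.ENAME, EMP.DEPTNO, DEPT.DNAME
FROM   EMP, DEPT
WHERE  EMP.DEPTNO = DEPT.DEPTNO
```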
=======================================
138.Informatica - what are partition points?
QUESTION #138 Submitted by: saritha
Partition points mark the thread boundaries in a source pipeline and divide
the pipeline into stages.
Above answer was rated as good by the following members:
sn3508
Partition points mark the thread boundaries in a source pipeline and divide
the pipeline into stages.
=======================================
139.Informatica - What are cost-based and rule-based approaches, and what is the difference?
QUESTION #139
March 02, 2006 17:18:19 #1
Gayathri
RE: what are cost based and rule based approaches and ...
Cost-based and rule-based approaches are optimization techniques used with databases when we need to optimize a SQL query.
Basically, Oracle provides two types of optimizer (indeed three, but we use only these two, because the third has some disadvantages). Whenever you process a SQL query in Oracle, what the Oracle engine does internally is read the query and decide the best possible way of executing it. In this process Oracle follows one of these optimization techniques:
1. Cost-Based Optimizer (CBO): if a SQL query can be executed in two different ways (say path 1 and path 2 for the same query), the CBO calculates the cost of each path, analyses which path has the lower execution cost, and executes that path, thereby optimizing the query execution.
2. Rule-Based Optimizer (RBO): this basically follows the rules which are needed for executing a query, so the optimizer runs the query depending on the rules that apply.
Use:
If the table you are trying to query has already been analyzed, Oracle will go with the CBO.
If the table is not analyzed, Oracle follows the RBO.
If the table has never been analyzed, Oracle will go with a full table scan the first time.
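On the database side this is typically influenced as sketched below (assuming Oracle; the schema and table names are hypothetical):

```sql
-- Gather statistics so the cost-based optimizer can be used:
ANALYZE TABLE emp COMPUTE STATISTICS;
-- or, on later Oracle releases:
EXEC DBMS_STATS.GATHER_TABLE_STATS('SCOTT', 'EMP');

-- Force rule-based optimization for a single query with a hint:
SELECT /*+ RULE */ * FROM emp WHERE deptno = 10;
```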
=======================================
140.Informatica - What is a mystery dimension?
QUESTION #140
March 05, 2006 23:55:59 #1
Reddy
RE: what is mystery dimention?
Using a mystery dimension you maintain the miscellaneous ("mystery") data in your project.
=======================================
Please explain clearly what is meant by a mystery dimension.
=======================================
Also known as junk dimensions: making sense of the rogue fields in your fact table. Please read the article:
http://www.intelligententerprise.com/000320/webhouse.jhtml
=======================================
141.Informatica - What is the difference between Informatica 7.1 and Ab Initio?
QUESTION #141
February 24, 2006 01:25:58 #1
Niraj Kumar
RE: what is difference b/w Informatica 7.1 and Abiniti...
In Ab Initio there is the concept of the Co>Operating System, which runs the mapping in a parallel fashion; this is not present in Informatica.
=======================================
There are a lot of differences between Informatica and Ab Initio:
In Ab Initio we use three kinds of parallelism, but Informatica uses one.
In Ab Initio there is no built-in scheduling option; we schedule manually or via a PL/SQL script, but Informatica contains four scheduling options.
Ab Initio contains the Co>Operating System, but Informatica does not.
Ramp-up time is much quicker in Ab Initio compared to Informatica.
Ab Initio is more user-friendly than Informatica.
=======================================
142.Informatica - What is MicroStrategy? What is it used for? Can anyone explain it in detail?
QUESTION #142
March 11, 2006 03:28:52 #1
kundan
RE: What is Micro Strategy? Why is it used for? Can an...
MicroStrategy is again a BI tool, a HOLAP one: you can create two-dimensional reports and also cubes in it. It is basically a reporting tool, with a full range of reporting on the web as well as on Windows.
=======================================
143.Informatica - Can I start and stop a single session in a concurrent batch?
QUESTION #143
March 08, 2006 05:50:15 #1
ravi kumar guturi
RE: Can i start and stop single session in concurent b...
Yes, sure. Just right-click on the particular session and go to the recovery option, or use Event-Wait and Event-Raise.
=======================================
144.Informatica - What is gap analysis?
QUESTION #144
April 11, 2007 10:55:26 #1
vizaik Member Since: March 2007 Contribution: 30
RE: what is the gap analysis?
For a project there will be:
1. BRD (Business Requirement Document) - BA
2. SSSD (Source System Study Document) - BA
The BRD consists of the requirements of the client; the SSSD consists of the source system study. When the source given in the SSSD does not meet the requirements specified in the BRD, that shortfall is treated as the gap; in one phrase, the difference between 1 and 2 is called gap analysis.
=======================================
145.Informatica - What is the difference between stop and abort?
QUESTION #145
March 02, 2006 15:17:45 #1
Sirisha
RE: what is the difference between stop and abort
The PowerCenter Server handles the abort command for the Session task like the
stop command except
it has a timeout period of 60 seconds. If the PowerCenter Server cannot finish
processing and
committing data within the timeout period it kills the DTM process and terminates
the session.
=======================================
Stop: if the session you want to stop is part of a batch, you must stop the batch; if the batch is part of a nested batch, stop the outermost batch.
Abort: you can issue the Abort command; it is similar to the Stop command except that it has a 60-second timeout. If the server cannot finish processing and committing data within 60 seconds, it kills the DTM process and terminates the session.
Stop: in this case the data query from the source databases is stopped immediately, but whatever data has already been loaded into the buffers continues through transformation and loading. Abort: same as Stop, but in this case the maximum time allowed for the buffered data is 60 seconds.
=======================================
146.Informatica - Can we run a group of sessions without using the Workflow Manager?
QUESTION #146
March 05, 2006 23:48:38 #1
Reddy
RE: can we run a group of sessions without using workf...
Yes, it is possible: using the pmcmd command you can run the group of sessions without using the Workflow Manager. This is the answer as per my knowledge.
=======================================
147.Informatica - What is meant by a complex mapping?
QUESTION #147
March 13, 2006 02:46:29 #1
satyam_un Member Since: March 2006 Contribution: 5
RE: what is meant by complex mapping,
A complex mapping means one having many business rules.
=======================================
A complex mapping means one involving more logic and more business rules. An example of a complex mapping in my project: in my bank project I was involved in constructing a data warehouse. The bank has many customers, and after taking loans some of them relocate to another place; I found it difficult to maintain both the previous and the current addresses, and in that sense I used SCD Type 2. This is a simple example of a complex mapping.
=======================================
148.Informatica - Explain the use of the Update Strategy transformation.
QUESTION #148
March 22, 2006 00:00:52 #1
satyambabu
RE: explain use of update strategy transformation
To maintain the history data as well as the most recently changed data.
=======================================
To flag source records as INSERT, DELETE, UPDATE, or REJECT for the target database. The default flag is Insert. This is a must for incremental data loading.
=======================================
149.Informatica - What are mapping parameters and variables, and in which situations can we use them?
QUESTION #149
March 16, 2006 06:27:48 #1
Girish
RE: what are mapping parameters and varibles in which ...
Mapping parameters have a constant value throughout the session, whereas a mapping variable's value can change; the Informatica server saves the value in the repository and uses it the next time you run the session.
=======================================
If we need to change certain attributes of a mapping after every session run, it would be very tedious to edit the mapping and change the attribute each time. So we use mapping parameters and variables and define the values in a parameter file; then we can simply edit the parameter file to change the attribute values, which keeps the process simple.
Mapping parameter values remain constant; if we need to change the parameter value, we must edit the parameter file.
The value of a mapping variable, however, can be changed by using a variable function. If we need to increment an attribute value by 1 after every session run, we can use a mapping variable; with a mapping parameter we would need to manually edit the attribute value in the parameter file after every session run.
=======================================
How can you edit the parameter file? Once you set up a mapping variable, how do you define it in a parameter file?
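A parameter file is just a plain text file that can be edited in any editor; a minimal sketch is shown below (the folder, workflow, session, and parameter names are hypothetical), which the session or workflow properties then point to:

```
[MyFolder.WF:wf_daily_load.ST:s_m_load_sales]
$$LoadDate=2006-03-16
$$SourceSystem=SALES_EU
```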
=======================================
150.Informatica - What is a worklet, what is the use of a worklet, and in which situations can we use it?
QUESTION #150 Submitted by: SSekar
Worklet is a set of tasks. If a certain set of tasks has to be reused in many workflows, then we use worklets. To execute a worklet, it has to be placed inside a workflow.
The use of a worklet in a workflow is similar to the use of a mapplet in a mapping.
Above answer was rated as good by the following members:
sn3508
A set of workflow tasks is called a worklet. Workflow tasks include:
1) Timer 2) Decision 3) Command 4) Event-Wait 5) Event-Raise 6) Email, etc.
We use these in different situations by means of worklets.
=======================================
Worklet is a set of tasks. If a certain set of task has to be reused in many workflows
then we use
worklets. To execute a Worklet it has to be placed inside a workflow.
The use of worklet in a workflow is similar to the use of mapplet in a mapping.
=======================================
file:///C|/Perl/bin/result.html (153 of 363)4/1/2009 7:50:58 PM
file:///C|/Perl/bin/result.html
Worklets are reusable workflows. A worklet might contain more than one task in it; we can use these worklets in other workflows.
=======================================
Besides the reusability of a worklet as mentioned above, we can also use a worklet to group related sessions together in a very big workflow. Suppose we have to extract a file and then load a fact table; in the workflow we can use one worklet to load/update the dimensions.
=======================================
151.Informatica - How do you configure a mapping in Informatica?
QUESTION #151
March 17, 2006 05:34:39 #1
suresh
RE: How do you configure mapping in informatica
You should configure the mapping with the least number of transformations and expressions needed to do the most amount of work possible. You should minimize the amount of data moved by deleting unnecessary links between transformations.
For transformations that use a data cache (such as Aggregator, Joiner, Rank, and Lookup transformations), limit the connected input/output or output ports; limiting the number of connected input/output or output ports reduces the amount of data the transformations store in the data cache.
You can also perform the following tasks to optimize the mapping:
- Configure single-pass reading.
- Optimize datatype conversions.
- Eliminate transformation errors.
- Optimize transformations.
- Optimize expressions.
=======================================
152.Informatica - Can I use the session bulk loading option and at the same time make a recovery of the session?
QUESTION #152 Submitted by: SSekar
If the session is configured to use bulk mode, it will not write recovery information to recovery tables, so bulk loading will not perform the recovery as required.
Above answer was rated as good by the following members:
sn3508
It's not possible
=======================================
If the session is configured to use in bulk mode it will not write recovery
information to recovery tables.
So Bulk loading will not perform the recovery as required.
=======================================
No, because with a bulk load no redo log entries are created (as they are with a normal load), so recovery is not possible; but with a bulk load, session performance increases.
=======================================
153.Informatica - What is the difference between COM and DCOM?
QUESTION #153
October 05, 2006 01:23:48 #1
balaji
RE: what is difference between COM & DCOM?
Hi,
COM is a technology developed by Microsoft based on object-oriented design. COM components expose their interfaces at an interface pointer, where the client accesses the component's interface.
DCOM is the protocol that enables software components on different machines to communicate with each other through a network.
=======================================
154.Informatica - What are the enhancements made in Informatica version 7.1.1 compared to 6.2.2?
QUESTION #154
April 04, 2006 01:07:29 #1
sn3508 Member Since: April 2006 Contribution: 20
RE: what are the enhancements made to Informatica 7.1....
I'm a newbie; correct me if I'm wrong.
In the 7.x versions:
- we can look up a flat file
- Union and Custom transformations
- there is a propagate option, i.e., if we change the datatype of a field, all the linked columns will reflect that change
- we can write to an XML target
- we can use up to 64 partitions
=======================================
1. Union and Custom transformations
2. lookup on a flat file
3. we can use the pmcmd command
4. we can export independent and dependent repository objects
5. version control
6. data profiling
7. support for 64-bit architecture
8. LDAP authentication
=======================================
155.Informatica - How do you create a mapping using multiple Lookup transformations?
QUESTION #155
March 30, 2006 16:26:57 #1
Sri
RE: how do you create a mapping using multiple lookup ...
Use Multiple Lookups in the mapping
=======================================
Use an unconnected Lookup if the same lookup repeats multiple times.
=======================================
Do you mean only Lookup transformations and no other transformations? Please be clear. If not, one scenario is that we can use multiple connected Lookup transformations depending upon the target: if your target table has wh_key columns and your source table only has the plain columns but not the wh_keys, then in order to map a source column to a wh_key column (e.g. sales_branch -> wh_sales_branch) we use multiple lookups, depending upon the targets.
=======================================
156. Informatica - What is the exact meaning of domain?
QUESTION #156
May 01, 2006 16:51:12 #1
kalyan
RE: what is the exact meaning of domain?
A domain is a particular environment, or a name that identifies one or more IP addresses. Examples: gov - government agencies, edu - educational institutions, org - organizations (nonprofit), mil - military, com - commercial business, net - network organizations, ca - Canada, th - Thailand, in - India.
=======================================
A domain is nothing but complete information on a particular subject area, like the sales domain, the telecom domain, etc.
=======================================
Domain in Informatica means a central global repository (GDR) along with the local repositories (LDR) registered to this GDR. This is possible only in PowerCenter and not PowerMart.
=======================================
In database parlance, you can define a domain as the set of all possible permissible values for an attribute. For example, the domain for the attribute Credit Card No# consists of all possible valid 16-digit numbers.
Thanks.
=======================================
157. Informatica - What are the hierarchies in DWH?
QUESTION #157
May 01, 2006 16:37:54 #1
kalyan
RE: what is the hierarchies in DWH
Data sources ---> Data acquisition ---> Warehouse ---> Front end tools --->
Metadata management --->
Data warehouse operation management
=======================================
A hierarchy in a DWH is nothing but an ordered series of related dimension attributes grouped together to perform multidimensional analysis.
=======================================
158. Informatica - Difference between Rank and Dense Rank?
QUESTION #158 Submitted by: sm1506
Rank:
1
2<--2nd position
2<--3rd position
4
5
The same rank is assigned to the same totals/numbers, and the next rank follows the position (so ranks are skipped). Golf usually ranks this way; this is usually a Golf ranking.
Dense Rank:
1
2<--2nd position
2<--3rd position
3
4
The same rank is assigned to the same totals/numbers/names; the next rank follows the serial number, with no gaps.
Above answer was rated as good by the following members:
sn3508
=======================================
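The two numbering rules above can be sketched in plain Python (a generic illustration of rank vs. dense rank, not Informatica's Rank transformation):

```python
def rank_and_dense_rank(values):
    """Return (standard ranks, dense ranks) for a list of totals,
    ranked in descending order as in the example above."""
    ordered = sorted(values, reverse=True)
    standard, dense = [], []
    prev, rank, dense_rank = None, 0, 0
    for position, value in enumerate(ordered, start=1):
        if value != prev:
            rank = position      # standard rank jumps to the position (gaps)
            dense_rank += 1      # dense rank always increments by 1 (no gaps)
            prev = value
        standard.append(rank)
        dense.append(dense_rank)
    return standard, dense
```

For example, rank_and_dense_rank([50, 40, 40, 30, 20]) gives ([1, 2, 2, 4, 5], [1, 2, 2, 3, 4]).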
159. Informatica - Can anyone explain incremental aggregation with an example?
QUESTION #159
April 09, 2006 21:15:24 #1
maverickwild Member Since: November 2005 Contribution: 3
RE: can anyone explain about incremental aggregation w...
When you use the Aggregator transformation to aggregate, it creates index and data caches to store the data: 1. of the group-by columns; 2. of the aggregate columns.
Incremental aggregation is used when we have historical data in place which will be used in aggregation. Incremental aggregation uses the cache, which contains the historical data. For each group-by column value already present in the cache, it adds the incoming data value to its corresponding data cache value and outputs the row; when an incoming value has no match in the index cache, new values for the group-by and output ports are inserted into the cache.
=======================================
Incremental aggregation is specially used to tune the performance of the Aggregator. It captures the changes each time (incrementally) you run the session and then applies the aggregation function only to the changed rows, not to the entire source. This improves performance because you are not reading the entire source each time you run the session.
=======================================
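The caching behaviour described above can be sketched as follows (a hypothetical illustration of the idea, not the server's actual cache implementation):

```python
def incremental_aggregate(cache, new_rows):
    """Merge only the new rows into the historical aggregate cache,
    keyed by the group-by column (illustrative sketch)."""
    for group_key, amount in new_rows:
        # a match in the index cache updates the existing total;
        # a miss inserts a new group into the cache
        cache[group_key] = cache.get(group_key, 0) + amount
    return cache
```

Starting from a historical cache {'A': 100}, loading only the new rows [('A', 10), ('B', 5)] yields {'A': 110, 'B': 5} without re-reading the historical source.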
160. Informatica - What is meant by Junk Attribute in Informatica?
QUESTION #160
April 17, 2006 10:10:23 #1
raghavendra
RE: What is meant by Junk Attribute in Informatica?
Junk Dimension: a dimension is called a junk dimension if it contains attributes which are rarely changed or modified. For example, in the banking domain we can fetch four attributes belonging to a junk dimension from the Overall_Transaction_master table: tput flag, tcmp flag, del flag, advance flag. All these attributes can be part of a junk dimension. Thanks & regards, raghavendra
=======================================
Hi,
In the requirement-collection phase, all the attributes that are likely to be used in any dimension are gathered. While creating a dimension we use all the related attributes of that dimension from the gathered list. At the end, a dimension is created with all the leftover attributes, which is usually called the JUNK dimension, and its attributes are called JUNK attributes.
=======================================
161. Informatica - Informatica Live Interview Questions
QUESTION #161 Here are some of the interview questions I could not answer; can anybody help by giving answers for the others also? Thanks in advance.
Explain grouped cross tab?
Explain reference cursor
What are parallel queries and query hints
What is metadata and system catalog
What is factless fact schema
What is conformed dimension
Which kind of index is preferred in DWH
Why do we use a DSS database for OLAP tools
April 17, 2006 07:27:07 #1
binoy_pa Member Since: April 2006 Contribution: 5
RE: Informatica Live Interview Questions
=======================================
Conformed dimension: one dimension that is shared by two fact tables.
Factless means a fact table without measures; it only contains foreign keys. There are two types of factless fact tables: one is event tracking and the other is a coverage table.
Bitmap indexes are preferred in data warehousing.
Metadata is data about data; everything is stored there, for example mappings, sessions, privileges and other data. In Informatica we can see the metadata in the repository.
The system catalog, which we use in Cognos, also contains data tables, privileges, predefined filters, etc.; using this catalog we generate reports.
A grouped cross tab is a type of report in Cognos where we have to assign 3 measures for getting the result.
=======================================
Hi Bin,
I doubt your answer about the grouped cross tab, where you said 3 measures are to be specified, which I feel is wrong. I think that a grouped cross tab has only one measure, but the side and row headers are grouped, like:
India China
Mah | Goa XYZ | PQR
2000 20K 30K 45K 55K
2001 39K 60K 34K 66K
Here the cross tab is grouped on Country and then State. Similarly we can go further and drill Year down to Quarters. This is known as a grouped cross tab.
=======================================
A reference cursor is a cursor which is not declared in the declaration section but in the executable section, where we can give the table name dynamically, so that the cursor can fetch the data from that table.
=======================================
A grouped cross tab is a single report which contains a number of crosstab reports based on the grouped items. Here, countries are the grouped items:
INDIA
M1 M2
Banglore 542 542
Hyderabad 255 458
Chennai 45 254
USA
M1 M2
LA 578 5876
Chicago 4785 546
Washington DC 548 556
PAKISTAN
M1 M2
Lahore 457 875
Karachi 458 687
Islamabad 7894 64
Thanks
=======================================
Hi,
The rest of the answers were given by friends earlier.
DSS -> Decision Support System.
The purpose of a DWH is to provide users data through which they can make their critical business decisions. A DSS database is nothing but a DWH. OLAP tools obviously use data from a DWH, which is transformed to generate reports. These reports are used by the users/analysts to extract strategic information which helps in decision making.
=======================================
The default index type for a DWH is the bitmap (non-unique) index.
=======================================
162. Informatica - How do we remove the staging area?
QUESTION #162
June 08, 2006 12:39:29 #1
Hanu
RE: how do we remove the staging area
Are you talking about the DW or about Informatica?
If you don't want any staging area, don't create a staging DB; load data directly into the target. Hanu.
=======================================
Hi,
This question is logically not correct. A staging area is just a set of intermediate tables. You can create or maintain these tables in the same database as your DWH or in a different DB. These tables are used to store data from the source, which is cleaned, transformed, and undergoes some business logic. Once the source data is done with the above process, data from STG is populated to the final fact table through a simple one-to-one mapping.
=======================================
Hi,
1) At the DB level we can simply DROP the staging table.
2) Then create the target table at the DB level.
3) Directly LOAD the records into the target table.
NOTE: It is recommended to use a staging area.
=======================================
163. Informatica - What is polling?
QUESTION #163 It displays update information about the session in the monitor window.
May 01, 2006 16:32:38 #1
kalyan
RE: what is polling?
=======================================
It displays the updated information about the session in the monitor window. The
monitor window
displays the status of each session when you poll the Informatica server.
=======================================
164.Informatica - What is Transaction?
QUESTION #164
April 14, 2006 09:08:18 #1
vishali
RE: What is Transaction?
A transaction can be defined as a DML operation, meaning it can be an insertion, modification, or deletion of data performed by users/analysts/applications.
=======================================
A transaction is nothing but changing from one window to another window during a process.
=======================================
Transaction is a set of rows bound by commit or rollback.
=======================================
Hi,
A transaction is any event that indicates some action.
In DB terms, any committed change that occurs in the database is said to be a transaction.
=======================================
165. Informatica - What happens if you try to create a shortcut to a non-shared folder?
QUESTION #165
April 14, 2006 14:59:17 #1
sunil_reddy Member Since: April 2006 Contribution: 2
RE: what happens if you try to create a shortcut to a ...
It only creates a copy of it.
=======================================
166. Informatica - In a Joiner transformation, you should specify the source with fewer rows as the master source. Why?
QUESTION #166
April 14, 2006 09:02:38 #1
vishali
RE: In a joiner trasformation, you should specify the ...
In a Joiner transformation, the Informatica server reads all the records from the master source and builds index and data caches based on the master table rows. After building the caches, the Joiner transformation reads records from the detail source and performs the joins.
=======================================
The Joiner transformation compares each row of the master source against the detail source. The fewer unique rows in the master, the fewer iterations of the join comparison occur, which speeds the join process.
=======================================
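The reasoning above can be sketched as a hash join: the master side is cached once, so making the smaller source the master keeps the cache small (an illustration of the principle, not Informatica internals):

```python
def joiner(master_rows, detail_rows):
    """Cache the master rows by join key, then stream the detail rows
    against that cache, emitting (key, master_value, detail_value)."""
    cache = {}
    for key, value in master_rows:          # build cache from the master
        cache.setdefault(key, []).append(value)
    joined = []
    for key, value in detail_rows:          # probe with each detail row
        for master_value in cache.get(key, []):
            joined.append((key, master_value, value))
    return joined
```

The cache is proportional to the master source, which is why the source with fewer rows should be the master.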
167. Informatica - Where is the cache stored in Informatica?
QUESTION #167
April 15, 2006 02:14:01 #1
shekar25_g Member Since: April 2006 Contribution: 2
RE: Where is the cache stored in informatica?
The cache in Informatica is stored on the Informatica server.
=======================================
Hi,
The cache is stored in the Informatica server memory, and overflowed data is stored on the disk in file format, which will be automatically deleted after the successful completion of the session run. If you want to keep that data you have to use a persistent cache.
=======================================
168. Informatica - Can batches be copied/stopped from Server Manager?
QUESTION #168
May 08, 2006 05:24:58 #1
MOOTATI RAGHAVENDROA REDDY
RE: can batches be copied/stopped from server manager?...
Yes, we can stop the batches using Server Manager or the pmcmd command.
=======================================
169. Informatica - What is a Rank transformation? Where can we use this transformation?
QUESTION #169
April 18, 2006 01:56:38 #1
madhan
RE: what is rank transformation?where can we use this ...
The Rank transformation is used to find ranking status. For example, if we have a sales table in which many employees sell the same product and we need to find the top 5 or 10 employees who sell the most products, we can go for the Rank transformation.
=======================================
It is used to filter the data from the top/from the bottom according to the condition.
=======================================
It arranges records in hierarchical order and selects TOP or BOTTOM records. It is similar to the START WITH and CONNECT BY PRIOR clauses.
=======================================
It is an active transformation which is used to identify the top and bottom values based on numerics. By default it creates a RANKINDEX port to calculate the rank.
=======================================
170. Informatica - Can Informatica load heterogeneous targets from heterogeneous sources?
QUESTION #170
April 26, 2006 01:22:19 #1
Anant
RE: Can Informatica load heterogeneous targets from he...
Yes, it can. For example, flat-file and relational sources are joined in the mapping, and later flat-file and relational targets are loaded.
=======================================
Yes, Informatica can load the data from heterogeneous sources to heterogeneous targets.
=======================================
171. Informatica - How do you load the time dimension?
QUESTION #171
April 25, 2006 08:32:33 #1
Appadu Dora P
RE: how do you load the time dimension.
The time dimension is generally loaded manually by using PL/SQL, shell scripts, Pro*C, etc.
=======================================
Create a procedure to load data into the time dimension. The procedure needs to run only once to populate all the data. For example, the code below fills up till 2015. You can modify the code to suit the fields in your table.
create or replace procedure QISODS.Insert_W_DAY_D_PR as
  LastSeqID number default 0;
  loaddate Date default to_date('12/31/1979', 'mm/dd/yyyy');
begin
  Loop
    LastSeqID := LastSeqID + 1;
    loaddate := loaddate + 1;
    INSERT into QISODS.W_DAY_D values(
      LastSeqID,
      Trunc(loaddate),
      Decode(TO_CHAR(loaddate, 'Q'), '1', 1, decode(to_char(loaddate, 'Q'), '2', 1, 2)),
      TO_FLOAT(TO_CHAR(loaddate, 'MM')),
      TO_FLOAT(TO_CHAR(loaddate, 'Q')),
      trunc((ROUND(TO_DECIMAL(to_char(loaddate, 'DDD'))) +
             ROUND(TO_DECIMAL(to_char(trunc(loaddate, 'YYYY'), 'D'))) + 5) / 7),
      TO_FLOAT(TO_CHAR(loaddate, 'YYYY')),
      TO_FLOAT(TO_CHAR(loaddate, 'DD')),
      TO_FLOAT(TO_CHAR(loaddate, 'D')),
      TO_FLOAT(TO_CHAR(loaddate, 'DDD')),
      1, 1, 1, 1, 1,
      TO_FLOAT(TO_CHAR(loaddate, 'J')),
      ((TO_FLOAT(TO_CHAR(loaddate, 'YYYY')) + 4713) * 12) +
        TO_number(TO_CHAR(loaddate, 'MM')),
      ((TO_FLOAT(TO_CHAR(loaddate, 'YYYY')) + 4713) * 4) +
        TO_number(TO_CHAR(loaddate, 'Q')),
      TO_FLOAT(TO_CHAR(loaddate, 'J')) / 7,
      TO_FLOAT(TO_CHAR(loaddate, 'YYYY')) + 4713,
      TO_CHAR(loaddate, 'Day'),
      TO_CHAR(loaddate, 'Month'),
      Decode(To_Char(loaddate, 'D'), '7', 'weekend', '6', 'weekend', 'weekday'),
      Trunc(loaddate, 'DAY') + 1,
      Decode(Last_Day(loaddate), loaddate, 'y', 'n'),
      to_char(loaddate, 'YYYYMM'),
      to_char(loaddate, 'YYYY') || ' Half' ||
        Decode(TO_CHAR(loaddate, 'Q'), '1', 1, decode(to_char(loaddate, 'Q'), '2', 1, 2)),
      TO_CHAR(loaddate, 'YYYY / MM'),
      TO_CHAR(loaddate, 'YYYY') || ' Q ' || TRUNC(TO_number(TO_CHAR(loaddate, 'Q'))),
      TO_CHAR(loaddate, 'YYYY') || ' Week' || TRUNC(TO_number(TO_CHAR(loaddate, 'WW'))),
      TO_CHAR(loaddate, 'YYYY'));
    If loaddate >= to_Date('12/31/2015', 'mm/dd/yyyy') Then
      Exit;
    End If;
  End Loop;
  commit;
end Insert_W_DAY_D_PR;
=======================================
172. Informatica - What is a hash table in Informatica?
QUESTION #172
May 03, 2006 15:13:30 #1
uma bojja
RE: what is hash table informatica?
Hash partitioning is the type of partitioning supported by Informatica where the hash user keys are specified.
Can you please explain more on this?
=======================================
I do not know the exact answer, Uma, but I am telling as per my knowledge: a hash table is used to extract the data through the Java Virtual Machine. If you know more about this, please send it to me.
=======================================
Hash partitions are somewhat similar to database partitions. This allows the user to partition the data, which is fetched from the source, by range. This is handy while handling partitioned tables.
--Kr
=======================================
Use it when you want the Informatica Server to distribute rows to the partitions by group.
=======================================
In hash partitioning, the Informatica Server uses a hash function to group rows of data among partitions. The Informatica Server groups the data based on a partition key. Use hash partitioning when you want the Informatica Server to distribute rows to the partitions by group. For example, you need to sort items by item ID, but you do not know how many items have a particular ID number.
cheers, karthik
=======================================
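The grouping idea can be sketched like this (a hypothetical illustration of hash partitioning, not the server's actual algorithm):

```python
def hash_partition(rows, key, num_partitions):
    """Assign each row (a dict) to a partition by hashing its
    partition-key value, so rows in the same group always land in
    the same partition (illustrative sketch)."""
    partitions = [[] for _ in range(num_partitions)]
    for row in rows:
        # the hash of the key value, modulo the partition count,
        # picks the partition deterministically for that group
        partitions[hash(row[key]) % num_partitions].append(row)
    return partitions
```

Because the partition is derived from the key alone, all rows with the same item ID end up in one partition, no matter how many there are.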
173.Informatica - What is meant by EDW?
QUESTION #173
May 04, 2006 10:06:21 #1
Uma Bojja Member Since: May 2006 Contribution: 7
RE: What is meant by EDW?
EDW
~~~~~
It is a big data warehouse, i.e. centralized data warehousing, the old style of warehouse.
It is a single enterprise data warehouse (EDW) with no associated data marts or operational data store (ODS) systems.
=======================================
If the warehouse is built across a particular vertical of the company, it is called an enterprise data warehouse. It is limited to that particular vertical. For example, if the warehouse is built across the sales vertical, then it is termed EDW for the sales hierarchy.
=======================================
EDW is Enterprise Data Warehouse, which means it is a centralised DW for the whole organization. This is the Inmon approach, which relies on having a single centralised warehouse, whereas the Kimball approach says to have separate data marts for each vertical/department.
Advantages of having an EDW:
1. Global view of the data
2. Same point of source of data for all the users across the organization
3. Able to perform consistent analysis on a single data warehouse
A drawback to overcome is the time it takes to develop, and also the management that is required to build a centralised database.
Thanks
Yugandhar
=======================================
174. Informatica - How to load the data from PeopleSoft HRM to PeopleSoft ERM using Informatica?
QUESTION #174
May 08, 2006 14:00:35 #1
Uma Bojja Member Since: May 2006 Contribution: 7
RE: how to load the data from people soft hrm to peopl...
The following are necessary:
1. A PowerConnect license
2. Import the source and target from PeopleSoft using ODBC connections
3. Define a connection under the Application Connection Browser for the PeopleSoft source/target in Workflow Manager; select the proper connection (PeopleSoft with Oracle, Sybase, DB2, or Informix) and execute like a normal session.
=======================================
175. Informatica - What are the measure objects?
QUESTION #175
May 15, 2006 00:50:56 #1
karthikeyan
RE: what are the measure objects
Aggregate calculations like sum, avg, max, and min - these are the measure objects.
=======================================
176. Informatica - What is the difference between STOP & ABORT in Informatica at session level?
QUESTION #176
May 18, 2006 05:13:07 #1
rkarthikeyan
RE: what is the diff b/w STOP & ABORT in INFORMATICA s...
Stop: we can restart the session.
Abort: we cannot restart the session; we should truncate all the pipeline targets and after that start the session again.
=======================================
Stop: after issuing stop, the PCS processes all those records which it got from the source qualifier and writes them to the target.
Abort: it works in the same way as stop, but there is a timeout period of 60 sec.
=======================================
177. Informatica - What is a surrogate key? In your project, in which situation did you use it? Explain with an example.
QUESTION #177
May 22, 2006 09:14:55 #1
afzal
RE: what is surrogatekey ? In ur project in which situ...
A surrogate key is a system-generated/artificial key/sequence number, or a substitution for the natural primary key. It is just a unique identifier or number for each row that can be used as the primary key of the table. The only requirement for a surrogate primary key is that it is unique for each row in the table. It is useful because the natural primary key (i.e. Customer Number in the Customer table) can change, and this makes updates more difficult. But in my project I felt that the primary reason for the surrogate keys was to record the changing context of the dimension attributes (particularly for SCDs). Another reason is that they are integers, and integer joins are faster.
=======================================
A surrogate key is a unique identifier for each row; it can be used as a primary key for the DWH. The DWH does not depend on primary keys generated by OLTP systems for internally identifying the records. When a new record is inserted into the DWH, primary keys are automatically generated; such keys are called SURROGATE KEYs. Advantages: 1. a flexible mechanism for handling SCDs; 2. we can save substantial storage space with integer-valued surrogate keys.
=======================================
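A minimal sketch of the idea (the table layout and keys here are hypothetical):

```python
import itertools

# system-generated sequence, independent of any natural/OLTP key
next_key = itertools.count(1)

dim_customer = {}
for customer_no, name in [("C100", "Christina"), ("C200", "John")]:
    # the surrogate key, not the OLTP customer number, becomes the PK
    dim_customer[next(next_key)] = {"customer_no": customer_no, "name": name}
```

If C100's natural key ever changes in the source system, only the customer_no attribute changes; the surrogate primary key 1 stays stable, so fact-table joins are unaffected.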
178. Informatica - What is partitioning? Where can we use partitioning? What are the advantages? Is it necessary?
QUESTION #178
May 22, 2006 08:51:38 #1
afzal
RE: what is Partitioning ? where we can use Partition?...
The Partitioning Option increases PowerCenter's performance through parallel data processing. This option provides a thread-based architecture and automatic data partitioning that optimizes parallel processing on multiprocessor and grid-based hardware environments.
=======================================
Partitions are used to optimize session performance. We can select partitions in session properties. Types:
default - pass-through partition
key-range partition
round-robin partition
hash partition
=======================================
Hi,
In Informatica we can tune performance at 5 different levels, that is, at the source level, target level, mapping level, session level, and network level.
So to tune the performance at session level we go for partitioning, and again we have 4 types of partitioning: pass-through, hash, round-robin, and key-range.
Pass-through is the default one.
In hash we again have 2 types, that is, user-defined and automatic.
Round-robin cannot be applied at the source level; it can be used at some transformation level.
Key-range can be applied at both source and target levels.
If you want me to explain each partitioning type in detail, then I can.
Hi Nimmi, please explain complete partitioning to me. I need a clear picture: what transformations it will restrict, how it will restrict them, and where we have to set it.
Thanks,
Madhu.
=======================================
179. Informatica - How can we eliminate duplicate rows from a flat file?
QUESTION #179
May 22, 2006 04:26:30 #1
Karthikeya
RE: hwo can we eliminate duplicate rows from flat file...
Keep an Aggregator between the Source Qualifier and the target and choose the key field in group-by; it will eliminate the duplicate records.
=======================================
Hi, before loading to the target, use an Aggregator transformation and make use of the group-by function to eliminate the duplicates on columns. Nanda
=======================================
Use a Sorter transformation. When you configure the Sorter transformation to treat output rows as distinct, it configures all ports as part of the sort key. It therefore discards duplicate rows compared during the sort operation.
=======================================
Use a Sorter transformation and select the distinct option; duplicate rows will be eliminated.
=======================================
If you want to delete the duplicate rows in flat files, then we go for the Rank transformation or an Oracle external procedure transformation: select all ports in group-by and select one field for rank; duplicates are easily removed now.
=======================================
Hi
using Sorter Transformation we can eliminate the Duplicate Rows from Flat file
Thanks
N.Sai
=======================================
To eliminate the duplicates in flat files we have the distinct property in the Sorter transformation. If we enable that property, it will automatically remove duplicate rows in flat files.
=======================================
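The Sorter's "distinct" behaviour can be sketched as treating the entire row as the key and keeping only the first occurrence (an illustration of the idea, not Informatica's implementation):

```python
def distinct_rows(rows):
    """Drop exact duplicate rows, keeping the first occurrence of each."""
    seen = set()
    unique = []
    for row in rows:
        if row not in seen:   # the whole row acts as the distinct key
            seen.add(row)
            unique.append(row)
    return unique
```

For example, distinct_rows([(1, "a"), (1, "a"), (2, "b")]) keeps only one (1, "a") row.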
180. Informatica - How to generate the Metadata Reports in Informatica?
QUESTION #180
June 01, 2006 07:27:14 #1
balanagdara Member Since: April 2006 Contribution: 4
RE: How to Generate the Metadata Reports in Informatic...
Hi Venkatesan,
You can generate PowerCenter Metadata Reporter from a browser on any workstation.
Bala Dara
=======================================
Hi,
You can generate PowerCenter Metadata Reporter from a browser on any workstation, even a workstation that does not have PowerCenter tools installed.
Bala Dara
=======================================
Hey Bala, can you be more specific about how to generate the metadata report in Informatica?
=======================================
Yes, we can generate reports using Metadata Reporter. It is a web-based application used only for creating metadata reports in Informatica.
Using Metadata Reporter we can connect to the repository and get the metadata without knowledge of SQL and technical skills.
I think this answers it; if yes, reply to me.
Kumar
=======================================
181. Informatica - Can you tell me how to go for SCDs and their types? Where do we use them mostly?
QUESTION #181
June 08, 2006 08:53:46 #1
priyamayee Member Since: June 2006 Contribution: 3
RE: Can u tell me how to go for SCD's and its types.Wh...
Hi. It depends on the business requirement you have. We use SCDs to maintain
history (changes/updates) in the dimensions. Each SCD type has its own way of
storing/updating/maintaining the history. For example, a customer dimension is an
SCD because the customer can change his/her address, contact name, or anything.
These types of changes in dimensions are known as SCDs, and we use the three
different SCD types to handle these changes and the history.
Bye, Mayee
=======================================
The Slowly Changing Dimension problem is a common one particular to data
warehousing. In a nutshell, this applies to cases where an attribute of a record
varies over time. We give an example below:

Christina is a customer with ABC Inc. She first lived in Chicago, Illinois, so
the original entry in the customer lookup table has the following record:

Customer Key   Name        State
1001           Christina   Illinois

At a later date she moved to Los Angeles, California, in January 2003. How should
ABC Inc. now modify its customer table to reflect this change? This is the Slowly
Changing Dimension problem. There are in general three ways to solve this type of
problem, and they are categorized as follows:

In Type 1 Slowly Changing Dimension, the new information simply overwrites the
original information. In other words, no history is kept. In our example, recall
we originally have the following table:

Customer Key   Name        State
1001           Christina   Illinois

After Christina moved from Illinois to California, the new information replaces
the old record and we have the following table:

Customer Key   Name        State
1001           Christina   California

Advantages: This is the easiest way to handle the Slowly Changing Dimension
problem, since there is no need to keep track of the old information.
Disadvantages: All history is lost. By applying this methodology it is not
possible to trace back in history. For example, in this case the company would
not be able to know that Christina lived in Illinois before.
Usage: About 50% of the time.
When to use Type 1: Type 1 slowly changing dimension should be used when it is
not necessary for the data warehouse to keep track of historical changes.

In Type 2 Slowly Changing Dimension, a new record is added to the table to
represent the new information. Therefore both the original and the new record
will be present. The new record gets its own primary key. In our example, recall
we originally have the following table:

Customer Key   Name        State
1001           Christina   Illinois

After Christina moved from Illinois to California, we add the new information as
a new row into the table:

Customer Key   Name        State
1001           Christina   Illinois
1005           Christina   California

Advantages: This allows us to accurately keep all historical information.
Disadvantages: This will cause the size of the table to grow fast. In cases where
the number of rows for the table is very high to start with, storage and
performance can become a concern. It also necessarily complicates the ETL
process.
Usage: About 50% of the time.
When to use Type 2: Type 2 slowly changing dimension should be used when it is
necessary for the data warehouse to track historical changes.

In Type 3 Slowly Changing Dimension, there will be two columns to indicate the
particular attribute of interest: one indicating the original value and one
indicating the current value. There will also be a column that indicates when the
current value becomes active. In our example, recall we originally have the
following table:

Customer Key   Name        State
1001           Christina   Illinois

To accommodate Type 3 Slowly Changing Dimension, we will now have the following
columns: Customer Key, Name, Original State, Current State, Effective Date. After
Christina moved from Illinois to California, the original information gets
updated, and we have the following table (assuming the effective date of change
is January 15, 2003):

Customer Key   Name        Original State   Current State   Effective Date
1001           Christina   Illinois         California      15-JAN-2003

Advantages: This does not increase the size of the table, since new information
is updated in place. It allows us to keep some part of history.
Disadvantages: Type 3 will not be able to keep all history where an attribute is
changed more than once. For example, if Christina later moves to Texas on
December 15, 2003, the California information will be lost.
Usage: Type 3 is rarely used in actual practice.
When to use Type 3: Type 3 slowly changing dimension should only be used when it
is necessary for the data warehouse to track historical changes, and when such
changes will only occur a finite number of times.
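The Type 1 and Type 2 update strategies above can be sketched in code. This is an illustrative sketch only (not Informatica logic): the dimension is modeled as a list of dicts, and the surrogate key values are hypothetical.

```python
# Sketch of SCD Type 1 and Type 2 updates on an in-memory dimension table.
# Each row is a dict; "key" is the surrogate key (values are made up).

def scd_type1(dim, name, new_state):
    """Type 1: overwrite the attribute in place -- no history kept."""
    for row in dim:
        if row["name"] == name:
            row["state"] = new_state
    return dim

def scd_type2(dim, name, new_state):
    """Type 2: add a new row with a new surrogate key -- full history kept."""
    next_key = max(row["key"] for row in dim) + 1
    dim.append({"key": next_key, "name": name, "state": new_state})
    return dim

dim = [{"key": 1001, "name": "Christina", "state": "Illinois"}]
scd_type2(dim, "Christina", "California")
print(len(dim))          # 2 -- original Illinois row plus the new California row
print(dim[-1]["state"])  # California
```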
=======================================
182.Informatica - How to export mappings to the
production
environment?
QUESTION #182
June 13, 2006 19:15:18 #1
UmaBojja
RE: How to export mappings to the production environme...
In the Designer, go to the main menu and you can see the Export/Import options.
Import the exported mapping into the production repository with the Replace option.
Thanks
Uma
=======================================
183.Informatica - how u will create header and
footer in target
using informatica?
QUESTION #183
June 13, 2006 19:05:25 #1
UmaBojja
RE: how u will create header and footer in target usin...
If your focus is on flat files, you can set the header and footer in the file
properties while creating a mapping, or at the session level in the session
properties.
Thanks
Uma
=======================================
Hi Uma,
thanks for the answer. I want the complete explanation for this question: how do
you create a header and footer in the target?
=======================================
You can always create a header and a trailer in the target file using an
Aggregator transformation.
Take the number of records as a count in the Aggregator transformation.
Create three separate files in a single pipeline.
One will be your header and the other will be your trailer coming from the
Aggregator.
The third will be your main file.
Concatenate the header and the main file in a post-session command using a shell
script.
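As a concrete (non-Informatica) illustration of the header/trailer idea, the final file assembly could look like this sketch; the file name and record layout are made up:

```python
# Build a target file with a header line, detail rows, and a trailer that
# carries the record count (the value an Aggregator's COUNT would supply).
detail_rows = ["101,Smith", "102,Jones", "103,Lee"]

with open("target.txt", "w") as out:
    out.write("HEADER|customer extract\n")       # header record
    for row in detail_rows:
        out.write(row + "\n")                    # main detail records
    out.write(f"TRAILER|{len(detail_rows)}\n")   # trailer with row count

print(open("target.txt").read().splitlines()[-1])  # TRAILER|3
```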
=======================================
184.Informatica - what is the difference between constraint
based load ordering and target load plan
QUESTION #184
June 16, 2006 14:16:55 #1
Uma Bojja Member Since: May 2006 Contribution: 7
RE: what is the difference between constraind base loa...
Constraint based load ordering
example:
Table 1 --- Master
Table 2 --- Detail
If the data in Table 2 (detail) is dependent on the data in Table 1 (master),
then Table 1 should be loaded first. In such cases, to control the load order of
the tables we need some conditional loading, which is nothing but constraint
based loading.
In Informatica this feature is enabled by just one check box at the session
level.
Thanks
Uma
=======================================
Target load order comes under the Designer properties. Click the Mappings tab in
the Designer and then Target Load Plan. It will show all the target load groups
in the particular mapping. You specify the order there, and the server will load
the targets accordingly.
A target load group is a set of source, source qualifier, transformations, and
target.
Whereas constraint based loading is a session property. Here the multiple targets
must be generated from one source qualifier, and the target tables must possess
primary/foreign key relationships, so that the server loads according to the key
relationships, irrespective of the target load order plan.
=======================================
If you have only one source and it is loading into multiple targets, you have to
use constraint based loading, but the target tables should have key
relationships between them.
If you have multiple source qualifiers loading into multiple targets, you have
to use target load order.
=======================================
Constraint based loading: if your mapping contains a single pipeline (flow) with
more than one target (and the target tables have a master-child relationship),
you need to use constraint based loading at the session level.
Target load plan: if your mapping contains multiple pipelines (flows), you
specify the execution order one by one (for example, pipeline 1 executes first,
then pipeline 2, then pipeline 3). This is purely based on pipeline dependency.
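The master-before-detail ordering that constraint-based loading enforces can be sketched outside Informatica as follows (illustrative Python; the table names and rows are hypothetical):

```python
# Load master (parent) rows before detail (child) rows so that every foreign
# key in the detail already exists in the master -- the ordering that
# constraint-based loading enforces.
master = [{"dept_id": 10, "name": "Sales"}, {"dept_id": 20, "name": "HR"}]
detail = [{"emp_id": 1, "dept_id": 10}, {"emp_id": 2, "dept_id": 20}]

loaded_master = {}
for row in master:                  # 1. load the master table first
    loaded_master[row["dept_id"]] = row

loaded_detail = []
for row in detail:                  # 2. then load detail, checking each FK
    assert row["dept_id"] in loaded_master, "FK violation"
    loaded_detail.append(row)

print(len(loaded_detail))  # 2
```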
=======================================
185.Informatica - How do we analyse the data at
database level?
QUESTION #185
June 16, 2006 14:20:55 #1
Uma Bojja Member Since: May 2006 Contribution: 7
RE: How do we analyse the data at database level?
Data can be viewed using Informatica's Designer tool.
If you want to view the data on the source/target, we can preview the data, but
with some limitations.
We can use data profiling too.
Thanks
Uma
=======================================
186.Informatica - why sorter transformation is an
active
transformation?
QUESTION #186
June 16, 2006 15:02:41 #1
Kiran Kumar Cholleti
RE: why sorter transformation is an active transformat...
It allows you to sort data either in ascending or descending order according to
a specified field. It can also be configured for case-sensitive sorting, and you
can specify whether the output rows should be distinct; in that case it will not
return all the rows.
=======================================
Because it will change the row IDs of the records transformed.
An active transformation is one where the number of records, and their row IDs,
that pass through the transformation can differ.
=======================================
This is a type of active transformation which is responsible for sorting the
data either in ascending order or descending order according to the specified
key; the port on which the sorting takes place is called the sort key port.
Properties:
if you select Distinct, duplicates are eliminated
Case Sensitive is valid for strings, to sort the data
Null Treated Low gives null values the least priority
=======================================
If any transformation has the Distinct option then it will be an active one,
because an active transformation is nothing but a transformation which can
change the number of output records. Distinct always filters the duplicate rows,
which in turn decreases the number of output records compared to input records.
One more thing: an active transformation can also behave like a passive one.
=======================================
187.Informatica - how is the union transformation
active
transformation?
QUESTION #187
June 18, 2006 09:53:25 #1
zafar
RE: how is the union transformation active transformat...
Active Transformation is one which can change the number of rows i.e input rows
and output rows
might not match. Number of rows coming out of Union transformation might not
match the incoming
rows.
Zafar
=======================================
Active Transformation: the transformation that change the no. of rows in the
Target.
Source (100 rows) ---> Active Transformation ---> Target (< or > 100 rows)
Passive Transformation: the transformation that does not change the no. of rows in
the Target.
Source (100 rows) ---> Passive Transformation ---> Target (100 rows)
Union Transformation: in Union Transformation we may combine the data from
two (or) more sources.
Assume Table-1 contains 10 rows and Table-2 contains 20 rows. If we combine the
rows of Table-1 and Table-2, we will get a total of 30 rows in the Target. So it
is definitely an Active Transformation.
=======================================
Thank you very much, Sai Venkatesh, for your answer, but in that case the Lookup
transformation should also be an active transformation, yet it is a passive
transformation.
=======================================
In an active transformation, the number of records passing through the
transformation and their row IDs will be different; it depends on the row IDs
also.
=======================================
This is a type of passive transformation which is responsible for merging the
data coming from different sources. The Union transformation functions very
similarly to the UNION ALL statement in Oracle.
=======================================
Hi Saivenkatesh, your answer is very nice. Thank you.
=======================================
Yes, since the Union transformation may lead to a change in the number of
incoming rows, it is definitely an active type.
In the other case, a Lookup can in no way change the number of rows that are
passing through it; the transformation just looks up the reference table. The
number of records increases or decreases only through the transformations that
follow the Lookup transformation.
=======================================
Are you sure that Lookup is a passive transformation?
=======================================
Ya Surely Lookup is a passive one
=======================================
Hi, you are saying the source rows are 10 + 20 = 30 and that all 30 rows are
passed to the target. By the definition of an active transformation, the number
of rows passed to the next transformation should be different, yet it passes all
30 rows. I am confused here; can anyone explain this in detail? Thanks in
advance.
=======================================
188.Informatica - what is tracing level?
QUESTION #188
June 21, 2006 10:47:28 #1
rajesh
RE: what is tracing level?
Tracing level determines the amount of information that the Informatica server
writes in a session log.
=======================================
Yes, it is the level of information stored in the session log. The option comes
in the Properties tab of transformations. By default it remains Normal. It can
be:
Verbose Initialisation
Verbose Data
Normal
or Terse.
=======================================
189.Informatica - How can we join 3 databases like Flat File,
Oracle, DB2 in Informatica? Thanks in advance
QUESTION #189
June 24, 2006 18:28:27 #1
sandeep
RE: How can we join 3 database like Flat File, Oracle,...
Hi,
by using the Joiner transformation.
=======================================
You have to use two Joiner transformations. The first one will join two tables,
and the next one will join the third with the result of the first Joiner.
=======================================
190.Informatica - Is a fact table normalized or de-normalized?
QUESTION #190 Submitted by: lakshmi
Hi
Dimension tables can be normalized or de-normalized. Facts are
always
normalized.
Regards
Above answer was rated as good by the following members:
Vamshidhar
A fact table is always normalised, since there are no redundancies!!
=======================================
Well!! A fact table is always a DENORMALISED table. It consists of data from the
dimension tables (their primary keys); the fact table has foreign keys and
measures.
Thanks!!
=======================================
The main funda of DW is de-normalizing the data for faster access by the
reporting tool. So if you are building a DW, 90% of the time it has to be
de-normalized, and of course the fact table has to be de-normalized.
Hope that answers the question...
=======================================
The fact table is always DE-NORMALIZED. Somebody answered it as normalized. If
you don't know the answers, please don't post them; don't make a lottery by
posting wrong answers.
=======================================
Hi,
I read the above comments and I am confused, so we should ask Kimball. Here is
the comment:
Fable (August 3, 2005): Dimensional models are fully denormalized.
Fact: Dimensional models combine normalized and denormalized table structures.
The dimension tables of descriptive information are highly denormalized, with
detailed and hierarchical roll-up attributes in the same table. Meanwhile, the
fact tables with performance metrics are typically normalized. While we advise
against a fully normalized design with snowflaked dimension attributes in
separate tables (creating blizzard-like conditions for the business user), a
single denormalized big wide table containing both metrics and descriptions in
the same table is also ill-advised.
Ref: http://www.kimballgroup.com/html/commentarysub2.html
=======================================
Hi
Dimension tables can be normalized or de-normalized. Facts are always
normalized.
Regards
=======================================
Hi
Dimension table can be normalized or de-normalized. But fact table is always
normalized
=======================================
Dimension table may be normalized or denormalized according to your schema but
Fact table always
will be denormalized.
=======================================
Hi all
Dimension table may be normalized or denormalized according to your schema but
Fact table always
will be denormalized.
Regards
Umesh
BSIL(Mumbai)
=======================================
hi,
please see the following site:
http://72.14.253.104/search?q=cache:lkFjt6EmsxMJ:www.kimballgroup.com/html/commentarysub2.html+fact+table+normalized+or+denormalized&hl=en&gl=us&ct=clnk&cd=1
I am highlighting what Kimball says here: Dimensional models combine
normalized and
denormalized table structures. The dimension tables of descriptive information
are highly
denormalized with detailed and hierarchical roll-up attributes in the same table.
Meanwhile the fact
tables with performance metrics are typically normalized. While we advise against
a fully normalized
with snowflaked dimension attributes in separate tables (creating blizzard-like
conditions for the
business user) a single denormalized big wide table containing both metrics and
descriptions in the
same table is also ill-advised.
Regards
lakshmi
=======================================
191.Informatica - What is the difference between
PowerCenter 7
and PowerCenter 8?
QUESTION #191
August 03, 2006 11:31:21 #1
satish
RE: What is the difference between PowerCenter 7 and P...
The major difference is that in version 8 some new transformations are added;
apart from these, the rest remains the same.
=======================================
192.Informatica - What is the difference between
PowerCenter 6
and powercenter 7?
QUESTION #192
June 30, 2006 10:28:09 #1
Gopal. P
RE: What is the difference between PowerCenter 6 and p...
1) You can look up flat files in Informatica 7.x, but you cannot look up flat
files in Informatica 6.x.
2) The External Stored Procedure transformation is not available in Informatica
7.x, but this transformation was included in Informatica 6.x.
=======================================
Also, the Union transformation is not there in 6.x, whereas it is there in 7.x.
Pradeep
=======================================
- Also, the Custom transformation is not available in 6.x
- The main difference is the version control available in 7.x
- Session-level error handling is available in 7.x
- XML enhancements for data integration in 7.x
=======================================
Hi
Tell me some architectural difference between the two.
=======================================
In 7.x there are more features compared to 6.x, like:
Transaction Control transformation
Union transformation
midstream XML transformation
flat file lookup
Custom transformation
functions like SOUNDEX and METAPHONE
193.Informatica - How to move the mapping from
one database to
another?
QUESTION #193
July 06, 2006 21:12:49 #1
martin
RE: How to move the mapping from one database to anoth...
Do you mean migration between repositories? There are 2 ways of doing this.
1. Open the mapping you want to migrate. Go to File Menu - Select 'Export
Objects' and give a name -
an XML file will be generated. Connect to the repository where you want to
migrate and then select File
Menu - 'Import Objects' and select the XML file name.
2. Connect to both the repositories. Go to the source folder and select mapping
name from the object
navigator and select 'copy' from 'Edit' menu. Now go to the target folder and select
'Paste' from 'Edit'
menu. Be sure you open the target folder.
=======================================
You can also do it this way: connect to both repositories and open the
respective folders. Keep the destination repository active. From the navigator
panel, just drag and drop the mapping into the work area. It will ask whether to
copy the mapping; say YES, and it's done.
=======================================
If we go by the direct meaning of your question, there is no need for a new
mapping for a new database; you just need to change the connections in the
Workflow Manager to run the mapping against another database.
=======================================
194.Informatica - How do we do complex mapping
by using
flatfiles / relational database?
QUESTION #194
September 28, 2006 05:56:56 #1
srinivas
RE: How do we do complex mapping by using flatfiles / ...
If we are using many business rules or many transformations, then it is called a
complex mapping. If we have flat files, then we use the flat file as a source,
or we can take relational sources, depending on availability.
=======================================
195.Informatica - How to define Informatica server?
QUESTION #195
July 06, 2006 02:45:14 #1
deeprekha
RE: How to define Informatica server?
file:///C|/Perl/bin/result.html (198 of 363)4/1/2009 7:50:58 PM
file:///C|/Perl/bin/result.html
Click Here to view complete document
The Informatica server is the main server component in the Informatica product
family. It is responsible for reading data from various source systems,
transforming the data according to business rules, and loading the data into the
target tables.
=======================================
196.Informatica - How to call stored Procedure from
Workflow
monitor in Informatica 7.1 version
QUESTION #196
October 19, 2006 16:57:37 #1
Prasanth
RE: How to call stored Procedure from Workflow monitor...
Call stored procedure using a shell script.
Invoke that shell script using command task with pmcmd.
=======================================
197.Informatica - how can we store previous session
logs
QUESTION #197
July 12, 2006 02:54:55 #1
Hareesh
RE: how can we store previous session logs
file:///C|/Perl/bin/result.html (199 of 363)4/1/2009 7:50:58 PM
file:///C|/Perl/bin/result.html
Click Here to view complete document
Just run the session in timestamp mode; then the session log will automatically
not overwrite the previous session log.
Hareesh
Hareesh
=======================================
Hi,
We can do it this way also: use $PMSessionLogCount (specify the number of runs
of the session log to save).
=======================================
Hi,
Go to the session, right-click, select Edit Task, then go to Config Object and
set the properties:
Save Session Log By --> Runs
Save Session Log for These Runs --> the number of historical session logs you
want
=======================================
198.Informatica - how can we use pmcmd command
in a
workflow or to run a session
QUESTION #198
July 14, 2006 02:31:34 #1
abc
RE: how can we use pmcmd command in a workflow or to r...
file:///C|/Perl/bin/result.html (200 of 363)4/1/2009 7:50:58 PM
file:///C|/Perl/bin/result.html
Click Here to view complete document
By using command task.
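For illustration only, a Command task (or a shell script) would typically invoke pmcmd along these lines. The service, domain, user, folder, and workflow names below are entirely hypothetical, and the exact flags vary by PowerCenter version, so treat this as a sketch of how such a command line is assembled:

```python
# Build a pmcmd 'startworkflow' command line (illustrative; all names made up).
args = {
    "-sv": "IntSvc",      # integration service name (hypothetical)
    "-d": "Domain_dev",   # domain name (hypothetical)
    "-u": "etl_user",     # repository user (hypothetical)
    "-p": "secret",       # password (hypothetical)
    "-f": "MyFolder",     # repository folder (hypothetical)
}
cmd = ("pmcmd startworkflow "
       + " ".join(f"{k} {v}" for k, v in args.items())
       + " wf_load_customers")
print(cmd)
```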
=======================================
199.Informatica - What is change data capture?
QUESTION #199
July 14, 2006 06:26:43 #1
Ajay
RE: What is change data capture?
Change data capture (CDC) is a set of software design patterns used to determine
the data that has
changed in a database so that action can be taken using the changed data.
=======================================
Can you elaborate on how you do this, please?
=======================================
Changed Data Capture (CDC) helps identify the data in the source system that has
changed since the
last extraction. With CDC data extraction takes place at the same time the insert
update or delete
operations occur in the source tables and the change data is stored inside the
database in change tables.
The change data thus captured is then made available to the target systems in a
controlled manner.
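One simple (non-Informatica) way to see what CDC has to determine is a snapshot comparison: diff yesterday's extract against today's to classify rows as inserted, updated, or deleted. A sketch with made-up data, keyed by primary key:

```python
# Classify changes between two snapshots keyed by primary key.
old = {1: "Alice,NY", 2: "Bob,LA", 3: "Carol,SF"}   # yesterday's extract
new = {1: "Alice,NY", 2: "Bob,TX", 4: "Dave,WA"}    # today's extract

inserts = [k for k in new if k not in old]                      # new keys
deletes = [k for k in old if k not in new]                      # vanished keys
updates = [k for k in new if k in old and new[k] != old[k]]     # changed rows

print(inserts, updates, deletes)  # [4] [2] [3]
```

Real CDC tools avoid this full comparison by capturing changes at the source (triggers, timestamps, or the database log), but the classification they produce is the same.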
=======================================
200.Informatica - Wat is QTP in Data Warehousing?
QUESTION #200
November 15, 2006 00:26:44 #1
srinuv_11 Member Since: October 2006 Contribution: 23
RE: Wat is QTP in Data Warehousing?
I think this question belongs to Cognos.
=======================================
201.Informatica - How can I get distinct values while
mapping in
Informatica in insertion?
QUESTION #201
July 17, 2006 13:03:40 #1
Manisha
RE: How can I get distinct values while mapping in Inf...
You can add an Aggregator before the insert and group by the fields that need to
be distinct.
=======================================
In the source qualifier, write the query with DISTINCT on the key column.
=======================================
Well,
there are two methods to get distinct values:
If the sources are databases, then we can go for a SQL override in the source
qualifier by changing the default SQL query; I mean selecting the check box
called [x] Select Distinct.
And if the sources are heterogeneous, I mean from different file systems, then
we can use the SORTER transformation, and in the transformation properties
select the check box called [x] Distinct; the same as in the source qualifier,
we can get distinct values.
=======================================
202.Informatica - what transformation you can use
inplace of
lookup?
QUESTION #202
July 17, 2006 05:52:56 #1
venkat
RE: what transformation you can use inplace of lookup?...
You can use the joiner transformation by setting as outer join of either master or
detail.
=======================================
Hi,
the Lookup transformation can serve in many situations, so if you can be a bit
more particular about the scenario you are talking about, it will be easier to
interpret.
=======================================
Hi,
in a lookup we can use either the first or the last value. Suppose the lookup
has more than one matching record and we need all matching records; in that
situation we can use a master or detail outer join instead of a lookup
(according to the logic).
=======================================
You can use joiner in place of lookup
=======================================
You can join the table which you wanted to use in the lookup within the source
qualifier, using a SQL override, to avoid using a Lookup transformation.
=======================================
203.Informatica - Why and where we are using
factless fact table?
QUESTION #203
July 18, 2006 05:08:53 #1
kumar
RE: Why and where we are using factless fact table?
Hi,
I am not sure, but you can confirm with other people.
A factless fact is nothing but non-additive measures.
EX: temperature in a fact table will be noted as Moderate/Low/High. These types
of things are called non-additive measures.
Cheers
Kumar.
=======================================
Factless fact tables are fact tables with no facts or measures (numerical data).
They contain only the foreign keys of the corresponding dimensions.
=======================================
Such fact tables are required to avoid flaking of levels within a dimension, and
to define them as a separate cube connected to the main cube.
=======================================
When a transaction can occur without a measure, it is a factless fact table, or
coverage table; for example, product samples.
=======================================
A fact table will contain metrics and FKs corresponding to the dimension tables,
but a factless fact table will contain only the FKs of the corresponding
dimensions, without any metrics.
regards, rma
=======================================
204.Informatica - tell me one complicated mapping
QUESTION #204
September 14, 2006 02:44:20 #1
srinivas
RE: tell me one complecated mapping
If we are using many business rules or many transformations, then it is a
complex mapping, like SCD Type 2 (version number, effective date range, flag,
current date).
=======================================
A mapping is nothing but the flow of data from source to target; we are giving
instructions to the PowerCenter server to move data from sources to targets
according to our business rules. If there are many business rules in the
mapping, then it is a complex mapping.
regards, rma
=======================================
205.Informatica - how do we do unit testing in informatica? how do we load data in informatica?
QUESTION #205
July 22, 2006 04:25:39 #1
Praveen kumar
Testing
Unit testing is of two types:
1. Quantitative testing
2. Qualitative testing
Steps:
1. First validate the mapping.
2. Create a session on the mapping and then run the workflow.
Once the session has succeeded, right-click on the session and go to the statistics tab. There you can see how many source rows were applied, how many rows were loaded into the targets, and how many rows were rejected. This is called quantitative testing.
If the rows are successfully loaded, then we go for qualitative testing.
Steps:
1. Take the DATM (the document where all business rules are mentioned for the corresponding source columns) and check whether the data is loaded into the target table according to the DATM. If any data is not loaded according to the DATM, then go and check the code and rectify it.
This is called qualitative testing.
This is what a developer will do in unit testing.
=======================================
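The quantitative check described above boils down to reconciling row counts from the session statistics. A minimal sketch (Python, with hypothetical counts standing in for the numbers read from the statistics tab):

```python
def quantitative_check(source_rows, target_rows, rejected_rows):
    """Pass if every source row is accounted for as either loaded or rejected."""
    return source_rows == target_rows + rejected_rows

# Hypothetical session statistics
assert quantitative_check(1000, 990, 10)       # all rows accounted for
assert not quantitative_check(1000, 985, 10)   # 5 rows silently lost -> investigate
```

Qualitative testing then compares the loaded values against the DATM rules, which cannot be reduced to a simple count.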
206.Informatica - how do we load data by using
period
dimension?
QUESTION #206
September 23, 2006 07:46:32 #1
calltomadhu Member Since: September 2006 Contribution: 34
RE: how do we load data by using period dimension?
hi,
it's very simple: through scheduling.
thanks,
madhu
=======================================
207.Informatica - How many types of facts and what
are they?
QUESTION #207
July 21, 2006 14:54:14 #1
Bala
RE: How many types of facts and what are they?
I know some: there are Additive facts, Semi-Additive facts, Non-Additive facts, Accumulating facts, Factless facts, Periodic fact tables, and Transaction fact tables.
Thanks
Bala
=======================================
There are:
Factless Facts: facts without any measures.
Additive Facts: fact data that can be added/aggregated.
Non-Additive Facts: facts that cannot be added.
Semi-Additive Facts: only a few columns of data can be added.
Periodic Facts: store only one row per transaction that happened over a period of time.
Accumulating Facts: store a row for the entire lifetime of an event.
There must be some more; if someone knows, please add.
=======================================
hi,
there are 3 types:
additive
semi-additive
non-additive
thanks,
madhu
=======================================
hi,
there are three types of facts:
Additive fact: a fact which can be summed across all dimensions.
Semi-additive fact: a fact which can be summed across some dimensions and not others.
Non-additive fact: a fact which cannot be summed across any of the dimensions.
=======================================
1. Regular Fact - With numeric values
2.Factless Fact - Without numeric values
=======================================
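The additive/semi-additive distinction above can be illustrated concretely. A small sketch (Python, with invented figures): a sales amount adds meaningfully across every dimension, while an account balance must not be summed across time:

```python
# Invented daily records: (date, sales_amount, end_of_day_balance)
rows = [
    ("2024-01-01", 100, 500),
    ("2024-01-02", 200, 700),
]

# Additive fact: summing sales amount across time is meaningful.
total_sales = sum(amount for _, amount, _ in rows)

# Semi-additive fact: summing balances across time is NOT meaningful;
# take the latest (or an average) instead.
closing_balance = rows[-1][2]

print(total_sales, closing_balance)  # 300 700
```

A non-additive fact (for example, a ratio or a temperature) cannot be summed across any dimension at all.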
208.Informatica - How can you say that Union Transformation is an Active transformation?
QUESTION #208
July 22, 2006 07:26:42 #1
kirankumarvema
RE: How can you say that Union Transformation is Active...
By using the Union transformation we can eliminate some rows, so this is an active transformation. bye
=======================================
Please explain more. We are doing the union; how is the union eliminating some rows?
=======================================
By definition, an active transformation is a transformation that changes the number of rows that pass through it. In a Union transformation, the number of rows resulting from the union can be (and usually is) different from the actual number of rows in any single input.
=======================================
Union is active because it eliminates duplicates from the sources.
=======================================
Hello Msr, your answer is wrong. Union does not eliminate duplicate rows. If anybody knows, please give me the reason. The answers above are not supporting the question; don't give the active transformation definition, I want exact reasons.
=======================================
hi,
We can merge records from multiple Source Qualifier queries in a Union transformation at the same time; it is not like an Expression transformation (which works row by row), so we can say it is active.
=======================================
Hi,
The Union transformation is an active transformation because it changes the number of rows through the pipeline. It normally has multiple input groups, compared to other transformations. Before the Union transformation was implemented (i.e. before 7.0), the rule about the number of rows was the criterion, but now it is not the exact benchmark to determine an active transformation.
Thanks,
Uma
=======================================
Hi,
Union is also an active transformation because it eliminates duplicate rows when you select the distinct option in the Union properties tab.
=======================================
Hi all,
Some people are saying that a Union transformation eliminates duplicates, but that is wrong. As of now it won't eliminate duplicates.
The Union transformation is active or passive depending upon the property "Is Active", which is present on the Union transformation's properties tab. This specifies whether the transformation is active or passive. When you enable this option, the transformation can generate 0, 1, or more output rows for each input row; otherwise it can generate only 0 or 1 output row for each input row.
But this property is disabled in Informatica 7.1.1; I think it may be developed in the future.
regards,
kiran
=======================================
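The "active" argument in the thread above comes down to row counts: a Union merges pipelines with UNION ALL semantics, so its output row count differs from that of any single input even though no duplicates are removed. A minimal simulation (Python, invented rows):

```python
def union_all(*pipelines):
    """Simulate a Union transformation: concatenate the input groups.
    Like Informatica's Union, this does NOT remove duplicates (UNION ALL)."""
    out = []
    for rows in pipelines:
        out.extend(rows)
    return out

src1 = [("A", 1), ("B", 2)]
src2 = [("B", 2), ("C", 3)]
merged = union_all(src1, src2)

# 4 output rows from inputs of 2 and 2 -- the row count through the
# transformation changed, which is the textbook "active" criterion.
assert len(merged) == 4
```

Note the duplicate ("B", 2) survives: the active/passive question is about row counts, not deduplication.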
209.Informatica - how many types of dimensions are
available in
informatica?
QUESTION #209
July 31, 2006 00:18:27 #1
hello
RE: how many types of dimensions are available in info...
three types of dimensions are available
=======================================
What are they? Please explain. What are rapidly changing dimensions?
=======================================
no
=======================================
hi, there are 3 types of dimensions:
1. star schema
2. snowflake schema
3. galaxy schema
=======================================
I think there are 3 types of dimension tables:
1. stand-alone
2. local
3. global
=======================================
There are 3 types of dimensions available according to my knowledge. They are:
1. General dimensions
2. Conformed dimensions
3. Junk dimensions
=======================================
Using the Slowly Changing Dimensions Wizard
The Slowly Changing Dimensions Wizard creates mappings to load slowly changing dimension tables:
- Type 1 Dimension mapping. Loads a slowly changing dimension table by inserting new dimensions and overwriting existing dimensions. Use this mapping when you do not want a history of previous dimension data.
- Type 2 Dimension/Version Data mapping. Loads a slowly changing dimension table by inserting new and changed dimensions using a version number and incremented primary key to track changes. Use this mapping when you want to keep a full history of dimension data and to track the progression of changes.
- Type 2 Dimension/Flag Current mapping. Loads a slowly changing dimension table by inserting new and changed dimensions using a flag to mark current dimension data and an incremented primary key to track changes. Use this mapping when you want to keep a full history of dimension data, tracking the progression of changes while flagging only the current dimension.
- Type 2 Dimension/Effective Date Range mapping. Loads a slowly changing dimension table by inserting new and changed dimensions using a date range to define current dimension data. Use this mapping when you want to keep a full history of dimension data, tracking changes with an exact effective date range.
- Type 3 Dimension mapping. Loads a slowly changing dimension table by inserting new dimensions and updating values in existing dimensions. Use this mapping when you want to keep the current and previous dimension values in your dimension table.
=======================================
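The Type 2/Version Data pattern described above can be sketched in a few lines. This is a simplified illustration (Python, version number only, no effective dates or current flag), not the wizard's actual output:

```python
dimension = []  # each row: surrogate key, natural key, attribute value, version

def scd2_upsert(dim, natural_key, value):
    """Type 2: insert a new versioned row on change; never overwrite history."""
    history = [r for r in dim if r["nk"] == natural_key]
    if history and history[-1]["value"] == value:
        return  # unchanged -> nothing to do
    version = history[-1]["version"] + 1 if history else 1
    dim.append({"sk": len(dim) + 1, "nk": natural_key,
                "value": value, "version": version})

scd2_upsert(dimension, "CUST1", "Hyderabad")
scd2_upsert(dimension, "CUST1", "Bangalore")   # change -> new row, version 2
assert len(dimension) == 2 and dimension[-1]["version"] == 2
```

A Type 1 load would instead overwrite the single existing row, losing the "Hyderabad" value.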
I want each and every one of you who is answering: please don't make fun out of this. Someone gave the answer "no"; someone gave the answer star schema, snowflake schema, etc. How can a schema come under a type of dimension?
ANSWER:
One major classification we use in our real-time modelling is slowly changing dimensions:
Type 1 SCD: if you load an updated row of a previously existing row, the previous data is replaced, so we lose historical data.
Type 2 SCD: here we add a new row for updated data, so we have both current and past records, which agrees with the data warehousing concept of maintaining historical data.
Type 3 SCD: here we add new columns.
But the mostly used one is Type 2 SCD.
We have one more type of dimension:
CONFORMED DIMENSION: a dimension which gives the same meaning across different star schemas is called a conformed dimension.
ex: the Time dimension gives the same meaning wherever it is used.
=======================================
What are those three dimensions that are available in Informatica? Here we get multiple answers; could anyone tell me the exact ones?
Thank you,
hari krishna
=======================================
casual dimension, conformed dimension, degenerate dimension, junk dimension, ragged dimension, dirty dimension
=======================================
210.Informatica - How can you improve the
performance of
Aggregate transformation?
QUESTION #210
August 06, 2006 06:20:23 #1
shanthi
RE: How can you improve the performance of Aggregate t...
By using sorter transformation before the aggregator transformation.
=======================================
by using sorted input
=======================================
hi,
we can improve the aggregator performance in the following ways:
1. Send sorted input.
2. Increase the aggregator cache size, i.e. index cache and data cache.
3. Give only the input/output you need in the transformation, i.e. reduce the number of input and output ports.
=======================================
In the Aggregator transformation, select the Sorted Input checkbox in the properties tab, and write the sorting SQL query in the Source Qualifier. This improves the performance.
=======================================
Hi,
we can improve the aggregation performance by doing the following:
create the group-by condition only on numeric columns;
use a Sorter transformation before the Aggregator and give sorted input to the Aggregator;
increase the cache size of the Aggregator.
=======================================
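Why does sorted input help? With unsorted input the Aggregator must cache every group until end of data; with input sorted on the group key it can emit each group as soon as the key changes, holding only one group in memory. A sketch of that streaming behavior (Python, invented rows):

```python
def streaming_aggregate(sorted_rows):
    """Sum amounts per key for rows pre-sorted on the key, keeping only the
    current group in memory -- the benefit behind the Sorted Input option."""
    results = []
    current_key, total = None, 0
    for key, amount in sorted_rows:
        if key != current_key:
            if current_key is not None:
                results.append((current_key, total))  # group complete: emit it
            current_key, total = key, 0
        total += amount
    if current_key is not None:
        results.append((current_key, total))
    return results

rows = [("A", 10), ("A", 5), ("B", 7)]     # already sorted on the key
assert streaming_aggregate(rows) == [("A", 15), ("B", 7)]
```

With unsorted input the same result needs a dictionary over all keys, which is what the index and data caches pay for.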
211.Informatica - Why did you use stored procedure
in your ETL
Application?
QUESTION #211
August 11, 2006 13:16:09 #1
sudha
RE: Why did you use stored procedure in your ETL Appli...
hi,
usage of a stored procedure has the following advantages:
1. checks the status of the target database
2. drops and recreates indexes
3. determines if enough space exists in the database
4. performs a specialized calculation
=======================================
Stored procedure in Informatica will be useful to impose complex business rules.
=======================================
to execute database procedures
=======================================
Hi,
Using stored procedures plays an important role. Suppose you are using an Oracle database and doing some ETL changes: if you use Informatica, every row of the table has to pass through Informatica and undergo the ETL changes specified in the transformations. If you use a stored procedure, i.e. an Oracle PL/SQL package, it runs on the Oracle database (which is the database where we need to make the changes) and it will be faster compared to Informatica, because it runs on the Oracle database itself. Some things which we can't do using tools we can do using packages. Some jobs may take hours to run; in order to save time and database usage, we can go for stored procedures.
=======================================
212.Informatica - why did you use update strategy in your application?
QUESTION #212
August 08, 2006 12:39:03 #1
angeletteeye
RE: why did you use update strategy in your application?
Update Strategy is used to drive the data to be inserted, updated, or deleted depending upon some condition. You can do this at session level too, but there you cannot define any condition. For eg: if you want to do update and insert in one mapping, you will create two flows and make one insert and one update depending upon some condition. Refer: Update Strategy in the Transformation Guide for more information.
=======================================
Update Strategy is the most important of all Informatica transformations. The basic thing one should understand is that it is the essential transformation for performing DML operations on already-populated targets (i.e. targets which contain some records before this mapping loads data).
It is used to perform DML operations: insertion, updation, deletion, rejection.
When records come to this transformation, depending on our requirement we can decide whether to insert, update, or reject the rows flowing in the mapping.
For example, take an input row: if it is already there in the target (we find this with a Lookup transformation), update it; otherwise insert it.
We can also specify conditions based on which we derive which update strategy to use.
eg: IIF(condition, DD_INSERT, DD_UPDATE)
If the condition is satisfied, do DD_INSERT; otherwise do DD_UPDATE.
DD_INSERT, DD_UPDATE, DD_DELETE, and DD_REJECT are called decode options, which perform the respective DML operations.
There is a function called DECODE to which we can pass the arguments 0, 1, 2, 3: DECODE(0), DECODE(1), DECODE(2), DECODE(3) for insertion, updation, deletion, and rejection.
=======================================
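The IIF/DD_* expression above is a per-row routing decision. A simulation of that logic (Python; the DD_* codes 0/1/2/3 match the insert/update/delete/reject numbering given in the answer, and the lookup is faked with a simple set of existing keys):

```python
DD_INSERT, DD_UPDATE, DD_DELETE, DD_REJECT = 0, 1, 2, 3

def update_strategy(row, existing_keys):
    """IIF(key already in target, DD_UPDATE, DD_INSERT) -- the lookup decides."""
    return DD_UPDATE if row["key"] in existing_keys else DD_INSERT

target_keys = {101, 102}   # keys already present in the target (lookup result)
assert update_strategy({"key": 101}, target_keys) == DD_UPDATE
assert update_strategy({"key": 999}, target_keys) == DD_INSERT
```

Each row is flagged with one of the four codes, and the session then applies the matching DML operation to the target.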
To perform DML operations.
thanks,
madhu
=======================================
213.Informatica - How do you create single lookup
transformation
using multiple tables?
QUESTION #213
August 10, 2006 16:46:28 #1
Srinivas
RE: How do you create single lookup transformation usi...
Write an override SQL query, and adjust the ports as per the SQL query.
=======================================
No, it is not possible to create a single lookup on multiple tables, because a lookup is created upon a target table.
=======================================
For a connected lookup transformation: 1) create the lookup transformation; 2) go for Skip; 3) manually enter the port names that you want to look up; 4) connect with the input ports from the source table; 5) give the condition; 6) go for Generate SQL, then modify it according to your requirement and validate. It will work.
=======================================
We can just create a view using the two tables, then take that view as the lookup table.
=======================================
If you want single lookup values to be used in multiple target tables, this can be done! For this we can use an unconnected lookup and collect the values from the source table into any target table depending upon the business rule.
=======================================
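The "override SQL / view" approach from the thread above can be pictured as joining the tables first and then looking up the joined result. A rough sketch (Python, with invented employee/department tables standing in for the real sources):

```python
# Two hypothetical source tables that the lookup override (or view) would join
employees = {1: {"name": "Ravi", "dept_id": 10}}
departments = {10: {"dept_name": "Sales"}}

# Build the "view" once: the joined result a single Lookup would query
lookup_view = {
    emp_id: {"name": e["name"],
             "dept_name": departments[e["dept_id"]]["dept_name"]}
    for emp_id, e in employees.items()
}

# One lookup key now returns columns that originally lived in two tables
assert lookup_view[1] == {"name": "Ravi", "dept_name": "Sales"}
```

The Informatica equivalent is a lookup SQL override (or a database view) whose SELECT joins the tables, with the lookup ports adjusted to match the query's columns.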
214.Informatica - In update strategy target table or
flat file which
gives more performance ? why?
QUESTION #214
August 10, 2006 04:10:37 #1
prasad.yandapalli
stored procedure
We use stored procedures for populating and maintaining databases in our mapping.
=======================================
Pros: loading, sorting, and merging operations will be faster, as there is no index concept and the data will be in ASCII mode.
Cons: there is no concept of updating existing records in a flat file, and as there are no indexes, lookup speed will be lower.
=======================================
hi, a flat file gives more performance, because there is no index concept and there are no constraints.
=======================================
215.Informatica - How to load time dimension?
QUESTION #215
August 15, 2006 04:08:18 #1
Mahesh
RE: How to load time dimension?
We can use SCD Type 1/2/3 to load any Dimensions based on the requirement.
=======================================
We can use SCD Type 1/2/3 to load data into any dimension tables as per the
requirement.
=======================================
You can load the time dimension manually by writing scripts in PL/SQL to load the time dimension table with values for a period.
Ex: I'm having my business data for 5 years, from 2000 to 2004; then load all the dates starting from 1-1-2000 to 31-12-2004 (around 1825 records), which you can do fast by writing scripts.
Bhargav
=======================================
hi,
through Type 1 / Type 2 / Type 3, depending upon the condition.
thanks,
madhu
=======================================
For loading data into other dimensions we have the respective tables in the OLTP systems, but for the time dimension we have only one base in the OLTP database. Based on that, we have to load the time dimension. We can load the time dimension using ETL procedures which call a procedure or function created in the database. If there are many columns in the time dimension, we have to create it manually by using an Excel sheet.
=======================================
Create a procedure to load data into the time dimension. The procedure needs to run only once to populate all the data. For example, the code below fills it up till 2015. You can modify the code to suit the fields in your table.
create or replace procedure QISODS.Insert_W_DAY_D_PR as
LastSeqID number default 0;
loaddate Date default to_date('12/31/1979', 'mm/dd/yyyy');
begin
Loop
LastSeqID := LastSeqID + 1;
loaddate := loaddate + 1;
INSERT into QISODS.W_DAY_D values(
LastSeqID,
Trunc(loaddate),
Decode(TO_CHAR(loaddate, 'Q'), '1', 1, decode(to_char(loaddate, 'Q'), '2', 1, 2)),
TO_NUMBER(TO_CHAR(loaddate, 'MM')),
TO_NUMBER(TO_CHAR(loaddate, 'Q')),
trunc((ROUND(TO_NUMBER(to_char(loaddate, 'DDD'))) +
ROUND(TO_NUMBER(to_char(trunc(loaddate, 'YYYY'), 'D'))) + 5) / 7),
TO_NUMBER(TO_CHAR(loaddate, 'YYYY')),
TO_NUMBER(TO_CHAR(loaddate, 'DD')),
TO_NUMBER(TO_CHAR(loaddate, 'D')),
TO_NUMBER(TO_CHAR(loaddate, 'DDD')),
1,
1,
1,
1,
1,
TO_NUMBER(TO_CHAR(loaddate, 'J')),
((TO_NUMBER(TO_CHAR(loaddate, 'YYYY')) + 4713) * 12) +
TO_NUMBER(TO_CHAR(loaddate, 'MM')),
((TO_NUMBER(TO_CHAR(loaddate, 'YYYY')) + 4713) * 4) +
TO_NUMBER(TO_CHAR(loaddate, 'Q')),
TO_NUMBER(TO_CHAR(loaddate, 'J')) / 7,
TO_NUMBER(TO_CHAR(loaddate, 'YYYY')) + 4713,
TO_CHAR(loaddate, 'Day'),
TO_CHAR(loaddate, 'Month'),
Decode(To_Char(loaddate, 'D'), '7', 'weekend', '6', 'weekend', 'weekday'),
Trunc(loaddate, 'DAY') + 1,
Decode(Last_Day(loaddate), loaddate, 'y', 'n'),
to_char(loaddate, 'YYYYMM'),
to_char(loaddate, 'YYYY') || ' Half' ||
Decode(TO_CHAR(loaddate, 'Q'), '1', 1, decode(to_char(loaddate, 'Q'), '2', 1, 2)),
TO_CHAR(loaddate, 'YYYY / MM'),
TO_CHAR(loaddate, 'YYYY') || ' Q ' || TRUNC(TO_NUMBER(TO_CHAR(loaddate, 'Q'))),
TO_CHAR(loaddate, 'YYYY') || ' Week' || TRUNC(TO_NUMBER(TO_CHAR(loaddate, 'WW'))),
TO_CHAR(loaddate, 'YYYY'));
If loaddate >= to_date('12/31/2015', 'mm/dd/yyyy') Then
Exit;
End If;
End Loop;
commit;
end Insert_W_DAY_D_PR;
=======================================
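The same one-off generation idea as the PL/SQL above can be sketched in a few lines. A minimal illustration (Python, generating only a handful of the columns a real time dimension would carry):

```python
from datetime import date, timedelta

def build_day_rows(start, end):
    """One row per calendar day: (seq_id, date, year, month, quarter, day_type)."""
    rows, d, seq = [], start, 0
    while d <= end:
        seq += 1
        quarter = (d.month - 1) // 3 + 1
        day_type = "weekend" if d.weekday() >= 5 else "weekday"
        rows.append((seq, d.isoformat(), d.year, d.month, quarter, day_type))
        d += timedelta(days=1)
    return rows

rows = build_day_rows(date(2000, 1, 1), date(2000, 1, 7))
assert len(rows) == 7
assert rows[0][5] == "weekend"   # 2000-01-01 was a Saturday
```

Either way, the key point from the thread holds: the time dimension is generated once for a whole date range, not loaded incrementally from an OLTP source.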
216.Informatica - what is the architecture of any
Data
warehousing project? what is the flow?
QUESTION #216
August 21, 2006 06:52:22 #1
satyaneerumalla Member Since: August 2006 Contribution: 16
RE: what is the architecture of any Data warehousing p...
Click Here to view complete document
1) The basic step of data warehousing starts with data modelling, i.e. creation of dimensions and facts.
2) A data warehouse starts with the collection of data from source systems such as OLTP, CRM, ERPs, etc.
3) The cleansing and transformation process is done with an ETL (Extraction, Transformation, Loading) tool.
4) By the end of the ETL process, the target databases (dimensions, facts) are ready with data which accomplishes the business rules.
5) Now, finally, with the use of reporting (OLAP) tools, we can get the information which is used for decision support.
=======================================
very nice answer
thanks
=======================================
Nice answer. I have more doubts; can you give me your mail id?
=======================================
217.Informatica - What are the questions asked in
PDM Round
(Final Hr round)
QUESTION #217
November 14, 2006 02:08:00 #1
srinuv_11 Member Since: October 2006 Contribution: 23
RE: What are the questions asked in PDM Round(Final Hr...
We can't say which questions are asked in which round; that will depend on the interviewer. In the HR round they will ask things like: what about your current company, tell something about your company, why are you preferring this company, and tell me something about my company. They can ask any type of question. These are sample questions.
=======================================
218.Informatica - What is the difference between
materialized
view and a data mart? Are they same?
QUESTION #218
August 28, 2006 09:46:52 #1
satyaneerumalla Member Since: August 2006 Contribution: 16
RE: What is the difference between materialized view a...
Hi friend
please elaborate what do you mean by materialized view.
Then i think i can help you to clear your doubt.
mail me at satya.neerumalla@tcs.com
=======================================
A materialized view provides indirect access to table data by storing the results of a query in a separate schema object, unlike an ordinary view, which does not take up any storage space or contain data. Materialized views are schema objects that can be used to summarize, precompute, replicate, and distribute data, e.g. to construct a data warehouse. The definition of a materialized view is very near to the concept of cubes, where we keep summarized data, but cubes occupy space.
Coming to the data mart, that is a completely different concept. A data warehouse contains an overall view of the organization, but a data mart is specific to a subject area like Finance, etc. We can combine different data marts of a company to form a data warehouse, or we can split a data warehouse into different data marts.
=======================================
hi,
A view directly connects to the table; it won't contain data, and whatever we ask, it fires the query on the table and gives the data, e.g. sum(sal).
Materialized views connect indirectly to the data and are stored in a separate schema. A materialized view is just like a cube in DW: it will have data. Whatever summarized information we ask, it gives and stores it; when we ask next time, it answers directly from there itself.
Performance-wise, these views are faster than a normal view.
=======================================
219.Informatica - In workflow can we send multiple
email ?
QUESTION #219
August 28, 2006 15:13:20 #1
prudhvi
RE: In workflow can we send multiple email ?
yes
we can send multiple e-mail in a workflow
=======================================
Yes, but only on the UNIX version of Workflow, not the Windows-based version.
=======================================
220.Informatica - How do we load from PL/SQL
script into
Informatica mapping?
QUESTION #220
August 28, 2006 09:43:04 #1
satyaneerumalla Member Since: August 2006 Contribution: 16
RE: How do we load from PL/SQL script into Informati...
You can use the Stored Procedure transformation; there you can specify the PL/SQL procedure name. When we run the session containing this transformation, the PL/SQL procedure gets executed.
If you want more clarification, either elaborate your question or mail me at satya.neerumalla@tcs.com
=======================================
hi,
for database procedures we have the Stored Procedure transformation; we can use that one.
thanks,
madhu
You can actually create a view and import it as source in mapping ....
=======================================
221.Informatica - can any one tell why we are
populating time
dimension only with scripts not with mapping?
QUESTION #221
September 23, 2006 07:07:31 #1
calltomadhu Member Since: September 2006 Contribution: 34
RE: can any one tell why we are populating time dimens...
hi,
because the time dimension is a rapidly changing dimension. If you use a mapping, it is a very big job, and that too a very big problem performance-wise.
thanks,
madhu
=======================================
How can the time dimension be a rapidly changing dimension? The time dimension is one table where you load date- and time-related information so that its key can be used in facts. This way you don't have to use the entire date in the fact and can rather use the time key. There are a number of advantages in performance and simplicity of design with this strategy.
You use a script to load the time dimension because you load it one time. As I said earlier, all it contains are dates starting from one point of time, say 01/01/1800, to some date in the future, say 01/01/3001.
222.Informatica - What about rapidly changing
dimensions?Can u
analyze with an example?
QUESTION #222
September 04, 2006 09:05:10 #1
satyaneerumalla Member Since: August 2006 Contribution: 16
RE: What about rapidly changing dimensions?Can u analy...
hi,
Rapidly changing dimensions are those in which values are changing continuously, giving a lot of difficulty in maintaining them. I am giving one of the best real-world examples, which I found on some website while browsing; go through it, I am sure you will like it.
Description of a rapidly changing dimension by that person:
I'm trying to model a retailing case. I'm having an SKU dimension of around 150,000 unique products which is already an SCD Type 2 for some attributes. In addition, I'm willing to track changes of the sales and purchase price. However, these prices change almost daily for quite a lot of these products, leading to a huge dimension table and requiring continuous updates.
So a better option would be to shift those attributes into a fact table as facts, which solves the problem.
=======================================
hi, if you don't mind, please tell me how to create rapidly changing dimensions. And one more question: please tell me what is the use of the Custom transformation. Thanking you. bye.
=======================================
A rapidly changing dimension is one where the dimension changes quickly. The best example is ATM transactions (banks): the data changes continuously and concurrently every second, so it is very difficult to capture such dimensions.
=======================================
hi,
it's there in the question itself: the data is changing quite frequently. Changing means it can be modified or added to.
Example: the best example of this is a Sales table.
=======================================
223.Informatica - What are Data driven Sessions?
QUESTION #223 The Informatica server follows instructions coded into Update Strategy transformations within the session mapping to determine how to flag records for insert, update, delete, or reject. If you do not choose the Data Driven option setting, the Informatica server ignores all Update Strategy transformations in the mapping.
September 07, 2006 07:38:33 #1
fazal
RE: What are Data driven Sessions?
=======================================
Once you load the data into your DW, you can apply the new data with the following
options in your session properties:
1. Insert 2. Update 3. Delete 4. Data driven — all these options are present in your
session properties.
If you select the data-driven option, Informatica takes the logic to update, delete, or
reject data from the Update Strategy transformation in your Designer mapping. It will look
something like this:
IIF(JOB = 'MANAGER', DD_DELETE, DD_INSERT) — this expression marks jobs with an ID of
MANAGER for deletion and all other items for insertion.
Hope this answers the question.
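The data-driven flagging above can be sketched in plain Python. This is a rough illustration of the idea, not Informatica's engine; the row fields are made up, but the DD_* numeric codes match Informatica's documented constants.

```python
# Rough sketch of data-driven row flagging, mimicking the Update Strategy
# expression IIF(JOB = 'MANAGER', DD_DELETE, DD_INSERT).
# The DD_* numeric codes match Informatica's documented constants.
DD_INSERT, DD_UPDATE, DD_DELETE, DD_REJECT = 0, 1, 2, 3

def flag_row(row):
    """Return the operation code the server should apply to this row."""
    return DD_DELETE if row["JOB"] == "MANAGER" else DD_INSERT

rows = [{"ID": 1, "JOB": "MANAGER"}, {"ID": 2, "JOB": "CLERK"}]
flags = [flag_row(r) for r in rows]
print(flags)  # [2, 0] -> first row marked for delete, second for insert
```

With the session set to data driven, the server applies exactly this per-row flag instead of one fixed operation for the whole session.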
=======================================
224.Informatica - what are the transformations that
restrict the
partitioning of sessions?
QUESTION #224 *Advanced External procedure transformation
and External
procedure transformation:
This Transformation contains a check box on the properties tab
to allow
partitioning.
*Aggregator Transformation:
If you use sorted ports you cannot partition the associated source
*Joiner Transformation:
you can not partition the master source for a joiner
transformation
*Normalizer Transformation
*XML targets.
September 07, 2006 14:32:12 #1
Manasa
RE: what are the transformations that restrict the par...
=======================================
1) Source definition
2) Sequence Generator
3) Unconnected transformations
4) XML target definition
=======================================
225.Informatica - Wht is incremental loading?Wht is
versioning in
7.1?
QUESTION #225
September 11, 2006 11:06:34 #1
Anuj Agarwal
RE: Wht is incremental loading?Wht is versioning in 7....
Incremental loading in a DWH means loading only the changed and new records, i.e. not
reloading the as-is records which already exist.
Versioning in Informatica 7.1 is like a configuration management system, where you have
every version of a mapping you ever worked on. Whenever you have checked out a mapping, a
lock is created and no one else can work on the same mapping version. This is very helpful
in an environment where you have several users working on a single feature.
=======================================
Hi
The Type 2 Dimension/Version Data mapping filters source rows based on user-
defined comparisons
and inserts both new and changed dimensions into the target. Changes are tracked
in the target table by
versioning the primary key and creating a version number for each dimension in
the table. In the Type 2
Dimension/Version Data target the current version of a dimension has the highest
version number and
the highest incremented primary key of the dimension.
Use the Type 2 Dimension/Version Data mapping to update a slowly changing
dimension table when
you want to keep a full history of dimension data in the table. Version numbers and
versioned primary
keys track the order of changes to each dimension.
Shivaji Thaneru
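The version-numbering behaviour of the Type 2 Dimension/Version Data mapping described above can be sketched as follows. This is a simplified illustration only; the column names are made up, not what Informatica generates.

```python
# Simplified SCD Type 2 "version data" sketch: each change to a dimension
# is inserted as a new record with an incremented version number, so the
# highest version number is always the current one. Column names are
# illustrative, not Informatica-generated.
target = []  # the dimension table

def apply_change(key, attrs):
    versions = [r for r in target if r["key"] == key]
    new_version = max((r["version"] for r in versions), default=-1) + 1
    target.append({"key": key, "version": new_version, **attrs})

apply_change(101, {"city": "Pune"})    # first load  -> version 0
apply_change(101, {"city": "Mumbai"})  # change      -> version 1
current = max((r for r in target if r["key"] == 101), key=lambda r: r["version"])
print(current["city"], current["version"])  # Mumbai 1
```

Note that both rows remain in the table — that is what keeps the full history of the dimension.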
=======================================
Hi
Please see the incremental loading answer under CDC. Now I will tell you about versioning.
Simply put, for anybody working in a programming language it is like source control — the
place where the history of the source is kept.
If you don't know this concept, read on: say I develop some software and store it in one
area. When an enhancement happens, I download the source, make the modification, and keep
it in the same area but as a new version. If another developer wants it, he simply
downloads it, modifies or adds to it, and again keeps the source code in the same place as
a further version. In this way the history is maintained. If we find a bug in the current
version, we simply revert the changes by downloading a previous version of the source.
thanks
madhu
=======================================
226.Informatica - What is ODS ?what data loaded
from it ? What
is DW architecture?
QUESTION #226
September 11, 2006 10:58:25 #1
A Agarwal
RE: Wht is ODS ?wht data loaded from it ?Wht is DW ar...
ODS — Operational Data Store, normally in 3NF form. Data is stored with the least
redundancy.
General architecture of a DWH:
OLTP System --> ODS --> DWH (denormalized star or snowflake, varying case to case)
=======================================
This concept is really good; I will tell you clearly.
Assume I have a 24/7 company whose peak hours are 9 to 9, and around 40,000 records are
added or modified per day. At 9 o'clock I take a backup and leave. From 9 p.m. to 9 a.m.,
instead of storing the new data on the same server, I store it separately. Assume 10,000
records are added in this time. The next morning, when I am loading the data, there is no
need to process 40,000 + 10,000 records — that would be very slow performance-wise — I can
take just the 10,000 records directly. This concept is what we call an ODS: Operational
Data Store.
The architecture can be in two ways:
ODS --> WH
ODS --> Staging Area --> WH
Thanks & regards
madhu
=======================================
ODS is an Integrated view of Operational sources(OLTP).
=======================================
Thanks — from your answer I can come to the conclusion that an ODS is used to store the
current data.
Can I assume that by default it will add those 40,000 records to this current data of
10,000 records and give the result 50,000?
Could you please reply to this?
Thanks.
=======================================
227.Informatica - what are the type costing functions
in
informatica
QUESTION #227
September 22, 2006 10:02:20 #1
calltomadhu Member Since: September 2006 Contribution: 34
RE: what are the type costing functions in informatica...
The question is not clear — can you repeat it with a full description? There are no
specific "costing" functions in Informatica.
thanks
madhu.
=======================================
228.Informatica - what is the repository agent?
QUESTION #228
September 12, 2006 11:07:42 #1
Shivat Member Since: September 2006 Contribution: 9
RE: what is the repository agent?
Hi
The Repository Agent is a multi-threaded process that fetches inserts and updates
metadata in the
repository database tables. The Repository Agent uses object locking to ensure the
consistency of
metadata in the repository.
ShivajiThaneru
=======================================
Hi
The Repository Server uses a process called Repository agent to access the tables
from Repository
database.The Repository sever uses multiple repository agent processes to manage
multiple repositories
on different machines on the network using native drivers.
=======================================
Hi
The name itself says it: "agent" means a mediator between the Repository Server and the
repository database tables. Simply put, the repository agent is the process that talks to
the repository.
thanks
madhu
=======================================
229.Informatica - what is the basic language of
informatica?
QUESTION #229
September 15, 2006 14:38:58 #1
vick
RE: what is the basic language of informatica?
Sql plus
=======================================
Hi
basic language is Latin.
and it was developed in VC++.
Thanks & Regards
Madhu D.
=======================================
The basic language of Informatica is SQL*Plus; only then will it understand the database
language.
=======================================
230.Informatica - What is CDC?
QUESTION #230
September 18, 2006 08:42:07 #1
satyaneerumalla Member Since: August 2006 Contribution: 16
RE: Wht is CDC?
Changed Data Capture (CDC) helps identify the data in the source system that has
changed since the
last extraction. With CDC data extraction takes place at the same time the insert
update or delete
operations occur in the source tables and the change data is stored inside the
database in change tables.
The change data thus captured is then made available to the target systems in a
controlled manner.
mail me to discuss any thing related to informatica. it's my pleasure to discuss.
satya.neerumalla@tcs.com
=======================================
CDC — Changed Data Capture. The name itself says that if any data is changed, we capture
the new values. For this we have Type 1, Type 2, and Type 3 CDC approaches; depending on
our requirement we can follow one of them.
thanks
madhu
=======================================
Whenever any source data is changed, we need to capture it in the target system as well.
This can be done basically in 3 ways:
The target record is completely replaced with the new record (Type 1)
Complete changes can be captured as different records and stored in the target table (Type 2)
Only the last change and the present data can be captured (Type 3)
CDC can be done generally by using a timestamp or a version key.
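The timestamp approach mentioned above can be sketched like this. It is a minimal illustration under assumed field names (`updated_at` is invented for the example): pull only rows modified since the last extraction, then advance the watermark.

```python
# Minimal timestamp-based CDC sketch: select only rows whose modification
# timestamp is newer than the last extraction watermark, then advance it.
from datetime import datetime

def extract_changes(source_rows, last_extracted):
    changed = [r for r in source_rows if r["updated_at"] > last_extracted]
    new_watermark = max((r["updated_at"] for r in changed), default=last_extracted)
    return changed, new_watermark

rows = [
    {"id": 1, "updated_at": datetime(2006, 9, 1)},
    {"id": 2, "updated_at": datetime(2006, 9, 20)},
]
changed, wm = extract_changes(rows, datetime(2006, 9, 10))
print([r["id"] for r in changed])  # [2]
```

The stored watermark is what makes the next run incremental: only rows touched after it are extracted again.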
=======================================
231.Informatica - what r the mapping specifications?
how
versionzing of repository objects?
QUESTION #231
September 19, 2006 02:45:04 #1
satyaneerumalla Member Since: August 2006 Contribution: 16
RE: what r the mapping specifications? how versionzing...
Mapping Specification
It is a metadata document of a mapping.
A typical Mapping specification contains:
1.Mapping Name
2.Business Requirement Information
3.Source System
Initial Rows
Short Description
Refresh Frequency
Preprocessing
Post Processing
Error Strategy
Reload Strategy
Unique Source Fields
4.Target System
Rows/Load
5.Sources
Tables
Table Name Schema/Owner Selection/Filter
Files
File Name File Owner Unique Key
6.Targets
Information about different targets. Some thing like above one.
7.Information about lookups
8.Source To Target Field Matrix
Target Table Target Column Data-type Source Table Source Column Data-type
Expression Default Value if Null Data Issues/Quality/Comments
Coming to Versioning of objects in repository...
In this we have to things
1.Checkout: when some user is modifying an object(source target mapping) he can
checkout it. That is
he can lock it. So that until he release no body can access it.
2.Checkin:
When you want to commit an object u use this checkin feature.
=======================================
Please let me know the answer for this question.
=======================================
hi
A mapping is nothing but the flow of work: where the data is coming from and where the
data is going. For this we need a mapping name, a source table, a target table, and a
session.
thanks
madhu
=======================================
232.Informatica - what is bottleneck in informatica?
QUESTION #232
September 26, 2006 09:46:17 #1
opbang Member Since: March 2006 Contribution: 46
RE: what is bottleneck in informatica?
Bottleneck in Informatica
A bottleneck in ETL processing is the point at which the performance of the ETL process is
slowest. When an ETL process is in progress, first log in to the Workflow Monitor and
observe the performance statistics, i.e. the rows processed per second. (In SSIS and
DataStage, when you run a job you can see at every level how many rows per second are
processed by the server.)
Bottlenecks mostly occur at the Source Qualifier while fetching data from the source, at
the Joiner, the Aggregator, Lookup cache building, and the session itself.
Removing a bottleneck is performance tuning.
=======================================
233.Informatica - What is the differance between
Local and
Global repositary?
QUESTION #233
September 26, 2006 09:55:03 #1
opbang Member Since: March 2006 Contribution: 46
RE: What is the differance between Local and Global re...
You can develop global and local repositories to share metadata:
- Global repository. The global repository is the hub of the domain. Use the global
repository to store common objects that multiple developers can use through shortcuts.
These objects may include operational or Application source definitions, reusable
transformations, mapplets, and mappings.
- Local repositories. A local repository is within a domain that is not the global
repository. Use local repositories for development. From a local repository you can create
shortcuts to objects in shared folders in the global repository. These objects typically
include source definitions, common dimensions and lookups, and enterprise standard
transformations. You can also create copies of objects in non-shared folders.
=======================================
234.Informatica - Explain in detail about Key Range
& Round
Robin partition with an example.
QUESTION #234
October 11, 2006 02:03:38 #1
srinivas vadlakonda
RE: Explain in detail about Key Range & Round Robin pa...
Key range: the Informatica server distributes the rows of data based on the set of ports
that you specify as the partition key.
Round robin: the Informatica server distributes an equal number of rows to each partition.
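The two schemes can be sketched side by side in Python. This is purely illustrative of the routing logic, not PowerCenter internals; the ranges and row shapes are made up.

```python
# Sketch of the two partition schemes: round-robin deals rows evenly across
# partitions; key-range routes each row by where its key falls in
# user-defined ranges (high bound exclusive here).
def round_robin(rows, n):
    parts = [[] for _ in range(n)]
    for i, row in enumerate(rows):
        parts[i % n].append(row)
    return parts

def key_range(rows, key, ranges):
    parts = [[] for _ in ranges]
    for row in rows:
        for p, (lo, hi) in enumerate(ranges):
            if lo <= row[key] < hi:
                parts[p].append(row)
                break
    return parts

rows = [{"id": i} for i in range(6)]
print([len(p) for p in round_robin(rows, 3)])                      # [2, 2, 2]
print([len(p) for p in key_range(rows, "id", [(0, 4), (4, 10)])])  # [4, 2]
```

Note the trade-off the example exposes: round robin guarantees even load, while key range can skew (4 rows vs 2) but keeps related keys together.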
=======================================
235.Informatica - COMMITS: What is the use of
Source-based
commits ? PLease tell with an example ?
QUESTION #235
September 29, 2006 02:20:42 #1
srinivas vadlakonda
RE: COMMITS: What is the use of Source-based commits ?...
Commits the data based on the source records.
=======================================
If the selected commit type is target, then once the cache is holding some 10,000 records
the server will commit them; here the server is least bothered about the number of source
records processed.
If the selected commit type is source, then once 10,000 records have been read from the
source the server will immediately commit; here the server is least bothered about how
many records were inserted into the target.
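The difference can be made concrete with a toy Python session where a filter drops half the rows, so the two counters diverge. This is a sketch under assumed behaviour (a commit interval of 4 and an even-number filter are invented for the example).

```python
# Toy illustration: with a source-based commit the interval is counted
# against rows read from the source; with a target-based commit it is
# counted against rows reaching the target. A filter passes only even
# rows, so the two counters diverge.
def run_session(source_rows, commit_interval, commit_type):
    read = written = last_commit = 0
    commits = []  # (rows_read, rows_written) at each commit point
    for row in source_rows:
        read += 1
        if row % 2 == 0:          # filter passes only even rows
            written += 1
        counter = read if commit_type == "source" else written
        if counter - last_commit >= commit_interval:
            commits.append((read, written))
            last_commit = counter
    return commits

rows = list(range(1, 9))
print(run_session(rows, 4, "source"))  # [(4, 2), (8, 4)]
print(run_session(rows, 4, "target"))  # [(8, 4)]
```

With the same data, source-based commits fire twice while the target-based commit fires once — exactly the "least bothered" asymmetry described above.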
=======================================
236.Informatica - What Bulk & Normal load? Where
we use Bulk
and where Normal?
QUESTION #236
October 01, 2006 10:10:23 #1
Vamsi Krishna.K
RE: What Bulk & Normal load? Where we use Bulk and whe...
Hello
When we load data in bulk mode there will be no entry in the database log files, so it
will be tough to recover data if the session fails at some point. Whereas in normal mode
an entry for every record is made in the database log file and in the Informatica
repository, so if the session fails it is easy for us to restart from the last commit
point.
Bulk mode is very fast compared with normal mode.
We use bulk mode to load data into databases; it won't work with text files as targets,
whereas normal mode works fine with all types of targets.
In the case of bulk, one DML statement is created and executed for a group of records, but
in the case of normal, a DML statement is created and executed for every record.
If you select bulk, performance will increase.
Bulk mode is used for Oracle/SQLserver/Sybase. This mode improves
performance by not writing to
the database log. As a result when using this mode recovery is unavailable. Further
this mode doesn't
work when update transformation is used and there shouldn't be any indexes or
constraints on the table.
Ofcourse one can use the pre-session and post-session SQLs to drop and rebuild
indexes/constraints.
=======================================
237.Informatica - Which transformation has the most
complexity
Lookup or Joiner?
QUESTION #237
October 11, 2006 01:51:17 #1
srinivas vadlakonda
RE: Which transformation has the most complexity Looku...
A Lookup transformation checks a condition like a Joiner, but it has more features, like:
getting a related value
updating slowly changing dimensions
=======================================
Lookup — but it will reduce the complexity of the solution and improve the performance of
the workflows.
=======================================
238.Informatica - Where we use Star Schema &
where Snowflake?
QUESTION #238
October 05, 2006 01:20:33 #1
Raj
RE: Where we use Star Schema & where Snowflake?
It depends on the client's requirements. Initially we implement a high-level design, and
the client decides whether they want normalized data (snowflake schema) or de-normalized
data (star schema) for their analysis; we implement whatever their requirements are.
=======================================
239.Informatica - Can we create duplicate rows in
star schema?
QUESTION #239
October 25, 2006 09:02:00 #1
Ashwani
RE: Can we create duplicate rows in star schema?
Duplicate rows have nothing to do with a star schema. A star schema is a methodology, and
duplicate rows are part of the actual implementation in the DB. If you look at the
surrogate-key concept, then no — besides the surrogate key, the other columns can be
duplicated. And there is another special case: if you have implemented a unique index,
duplicates are rejected.
=======================================
240.Informatica - Where persistent cache will be
stored?
QUESTION #240
October 01, 2006 10:04:07 #1
Vamsi Krishna.K
RE: Where persistent cache will be stored?
The Informatica server saves the cache files for every session and reuses them for the
next session; because of that, queries against the lookup table are reduced, giving some
performance improvement.
=======================================
The persistent cache is stored in the cache folder on the server.
=======================================
241.Informatica - What is SQL override? In which
transformation
we use override?
QUESTION #241
October 01, 2006 10:01:32 #1
Vamsi Krishna.K
RE: What is SQL override? In which transformation we u...
Hello
By default the Informatica server generates a SQL query for every action. If that query is
not able to perform the exact task, we can modify it, or we can generate a new one with
new conditions and new constraints. Overrides can be used in:
1. Source Qualifier
2. Lookup
3. Target
=======================================
242.Informatica - Can you update the Target table?
QUESTION #242
October 01, 2006 09:55:26 #1
Vamsi Krishna.K
RE: Can you update the Target table?
hello
Yes, we can update the target table. If you are loading Type 1 or Type 2 dimension data
into the target, you will surely have to. This update can be done in two ways:
1. using an Update Strategy
2. target update override.
Thanks
vamsi.
=======================================
243.Informatica - At what frequent u load the data?
QUESTION #243
October 04, 2006 01:39:58 #1
opbang Member Since: March 2006 Contribution: 46
RE: At what frequent u load the data?
Loading frequency depends on the requirements of the business users. It could be daily
during midnight, or weekly. Depending on the frequency, the ETL process should take care
of how the updated transaction data will replace data in the fact tables.
Other factors: how fast the OLTP is updating, the data volume, and the available time
window for extracting data.
=======================================
244.Informatica - What is a Materialized view? Diff.
Between
Materialized view and view
QUESTION #244
October 11, 2006 01:45:10 #1
srinivas.vadlakonda
RE: What is a Materialized view? Diff. Between Materia...
Materialized views are used in data warehousing to precompute and store aggregated data,
such as the sum of sales, and are used to increase the speed of queries against large
databases.
A view is nothing but a stored query that meets our criteria; it does not occupy space.
=======================================
A view doesn't occupy any storage space in the tablespace, but a materialized view does
occupy space.
=======================================
Materialized view stores the result set of a query but normal view does not store
the same. We can
refresh the Materialised View when any changes are made in the master table.
Normal view is only for
view the records. We can perform DML operation and direct path insert operation
in Materialized View.
=======================================
245.Informatica - Is it possible to refresh the
Materialized view?
QUESTION #245
October 05, 2006 05:37:16 #1
lingesh
RE: Is it possible to refresh the Materialized view?
Yes, we can refresh a materialized view. While creating materialized views we can give
options such as REFRESH FAST and mention a time, so that it refreshes automatically and
fetches the new data, i.e. updated and inserted data. For an active data warehouse — I
mean, in order to have a real-time data warehouse — we can use materialized views.
EX:- see this
EX:- see this
CREATE MATERIALIZED VIEW mv_emp_pk
REFRESH FAST START WITH SYSDATE
NEXT SYSDATE + 1/48
WITH PRIMARY KEY
AS SELECT * FROM emp@remote_db;
=======================================
246.Informatica - What are the common errors that
you face daily?
QUESTION #246
October 30, 2006 04:44:35 #1
shiva
RE: What are the common errors that you face daily?
Mostly we get an Oracle fatal error, when the server is not able to connect to the Oracle
server.
=======================================
247.Informatica - What is Shortcut? What is use of
it?
QUESTION #247
October 03, 2006 04:17:18 #1
Vamsi Krishna K.
RE: What is Shortcut? What is use of it?
A shortcut is a facility provided by Informatica to share metadata objects across folders
without copying the objects to every folder. We can create shortcuts for source
definitions, reusable transformations, mapplets, mappings, target definitions, and
business components. There are two different types of shortcuts:
1. local shortcut
2. global shortcut
=======================================
248.Informatica - what is the use of Factless
Facttable?
QUESTION #248
October 04, 2006 01:24:36 #1
opbang Member Since: March 2006 Contribution: 46
RE: what is the use of Factless Facttable?
A factless fact table is a fact table which does not have any measures.
For example, you want to store the attendance information of students. This table tells
you, date-wise, whether a student attended the class or not, but there are no measures,
because things like fees paid are not recorded daily.
=======================================
A transaction can occur without a measure —
for example, a victim ID.
=======================================
249.Informatica - while Running a Session, what are
the two files
it will create?
QUESTION #249
October 04, 2006 01:20:21 #1
opbang Member Since: March 2006 Contribution: 46
RE: while Running a Session, what are the two files it...
Session Log file and Session Detail file
=======================================
Besides session log it also creates the following files if applicable - reject files
target output file
incremental aggregation file cache file
=======================================
250.Informatica - give me an scenario where flat
files are used?
QUESTION #250
October 05, 2006 00:26:01 #1
opbang Member Since: March 2006 Contribution: 46
RE: give me an scenario where flat files are used?
Loading data from flat files to a database is faster. Say you are receiving data from a
remote location: at the remote location the required data can be converted into a flat
file, and you can use the same at the target location for loading. This minimizes the
bandwidth requirement and gives faster transmission.
=======================================
Hi, flat files have some advantages which a normal table does not have. The first one is
explained by the first post. Secondly, a flat file can handle case-sensitivity issues in
the data easily, where a normal table has errors. Kapil Goyal
=======================================
251.Informatica - Architectural diff b/w informatica
7.1 and 5.1?
QUESTION #251
October 12, 2006 14:57:59 #1
zskhan Member Since: June 2006 Contribution: 3
RE: Architectural diff b/w informatica 7.1 and 5.1?
1. v7 has a Repository Server and pmserver; v5 had pmserver only. The pmserver does not
talk directly to the repository database — it talks to the Repository Server, which in
turn talks to the database.
=======================================
252.Informatica - what r the types of data flows in
workflow
manager
QUESTION #252
October 10, 2006 13:03:29 #1
sravangopisetty
RE: what r the types of data flows in workflow manager...
Types of data flows:
1. sequential execution
2. parallel execution
3. control-flow execution
=======================================
253.Informatica - what r the types of target loads
QUESTION #253
October 13, 2006 15:03:04 #1
Anonymous
RE: what r the types of target loads
hi
A target load plan is the process through which you can decide the load order of the
targets.
Let's say you have three Source Qualifiers and three target instances. By default
Informatica will load the data into the first target, but using the target load plan you
can change this sequence by selecting which target you want to be loaded first.
=======================================
We define target load plans in two ways:
1. in the mapping
2. in the session
=======================================
There are two target load types: 1. bulk 2. normal
=======================================
254.Informatica - Can we use a lookup instead of a join? Why?
QUESTION #254
October 11, 2006 01:28:11 #1
srinivas vadlakonda
RE: Can we use lookup instead of join? reason
Yes, we can use a Lookup transformation instead of a Joiner, but only with homogeneous sources.
With a Joiner we can join heterogeneous sources.
=======================================
If the relationship to the other table is a one-to-one or many-to-one join, we can use a lookup to get
the required fields. If the relationship is an outer join, joining both tables gives correct results,
since a lookup returns only one row even when multiple rows satisfy the join condition.
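A plain-Python sketch (hypothetical data, not Informatica code) makes the last point concrete: a lookup cache keeps one row per key, so every input row produces exactly one output row, while a join emits every matching pair.

```python
# (order_id, dept_no) rows and a dept table with a duplicate key on 10.
orders = [("o1", 10), ("o2", 10), ("o3", 20)]
depts = [(10, "sales"), (10, "sales-dup"), (20, "hr")]

# Lookup cache: later rows overwrite earlier ones, so one row per key survives
# (this mimics a "use last value" policy on multiple matches).
lookup_cache = {}
for dept_no, name in depts:
    lookup_cache[dept_no] = name

# Lookup: one output row per input row.
lookup_result = [(oid, lookup_cache[d]) for oid, d in orders]

# Join: one output row per matching (order, dept) pair - more rows come out.
join_result = [(oid, name) for oid, d in orders
               for dept_no, name in depts if dept_no == d]
```

Here `lookup_result` has 3 rows (one per order) while `join_result` has 5, which is why an outer-join relationship cannot safely be replaced by a lookup.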
=======================================
255.Informatica - What is SQL override? Where do we use it, and in which transformations?
QUESTION #255
October 10, 2006 12:33:10 #1
sravangopisetty
RE: what is sql override where do we use and which tra...
The default SQL generated by the Source Qualifier can be overwritten.
The transformation is the Source Qualifier transformation.
=======================================
2. Lookup transformation 3. Target
=======================================
Besides the answers above: in session properties -> Transformations tab -> Source Qualifier -> SQL query,
the default is the query from the mapping. If you overwrite it there, this is also called 'SQL override'.
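As a runnable illustration of the idea (using sqlite3 and a made-up `emp` table rather than a real Informatica source), an override replaces the generated select-all query with a query that filters, joins, or orders in the database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table emp (empno int, ename text, deptno int)")
conn.executemany("insert into emp values (?, ?, ?)",
                 [(1, "A", 10), (2, "B", 20), (3, "C", 10)])

# Default query a Source Qualifier would generate: all connected ports.
default_sql = "select empno, ename, deptno from emp"

# An override reshapes the extract before it ever reaches the mapping.
override_sql = ("select empno, ename, deptno from emp "
                "where deptno = 10 order by empno")

all_rows = conn.execute(default_sql).fetchall()
filtered = conn.execute(override_sql).fetchall()
```

Pushing the filter into the database this way is usually cheaper than filtering rows after extraction.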
=======================================
We use SQL override for three transformations when the sources are homogeneous:
1. Joiner transformation 2. Filter transformation 3. Sorter transformation
=======================================
256.Informatica - In a flat file, will SQL override work or not? What is the extension of a flat file?
QUESTION #256
October 06, 2006 09:39:08 #1
suri
RE: In a flat file sql override will work r not? what ...
Nope.
.out is the extension...
=======================================
In a flat file, SQL override will not work; we have a different set of properties to configure for a
flat file. If you are talking about a flat file as a source, it can have any extension, like .dat, .doc, etc.
If it is a target file it will have the extension .out, which can be altered in the target properties.
Regards,
Rajesh
=======================================
With flat files, SQL override will not work. The extensions of flat files are .txt, .doc, and .dat,
and the output or target flat file extension is .out.
=======================================
257.Informatica - Which is the more cost-effective transformation between Lookup and Joiner?
QUESTION #257
October 11, 2006 11:14:23 #1
Myk
RE: what is cost effective transformation b/w lookup a...
Are you looking up a flat file or a database table? Generally a sorted Joiner is more effective on
flat files than a Lookup, because a sorted Joiner uses a merge join and caches fewer rows, while a
Lookup always caches the whole file. If the file is not sorted, the two can be comparable. A lookup
into a database table can be effective if the database can return sorted data fast and the amount of
data is small, because the lookup can create the whole cache in memory. If the database responds
slowly or a large amount of data is processed, lookup cache initialization can be really slow (the
lookup waits for the database and stores cached data on disk). In that case it can be better to use a
sorted Joiner, which sends data to the output as it reads the input.
=======================================
258.Informatica - WHAT IS THE DIFFERENCE BETWEEN LOGICAL DESIGN AND PHYSICAL DESIGN IN A DATA WAREHOUSE?
QUESTION #258
October 13, 2006 15:13:45 #1
Anonymous
RE: WHAT IS THE DIFFERENCE BETWEEN LOGICAL DESIG...
During the logical design phase you define a model for your data warehouse consisting of entities,
attributes, and relationships; the entities are linked together using relationships.
During the physical design process you translate the expected schemas into actual database structures.
At this time you have to map:
- Entities to tables
- Relationships to foreign key constraints
- Attributes to columns
- Primary unique identifiers to primary key constraints
- Unique identifiers to unique key constraints
=======================================
259.Informatica - What is a surrogate key? How many surrogate keys are used in your dimensions?
QUESTION #259
October 12, 2006 01:10:52 #1
srinivas vadlakonda
RE: what is surrogate key ? how many surrogate key use...
A surrogate key is used as a replacement for the primary key. The DWH does not depend on the
source primary key; the surrogate key is used to identify records internally, and each dimension
should have at least one surrogate key.
=======================================
A surrogate key or warehouse key acts like a primary key for the warehouse. If the target doesn't
have a unique key, the surrogate key helps to address a particular row; it behaves like a primary key
in the target.
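A minimal sketch of the idea (hypothetical customer data; `next_key` plays the role a Sequence Generator transformation would): the warehouse assigns its own key while keeping the source's natural key as an ordinary attribute.

```python
# Source rows identified only by a natural/business key.
source_rows = [{"cust_code": "C-100", "name": "Acme"},
               {"cust_code": "C-200", "name": "Beta"}]

dim_customer = []
next_key = 1  # value a Sequence Generator would supply
for row in source_rows:
    dim_customer.append({"cust_sk": next_key,            # surrogate key
                         "cust_code": row["cust_code"],  # natural key kept
                         "name": row["name"]})
    next_key += 1
```

The fact table would then reference `cust_sk`, so the warehouse stays stable even if the source system's keys change.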
=======================================
260.Informatica - What are the advantages and disadvantages of a star schema and a snowflake schema? Thanks in advance.
QUESTION #260
October 16, 2006 02:14:54 #1
srinivas.vadlakonda
RE: what r the advantages and disadvantagesof a ...
Schemas are of two types: star and snowflake. In a star schema the fact table is in normalized format
and the dimension tables are in denormalized format.
In a snowflake schema both fact and dimension tables are in normalized format.
A snowflake schema requires more dimension tables and more foreign keys, which reduces query
performance, but it reduces redundancy.
=======================================
Main advantages of a star schema:
1) it supports drill-up and drill-down options
2) fewer tables
3) less database overhead
In a snowflake schema:
1) query performance degrades because more joins are needed
2) it saves a small amount of storage space
=======================================
261.Informatica - Why do we use a reusable Sequence Generator transformation only in a mapplet?
QUESTION #261
November 16, 2006 06:05:48 #1
Sarada
RE: why do we use reusable sequencegenerator transform...
The relation between a reusable Sequence Generator and a mapplet is indirect.
A reusable Sequence Generator is preferred when we want the same sequence (that is, the next value
of the sequence) to be used in more than one mapping (perhaps because this next value loads the
same field of the same table in different mappings, and continuity must be maintained).
Suppose two mappings use a reusable Sequence Generator and we run the two mappings one by one.
If the last value of the sequence generator from the mapping 1 run is 999, then the sequence
generator value for the second mapping will start from 1000.
This is how we relate the reusable Sequence Generator and the mapplet.
=======================================
Hi,
The question is: are you sure it's going to start with the number 1000 after 999? By default a
reusable SEQUENCE has a cache value set to 1000, which is why it takes 1000; even if you only have
595 records for the first session, the second session will automatically start with 1000 as the
sequence number, because the cache value is set to it.
I have another question: is it possible to change Number of Cached Values always to 1, instead of
changing it after each time the session/sessions is/are run?
Thanks,
Philip
=======================================
Hi,
The solution provided is correct. I would like to add more information to it.
A reusable Sequence Generator is a must for a mapplet: a mapplet exists basically to reuse a mapping,
so if a non-reusable sequence generator were used in a mapplet, the sequences of numbers it generates
would mismatch and create problems. Thus it is made mandatory.
=======================================
262.Informatica - In which particular situation do we use an unconnected Lookup transformation?
QUESTION #262 Submitted by: sridhar39
Hi,
Both unconnected and connected lookups provide a single output. If it is the case that we can use
either unconnected or connected, I prefer unconnected, because an unconnected lookup does not
participate in the data flow, so the Informatica server creates a separate cache for it and processing
takes place in parallel. So performance increases.
Above answer was rated as good by the following members:
Vamshidhar
We can use an unconnected Lookup transformation when we need to return only one port; in that case I
use an unconnected lookup instead of a connected one. We can also use a connected lookup to return
one port, but an unconnected Lookup transformation is not connected to other transformations and is
not part of the data flow, which is why performance increases.
=======================================
The major advantage of an unconnected lookup is its reusability. We can call an unconnected lookup
multiple times in the mapping, unlike a connected lookup.
=======================================
We can use an unconnected Lookup transformation when we need to return the output from a single port.
If we want output from multiple ports, we have to use a connected Lookup transformation.
=======================================
The use of a connected vs. unconnected lookup is completely based on the logic we need.
However, I just wanted to clarify that we can get multiple rows' data from an unconnected lookup as
well: just concatenate all the values you want, get the result from the return row of the unconnected
lookup, and then split it further in an Expression transformation.
However, using an unconnected lookup takes more time, as it breaks the flow and goes to the
unconnected lookup to fetch the results.
=======================================
263.Informatica - In which particular situation do we use a dynamic lookup?
QUESTION #263
October 17, 2006 01:06:34 #1
srinivas vadlakonda
RE: in which particular situation we use dynamic looku...
There is no single specific situation for using a dynamic lookup, but if you use one it can increase
performance and also eliminate transformations like the Sequence Generator.
=======================================
If the number of records is in the hundreds, one doesn't see much difference between a static cache
and a dynamic cache. If there are thousands of records, a dynamic cache kills time because it commits
to the database for each insert or update it makes.
=======================================
264.Informatica - Is there any relationship between Java and Informatica?
QUESTION #264
October 18, 2006 07:04:27 #1
phanimv Member Since: July 2006 Contribution: 41
RE: is there any relationship between java & inforemat...
Like Java, Informatica is platform-independent, portable, and architecturally neutral.
=======================================
265.Informatica - How many types of flat files are available in Informatica?
QUESTION #265
October 16, 2006 04:08:09 #1
subbarao.g
RE: How many types of flatfiles available in Informati...
There are two types of flat files:
1. Delimited
2. Fixed-width
=======================================
Thank you very much Subbarao..
=======================================
266.Informatica - What is event-based scheduling?
QUESTION #266
October 25, 2006 15:42:32 #1
sn3508 Member Since: April 2006 Contribution: 20
RE: what is the event-based scheduling?
In time-based scheduling the jobs run at the specified time. In some situations we have to run a job
based on an event: for example, only when a file arrives should the job run, whatever the time is. In
such cases event-based scheduling is used.
=======================================
Event-based scheduling uses a row indicator file. When you don't know when the source data will
arrive, a shell command, script, or batch file sends the indicator file to the local directory of the
Informatica server, which waits for the row indicator file before running the session.
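The wait-for-indicator-file idea can be sketched in a few lines of Python (a hypothetical polling loop; the file name and timeouts are made up, and a real server would use its own event mechanism):

```python
import os
import time

def wait_for_indicator(path, timeout_s=5.0, poll_s=0.1):
    """Poll until `path` exists; return True if it appeared, False on timeout."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(poll_s)
    return False
```

Once `wait_for_indicator("row_indicator.done")` returns True, the job (here, the session) would be started; polling is the simplest approach, though OS-level file notifications avoid the busy-wait.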
=======================================
267.Informatica - What is the new lookup port in the Lookup transformation?
QUESTION #267
October 25, 2006 15:37:38 #1
sn3508 Member Since: April 2006 Contribution: 20
RE: what is the new lookup port in look-up transformat...
I hope you're asking about the 'add a new port' button in the Lookup transformation. If so, this
button creates a port where we can enter the name, datatype, etc. of a port. This is mainly used with
an unconnected lookup, where it reflects the datatype of the input port.
=======================================
It seems you are talking about NewLookupRow: when you configure a lookup with a dynamic lookup
cache, it generates NewLookupRow by default. This port tells Informatica whether the row is an
existing or a new row; if it is a new row it passes the data, otherwise it discards it.
=======================================
This port is added by the PowerCenter client (Designer) to a Lookup transformation whenever a dynamic
cache is used. The port indicates to the Informatica server, through a numeric value [0, 1, 2],
whether it inserts into or updates the dynamic cache, or makes no change.
=======================================
The new port in the Lookup transformation is the associated port.
Cheers
Bobby
=======================================
268.Informatica - What is dynamic insert?
QUESTION #268
November 15, 2006 00:22:38 #1
srinuv_11 Member Since: October 2006 Contribution: 23
RE: what is dynamic insert?
When we select the dynamic cache in a Lookup transformation, the Informatica server creates the new
lookup row port, which indicates with a numeric value whether the Informatica server inserts,
updates, or makes no change to the lookup cache. If you associate a sequence ID, the Informatica
server creates a sequence ID for newly inserted records.
=======================================
Hi srinu
Can you explain how to associate a sequence?
Thanks
Sri.
=======================================
269.Informatica - How did you handle errors? (ETL row errors)
QUESTION #269
November 29, 2006 02:02:06 #1
phani
RE: how did you handle errors?(ETL-Row-Errors)
Hi friend,
If an error occurs, the row is stored in the target_table.bad file.
The errors are of two types:
1. row-based errors
2. column-based errors
Column-based errors are identified by indicators:
D - good data, N - null data, O - overflow data, R - rejected data.
The data stored in the .bad file looks like: D1232234O877NDDDN23
If you have any doubt, send a message to my mail id.
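To make the indicator codes concrete, here is a toy decoder over an assumed, simplified layout (each field written as an indicator character followed by the field text, with `|` separators; the real .bad file layout differs, so treat this purely as an illustration of the D/N/O/R codes):

```python
# Meaning of the column indicator characters described above.
INDICATORS = {"D": "good data", "N": "null data",
              "O": "overflow data", "R": "rejected data"}

def decode_bad_row(row):
    """Split a row like 'D1232|O877|N' into (indicator meaning, value) pairs."""
    out = []
    for field in row.split("|"):
        code, value = field[0], field[1:]
        out.append((INDICATORS[code], value))
    return out
```

For example, `decode_bad_row("D1232|O877|N")` reports a good value, an overflowed value, and a null.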
=======================================
270.Informatica - How do you set up a schedule for data loading from scratch?
QUESTION #270
December 12, 2006 17:08:30 #1
hanug Member Since: June 2006 Contribution: 24
RE: HOw do u setup a schedule for data loading from sc...
Whether you are loading data from scratch (for the first time) or doing subsequent loads, there are
no changes to the scheduling. What changes is how to pick up the delta data.
Hanu.
=======================================
271.Informatica - How do you select duplicate rows using Informatica?
QUESTION #271
October 20, 2006 08:52:55 #1
Sharmila
RE: HOw do u select duplicate rows using informatica?
I thought we could identify duplicates by using a Rank transformation.
=======================================
Can you explain the steps for identifying duplicates with the help of a Rank transformation?
=======================================
Can you explain in detail?
=======================================
Hi,
You can write a SQL override in the Source Qualifier to eliminate duplicates, using the DISTINCT
keyword.
For example, consider a table dept_test(deptno, deptname) that contains duplicate records. Then write
one of the following queries in the Source Qualifier SQL override:
1) select distinct deptno, deptname from dept_test;
2) select avg(deptno), deptname from dept_test
group by deptname;
If you want only the duplicate records, write the following query in the Source Qualifier SQL
override:
select distinct deptno, deptname from dept_test a where deptno in (
select deptno from dept_test b
group by deptno
having count(1) > 1)
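The queries above run as-is against a throwaway sqlite3 database (table and column names follow the example; only the dialect-neutral parts are used):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table dept_test (deptno int, deptname text)")
conn.executemany("insert into dept_test values (?, ?)",
                 [(10, "sales"), (10, "sales"), (20, "hr"), (30, "it")])

# DISTINCT removes the duplicate (10, 'sales') row.
distinct_rows = conn.execute(
    "select distinct deptno, deptname from dept_test order by deptno").fetchall()

# GROUP BY ... HAVING count(1) > 1 finds the keys that are duplicated.
dup_keys = conn.execute(
    "select deptno from dept_test group by deptno having count(1) > 1").fetchall()
```

Here `distinct_rows` comes back with three unique rows and `dup_keys` flags deptno 10 as the duplicated key.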
=======================================
I think we can't select duplicates with a Rank transformation; if it is possible, please explain how
to do it.
=======================================
We can get the duplicate records by using the rank transformation.
=======================================
We can also use a Sorter transformation and select the 'distinct' check box.
=======================================
272.Informatica - How to load data to a target where the source and targets are XMLs?
QUESTION #272
October 25, 2006 15:23:31 #1
sn3508 Member Since: April 2006 Contribution: 20
RE: How to load data to target where the source and ta...
- If you don't have the structures, create the source or target structure of the XML file by going to
the Sources or Targets menu, selecting 'Import XML', and following the steps.
- Follow the regular steps you would use to create an ordinary mapping/session.
- In the session you have to mention the location and name of the source/target.
- Once the session succeeds, the XML file will be generated in the specified location.
=======================================
273.Informatica - What is TOAD, and for what purpose is it used?
QUESTION #273
October 24, 2006 04:54:29 #1
srinivas vadlakonda
RE: What TOAD and for what purpose it will be used?
After creating the mapping we can execute the session for that mapping, and then use unit testing to
test the data. We can use Oracle (or TOAD) to check how many records were loaded into the target and
whether the loaded records are correct.
=======================================
TOAD is a user-friendly interface for relational databases like Oracle, SQL Server, and DB2. While
unit-testing the development or querying tables, one can use TOAD with basic knowledge of database
commands.
=======================================
Toad is an application development tool built around an advanced SQL and PL/SQL editor. Using Toad
you can build and test PL/SQL packages, procedures, triggers, and functions. You can create and edit
database tables, views, indexes, constraints, and users. The Schema Browser and Project Manager
provide quick access to database objects.
Toad's SQL Editor provides an easy and efficient way to write and test scripts and queries, and its
powerful data grids provide an easy way to view and edit Oracle data.
With Toad you can:
- View the Oracle dictionary
- Create, browse, or alter objects
- Graphically build, execute, and tune queries
- Edit PL/SQL and profile stored procedures
- Manage your common DB tasks from one central window
- Find and fix database problems with constraints, triggers, extents, indexes, and grants
- Create code from shortcuts and templates
- Create custom code templates
- Control code access and development (with or without a third-party version control product)
using Toad's cooperative source control feature.
=======================================
274.Informatica - What is target load order?
QUESTION #274
October 24, 2006 04:47:53 #1
ram gopal
RE: What is target load order ?
In a mapping, if there is more than one target table, we need to specify the order in which the
target tables should be loaded.
Example: suppose our mapping has two target tables:
1. Customer
2. Audit table
The Customer table should be populated before the Audit table; for that we use target load order.
Hope you understood.
=======================================
The target load plan specifies the order in which the data is extracted from the source qualifiers.
=======================================
275.Informatica - How to extract 10 records out of 100 records in a flat file?
QUESTION #275
October 31, 2006 16:52:24 #1
sridhar
RE: How to extract 10 records out of 100 records in a ...
1. Create an external directory.
2. Store the file in this external directory.
3. Create an external table corresponding to the file.
4. Query the external table to access records as you would a normal table.
Hi,
For a flat-file source:
source -> SQ -> Sequence Generator transformation (on a new field, id) -> Filter transformation (id <= 10) -> target
276.Informatica - How many types of TASKS do we have in Workflow Manager? What are they?
QUESTION #276
October 31, 2006 00:08:06 #1
kumar
RE: How many types of TASKS we have in Workflomanager?...
Session, Command, Email notification
=======================================
Workflow tasks:
1) Session 2) Command 3) Email 4) Control 5) Decision 6) Pre-session 7) Post-session 8) Assignment
=======================================
1) Session 2) Command 3) Email 4) Event-Wait 5) Event-Raise 6) Assignment 7) Control 8) Decision
9) Timer 10) Worklet. 3), 8), 9) are self-explanatory. 1) runs mappings. 2) runs OS commands/scripts.
4 + 5) raise user-defined or pre-defined events and wait for the event to be raised. 6) assigns values
to workflow variables. 10) runs worklets.
=======================================
The following tasks are available in Workflow Manager: Assignment, Control, Command, Decision, Email,
Session, Event-Wait, Event-Raise, and Timer. Tasks developed in the Task Developer are reusable
tasks, and tasks developed within a workflow or worklet are non-reusable. Among these, only Session,
Command, and Email are reusable; the remaining tasks are non-reusable. Regards, rma
=======================================
1. Session
=======================================
We have Session, Command, and Email tasks.
Cheers
Thana
=======================================
277.Informatica - What is a user-defined transformation?
QUESTION #277
November 02, 2006 15:51:39 #1
n k rajkumar
RE: What is user defined Transformation?
User-defined transformations are Stored Procedure and Source Qualifier.
=======================================
278.Informatica - What is the difference between a connected stored procedure and an unconnected stored procedure?
QUESTION #278
November 15, 2006 00:11:33 #1
srinuv_11 Member Since: October 2006 Contribution: 23
RE: what is the difference between connected stored pr...
A connected Stored Procedure executes as each row passes through the mapping; an unconnected Stored
Procedure executes when it is called by another transformation. A connected stored procedure is part
of the data flow in a pipeline, but an unconnected one is not.
=======================================
279.Informatica - What are semi-additive measures and fully additive measures?
QUESTION #279
November 03, 2006 06:17:52 #1
rameshr
RE: what is semi additve measuresand fully addi...
Hi, this is Ramesh. (If anyone feels there is a conceptual problem with my solution, please let me
know.)
There are three types of facts:
1. additive
2. semi-additive
3. non-additive
Additive means that when any measure is queried from the fact table, the result relates to all the
dimension tables linked to the fact.
Semi-additive: when any measure is queried from the fact table, the result relates to only some of
the dimension tables.
Non-additive: when any measure is queried from the fact table, it doesn't relate to any of the
dimensions, and the result comes directly from the measures of the same fact table. Example: to
calculate the total percentage of a loan, we just take the value from the fact measure (loan) and
divide it by 100; we get it without any dimension.
=======================================
Hi,
Additive means the measure can be summarized over any other column.
Semi-additive means it can be summarized over some columns only.
=======================================
Hi, can you give one example of additive and semi-additive facts? It will be better for
understanding.
Akhi
=======================================
The average monthly balance of your bank account is a semi-additive fact.
=======================================
Measurable data on which simple addition can be performed is called fully additive; such data needs
no combination of two or more dimensions for its meaning.
Ex:
product-wise total sales
branch-wise total sales
Measurable data on which simple addition can't be performed is called semi-additive; such data needs
a combination of two or more dimensions for its meaning.
Ex: customer-wise total sales amount ---------> has no meaning
customer-wise, product-wise total sales amount
If anyone has a doubt about this, let me know.
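A small numeric illustration of the distinction (toy data; the classic semi-additive case is a balance over the time dimension, as the earlier bank-account answer notes): a sales amount can be summed over any dimension, while summing a balance across months is meaningless and you average it instead.

```python
# Fully additive: sales amount - summing over product, branch, or both is valid.
sales = [("p1", "b1", 100), ("p1", "b2", 50), ("p2", "b1", 30)]
total_sales = sum(amount for _, _, amount in sales)

# Semi-additive: account balance - adding across months gives a meaningless
# 3000; the average (or a period-end snapshot) is the useful figure.
monthly_balance = [("jan", 1000), ("feb", 1000), ("mar", 1000)]
avg_balance = sum(b for _, b in monthly_balance) / len(monthly_balance)
```

`total_sales` is a legitimate grand total (180), whereas for the balance only `avg_balance` (1000) makes business sense.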
=======================================
280.Informatica - What is the DTM process?
QUESTION #280
November 06, 2006 04:40:39 #1
prasanna alur
RE: what is DTM process?
DTM means Data Transformation Manager. In Informatica this is the main background process; it runs
after completion of the Load Manager. In this process the Informatica server checks the source and
target connections in the repository; if they are correct, the Informatica server fetches the data
from the source and loads it into the target.
=======================================
DTM means Data Transformation Manager. It is one of the components in the Informatica architecture;
it collects the data and loads the data.
=======================================
Load Manager process: starts the session, creates the DTM process, and sends post-session email when
the session completes.
The DTM process: creates threads to initialize the session, read, write, and transform data, and
handle pre- and post-session operations.
=======================================
281.Informatica - What are PowerMart and PowerCenter?
QUESTION #281
November 06, 2006 04:34:18 #1
prasanna alur
RE: what is Powermart and Power Center?
In PowerCenter we can register multiple servers, but in PowerMart that is not possible. Another
difference: in PowerCenter we can create a global repository, but not in PowerMart.
=======================================
PowerCenter we use in a production environment;
PowerMart we use in a development environment.
=======================================
Hi,
PowerCenter supports global and local repositories and also supports ERP packages, while PowerMart
supports local repositories only and doesn't support ERP packages.
PowerCenter is high cost, whereas PowerMart is low.
PowerCenter is normally used for enterprise data warehouses, whereas PowerMart is used for low/mid-
range data warehouses.
Thanks and Regards,
Siva Prasad.
=======================================
Power Center supports Partitioning process where as Power mart only does simple
pass through.
=======================================
282.Informatica - what are the differences between
informatica6.1
and informatica7.1
QUESTION #282
No best answer available. Please pick the good answer available
or submit your
answer.
November 06, 2006 16:09:38 #1
calltomadhu Member Since: September 2006 Contribution: 34
RE: what are the differences between informatica6.1 a...
The Load Manager is the process that takes care of the loading process from source to target.
=======================================
In Informatica 7.1:
1. We can take a flat file as a target.
2. A flat file can be used as a lookup table.
3. Data profiling and versioning were added.
4. The Union transformation was added; it works like UNION ALL.
Thanks and regards,
Sivaprasad
=======================================
283.Informatica - hi, how we validate all the
mappings in the
repository at once
QUESTION #283
No best answer available. Please pick the good answer available
or submit your
answer.
November 07, 2006 00:03:46 #1
srinuv_11 Member Since: October 2006 Contribution: 23
RE: hi, how we validate all the mappings in the reposi...
You are not able to validate all mappings at a time; each mapping has to be validated individually.
=======================================
Hi,
You cannot validate all the mappings in one go, but you can validate all the mappings in a folder in one go and continue the process for all the folders.
To do this, log on to the Repository Manager. Open the folder, then the Mappings sub-folder, then select all or some of the mappings (by pressing the Shift or Control key; Ctrl+A does not work), then right-click and validate.
=======================================
Hi,
You can't validate all mappings at a time; you should go one by one.
Thanks,
Madhu
=======================================
We still don't have such a facility in Informatica.
=======================================
Yes. We can validate all mappings using the Repo Manager.
=======================================
284.Informatica - hw to work with pmcmd on
windows platform
QUESTION #284
No best answer available. Please pick the good answer available
or submit your
answer.
November 08, 2006 15:31:22 #1
calltomadhu Member Since: September 2006 Contribution: 34
RE: hw to work with pmcmd on windows platform
Hi,
In Workflow Manager create a Command task with the pmcmd command and establish a link between the session and the Command task; if the session executes successfully, the pmcmd command executes.
Thanks,
Madhu
=======================================
Hi friend, can you please tell me where the pmcmd option is in Workflow Manager? Thanks, Swati.
=======================================
Hi Swathi,
pmcmd and pmrep commands can only be executed on the command line. In Workflow Manager we execute the session task directly.
=======================================
C:\Program Files\Informatica PowerCenter 7.1.3\Server\bin\pmcmd.exe
=======================================
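As the answers above note, pmcmd lives under the PowerCenter Server bin directory and is driven from the command line. A minimal sketch of an invocation follows; the user, password, host:port, folder and workflow names are hypothetical placeholders, not values from this thread, and the command is only echoed because pmcmd needs a live PowerCenter server to actually run.

```shell
# Hypothetical connection details -- substitute your own repository user,
# password, server host:port, folder name, and workflow name.
PMCMD="C:/Program Files/Informatica PowerCenter 7.1.3/Server/bin/pmcmd.exe"
# Dry run: print the command line that would start a workflow.
echo "\"$PMCMD\" startworkflow -u Administrator -p admin -s pcserver:4001 -f MyFolder wf_load_stage"
```

On Unix installations the same syntax applies, with the path pointing at the server's bin directory instead of the Windows one.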
285.Informatica - inteview questionwhy do u use a
reusable
sequence genator tranformation in mapplets?
QUESTION #285
No best answer available. Please pick the good answer available
or submit your
answer.
November 14, 2006 02:03:43 #1
srinuv_11 Member Since: October 2006 Contribution: 23
RE: inteview questionwhy do u use a reusa...
If you are not using a reusable Sequence Generator transformation, duplicate values can occur; to avoid this we use a reusable Sequence Generator transformation.
=======================================
You can use a non-reusable Sequence Generator, but when the mapplet is used in multiple mappings it may not create unique values.
=======================================
If we use the same Sequence Generator multiple times to load data at the same time, a DEADLOCK can occur. To avoid deadlocks we don't use the same Sequence Generator.
=======================================
286.Informatica - interview questiontell me what
would the size
of ur warehouse project?
QUESTION #286
No best answer available. Please pick the good answer available
or submit your
answer.
November 13, 2006 00:09:10 #1
srinuv_11 Member Since: October 2006 Contribution: 23
RE: interview questiontell me what would t...
You can say 900 MB.
=======================================
The size of an EDW will be in terabytes. The server will run on either Unix or Linux with a SAN box.
=======================================
You can say 600-900 GB including your marts. It varies depending upon your project structure and how many data marts and EDWs there are.
=======================================
Mr. Srinuv, by saying 900 MB are you kidding the folks over here? It's a data warehouse's size, not some client-server software. Thanks, Vivek
=======================================
You can answer this question with the number of facts and dimensions in your warehouse.
For example, an insurance data warehouse:
2 facts: Claims and Policy
8 dimensions: Coverage, Customer, Claim, etc.
=======================================
I think the interviewer wants to know the number of facts and dimensions in the warehouse, not the size in GBs or TBs of the actual database.
=======================================
287.Informatica - what is grouped cross tab?
QUESTION #287
No best answer available. Please pick the good answer available
or submit your
answer.
November 09, 2006 14:46:05 #1
calltomadhu Member Since: September 2006 Contribution: 34
RE: what is grouped cross tab?
It is one kind of report; generally we use this one in Cognos.
=======================================
288.Informatica - what is aggregate awareness?
QUESTION #288
No best answer available. Please pick the good answer available
or submit your
answer.
November 16, 2006 08:28:39 #1
satya
RE: what is aggregate awareness?
It is the ability to dynamically re-write SQL to the level of granularity needed to answer a business question.
=======================================
289.Informatica - Can we revert back reusable
transformation to
normal transformation?
QUESTION #289
No best answer available. Please pick the good answer available
or submit your
answer.
file:///C|/Perl/bin/result.html (284 of 363)4/1/2009 7:50:59 PM
file:///C|/Perl/bin/result.html
November 10, 2006 03:48:16 #1
satish
RE: Can we revert back reusable transformation to norm...
Yes.
=======================================
No.
=======================================
No, it is not reversible. When you open a transformation in edit mode there is a check box named REUSABLE; if you tick it, you will get a message saying that making it reusable is not reversible.
=======================================
No. Once we declare a transformation as reusable, we are not able to revert it.
=======================================
Reverting to the original reusable transformation:
If you change the properties of a reusable transformation in a mapping, you can revert to the original reusable transformation properties by clicking the Revert button.
=======================================
No, we CANNOT revert back the reusable transformation. There is a Revert button that can revert the last changes made in the transformation.
=======================================
No, we can't revert a reusable transformation to a normal transformation. Once we select reusable, the column will be enabled.
=======================================
YES... we can.
1) Drag the reusable transformation from the Repository Navigator into the Mapping Designer by pressing the left button of the mouse,
2) then press the Ctrl key before releasing the left button of the mouse.
3) Release the left button of the mouse.
4) Enjoy. :-)
Thanks,
Santu
=======================================
The last answer, though correct in a way, is not completely correct: by using the Ctrl key we are making a copy of the original transformation, not changing the original transformation into a non-reusable one.
=======================================
I think if the transformation is created in the Mapping Designer and made reusable, then we can revert it back with the "Revert Back" option. But if we create the transformation in a mapplet, we can only make it non-reusable.
=======================================
290.Informatica - How do we delete staging area in
our project?
QUESTION #290
No best answer available. Please pick the good answer available
or submit your
answer.
November 15, 2006 03:59:08 #1
narayana
RE: How do we delete staging area in our project?
If your database is Oracle then we can apply CDC (change data capture) and load only the data that has changed since the previous data load.
=======================================
If the staging area stores only incremental data (changed or new data with respect to the previous load), then you can truncate the staging area. But if you maintain historical information in the staging area, then you cannot truncate it.
=======================================
If we use Type 2 slowly changing dimensions, we can delete the staging area, because SCD Type 2 stores previous data with a version number and time stamp.
=======================================
291.Informatica - what is referential Intigrity error?
how ll u
rectify it?
QUESTION #291
No best answer available. Please pick the good answer available
or submit your
answer.
November 27, 2006 06:48:12 #1
Sravan
RE: what is referential Intigrity error? how ll u rect...
Referential integrity is all about the foreign key relationship between tables. You need to check the primary and foreign key relationship and the existing data, if any (see if the child table has any records pointing to master table records that are no longer in the master table).
=======================================
You have set the session for constraint-based loading, but the PowerCenter Server is unable to determine dependencies between target tables, possibly due to errors such as circular key relationships.
Action: ensure the validity of dependencies between target tables.
=======================================
292.Informatica - what is constraint based error?
how ll u clarify
it?
QUESTION #292
No best answer available. Please pick the good answer available
or submit your
answer.
November 13, 2006 00:07:44 #1
srinuv_11 Member Since: October 2006 Contribution: 23
RE: what is constraint based error? how ll u clarify i...
A constraint-based error will occur if the PK is duplicated in a table.
=======================================
It is a primary and foreign key relationship.
=======================================
Hi, when data from a single source needs to be loaded into multiple targets, in that situation we use constraint-based load ordering. Note: the target tables must have primary-foreign key relationships.
Regards, Phani
=======================================
1. When data from a single source needs to be loaded into multiple targets.
293.Informatica - why exactly the dynamic
lookup?plz can
any bady can clarify it?
QUESTION #293
No best answer available. Please pick the good answer available
or submit
your answer.
November 13, 2006 00:03:24 #1
srinuv_11 Member Since: October 2006 Contribution: 23
RE: why exactly the dynamic lookup?plz can any bady ca...
Dynamic lookup means that changes are applied to the lookup cache; then we can call it a dynamic lookup.
=======================================
Hi,
Why dynamic lookup? Suppose you are looking up a table that is changing frequently, i.e. you want to look up recent data; then you have to go for a dynamic lookup. Example: online transaction data (ATM).
=======================================
A dynamic lookup is generally used with a connected lookup transformation: when the data is changed it is updated or inserted, or the row is left without changing.
=======================================
A dynamic lookup cache is used with connected lookups only. It is also called a read-and-write cache: when a new record is inserted into the target table the cache is also updated; the record is saved in the cache for faster lookup of data from the target. Generally we use this for slowly changing dimensions.
Pallavi
=======================================
294.Informatica - How many mappings you have
done in your
project(in a banking)?
QUESTION #294
No best answer available. Please pick the good answer available
or submit your
answer.
November 13, 2006 00:01:56 #1
srinuv_11 Member Since: October 2006 Contribution: 23
RE: How many mappings you have done in your project(in...
It depends on the dimensions. For example, a banking project requires dimensions like time, primary account holder, and branch; we can take any number of dimensions depending on the project. We can also create mappings for cleansing or scrubbing the data, so we can't say exactly how many.
=======================================
It depends upon the user requirements. According to the user requirements we configure the modelling, and on the basis of that modelling we can identify the number of mappings, ranging from simple mappings to complex mappings.
=======================================
Exactly 93.133
=======================================
295.Informatica - what are the UTP'S
QUESTION #295
No best answer available. Please pick the good answer available
or submit your
answer.
November 21, 2006 09:42:20 #1
raja147 Member Since: November 2006 Contribution: 3
RE: what are the UTP'S
Hi, UTPs are done to check that the mappings are done according to the given business rules. A UTP (unit test plan) is done by the developer.
=======================================
After creating the mappings, each mapping can be tested by the developer individually.
=======================================
296.Informatica - how can we delete duplicate rows
from flat
files ?
QUESTION #296
No best answer available. Please pick the good answer available
or submit your
answer.
December 02, 2006 07:42:09 #1
amarnath
RE: how can we delete duplicate rows from flat files ?...
By using an Aggregator.
=======================================
We can delete duplicate rows from flat files by using the Sorter transformation.
=======================================
The Sorter transformation puts the records in sorted order (for better performance); I am asking how we can delete duplicate rows.
=======================================
Use a lookup by primary key.
=======================================
In the mapping, read the flat file through a source definition and Source Qualifier. Apply a Sorter transformation and in the Properties tab select Distinct. The output will give sorted, distinct data, hence you get rid of duplicates. You can also use an Aggregator transformation and group by the PK; this gives the same result.
=======================================
Use a Sorter transformation and check the Distinct option. It will remove the duplicates.
=======================================
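Outside Informatica, the Sorter's Distinct behaviour described above can be spot-checked on Unix with sort -u, which is handy for verifying a flat file by hand. The file names and sample rows below are purely illustrative.

```shell
# Create a small sample flat file containing duplicate rows (illustrative data).
printf 'a1\nb2\na1\nc3\nb2\n' > /tmp/src_flat.txt
# sort -u sorts the rows and keeps only the distinct ones,
# mirroring the Sorter transformation with the Distinct option checked.
sort -u /tmp/src_flat.txt > /tmp/dedup_flat.txt
cat /tmp/dedup_flat.txt   # prints a1, b2, c3 (one per line)
```

This is only a quick verification trick; inside a mapping you would still use the Sorter or Aggregator approaches from the answers above.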
297.Informatica - 1.can u look up a flat file ? how
?2.what is test
load?
QUESTION #297
No best answer available. Please pick the good answer available
or submit your
answer.
December 03, 2006 22:03:36 #1
sravan kumar
RE: 1.can u look up a flat file ? how ?2.what i...
By using a Lookup transformation we can look up a flat file. When you create the Lookup transformation it shows you the option; follow that.
A test load is nothing but checking whether the data is moving correctly to the target or not.
=======================================
Test load is a property we can set at the session level, with which Informatica performs all pre- and post-session tasks but does not save target data (with an RDBMS target table it writes the data to check the constraints but rolls it back). If the target is a flat file then it does not write anything to the file. We can specify the number of source rows to test-load the mapping. This is another way of debugging the mapping without loading the target.
=======================================
298.Informatica - what is auxiliary mapping ?
QUESTION #298
No best answer available. Please pick the good answer available
or submit your
answer.
December 26, 2006 03:27:18 #1
Kuldeep Kumar Verma
RE: what is auxiliary mapping ?
Auxiliary mapping is used to reflect a change in one table whenever there is a change in another table.
Example:
In Siebel we have the S_SRV_REQ and S_EVT_ACT tables. Let's say we have an image table defined for S_SRV_REQ from which our mappings read data. Now if there is any change in S_EVT_ACT, it won't be captured in S_SRV_REQ if our mappings are using the image table for S_SRV_REQ. To overcome this we define a mapping between S_SRV_REQ and S_EVT_ACT such that any change in the second is reflected as an update in the first table.
=======================================
299.Informatica - what is authenticator ?
QUESTION #299
No best answer available. Please pick the good answer available
or submit your
answer.
December 18, 2006 02:38:20 #1
Reddappa C. Reddy
RE: what is authenticator ?
An authenticator is either a token of authentication (authentication being the act of establishing or confirming something or someone) or one who authenticates.
=======================================
Authentication requests validate user names and passwords to access the PowerCenter repository. You can use the following authentication requests to access PowerCenter repositories: Login and Logout. The Login function authenticates a user name and password for a specified repository; this is the first function a client application should call before calling any other functions. The Logout function disconnects you from the repository and its PowerCenter Server connections. You can call this function once you are done calling Metadata and Batch Web Services functions, to release resources at the Web Services Hub.
=======================================
300.Informatica - how can we populate the data into
a time
dimension ?
QUESTION #300
No best answer available. Please pick the good answer available
or submit your
answer.
December 03, 2006 22:04:48 #1
sravan kumar
RE: how can we populate the data into a time dimension...
By using a Stored Procedure transformation.
=======================================
How? Can you explain?
=======================================
Can you please explain the time dimension to me?
=======================================
301.Informatica - how to create primary key only on
odd numbers?
QUESTION #301
No best answer available. Please pick the good answer available
or submit your
answer.
December 07, 2006 02:50:14 #1
Pavan.m
RE: how to create primary key only on odd numbers?
Use a Sequence Generator to generate the warehouse keys, set its Start/Current Value to 1, and set the 'Increment By' property of the Sequence Generator to 2.
=======================================
Use a Sequence Generator and set its 'Increment By' property to 2.
=======================================
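The effect of those Sequence Generator settings (Current Value 1, Increment By 2) can be sketched with the Unix seq command:

```shell
# seq START INCREMENT END: starting at 1 and stepping by 2 yields only odd values,
# just like a Sequence Generator with Current Value 1 and Increment By 2.
seq 1 2 9    # prints 1 3 5 7 9, one per line
```

Starting at 2 with the same increment would, by the same logic, give only even keys.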
302.Informatica - How to load fact table ?
QUESTION #302
No best answer available. Please pick the good answer available
or submit your
answer.
December 11, 2006 16:59:11 #1
hanug Member Since: June 2006 Contribution: 24
RE: How to load fact table ?
Hi:
There are two ways you have to load a fact table:
1. First-time load
2. Incremental load or delta load
In both cases you have to aggregate, get the keys of the dimension tables, and load into the fact.
In the case of increments you use a date value to pick up only the delta data while loading into the fact.
Hanu.
=======================================
Fact tables always maintain the history records and mostly consist of keys and measures, so after all the dimension tables are populated the fact tables can be loaded.
The load is always going to be an incremental load, except for the first time, which is a history load.
=======================================
303.Informatica - How to load the time dimension
using
Informatica ?
QUESTION #303
No best answer available. Please pick the good answer available
or submit your
answer.
December 22, 2006 00:32:54 #1
srinivas
RE: How to load the time dimension using Informatica ?...
Hi,
Use SCD Type 2 with an effective date range.
=======================================
Hi,
Use a Stored Procedure transformation to load the time dimension.
=======================================
Hi Kiran,
Can you please tell me in detail how we do that? I am a fresher in Informatica; please help.
Thanks.
=======================================
304.Informatica - What is the process of loading the
time
dimension?
QUESTION #304
No best answer available. Please pick the good answer available
or submit your
answer.
December 29, 2006 08:20:34 #1
manisha.sinha Member Since: December 2006 Contribution: 30
RE: What is the process of loading the time dimension?...
Create a procedure to load data into the time dimension. The procedure needs to run only once to populate all the data. For example, the code below fills dates up to the end of 2015. You can modify the code to suit the fields in your table. (The original post lost its commas and assignment operators in extraction, and used the Informatica functions TO_FLOAT/TO_DECIMAL; these have been restored and replaced with Oracle's TO_NUMBER so the procedure compiles.)
create or replace procedure QISODS.Insert_W_DAY_D_PR as
LastSeqID number default 0;
loaddate Date default to_date('12/31/1979', 'mm/dd/yyyy');
begin
Loop
LastSeqID := LastSeqID + 1;
loaddate := loaddate + 1;
INSERT into QISODS.W_DAY_D values(
LastSeqID,
Trunc(loaddate),
Decode(TO_CHAR(loaddate, 'Q'), '1', 1, decode(to_char(loaddate, 'Q'), '2', 1, 2)),
TO_NUMBER(TO_CHAR(loaddate, 'MM')),
TO_NUMBER(TO_CHAR(loaddate, 'Q')),
trunc((ROUND(TO_NUMBER(to_char(loaddate, 'DDD'))) +
ROUND(TO_NUMBER(to_char(trunc(loaddate, 'YYYY'), 'D'))) + 5) / 7),
TO_NUMBER(TO_CHAR(loaddate, 'YYYY')),
TO_NUMBER(TO_CHAR(loaddate, 'DD')),
TO_NUMBER(TO_CHAR(loaddate, 'D')),
TO_NUMBER(TO_CHAR(loaddate, 'DDD')),
1,
1,
1,
1,
1,
TO_NUMBER(TO_CHAR(loaddate, 'J')),
((TO_NUMBER(TO_CHAR(loaddate, 'YYYY')) + 4713) * 12) +
TO_NUMBER(TO_CHAR(loaddate, 'MM')),
((TO_NUMBER(TO_CHAR(loaddate, 'YYYY')) + 4713) * 4) +
TO_NUMBER(TO_CHAR(loaddate, 'Q')),
TO_NUMBER(TO_CHAR(loaddate, 'J')) / 7,
TO_NUMBER(TO_CHAR(loaddate, 'YYYY')) + 4713,
TO_CHAR(loaddate, 'Day'),
TO_CHAR(loaddate, 'Month'),
Decode(To_Char(loaddate, 'D'), '7', 'weekend', '6', 'weekend', 'weekday'),
Trunc(loaddate, 'DAY') + 1,
Decode(Last_Day(loaddate), loaddate, 'y', 'n'),
to_char(loaddate, 'YYYYMM'),
to_char(loaddate, 'YYYY') || ' Half' ||
Decode(TO_CHAR(loaddate, 'Q'), '1', 1, decode(to_char(loaddate, 'Q'), '2', 1, 2)),
TO_CHAR(loaddate, 'YYYY / MM'),
TO_CHAR(loaddate, 'YYYY') || ' Q ' || TRUNC(TO_NUMBER(TO_CHAR(loaddate, 'Q'))),
TO_CHAR(loaddate, 'YYYY') || ' Week' || TRUNC(TO_NUMBER(TO_CHAR(loaddate, 'WW'))),
TO_CHAR(loaddate, 'YYYY'));
If loaddate > to_Date('12/31/2015', 'mm/dd/yyyy') Then
Exit;
End If;
End Loop;
commit;
end Insert_W_DAY_D_PR;
=======================================
305.Informatica - how can we remove/optmize
source bottlenecks
using "query hints"
QUESTION #305
No best answer available. Please pick the good answer available
or submit your
answer.
January 08, 2007 16:29:36 #1
creativehuang Member Since: January 2007 Contribution: 5
RE: how can we remove/optmize source bottlenecks using...
Is this a SQL Server question or an Informatica question?
=======================================
Create indexes on the source table columns.
=======================================
First you must have proper indexes, and the table must be analyzed to gather statistics so the CBO can be used. You can get free documentation from Oracle Technet. Use the hints afterwards; they are powerful, so be careful with them.
=======================================
306.Informatica - how can we eliminate source
bottleneck using
query hint
QUESTION #306
No best answer available. Please pick the good answer available
or submit your
answer.
March 12, 2007 06:28:50 #1
sreedhark26 Member Since: January 2007 Contribution: 25
RE: how can we eliminate source bottleneck using query...
You can identify source bottlenecks by executing the read query directly against
the source database.
Copy the read query directly from the session log. Execute the query against the
source database with a
query tool such as isql. On Windows you can load the result of the query in a file.
On UNIX systems
you can load the result of the query in /dev/null.
Measure the query execution time and the time it takes for the query to return the
first row. If there is a
long delay between the two time measurements you can use an optimizer hint to
eliminate the source
bottleneck.
=======================================
307.Informatica - where from we get the source data
or how we
access the source data
QUESTION #307
No best answer available. Please pick the good answer available
or submit your
answer.
January 03, 2007 00:58:49 #1
phani
RE: where from we get the source data or how we access...
Hi,
We get source data in the form of Excel files, flat files, etc. By using the Source Analyzer we can access the source data.
=======================================
Hi, source data exists in OLTP systems in any form (flat files, relational databases, XML definitions). You can access the source data with any of the Source Qualifier transformations and the Normalizer transformation.
=======================================
308.Informatica - What are all the new features of
informatica 8.1?
QUESTION #308
No best answer available. Please pick the good answer available
or submit your
answer.
February 03, 2007 01:57:27 #1
Sonjoy
RE: What are all the new features of informatica 8.1?
1. Java Custom transformation support
2. HTTP transformation support
3. SuperGlue renamed to Metadata Manager
4. PowerAnalyzer renamed to Data Analyzer
5. Grid computing support
6. Pushdown optimization
=======================================
1. The PowerCenter 8 release has an "Append to Target file" feature.
2. The Java transformation was introduced.
3. User-defined functions.
4. The Midstream SQL transformation was added in 8.1.1, not in 8.1.
5. Informatica added a new web-based administrative console.
6. Management is centralized, which means services can be started and stopped on nodes via a central web interface.
=======================================
309.Informatica - Explain the pipeline partition with
real time
example?
QUESTION #309
No best answer available. Please pick the good answer available
or submit your
answer.
January 11, 2007 14:56:16 #1
saibabu Member Since: January 2007 Contribution: 14
RE: Explain the pipeline partition with real time exam...
A pipeline specifies the flow of data from source to target. Pipeline partitioning means partitioning the data based on some key values and loading the data to the target in concurrent mode, which improves session performance, i.e. the data loading time reduces. In real time we have some thousands of records to load to the targets every day, so pipeline partitioning definitely reduces the data loading time.
=======================================
310.Informatica - How to FTP a file to a remote
server?
QUESTION #310
No best answer available. Please pick the good answer available
or submit your
answer.
January 08, 2007 16:04:23 #1
creativehuang Member Since: January 2007 Contribution: 5
RE: How to FTP a file to a remote server?
ftp targetaddress
(change to your target directory)
ftp> ascii (or bin)
ftp> put (or get)
=======================================
Hi, you can transfer a file from one server to another. In Unix there is a utility, XCOMTCP, which transfers files from one server to another, but it has a lot of constraints: you need to mention the target server name and the directory name where you need to send the file, and the server directory should have write permission. Check the details in Unix by typing the MAN XCOMTCP command, which guides you, I guess.
=======================================
311.Informatica - What's the difference between
source and target
object definitions in Informatica?
QUESTION #311
No best answer available. Please pick the good answer available
or submit your
answer.
January 05, 2007 09:06:50 #1
Sravan Kumar
RE: What's the difference between source and target ob...
The source system is the system which provides the business data.
The target system is the system into which the data is loaded.
=======================================
The source definition is the structure of the source database that exists in the OLTP system; using the source definition you can extract the transactional data from the OLTP systems. The target definition is the structure given by the DBAs, populated from the source definition according to business rules, for the purpose of making effective decisions for the enterprise.
=======================================
Hi, what Saibabu wrote is correct. Source definition means defining the structure of the source from which we have to extract the data to transform and then load to the target. Target definition means defining the structure of the target (relational table or flat file).
=======================================
312.Informatica - how many types of sessions are
there in
informatica.please explain them.
QUESTION #312
No best answer available. Please pick the good answer available
or submit your
answer.
January 08, 2007 15:56:28 #1
creativehuang Member Since: January 2007 Contribution: 5
RE: how many types of sessions are there in informatic...
reusable nonusable session
=======================================
Total 10 "sessions" (task types):
1. Session: for mapping execution
2. Email: to send emails
3. Command: to execute OS commands
4. Control: fail/stop/abort
5. Event-Wait: for pre-defined or user-defined events
6. Event-Raise: to raise an active user-defined event
7. Decision: a condition to be evaluated for controlling the flow or process
8. Timer: to halt the process for a specific time
9. Worklet: a reusable task
10. Assignment: to assign values to worklet or workflow variables
=======================================
A session is a type of workflow task: a set of instructions that describes how to move data from sources to targets using a mapping.
There are two ways sessions run in Informatica:
1. Sequential: data moves one session after another from source to target.
2. Concurrent: the sessions run simultaneously from source to target.
=======================================
Hi Vidya,
The question above asks how many sessions there are; isn't your answer describing batches?
=======================================
Hi Sreedhark26, your answer is wrong; please don't misguide the members. What you wrote is the list of task types used by the Workflow Manager. There are two types of sessions: 1. non-reusable sessions, 2. reusable sessions.
=======================================
313.Informatica - What is meant by source is
changing
incrementally? explain with example
QUESTION #313
No best answer available. Please pick the good answer available
or submit your
answer.
January 09, 2007 07:30:59 #1
Sravan Kumar
RE: What is meant by source is changing incrementally?...
"Source is changing incrementally" means the data in the source keeps changing, and you capture those changes with timestamps, key ranges, or triggers (SCDs), then load the data incrementally. If we could not capture this source data incrementally, the loading process would be very difficult. For example, suppose on 09/01/07 we loaded all the data into the target. On 10/01/07 the source has been updated with some new rows; it would be very wasteful to load all the rows into the target again, so we capture only the data that has not yet been loaded and load just those changed rows into the target.
=======================================
A good example: think of HR data for a very big company; the data keeps changing every minute. We built a downstream system to capture a chunk of data for a specific purpose, say new hires. Every record in the source carries a timestamp; when we load data today we check for the records updated/inserted today and load only those (this avoids reprocessing all the data). We used this incremental refresh method to process such data. In fact most OLTP sources are incrementally/constantly changing.
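The timestamp-based incremental pull described above can be sketched in SQL; the table and column names below are illustrative, not taken from any real system:

```sql
-- Hypothetical source table EMP_SRC with an audit column LAST_UPD_TS,
-- and a control table LOAD_CONTROL recording the last successful load time.
-- Only rows changed since the previous run are extracted.
SELECT e.*
FROM   emp_src e
WHERE  e.last_upd_ts > (SELECT last_load_ts
                        FROM   load_control
                        WHERE  job_name = 'EMP_LOAD');
```

In Informatica such a filter usually goes into the Source Qualifier's source filter or SQL override, with the cutoff timestamp supplied through a mapping parameter.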
=======================================
314.Informatica - what is the diffrence between SCD
and
INCREMENTAL Aggregation?
QUESTION #314
No best answer available. Please pick the good answer available
or submit your
answer.
January 09, 2007 23:19:44 #1
srinivas
RE: what is the diffrence between SCD and INCREMENTAL ...
Hi,
In SCDs, dimensional data is stored; no aggregate calculations are done. We use SCDs in three ways:
1. Type 1: maintains current data only.
2. Type 2: maintains current data plus the complete history of records, tracked in one of three ways: a flag, a version number, or an effective date range.
3. Type 3: maintains current data plus one previous value.
Incremental aggregation, by contrast, stores aggregate values according to the user's requirements.
=======================================
Hi, SCD means "slowly changing dimensions". Since a dimension table maintains master data, its column values occasionally change; such tables are called SCD tables and the fields in them slowly changing dimensions. To maintain those changes we follow three methods:
1. SCD Type 1: maintains only current data.
2. SCD Type 2: maintains the whole history of the dimension; there are three ways to identify the current record: a current flag, a version number, or an effective date range.
3. SCD Type 3: maintains current data and one historical value.
Incremental aggregation: some requirements (daily, weekly, every 15 days, quarterly, ...) need to aggregate the values of certain columns. You do the same job every run (according to the requirement) and add the new aggregate value to the previous aggregate value (from the previous run) of those columns. This process is called incremental aggregation.
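As a rough SQL sketch of the incremental aggregation idea (adding the current run's values to the previously stored aggregates), with hypothetical SALES_AGG and SALES_STG tables:

```sql
-- SALES_AGG holds the running aggregates; SALES_STG holds only the new
-- (incremental) rows for this run. Existing groups are updated, new
-- groups are inserted -- roughly what the Aggregator's incremental
-- cache does internally.
MERGE INTO sales_agg a
USING (SELECT product_id, SUM(amount) AS run_amount
       FROM   sales_stg
       GROUP  BY product_id) s
ON    (a.product_id = s.product_id)
WHEN MATCHED THEN
  UPDATE SET a.total_amount = a.total_amount + s.run_amount
WHEN NOT MATCHED THEN
  INSERT (product_id, total_amount)
  VALUES (s.product_id, s.run_amount);
```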
=======================================
315.Informatica - when informatica 7.1.1 version was
introduced
into market
QUESTION #315
No best answer available. Please pick the good answer available
or submit your
answer.
March 12, 2007 06:19:29 #1
sreedhark26 Member Since: January 2007 Contribution: 25
RE: when informatica 7.1.1 version was introduced into...
2004
=======================================
316.Informatica - what is the advantages of
converting stored
procedures into Informatica mappings?
QUESTION #316
No best answer available. Please pick the good answer available
or submit your
answer.
January 18, 2007 12:41:55 #1
Gayathri84 Member Since: January 2007 Contribution: 2
RE: what is the advantages of converting stored proced...
Stored procedures are hard to maintain and debug; maintenance is simpler in Informatica, and it is a user-friendly tool. Given the logic, it is easier to create a mapping in Informatica than to write a stored procedure.
Thanks
Gayathri
=======================================
A stored procedure call is made through an ODBC connection over a network (sometimes the Informatica server resides on the same box as the database); since there is overhead in making the call, it is inherently slower.
=======================================
317.Informatica - How a LOOKUP is passive?
QUESTION #317
No best answer available. Please pick the good answer available
or submit your
answer.
January 19, 2007 15:54:06 #1
monicageller Member Since: January 2007 Contribution: 3
RE: How a LOOKUP is passive?
Hi,
An unconnected lookup is used for updating slowly changing dimensions: it determines whether a row already exists in the target, but it doesn't change the number of rows, so it is passive. Connected lookup transformations are used to get a related value based on some value or to perform a calculation; in either case they may or may not add columns, but they never change the row count, so they are passive too.
In the lookup SQL override property we can add a WHERE clause to the default SQL statement, but that doesn't change the number of rows passing through the transformation; it just reduces the number of rows included in the cache.
cheers
Monica.
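A minimal sketch of the lookup SQL override mentioned above, assuming a hypothetical CUSTOMER_DIM table with a CURRENT_FLAG column:

```sql
-- Default lookup query extended with a WHERE clause: the cache now holds
-- only current dimension rows, but the number of rows flowing through
-- the mapping pipeline is unchanged -- the lookup remains passive.
SELECT cust_key, cust_id, cust_name
FROM   customer_dim
WHERE  current_flag = 'Y'
```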
=======================================
The fact that a failed lookup does not drop the row is what makes it passive.
=======================================
A lookup is used to get a related value and perform calculations; it searches for a value in a relational table.
=======================================
318.Informatica - How to partition the
Session?(Interview
question of CTS)
QUESTION #318
No best answer available. Please pick the good answer available
or submit your
answer.
January 23, 2007 15:53:48 #1
kirangvr Member Since: January 2007 Contribution: 5
RE: How to partition the Session?(Interview question o...
- Round-robin: the PowerCenter Server distributes rows of data evenly to all partitions (e.g. at a Filter).
- Hash keys: distributes rows to the partitions by group (e.g. at Rank, Sorter, Joiner, and unsorted Aggregator).
- Key range: distributes rows based on a port or set of ports that you specify as the partition key (at source and target).
- Pass-through: processes data without redistributing rows among partitions (at any valid partition point).
Hope it helps you.
=======================================
When you create or edit a session you can change the partitioning information for
each pipeline in a
mapping. If the mapping contains multiple pipelines you can specify multiple
partitions in some
pipelines and single partitions in others. You update partitioning information using
the Partitions view
on the Mapping tab in the session properties.
You can configure the following information in the Partitions view on the Mapping
tab:
- Add and delete partition points.
- Enter a description for each partition.
- Specify the partition type at each partition point.
- Add a partition key and key ranges for certain partition types.
=======================================
By default when we create the session workflow creates pass-through partition
points at Source
Qualifier transformations and target instances.
=======================================
319.Informatica - which one is better performance
wise joiner or
lookup
QUESTION #319
No best answer available. Please pick the good answer available
or submit your
answer.
January 25, 2007 13:41:43 #1
anu
RE: which one is better performance wise joiner or loo...
Are you looking up a flat file or a database table? Generally a sorted Joiner is more effective than a Lookup on flat files, because a sorted Joiner uses a merge join and caches fewer rows, while a Lookup always caches the whole file; if the file is not sorted the two can be comparable. Lookups into a database table can be effective if the database can return sorted data fast and the amount of data is small, because the Lookup can then build its whole cache in memory. If the database responds slowly or a large amount of data is processed, lookup cache initialization can be really slow (the Lookup waits for the database and stores cached data on disk); then it can be better to use a sorted Joiner, which sends rows to its output as it reads them on input.
=======================================
320.Informatica - what is associated port in look up.
QUESTION #320
No best answer available. Please pick the good answer available
or submit your
answer.
February 01, 2007 16:08:08 #1
kirangvr Member Since: January 2007 Contribution: 5
RE: what is associated port in look up.
When you use a dynamic lookup cache you must associate each lookup/output port with an input/output port or a sequence ID. The PowerCenter Server uses the data in the associated port to insert or update rows in the lookup cache. The Designer associates the input/output ports with the lookup/output ports used in the lookup condition.
=======================================
Whenever you implement SCD Type 2 you use a dynamic cache; one port must then be specified for updating the cache, and that port is called the associated port.
=======================================
321.Informatica - what is session recovery?
QUESTION #321
No best answer available. Please pick the good answer available
or submit your
answer.
February 09, 2007 16:16:53 #1
pal
RE: what is session recovery?
Session recovery is used when you want the session to continue loading from the point where it stopped the last time it ran. For example, if the session failed after loading 10,000 records and you want loading to resume from record 10,001, you can use session recovery.
=======================================
Whenever a session fails, follow these steps:
If no commit was performed by the Informatica server, run the session again.
If at least one commit was performed by the session, recover the session.
If recovery is not possible, truncate the target table and load again.
The recovery option is set on the Informatica server.
=======================================
If the recovery option is not set and 10,000 records are committed, delete those records from the target table using audit fields like update_date, then reload.
=======================================
322.Informatica - what are the steps follow in
performance
tuning ?
QUESTION #322
No best answer available. Please pick the good answer available
or submit your
answer.
February 09, 2007 16:12:47 #1
pal
RE: what are the steps follow in performance tuning ?
Steps for performance tuning — identify the bottleneck:
1) target bottleneck
2) source bottleneck
3) mapping bottleneck, etc.
As developers we can only clear the bottlenecks at the mapping level and at the session level, for example:
1) Removing transformation errors.
2) Filtering the records at the earliest point.
3) Using sorted data before an Aggregator.
4) Using fewer of the transformations that use a cache.
5) Using an external loader such as SQL*Loader to load the data faster.
6) Fewer conversions, such as numeric to char and char to numeric.
7) Writing a SQL override instead of using a Filter, etc.
8) Increasing the network packet size.
9) Keeping all the source systems on the server machine to make it run faster.
pal.
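Points (2) and (7) above can be combined into one Source Qualifier override; this is only an illustrative sketch with made-up table and column names:

```sql
-- Filter at the earliest possible point: push the condition into the
-- source database so unneeded rows never cross the network or enter
-- the mapping pipeline.
SELECT order_id, cust_id, amount
FROM   orders
WHERE  order_status = 'OPEN'
  AND  order_date  >= TRUNC(SYSDATE) -- only today's rows
```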
=======================================
Hi friends, performance tuning means techniques for improving performance: 1. identify the bottlenecks (issues that reduce performance), 2. fix the bottlenecks. The hierarchy we follow in performance tuning is a) target, b) source, c) mapping, d) session, e) system. If anything here is wrong please tell me, because I am still in the learning stage.
=======================================
Hi friends,
Please clarify for me what is meant by "bottleneck", for example:
1) target bottleneck
2) source bottleneck
3) mapping bottleneck
I am not aware of what this bottleneck means. Thanks for your help in advance.
Thanks
Thana
=======================================
Bottle neck means drawbacks or problems
=======================================
Target commit intervals and Source - based commit intervals.
=======================================
323.Informatica - How to use incremental
aggregation in real time?
QUESTION #323
No best answer available. Please pick the good answer available
or submit your
answer.
February 21, 2007 22:48:09 #1
Hanu Ch Rao
RE: How to use incremental aggregation in real time?
The first time you run a session with incremental aggregation enabled, the server processes the entire source.
=======================================
324.Informatica - Which transformation replaces the
look up
transformation?
QUESTION #324
No best answer available. Please pick the good answer available
or submit your
answer.
February 21, 2007 08:11:18 #1
hegde
RE: Which transformation replaces the look up transfor...
Can you please clarify your question?
=======================================
Is there any transformation that could perform the function of a Lookup transformation, or any transformation we can use instead of a Lookup transformation?
=======================================
The question is misleading... do you mean the Lookup transformation will become obsolete because a new transformation does the same thing in a better way? In which version?
=======================================
Yes, a Joiner with a source outer join, or a Stored Procedure, can replace a Lookup transformation.
=======================================
You can use a Joiner transformation instead of a Lookup. A Lookup works only on relational tables that are not sources in your Source Analyzer, but with a Joiner we can join relational and flat-file sources on different platforms.
=======================================
Vicky,
Can you please validate your answer with an example?
=======================================
Hi,
With a Lookup transformation we look up the required fields in the target and compare them with the source ports. Without a Lookup we can do this with a Joiner: a master outer join means the matched rows of both sources plus the unmatched rows from the master table, so you can get all the changes.
If you have any doubt, reply back and I will give you another general example.
=======================================
Please don't mislead: a master outer join keeps all rows of data from the detail source and the matching rows from the master source. It discards the unmatched rows from the master source.
=======================================
I believe the Joiner transformation is only for joining heterogeneous sources, e.g. when one source is a flat file and another is relational. My question is how a Joiner transformation "overrides" the Lookup, since the two transformations perform different actions.
Cheers
Thana
=======================================
A lookup is essentially an outer join, so a Lookup transformation can easily be replaced by a Joiner in any tool.
=======================================
325.Informatica - What is Dataware house key?
QUESTION #325
No best answer available. Please pick the good answer available
or submit your
answer.
February 21, 2007 23:26:44 #1
Ravichandra
RE: What is Dataware house key?
The data warehouse key is the warehouse key, or surrogate key, i.e. generated by a Sequence Generator. This key is used when loading data into the fact table from the dimensions.
=======================================
is it really called Data Warehouse Key ? is there such a term ?
=======================================
I have never heard of this type of key in DWH terminology!
=======================================
How is the data warehouse key called a surrogate key?
=======================================
There is a data warehouse key; please go through the Kimball book. Every data warehouse key should be a surrogate key, because the data warehouse DBA must have the flexibility to respond to changing descriptions and abnormal conditions in the raw data.
=======================================
The surrogate key is known as the data warehouse key. A surrogate key is used as a unique key in the warehouse, where duplicate keys may exist due to Type 2 changes.
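For illustration, a surrogate (warehouse) key can also be generated with a plain Oracle sequence instead of Informatica's Sequence Generator; all object names here are hypothetical:

```sql
-- The sequence supplies the warehouse key; the natural key (cust_id)
-- may repeat across Type 2 versions, but cust_key stays unique.
CREATE SEQUENCE cust_dim_seq START WITH 1 INCREMENT BY 1;

INSERT INTO customer_dim (cust_key, cust_id, cust_name, version)
VALUES (cust_dim_seq.NEXTVAL, :cust_id, :cust_name, 1);
```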
=======================================
326.Informatica - What is inline view?
QUESTION #326
No best answer available. Please pick the good answer available
or submit your
answer.
February 21, 2007 22:40:51 #1
Hanu Ch Rao
RE: What is inline view?
The inline view is a construct in Oracle SQL where you can place a query in the
SQL FROM clause just
as if the query was a table name.
A common use for in-line views in Oracle SQL is to simplify complex queries by
removing join
operations and condensing several separate queries into a single query
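A classic example of the join-condensing use just described (the standard EMP demo table is assumed): find employees earning more than their department's average, in a single statement.

```sql
-- The inline view in the FROM clause computes each department's average
-- salary; the outer query joins to it as if it were a table.
SELECT e.ename, e.sal, d.avg_sal
FROM   emp e,
       (SELECT deptno, AVG(sal) AS avg_sal
        FROM   emp
        GROUP  BY deptno) d
WHERE  e.deptno = d.deptno
  AND  e.sal > d.avg_sal;
```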
=======================================
Used in Oracle to simplify queries, e.g. to get the top earner:
select * from (select ename from emp order by sal desc) where rownum < 2
=======================================
327.Informatica - What is the exact difference
between joiner and
lookup transformation
QUESTION #327
No best answer available. Please pick the good answer available
or submit your
answer.
February 20, 2007 12:21:06 #1
pal
RE: What is the exact difference between joiner and lo...
A Joiner is used to join data from different sources, while a Lookup is used to get related values from another table or to check for updates etc. in the target table.
For a lookup to work the table need not exist in the mapping, but for a Joiner to work the table has to exist in the mapping.
pal.
=======================================
a lookup may be unconnected while a joiner may not
=======================================
A lookup table need not participate in the mapping, and a lookup can do non-equi joins.
A Joiner's tables must participate in the mapping, and a Joiner does only equi-joins (normal and outer).
=======================================
328.Informatica - Explain about scheduling real time
in
informatica
QUESTION #328
No best answer available. Please pick the good answer available
or submit your
answer.
March 01, 2007 06:45:31 #1
sanghala Member Since: April 2006 Contribution: 111
RE: Explain about scheduling real time in informatica
Scheduling of Informatica jobs can be done in the following ways:
- Informatica Workflow Manager
- Using cron in Unix
- Using the OpCon scheduler
=======================================
329.Informatica - How do you handle two sessions
in Informatica
QUESTION #329 Can anyone tell me the option between two sessions so that if the previous session executes successfully, the next session runs?
complete document
No best answer available. Please pick the good answer available or submit your
answer.
February 28, 2007 02:09:40 #1
Divya Ramanathan
RE: How do you handle two sessions in Informatica
=======================================
You can handle two sessions by using a link condition (e.g. $PrevSessionName.Status = SUCCEEDED), or you can place a Decision task between them. Since only one session depends on the other, a link condition is enough.
=======================================
By giving a link condition like $PrevSessionName.Status = SUCCEEDED.
=======================================
Where exactly do we need to use this link condition ($PrevSessionName.Status = SUCCEEDED)?
=======================================
You can drag and drop more than one session into a workflow, and link them in two different ways: sequential linking and concurrent linking.
With sequential linking the workflow runs the sessions one after another, and you can start from whichever session you require.
With concurrent linking the sessions start simultaneously, so you can't run just any single session you want.
=======================================
330.Informatica - How do you change columns to rows in
Informatica
QUESTION #330
No best answer available. Please pick the good answer available
or submit your
answer.
March 02, 2007 11:38:19 #1
Hanu Ch Rao
RE: How do you change change column to row in Informat...
Hi,
We can achieve this with the Normalizer transformation.
=======================================
First ask what type of data needs to change from columns to rows in Informatica. Then you can use Expression and Aggregator transformations; the Aggregator groups the duplicate rows.
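What the Normalizer does for a denormalized source can be sketched in SQL with a UNION ALL; the QTR_SALES table here is hypothetical:

```sql
-- QTR_SALES(acct_id, q1, q2, q3, q4): four amount columns per row
-- become four rows, one per quarter.
SELECT acct_id, 1 AS quarter, q1 AS amount FROM qtr_sales
UNION ALL
SELECT acct_id, 2, q2 FROM qtr_sales
UNION ALL
SELECT acct_id, 3, q3 FROM qtr_sales
UNION ALL
SELECT acct_id, 4, q4 FROM qtr_sales;
```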
=======================================
331.Informatica - write a query to retrieve the latest
records from
the table sorted by version(scd).
QUESTION #331
No best answer available. Please pick the good answer available
or submit your
answer.
February 27, 2007 06:05:37 #1
sunil
RE: write a query to retrieve the latest records from ...
You can write a query with an inline view clause: compare each version against the highest version and you get your result.
=======================================
Hi Sunil,
Can you please explain your answer in some detail?
=======================================
Hi,
Assume you put a surrogate key in the target (a Dept table), say p_key, together with version, dno, and loc fields. Then:

select a.p_key, a.dno, a.loc, a.version
from t_dept a
where a.version = (select max(b.version) from t_dept b where a.dno = b.dno)

This is the query; if you write it in a lookup it retrieves the latest (max) version from the target, which improves performance.
=======================================
select * from (select acct.*, rank() over (partition by ch_key_id order by version desc) as rnk from acct) where rnk = 1
=======================================
select business_key, max(version) from tablename group by business_key
=======================================
332.Informatica - Explain about Informatica server
process that
how it works relates to mapping variables?
QUESTION #332
No best answer available. Please pick the good answer available
or submit your
answer.
March 09, 2007 09:52:01 #1
hemasundarnalco Member Since: December 2006 Contribution: 2
RE: Explain about Informatica server process that how ...
Informatica primarily uses the Load Manager and the Data Transformation Manager (DTM) to perform extraction, transformation, and loading. The Load Manager reads parameters and variables related to the session, mapping, and server, and passes the mapping parameter and variable information to the DTM. The DTM uses this information to perform the data movement from source to target.
=======================================
The PowerCenter Server holds two different values for a mapping variable during a session run:
- Start value of a mapping variable
- Current value of a mapping variable
Start Value
The start value is the value of the variable at the start of the session. The start value could be a value defined in the parameter file for the variable, a value saved in the repository from the previous run of the session, a user-defined initial value for the variable, or the default value based on the variable datatype.
The PowerCenter Server looks for the start value in the following order:
1. Value in parameter file
2. Value saved in the repository
3. Initial value
4. Default value
Current Value
The current value is the value of the variable as the session progresses. When a session starts, the current value of a variable is the same as the start value. As the session progresses, the PowerCenter Server calculates the current value using a variable function that you set for the variable. Unlike the start value of a mapping variable, the current value can change as the PowerCenter Server evaluates it as each row passes through the mapping.
=======================================
First the Load Manager starts the session; it performs verifications and validations of variables and manages post-session tasks such as mail. Then it creates the DTM process. The DTM in turn creates a master thread, which creates the remaining threads:
read threads
write threads
transformation threads
pre- and post-session threads, etc.
Finally the DTM hands control back to the Load Manager after writing to the target.
=======================================
333.Informatica - What are the types of loading in
Informatica?
QUESTION #333
No best answer available. Please pick the good answer available
or submit your
answer.
March 02, 2007 11:29:17 #1
Hanu Ch Rao
RE: What are the types of loading in Informatica?
Hi,
In Informatica there are mainly 2 types of loading:
1. Normal
2. Bulk
(and you could also mention incremental loading).
Normal means it loads record by record and writes logs for it, so it takes time.
Bulk load means it loads a number of records at a time to the target; it bypasses the logs and ignores the tracing level, so it takes less time to load data to the target.
=======================================
2 types of loading:
1. Normal
2. Bulk
Normal loading creates a database log, so it is very slow.
Bulk loading bypasses the database log, so it is very fast, but it disables the constraints.
=======================================
Hanu,
I agree with you. You mentioned incremental loading; can you please let me know in detail about this type of loading?
Thanks in advance.
=======================================
Loadings are 3 types
1. One time data loading
2. Complete data loading
3. Incremental loading
=======================================
Two types:
Normal: data recovery is easy since it creates logs/indexes.
Bulk: data recovery is not possible since the data is loaded without logging.
=======================================
334.Informatica - What are the steps involved in to
get source
from OLTP system to staging area
QUESTION #334
No best answer available. Please pick the good answer available
or submit your
answer.
March 13, 2007 16:56:07 #1
reddy
RE: What are the steps involved in to get source from ...
Data profiling and data cleansing are used to verify the data types.
=======================================
Hi Reddy,
Could you tell me what you mean by "data profile"?
Thanks in advance.
=======================================
Go to the Source Analyzer and import the source databases.
Go to the Warehouse Designer and import the target definitions (or create them using Generate SQL).
Go to the Mapping Designer, drag and drop the source and target definitions, link the ports properly, and save to the repository.
Thank you.
=======================================
335.Informatica - What is use of event waiter?
QUESTION #335
No best answer available. Please pick the good answer available
or submit your
answer.
March 05, 2007 12:38:14 #1
sreedhark26 Member Since: January 2007 Contribution: 25
RE: What is use of event waiter?
Event-Wait task: the Event-Wait task waits for an event to occur. Once the event triggers, the PowerCenter Server continues executing the rest of the workflow.
=======================================
Event wait is of two types:
1. Pre-defined event: this type of event wait waits for an indicator file to trigger it.
2. User-defined event: this type of event wait waits for an Event-Raise to trigger it.
Thanks, Ravinder
=======================================
The Event-Wait task is a file watcher: whenever a trigger file is touched/created, this task kicks off the rest of the sessions in the batch. I have used only the user-defined Event-Wait task.
=======================================
Event wait: holds the workflow until it receives further instruction or until the delay mentioned by the user elapses.
mentioned by the user
Cheers
Sithu
sithusithu@hotmail.com
=======================================
336.Informatica - which transformation can perform
the non equi
join?
QUESTION #336
No best answer available. Please pick the good answer available
or submit your
answer.
March 12, 2007 01:47:43 #1
kasireddy
RE: which transformation can perform the non equi join...
Hi
Lookup transformation.
=======================================
Here the Lookup does not support outer joins but does support non-equi joins; the Joiner supports outer joins but not non-equi joins.
=======================================
Only the Lookup can do a non-equi join; the Joiner does only normal, detail outer, master outer, and full outer joins.
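The SQL analogue of a non-equi lookup condition (e.g. looking up a rate band by amount range); table and column names are illustrative:

```sql
-- A Joiner condition allows only equality, but a Lookup condition can
-- use >=, <=, etc., which corresponds to this range join.
SELECT t.txn_id, b.band_name
FROM   txns t,
       rate_bands b
WHERE  t.amount BETWEEN b.lo_amount AND b.hi_amount;
```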
=======================================
It is Lookup
Cheers
Sithu
sithusithu@hotmail.com
=======================================
337.Informatica - Which objects cannot be used in a
mapplet
transformation
QUESTION #337
No best answer available. Please pick the good answer available
or submit your
answer.
March 07, 2007 08:35:11 #1
tirumalesh
RE: Which objects cannot be used in a mapplet transfor...
A non-reusable Sequence Generator cannot be used in a mapplet.
=======================================
You cannot include the following objects in a mapplet:
Normalizer transformations
Cobol sources
XML Source Qualifier transformations
XML sources
Target definitions
Pre- and post-session stored procedures
Other mapplets
=======================================
When you add transformations to a mapplet, keep the following restrictions in mind:
If you use a Sequence Generator transformation, you must use a reusable Sequence Generator transformation.
If you use a Stored Procedure transformation, you must configure the Stored Procedure Type to be Normal.
You cannot include PowerMart 3.5-style LOOKUP functions in a mapplet.
You cannot include the following objects in a mapplet:
Normalizer transformations
COBOL sources
XML Source Qualifier transformations
XML sources
Target definitions
Other mapplets
=======================================
Joiner transformation
Normalizer transformations
Cobol sources
XML Source Qualifier transformations
XML sources
Target definitions
Pre- and post-session stored procedures
Other mapplets
=======================================
We cannot use the objects/transformations listed above in a mapplet.
=======================================
338.Informatica - How to do aggregation with out
using
AGGREGAROR Transformation ?
QUESTION #338
No best answer available. Please pick the good answer available
or submit your
answer.
March 09, 2007 09:37:56 #1
hemasundarnalco Member Since: December 2006 Contribution: 2
RE: How to do aggregation with out using AGGREGAROR Tr...
Write a SQL query to perform the aggregation in the Source Qualifier transformation.
=======================================
Overwrite the SQL Query at Source Qualifier transformation
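As an illustrative sketch of such an override (the table and column names below are hypothetical, not from the original answers), the query entered in the SQ's "SQL Query" property could look like:

```sql
-- Aggregation pushed into the Source Qualifier's SQL override,
-- so no Aggregator transformation is needed downstream.
SELECT DEPT_ID,
       SUM(SALARY) AS TOTAL_SALARY,
       COUNT(*)    AS EMP_COUNT
FROM   EMPLOYEES
GROUP  BY DEPT_ID
```

The Source Qualifier's connected output ports must then match the selected columns in number and order.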
=======================================
339.Informatica - how do you test mapping and
what is associate
port?
QUESTION #339
No best answer available. Please pick the good answer available
or submit your
answer.
March 26, 2007 11:00:14 #1
sreedhark26 Member Since: January 2007 Contribution: 25
RE: how do you test mapping and what is associate port...
hi,
You can test a mapping in the Mapping Designer by using the Debugger. In the Debugger you can test the first instance and the next instance. The associated port is an output port.
sreedhar
=======================================
You can also test it by specifying the number of test-load rows in the session properties.
=======================================
340.Informatica - Why can't we use normalizer
transformation in
mapplet?
QUESTION #340
No best answer available. Please pick the good answer available
or submit your
answer.
March 26, 2007 10:56:14 #1
sreedhark26 Member Since: January 2007 Contribution: 25
RE: Why can't we use normalizer transformation in mapp...
Hi,
The Normalizer transformation is tied to COBOL sources: when you import a COBOL source, the Source Analyzer automatically creates a Normalizer transformation for it. That is why you cannot use it in a mapplet.
sreedhar
=======================================
341.Informatica - Strategy Transformation
QUESTION #341 By using Update Strategy Transformation we
use to maintain
historical data using Type2 & Type3. By both of this which is
better to use?
Why?
No best answer available. Please pick the good answer available or submit your
answer.
March 26, 2007 09:12:45 #1
sai
RE: Strategy Transformation
=======================================
Using Type 2 you can maintain the complete historical data along with the current data; using Type 3 you can maintain only one level of historical data along with the current data. Based upon the requirement we have to choose whichever suits best.
=======================================
342.Informatica - Source Qualifier in Informatica
QUESTION #342 What is the Technical reason for having Source
Qualifier in
Informatica? Can a mapping be implemented without it? (Please
don't mention
the functionality of SQ) but the main reason why a mapping
can't do without
it...
No best answer available. Please pick the good answer available or submit your
answer.
March 27, 2007 06:32:28 #1
chowdary
RE: Source Qualifier in Informatica
=======================================
In Informatica, the Source Qualifier reads data from the sources. For reading data from relational sources, SQL is mandatory.
=======================================
The SQ reads data from the sources when the Informatica server runs the session; data cannot flow to any other transformation without this SQ transformation.
It has other qualities too: it can be used as a filter, as a sorter, to select distinct values, and as a joiner if the data comes from the same source.
=======================================
In Informatica the Source Qualifier acts as a staging area, and it converts the source data types into its own (native) data types.
=======================================
343.Informatica - what is the difference between
mapplet and
reusable Transformation?
QUESTION #343
No best answer available. Please pick the good answer available
or submit your
answer.
March 30, 2007 02:38:57 #1
veera_kk Member Since: March 2007 Contribution: 3
RE: what is the difference between mapplet and reusabl...
A set of reusable transformations is called a mapplet, whereas a single reusable transformation is called a reusable transformation.
=======================================
344.Informatica - What is the difference between
SQL Overriding
in Source qualifier and Lookup transformation?
QUESTION #344
No best answer available. Please pick the good answer available
or submit your
answer.
April 08, 2007 11:10:38 #1
ggk.krishna Member Since: February 2007 Contribution: 12
RE: What is the difference between SQL Overriding in ...
Hi,
1. In a lookup SQL override, if you add or subtract ports from the SELECT statement, the session fails.
2. In a lookup, if you override the ORDER BY statement, the session fails if the ORDER BY does not contain the condition ports in the same order they appear in the lookup condition.
=======================================
You can use a SQL override in a lookup if you have:
1. more than one lookup table, or
2. a WHERE condition to reduce the records in the cache.
=======================================
If you write a query in the Source Qualifier (overriding it in the SQL editor) and press Validate, you can recognise whether the query you have written is right or wrong. But in a lookup override, if the query is wrong and you press the Validate button, you cannot recognise it; only when you run the session do you get an error message, and the session fails.
=======================================
345.Informatica - How can you call trigger in stored
procedure
transformation
QUESTION #345
No best answer available. Please pick the good answer available
or submit your
answer.
May 07, 2007 22:54:43 #1
hanug Member Since: June 2006 Contribution: 24
RE: How can you call trigger in stored procedure trans...
Hi:
A trigger cannot be called from a stored procedure. A trigger executes implicitly when you perform a DML operation on the table or view (an INSTEAD OF trigger in the case of views).
You can find the difference between a trigger and a stored procedure in any database documentation.
Hanu.
=======================================
346.Informatica - How to assign a work flow to
multiple servers?
QUESTION #346 I have multiple servers, I want to assign a
work flow to
multiple servers
No best answer available. Please pick the good answer available or submit your
answer.
October 19, 2007 15:08:36 #1
krishna
RE: How to assign a work flow to multiple servers?
=======================================
The Informatica server uses the Load Manager process to run the workflow; the Load Manager assigns the workflow processes to the multiple servers.
=======================================
347.Informatica - what types of Errors occur when
you run a
session, can you describe them with real time
example
QUESTION #347
No best answer available. Please pick the good answer available
or submit your
answer.
January 24, 2008 19:12:31 #1
kasisarath Member Since: January 2008 Contribution: 6
RE: what types of Errors occur when you run a session, can you describe
them with real time
example
There are several errors you will get. A couple of them are:
1. Informatica failed to connect to the database.
2. The source file does not exist in the specified location.
3. Parameters were not initialized from the parameter file.
4. Incompatible piped data types, etc.
=======================================
348.Informatica - how do you add and delete header
, footer
records from flat file during load to oracle?
QUESTION #348
No best answer available. Please pick the good answer available
or submit your
answer.
September 21, 2007 10:10:41 #1
Abhishek
RE: how do you add and delete header , footer records ...
We can add header and footer records in two ways.
1) Within the Informatica session we can sequence the data so that the header flows in first and the footer flows in last. This only holds true when the header and the footer have the same format as the detail record.
2) As soon as the session that generates the detail-record file finishes, we can call a unix script or unix command through a Command task, which concatenates the header file, detail file, and footer file and generates the required file.
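The second approach can be sketched as a tiny unix fragment that a Command task might invoke; all the file names below are hypothetical placeholders:

```shell
# Build a final load file from separately produced header, detail and
# footer files; the names and contents here are made up for illustration.
echo 'HDR|20090401'      >  header.txt
printf '1|A\n2|B\n'      >  detail.txt
echo 'TRL|2'             >  footer.txt
cat header.txt detail.txt footer.txt > final_load.txt   # the concat step
cat final_load.txt
```

In practice only the `cat` line belongs in the Command task; the other lines just fabricate sample inputs for the sketch.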
=======================================
349.Informatica - Can we update target table without
using
update strategy transformation? why?
QUESTION #349
No best answer available. Please pick the good answer available
or submit your
answer.
May 22, 2007 07:00:56 #1
rahul
RE: Can we update target table without using update s...
Yes, we can update the target table by using the session properties.
There are options for this in the session properties (for example, treating source rows as Update).
=======================================
Using :TU, the target update override.
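For illustration, a target update override (set in the target definition's properties) references the incoming ports with the :TU qualifier; the table and port names here are hypothetical:

```sql
UPDATE T_EMPLOYEE
SET    EMP_NAME = :TU.EMP_NAME,
       SALARY   = :TU.SALARY
WHERE  EMP_ID   = :TU.EMP_ID
```

With this in place, the session can update the target even though the mapping contains no Update Strategy transformation, provided the session's treat-source-rows-as option is set appropriately.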
=======================================
350.Informatica - How to create slowly changing
dimension in
informatica?
QUESTION #350
No best answer available. Please pick the good answer available
or submit your
answer.
May 07, 2007 23:02:45 #1
hanug Member Since: June 2006 Contribution: 24
RE: How to create slowly changing dimension in informa...
We can create them manually, or use the Slowly Changing Dimension Wizard to avoid the hassle.
Use the Slowly Changing Dimension Wizard and make the necessary changes as per your logic.
Hanu.
=======================================
351.Informatica - What are the general reasons of
session failure
with Look Up having Dynamic Cache?
QUESTION #351
No best answer available. Please pick the good answer available
or submit your
answer.
April 24, 2007 15:52:27 #1
shanthi1 Member Since: March 2007 Contribution: 6
RE: What are the general reasons of session failure wi...
hi,
If you are using a dynamic lookup and it tries to return more than one value (a multiple match), the session fails.
=======================================
352.Informatica - what is the difference between
source qualifier
transformation and filter transformation?
QUESTION #352
No best answer available. Please pick the good answer available
or submit your
answer.
April 24, 2007 15:59:43 #1
shanthi1 Member Since: March 2007 Contribution: 6
RE: what is the difference between source qualifier tr...
There is no SQL override in the Filter transformation, whereas in the Source Qualifier we have a SQL override; in the SQ transformation we also have options like SELECT DISTINCT, join and filter conditions, sorted ports, etc.
=======================================
In the Source Qualifier we can filter records only from relational sources. In the Filter transformation we filter those records which we need to update or process further. Simply put, before the Filter transformation the data from the source system may or may not already have been processed (by an Expression transformation, etc.).
=======================================
By using the Source Qualifier transformation we can filter out records only from relational sources, but by using the Filter transformation we can filter out records from any source.
=======================================
By using the Source Qualifier we can filter out records only from relational sources, but by using the Filter transformation we can filter out records from any source.
In the Filter transformation we can use any expression that evaluates to TRUE or FALSE as the filter condition. The same cannot be done using the Source Qualifier.
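For example, a filter condition is simply a boolean expression over the incoming ports, written in the Informatica expression language (the port names here are hypothetical):

```
IIF(ISNULL(EMP_ID), FALSE, SALARY > 5000)
```

Rows for which the expression evaluates to FALSE are dropped by the Filter transformation; IIF and ISNULL are standard expression-language functions.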
=======================================
A Source Qualifier transformation is the starting point in the mapping: the incoming data, i.e. the source data, is extracted from this transformation after connecting to the source database.
A Filter transformation is placed in the mapping pipeline in order to pass on only those records that satisfy the specific condition it defines.
Of course, the same purpose can be served by the Source Qualifier transformation if it is extracting data from a relational source; whereas if the data is to be extracted from a flat file, we cannot do it using the Source Qualifier.
=======================================
353.Informatica - How do you recover a session or
folder if you
accidentally dropped them?
QUESTION #353
No best answer available. Please pick the good answer available
or submit your
answer.
May 07, 2007 22:58:05 #1
hanug Member Since: June 2006 Contribution: 24
RE: How do you recover a session or folder if you acci...
You can find your backup and restore from the backup. If you don't have a backup you have lost everything; you can't get it back.
That's why we should always take a backup of the objects that we create.
Hanu.
=======================================
354.Informatica - How do you automatically execute
a batch or
session?
QUESTION #354
No best answer available. Please pick the good answer available
or submit your
answer.
May 07, 2007 22:51:18 #1
hanug Member Since: June 2006 Contribution: 24
RE: How do you automatically execute a batch or sessio...
You can use scripting (either a UNIX shell script or Windows DOS scripting) and then schedule the script.
Hanu.
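The above can be sketched as a small script plus a cron entry; the service, domain, user, folder, and workflow names are all hypothetical, and the exact pmcmd flags vary by PowerCenter version (check `pmcmd help startworkflow` on your installation):

```shell
#!/bin/sh
# Start a workflow from the command line via pmcmd, then let cron schedule it.
# Guarded so this sketch also runs where pmcmd is not installed.
CMD='pmcmd startworkflow -sv MY_INT_SERVICE -d MY_DOMAIN -u admin -p secret -f MY_FOLDER wf_daily_load'
if command -v pmcmd >/dev/null 2>&1; then
    $CMD
else
    echo "demo only: would run -> $CMD"
fi
# Example crontab entry to run this script every day at 02:00:
# 0 2 * * * /path/to/run_wf.sh >> /tmp/run_wf.log 2>&1
```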
=======================================
355.Informatica - What is the best way to modify a
mapping if the
target table name is changed?
QUESTION #355
No best answer available. Please pick the good answer available
or submit your
answer.
June 20, 2007 03:12:31 #1
yuva010
RE: What is the best way to modify a mapping if the ta...
There is no best way as such, but you have to incorporate the following steps -
1. Change the name in Mapping Designer
2. Refresh mapping with right click on session
3. Reset the target connections.
4. Save.
=======================================
356.Informatica - what is homogeneous
transformation?
QUESTION #356
No best answer available. Please pick the good answer available
or submit your
answer.
May 11, 2007 18:57:56 #1
vinod madala
RE: what is homogeneous transformation?
Pulling data from the same type of sources using the same database link is called homogeneous. The key phrase is 'same database link' (that's the system DSN we use).
=======================================
357.Informatica - If a Mapping is running slow,
What steps you
will take, to correct it?
QUESTION #357
No best answer available. Please pick the good answer available
or submit your
answer.
May 11, 2007 19:05:32 #1
vinodh259 Member Since: May 2007 Contribution: 2
RE: If a Mapping is running slow, What steps you will ...
Provide optimizer hints to Informatica, such as forcing it to use indexes, synonyms, etc. The best technique is to find which transformation is making the mapping slow, build a query with that logic in the database, and see the database execution plan; for example, the EXPLAIN PLAN command in Oracle helps you. For more information, try the help topics in the Designer: in the help, look for optimizing hints.
=======================================
358.Informatica - How can you join two tables
without using
joiner and sql override transformations?
QUESTION #358
No best answer available. Please pick the good answer available
or submit your
answer.
May 07, 2007 22:48:49 #1
hanug Member Since: June 2006 Contribution: 24
RE: How can you join two tables without using joiner a...
You can use the Lookup transformation to perform the join. Here you may not get the same results as a SQL override (data coming from the same source) or a Joiner (same source or different sources), because a lookup takes a single record in case of a multiple match.
If one of the tables contains a single record you don't need a SQL override or a Joiner to join records; let it perform a Cartesian product. This way you don't need either (SQL override or Joiner).
Hanu.
=======================================
You can join two tables within the same database by using a lookup query override.
=======================================
If the sources are homogeneous we can use the Source Qualifier; if the sources have the same structure we can use a Union transformation.
=======================================
359.Informatica - What does Check-In and Check-
Out option refer
to in the mapping designer?
QUESTION #359
No best answer available. Please pick the good answer available
or submit your
answer.
May 15, 2007 11:21:21 #1
ramgan_tryst Member Since: May 2007 Contribution: 2
RE: What does Check-In and Check-Out option refer to i...
Check-In and Check-Out refer to versioning your mapping. It is a way of maintaining the changes you have made, much like using VSS or CVS. When you right-click your mapping you have an option called Versioning, if you have that facility enabled.
=======================================
360.Informatica - Where and Why do we use Joiner
Cache?
QUESTION #360
No best answer available. Please pick the good answer available
or submit your
answer.
June 28, 2007 04:57:02 #1
mohammed haneef
RE: Where and Why do we use Joiner Cache?
Hi,
The joiner cache is used in the Joiner transformation to improve performance. When using the joiner cache, the Informatica server first reads the data from the master source and builds the index and data caches from the master rows. After building the caches, the Joiner transformation reads records from the detail source to perform the joins.
=======================================
361.Informatica - where does the records goes which
does not
satisfy condition in filter transformation?
QUESTION #361
No best answer available. Please pick the good answer available
or submit your
answer.
July 06, 2007 05:14:44 #1
pkonakalla Member Since: May 2007 Contribution: 2
RE: where does the records goes which does not satisfy...
It goes to the default group. If you connect the default group to an output, PowerCenter processes the data; otherwise it doesn't process the default group.
=======================================
The rows which do not satisfy the filter condition are discarded. They do not appear in the session log file or reject files.
=======================================
There is no default group in the Filter transformation. The records which do not satisfy the filter condition are discarded and not written to the reject file or session log file.
=======================================
362.Informatica - How can we access MAINFRAME
tables in
INFORMATICA as a source ?
QUESTION #362 e.g.: Suppose a table EMP is in a MAINFRAME; then how can we access this table as a SOURCE TABLE in Informatica?
No best answer available. Please pick the good answer available or submit your
answer.
May 26, 2007 14:32:55 #1
vishnukirank Member Since: May 2007 Contribution: 1
RE: How can we access MAINFRAME tables in INFORMATICA...
=======================================
Use Informatica PowerConnect to connect to external systems like mainframes and import the source tables.
=======================================
Use the Normalizer transformation to handle the mainframe (COBOL) sources.
=======================================
363.Informatica - How to run a workflow without
using GUI i.e,
Worlflow Manager, Workflow Monitor and
pmcmd?
QUESTION #363
No best answer available. Please pick the good answer available
or submit your
answer.
August 03, 2007 03:56:21 #1
balaetl Member Since: November 2005 Contribution: 3
RE: How to run a workflow without using GUI i.e, Worlf...
pmcmd is not a GUI. It is a command you can use within a unix script to run the workflow.
=======================================
Unless the job is scheduled you cannot manually run a workflow without using a
GUI.
=======================================
364.Informatica - How to implement de-
normalization concept in
Informatica Mappings?
QUESTION #364
No best answer available. Please pick the good answer available
or submit your
answer.
January 24, 2008 19:04:25 #1
kasisarath Member Since: January 2008 Contribution: 6
RE: How to implement de-normalization concept in Informatica Mappings?
Use the Normalizer transformation; this transformation is used to normalize data.
=======================================
365.Informatica - What are the Data Cleansing Tools
used in the
DWH?What are the Data Profiling Tools used for
DWh?
QUESTION #365
No best answer available. Please pick the good answer available
or submit your
answer.
June 21, 2007 04:23:27 #1
Shashikumar
RE: What are the Data Cleansing Tools used in the DWH?...
Data cleansing tool: Trillium
Data profiling tool: Informatica Profiler
=======================================
366.Informatica - how do you use Normalizer to
convert columns
into rows ?
QUESTION #366
No best answer available. Please pick the good answer available
or submit your
answer.
February 18, 2008 03:47:43 #1
Deepak Rajkumar Member Since: June 2007 Contribution: 4
RE: how do you use Normalizer to convert columns into rows ?
Using the Normalizer: in the Normalizer properties we can add columns, and for a column we can specify the occurs level; this converts the repeating columns into rows.
=======================================
367.Informatica - how to use the shared cache
feature in look up
transformation
QUESTION #367
No best answer available. Please pick the good answer available
or submit your
answer.
September 28, 2007 11:26:00 #1
chandrarekha
RE: how to use the shared cache feature in look up tra...
If you are using a single lookup, and you are using it only for reading data, you can use a static cache.
=======================================
Instead of creating multiple cache files, the Informatica server creates only one cache file for all the lookups within the mapping that have the shared-cache option selected.
=======================================
368.Informatica - What is Repository size, What is its
min and
max size?
QUESTION #368
No best answer available. Please pick the good answer available
or submit your
answer.
September 28, 2007 11:24:21 #1
chandrarekha
RE: What is Repository size, What is its min and max ...
10GB
=======================================
369.Informatica - What will be the way to send only
duplicate
records to the Target?
QUESTION #369
No best answer available. Please pick the good answer available
or submit your
answer.
July 23, 2007 08:23:22 #1
Samir Desai
RE: What will be the way to send only duplicate record...
You can use the following query in the SQ -->
SELECT col_1, col_2, ..., col_n FROM source GROUP BY col_1, col_2, ..., col_n HAVING COUNT(*) > 1
to get only the duplicate records in the target.
Best Regards,
Samir Desai.
=======================================
If you take, for example, an EMP table having duplicate records, the query is:
SELECT * FROM EMP WHERE EMPNO IN (SELECT EMPNO FROM EMP GROUP BY EMPNO HAVING COUNT(*) > 1);
Sanjeeva Reddy
=======================================
370.Informatica - What are the Commit & Commit
Intervals?
QUESTION #370
No best answer available. Please pick the good answer available
or submit your
answer.
July 31, 2007 13:05:53 #1
rasmi Member Since: June 2007 Contribution: 20
RE: What are the Commit & Commit Intervals?
The commit interval is the interval (number of target rows) at which the Informatica server commits the data loaded into the target.
=======================================
371.Informatica - Explain Session Recovery Process?
QUESTION #371
No best answer available. Please pick the good answer available
or submit your
answer.
September 28, 2007 11:21:50 #1
chandrarekha
RE: Explain Session Recovery Process?
There are three cases in session recovery:
If the Informatica server performed no commit, run the session again.
If at least one commit was performed, perform recovery.
If performing recovery is not possible, truncate the target table and run the session again.
Rekha
=======================================
hi,
When the Informatica server starts a recovery session, it reads the OPB_SRVR_RECOVERY table and notes the row ID of the last row committed to the target database. When it starts the recovery process, it resumes from the next row ID. For session recovery to take place, at least one commit must have been executed.
=======================================
372.Informatica - Explain pmcmd?
QUESTION #372
No best answer available. Please pick the good answer available
or submit your
answer.
July 31, 2007 11:54:03 #1
rasmi Member Since: June 2007 Contribution: 20
RE: Explain pmcmd?
pmcmd performs the following tasks:
1. start and stop sessions and batches
2. process session recovery
3. stop the Informatica server
4. check whether the Informatica server is working or not
=======================================
pmcmd means PowerMart command, which is used to perform tasks from the command prompt rather than from the Informatica GUI.
=======================================
It is a command-line program. It performs the following tasks:
start and stop sessions and batches;
stop the Informatica server;
check whether the server is working or not.
=======================================
hi,
PMCMD is a program command-line utility to communicate with the Informatica server.
PMCMD performs the following tasks:
1) start and stop batches and sessions
2) recover sessions
3) stop the Informatica server
4) schedule sessions by shell scripting
5) schedule sessions by using operating-system scheduling tools like CRON
=======================================
373.Informatica - What does the first column of bad
file (rejected
rows) indicate? Explain
QUESTION #373
No best answer available. Please pick the good answer available
or submit your
answer.
September 24, 2007 14:24:17 #1
chandrarekha
RE: What does the first column of bad file (rejected r...
The first column of the bad file indicates the row indicator and the second column indicates the column indicator.
Row indicator: tells the writer what to do with the row of wrong data.
Row indicator - meaning - rejected by:
0 - Insert - target/writer
1 - Update - target/writer
2 - Delete - target/writer
3 - Reject - writer
If the row indicator is 3, the writer rejects the row because the update strategy expression marked it as reject.
=======================================
374.Informatica - What is the size of data mart?
QUESTION #374
No best answer available. Please pick the good answer available
or submit your
answer.
August 22, 2007 05:24:31 #1
hpadala
RE: What is the size of data mart?
A data mart is around 1 GB; it is based on your project.
=======================================
Hi,
A data mart is a part of the data warehouse. For example, in an organization we can manage employee personal information as one data mart and project information as another data mart; one data warehouse may have any number of data marts. The size of a data mart depends on your business needs; it varies from business to business. Sometimes an OLTP database may act as a data mart for the warehouse.
Cheers
Thana
=======================================
375.Informatica - What is meant by named cache?At
what
situation we can use it?
QUESTION #375
No best answer available. Please pick the good answer available
or submit your
answer.
August 24, 2007 13:54:43 #1
sn3508 Member Since: April 2006 Contribution: 20
RE: What is meant by named cache?At what situati...
By default there is no name for the cache in a Lookup transformation, and every time you run the session the cache is rebuilt. If you give it a name it is called a persistent cache. In this case, the first time you run the session the cache is built, and the same cache is used for any number of runs. This means the cache doesn't reflect any changes even if the lookup source is changed. You can rebuild it by deleting the cache.
=======================================
376.Informatica - How do you define fact less Fact
Table in
Informatica
QUESTION #376
September 18, 2007 00:30:12 #1
ramakrishna
RE: How do you define fact less Fact Table in Informat...
A fact table without measures is called a factless fact table.
=======================================
A fact table without measures is called a factless fact table.
Factless fact tables are used to capture events or transactions (for example, attendance
on a given date) where no numeric measure is recorded; the occurrence itself is the fact.
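The idea can be illustrated with a small SQLite sketch (table and column names are hypothetical, and SQLite stands in for the warehouse database): the factless fact table holds only keys, and "measures" come from counting event rows.

```python
import sqlite3

# Hypothetical schema: a factless fact table records student-class
# attendance events -- it has only foreign keys, no numeric measures.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_student (student_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE dim_class   (class_id   INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE fact_attendance (          -- factless: keys only
        student_id INTEGER REFERENCES dim_student(student_id),
        class_id   INTEGER REFERENCES dim_class(class_id),
        date_key   TEXT
    );
""")
conn.executemany("INSERT INTO fact_attendance VALUES (?, ?, ?)",
                 [(1, 10, "2024-01-01"), (2, 10, "2024-01-01"),
                  (1, 10, "2024-01-02")])

# The "measure" is derived by counting event rows, not by summing a column.
attended = conn.execute(
    "SELECT COUNT(*) FROM fact_attendance WHERE class_id = 10").fetchone()[0]
```

Here `attended` is 3: three attendance events exist for class 10, even though no column in the fact table stores a number.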
=======================================
377.Informatica - what are the main issues while
working with
flat files as source and as targets ?
QUESTION #377
February 06, 2008 09:25:16 #1
rasmi Member Since: June 2007 Contribution: 20
RE: what are the main issues while working with flat files as source and as
targets ?
We need to specify the correct path in the session and mention whether the file is
'direct' or 'indirect'. Keep the file in the exact path you have specified in the session.
-regards
rasmi
=======================================
1. We cannot use SQL override. We have to use transformations for all our requirements.
2. Testing flat files is a very tedious job.
3. The file format (source/target definition) should match exactly with the format
of the data file. Most of the time, erroneous results come when the data file layout
is not in sync with the actual file:
(i) Your data file may be fixed-width but the definition is delimited ---> truncated data.
(ii) Your data file as well as the definition is delimited, but you specify a wrong
delimiter: (a) a delimiter other than the one present in the actual file, or (b) a
delimiter that also appears as a character in some field of the file ---> wrong data again.
(iii) Not specifying the NULL character properly may result in wrong data.
(iv) There are other settings/attributes while creating the file definition about which
one should be very careful.
4. If you miss the link to any column of the target, then all the data will be placed in
the wrong fields; that missed column won't exist in the target data file.
Please keep adding to this list. There are tremendous challenges, which can be
overcome by being a bit careful.
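Point (ii) above can be reproduced outside Informatica with Python's csv module; the sample record is hypothetical. Note how the wrong delimiter both collapses fields and how a comma inside a field would mislead a comma-delimited parse.

```python
import csv
import io

# A pipe-delimited record parsed with the wrong delimiter (comma) yields
# garbage fields -- the "wrong data" failure mode described above. The
# comma inside "Smith, John" is exactly case (ii)(b): a delimiter
# character appearing as data.
line = "101|Smith, John|Sales\n"

wrong = next(csv.reader(io.StringIO(line), delimiter=","))  # splits on the comma
right = next(csv.reader(io.StringIO(line), delimiter="|"))  # splits on the pipe
```

With the comma, the row comes back as two mangled fields; with the pipe, it comes back as the intended three fields, the second containing the embedded comma.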
=======================================
378.Informatica - When do you use Normal Loading
and the Bulk
Loading, Tell the difference?
QUESTION #378
September 19, 2007 11:35:05 #1
rama krishna
RE: When do you use Normal Loading and the Bulk Loadin...
If we use SQL*Loader connections, then it is better to go for bulk loading; and if we
use ODBC connections for the source and target definitions, then it is better to go for
normal loading.
If we use bulk loading, the session performance will be increased, because the data
bypasses the database logs, so performance improves automatically.
=======================================
Normal load: records are loaded one by one, and the server writes a log entry for each
record, so it takes more time to load the data.
Bulk load: a batch of records is loaded at a time, and no log files or tracing levels
are written, so it takes less time.
=======================================
You would use normal loading when the target table is indexed, and bulk loading when
the target table is not indexed. Running a bulk load against an indexed table will
cause the session to fail.
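The row-by-row versus batched trade-off can be sketched in Python, with SQLite standing in for the target database. This illustrates the loading pattern only, not Informatica's actual writer.

```python
import sqlite3

rows = [(i, f"name{i}") for i in range(100)]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE normal_tgt (id INTEGER, name TEXT)")
conn.execute("CREATE TABLE bulk_tgt   (id INTEGER, name TEXT)")

# "Normal" style: one statement and one commit per row -- more overhead
# per record, but each row is individually recoverable.
for r in rows:
    conn.execute("INSERT INTO normal_tgt VALUES (?, ?)", r)
    conn.commit()

# "Bulk" style: one batched call and a single commit -- minimal
# per-record overhead, analogous to bypassing per-row logging.
conn.executemany("INSERT INTO bulk_tgt VALUES (?, ?)", rows)
conn.commit()

n_normal = conn.execute("SELECT COUNT(*) FROM normal_tgt").fetchone()[0]
n_bulk   = conn.execute("SELECT COUNT(*) FROM bulk_tgt").fetchone()[0]
```

Both targets end up with the same 100 rows; the difference is purely in how much per-row work the database does along the way.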
=======================================
379.Informatica - What is key factor in BRD?
QUESTION #379
January 24, 2008 19:05:37 #1
kasisarath Member Since: January 2008 Contribution: 6
RE: What is key factor in BRD?
Could you please elaborate on what BRD is?
=======================================
380.Informatica - how do you measure slowly
changing
dimensions using lookup table
QUESTION #380
September 24, 2007 13:41:04 #1
vemurisasidhar Member Since: August 2007 Contribution: 10
RE: how do you measure slowly changing dimensions usin...
The lookup table is used to split the data by comparing the source and target data;
each row is then routed accordingly to either an update or an insert.
=======================================
381.Informatica - Have you implmented Lookup in
your
mapping, If yes give some example?
QUESTION #381
September 26, 2007 13:39:09 #1
vemurisasidhar Member Since: August 2007 Contribution: 10
RE: Have you implmented Lookup in your mapping, If yes...
We do. We have to update or insert a row in the target depending on the data from the
sources. So, in order to split the rows into updates or inserts into the target table,
we use a Lookup transformation that references the target table and compares it with
the source data.
=======================================
382.Informatica - Which SDLC suits best for the
datawarehousing
project.
QUESTION #382
September 12, 2007 17:28:25 #1
anjanroy Member Since: September 2005 Contribution: 4
RE: Which SDLC suits best for the datawarehousing proj...
Data warehousing projects are different from traditional OLTP projects. First of all,
they are ongoing: a data warehouse project is never "complete". Here, most of the time
the business users will say, "give us the data and then we will tell you what we want".
So a traditional waterfall model is not the optimal SDLC approach.
The best approach here is phased iteration, where you implement and deliver projects in
small, manageable chunks (90 days ~ 1 quarter) and keep maturing your data warehouse.
=======================================
383.Informatica - What is the difference between
source
definition database and source qualifier?
QUESTION #383
September 24, 2007 13:26:52 #1
vemurisasidhar Member Since: August 2007 Contribution: 10
RE: What is the difference between source definition d...
A source definition contains the datatypes used in the original database from which the
source is extracted, whereas the Source Qualifier converts the source definition
datatypes into Informatica datatypes, which are easier to work with.
=======================================
384.Informatica - What is the logic will you
implement to load
data into a fact table from n dimension tables?
QUESTION #384
September 24, 2007 13:24:01 #1
vemurisasidhar Member Since: August 2007 Contribution: 10
RE: What is the logic will you implement to load data ...
We can do this by using the mapping wizards. There are basically two types:
1) Getting Started wizard
2) SCD (Slowly Changing Dimension) wizard
The Getting Started wizard is used when there is no need to change the previous data;
the SCD wizard can hold the historical data.
=======================================
385.Informatica - What is a Shortcut and What is the
difference
between a Shortcut and a Reusable Transformation?
QUESTION #385
January 29, 2008 23:32:01 #1
Sant_parkash Member Since: October 2007 Contribution: 22
RE: What is a Shortcut and What is the difference between a Shortcut and a
Reusable
Transformation?
A reusable transformation can only be used within its folder, but a shortcut can be
used anywhere in the repository and points to the actual transformation.
=======================================
To add to this: a shortcut can point to objects in a shared folder only.
=======================================
A shortcut is a reference (link) to an object in a shared folder; these are commonly
used for sources and targets that are to be shared between different environments or
projects. A shortcut is created by assigning 'Shared' status to a folder within the
Repository Manager and then dragging objects from this folder into another open folder;
this provides a single point of control/reference for the object, so multiple projects
don't all have to import sources and targets into their local folders. A reusable
transformation is usually something that is kept local to a folder; an example would be
a reusable Sequence Generator for allocating warehouse customer IDs, which would be
useful if you were loading customer details from multiple source systems and allocating
unique IDs to each new source key. Many mappings could use the same sequence, and the
sessions would all draw from the same continuous pool of sequence numbers generated.
=======================================
386.Informatica - In what all transformations the
mapplets cant be
used in informatica??
QUESTION #386
October 06, 2007 17:50:21 #1
naina
RE: In what all transformations the mapplets cant be u...
Mapplets cannot use the following transformations:
XML Source Qualifier
Normalizer
non-reusable Sequence Generator (a mapplet can use only a reusable Sequence Generator)
=======================================
387.Informatica - Eliminate Duplicate Records
QUESTION #387 Hi
I have 10,000 records in a flat file, of which 100 are duplicate
records.
We want to eliminate those records; which is the best method
to follow?
Regards
Mahesh Reddy
Submitted by: vivek1708
In order to be able to delete those entries, I think you'll have to write SQL queries
against the database table using the rownum/rowid concept.
Or, using the Sorter transformation with the distinct option, load the unique rows into
a temp table, followed by a truncate on the original table and moving the data back to
it from the temp table.
Hope it helps.
Above answer was rated as good by the following members:
ayappan.a
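The rowid-based deletion described above can be sketched against SQLite (table and column names are hypothetical; Oracle would use ROWID/ROWNUM in the same spirit): keep the lowest rowid in each duplicate group and delete the rest.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE src (emp_id INTEGER, name TEXT)")
# Five rows, two of which are duplicates of earlier rows.
conn.executemany("INSERT INTO src VALUES (?, ?)",
                 [(1, "a"), (2, "b"), (1, "a"), (3, "c"), (2, "b")])

# Keep the first physical row (MIN(rowid)) per duplicate group,
# delete every other copy.
conn.execute("""
    DELETE FROM src
    WHERE rowid NOT IN (SELECT MIN(rowid) FROM src GROUP BY emp_id, name)
""")
remaining = conn.execute("SELECT COUNT(*) FROM src").fetchone()[0]
```

Of the five inserted rows, three distinct ones survive the delete.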
=======================================
You can put a Sorter transformation after the Source Qualifier and enable the Distinct
property in the Sorter transformation's properties.
Thanks
kumar
=======================================
=======================================
Use an Aggregator transformation, grouping on the primary keys.
=======================================
388.Informatica - what is the difference between
reusable
transformation and mapplets?
QUESTION #388
November 20, 2007 03:19:32 #1
ramesh raju
RE: what is the difference between reusable transforma...
A reusable transformation is a single transformation that can be reused, whereas a
mapplet is a set of transformations that can be reused together.
=======================================
389.Informatica - What is the functionality of
Lookup
Transformation
QUESTION #389 (connected & unconnected)
November 17, 2007 06:59:15 #1
Thananjayan Member Since: November 2007 Contribution: 15
RE: What is the functionality of Lookup Transformation...
=======================================
A Lookup transformation compares the source rows with the specified lookup table and
forwards the looked-up value along with each matched record to the next transformation.
If there is no match, it returns NULL.
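As a rough analogy only (not Informatica code), a connected lookup behaves like a keyed dictionary probe, with None standing in for NULL on a miss; the department cache below is hypothetical.

```python
# Illustrative only: the lookup cache maps the lookup condition's key
# to the return value, the way a lookup caches its table.
lookup_cache = {101: "Sales", 102: "HR"}   # hypothetical DEPT lookup

def lookup_dept(dept_id):
    # dict.get returns None on a miss, mirroring the NULL a lookup
    # returns when no row satisfies the lookup condition.
    return lookup_cache.get(dept_id)

matched = lookup_dept(101)    # a matching row: the looked-up value
unmatched = lookup_dept(999)  # no match: None, i.e. NULL
```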
=======================================
390.Informatica - How do you maintain Historical
data and how
to retrieve the historical data?
QUESTION #390
October 23, 2007 12:02:59 #1
ravi
RE: How do you maintain Historical data and how to ret...
By using Update Strategy transformations.
=======================================
You can maintain historical data by designing the mapping using the slowly changing
dimension (SCD) types.
If you need to insert new data and update old data, it is best to go for an Update
Strategy.
If you need to maintain the history of the data, for example when the cost of a product
changes frequently but you would like to maintain all the rate history, go for SCD
Type 2.
The design changes as per your requirement. If you make your question clearer, I can
provide more information.
Cheers
Thana
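A minimal sketch of the SCD Type 2 idea above, in plain Python with a hypothetical (product_id, rate, start, end) layout: instead of overwriting a changed rate, the current row is expired and a new version is appended, preserving the full rate history.

```python
# One "current" row per product has end = None; closed-out versions
# carry the date on which they were superseded.
history = [
    {"product_id": 1, "rate": 10.0, "start": "2024-01-01", "end": None},
]

def apply_rate_change(history, product_id, new_rate, change_date):
    for row in history:
        if row["product_id"] == product_id and row["end"] is None:
            if row["rate"] == new_rate:
                return                      # unchanged: nothing to do
            row["end"] = change_date        # expire the current version
    # Append the new current version, keeping the old row for history.
    history.append({"product_id": product_id, "rate": new_rate,
                    "start": change_date, "end": None})

apply_rate_change(history, 1, 12.5, "2024-06-01")
```

After the change, the table holds two versions of product 1: the old 10.0 rate, closed out on 2024-06-01, and the new 12.5 rate as the current row.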
=======================================
391.Informatica - What is difference between cbl (constraint based
commit) and target based commit? When we use cbl?
QUESTION #391
January 16, 2008 01:30:51 #1
say2joshi Member Since: April 2007 Contribution: 3
RE: What is difference between cbl (constaint based commit) and target based
commit?When we
use cbl?
Could you please clarify your question? CBL stands for constraint-based loading.
thanks
=======================================
CBL means constraint-based loading: the data is loaded into the target tables based on
their constraints. For example, if we want to load the EMP and DEPT data, it first
loads the data of DEPT and then EMP, because DEPT is the parent table and EMP is the
child table.
Simply put, it loads the parent table first, then the child table.
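The parent-before-child ordering can be demonstrated with SQLite's foreign-key enforcement (the DEPT/EMP schema is the classic example, simplified here): loading the child first fails, loading the parent first succeeds.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # enforce the constraint
conn.execute("CREATE TABLE dept (deptno INTEGER PRIMARY KEY, dname TEXT)")
conn.execute("""CREATE TABLE emp (empno INTEGER PRIMARY KEY, ename TEXT,
                deptno INTEGER REFERENCES dept(deptno))""")

# Child-first: the EMP row references a DEPT row that does not exist yet.
try:
    conn.execute("INSERT INTO emp VALUES (1, 'SMITH', 10)")
    child_first_ok = True
except sqlite3.IntegrityError:
    child_first_ok = False

# Parent-first: the order that constraint-based loading guarantees.
conn.execute("INSERT INTO dept VALUES (10, 'ACCOUNTING')")
conn.execute("INSERT INTO emp VALUES (1, 'SMITH', 10)")
loaded = conn.execute("SELECT COUNT(*) FROM emp").fetchone()[0]
```

The first insert raises a foreign-key violation; once DEPT is loaded, the same EMP row loads cleanly.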
=======================================
392.Informatica - Repository deletion
QUESTION #392 what happens when a repository is deleted?
If it is deleted for some time and if we want to delete it
permanently?
where is it stored (address of the file)?
November 28, 2007 22:36:19 #1
chandrarekha Member Since: May 2007 Contribution: 14
RE: Repository deletion
=======================================
I too want to know the answer to this question.
I tried to delete the repository, and it was deleted; but after that, when I tried to
create a repository with the same name I had deleted earlier, it showed that the
repository still exists in the target database.
We have to delete the repository in the target database.
The repository is stored in a database. Go to the Repository Admin Console, right-click
on the repository, then choose Delete. A dialog box is displayed; fill in the user name
(the database user where your repository resides), give the password, and fill in all
the fields. Thereafter you can delete it. If you have any queries, please mail me at
sajjan.s25@gmail.com
=======================================
393.Informatica - What is pre-session and post-session?
QUESTION #393
November 13, 2007 23:55:12 #1
vizaik Member Since: March 2007 Contribution: 30
RE: What is pre-session and post-session?
Pre-session and post-session refer to the commands or SQL configured on a session that
run before the session starts (pre-session) and after it completes (post-session); for
example, pre-session SQL can truncate a staging table, and a post-session command can
archive the target file.
=======================================
394.Informatica - What is Informatica basic data
flow?
QUESTION #394
November 20, 2007 07:04:44 #1
Nick
RE: What is Informatica basic data flow?
Informatica's basic data flow means extraction, transformation, and loading (ETL) of
data from the source to the target.
Cheers
Nick
=======================================
395.Informatica - which activities can be performed
using the
repository manager?
QUESTION #395
October 27, 2007 00:47:25 #1
karuna
RE: which activities can be performed using the reposi...
Using the Repository Manager, we can create new folders for existing repositories and
manage the repository from it.
=======================================
Using the Repository Manager:
*We can create folders under the repository
*Create sub-folders and manage folders
*Create users and user groups
*Set security/access privileges for the users
and many more...
=======================================
You can use the Repository Manager to perform the following tasks:
=======================================
396.Informatica - what is the economic comparision
of all the
Informatica versions?
QUESTION #396
April 10, 2008 04:59:08 #1
sri.kal Member Since: January 2008 Contribution: 6
RE: what is the economic comparision of all the Informatica versions?
Version controlling
=======================================
The economic comparison is nothing but the price tag of the available Informatica
versions.
=======================================
397.Informatica - why do we need lookup sql
override? Do we
write sql override in lookup with special aim?
QUESTION #397
November 13, 2007 23:47:42 #1
vizaik Member Since: March 2007 Contribution: 30
RE: why do we need lookup sql override? Do we write sq...
Yes, a SQL override in a lookup is used to look up more than one value from more than
one table.
=======================================
You can join data from multiple tables in the same database by using a lookup override.
You can use a SQL override:
1. To join more than one lookup table in the mapping
2. To filter records in the cache and remove unwanted data
=======================================
A lookup override can be used to get specific records (using filters in the WHERE
clause) from the lookup table. The advantage is that the whole table need not be
looked up.
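The filtering benefit can be sketched with SQLite (table and status values are hypothetical): the override's WHERE clause shrinks what the default query would have cached.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (cust_id INTEGER, name TEXT, status TEXT)")
conn.executemany("INSERT INTO customer VALUES (?, ?, ?)",
                 [(1, "a", "ACTIVE"), (2, "b", "INACTIVE"), (3, "c", "ACTIVE")])

# Default lookup query: every row goes into the cache.
default_cache = conn.execute(
    "SELECT cust_id, name FROM customer").fetchall()

# Override with a WHERE clause: only the rows the mapping needs are cached.
override_cache = conn.execute(
    "SELECT cust_id, name FROM customer WHERE status = 'ACTIVE'").fetchall()
```

The default query caches all three rows; the override caches only the two active customers, which is exactly the "remove unwanted data" benefit described above.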
=======================================
398.Informatica - What are tracing levels in
transformation?
QUESTION #398
November 19, 2007 00:18:10 #1
Abhishek Shukla
RE: What are tracing levels in transformation?
The tracing level controls the information recorded about your mapping and
transformations. There are 4 kinds of tracing levels, each giving more or less
information depending on its characteristics.
Thanks
Abhishek Shukla
=======================================
The tracing level, in the case of Informatica, specifies the level of detail of
information recorded in the session log file while executing the workflow.
4 types of tracing levels are supported:
1. Normal: specifies initialization and status information, summarization of the
success rows and target rows, and information about rows skipped due to
transformation errors.
2. Terse: specifies initialization information and notification of rejected data only
(less detail than Normal).
3. Verbose Initialization: in addition to Normal tracing, specifies the location of the
data cache and index cache files that are created, and detailed transformation
statistics for each and every transformation within the mapping.
4. Verbose Data: along with Verbose Initialization, records each and every record
processed by the Informatica server.
For better performance of mapping execution, the tracing level should be specified as
Terse. Verbose Initialization and Verbose Data are used for debugging purposes.
=======================================