Scenarios
Scenario 1:
Input File
ID Name Salary dept_no
1 AA 26000 1
2 BB 35000 2
3 CC 28000 3
4 dd 24000 1
5 ee 31000 2
6 ff 45000 3
7 xx 20000 4
8 yy 35000 2
9 zz 32000 4
e.g. record:
ID Name salary dept_no
1 AA 1
2 BB 35000
Scenario 2:
The target is a file. For each value that appears more than once, replace its last occurrence with NULL.
Src
1 A 10
2 B 20
3 B 20
4 C 30
5 D 30
6 E 30
Tgt
1 A 10
2 B 20
3 B NULL
4 C 30
5 D 30
6 E NULL
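In a graph this is typically a sort on the value followed by a Scan/Rollup that detects the last record of each value group. As a language-neutral sketch of the rule (the ID/Name/Value record layout is taken from the sample data), in Python:

```python
from collections import Counter

def null_last_duplicate(rows):
    """Blank out the value of the last occurrence of every duplicated value."""
    counts = Counter(val for _, _, val in rows)
    seen = Counter()
    out = []
    for rec_id, name, val in rows:
        seen[val] += 1
        # The last occurrence of a value that occurs more than once becomes None (NULL).
        is_last_dup = counts[val] > 1 and seen[val] == counts[val]
        out.append((rec_id, name, None if is_last_dup else val))
    return out

src = [(1, "A", 10), (2, "B", 20), (3, "B", 20),
       (4, "C", 30), (5, "D", 30), (6, "E", 30)]
tgt = null_last_duplicate(src)
# rows 3 and 6 now carry None; all other values are untouched
```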
Scenario 3:
src: Id sal
1 300
2 400
3 500
4 500
tgt: Id sal total
1 300 1700
2 400 1700
3 500 1700
4 500 1700
Scenario 4:
Source table
name No
A 10
B 20
C 30
Output of Target table will be like this:
Name No
A 30
B 20
C 10
Scenario 5:
source
A
B
C
D
E
Target
AOOOO
OBOOO
OOCOO
OOODO
OOOOE
Scenario 6:
My table has only one column containing number data. I want to fetch only the prime
numbers from the column. For example:
Src Col1 -- 1,2,3,4,5,6,7,8,9
Tgt Col1 -- 2,3,5,7
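In Ab Initio this would be a filter on a primality test; the test itself can be sketched in Python with simple trial division:

```python
def is_prime(n):
    """Trial division up to sqrt(n); 0 and 1 are not prime."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

src = [1, 2, 3, 4, 5, 6, 7, 8, 9]
tgt = [n for n in src if is_prime(n)]  # → [2, 3, 5, 7]
```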
Scenario 7:
Src.
10
10
10
20
20
30
Trg
10 1
10 2
10 3
20 1
20 2
30 1
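This is a classic Scan pattern: a per-group counter that restarts whenever the key value changes. A Python sketch of the same logic (input assumed sorted by the key, as in the sample):

```python
def rank_in_group(values):
    """Emit (value, rank) where rank restarts at 1 for each new key value."""
    out = []
    prev, rank = object(), 0   # sentinel so the first record always starts a group
    for v in values:
        rank = rank + 1 if v == prev else 1
        out.append((v, rank))
        prev = v
    return out

src = [10, 10, 10, 20, 20, 30]
# rank_in_group(src) → [(10, 1), (10, 2), (10, 3), (20, 1), (20, 2), (30, 1)]
```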
Scenario 8:
I/P
a,1
b,2
c,3
O/P
abc,1
abc,2
abc,3
Scenario 9:
Ip File
ID Name
1 a
2 b
2 b
3 c
4 d
4 d
4 d
5 e
5 e
6 f
Output 1
ID Name Count
1 a 1
3 c 1
6 f 1
Output 2
ID Name Count
2 b 2
4 d 3
5 e 2
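A Rollup on (ID, Name) producing a count, followed by routing count == 1 records to Output 1 and count > 1 records to Output 2. The same split in Python:

```python
from collections import Counter

def split_unique_duplicate(rows):
    """Route records with count == 1 to one output, count > 1 to the other."""
    counts = Counter(rows)     # one entry per distinct (ID, Name), in first-seen order
    out1 = [(i, n, c) for (i, n), c in counts.items() if c == 1]
    out2 = [(i, n, c) for (i, n), c in counts.items() if c > 1]
    return out1, out2
```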
Scenario 10:
Calculate runs per over.
Input File
Ball_no Runs
1 1
2 1
3 0
4 0
5 6
6 1
1 0
2 4
3 1
4 0
5 0
6 1
Output File
Over_no Total_runs
1 (Sum of first over)
2 (Sum of second over)
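Here Ball_no restarts at 1 for each over, so the over boundary must be detected from the data itself (e.g. with a Scan), after which the runs are rolled up per over. A Python sketch:

```python
def runs_per_over(balls):
    """balls: (ball_no, runs) pairs; ball_no restarting at 1 marks a new over."""
    totals = []
    for ball_no, runs in balls:
        if ball_no == 1:            # over boundary detected from the data
            totals.append(0)
        totals[-1] += runs
    return [(over + 1, total) for over, total in enumerate(totals)]

balls = [(1, 1), (2, 1), (3, 0), (4, 0), (5, 6), (6, 1),
         (1, 0), (2, 4), (3, 1), (4, 0), (5, 0), (6, 1)]
# runs_per_over(balls) → [(1, 9), (2, 6)]
```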
Scenario 11:
Input File
Ball_no Runs
1 1
2 1
3 0
4 0
5 6
6 1
7 0
8 4
9 1
10 0
11 0
12 1
Output File
Over_no Total_runs
1 (Sum of first over)
2 (Sum of second over)
Scenario 12:
Calculate the Cumulative Salary
Input File
ID Name Sal
101 abc 10000
102 xyz 12000
103 pqr 15000
104 lmn 10000
105 jkl 20000
106 wxy 18000
Output File
ID Name Sal Cumulative Sal
101 abc 10000 10000
102 xyz 12000 22000
103 pqr 15000 37000
104 lmn 10000 47000
105 jkl 20000 67000
106 wxy 18000 85000
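A Scan component keeps a running total across records; the equivalent logic in Python:

```python
def cumulative_salary(rows):
    """Scan-style running total over Sal, appended as a new column."""
    total = 0
    out = []
    for emp_id, name, sal in rows:
        total += sal
        out.append((emp_id, name, sal, total))
    return out

src = [(101, "abc", 10000), (102, "xyz", 12000), (103, "pqr", 15000),
       (104, "lmn", 10000), (105, "jkl", 20000), (106, "wxy", 18000)]
```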
Scenario 13:
Input File 1
ID
1
2
3
Input File 2
Name
a
b
c
Output:
Record
1,a
2,b
3,c
(Hint : Use one of the transform components)
Scenario 14:
Find the nth highest salary from the employee table using the Rollup component. The graph should be
generic enough that n can be a user input.
Scenario 15:
Source:
TABLE_NAME VALUE
TABA 1
TABA 2
TABA 3
TABA 4
TABB 7
TABB 8
TABC 1
TABC 2
TABC 5
Output:
TABLE_NAME VAL1 VAL2 VAL3 VAL4
TABA 1 2 3 4
TABB 7 8
TABC 1 2 5
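This is a rows-to-columns pivot, typically a Rollup that collects the values per key. A Python sketch, padding each group to four value columns (TABA, with the most values, sets the width):

```python
def pivot_values(rows, ncols=4):
    """Collect VALUEs per TABLE_NAME, padding with None up to ncols columns."""
    groups = {}
    for tab, val in rows:
        groups.setdefault(tab, []).append(val)
    return [(tab, *(vals + [None] * (ncols - len(vals))))
            for tab, vals in groups.items()]

src = [("TABA", 1), ("TABA", 2), ("TABA", 3), ("TABA", 4),
       ("TABB", 7), ("TABB", 8),
       ("TABC", 1), ("TABC", 2), ("TABC", 5)]
```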
Scenario 16:
Input: per-card transaction records (Card No, Amount, Date).
Output: the same records with a subtotal row appended after each card's transactions -
Card No Amount Date
101 1000 1-1-15
101 1500 1-2-15
101 1000 1-3-15
101 3500
102 1000 2-1-15
102 2000 2-2-15
102 3000 2-3-15
102 6000
Scenario 17:
The following input value is of type string(11) --> AAAA1014CSE
Auto-assign the values to the output DML fields. How can we do that? I don't want to use
string_substring and a type cast for each field.
output dml:
record
string(4) name;
decimal(4) rollnum;
string(3) dept;
end;
output record:
name = AAAA
rollnum=1014
dept=CSE
Scenario 18:
You have two different fields coming from two different flows; the output should contain them joined with a comma separator.
file1
1
2
3
file2:
a
b
c
output:
file
1,a
2,b
3,c
Scenario 19:
Scenario 20:
You have to assign a unique sequence number to each group of records, based on a key.
Input:
abc, 01/01/2015
abc, 01/02/2015
def, 01/01/2015
def, 01/02/2015
Output:
1, abc, 01/01/2015
1, abc, 01/02/2015
2, def, 01/01/2015
2, def, 01/02/2015
Scenario 21:
You get a single record which contains a series of two-digit numbers without any delimiter, like:
10102020203040404040
This series might go on; the graph should count the occurrences of each unique two-digit number
and show it with its count, like:
10 : 2
20 : 3
30 : 1
40 : 4
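A Normalize can split the string into two-character chunks, after which a Rollup counts each distinct chunk. The same logic in Python:

```python
from collections import Counter

def count_two_digit_numbers(series):
    """Split the string into two-character chunks and count each distinct chunk."""
    chunks = [series[i:i + 2] for i in range(0, len(series), 2)]
    return dict(Counter(chunks))

counts = count_two_digit_numbers("10102020203040404040")
# {'10': 2, '20': 3, '30': 1, '40': 4}
```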
Scenario 22:
Input File :
101,krishna
102,surya
103,asha
Output :
101,anhsirk
102,ayrus
103,ahsa
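Only the name field is reversed; the id passes through unchanged. In Python:

```python
def reverse_names(rows):
    """Reverse only the name field; the id passes through unchanged."""
    return [(rec_id, name[::-1]) for rec_id, name in rows]

src = [(101, "krishna"), (102, "surya"), (103, "asha")]
tgt = reverse_names(src)
# [(101, 'anhsirk'), (102, 'ayrus'), (103, 'ahsa')]
```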
Scenario 23:
Date|Balance_Amt
========================================
20160401|110.00
20160402|120.00
20160406|1120.00
20160410|2000.00
20160411|2100.00
20160420|3200.00
---------------------------------------
Output:
Date|Balance_Amt
========================================
20160401|110.00
20160402|120.00
20160403|120.00
20160404|120.00
20160405|120.00
20160406|1120.00
20160407|1120.00
20160408|1120.00
20160409|1120.00
20160410|2000.00
20160411|2100.00
20160412|2100.00
20160413|2100.00
20160414|2100.00
20160415|2100.00
20160416|2100.00
20160417|2100.00
20160418|2100.00
20160419|2100.00
20160420|3200.00
---------------------------------------
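Each missing calendar day is filled with the last known balance (a forward fill). A Python sketch, assuming YYYYMMDD date strings as in the sample:

```python
from datetime import date, timedelta

def fill_missing_dates(rows):
    """rows: (yyyymmdd_string, balance) pairs; emit one row per calendar day,
    carrying the last known balance forward across gaps."""
    def parse(d):
        return date(int(d[:4]), int(d[4:6]), int(d[6:8]))

    out = []
    prev_day, prev_bal = None, None
    for d, bal in rows:
        day = parse(d)
        # Fill the gap between the previous row and this one.
        while prev_day is not None and prev_day + timedelta(days=1) < day:
            prev_day += timedelta(days=1)
            out.append((prev_day.strftime("%Y%m%d"), prev_bal))
        out.append((d, bal))
        prev_day, prev_bal = day, bal
    return out
```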
Scenario 24:
Source_City|Destination_City|Distance
========================================
DELHI|PUNE|1500
BANGALURU|MUMBAI|900
PUNE|DELHI|1500
DELHI|KOLKATA|1200
CHENNAI|MUMBAI|1350
KOLKATA|DELHI|1200
---------------------------------------
Since the route from CityA to CityB is the same as from CityB to CityA, remove such duplicate
routes. Do it using:
a) only one transform component;
b) no Sort component.
Output:
Source_City|Destination_City|Distance
========================================
DELHI|PUNE|1500
BANGALURU|MUMBAI|900
DELHI|KOLKATA|1200
CHENNAI|MUMBAI|1350
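Treating (Source, Destination) as an unordered pair lets a single transform drop the reversed duplicates without any sort, keeping the first occurrence of each route. A Python sketch of that idea:

```python
def dedup_routes(rows):
    """Keep the first occurrence of each route, treating A->B and B->A as equal."""
    seen = set()
    out = []
    for src, dst, dist in rows:
        key = frozenset((src, dst))    # unordered pair: direction does not matter
        if key not in seen:
            seen.add(key)
            out.append((src, dst, dist))
    return out
```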
Scenario 25:
Name|Designation
========================================
Sagar|Analyst
Raj|Analyst
Tahir|Analyst
Dhanashree|Analyst
Ravi|Analyst
Vignesh|Consultant
Nirbhay|Director
Trevor|Chief Executive Officer
Neeraj|Director
---------------------------------------
Output:
Vowel|Count
=======================
a|<count of all a/As>
e|<count of all e/Es>
i|<count of all i/Is>
o|<count of all o/Os>
u|<count of all u/Us>
Scenario 26:
Input File
ID Name Salary dept_no
1 AA 26000 1
2 BB 35000 2
3 CC 28000 3
4 dd 24000 1
5 ee 31000 2
6 ff 45000 3
7 xx 20000 4
8 yy 35000 2
9 zz 32000 4
e.g. record:
ID Name salary dept_no
1 AA 1
2 BB 35000
and so on
Scenario 27:
Separate Out Unique and Duplicate Records
Ip File
ID Name
1 a
2 b
2 b
3 c
4 d
4 d
4 d
5 e
5 e
6 f
Output 1
ID Name Count
1 a 1
3 c 1
6 f 1
Output 2
ID Name Count
2 b 2
4 d 3
5 e 2
Scenario 28:
Calculate runs per over.
Input File
Ball_no Runs
1 1
2 1
3 0
4 0
5 6
6 1
1 0
2 4
3 1
4 0
5 0
6 1
Output File
Over_no Total_runs
1 (Sum of first over)
2 (Sum of second over)
Scenario 29:
Input File
Ball_no Runs
1 1
2 1
3 0
4 0
5 6
6 1
7 0
8 4
9 1
10 0
11 0
12 1
Output File
Over_no Total_runs
1 (Sum of first over)
2 (Sum of second over)
Scenario 30:
Calculate the Cumulative Salary
Input File
ID Name Sal
101 abc 10000
102 xyz 12000
103 pqr 15000
104 lmn 10000
105 jkl 20000
106 wxy 18000
Output File
ID Name Sal Cumulative Sal
101 abc 10000 10000
102 xyz 12000 22000
103 pqr 15000 37000
104 lmn 10000 47000
105 jkl 20000 67000
106 wxy 18000 85000
Scenario 31:
Input File 1
ID
1
2
3
Input File 2
Name
a
b
c
Output:
Record
1,a
2,b
3,c
(Hint : Use one of the transform components)
Scenario 32:
Input File :
101,krishna
102,surya
103,asha
Output :
101,anhsirk
102,ayrus
103,ahsa
Scenario 33:
A single record contains a series of two-digit numbers without any delimiter, like:
10102020203040404040
This series might go on; the graph should count the occurrences of each unique two-digit number
and show it with its count, like:
10 : 2
20 : 3
30 : 1
40 : 4
Scenario 34:
You have to assign a unique sequence number to each group of records, based on a key.
Input:
abc, 01/01/2015
abc, 01/02/2015
def, 01/01/2015
def, 01/02/2015
Output:
1, abc, 01/01/2015
1, abc, 01/02/2015
2, def, 01/01/2015
2, def, 01/02/2015
Scenario 35:
INPUT FILE
category name
1 a
3 b
2 c
1 x
4 y
2 n
4 p
1 q
3 z
1 l
Output File
category names
1 axql
2 cn
3 bz
4 yp
Scenario 36:
input
abcdefgh:-row1(string)
1234567:-row2
o/p:
col1 col2
a 1
b 2
c 3
d 4
e 5
f 6
Scenario 37:
Print this pattern (Ab Initio):
*
***
*****
********
**********
Scenario 38:
i/p:
a1b2c3d4e5f6g7h8
o/p:
a 1
b 2
c 3
d 4
e 5
f 6
g 7
h 8
Scenario 39:
Input file:
1,1
2,2
3,3
4,4
6,6
Output file:
1,6
2,5
3,4
4,3
6,1
using only one component (excluding the input and output components)
Scenario 40:
Input file:
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
Output file:
1
2
3
4
5
6
7
8
9
10
10
11
12
13
14
15
16
17
18
19
20
20
21
22
Scenario 41:
Input file:
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
Output file:
1
2
3
4
5
6
7
8
9
10
55
11
12
13
14
15
16
17
18
19
20
155
21
22
23
24
25
Scenario 42:
using only reformat
Input file:
rec
1
2
3
4
5
Output file:
rec count
1 5
2 5
3 5
4 5
5 5
Ex: using Normalize
1. Rows into columns
2. Columns into rows
Scenario 43:
Input file:
no sal
101 100
102 200
103 300
Output file:
no sal pre_sal
101 100 0
102 200 100
103 300 200
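pre_sal is a lag of one record (the previous row's salary, 0 for the first row), e.g. a Scan carrying the prior value forward. Sketch:

```python
def with_previous_salary(rows):
    """Append the previous record's sal as pre_sal; 0 for the first record."""
    out = []
    prev = 0
    for no, sal in rows:
        out.append((no, sal, prev))
        prev = sal
    return out
```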
Scenario 44:
Input file :
no name amount
101 xxx 100
102 yyy 200
103 tyh 300
104 hgj 400
105 fgh 500
106 ujh 600
Output file :
no name amount
103 tyh 300
104 hgj 400
106 ujh 600
Scenario 45:
Input file:
col1 col2
1 A
2 B
3 C
Output file:
123
ABC
Scenario 46:
Input file:
cust_id no_of_trans trans_date amount
101 2 13-04-2017 300
09-02-2016 200
102 1 11-02-2017 150
Output file:
cust_id no_of_trans trans_date amount
101 2 13-04-2017 300
101 2 09-02-2016 200
102 1 11-02-2017 150
Scenario 47:
Input file:
A,10
B,20
A,10
C,5
B,5
A,10
A,20
B,20
B,25
C,5
Scenario 48:
Input file:
1
2
3
4
.
20
Output file:
1
2
.
.
10
55
11
12
13
.
.
20
155
In the above scenario, the requirement is to display the sum after every 10 records.
Scenario 49:
Display Header & Trailer records without using next_in_sequence() and Dedup Sorted Component.
Scenario 50:
1
11
111
1111
Output File
Scenario 55:
Customer File:
Output File:
Scenario 56:
Develop a Unix script to achieve the below requirement.
One directory receives a number of .dat files every day. Each file contains some data. You
have to generate a .ctl file for each data file, which should contain file_name|date|
count_from_file|
e.g.
cd /home/id/dat-----List of dat files
ls -l
a.dat
b.dat
c.dat
O/P:
cd /home/id/ctl----list of control files to be generated based on dat files
ls -l
a.ctl
b.ctl
c.ctl
O/P data:
cat a.ctl------Generated control file for above data file
a.dat|20170417|4|
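The scenario asks for a Unix shell script; as an illustration of the same steps (list the .dat files, count each file's records, write name|date|count|), here is an equivalent Python sketch. The directory paths and pipe-delimited format are taken from the example above; everything else is an assumption:

```python
import os
from datetime import date

def generate_ctl_files(dat_dir, ctl_dir):
    """For every .dat file in dat_dir, write <name>.ctl to ctl_dir containing:
    file_name|YYYYMMDD|record_count|"""
    os.makedirs(ctl_dir, exist_ok=True)
    today = date.today().strftime("%Y%m%d")
    for fname in sorted(os.listdir(dat_dir)):
        if not fname.endswith(".dat"):
            continue
        with open(os.path.join(dat_dir, fname)) as fh:
            count = sum(1 for _ in fh)          # record count of the data file
        with open(os.path.join(ctl_dir, fname[:-4] + ".ctl"), "w") as fh:
            fh.write(f"{fname}|{today}|{count}|\n")
```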
Scenario 57:
It should accept today's date (like 2017-05-09) as input dynamically and should store
the .ctl files in “/home/id/ctl/today's_date/”
e.g:
O/P:
cd /home/id/ctl/today's_date/----list of control files to be generated based on dat files
ls -l
a.ctl
b.ctl
c.ctl
O/P data:
cat a.ctl------Generated control file for above data file
a.dat|20170417|4|
Scenario 58:
*****
****
***
**
*