Int J Adv Manuf Technol (2006) 30: 1132–1138
DOI 10.1007/s00170-005-0135-5
ORIGINAL ARTICLE
A. Noorul Haq . T. Radha Ramanan
A bicriterian flow shop scheduling using artificial neural network
Received: 11 August 2004 / Accepted: 23 April 2005 / Published online: 12 November 2005
© Springer-Verlag London Limited 2005
Abstract This paper considers the sequencing of jobs that arrive in a flow shop in different combinations over time. An artificial neural network (ANN) uses its acquired sequencing knowledge to make future sequencing decisions. The paper focuses on scheduling for a flow shop with 'm' machines and 'n' jobs. The authors use the heuristic proposed by Campbell et al. (1970, A heuristic algorithm for the n-job, m-machine sequencing problem) to find a sequence and its makespan (MS). A pairwise interchange of jobs is then made to find the optimal MS and total flow time (TFT). The obtained sequence is used to train the neural network, and a matrix called the neural network master matrix (NNMM) is constructed, which is the basic knowledge of the neurons obtained after training. From this matrix, interpretations are made to determine the optimum sequence for jobs that arrive in the future over a period of time. The results obtained by the ANN are compared with a constructive heuristic and an improvement heuristic. The results show that the quality of the measure of performance is better with the ANN approach than with the constructive or improvement heuristics. It is found that the system's efficiency (i.e., obtaining the optimal MS and TFT) increases with increasing numbers of training exemplars.
Keywords Artificial neural network · Makespan · Total flow time · Neural network master matrix · Scheduling · Sequencing · Training exemplars
A. Noorul Haq (*) · T. Radha Ramanan
Department of Production Engineering,
National Institute of Technology,
Tiruchirappalli, 620 015, India
e-mail: anhaq@nitt.edu
Tel.: +91-0431-2500813

1 Introduction

Elsayed [2] stated that the job sequencing problem can be stated as follows: given 'n' jobs to be processed, each with a setup time, a processing time, and a due date, and each to be processed at several machines, it is required to sequence these jobs on the machines so as to optimize a certain performance criterion.

Most of the research on the flow-shop sequencing problem has concentrated on the development of permutation flow shop schedules. The machines in a flow shop are dedicated to processing at most one job, and each job can be processed on at most one machine at any time. Preemption of individual jobs is not allowed. Given the processing times of each job on each machine, the jobs must be processed in the same sequence on each of the 'm' machines. The objective of the sequencing problem is usually to decide the sequence of jobs that minimizes the makespan. In this paper, both makespan and total flow time are considered for optimization. Rajendran [3] states that reducing MS and TFT is more effective in reducing the total scheduling cost.

The objectives of this paper are twofold:

(1) To develop an ANN approach for bicriteria flow shops that solves the sequencing problems of the shop.
(2) To develop an optimal sequence considering both TFT and MS to reduce the total time.

The rest of the paper is organized as follows. In Sect. 2, a survey of the literature is presented. In Sect. 3, the proposed neural network is discussed. In Sect. 4, an illustration of the ANN is given. In Sect. 5, results and discussion are presented. In Sect. 6, conclusions are given.

2 Literature review
2.1 ANN applications
A survey of the literature shows that, over the last decade, most of the heuristics for the flow shop have aimed at minimizing makespan (MS). ANNs have been applied in many areas ranging from manufacturing to finance and marketing. The approach has also received a lot of interest in scheduling, and has especially been tried in job shop scheduling. The ability to map and solve combinatorial optimization problems using ANNs has also motivated researchers since the beginning of ANN research. The existing studies can be classified according to the following network structures:
(1) Hopfield model and other optimizing networks.
(2) Competitive networks.
(3) Back-propagation networks.
Sabuncuoglu and Gurgun [4] have used a variation of the Hopfield network to obtain inhibitory connections so that feasibility can be achieved. In addition to feasibility, the network searches for a minimum energy level corresponding to the value of the cost function (i.e., makespan or mean tardiness). Their paper discusses the ANN approach in detail using the single-machine mean tardiness scheduling problem and the makespan of the job shop problem.
Jain and Meeran [5] have proposed a neural network model
for a job shop environment. They have used a modified back
error propagation model with additional features such as a
momentum parameter, a jogging parameter and a learning rate
parameter.
Guh and Tannock [6] have discussed the detection of concurrent patterns, where more than one pattern exists simultaneously, using a back-propagation network system.
Gaafar and Choueiki [7] in their paper have applied a
neural network model for an MRP problem of lot sizing.
Lee and Shaw [8] have applied an ANN approach to a simple 2-machine, n-job problem to optimize the makespan, with Johnson's algorithm used to generate optimal solutions. Results were computed for 2 to 7 machines, with the number of jobs varying from 10 to 25 for 2 and 3 machines, while only 15 and 20 jobs were considered for 5 and 7 machines. The paper did not consider bigger problems.
In competitive networks implemented by Chen and
Huang [9], the inhibitory links are established as a result of
competition rather than being determined initially as in the
Hopfield case. In designing such a network, one usually
develops an equation of motion for the elements of the
problem and defines an appropriate energy function to show
the convergence of the network.
This paper takes the lead from Lee and Shaw [8] and extends their work to two performance measures for optimization, viz., makespan and total flow time, using the ANN approach. The seed sequence is obtained through the CDS heuristic. After obtaining the seed sequence from the CDS, the quality of the solution is further improved by a pairwise interchange of jobs before training the nodes. The paper extends the number of machines and the number of jobs to optimize for 'm' machines and 'n' jobs, so that bigger problems can also be handled.
In the review of the literature, it was found that the ANN approach has not been applied to bigger problems (such as 30 jobs and 30 machines). A bicriteria approach using an ANN is also not found in the literature. This paper makes a forward step toward finding an optimal solution that includes the maximum number of performance parameters and also increases the size of the problem.
2.2 Heuristic approaches in the literature
The two-machine flow shop problem with the objective of minimizing makespan is also known as Johnson's [10] problem. An optimal sequence is found by a heuristic that finds the minimum machining time and allots the job to a machine in a preferential order.
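Johnson's rule can be sketched as follows; this is a minimal Python illustration with hypothetical job data (the paper's own programs were written in C):

```python
def johnson(p1, p2):
    """Johnson's rule for a 2-machine flow shop.
    p1[j], p2[j]: processing times of job j on machines 1 and 2."""
    jobs = range(len(p1))
    # Jobs faster on machine 1 go first, in increasing order of p1;
    # the rest go last, in decreasing order of p2.
    front = sorted((j for j in jobs if p1[j] <= p2[j]), key=lambda j: p1[j])
    back = sorted((j for j in jobs if p1[j] > p2[j]), key=lambda j: -p2[j])
    return front + back

# Hypothetical 5-job instance
seq = johnson([3, 5, 1, 6, 7], [6, 2, 2, 6, 5])
print(seq)  # [2, 0, 3, 4, 1]
```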
Palmer's heuristic algorithm [11] proposed a slope order index to sequence the jobs on the machines based on the processing times. The idea is to give priority to jobs so that jobs with processing times that tend to increase from machine to machine receive higher priority, while jobs with processing times that tend to decrease from machine to machine receive lower priority.
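One common formulation of the slope index (paraphrased from the general literature; the exact weights are an assumption, not taken from this paper) is s_j = Σ_{i=1..m} (2i − m − 1) p_ij, with jobs sequenced in decreasing order of s_j:

```python
def palmer(p):
    """Palmer slope-index heuristic (one common formulation, assumed here).
    p[i][j]: processing time of job j on machine i (0-indexed).
    The index weights later machines positively, so jobs whose times
    grow from machine to machine get higher priority."""
    m, n = len(p), len(p[0])
    slope = lambda j: sum((2 * (i + 1) - m - 1) * p[i][j] for i in range(m))
    return sorted(range(n), key=slope, reverse=True)

# Hypothetical 3-machine, 3-job instance
order = palmer([[1, 5, 3], [2, 2, 2], [6, 2, 3]])
print(order)  # [0, 2, 1]
```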
Campbell, Dudek and Smith (CDS) [1] proposed a
heuristic that is an extension of Johnson’s Algorithm.
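The CDS extension can be sketched as follows: for each k = 1, …, m−1 an artificial two-machine problem is formed by summing the first k and the last k machine times, Johnson's rule is applied to each, and the sequence with the smallest makespan is kept (a Python sketch with a hypothetical instance):

```python
def johnson(a, b):
    # Johnson's rule on artificial 2-machine times a[j], b[j]
    jobs = range(len(a))
    front = sorted((j for j in jobs if a[j] <= b[j]), key=lambda j: a[j])
    back = sorted((j for j in jobs if a[j] > b[j]), key=lambda j: -b[j])
    return front + back

def makespan(seq, p):
    # c[i]: completion time of the latest scheduled job on machine i
    c = [0] * len(p)
    for j in seq:
        c[0] += p[0][j]
        for i in range(1, len(p)):
            c[i] = max(c[i], c[i - 1]) + p[i][j]
    return c[-1]

def cds(p):
    """CDS: solve m-1 artificial 2-machine problems with Johnson's rule
    and keep the sequence with the smallest full-problem makespan."""
    m, n = len(p), len(p[0])
    best = None
    for k in range(1, m):
        a = [sum(p[i][j] for i in range(k)) for j in range(n)]
        b = [sum(p[i][j] for i in range(m - k, m)) for j in range(n)]
        seq = johnson(a, b)
        if best is None or makespan(seq, p) < makespan(best, p):
            best = seq
    return best

# Hypothetical 3-machine, 3-job instance
p = [[5, 1, 4], [2, 3, 1], [3, 6, 2]]
print(cds(p), makespan(cds(p), p))  # [1, 0, 2] 15
```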
Gupta [12] suggested another heuristic which is similar
to Palmer’s heuristic. He defined his slope index in a different manner taking into account some interesting facts
about optimality of Johnson’s rule for the three machine
problems.
Dannenbring [13] developed a procedure called rapid access. It attempts to combine the advantages of Palmer's slope index and the CDS method. Its purpose is to provide a good solution as quickly and easily as possible. Instead of solving m−1 artificial two-machine problems, it solves only one artificial problem using Johnson's rule, in which the processing times are determined from a weighting scheme.
[Fig. 1 Architecture of the proposed system: an initial learning stage (optimization module → training module → neural network master matrix) and an implementation stage (job set → derived matrix → optimal sequence)]

[Fig. 2 Block diagram of the optimization module: a random set of jobs enters the optimization module, which outputs optimal sequences]

The Nawaz, Enscore, and Ham (NEH) [14] heuristic algorithm is based on the assumption that a job with a high total processing time on all the machines should be given higher priority than a job with a low total processing time. The NEH algorithm does not transform the original m-machine problem into one artificial two-machine problem. It builds the final sequence in a constructive way, adding a new job at each step and finding the best partial solution.
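The NEH construction can be sketched as follows (a Python sketch with a hypothetical instance):

```python
def makespan(seq, p):
    # p[i][j]: processing time of job j on machine i
    c = [0] * len(p)
    for j in seq:
        c[0] += p[0][j]
        for i in range(1, len(p)):
            c[i] = max(c[i], c[i - 1]) + p[i][j]
    return c[-1]

def neh(p):
    """NEH: order jobs by decreasing total processing time, then insert
    each job at the position of the partial sequence that minimizes
    the partial makespan."""
    m, n = len(p), len(p[0])
    order = sorted(range(n), key=lambda j: -sum(p[i][j] for i in range(m)))
    seq = []
    for j in order:
        candidates = [seq[:pos] + [j] + seq[pos:] for pos in range(len(seq) + 1)]
        seq = min(candidates, key=lambda s: makespan(s, p))
    return seq

# Hypothetical 2-machine, 3-job instance
p = [[2, 4, 1], [3, 1, 2]]
print(neh(p), makespan(neh(p), p))  # [2, 0, 1] 8
```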
Rajendran (CR) [3] has implemented a heuristic for flow shop scheduling with the multiple objectives of optimizing makespan, total flow time and machine idle time. For this improvement heuristic, the first seed is taken from the CDS algorithm. A heuristic preference relation is proposed and used as the basis to restrict the search for possible improvement in the multiple objectives.
Several decision-making goals are prevalent in scheduling. Baker [15] in his book has explained all the heuristics in detail along with the decision-making goals, viz. (1) efficient utilization of resources, (2) rapid response to demands, and (3) close conformance to prescribed deadlines. In this paper the authors consider the first two decision-making goals.
3 Proposed artificial neural network approach
With the objective of optimizing the MS and TFT, the architecture is constructed in two stages, viz. an initial learning stage and an implementation stage. In the initial learning stage the nodes of the network learn the scheduling incrementally, and they implement the same in the implementation stage.
3.1 Assumptions
[Fig. 4 Neural network structure]

- The jobs pass through all m machines, and preemption of jobs is not possible.
- Machines have unlimited buffer.
3.2 Architecture of the proposed system
Figure 1 shows the architecture of the proposed system.
3.2.1 Initial learning stage
The optimization module: In this module, batches of jobs are first generated randomly, and the processing time for each job is also generated randomly. To determine the optimum sequences to be given as input to the training module, a traditional heuristic is used; in this experiment it is the CDS heuristic. First, using this heuristic, the sequence with the least MS is identified for the incoming jobs. Next, taking this MS, a pairwise interchange of jobs is made to find out whether the measures of performance can be further optimized. Thus an optimal sequence with both minimum TFT and minimum MS is identified. Since the machining times are assumed to be constant, whenever the jobs arrive in the same combination, the sequence for machining will also remain constant.
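The two objectives and the interchange step can be sketched in Python (the paper's implementation was in C; the acceptance rule below, keeping a swap only if it worsens neither MS nor TFT and improves at least one, is our assumption, as the paper does not spell one out):

```python
import itertools
import random

def ms_tft(seq, p):
    """Makespan and total flow time of a permutation flow shop schedule.
    p[i][j]: processing time of job j on machine i."""
    c = [0] * len(p)
    tft = 0
    for j in seq:
        c[0] += p[0][j]
        for i in range(1, len(p)):
            c[i] = max(c[i], c[i - 1]) + p[i][j]
        tft += c[-1]  # completion time of job j on the last machine
    return c[-1], tft

def pairwise_improve(seq, p):
    """Repeatedly try all pairwise interchanges; accept a swap only if it
    worsens neither objective and strictly improves at least one
    (assumed acceptance rule)."""
    seq = list(seq)
    ms, tft = ms_tft(seq, p)
    improved = True
    while improved:
        improved = False
        for a, b in itertools.combinations(range(len(seq)), 2):
            seq[a], seq[b] = seq[b], seq[a]
            ms2, tft2 = ms_tft(seq, p)
            if ms2 <= ms and tft2 <= tft and (ms2 < ms or tft2 < tft):
                ms, tft, improved = ms2, tft2, True
            else:
                seq[a], seq[b] = seq[b], seq[a]  # undo the swap
    return seq, ms, tft

# Hypothetical 4-machine, 6-job instance with reproducible random times
random.seed(1)
p = [[random.randint(1, 9) for _ in range(6)] for _ in range(4)]
seq, ms, tft = pairwise_improve(list(range(6)), p)
print(seq, ms, tft)
```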
Basic assumptions of the approach and their relevance are:

- Jobs arrive in various combinations of batches: the shop is capable of handling n jobs, but all the jobs are not always available on hand for processing, and thereby for scheduling. The jobs arrive in different combinations of batches and not in any specific order, as normally expected in real time.
- A job has a fixed machining time on each of the machines.
- The shop is capable of producing only n jobs: the neurons are trained for n jobs only and not for a new job (say n+1). The NNMM is constructed for n jobs only, and future inference, that is, the actual sequencing process, is based on the NNMM; the NNMM is thus constrained to n jobs.
[Fig. 3 Block diagram of the training module: optimal sequences are input to the training module, which outputs the desirability of sequences (by assigning a weightage)]
The NEURAL NETWORK MASTER MATRIX is:

        1    2    3    4    5    6    7    8    9   10   11   12   13   14   15
 1)     0    0    0    0   64    0  105    0    0    0  449    0    7    0    0
 2)     0    0    0  943    0    0    0    0    0    0    0    0    0    7    0
 3)     0    0    0    0    0    0    0   29  893    0    7    0   21    0    0
 4)     0    0    0    0    0    0    0  140    0    0    0  781    0   29    0
 5)   265    0    7    0    0    0  302    0    0    0  207    0    0    0    0
 6)     0    0    0    0    0    0    0    0    0  615    0    0    0  328    7
 7)   255    0    0    0  577    0    0    0    0    0  118    0    0    0    0
 8)     0    0  774    0    7    0    0    0    0    0    0  169    0    0    0
 9)    29    0    0    7    0    0    0    0    0    0   21    0  893    0    0
10)     0    0    0    0    0    0    0    0    7    0    0    0   29  140  774
11)   401    0    0    0    0    0   71    0   29    0    0    0    0    0    0
12)     0    0  140    0    0    0    0  781    0    0   29    0    0    0    0
13)     0    0    0    0  302    0  472    0   21    0  119    0    0    0   29
14)     0  586   29    0    0    0    0    0    0  335    0    0    0    0    0
15)     0  364    0    0    0  140    0    0    0    0    0    0    0  446    0

Fig. 5 Neural network master matrix
The sequence is: 1 11 7 5 3 9 13 15 14 2 4 12 8 6 10
Fig. 6 The output sequence

This optimal sequence is the output of the optimization module. Fig. 2 shows the block diagram depicting the input and output of the optimization module.
The training module: The optimal sequence obtained from the optimization module is the input to the training module. The training module aggregates all the sequences and assigns a weightage to the arrived sequences. For each predecessor-successor pair, a weightage of 1 is assigned; this weightage expresses the desirability of the sequence. The weightage of 1 is aggregated every time the same predecessor and successor are repeated in further generated arrivals of jobs. Fig. 3 shows the block diagram depicting the input and output of the training module.
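The weight aggregation of the training module can be sketched as follows, using the vector pairs of the example sequence {18 21 15 25 24} from Sect. 4 (a Python sketch; the dictionary representation of the NNMM is our simplification of the matrix):

```python
from collections import defaultdict

def train(weights, sequences):
    """Aggregate a weightage of 1 for every adjacent predecessor-successor
    pair of each optimal sequence; weights accumulate into the NNMM."""
    for seq in sequences:
        for pred, succ in zip(seq, seq[1:]):
            weights[(pred, succ)] += 1
    return weights

nnmm = defaultdict(int)
train(nnmm, [[18, 21, 15, 25, 24]])   # the paper's example sequence
print(sorted(nnmm))                   # [(15, 25), (18, 21), (21, 15), (25, 24)]
train(nnmm, [[18, 21, 15, 25, 24]])   # same arrival again: weights aggregate
print(nnmm[(18, 21)])                 # 2
```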
Neural network master matrix (NNMM): The aggregated weights are the knowledge acquired through the training module, stored in the form of a master matrix. The magnitude of the weights indicates the desirability between jobs, and the NNMM is constructed before the implementation stage, after a certain number of training passes have been given to the nodes.
3.2.2 Implementation stage
The weights of the NNMM are considered as neurons (processing elements). Successors and predecessors are considered as the two layers of the network. These layers form a fully connected two-layer network: the ANN consists of two layers with an equal number of processing elements, and each processing element is connected to all the processing elements in the other layer.
[Fig. 7 Output with sequences and their corresponding MS and TFT]
Job set: During the implementation stage, when a combination of jobs arrives for machining, it is given as input to the NNMM, and the NNMM is thus initialized to yield the desirability of the sequences.
Derived matrix (DM): From the NNMM, the derived matrix takes the relevant sequences from the processing elements for the initialized job set.
Optimal sequence: Each job in the job set is considered as the starting job, and the DM gives a sequence for each. For these sequences the MS and TFT are calculated, and the optimal sequence for the arrived job set is found. To generate an optimal and feasible sequence of different jobs, each job is not allowed to choose more than one other job as its successor, nor more than one other job as its predecessor. Once a job is sequenced, its weightage is set to zero to avoid its being sequenced again.
3.3 Neural network algorithm

Step 1: Initialize the network by giving a job set.
Step 2: Set the job set to the processing elements of the x-layer to derive its values from the DM.
Step 3: For each element in the x-layer, the connection with the highest weight from the x-layer to the y-layer is chosen. The winner-take-all strategy, as discussed by Zurada [16], is used.
Step 4: For each element in the y-layer, the connection with the highest weight from the y-layer to the x-layer is chosen. Again, a winner-take-all strategy is used.
Step 5: Repeat steps 3 and 4 till all the jobs are sequenced.

Fig. 4 shows the bi-directional nature of the network.
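As a simplified, one-directional sketch of the winner-take-all traversal (the paper's network is bi-directional, so this Python sketch is an approximation), a greedy walk over the master matrix of Fig. 5 starting from job 1 reproduces the sequence of Fig. 6:

```python
# Weights of the 15x15 neural network master matrix of Fig. 5
# (row r, column c: desirability of job c directly following job r).
W = [
    [0, 0, 0, 0, 64, 0, 105, 0, 0, 0, 449, 0, 7, 0, 0],
    [0, 0, 0, 943, 0, 0, 0, 0, 0, 0, 0, 0, 0, 7, 0],
    [0, 0, 0, 0, 0, 0, 0, 29, 893, 0, 7, 0, 21, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 140, 0, 0, 0, 781, 0, 29, 0],
    [265, 0, 7, 0, 0, 0, 302, 0, 0, 0, 207, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 615, 0, 0, 0, 328, 7],
    [255, 0, 0, 0, 577, 0, 0, 0, 0, 0, 118, 0, 0, 0, 0],
    [0, 0, 774, 0, 7, 0, 0, 0, 0, 0, 0, 169, 0, 0, 0],
    [29, 0, 0, 7, 0, 0, 0, 0, 0, 0, 21, 0, 893, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 7, 0, 0, 0, 29, 140, 774],
    [401, 0, 0, 0, 0, 0, 71, 0, 29, 0, 0, 0, 0, 0, 0],
    [0, 0, 140, 0, 0, 0, 0, 781, 0, 0, 29, 0, 0, 0, 0],
    [0, 0, 0, 0, 302, 0, 472, 0, 21, 0, 119, 0, 0, 0, 29],
    [0, 586, 29, 0, 0, 0, 0, 0, 0, 335, 0, 0, 0, 0, 0],
    [0, 364, 0, 0, 0, 140, 0, 0, 0, 0, 0, 0, 0, 446, 0],
]

def winner_take_all(start, W):
    """From the current job's row, pick the unsequenced job with the
    highest weight (ties go to the first job read), repeating until
    all jobs are sequenced."""
    n = len(W)
    seq, used = [start], {start}
    while len(seq) < n:
        row = W[seq[-1] - 1]
        nxt = max((j for j in range(1, n + 1) if j not in used),
                  key=lambda j: row[j - 1])
        seq.append(nxt)
        used.add(nxt)
    return seq

print(winner_take_all(1, W))
# [1, 11, 7, 5, 3, 9, 13, 15, 14, 2, 4, 12, 8, 6, 10]  (matches Fig. 6)
```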
The sequences are:
13 7 5 1 11 9 4 12 8 3 6 10 15 14 2
3 9 13 7 5 1 11 6 10 15 14 2 4 12 8
8 3 9 13 7 5 1 11 6 10 15 14 2 4 12
2 4 12 8 3 9 13 7 5 1 11 6 10 15 14
10 15 14 2 4 12 8 3 9 13 7 5 1 11 6
7 5 1 11 9 13 15 14 2 4 12 8 3 6 10
15 14 2 4 12 8 3 9 13 7 5 1 11 6 10
12 8 3 9 13 7 5 1 11 6 10 15 14 2 4
4 12 8 3 9 13 7 5 1 11 6 10 15 14 2
11 1 7 5 3 9 13 15 14 2 4 12 8 6 10
14 2 4 12 8 3 9 13 7 5 1 11 6 10 15
9 13 7 5 1 11 6 10 15 14 2 4 12 8 3
1 11 7 5 3 9 13 15 14 2 4 12 8 6 10
6 10 15 14 2 4 12 8 3 9 13 7 5 1 11
5 7 1 11 9 13 15 14 2 4 12 8 3 6 10

The makespans are:
200 195 196 184 181 203 190 194 191 198 186 196 198 171 203

The total flow times are:
2121 2044 2091 2003 1934 2104 1958 2068 2082 2057 2010 2021 2038 1856 2112
Table 1 Effect of training on neural network

                          ANN           CDS
                          MS    TFT     MS    TFT
Training exemplars 300    110   780     108   784
                          106   793     106   807
                          102   764     102   809
                          108   831     108   831
                          116   896     114   920
Training exemplars 600    110   831     110   789
                          108   755     110   765
                          119   887     117   905
                          120   829     116   846
                          116   868     116   898
Training exemplars 900    104   797     106   800
                          107   801     110   818
                          104   736     101   753
                          118   879     118   879
                          112   796     112   816
At any step of the procedure at which elements in the x-layer and y-layer must make a choice between two or more connections with the same weight, the value that is read first is assigned.
The procedure must stop, since there are only a finite number of jobs and no connection between the x-layer and the y-layer is activated more than once. The final outcome generated by the neural-net approach is a complete and feasible sequence of jobs, since each job is linked at any step to exactly one other job.
4 An illustration
Assume that the flow shop can machine a set of 15 different
jobs and jobs arrive at the flow shop at random for processing. Machining time for these jobs, which the shop is
capable of handling, will be constant.
Assume that a set of jobs, say {15 18 21 24 25}, arrives at the shop. The optimal sequence determined by using the CDS heuristic is found to be {18 21 15 25 24}. From the five jobs that have arrived, four vector pairs that represent adjacent jobs are identified: (18 21), (21 15), (15 25), (25 24). These vector pairs show the desirability of job sequences. Thus the sequences are identified, and a weightage of 1 is assigned for each vector pair.

[Fig. 9 Comparison of % of better results obtained by ANN, CR and CDS heuristics for the 10 machines problem set]
The neural network is designed to find the sequences for n jobs and m machines. Suppose that the flow shop has the capability to process 15 different jobs; the training module then constructs a neural network master matrix of size 15×15. The neural matrix in Fig. 5 is obtained after giving 950 training exemplars to the network.
Assuming that all 15 jobs have arrived, in the implementation stage the DM takes its desirable sequences and derives the matrix from the NNMM; the matrix in this case will be the same as the NNMM.
From the DM, which is actually the NNMM in this case, it can be seen that in the 1st row the 11th column has the highest value. This indicates that the 11th job is the next desirable job after the 1st job. The network then goes to the 11th row and finds the highest weight, which is found in the 7th column, standing for job number 7. Thus job 7 is sequenced after the 11th job. Since job number 1 is already assigned, its weightage value is suitably reduced so that the network takes care that it is not scheduled again. Thus the sequence is identified until all the jobs are sequenced. The output sequence is given in Fig. 6.
Figure 7 gives all the possible sequences and their corresponding MS and TFT.
It was observed as shown in Table 1 that the ANN
incrementally learns the sequencing problem with the
[Fig. 8 Comparison of % of better results obtained by ANN, CR and CDS heuristics for the 5 machines problem set]

[Fig. 10 Comparison of % of better results obtained by ANN, CR and CDS heuristics for the 15 machines problem set]
increase in the number of training exemplars. The output given in Table 1 is for the 10 jobs, 10 machines problem.
5 Results and discussions
The source code of the programs for the ANN, CDS and CR was written in the C language on a Pentium III machine. A number of problems was solved for different combinations of jobs and machines, varying the jobs from 5 to 30 in steps of 5 and the machines from 5 to 30 in steps of 5. A total of 180 problems was solved, taking 5 problems in each set.
5.1 Graphical inferences
Figures 8, 9, 10, 11, 12 and 13 graphically compare the percentage of better results obtained by the ANN, CR, and CDS heuristics.
Figure 8 shows the comparison for the 5 machines problem set. On average, the results obtained by the ANN are better 70% of the time (for the 5 job set ANN gives 80%, for the 10 job set 100%, and in the remaining job sets of 15, 20, 25 and 30 it gives 60% better results, hence an average of 70%), while CR obtained better results 50% of the time and CDS 26.67% of the time.
[Fig. 11 Comparison of % of better results obtained by ANN, CR and CDS heuristics for the 20 machines problem set]

[Fig. 13 Comparison of % of better results obtained by ANN, CR and CDS heuristics for the 30 machines problem set]
Figures 9 and 10 are for the problem sets of 10 machines and 15 machines, respectively. It can be seen that the ANN gives better results 46.67% of the time for both the 10 machines and the 15 machines problem sets. CR gives better results 33.33% of the time and CDS 43.33% of the time in the 10 machines problem set. In the 15 machines problem set, the CR and CDS heuristics each give better results 26.67% of the time.
Figure 11 shows the results for the 20 machines problem set. In this problem set the ANN performs better on average 56.67% of the time, CR 16.67% of the time, and CDS 33.33% of the time.
Figure 12 shows the results for the 25 machines problem set. The ANN performs better 63.33% of the time, the CR heuristic 43.33% of the time, and the CDS heuristic 36.67% of the time.
Figure 13 compares the percentage of better results of the ANN, CR and CDS heuristics for the 30 machines problem set. The ANN performs better 50% of the time, the CR heuristic 16.67% of the time, and the CDS heuristic 33.33% of the time.
Table 2 shows the comprehensive performance of the ANN and the heuristic approaches.
Thus it can be seen that the ANN approach yields better results than the constructive or improvement heuristics.
Table 2 Comparison of ANN, CR heuristics and CDS heuristics results

Problem set   ANN (%)   CR (%)   CDS (%)
5*            70.00     50.00    26.67
10*           46.67     33.33    43.33
15*           46.67     26.67    26.67
20*           56.67     16.67    33.33
25*           63.33     43.33    36.67
30*           50.00     16.67    33.33
[Fig. 12 Comparison of % of better results obtained by ANN, CR and CDS heuristics for the 25 machines problem set]
* The results obtained show that more than one heuristic can give optimal results
6 Conclusions
The neural-net approach for sequencing problems demonstrates very promising properties for solving real-world flow shop sequencing problems. The proposed approach follows two main stages:
(1) The initial learning stage and
(2) The implementation stage.
In the initial learning stage, a variety of inputs is given to the neural network to acquire and store knowledge (in the form of the NNMM) of the sequencing patterns of a flow shop. In the implementation stage, the acquired knowledge is extracted from the NNMM and transferred to the DM, where the two-layer neural network utilizes the sequencing knowledge to make sequencing decisions.
The observations of the experiment are:
(1) The ANN incrementally improves the solution quality with the increase in the number of training exemplars.
(2) The ANN achieves a solution quality better than that of the traditional heuristics, or at least comparable to it.
References

1. Campbell HG, Dudek RA, Smith ML (1970) A heuristic algorithm for the n job, m machine sequencing problem. Manage Sci 16(10):630−637
2. Elsayed EA, Boucher TO (1985) Analysis and control of production systems. Prentice-Hall, Upper Saddle River, NJ
3. Rajendran C (1995) Heuristics for scheduling in flowshop with multiple objectives. Eur J Oper Res 82:540−555
4. Sabuncuoglu I, Gurgun B (1996) A neural network model for scheduling problems. Eur J Oper Res 93:288−299
5. Jain AS, Meeran S (1999) Job shop scheduling using neural networks. Int J Prod Res 37:1250−1268
6. Guh RS, Tannock JDT (1999) Recognition of control chart patterns using a neural network approach. Int J Prod Res 37(8):1743−1765
7. Gaafar LK, Choueiki MH (2000) A neural network model for solving the lot sizing problem. Omega Int J Manage Sci 28:175−184
8. Lee I, Shaw MJ (2000) A neural-net approach to real time flow-shop sequencing. Comput Ind Eng 38:125−147
9. Chen R-M, Huang Y-M (2001) Competitive neural network to solve scheduling problems. Neurocomputing pp 177−196
10. Johnson SM (1954) Optimal two- and three-stage production schedules with setup times included. Naval Res Logist Q 1(1):61−68
11. Palmer D (1965) Sequencing jobs through a multi-stage process in the minimum total time: a quick method of obtaining a near optimum. Oper Res Q 16:45−61
12. Gupta J (1971) A functional heuristic algorithm for the flow shop scheduling problem. Oper Res Q 22:27−39
13. Dannenbring D (1977) An evaluation of flow shop sequencing heuristics. Manage Sci 23(11):1174−1182
14. Nawaz M, Enscore EE, Ham I (1983) A heuristic algorithm for the m-machine, n-job flow-shop sequencing problem. Omega 11(1):91−95
15. Baker KR (1974) Introduction to sequencing and scheduling. Wiley, New York
16. Zurada JM (1992) Introduction to artificial neural systems. Jaico, Mumbai