Using aggregate estimation models for order acceptance in
a decentralized production control structure for batch
chemical manufacturing
WENNY H.M. RAAYMAKERS, J. WILL M. BERTRAND and JAN C. FRANSOO*
Department of Technology Management, Technische Universiteit Eindhoven, P.O. Box 513, Pav F16, 5600 MB Eindhoven,
The Netherlands
E-mail: J.C.Fransoo@tm.tue.nl
Received April 1999 and accepted November 1999
Aggregate models of detailed scheduling problems are needed to support aggregate decision making such as customer order ac-
ceptance. In this paper, we explore the performance of various aggregate models in a decentralized control setting in batch chemical
manufacturing (no-wait job shops). Using simulation experiments based on data extracted from an industry application, we conclude
that a linear regression based model outperforms a workload based model with regard to capacity utilization and the need for
replanning at the decentralized level, specifically in situations with increased capacity utilization and/or a high variety in the job mix.
1. Introduction
Batch chemical processes exist in many industries, such as
the food, specialty chemicals and the pharmaceutical in-
dustry, where production volumes of individual products
do not allow continuous or semi-continuous processes.
Batch processing becomes more important because of the
increasing product variety and decreasing demand vol-
umes for individual products. Two basic types are dis-
tinguished. If products all follow the same routing, this is
called multiproduct. If products follow different routings,
as is the case in a discrete manufacturing job shop, it is
called multipurpose. In this paper, we concentrate on
multipurpose batch process industries.
Multipurpose batch process industries produce a large
variety of dierent products that follow dierent routings
through the plant. Considerable dierences may exist
between products in the number and duration of the
processing steps that are required. Intermediate products
may be unstable, which means that a product needs to be
processed further without delay. These no-wait restric-
tions and the large variety of products with dierent
routings cause complex scheduling problems. Namely, for
each product a dierent combination of resources is re-
quired in a speci®c sequence and timing due to these no-
wait restrictions. Consequently, the capacity utilization
realized by multipurpose batch process industries is gen-
erally low. Furthermore, many of these companies oper-
ate in highly variable and dynamic markets in which
periods of high demand may be followed by periods of
low demand. Therefore, the amount and mix of produc-
tion orders may dier considerably from period to peri-
od. Consequently, bottlenecks may shift over time, due to
variations in the mix of production orders. An explorative study of the empirical setting of production control in batch chemical industries has been described by Raaymakers et al. (2000).
In this paper, we will study a problem that is inspired
by batch chemical industries. Since the scheduling prob-
lem in batch chemical industries is very complex (Raay-
makers and Hoogeveen, 1999), companies in these
settings often operate under a decentralized control
structure. We consider a setting with a no-wait job shop,
dynamic order arrivals and a demand level that exceeds
the available capacity. The lead times are fixed by the
market. In this setting, a centralized decision needs to be
made whether or not to accept an order, and to assign
this order to a specific period in which it will be produced
(planning). In accordance with the type of industry
studied, time is divided into equal periods, where each
period starts with an empty system and at the end of each
period, the system must again be empty. The actual fulfillment of the order is controlled at a decentralized
location, namely within the manufacturing department
producing the order. The departmental scheduler
* Corresponding author
0740-817X © 2000 ``IIE''
IIE Transactions (2000) 32, 989–998
typically has a shorter horizon than the master planner
responsible for order acceptance and capacity loading.
He takes the production plan of orders allocated to a
particular period as a start. Then, period after period, he
determines a detailed sequence of orders such that the
order due dates are met and resource use is maximized. In
this study, we assume the detailed scheduling decision is
made by a simulated annealing procedure. Note that we
distinguish between the terms planning, used for centralized allocation of an order to a specific period, and scheduling, used for decentralized determination of the exact sequence of operations within a specific period.
The centralized order acceptance decision and planning
decision needs to be made such that: (i) a high resource
utilization can be reached by the manufacturing depart-
ment; and (ii) a high service level can be realized towards
the customer. The service level is the fraction of jobs that is
completed in or before the due period. In addition to these
performance measures, we also want to plan the orders in a
period in which they can actually be executed, so that: (iii)
little replanning between periods has to be done by the
departmental scheduler. The performance on (iii) is mea-
sured by the fraction of jobs that needs to be reallocated to
another period by the detailed scheduler since completion
in the initially allocated period is infeasible.
We are interested in evaluating models that support the
master planner in his decision to accept and plan an or-
der. These models should assess the feasibility of manu-
facturing a particular order in a speci®c period of time,
while the detailed scheduling decision is made later by the
departmental scheduler. Often, such a decision is based
on the workload of the order set that results after ac-
ceptance of the order in a certain period. For classical job
shops, this has been investigated by, e.g., Bertrand (1983)
in a due date assignment setting. An alternative has been
suggested by Raaymakers and Fransoo (1999), in which a linear regression model using specific job set characteristics in addition to its workload is proposed.
This paper is organized as follows. Section 2 briefly
discusses some relevant literature on order acceptance and
capacity loading. Section 3 describes the centralized order
acceptance and planning policies in more detail. Section 4
describes the implementation of the decentralized sched-
uling of the job sets and the execution by the production
system. Section 5 considers the estimation quality of a
regression-based model in a dynamic order acceptance and
capacity loading situation. Section 6 describes the simu-
lation experiments we conducted to investigate the per-
formance of the various policies, and the results obtained
through these experiments. Section 7 gives the conclusions.
2. Literature review
Order acceptance has received limited attention in the
literature. Order acceptance is concerned with the decision to either accept or reject a customer's order based on the availability of sufficient capacity to complete the order before its due date. The due dates are considered
given by the customer and non-negotiable. Generally,
customer orders are accepted for (and assigned to) a specific period.
In the literature, order acceptance decisions are often
based on the workload content of the order, related to the
available workload. Guerrero and Kern (1988) and Kern
and Guerrero (1990) address this problem in the context
of MRP's Available-to-Promise. Wang et al. (1994) dis-
cuss this setting in the classical job shop. Another policy
commonly found in the literature is order acceptance
based on detailed scheduling. For example, Akkan (1997)
considers a single resource production system for which
orders are accepted only if they can be included in the schedule such that the order can be completed before its due date, without changing the schedule for the already accepted orders. Wester et al. (1992) consider a single resource production system with setup times and orders for
a limited number of product types. In their study, they
investigate three policies for customer order acceptance: a monolithic policy (based on detailed scheduling), and two hierarchical policies (both based on workload, but with a different execution once the order has been accepted).
Simulation results show that the hierarchical policies
perform worse than the monolithic policy if the setup
times are suciently large and the due dates suciently
tight. In cases with loose due dates, the monolithic policy
appears to perform slightly worse than the hierarchical
policies. The authors indicate that selective acceptance
seems to be the main reason for the better performance of
the monolithic policy in case of tight due dates and high
setup times. Ten Kate (1994) further builds on this re-
search line.
3. Central order acceptance policies
The order acceptance policies discussed in the literature
are based either on workload or on detailed scheduling.
This also matches to a large extent the policies used in
industry, which are generally workload based in the case
where the capacity complexity is low and/or sufficient
slack exists in the system, or schedule based in the case of
less slack in the system or increased complexity. Batch
chemical manufacturing can be considered as a very
complex job shop with additional constraints. However,
due to the great scheduling complexity and the high interrelations between the jobs in multipurpose batch chemical shops (Raaymakers and Hoogeveen, 1999), schedule based evaluations are very time consuming. We therefore
introduce a third policy, which is based on a regression model of specific job set and resource configuration characteristics. This policy falls in between the two other policies with respect to the level of detail of the information used. We are interested in evaluating under which
conditions the regression-based policy outperforms the
workload-based policy. To obtain initial insights, we will
limit ourselves to a setting with deterministic processing
times. This enables us to compare the two policies to a
benchmark policy, based on creating a detailed schedule
to support the central decision. Note that such a policy is
time consuming and might therefore not be realistic in a
practical setting under the time constraints common for
order acceptance decisions.
In this paper, two centralized policies to support order
acceptance are compared in a decentralized setting: a
workload policy and a regression-based policy (further
denoted as `makespan estimation policy'). We consider
the following setting. Orders arrive with a non-negotiable
lead time, which is the same for all orders. Orders are
evaluated immediately upon their arrival. Each order
consists of exactly one job with a given deterministic
processing structure. The job associated with an order
arriving in planning period t with a lead time e can be allocated to one of the planning periods t + 1 to t + e. The jobs allocated to period t have already been released to the production department. An order is accepted only if, according to the policy used, sufficient capacity is expected to be available to complete the resulting job before the due date of the order. Orders that fail this test are rejected and leave the system.
If an order is accepted, the resulting job is immediately
allocated to the earliest period for which the test is successful. This initial and central allocation of jobs to periods is used as the starting set for the decentralized and detailed scheduling of jobs, which is discussed in the next section.
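The acceptance-and-allocation step described above can be sketched as follows (a minimal illustration; the function and data-structure names are ours, not the paper's, and is_achievable stands for whichever policy test — workload, makespan estimation, or scheduling — is in use):

```python
# Sketch of the centralized acceptance step: an arriving order may be
# allocated to the earliest period t+1 .. t+e whose resulting job set
# passes the achievability test of the policy in use.

def accept_order(job, t, e, job_sets, is_achievable):
    """Try periods t+1 .. t+e in order; allocate to the first feasible one.

    job_sets      -- dict mapping period -> list of jobs already allocated
    is_achievable -- policy test: takes a candidate job set, returns bool
    Returns the period the job was allocated to, or None if rejected.
    """
    for period in range(t + 1, t + e + 1):
        candidate = job_sets.get(period, []) + [job]
        if is_achievable(candidate):
            job_sets.setdefault(period, []).append(job)
            return period
    return None  # rejected: no period has sufficient expected capacity
```

Orders that fail the test for every period within the lead time leave the system, exactly as in the text.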
The two policies dier in the way they construct an
achievable job set. Under the workload policy, a job may
be allocated to a planning period if the workload per
resource does not exceed the available capacity per re-
source and the total workload for the entire job set (for
all resources) does not exceed a speci®ed maximum
workload. Under the makespan estimation policy, a job
may be allocated to a planning period if the estimated
makespan of the resulting job set does not exceed the
period length with a certain probability.
As a benchmark we use a centrally executed detailed
scheduling policy. Under this policy, a job may be allo-
cated to a planning period if a schedule can be con-
structed for the resulting job set such that the makespan
does not exceed the period length.
3.1. Workload policy
Under the workload policy, orders may be allocated to a
planning period as long as the total workload does not
exceed a speci®ed maximum workload and the workload
per resource does not exceed the available capacity per
resource. Consequently, a job set is considered achiev-
able, if the following conditions are met:
\sum_{j \in J} \sum_{i=1}^{I_j} p_{ij} \le (1 - s) N T,   (1)

and

\forall r: \sum_{j \in J} \sum_{i \in P_r} p_{ij} \le T,   (2)

where

p_{ij} = processing time of processing step i of job j;
J = job set to be evaluated;
I_j = total number of processing steps in job j;
s = slack fraction, 0 \le s \le 1;
N = total number of resources;
T = period length;
P_r = set of processing steps that need to be executed at resource r.
3.2. Makespan estimation policy
The makespan estimation policy estimates the difference between the job set makespan (to be obtained by simulated annealing at the decentralized scheduling decision) and a single resource lower bound on the makespan based on Carlier (1987). This difference is caused by the interaction between the jobs on the resources and is not included in the Carlier lower bound. We will denote this difference as the ``interaction margin'' (I) (Raaymakers and Fransoo, 1999):

I = (C_{max} - LB) / LB,   (3)
where C_{max} is the makespan obtained by simulated annealing and LB is the Carlier lower bound. The estimate of the interaction margin uses aggregate characteristics of the job set and the resource configuration and can thus be used to estimate the makespan of the job set. We use a model based on resource and job set characteristics developed in an earlier paper (Raaymakers and Fransoo, 1999). We assume that the estimated interaction margin is an unbiased estimate of the actual interaction margin, with a normally distributed estimation error. Therefore, a safety factor (k_\alpha) is introduced in the makespan estimate, similar to the work by Enns (1993, 1995) on flowtime estimations. Consequently, the (1 - \alpha) confidence estimate of the makespan can be formulated as follows:

\hat{C}_{max}^{1-\alpha} = (1 + \hat{I} + k_\alpha \sigma_e) LB,   (4)

where \sigma_e is the standard deviation of the estimation error.
The hats are used to indicate the estimate of a variable.
Using this policy, orders can be accepted for a specific planning period as long as the estimated makespan of the order set \hat{C}_{max}^{1-\alpha} remains smaller than the period length:

\hat{C}_{max}^{1-\alpha} \le T.   (5)
The estimation error in this model will be discussed in
more detail in Section 5.
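A minimal sketch of the resulting acceptance test, assuming the regression estimate of the interaction margin, its error standard deviation, and the Carlier lower bound have already been computed elsewhere (their computation follows Raaymakers and Fransoo (1999) and Carlier (1987) and is not reproduced here):

```python
# Equations (4) and (5): the (1 - alpha) confidence makespan estimate and
# the corresponding acceptance test against the period length T.

def estimated_makespan(LB, I_hat, k_alpha, sigma_e):
    """(1 - alpha) confidence estimate of the makespan, Equation (4)."""
    return (1 + I_hat + k_alpha * sigma_e) * LB

def accept_by_estimation(LB, I_hat, k_alpha, sigma_e, T):
    """Acceptance test (5): estimated makespan must fit in the period."""
    return estimated_makespan(LB, I_hat, k_alpha, sigma_e) <= T
```

The parameter values in any call are illustrative; in the paper k_\alpha is tuned to reach a target service level.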
3.3. Scheduling policy
The benchmark policy for order acceptance is based on
centrally constructing a detailed schedule after every or-
der arrival. A job may be allocated to a planning period
only if a schedule of the resulting job set can be constructed with a makespan that does not exceed the period length:

C_{max} \le T.   (6)
Schedules are constructed by a simulated annealing al-
gorithm developed by Raaymakers and Hoogeveen
(1999). Note that this means that an entirely new schedule
is reconstructed once a new job has been added to the job
set. We do not consider on-line scheduling algorithms, in
which minor changes to an existing schedule are made.
Algorithm development for this on-line one-by-one
problem is very limited and not yet available for job
shops (Sgall, 1998). Some iterative repair heuristics
dealing with uncertainty in the process times have now
been developed (see, e.g., Van Bael, 1999).
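The paper relies on the simulated annealing algorithm of Raaymakers and Hoogeveen (1999), whose details are not reproduced here. As an illustration of the mechanism only, the generic annealing skeleton below searches over job sequences for a user-supplied makespan function; the neighborhood (pairwise swaps), cooling scheme, and parameter values are our assumptions, not theirs:

```python
import math
import random

# Generic simulated-annealing skeleton over job sequences. `makespan` maps
# a sequence of jobs to a schedule length; in the no-wait job shop it would
# be a schedule builder, which is not reproduced here.

def anneal_makespan(jobs, makespan, t0=100.0, cooling=0.95, iters=2000, seed=0):
    rng = random.Random(seed)
    seq = list(jobs)
    cur = best = makespan(seq)
    best_seq = list(seq)
    temp = t0
    for _ in range(iters):
        i, j = rng.sample(range(len(seq)), 2)   # propose a swap of two jobs
        seq[i], seq[j] = seq[j], seq[i]
        cand = makespan(seq)
        # accept improvements always; accept deteriorations with
        # probability exp(-(cand - cur) / temp)
        if cand <= cur or rng.random() < math.exp(-(cand - cur) / temp):
            cur = cand
            if cand < best:
                best, best_seq = cand, list(seq)
        else:
            seq[i], seq[j] = seq[j], seq[i]     # undo the rejected swap
        temp = max(temp * cooling, 1e-9)        # geometric cooling
    return best, best_seq
```
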
4. Decentralized scheduling and execution of the job sets
Before the start of each period, the jobs allocated to that
period are released for decentralized detailed scheduling
and execution by the production department. The simu-
lated annealing algorithm developed by Raaymakers and
Hoogeveen (1999) is used to construct a schedule. For
each period, a schedule is constructed based on an empty
production system. This implies that the shop again needs
to be empty at the end of the period. If the resulting
makespan exceeds the period length, completing the job
set in this period is not feasible and jobs must be shifted
to a later period. If the resulting makespan is smaller than
the period length, jobs from a later period may be added
to the job set for the current period without violating due
dates.
The following procedure is followed to select jobs that
are shifted to a later period if a job set is not achievable.
Candidates for being shifted to the next period are the
jobs on a critical path in the best obtained schedule. If
there are critical path jobs with a due date beyond the
current period, then these jobs are the initial candidates.
Otherwise, all jobs on a critical path are candidates. The
eect on the makespan is evaluated by removing the
candidate jobs one at a time from the schedule without
changing the sequence of the remaining jobs. Upon re-
moval of a job, the start times of the remaining jobs in the
schedule are decreased to restore a left-justi®ed schedule.
If removing a single job suffices to realize a makespan shorter than the period length, then among such jobs the one with the smallest total processing time is removed.
This is done to realize a high utilization in the current
period and to limit the amount of workload added to
later periods. If more than one job needs to be removed,
then the first job to be removed is the one that gives the
largest decrease in makespan. The procedure is repeated
until a schedule is realized with a makespan shorter than
the period length. The removed jobs are allocated to the
following period. This may result in a job set for that
period which is not expected to be achievable according to the order acceptance policy used. Consequently, some
jobs allocated to that period may need to be shifted to a
later period. The procedure used to select candidate jobs
for reallocation is similar to the procedure outlined
above. However, whether the removal of a job results in
an achievable plan is evaluated by the order acceptance
and capacity loading policy. For example, if the workload
policy is used for order acceptance, the achievability of
job sets after re-allocation of jobs is evaluated on the
basis of the workload. The re-allocation of jobs to later
periods is repeated until each job set, over the planning
horizon, is expected to be achievable. The acceptance
of arriving orders is based on the job sets resulting after
re-allocation.
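The backward-shift selection rule of this section can be sketched as follows (names are illustrative; makespan_without(j) is assumed to return the makespan of the left-justified schedule after removing job j, and total_ptime(j) the total processing time of job j):

```python
# Selection of the job to shift to the next period when a job set is not
# achievable. Candidates come from a critical path of the best schedule.

def select_job_to_remove(candidates, makespan_without, total_ptime, T):
    """Pick one candidate job to shift to the next period.

    If removing a single job already brings the makespan below the
    period length T, take the such job with the smallest total processing
    time (to keep utilization high and limit workload pushed to later
    periods); otherwise take the job whose removal decreases the
    makespan the most, and repeat the procedure on the rest.
    """
    feasible = [j for j in candidates if makespan_without(j) <= T]
    if feasible:
        return min(feasible, key=total_ptime)
    return min(candidates, key=makespan_without)  # largest makespan decrease
```
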
If the job set makespan is smaller than the period
length, it may be possible that some jobs from the fol-
lowing period can be included in the current period.
Therefore, the detailed scheduler evaluates whether jobs allocated to the following period can be inserted into the schedule of the current period such that the makespan of the resulting schedule remains shorter than the period length. Each job in the job set for the following period is considered as a candidate for being shifted forward. The jobs are evaluated one by one in order of non-increasing total processing time, since, to realize a high capacity utilization, jobs with a high workload are preferably shifted forward.
Note that shifting jobs backward and forward is a re-
sult of the aggregate nature of the workload policy and
the makespan estimation policy. These policies make
order acceptance and capacity loading decisions based on
an aggregate model of the production system. Therefore,
there is no guarantee that job sets obtained can actually
be completed within a planning period. Hence, shifting
jobs backward or forward is necessary to avoid capacity
conflicts or unnecessary idle time. Under the scheduling
policy, however, an exact model of the production system
is used centrally. Therefore, the job sets resulting from the
scheduling policy are always achievable and hence never
need to be shifted.
5. Estimation quality of the makespan estimation policy
As has been shown by Raaymakers and Fransoo
(1999), a linear regression model can provide a very good
estimate of the makespan of a given set of jobs in a no-
wait job shop. For various job structures and shop configurations, the regression model could explain over 90%
of the variance in the makespan. The regression model
was based on the following aggregate job set and resource
configuration characteristics: average number of identical
resources, average number of processing steps per job,
average overlap of processing steps within a job, standard
deviation of processing times, and workload balance over
the resources. Definitions of each of these characteristics
are given in the Appendix. For a detailed description of
the job sets, resource configurations, and regression
models, we refer to Raaymakers and Fransoo (1999). The
performance of the regression model turned out to be
rather insensitive to variations in the characteristics of the
job sets and resource structure. Since a regression model
is easy to apply, it makes sense to investigate the per-
formance that can be obtained by using such a model for
evaluating orders during a central order acceptance de-
cision. However, under this decision, the regression
model is used in a dynamic way, i.e., it is used each time
an order arrives to investigate the makespan conse-
quences of accepting this order in addition to the orders
that have already been accepted prior to the current ar-
rival. The latter orders were also accepted on the basis of
a test with the same regression model. We may expect
that, under this procedure, orders with certain specific
characteristics will have a higher likelihood of being ac-
cepted than other orders, especially if the period under
consideration has already been loaded by many accepted
orders. Consequently, the resulting job set will probably
not be a random selection out of the arriving order
stream, as was the case in the job sets that were used to
construct the model. This issue is also briefly addressed
by Wester et al. (1992). We may therefore expect that a
bias will occur in the makespan estimate of a job set
selected based on the regression model. This bias has to
be included into the order selection rule (see Equation
(4)). Thus, we have to investigate the magnitude of the
estimation bias, caused by the selectivity of the order
selection rule.
We conducted a long simulation run for a situation
for which we may expect that selectivity will occur. We
chose a situation with a high demand/capacity ratio,
high job mix variety, high workload balance, and long
delivery lead times. Two runs of 100 planning periods
were done. In run I, orders are accepted and allocated
based on the makespan estimation policy; in run II,
orders are accepted randomly with the acceptance
probability P set empirically such that the output of
both runs is comparable. Following this, the interaction
margin is estimated ex-post using the regression model.
The resulting errors in the interaction margin estimation
are given in Table 1.
Table 1 shows that the makespan estimation model has
a clear bias, if it is used to make the order acceptance and
capacity loading decision. Apparently, job sets resulting
from order acceptance decisions based on a regression
based makespan estimation model dier from job sets
that have not undergone this type of selection process.
This is con®rmed by a further analysis of the selected job
sets.
In Table 2, we have compared the average values of
the four factors in the regression model in the 100 se-
lected job sets in each of the two runs. Given the fact
that orders are selected randomly in run II, this means
that the values of the job set characteristics in this set
are equal to the values of the original job set. For the
average number of processing steps, the average overlap,
and the standard deviation of processing times, this is
easy to check and prove true. It is not possible to check
this for the workload balance, since in the original job
set (before acceptance), the utilization would be 100%
and the workload balance cannot be determined. Table 2 shows that there is a clear difference between the
job sets selected in the two runs for the factor `workload
balance', which indicates the balance in capacity use for
various resources of a particular job. Apparently, the
regression-based estimation policy selects jobs such that
a more even workload on the various resources results,
since the workload balance in run I is more even than
the workload balance in run II. Note that the bias can
only be estimated by actually applying the estimation
model. Apart from the very brief discussion in Wester
et al. (1992), we have not seen the issue of selectivity
discussed in any other literature that deals with order
acceptance. In the remainder of the paper, we deal with
this estimation bias by adjusting the safety factor k_\alpha in Equation (4).
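For illustration, the ex-post bias measurement and its use as an additive correction in Equation (4) might look as follows (a sketch under our own assumptions; in the paper the correction is absorbed into the tuning of the safety factor k_\alpha, and the error magnitudes reported in Table 1 are of the order 0.059 with standard deviation 0.070 for run I):

```python
# Quantify the selectivity bias ex post and fold it into the makespan
# estimate of Equation (4) as an additive correction to the estimated
# interaction margin (an illustrative alternative to retuning k_alpha).

def bias_and_spread(errors):
    """Mean and standard deviation of the ex-post estimation errors."""
    n = len(errors)
    mean = sum(errors) / n
    var = sum((e - mean) ** 2 for e in errors) / n
    return mean, var ** 0.5

def adjusted_estimate(LB, I_hat, k_alpha, bias, sigma_e):
    """Equation (4) with the observed bias added to the estimated margin."""
    return (1 + I_hat + bias + k_alpha * sigma_e) * LB
```
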
Table 1. Estimation errors in the interaction margin of the makespan estimation model

Run    Average error    Standard deviation of error
I      0.059            0.070
II     0.003            0.100
Table 2. Job set characteristics

Run   Average number of    Average overlap of    Standard deviation of    Workload
      processing steps     processing steps      processing time          balance
I     5.2                  0.57                  14.2                     0.95
II    5.6                  0.55                  14.1                     0.87
6. Simulation experiments
We conducted simulation experiments to compare the
workload based acceptance policy, the regression-based
estimation acceptance policy, and the benchmark sched-
ule-based acceptance policy. Since the makespan estima-
tion policy includes more characteristics of the job set
than just the workload, we may expect this policy to
perform better than the workload policy. We are inter-
ested in investigating under what conditions this difference is largest and how it compares to the performance of
the benchmark. The performance is measured by the re-
alized capacity utilization and service level. We chose the
capacity utilization as a performance indicator because in
an over-demanded situation the utilization that can be
realized is directly related to the number of orders that
can be accepted. In turn, capacity utilization influences
the revenues of a company. The service level is used as the
second performance indicator because it indicates the
reliability of the due dates agreed with the customers. The
service level is defined as the percentage of orders that are completed before their due dates. Note that the service
level realized by the schedule-based policy will always be
100% because an exact model of the production system is
used to make order acceptance and capacity loading decisions. For the other two policies, the slack fraction (s in Equation (1)) and the safety factor (k_\alpha in Equation (4)) have been set such that a service level of 95% is reached.
The capacity utilization per period (\rho) is measured as follows:

\rho = \frac{\sum_{j \in J} \sum_{i=1}^{I_j} p_{ij}}{N T}.   (7)
As a third, internal, performance measure, we measured
the fraction of jobs that need to be rescheduled in the de-
centralized scheduling and execution step. A result of using
policies for order acceptance and capacity loading that are
based on aggregate information is that the job sets will not
always be achievable. Hence, some replanning is always
required because order acceptance and capacity loading
decisions are based on an aggregate model of what can be
realized by the production system. The amount of re-
planning is determined by how close this aggregate model
is to the actual situation at the production system. Many
jobs need to be shifted backwards if the aggregate model
makes an optimistic estimate of what can be realized by the
production system. On the other hand, many jobs can be
shifted forward if the aggregate model makes a pessimistic
estimate of what can be realized by the production system.
In either case, replanning jobs requires time and effort
from the planner in a company. In industrial practice, little
replanning activity is therefore preferred.
6.1. Experimental design
In this section, we present the general settings of the
simulation experiments and the parameters that are var-
ied. The following assumptions are made with respect to
the simulation experiments:
·Production system with five resource types, with two
resources per type.
·Exponentially distributed inter-arrival times of or-
ders.
·Equal and deterministic requested delivery lead times
for all orders.
Further details of the production department considered
have been presented by Raaymakers et al. (2000).
The demand/capacity ratio and lead time parameters
used in the simulation experiments are given in Table 3.
With respect to the demand/capacity ratio, we consider
two levels. At the high level, the average demand re-
quirements for capacity are equal to the total available
capacity per planning period. At the low level, the aver-
age demand requirements for capacity are equal to 70%
of the total available capacity per planning period. As has
been shown by Raaymakers et al. (2000), due to the no-
wait structure of processing steps in each job, capacity
utilization in this type of industry is at most between 50
and 60%. Thus, both demand levels investigated represent situations where demand effectively exceeds available
capacity. We consider two levels for the requested lead
times, namely two and four periods.
Each order consists of exactly one job with a specified
structure of no-wait processing steps. The job character-
istics are generated randomly on the arrival of the order.
Hence, each job arriving at the system may be different. The performance of the order acceptance and capacity loading policies might be affected by the job mix variety
Table 3. Parameter settings for the simulation experiments

Parameter                    Level 0                                    Level 1
Demand/capacity ratio (b)    0.7                                        1.0
Job mix variety (c)          4–7 processing steps,                      1–10 processing steps,
                             20–30 processing time                      1–49 processing time
Workload balance (d)         30, 25, 20, 15 and 10% of demand           20% of demand requirements
                             requirements for resource types 1 to 5     for each resource type
Standard lead time (e)       2 periods                                  4 periods
and the workload balance. Therefore, two levels of job mix variety and workload balance are considered. In the situation with high job mix variety, the number of processing steps per job is uniformly distributed between one and 10, and the processing time is uniformly distributed between one and 49. In the situation with low job mix variety, the number of processing steps per job is uniformly distributed between four and seven, and the processing time is uniformly distributed between 20 and 30. Note that in both situations the average number of processing steps and the average processing time are the same. In generating the jobs, each processing step is allocated to a resource type. In the situation with high workload balance, the allocation probability is the same for each resource type. In the situation with low workload balance, the allocation probability is different for each resource type. On average 30, 25, 20, 15 and 10% of the processing steps will be allocated to the five different resource types, respectively.
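To make this experimental design concrete, the job generation procedure described above can be sketched as follows. The function name, the data layout (a job as a list of (resource type, processing time) steps), and the use of Python's `random` module are our own assumptions, not part of the paper:

```python
import random

def generate_job(high_variety, low_balance, rng):
    """Generate one job as a list of (resource_type, processing_time) steps.

    Illustrative sketch of the experimental design: high job mix variety
    draws U{1..10} steps with U{1..49} processing times; low variety draws
    U{4..7} steps with U{20..30} times. Resource-type shares are
    30/25/20/15/10% in the low-balance case and uniform (20% each)
    in the high-balance case.
    """
    if high_variety:
        n_steps = rng.randint(1, 10)
        times = [rng.randint(1, 49) for _ in range(n_steps)]
    else:
        n_steps = rng.randint(4, 7)
        times = [rng.randint(20, 30) for _ in range(n_steps)]
    weights = [0.30, 0.25, 0.20, 0.15, 0.10] if low_balance else [0.20] * 5
    resources = rng.choices(range(1, 6), weights=weights, k=n_steps)
    return list(zip(resources, times))
```

Note that both variety settings share the same expected number of steps (5.5) and expected processing time (25), so only the spread differs between the scenarios.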
The length of the planning period is chosen such that each job set consists of a realistic number of jobs. The empirical study by Raaymakers et al. (2000) showed that a job set of 40 to 50 jobs is realistic for the type of industrial process considered. The length of the planning period depends on the average processing time per job, and has been fixed at 1000 time units. We used a simulation run length of 24 periods. To eliminate start-up effects (of jobs being shifted forward and backward), the first e + 1 periods are excluded from the results. Three runs are done for each combination. The same seeds are used for each combination in order to obtain identical order arrivals for the different policies.
6.2. Experimental results
The workload policy and the makespan estimation policy are based on aggregate models of the production system. Consequently, job sets that are determined by one of these policies may not necessarily be achievable. This may influence the realized service level. In both the workload policy and the makespan estimation policy, we use safety parameter settings to influence the realized service level. The value of the safety parameters has been determined by running a tuning run for each of the test sets. In this tuning run, the value of the safety parameters is adjusted such that the required service level is obtained.
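The paper does not specify the adjustment scheme used in these tuning runs. Assuming the realized service level is non-decreasing in the safety parameter, one plausible sketch is a bisection search (all names here are hypothetical):

```python
def tune_safety(run_simulation, target_service=0.95, lo=0.0, hi=1.0, iters=20):
    """Find (approximately) the smallest safety-parameter value whose
    tuning run meets the target service level.

    `run_simulation(safety)` is assumed to return the realized service
    level of one simulation run and to be non-decreasing in `safety`;
    if even `hi` cannot reach the target, `hi` is returned unchanged.
    This bisection scheme is our assumption -- the paper only states
    that the parameter is adjusted until the required level is obtained.
    """
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if run_simulation(mid) >= target_service:
            hi = mid  # target met: try a less conservative setting
        else:
            lo = mid  # service too low: tighten the safety parameter
    return hi
```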
To compare the two policies and the benchmark we have measured the average realized capacity utilization and the average replanning fraction over the runs while maintaining a minimum service level of 95%. Recall that, under the schedule-based acceptance policy, the realized service level is always 100% and the replanning fraction is always zero. The results are given in Table 4. An ANOVA showed that all main effects (four experimental factors and three policies) have a statistically significant contribution towards the value of the capacity utilization performance measure. Below, we will discuss the relevant differences between the policies under the various scenarios. It is worthwhile to note that the CPU time on a Pentium 133 MHz is about 60 seconds to evaluate a single order acceptance decision based on the schedule-based policy, whereas it takes negligible time to take the same decision based on the makespan estimation policy.
We observe that the realized capacity utilization (ρ) under the scheduling policy ranges from 0.54 for scenario (L,L,L,L) to 0.68 for scenario (H,H,H,H), which indicates the relevance of the factors varied in the 16 scenarios.

Table 4. Simulation results: average capacity utilization (ρ) and average replanning fraction (rpf) for the order acceptance policies under different scenarios

Scenario (H = high; L = low)    Scheduling benchmark    Makespan estimation    Workload
b  c  d  e                      ρ      rpf              ρ      rpf             ρ      rpf
H  H  H  H                      0.68   0                0.61   0.18            0.58   0.30
H  H  H  L                      0.63   0                0.61   0.16            0.59   0.28
H  H  L  H                      0.61   0                0.58   0.16            0.55   0.30
H  H  L  L                      0.61   0                0.58   0.13            0.56   0.30
H  L  H  H                      0.63   0                0.59   0.13            0.59   0.12
H  L  H  L                      0.60   0                0.59   0.14            0.58   0.13
H  L  L  H                      0.56   0                0.53   0.15            0.53   0.14
H  L  L  L                      0.55   0                0.53   0.16            0.53   0.16
L  H  H  H                      0.61   0                0.59   0.17            0.58   0.30
L  H  H  L                      0.59   0                0.59   0.14            0.58   0.27
L  H  L  H                      0.58   0                0.56   0.17            0.56   0.29
L  H  L  L                      0.56   0                0.55   0.13            0.55   0.27
L  L  H  H                      0.61   0                0.59   0.15            0.59   0.16
L  L  H  L                      0.59   0                0.59   0.14            0.59   0.16
L  L  L  H                      0.55   0                0.54   0.15            0.53   0.17
L  L  L  L                      0.54   0                0.53   0.13            0.53   0.15

For a high demand/capacity ratio (b) the scheduling policy clearly outperforms the other policies.
Also, there exists a considerable difference in performance between the makespan estimation policy and the workload policy. The differences between the performance of the two policies are especially large if b is high, whereas the differences are small for low b. In a situation with a high b, many orders arrive and many opportunities exist to select the jobs that fit in well with the other jobs. With a low b, most arriving orders can and will be accepted by all policies. Thus, only if b is high will the differences in selectivity between the policies show in the capacity utilization performance measure. We further observe that the realized capacity utilization is considerably higher if b is high. This is especially true for scenarios that also have a high job mix variety (c). This is explained by the fact that a high c means that more opportunities exist to select jobs that fit in well, especially in combination with a high b.
The results in Table 4 show that for high c and high b values, the makespan estimation policy results in a higher capacity utilization than the workload policy. It closes about half of the performance gap between the workload policy and the scheduling benchmark, except in scenario (H,H,H,H) where only about one-third of the performance gap is closed. For the remaining 12 scenarios, the difference for the capacity utilization performance measure between the workload policy and the makespan estimation policy is practically negligible.

When we consider the replanning fraction, a different picture emerges. Under the makespan estimation policy, the replanning fraction ranges from 0.13 to 0.18, whereas under the workload policy, the replanning fraction ranges from 0.12 to 0.30. A closer inspection of the results shows that for scenarios with a high job mix variety (c), the workload policy consistently results in very high replanning fractions (ranging from 0.27 to 0.30), as opposed to the level of the replanning fractions (ranging from 0.12 to 0.17) in the scenarios with low c. The makespan estimation policy, on the other hand, shows no significant difference for the replanning fraction between the different scenarios and can apparently cope very well with situations with high c values. Its resulting replanning fraction is about half of the replanning fraction of the workload policy in these high job mix variety scenarios.
7. Conclusions
In this paper, we compared the service level and capacity utilization performance of two policies to support order acceptance and capacity loading decisions under a decentralized production control structure in batch chemical industries. The two policies, a regression-based makespan estimation policy and a workload-based policy, have been benchmarked against a detailed scheduling policy.

The scheduling policy accepts orders based on a detailed schedule that has to be constructed each time an order arrives. In a deterministic production situation, this policy always performs best, because complete information on the future status of the production system is given. However, this policy is also time consuming and only applicable in a deterministic setting. The two investigated policies are based on aggregate information, which means that not all job sets may be achievable. This is compensated for by allowing jobs to be shifted forwards or backwards. The main advantage of the aggregate policies is that they are quick and require less information. We have investigated the various policies in a series of simulation experiments. The data setting for the experiments was based on empirical material.
Simulation experiments showed that a bias occurs when the regression-based estimation model is used to support order acceptance and capacity loading decisions. This is explained by the fact that jobs are accepted selectively and hence that the resulting job sets differ from the unselected job sets considered when developing the estimation model. A safety factor is used to compensate for the bias in the estimate.

Simulation experiments have further shown the conditions under which the makespan estimation policy performs significantly better than the workload policy. We considered capacity utilization and replanning effort as performance measures under a service level constraint. In situations with a high demand/capacity ratio and high job mix variety the difference in capacity utilization is considerable. In that situation, the makespan estimation policy realizes, compared to the workload policy, an increase in capacity utilization of 2 to 3%, which results in an increase in production output of up to 5%. If the job mix variety is small, the differences in capacity utilization between the policies are negligible, regardless of the demand/capacity ratio.

In the scenarios we investigated where job mix variety is high, regardless of the demand/capacity ratio, the performance of the workload policy is poor with regard to the replanning effort. In these scenarios, about 30% of the orders need replanning, as compared to only about 15% when the job mix variety is low. The makespan estimation policy does not show this deficiency and performs consistently with a replanning fraction of about 15% across all scenarios.
Therefore, we can conclude that using the makespan estimation models is especially favorable in situations with excess demand and/or high product mix variety. The result that the makespan is well predictable and that computation times for this aggregate model are negligible implies that this approach may be extended to situations with multiple departments requiring aggregate coordination. Further research under those conditions is, however, required.
The assumption of deterministic processing times implies that complete information is available when order acceptance decisions need to be made. In a situation with stochastic processing times, the schedule made upon order acceptance is only an estimate of what can be realized by the production system. The actual processing times will differ from the processing times on which this schedule is based. In that situation, the aggregate policies may even outperform the scheduling benchmark. Initial results for this situation have been obtained by Ten Kate (1994). However, further research is required to investigate the performance of these order acceptance policies under stochastic conditions.
References
Akkan, C. (1997) Finite-capacity scheduling-based planning for revenue-based capacity management. European Journal of Operational Research, 100, 170–179.
Bertrand, J.W.M. (1983) The effect of workload dependent due-dates on job shop performance. Management Science, 29, 799–816.
Carlier, J. (1987) Scheduling jobs with release dates and tails on identical machines to minimize the makespan. European Journal of Operational Research, 29, 298–306.
Enns, S.T. (1993) Job shop flowtime prediction and tardiness control using queueing analysis. International Journal of Production Research, 31, 2045–2057.
Enns, S.T. (1995) A dynamic forecasting model for job shop flow time prediction and tardiness control. International Journal of Production Research, 33, 1295–1312.
Guerrero, H.H. and Kern, G.M. (1988) How to more effectively accept and refuse orders. Production and Inventory Management, 29(4), 59–62.
Kern, G.M. and Guerrero, H.H. (1990) A conceptual model for demand management in the assemble-to-order environment. Journal of Operations Management, 9, 65–84.
Raaymakers, W.H.M., Bertrand, J.W.M. and Fransoo, J.C. (2000) Aggregation principles in hierarchical production planning in a batch chemical plant. Journal of Intelligent Manufacturing, 11(2), 217–228.
Raaymakers, W.H.M. and Fransoo, J.C. (1999) Identification of aggregate resource and job set characteristics for predicting job set makespan in batch process industries. International Journal of Production Economics, (in press).
Raaymakers, W.H.M. and Hoogeveen, J.A. (1999) Scheduling multipurpose batch process industries with no-wait restrictions by simulated annealing. European Journal of Operational Research, (in press).
Sgall, J. (1998) On-line scheduling, in Online Algorithms: The State of the Art, Fiat, A. and Woeginger, G.J. (eds.), Springer, Berlin, pp. 198–231.
Ten Kate, H.A. (1994) Towards a better understanding of order acceptance. International Journal of Production Economics, 37, 139–152.
van Bael, P. (1999) A study of rescheduling strategies and abstraction levels for a chemical process scheduling problem. Production Planning and Control, 10, 359–364.
Wang, J., Yang, J.Q. and Lee, H. (1994) Multicriteria order acceptance decision support in over-demanded job shops: a neural network approach. Mathematical and Computer Modeling, 19(5), 1–19.
Wester, F.A.W., Wijngaard, J. and Zijm, W.H.M. (1992) Order acceptance strategies in a production-to-order environment with setup times and due-dates. International Journal of Production Research, 30, 1313–1326.
Appendix

Definitions of aggregate resource and job set characteristics.

Notation:

J = number of jobs;
N = number of resources;
M = number of resource types;
L_m = workload on resource type m;
LB = lower bound on the makespan;
s_j = number of processing steps of job j;
d_{ij} = time delay of step i of job j;
p_{ij} = processing time of step i of job j;
S = total number of processing steps;
\mu_p = average processing time over all processing steps.

1. Average number of identical resources:

\mu_a = \frac{N}{M}. \tag{A1}

2. Average number of processing steps:

\mu_s = \frac{1}{J} \sum_{j=1}^{J} s_j. \tag{A2}

3. Average overlap:

\mu_g = \frac{1}{J} \sum_{j=1}^{J} g_j, \tag{A3}

where

g_j = \frac{1}{s_j - 1} \sum_{i=2}^{s_j} \left( 1 - \frac{d_{ij} - d_{i-1,j}}{p_{i-1,j}} \right). \tag{A4}

4. Standard deviation in processing times:

\sigma_p = \sqrt{ \frac{1}{S} \sum_{j=1}^{J} \sum_{i=1}^{s_j} \left( p_{ij} - \mu_p \right)^2 }. \tag{A5}

5. Workload balance, represented by the maximum utilization if the makespan were equal to the lower bound:

\rho_{\max} = \frac{L}{LB}, \tag{A6}

where

L = \frac{1}{N} \sum_{m=1}^{M} L_m. \tag{A7}
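For concreteness, the job-set characteristics (A1)–(A5) can be computed as in the sketch below. The job representation (a list of (d, p) steps per job) and all names are our own assumptions; single-step jobs are skipped in the overlap average to avoid division by zero:

```python
import math

def aggregate_characteristics(jobs, n_resources, n_resource_types):
    """Compute mu_a, mu_s, mu_g and sigma_p for a job set.

    `jobs` is a list of jobs, each a list of (d, p) pairs: d is the time
    delay of the step relative to the job start, p its processing time.
    Follows (A1)-(A5); single-step jobs are skipped in the overlap term.
    """
    J = len(jobs)
    S = sum(len(job) for job in jobs)          # total number of steps
    mu_a = n_resources / n_resource_types      # (A1)
    mu_s = S / J                               # (A2)
    overlaps = []                              # g_j per job, (A4)
    for job in jobs:
        if len(job) < 2:
            continue
        g = sum(1.0 - (job[i][0] - job[i - 1][0]) / job[i - 1][1]
                for i in range(1, len(job))) / (len(job) - 1)
        overlaps.append(g)
    mu_g = sum(overlaps) / len(overlaps) if overlaps else 0.0   # (A3)
    mu_p = sum(p for job in jobs for _, p in job) / S
    sigma_p = math.sqrt(sum((p - mu_p) ** 2
                            for job in jobs for _, p in job) / S)  # (A5)
    return mu_a, mu_s, mu_g, sigma_p
```

Characteristics (A6)–(A7) additionally require the per-resource-type workloads L_m and the makespan lower bound LB, which depend on the scheduling instance and are therefore omitted here.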
Biographies

Wenny H.M. Raaymakers is currently a logistics engineer at Akzo Nobel in Oss, The Netherlands. She holds an M.Sc. in Industrial Engineering and a Ph.D. in Operations Management, both from the Technische Universiteit Eindhoven. Her research interests specifically include production control in batch process industries. She has published in journals such as European Journal of Operational Research, International Journal of Production Economics and Journal of Intelligent Manufacturing.

J. Will M. Bertrand is full Professor of Operations Planning and Control at the Technische Universiteit Eindhoven. He holds an M.Sc. in Industrial Engineering and a Ph.D. in Operations Management from the Technische Universiteit Eindhoven. He has held a visiting position at Rutgers University in the USA and has worked for ASM-Lithography and Philips Machine Factory. His research interests lie in the area of production planning and control in the semi-process industry and the capital goods industry. He has been the co-author of three books on production planning and control, has published in journals such as European Journal of Operational Research, Production Planning and Control, International Journal of Production Research, International Journal of Production Economics, Transportation Research, Journal of Operations Management, and Management Science, and is a member of the Editorial Board of International Journal of Production Research and International Journal of Operations and Production Management. He is a member of INFORMS, POMS and EUROMA.

Jan C. Fransoo is an Associate Professor of Operations Planning and Control at the Technische Universiteit Eindhoven. He holds an M.Sc. in Industrial Engineering and a Ph.D. in Operations Management from the Technische Universiteit Eindhoven and has held visiting positions at Clemson University and Stanford University in the USA. His research interests lie in the area of production planning and control and Supply Chain Management, particularly in (semi-)process industries. He has published in journals such as European Journal of Operational Research, International Journal of Operations and Production Management, Journal of Intelligent Manufacturing, International Journal of Production Economics, Production and Operations Management, Transportation Research, Supply Chain Management and Supply Chain Management Review. He is a member of INFORMS, POMS, and DSI.
... It turns out that these methods do not perform well in their specific setting. Raaymakers et al. (2000b) compare a regression based makespan estimation approach with a workload-based approach for batch chemical manufacturing in a setting with deterministic processing times. When the utilisation is high and when there is a high variety in the job mix, the regression-based model outperforms the workload-based model. ...
... Instead of looking at the aggregate capacity over all resources, we can look at the resource level. For example, Raaymakers et al. (2000b) looked at the capacity per workcenter. Taking the precedence relations between the jobs into account may lead to a better estimation of the capacity utilisation and the possibility to complete the order on time. ...
... Similar approaches for the single resource case can be found in Wester et al. (1992), Ten Kate (1995) and Akkan (1997). Raaymakers et al. (2000b) and Ivanescu et al. (2002) use a simulated annealing approach to make a detailed schedule. We use the Earliest Due Date dispatching rule to construct a detailed schedule. ...
... Different policies may be used to evaluate whether sufficient capacity is available to produce an order before the due date requested by the customer. Generally, the order acceptance policies used in industry are workload based, in the case where the capacity complexity is low or sufficient slack exists in the system, or schedule based in the case of less slack in the system or increased complexity (Raaymakers et al., 2000b). Due to the high scheduling complexity and many interrelations between processing steps 1. Introduction 9 in batch process industries, schedule based evaluations are very time consuming. ...
... Order acceptance has received limited attention in the literature. Research on order acceptance is reported by Guerrero & Kern (1988); Kern & Guerrero (1990); Wester et al. (1994); Ten Kate (1994); Wang et al. (1994); Akkan (1997); Raaymakers et al. (2000a) and Raaymakers et al. (2000b). ...
... It turns out that these methods do not perform well in their specific setting. Raaymakers et al. (2000b) compare a regression based makespan estimation policy with both a workload-based policy and a detailed scheduling-based 40 4. Dynamic order acceptance policy for batch chemical manufacturing in a setting with deterministic processing times. When the utilization is high and there is a high variety in the job mix, the regression-based model outperforms the workload-based model. ...
... An application case for the chemical batch industry is described in Raaymakers et al. (2000b). ...
... References dealing with this problem are e.g. , Ivanescu et al. (2002), and Wang et al. (2006). A particular case of this problem is presented e.g. in Luss and Rosenwein (1993), Raaymakers et al. (2000b), Sawik (2006), and Wester et al. (1992). ...
... For all approaches (except for the integrated one), it is possible that decisions adopted in some of the subproblems result in problematic or infeasible solutions for other subproblems. An example would be, in approach V, to obtain a set of committed orders in an OAS procedure that cannot be Raaymakers et al. (2000b), and Saad et al. (2004). ...
Article
Full-text available
Available-to-promise (ATP) systems deal with a number of managerial decisions related to order capture activities in a company, including order acceptance/rejection, due date setting, and resource scheduling. These different but interrelated decisions have often been studied in an isolated manner, and, to the best of our knowledge, no framework has been presented to integrate them into the broader perspective of order capture. This paper attempts to provide a general framework for ATP-related decisions. By doing so, we: (1) identify the different decision problems to be addressed; (2) present the different literature-based models supporting related decisions into a coherent framework; and (3) review the main contributions in the literature for each one of these. We first describe different approaches for order capture available in the literature, depending on two parameters related to the application context of ATP systems, namely the inclusion of explicit information about due dates in the decision model, and the level of integration among decisions. According to these parameters, up to six approaches for ATP-related decisions are identified. Secondly, we show the subsequent decision problems derived from the different approaches, and describe the main issues and key references involving each one of these decision problems. Finally, a number of conclusions and future research lines are discussed.
... When firms have high utilization rates along with high product variety, they often accept customer orders via the workload policy or some other ad hoc method. Using a simulation based on industry data, Raaymakers et al. (2000) show that a regression model for accepting customer orders based on order characteristics outperforms the workload policy with respect to capacity utilization and the need for replanning. Kallrath (2005) presents a model for customer portfolio optimization. ...
Article
Full-text available
This article analyzes a complex scheduling problem at a company that uses a continuous chemical production process. A detailed mixed-integer linear programming model is developed for scheduling the expansive product line, which can save the company an average of 1.5% of production capacity per production run. Furthermore, through sensitivity analysis of the model, key independent variables are identified, and regression equations are created that can estimate both the capacity usage and material waste generated by the product line complexity of a particular production run. These regression models can be used to estimate the complexity costs imposed on the system by any particular product or customer order. Such cost estimates can be used to properly price new customer orders and to most economically assign them to the production runs with the best fit. The proposed approach may be adapted for other long-production-run manufacturing companies that face uncertain demand and short customer lead times.
Chapter
This paper explores a dynamic order acceptance policy of firms in a decentralized supply chain (SC) to improve the profits of an SC by using the machine learning method. The dynamic arrival and due date orders in SC were divided into three types according to the profit that the SC can obtain. Two echelons of the SC, in which a supplier that cooperate with other firms in SC will receive orders in and out of the SC, are employed in this study. Capturing four order characteristics in make-to-order SC, we examine whether this model can make a higher profit by using a simulation model of Support Vector Machines (SVMs) rather than First Come First Serve (FCFS) and Artificial Neural Network (ANN). The experimental results indicate that SVMs is an efficient tool for firms in a dynamic SC to improve the performance of the SC. A numerical example is used to validate the results.
Article
Full-text available
Purpose The purpose of this paper is to develop a framework for the manufacturer of a make-to-order company to simultaneously negotiate with multiple customers through mediator to achieve order acceptance decisions (OADs). The paper developed mathematical models for the manufacturer, as well as customers to revise their offers during negotiations. Moreover, the paper also proposed a method for the mediator to carry out his assigned duties to assist in negotiation. In the decision process, mediator acts as a bridge between the manufacturer and customers to reach an agreement. A numerical example is enumerated to illustrate the working mechanism and superiority of proposed framework as compared to the framework where simultaneous negotiations are carried out without the presence of mediator. Findings Iterative method of negotiation conducted without mediator leads to delay in reaching agreement as the aspiration level of manufacturer offer and counter-offer of customer will never cross each other. In addition, the party who submits the offer first may suffer as the opponent can take the advantage of his/her offer during negotiation, thereby, derailing the issue of fairness. Introducing mediator between the manufacturer and the customer for their negotiations could overcome these two issues. Numerical analysis clearly illustrates that, in average, the rounds of negotiation to reach an agreement can be reduced by 22 percent using proposed negotiation framework. In addition, the fairness in negotiations can be improved by 33 percent with the incorporation of mediator. Originality/value Through continuing research efforts in this domain, certain models and strategies have been developed for negotiation. Iterative method of negotiations without mediator will help neither the manufacturer nor the customer in terms of fairness and negotiations round to reach an agreement. 
To the best of the author’s knowledge, so far, this is the first instance of research work in the domain of OAD and negotiation framework that attempts to incorporate mediator for simultaneous negotiation between manufacturer and customers on multiple issues simultaneously.
Article
Work flows in a job-shop are determined not only by the release load but also by the number of accepted orders. In this paper the common assumption of accepting all incoming orders regardless of shop condition is relaxed. Instead of placing the orders in a 'pre-shop pool' queue, as in previous research, orders that arrive at the shop, when it is highly congested, may be immediately rejected or their due dates may be negotiated. This paper explores the idea of controlling the workload since the acceptance/rejection stage. A new acceptance/rejection rule is proposed, and tests are conducted to study the sensitivity of job-shop performance to different order acceptance parameters, like the tolerance of the workload limit and the due date extension acceptance. The effect of the negotiation phase on the job-shop performance is evaluated using a simulation model of a generic random job-shop that allow us to conclude that having a negotiation phase prior to rejection improves almost all workload performance measures. Different tolerances of the workload limit slightly affect the performance of the job-shop.
Article
This paper studies the non-permutation solution for the problem of flow shop scheduling with order acceptance and weighted tardiness (FSS-OAWT). We formulate the problem as a linear mixed integer programming (LMIP) model that can be optimally solved by AMPL/CPLEX for small-sized problems. In addition, a non-linear integer programming (NIP) model is presented to design heuristic algorithms. A two-phase genetic algorithm (TP-GA) is developed to solve the problem of medium and large sizes based on the NIP model. The properties of FSS-OAWT are investigated and several theorems for permutation and non-permutation optimum are provided. The performance of the TP-GA is studied through rigorous computational experiments using a large number of numeric instances. The LMIP model is used to demonstrate the differences between permutation and non-permutation solutions to the FSS-OAWT problem. The results show that a considerably large portion of the instances have only an optimal non-permutation schedule (e.g., 43.3% for small-sized), and the proposed TP-GA algorithms are effective in solving the FSS-OAWT problems of various scales (small, medium, and large) with both permutation and non-permutation solutions.
Article
Full-text available
Available-to-promise (ATP) decision, as a means for managing customer demands, production scheduling and the available resource, has three main components: order acceptance/selection, due date assignment and order scheduling. This research presents two decision support systems of hierarchical and monolithic models to integrate the three ATP components to maximise the profit, while satisfying customer orders over required time horizon and effective cost in a multi-site make-to-order supply chain scenario. Numerical examples are used to demonstrate the application of the models and their effectiveness. In order to improve the system efficiency, a branch-and-price approach is adopted to solve the proposed monolithic model.
Article
In this paper we study the permutation flow shop scheduling problem with order acceptance and weighted tardiness (PFSS-OAWT) faced by firms that have a number of candidate orders to be selected and scheduled on a flow shop production line. The objective is to maximize the total net profit with weighted tardiness penalties. We formulate the PFSS-OAWT problem as an integer programming (IP) model. A heuristic algorithm named Simulated Annealing Based on Partial Optimization (SABPO) is developed for solving the IP model and obtaining near-optimal solutions. Computational studies are carried out on solving 160 problem instances with different scales (small, medium, large, and very large). The experimental results show that the SABPO algorithm exhibits good optimality for small-sized problems and robustness for medium/large-sized problems compared with benchmarks.
Article
Full-text available
This study explores the due-date performance of job shop control systems which base job due dates on a time-phased representation of the workload and the machine capacity in the shop. The performance is measured by the mean and the standard deviation of the lateness. Two parameters are used to vary the functioning of the due-date assignment system: a minimum allowance for waiting, denoted by SL, and a maximum fraction of the available capacity allowed for loading, denoted by CLL. The system increases the waiting lime allowance if congestion is observed when loading a new job. The capability of the system to observe congestion is determined by the parameters CLL and SL. Simulation experiments are used to investigate the performance of the assignment system. It is shown that the assignment system performs quite well with respect to reducing the standard deviation of the lateness; the performance is not very sensitive however to the parameter values used; with an expected capacity utilization of 90%, CLL should be set between 0.80 and 1.00 times the mean available capacity and SL should be set between 0.55 and 0.90 times the mean operation waiting time in the shop. The assignment system may also perform well with respect to controlling the mean lateness. If SL is set between 0.55 and 0.75 times the mean expected waiting time in the shop, a constant mean lateness is obtained independent of the utilization of the shop if CLL is set between 0.70 and 0.80 times of the mean available capacity. However, the mean lateness turns out to be quite sensitive to variations in the job-mix of the workload. Finally it is shown that if the values of the assignment parameters are adequate, the mean job lateness is independent of the number of operations in a job. This property can be used to monitor the correctness of the parameter values.
Article
The article introduced and discussed the essential elements for establishing a demand management (DM) system and the major decisions involved in DM (order accumulation, order prioritization, and capacity allocation). Guidelines for making these decisions were suggested. An example of DM processing was described, and the managerial implications of the DM approach were presented.
Article
This paper describes a scheduling algorithm developed to solve chemical process scheduling problems (CPSP) which belong to the job shop scheduling problems and are known to be NP-hard combinatorial optimization problems. The problem is solved using an iterative improvement algorithm in combination with a constraint satisfaction problem paradigm. Within the algorithm different rescheduling strategies based on a generative or iterative repair mechanism are examined. The final solution strategy to build rapidly near-optimal schedules combines high and low level scheduling with each using a different rescheduling strategy.
Article
The term demand management encompasses the activities required to accept customer orders and promise them productive capacity. This activity is central to effective manufacturing planning and control. The popularity of just‐in‐time systems suggests that manufacturers are steadily losing authority for establishing order delivery dates, further confounding the demand management process. No reported research has suggested how this process can be performed. This paper presents a rigorous description of the demand management process in the assemble‐to‐order environment in the form of a mathematical model. The description provided in this model should be viewed as an initial attempt to define the complex challenges that arise in dealing with demand management on an operational level. The focus of this model is the short‐term decision making involved in controlling demand management: accepting orders and dispatching capacity to fill the orders. The problem is presented in the context of the assemble‐to‐order environment, which requires the monitoring of both a final assembly schedule and a master production schedule. The factors involved in effective decision making for demand management, including capacity availability, relative costs, and the relative timing of prospective customer orders, are discussed. A simple example is presented to illustrate the performance of the mathematical model.
Article
This paper investigates flowtime prediction under conditions where Jackson's decomposition principle can be applied. Four models in which due-date setting rule parameters are based on predicted flowtime are developed and compared. Simulation results show both job characteristic and dynamic shop load information to be useful in predicting flowtimes. Analysis of prediction deviations shows that good predictions lead to errors which are approximately normally distributed. The variance of prediction errors can also be analytically determined. Therefore, quoted delivery dates can be set which are consistent with a desired level of delivery performance.
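The quoting logic summarized in this abstract (a delivery date set as predicted flowtime plus a safety allowance sized from the approximately normal prediction error, for a desired level of delivery performance) can be illustrated as follows. The function name and parameters are illustrative, not taken from the paper.

```python
from statistics import NormalDist

def quote_due_date(arrival_time, predicted_flowtime, error_std,
                   service_level=0.95):
    """Quote a delivery date: predicted flowtime plus a normal safety
    allowance sized for the desired on-time probability."""
    z = NormalDist().inv_cdf(service_level)  # normal quantile, e.g. ~1.645 at 95%
    return arrival_time + predicted_flowtime + z * error_std

# A job arriving at t=100 with a 40-hour flowtime forecast and a
# 10-hour standard deviation of the prediction error:
due = quote_due_date(100.0, 40.0, 10.0, 0.95)
```

At a 50% service level the safety allowance vanishes and the quoted date is simply arrival plus predicted flowtime.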
Article
A production situation is considered in which different items are produced on one machine. Setup times are incurred between the production of orders of different items. Production is driven by customer orders; each order concerns a batch of one product type and is furthermore completely characterized by its batch size and (customer-determined) due-date. Acceptance of orders may be refused if these orders are likely to cause late deliveries. The problem is to determine good acceptance strategies, which naturally raises the question of what information such acceptance decisions should be based on. Three basic approaches are explored in this paper. In the monolithic approach, the acceptance decision is based on detailed information on a current production schedule for all formerly accepted orders. In the hierarchic approach, the acceptance strategy is based on global capacity load profiles only, while detailed scheduling of accepted orders takes place at a lower level (possibly later in time). In the myopic approach, the acceptance decision is similar to the one in the hierarchic approach, but scheduling is myopic, i.e., once the machine becomes idle, only the next order to be produced is actually scheduled. The performance of these three approaches is compared by means of simulation experiments. The results indicate that the differences in performance are small. Insofar as the monolithic approach performs better, this is mainly due to the selective acceptance mechanism implicitly present in case of a heavy workload. An adaptation of the myopic approach to incorporate such a selective acceptance mechanism leads to a comparable performance.
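The hierarchic approach described in this abstract bases the acceptance decision on a global capacity load profile rather than on a detailed schedule. A minimal sketch of such a check follows, assuming a bucketed load profile, a single expected setup per order, and a latest-first booking rule; these mechanics are assumptions for illustration, not details from the paper.

```python
def accept_order(load, capacity, processing_time, setup_time, due_period):
    """Hierarchic acceptance check: accept only if the aggregate work
    (processing plus an expected setup) fits within the cumulative
    capacity available up to and including the order's due period.

    load[t]     -- work already committed to period t (mutated on accept)
    capacity[t] -- capacity of period t
    """
    work = processing_time + setup_time
    committed = sum(load[: due_period + 1])
    available = sum(capacity[: due_period + 1])
    if committed + work > available:
        return False  # likely to cause a late delivery: refuse
    # Book the work into the latest periods with slack before the due date
    remaining = work
    for t in range(due_period, -1, -1):
        slack = capacity[t] - load[t]
        if slack <= 0:
            continue
        booked = min(slack, remaining)
        load[t] += booked
        remaining -= booked
        if remaining == 0:
            break
    return True

# Two periods of capacity 10 before the due period; a 10-unit order fits,
# a second 11-unit order would overload the profile and is refused:
load = [0, 0, 0]
capacity = [10, 10, 10]
first = accept_order(load, capacity, 8, 2, 1)
second = accept_order(load, capacity, 9, 2, 1)
```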
Article
Order acceptance is an important issue in job shop production systems where demand exceeds capacity. In this paper, a neural network approach is developed for order acceptance decision support in job shops with machine and manpower capacity constraints. First, the order acceptance decision problem is formulated as a sequential multiple criteria decision problem. Then a neural network based preference model for order prioritization is described. The neural network based preference model is trained using preferential data derived from pairwise comparisons of a number of representative orders. An order acceptance decision rule based on the preference model is proposed. Finally, a numerical example is discussed to illustrate the use of the proposed neural network approach. The proposed neural network approach is shown to be a viable method for multicriteria order acceptance decision support in over-demanded job shops.
Article
This paper presents a forecasting approach to flowtime prediction in a job shop. The flowtime prediction relationship developed considers both job characteristic and shop loading information. Forecast errors are shown to be approximately normally distributed. A lateness feedback approach is also developed to dynamically estimate the variance of forecast error. The estimated distribution of forecast error is used to set delivery safety allowances which are based on a desired level of delivery performance. Results show that the lead times required to maintain a desired level of delivery performance are lowest when due-date dependent dispatch is used.
Article
This paper considers the problem of scheduling independent jobs with release dates and tails on m identical machines to minimize the makespan. This m-machine problem is NP-hard in the strong sense. Jackson's schedule is defined as the list schedule built by giving priority to the available job with the largest tail. It is proved that the deviation of Jackson's schedule from the optimum is smaller than twice the largest processing time. Next, a new branching scheme is proposed by associating with each job an interval of time during which it has to be processed; to branch, the interval for a particular job is divided into two smaller ones. This is a general scheme which can be applied to many scheduling problems. Finally, a branch and bound algorithm is explained in detail and computational results are given.
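The list rule defined in this abstract (whenever a machine becomes free, start the available job with the largest tail) is concrete enough to sketch. The following Python function illustrates that rule only, not the paper's branching scheme or branch and bound algorithm.

```python
import heapq

def jackson_makespan(jobs, m):
    """Jackson's list schedule on m identical machines: whenever a
    machine becomes free, start the released job with the largest tail.

    jobs -- list of (release, processing, tail) triples
    Returns the resulting makespan, max over jobs of completion + tail.
    """
    pending = sorted(jobs, key=lambda j: j[0])  # by release date
    machines = [0.0] * m                        # min-heap of machine-free times
    heapq.heapify(machines)
    ready = []                                  # max-heap on tail: (-tail, processing)
    i = 0
    makespan = 0.0
    while i < len(pending) or ready:
        t = heapq.heappop(machines)             # earliest free machine
        if not ready and i < len(pending):
            t = max(t, pending[i][0])           # idle until the next release
        while i < len(pending) and pending[i][0] <= t:
            _, p, q = pending[i]                # release everything available by t
            heapq.heappush(ready, (-q, p))
            i += 1
        neg_q, p = heapq.heappop(ready)         # largest tail first
        finish = t + p
        makespan = max(makespan, finish - neg_q)  # completion + tail
        heapq.heappush(machines, finish)
    return makespan
```

On a single machine with jobs (release, processing, tail) = (0, 3, 7), (0, 2, 5), (1, 4, 0), the rule runs the tail-7 job first and yields a makespan of 10.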