Solving the Forward-Reserve Allocation Problem
in Warehouse Order Picking Systems
Jinxiang Gu1, Marc Goetschalckx2, Leon F. McGinnis3
Nestle USA, 800 N. Brand Blvd, Glendale, CA 91203
School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA 30332
Abstract
Many warehouses store at least some goods in two areas, a reserve area that is efficient for
storage and a forward area that is efficient for order picking. The forward-reserve allocation
problem determines the set of Stock-Keeping Units (SKUs) and their space allocations in the
forward area to maximize the forward area’s benefit by trading off the relevant costs of order
picking and internal replenishment. The mathematical model of this decision resembles the
classical knapsack problem with the additional complexity that it has a discontinuous nonlinear
cost function. A simple greedy heuristic has been proposed in the literature to solve this
problem. This paper proposes an alternative branch-and-bound algorithm that can quickly solve
the problem to optimality. Heuristic and optimal solutions are numerically compared using
problem instances based on real warehouse data. Results suggest that the heuristic solutions are
very close to the optimal ones in terms of both the objective value and the forward assignment.
Keywords: Warehousing, Forward-reserve Warehouse, Order Picking, Branch and Bound
Algorithm.
1 Email: jinxiang.gu@us.nestle.com
2 Email: marc.goetschalckx@isye.gatech.edu
3 Email: leon.mcginnis@isye.gatech.edu
Version 6.6, last modified on 10 April 2008
1. Introduction
Warehousing has been a rich source of opportunities for applying operations research. One
of the earliest problems studied in location theory addressed warehousing (see e.g., Kaufman et
al. (1977), Baker (1982), and Kelly and Khumawala (1982)). Warehouse operations, such
as crossdocking (Li, et al. (2004)) and order picking from carousels (van den Berg (1996)),
have been studied, as have problems of warehouse capacity (Aghezzaf (2005)) and sizing
(Cormier and Gunn (1996)). In this paper, we address another important warehouse decision
related to the operation of forward-reserve storage and picking systems.
Conventional warehouses need enough storage area to accommodate the peak inventory of
all SKUs. However, it often is inefficient to pick orders directly from this storage area for two
primary reasons. First, the storage area may use high-density storage equipment such as high-
stacking deep-lane pallet racks to maximize space utilization, but these technologies do not allow
convenient or fast item access and extraction. Second, picking orders from a large area may
cause excessive unproductive travel between picking locations. Therefore, many warehouses use
a separate area, often called the forward area, for efficient order picking. The forward area is
designed to have a compact physical size and uses equipment such as gravity flow rack that
allows convenient and fast item access and extraction. The use of a forward area can improve
order picking efficiency but requires additional replenishing of the SKUs from the storage area to
the forward area. Furthermore, the storage capacity of the forward area must be limited to
preserve travel efficiency since it uses low-density storage equipment. As more SKUs are
assigned to the forward area, less space can be allocated to each SKU and consequently more
frequent replenishing must occur. Therefore, it is important to carefully determine which SKUs
should be assigned to the forward area and in what quantity to balance the tradeoffs between
order picking and replenishing so that the benefits of the forward area are maximized.
Hackman and Rosenblatt (1990) proposed a mathematical model for the forward-reserve
allocation problem. The model has a set of integer variables indicating whether or not a SKU is
assigned to the forward area and a set of continuous variables indicating how much space in the
forward area is allocated for each SKU assigned to the forward area. The objective is to
maximize the total benefit of the forward area, i.e., the total savings in order picking minus the
total replenishing cost. The model is similar to the classical knapsack problem with the
difference that it has a nonlinear objective function that is discontinuous at zero. Hackman and
Rosenblatt (1990) propose a greedy heuristic to solve the forward-reserve allocation problem
based on an index that ranks SKUs in terms of their desirability to be put in the forward area.
The forward-reserve allocation problem has been extended by various authors. Frazelle et
al. (1994) consider the situation where the size of the forward area is a decision variable. The
costs in their model include the equipment cost of the forward area and the material handling
cost for order-picking and internal replenishment. Their solution method is a direct extension of
the greedy heuristic by Hackman and Rosenblatt (1990). van den Berg et al. (1998) consider the
problem with unit-load replenishments, i.e., only one unit can be replenished per trip. They
assume the forward area can be replenished instantaneously, so there is no need to assign more
than one unit to the forward area. They consider warehouses that have busy and idle periods and
show that it is possible to reduce the number of replenishments during busy periods by
performing replenishments in the preceding idle periods. A knapsack-based heuristic is
developed to find the set of SKUs to put in the forward area that minimizes the expected total
labor for both order picking and replenishing during a busy period.
This paper provides an alternative algorithm for the forward-reserve allocation problem
that can find the guaranteed optimal solution efficiently. Extensive numerical experiments are
performed to evaluate how the heuristic solutions compare with the optimal ones in terms of both
the objective value and the forward assignment using problem instances based on real warehouse
data.
2. The forward-reserve allocation problem
This section gives a brief introduction to the forward-reserve allocation model and the
greedy heuristic proposed by Hackman and Rosenblatt (1990). The non-optimality of the
heuristic is shown by a small example. The following notation adopted from Hackman and
Rosenblatt (1990) will be used throughout the paper:
Parameters:
ei – “savings” per fulfillment request if item i is stored in the forward area
ci – cost per internal replenishment for item i
Ri – the number of fulfillment requests per unit time for item i
Di – the demand per unit time for item i converted into units of volume
N – number of items in the warehouse
V – the volume of the forward area
Variables:
zi – volume in the forward area allocated to item i
xi – binary decision variable determining if item i is assigned to the forward area
The forward-reserve allocation model is:

(P1)
$$\max \sum_{i=1}^{N} f_i(z_i)$$
$$\text{s.t.} \quad \sum_{i=1}^{N} z_i \le V, \qquad z_i \ge 0$$

where

$$f_i(z_i) = \begin{cases} e_i R_i - \dfrac{c_i D_i}{z_i} & \text{if } z_i > 0 \\ 0 & \text{if } z_i = 0 \end{cases}$$

To simplify notation, let $a_i = e_i R_i$ and $b_i = c_i D_i$; then:

$$f_i(z_i) = \begin{cases} a_i - \dfrac{b_i}{z_i} & \text{if } z_i > 0 \\ 0 & \text{if } z_i = 0 \end{cases}$$
Problem (P1) is similar to the classical knapsack problem with the additional difficulties
that the $f_i(z_i)$ terms in the objective function are nonlinear and discontinuous at zero and that the
resource consumption or “size” of each item is also a decision variable. Hackman and Rosenblatt
(1990) proposed an index, $L_i = a_i / b_i^{1/2}$, to measure an SKU’s desirability to be assigned to
the forward area and, based on the index, developed the following simple heuristic to solve the
problem:
Step 1: Sort the SKUs so that $L_i \ge L_{i+1}$ for all $i = 1, 2, \ldots, N-1$.

Step 2: For each ordered set $S_k = \{1, 2, \ldots, k\}$, where $1 \le k \le N$, solve problem (P1)
by assuming the forward area contains only the items in $S_k$. Note that problem (P1)
is easy to solve if the items in the forward area are known (see the discussion in the
next section).

Step 3: Select the set from all the ordered sets $S_k$ ($1 \le k \le N$) that has the maximum
objective value $v(S_k)$.
Steps 2 and 3 of the above algorithm require checking all N ordered subsets to find the one
that has the maximum value of $v(S_k)$. A more efficient implementation can be developed by
exploiting the fact that the function $v(S_k)$ is unimodal for $1 \le k \le N$ (see Proposition 1 in
Hackman and Rosenblatt (1990)), and therefore a bisection search on k will quickly find the
solution.
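To make the procedure concrete, here is a minimal Python sketch of the heuristic (illustrative names, not the authors' code). It assumes the ranking index $L_i = a_i / b_i^{1/2}$ and the closed-form value of (P1) for a fixed forward set, which is derived in Section 3.

```python
import math

def greedy_forward_reserve(a, b, V):
    """Greedy heuristic of Hackman and Rosenblatt (1990), sketched.

    a[i] = e_i * R_i (pick-savings rate), b[i] = c_i * D_i (replenishment
    cost rate), V = forward-area volume.  Returns (best value, chosen SKUs).
    """
    n = len(a)
    # Step 1: rank SKUs by the desirability index L_i = a_i / sqrt(b_i)
    order = sorted(range(n), key=lambda i: a[i] / math.sqrt(b[i]), reverse=True)

    def v(S):
        # Closed-form optimum of (P1) for a fixed forward set S (Section 3)
        if not S:
            return 0.0
        return sum(a[i] for i in S) - sum(math.sqrt(b[i]) for i in S) ** 2 / V

    # Steps 2-3: evaluate the nested sets S_k and keep the best; v(S_k) is
    # unimodal in k, so a bisection search on k would also work.
    best_val, best_set = 0.0, []
    for k in range(1, n + 1):
        if v(order[:k]) > best_val:
            best_val, best_set = v(order[:k]), order[:k]
    return best_val, sorted(best_set)
```

On the 3-SKU example discussed next, this sketch returns a value of about 91 with the first two ranked SKUs chosen.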
Bartholdi and Hackman (2005) show that the above heuristic will produce a solution that is
no farther from optimum than the net benefit of a single SKU. However, this gives no
predetermined performance bound, and the actual optimality gap may be large, as shown by the
following small example. The problem has 3 SKUs to be considered for forward storage. The
saving per pick is $1 if a SKU is stored in the forward area, and the cost per internal
replenishment is $40. The numbers of picks per unit time for the three SKUs (SKU1, SKU2,
SKU3) are 86, 644, and 245, respectively, and the demands per unit time for the three SKUs
(SKU1, SKU2, SKU3) are 122.8, 10449, and 1513.8 cubic feet, respectively. The size of the
forward area is 804 cubic feet. It can be verified that the heuristic will produce a solution with
an objective value of 91 with SKU1 and SKU2 in the forward area, while the optimal solution
has an objective value of 207 with SKU1 and SKU3 in the forward area. So there is a 56%
optimality gap between the heuristic and optimal solutions. It is to be expected that the
optimality gap will become smaller as the number of SKUs increases. One objective of this paper
is to evaluate how close the heuristic solutions are to the optimal ones, in terms of both the
objective value and the assignment of SKUs, in a practical setting.
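The figures above can be reproduced by brute-force enumeration over all subsets, again using the closed-form value of (P1) for a fixed forward set derived in Section 3 (an illustrative check, not the authors' code):

```python
import math
from itertools import combinations

# Data from the example: savings e = $1/pick, replenishment cost c = $40/trip
R = [86, 644, 245]            # picks per unit time
D = [122.8, 10449, 1513.8]    # demand per unit time (cubic feet)
V = 804                       # forward-area size (cubic feet)
a = [1 * r for r in R]        # a_i = e_i * R_i
b = [40 * d for d in D]       # b_i = c_i * D_i

def v(S):
    """Value of (P1) for a fixed forward set S (closed form, Section 3)."""
    if not S:
        return 0.0
    return sum(a[i] for i in S) - sum(math.sqrt(b[i]) for i in S) ** 2 / V

# Optimal solution by enumerating all 2^3 subsets
subsets = [c for k in range(4) for c in combinations(range(3), k)]
opt_set = max(subsets, key=v)
print(opt_set, round(v(opt_set)))   # (0, 2): SKU1 and SKU3, value 207
print(round(v((0, 1))))             # heuristic's choice SKU1 and SKU2, value 91
```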
3. An optimal branch-and-bound algorithm based on outer approximation
In this section we develop an alternative algorithm to find the optimal solution for the
forward-reserve allocation problem. For given $x \in B^N$, the forward-reserve allocation problem
reduces to determining the space allocation in the forward area for those items with $x_i = 1$. If we
let $X^+ = \{i : x_i = 1\}$, the sub-problem for a fixed $x \in B^N$ is:

$$v(x) = \max_z \sum_{i \in X^+} \left( a_i - \frac{b_i}{z_i} \right)$$
$$\text{s.t.} \quad \sum_{i \in X^+} z_i \le V, \qquad z_i \ge 0, \ \forall i \in X^+$$
Since the objective of this sub-problem is concave in $z$, its optimal value can be determined from
the Lagrangian dual, i.e.,

$$v(x) = \min_{u \ge 0} \max_{z \ge 0} \left( \sum_{i \in X^+} \left( a_i - \frac{b_i}{z_i} \right) + u \left( V - \sum_{i \in X^+} z_i \right) \right)$$

The Lagrangian dual can be solved analytically by setting the first derivatives with respect
to $z$ and $u$ equal to zero, and we obtain:

$$z_i^* = \frac{b_i^{1/2}}{\sum_{k \in X^+} b_k^{1/2}} \, V$$

$$v(x) = \sum_{i \in X^+} a_i - \frac{\left( \sum_{i \in X^+} b_i^{1/2} \right)^2}{V}$$

This result can be written equivalently as:

$$v(x) = \sum_{i=1}^{N} a_i x_i - \frac{\left( \sum_{i=1}^{N} x_i b_i^{1/2} \right)^2}{V}$$
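In code, this closed-form solution can be sketched as follows (illustrative, with hypothetical function names): each selected SKU receives forward space proportional to $b_i^{1/2}$.

```python
import math

def allocate(a, b, V, x):
    """Closed-form forward-space allocation for a fixed binary assignment x.

    Each selected SKU gets space proportional to sqrt(b_i); the value is
    v(x) = sum(a_i x_i) - (sum(x_i sqrt(b_i)))**2 / V.
    """
    sel = [i for i, xi in enumerate(x) if xi == 1]
    if not sel:
        return [0.0] * len(x), 0.0
    total = sum(math.sqrt(b[i]) for i in sel)
    z = [math.sqrt(b[i]) / total * V if x[i] == 1 else 0.0 for i in range(len(x))]
    v = sum(a[i] for i in sel) - total ** 2 / V
    return z, v
```

The allocation uses the whole forward volume (the budget constraint is tight), and substituting z back into the original $f_i$ terms reproduces v(x).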
Therefore, the original problem (P1) becomes:

$$\max_{x \in B^N} v(x)$$

which is a binary nonlinear problem, and a branch-and-bound algorithm based on outer
approximation can be developed to solve it (see also Ryoo and Sahinidis (1996)). First, the above
problem can be restated in the following equivalent form:
(PM)
$$\max \sum_{i=1}^{N} a_i x_i - \frac{w_1}{V}$$
$$\text{s.t.} \quad w_1 = w_2^2$$
$$w_2 = \sum_{i=1}^{N} x_i b_i^{1/2}$$
$$x_i \in \{0, 1\}, \ \forall i$$
A linear relaxation of this problem can be developed by relaxing the nonlinear constraint
$w_1 = w_2^2$, as discussed in the following. Suppose the variable $w_2$ has a lower bound $w_2^L$
and an upper bound $w_2^U$. For example, we can take 0 and $\sum_{i=1}^{N} b_i^{1/2}$ as the respective lower
and upper bounds in our case. A linear relaxation of $w_1 = w_2^2$ for $w_2 \in [w_2^L, w_2^U]$ is
represented by the following set of inequalities:

$$w_1 \ge 2 w_2^U w_2 - (w_2^U)^2$$
$$w_1 \ge 2 w_2^L w_2 - (w_2^L)^2$$
$$w_1 \le (w_2^L + w_2^U) w_2 - w_2^L w_2^U$$

Given that $w_1 = w_2^2$ is a convex function, the first two inequalities are generated by the tangential
support lines at the two endpoints of the interval, and the third inequality is generated by the line
segment connecting the endpoints. A relaxation of PM for $w_2 \in [w_2^L, w_2^U]$ can be represented by
the following mixed integer problem PR. Figure 1 illustrates the linear relaxation of $w_1 = w_2^2$
for $w_2 \in [w_2^L, w_2^U]$, where the shaded area is the relaxed region bounded by the three linear
constraints, i.e., PR.3, PR.4, and PR.5.
(PR)
$$\max \sum_{i=1}^{N} a_i x_i - \frac{w_1}{V} \qquad \text{(PR.1)}$$
$$\text{s.t.} \quad w_2 = \sum_{i=1}^{N} x_i b_i^{1/2} \qquad \text{(PR.2)}$$
$$w_1 \ge 2 w_2^U w_2 - (w_2^U)^2 \qquad \text{(PR.3)}$$
$$w_1 \ge 2 w_2^L w_2 - (w_2^L)^2 \qquad \text{(PR.4)}$$
$$w_1 \le (w_2^L + w_2^U) w_2 - w_2^L w_2^U \qquad \text{(PR.5)}$$
$$w_2^L \le w_2 \le w_2^U \qquad \text{(PR.6)}$$
$$x_i \in \{0, 1\}, \ \forall i \qquad \text{(PR.7)}$$
Insert Figure 1 here
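Constraints PR.3-PR.5 form the standard secant-tangent envelope of the parabola $w_1 = w_2^2$ on the interval; a quick numerical sanity check (illustrative only):

```python
def envelope(w2, wL, wU):
    """Outer approximation of w1 = w2**2 on [wL, wU] (constraints PR.3-PR.5)."""
    lower = max(2 * wU * w2 - wU ** 2,   # tangent at the upper endpoint (PR.3)
                2 * wL * w2 - wL ** 2)   # tangent at the lower endpoint (PR.4)
    upper = (wL + wU) * w2 - wL * wU     # secant through the endpoints (PR.5)
    return lower, upper

# The parabola lies inside the envelope on the interval, and the
# approximation is exact (lower == upper == w2**2) at the two endpoints.
for w2 in [0.0, 2.5, 5.0, 7.5, 10.0]:
    lo, hi = envelope(w2, 0.0, 10.0)
    assert lo <= w2 ** 2 <= hi
```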
The following proposition characterizes the optimal solution of PR, and provides a lower
and an upper bound for PM over $[w_2^L, w_2^U]$.

Proposition 1. If $(x', w_1', w_2')$ is an optimal solution of PR over the interval $[w_2^L, w_2^U]$, then it must
satisfy $w_1' = \max\left( 2 w_2^U w_2' - (w_2^U)^2, \; 2 w_2^L w_2' - (w_2^L)^2 \right)$. An upper and a lower bound of PM over the
specified interval are provided by $f(x', w_1', w_2')$ and $f(x', (w_2')^2, w_2')$ respectively, where $f$ is the
objective function of PR (and PM).
Proof: Let $c = \max\left( 2 w_2^U w_2' - (w_2^U)^2, \; 2 w_2^L w_2' - (w_2^L)^2 \right)$. If $(x', w_1', w_2')$ is an optimal solution of
PR, then $w_1' \ge c$ because it must satisfy PR.3 and PR.4. Suppose $w_1' > c$; it is easy to check that
$(x', c, w_2')$ is a feasible solution of PR, and $f(x', c, w_2') > f(x', w_1', w_2')$. Therefore, $(x', w_1', w_2')$
cannot be optimal, which is a contradiction.

Since $(x', w_1', w_2')$ is an optimal solution of PR, $f(x', w_1', w_2')$ provides an upper bound for
PM because PR relaxes PM. On the other hand, since $(x', w_1', w_2')$ is an optimal solution of PR, it
must satisfy PR.2 and PR.7. Therefore, $(x', (w_2')^2, w_2')$ is a feasible solution of PM, which
provides a lower bound $f(x', (w_2')^2, w_2')$ for PM. ∎
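Proposition 1 turns a relaxed PR solution directly into bounds on PM; a sketch (illustrative, where `ax_sum` stands for $\sum_i a_i x_i'$):

```python
def bounds_from_relaxed(ax_sum, w2p, wL, wU, V):
    """Bounds on PM from an optimal PR solution (x', w1', w2'), per Proposition 1.

    ax_sum = sum_i a_i x'_i and w2p = w2' = sum_i x'_i sqrt(b_i).
    """
    # w1' sits on the larger of the two tangent underestimators (PR.3/PR.4)
    w1p = max(2 * wU * w2p - wU ** 2, 2 * wL * w2p - wL ** 2)
    UB = ax_sum - w1p / V        # f(x', w1', w2'): valid since PR relaxes PM
    LB = ax_sum - w2p ** 2 / V   # f(x', w2'**2, w2') is feasible for PM
    return UB, LB
```

Since the tangents underestimate $(w_2')^2$ on the interval, UB >= LB, with equality when $w_2'$ lands on an endpoint.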
Two situations could arise for the optimal solution of PR over any interval $[w_2^L, w_2^U]$: either
$w_2' = w_2^L$ or $w_2' = w_2^U$, or $w_2^L < w_2' < w_2^U$. In the first case, the lower and upper bounds for PM over
$[w_2^L, w_2^U]$ are equal, since the relaxation is tight at the endpoints of the interval. In the second
case, the lower and upper bounds for PM over $[w_2^L, w_2^U]$ are not equal, and therefore the previous
relaxation needs to be further refined to provide a more precise approximation of PM. From
Proposition 1, $(w_1', w_2')$ always lies on the boundary defined by the two linear functions (i.e., PR.3
and PR.4) that underestimate the function $w_1 = w_2^2$. A better approximation can be constructed by
dividing $[w_2^L, w_2^U]$ into two sub-intervals $[w_2^L, w_2']$ and $[w_2', w_2^U]$, and developing outer
approximations on each of the sub-intervals (as illustrated by the shaded areas in Figure 2; note
that the previous solution $(w_1', w_2')$ is already cut off). Based on this idea, a branch-and-bound
procedure can be developed to solve PM optimally by recursively dividing the original interval of
$w_2$ into smaller sub-intervals to provide more accurate approximations of PM. At any iteration of
the branch-and-bound procedure, a list of sub-intervals is maintained that defines the current
approximation of PM. The algorithm terminates if the optimality gap is sufficiently small;
otherwise, one of the sub-intervals is selected and further divided to provide a refined
approximation of $w_1 = w_2^2$.
Insert Figure 2 here
Let $(x^*, w_1^*, w_2^*)$ be the optimal solution of PM. Denote by $\bigcup_{k \in I} [(w_2^L)_k, (w_2^U)_k]$ the current set
of sub-intervals in the $w_2$ space that contains $w_2^*$ and defines the current approximation of PM.
Let $UB_k$ and $LB_k$ be the local upper and lower bounds for PM over sub-interval $k$, and let $UB$ and $LB$
be the global upper and lower bounds. We have the following relations:

$$\max\{LB_k : k \in I\} \le f(x^*, w_1^*, w_2^*) \le \max\{UB_k : k \in I\}$$

The left-side inequality is due to the fact that each $LB_k$ corresponds to a feasible solution of
PM and therefore is less than or equal to the optimal value. The right-side inequality holds
because $w_2^* \in [(w_2^L)_k, (w_2^U)_k]$ for some $k \in I$ (in other words, there exists a $k$ for which this is
true), and then the optimal value is less than or equal to the relaxed optimal value of PM over
that sub-interval, i.e., $UB_k$. Therefore, the global upper and lower bounds are given by:

$$LB = \max\{LB_k : k \in I\} \quad \text{and} \quad UB = \max\{UB_k : k \in I\}$$
The branch-and-bound algorithm for solving PM is formally stated as follows. Define the set $I$
as a list of the sub-intervals in the $w_2$ space. Each sub-interval has a relaxed optimal solution and
the lower and upper bounds (i.e., $LB_k$ and $UB_k$) for PM over that sub-interval.

Branch-and-Bound Algorithm
1. Initialization: select the convergence tolerance parameter $\varepsilon > 0$; define the initial bounds
$[w_2^L, w_2^U]$ for $w_2$; solve PR over $[w_2^L, w_2^U]$ to obtain the initial global lower and upper
bounds $LB$ and $UB$; define the set $I$, which initially contains only $[w_2^L, w_2^U]$ with its
associated relaxed optimal solution and bounds.
2. Termination test: if $UB - LB \le \varepsilon$, then terminate; the solution that yields the current
global lower bound (i.e., the best feasible solution) is optimal.
3. Branch and bound: remove from $I$ the interval $[(w_2^L)_k, (w_2^U)_k]$ that has the maximum
upper bound (i.e., the interval that defines the current global upper bound); divide it
into two sub-intervals $[(w_2^L)_k, w_2']$ and $[w_2', (w_2^U)_k]$; solve PR over
the sub-intervals to obtain the relaxed optimal solutions as well as the lower and upper
bounds for PM over the corresponding sub-intervals, and insert the sub-intervals into $I$;
update $LB = \max\{LB_k : k \in I\}$ and $UB = \max\{UB_k : k \in I\}$; delete from $I$ all intervals that
satisfy $UB_k < LB$; go to Step 2.
In the above algorithm, if the termination criterion is not satisfied, we select an interval and
further divide it into two smaller intervals in the hope of finding better bounds. This explains why
the interval that has the maximum upper bound among all intervals currently in $I$ is selected as
the candidate for branching: it defines the current global upper bound ($UB = \max\{UB_k : k \in I\}$),
and by branching on it we hope to reduce the global upper bound. The following proposition
shows that the branch-and-bound algorithm will converge to the optimal solution in a finite
number of iterations.
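For illustration only, the whole procedure can be sketched in Python with PR solved by brute-force enumeration over $x$, which is workable only for small N; the authors solve PR with ILOG/CPLEX. All names here are hypothetical.

```python
import math

def solve_pr(a, sb, V, wL, wU):
    """Solve relaxation PR on [wL, wU] by enumerating binary x (small N only)."""
    best = None
    for mask in range(1 << len(a)):
        x = [(mask >> i) & 1 for i in range(len(a))]
        w2 = sum(xi * s for xi, s in zip(x, sb))
        if not (wL - 1e-9 <= w2 <= wU + 1e-9):
            continue  # violates PR.6
        # PR.3/PR.4: maximizing -w1/V pushes w1 down onto the larger tangent
        w1 = max(2 * wU * w2 - wU ** 2, 2 * wL * w2 - wL ** 2)
        ub = sum(xi * ai for xi, ai in zip(x, a)) - w1 / V       # relaxed value
        lb = sum(xi * ai for xi, ai in zip(x, a)) - w2 ** 2 / V  # feasible for PM
        if best is None or ub > best[0]:
            best = (ub, lb, x, w2)
    return best

def branch_and_bound(a, b, V, eps=1e-6):
    sb = [math.sqrt(bi) for bi in b]
    root = (0.0, sum(sb))
    sol = solve_pr(a, sb, V, *root)
    nodes = {root: sol}
    best_lb, best_x = sol[1], sol[2]
    while True:
        # the node with the maximum UB defines the global upper bound
        interval, (ub, lb, x, w2) = max(nodes.items(), key=lambda kv: kv[1][0])
        if ub - best_lb <= eps:
            return best_lb, best_x
        del nodes[interval]
        wL, wU = interval
        for child in ((wL, w2), (w2, wU)):   # branch at the relaxed w2'
            csol = solve_pr(a, sb, V, *child)
            if csol is not None:
                nodes[child] = csol
                if csol[1] > best_lb:
                    best_lb, best_x = csol[1], csol[2]
```

On the 3-SKU example of Section 2, this sketch recovers the optimal forward set {SKU1, SKU3} with value about 207.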
Proposition 2: The branch-and-bound algorithm will converge to the optimal solution after a
finite number of branchings on $w_2$.
Proof: In Step 3 of the branch-and-bound algorithm, an interval $[w_2^L, w_2^U]$ is selected and
branched into two sub-intervals $[w_2^L, w_2']$ and $[w_2', w_2^U]$, where $w_2'$ corresponds to the optimal
solution of PR over $[w_2^L, w_2^U]$. Therefore, it satisfies $w_2' = \sum_{i=1}^{N} x_i b_i^{1/2}$ for a certain $x \in B^N$ due
to constraint PR.2. Since $x$ is a discrete variable, there are only a finite number of possible values
for $w_2'$; in other words, the interval can only be branched into a finite number of sub-intervals
according to the algorithm.

If a sub-interval $[(w_2^L)_k, (w_2^U)_k]$ cannot be further branched, it means we cannot find a
relaxed optimal solution that satisfies $(w_2^L)_k < w_2' < (w_2^U)_k$. Therefore, the relaxed optimal
solution over $[(w_2^L)_k, (w_2^U)_k]$ must satisfy $w_2' = (w_2^L)_k$ or $w_2' = (w_2^U)_k$. Because the relaxation is tight
at the endpoints of the intervals, the lower and upper bounds on the interval $[(w_2^L)_k, (w_2^U)_k]$ are
equal (i.e., $UB_k = LB_k$).
So after a finite number of iterations, the algorithm will terminate either because the
optimality gap is sufficiently small or because all sub-intervals cannot be further branched. In the
latter case, the global upper bound ($UB = \max\{UB_k : k \in I\}$) and lower bound ($LB = \max\{LB_k : k \in I\}$)
must be equal, since $UB_k = LB_k$ for all $k$. Therefore, the branch-and-bound algorithm converges
to the optimal solution after a finite number of branchings. ∎
It is also possible to approximate the nonlinear function $w_1 = w_2^2$ over the interval $[w_2^L, w_2^U]$ by
dividing the complete interval a priori into $k$ sub-intervals. Note that those sub-intervals do not
have to be of equal length. This division yields $k-1$ internal boundary points plus the two original
boundary points. Given that $w_1$ is a convex function of $w_2$, this yields $k+1$ inequality constraints. This
approach has two disadvantages. First, the approximation error $\varepsilon$ is the same for all the sub-intervals.
If $\varepsilon$ is set to a small value, then a large number of supports will be generated in regions of the
interval that are far from the optimal solution. Second, the linear relaxation contains all the
inequality constraints simultaneously and thus a much larger formulation is created.
The outer approximation is different from the piecewise linearizations typically
implemented using special ordered set constraints of type two (see, e.g., de Farias, et al. (2000),
or Keha, et al. (2006)) in that it can provide at each iteration a relaxation of the original nonlinear
model and therefore an upper bound to the optimal solution.
4. Computational results
This section provides numerical results that demonstrate the computational performance of
the proposed algorithm and compare the heuristic and optimal solutions using a set of practical
examples.
4.1 Test Problems
Test problems used in the numerical experiments are generated based on two basic data
sets from real warehouses as provided by Bartholdi and Hackman (2005). The first data set (S1)
is from an office product warehouse and the second (S2) is from a tire warehouse. Table 1 shows
the summary statistics of these two data sets. It can be seen that these two data sets represent
quite different warehouse scenarios, as seen from the statistics of $L_i$, i.e., the ranking index
used by the heuristic algorithm to measure a SKU's desirability to be assigned to the forward
area. This difference is mainly due to the fact that the average pick size is much smaller
in the office product warehouse (e.g., staplers and clips) than in the tire warehouse (e.g.,
tires). For each scenario, samples are randomly generated with different sizes (i.e., N = 50, 100,
500, 1000, 5000, and 10000 SKUs) following the same distributions of $a_i$ and $b_i$ as in the basic data
set. The size of the forward area is set at three different levels (i.e., V1, V2, and V3) for each
scenario and each sample size, so that approximately 20%, 50%, and 80% of the SKUs are
assigned to the forward area in the optimal solution. In summary, there are 36 cases in total
(2×6×3) with different warehouse scenarios, numbers of SKUs, and/or sizes of
the forward area, and for each of them, 50 instances are randomly generated, for a total of 1800
test problems.
Insert Table 1 Here.
4.2 Computational efficiency of the optimal algorithm
The proposed algorithm is implemented in C, with calls to ILOG/CPLEX to solve the
relaxed problem PR. All tests are performed on a Sun 280R server with 2×900MHz UltraSparc-
III CPU and 2GB RAM. Table 2 shows the average and range of computation time for the
different test cases, with 50 randomly generated problem instances per test case. In general, the
algorithm is very efficient and in most cases can converge to the optimal solution within 60
seconds. The results in Table 2 also suggest that the computation time is much shorter for cases
with a larger forward area. A detailed look at the convergence history of the algorithm shows
that given all other factors are kept constant, increasing the size of the forward area usually
results in a smaller initial optimality gap, as illustrated in Figure 3. Figures 3(a) (i.e., the three
figures on the left) show the convergence history of the optimal algorithm for scenario 1 (S1)
with 5000 SKUs, and Figures 3(b) (i.e., the three figures on the right) for scenario 2 (S2) with
5000 SKUs. The two lines in each sub-figure represent the normalized upper and lower bounds
(i.e., the actual bounds divided by the corresponding optimal value). It can be seen that the
algorithm gives a very tight bound after the first iteration for both scenarios when the size of the
15
forward area is set at V3. Figure 3 also suggests that the algorithm can quickly locate a near
optimal solution within a few iterations. For example, the relative optimality gap is within 0.01%
of the optimal value after 5 iterations for all cases shown in Figure 3. Similar results exist in all
other tested cases as well.
Insert Table 2 and Figure 3 Here.
4.3 Comparing the optimal and heuristic solutions
The optimal algorithm not only provides an alternative method to optimally solve the
forward-reserve problem, but also allows us to evaluate the optimality of heuristics by comparing
the heuristic solutions with the optimal solutions for practical problems. Table 3 shows the
number of times that the heuristic objective value coincides with the optimum, within a
calculation precision of $\pm 10^{-3}$, for the 50 randomly generated instances of each tested case. The
results suggest that the heuristic can often find the optimal solution (within a precision
of $\pm 10^{-3}$). For instances where an optimal solution is not achieved, the actual optimality gap is
always very small. Table 4 shows the maximum relative optimality gap (i.e., the absolute gap
divided by the corresponding optimal value) over all instances in which the heuristic failed to find the
optimal solution. It can be seen that even for cases with 50 SKUs and a forward area size of V1
(i.e., approximately 10 SKUs assigned to the forward area), the relative optimality gap is
very small: less than 0.313% for the office product warehouse and less than 0.039% for the tire
warehouse. The relative gap becomes even smaller as the number of SKUs increases. Besides
comparing the objective values, we also compared the optimal and heuristic solutions in terms of
their forward assignments (i.e., the $x_i$). To do this, we use the difference index DI to measure
the similarity of two solutions, defined as the ratio of the number of SKUs that have
different assignments in the optimal and heuristic solutions (i.e., $x_i$ not equal in the two
solutions) to the total number of SKUs. The smaller the index value, the more
similar the two solutions. Table 5 shows the maximum DI over the 50 randomly generated
instances of each test case. Note that it is possible for two solutions to have the same objective value
but different forward assignments. The results suggest that the heuristic solution is very close to
the optimal one in terms of the forward assignment. Even for cases with 10000 SKUs, fewer
than 5 SKUs (10000 × 0.0005) differ between the optimal and heuristic
assignments. In summary, although the heuristic may produce a large gap in some small
instances, its solutions on larger practical problems are always very close to
the optimal ones in terms of both the objective value and the forward assignment. This demonstrates
that the ranking index $L_i$ is a very effective measure for selecting the set of SKUs to assign to the
forward area.
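As defined above, the difference index is simply the fraction of SKUs whose assignment differs between two solutions; a short sketch (illustrative name):

```python
def difference_index(x_opt, x_heur):
    """DI: fraction of SKUs assigned differently in two binary solutions."""
    assert len(x_opt) == len(x_heur)
    return sum(1 for o, h in zip(x_opt, x_heur) if o != h) / len(x_opt)

# Example: two of four SKUs differ, so DI = 0.5
print(difference_index([1, 1, 0, 0], [1, 0, 1, 0]))  # 0.5
```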
Insert Table 3, 4, and 5 Here.
5. Conclusions
This paper develops a branch-and-bound algorithm based on outer approximation to
optimally solve the forward-reserve allocation problem. The outer approximation provides at
every iteration a relaxation of the original nonlinear model and therefore an upper bound to the
optimal solution. This, combined with the lower bound obtained from a feasible solution, enables
us to use the branch and bound scheme to find a guaranteed optimal solution in a finite number
of iterations.
Computational results demonstrate that the proposed optimal algorithm, although requiring
more computational effort than the best heuristic algorithm, is fast enough to solve practical
problems. In our numerical experiments, the optimal solution can be found in less than 60
seconds for most of the realistically sized problem instances, and solution time appears to be
relatively insensitive to the number of SKUs. The heuristic solutions are compared with the
optimal solutions in terms of both the objective value and the forward assignment using problem
instances based on real warehouse data. The results verify that, although the greedy heuristic
might result in a large optimality gap in some small examples, when it is applied to practical
problems the solutions are so close to the optimum that the difference can be ignored from a
practical point of view.
References:
Aghezzaf E., 2005. Capacity planning and warehouse location in supply chains with uncertain
demands, Journal of the Operational Research Society, 56(4): 453.
Baker, B. M., 1982. Linear Relaxations of the Capacitated Warehouse Location Problem, Journal
of the Operational Research Society, 33(5): 475-479.
Bartholdi, J.J. and Hackman, S.T., 2005. Warehouse & Distribution Science (Version 0.50),
Available online: http://www.warehouse-science.com.
Cormier, G. and Gunn, E.A., 1996. Simple models and insights for warehouse sizing, Journal of
the Operational Research Society, 47(5): 690-696.
de Farias, I. R., Johnson, E.L., and Nemhauser, G.L., 2000. A generalized assignment problem
with special ordered sets: A polyhedral approach, Math. Programming, 89: 187-203.
Frazelle, E.H., et al., 1994. The forward-reserve problem. In: Ciriani, T.A. and Leachman, R.C.
(Eds.), Optimization in Industry 2, John Wiley & Sons Ltd., New York.
Hackman, S.T. and Rosenblatt, M.J., 1990. Allocating items to an automated storage and
retrieval system, IIE Transactions, 22(1), 7-14.
Kaufman, L., M. Van den Eede, and Hansen, P., 1977. Plant and Warehouse Location Problem,
Operational Research Quarterly, 28(3): 547-554.
Keha, A. B., de Farias, I. R., Jr.; and Nemhauser, G.L., 2006. A branch-and-cut algorithm
without binary variables for nonconvex piecewise linear optimization, Operations Research,
54(5): 847-858.
Kelly, D. and Khumawala, B.M., 1982. Capacitated Warehouse Location with Concave Costs,
Journal of the Operational Research Society, 33(9): 817-826.
Li, Y., Lim, A. and Rodrigues, B., 2004. Crossdocking--JIT scheduling with time windows,
Journal of the Operational Research Society, 55(12): 1342.
Ryoo, H.S. and Sahinidis, N.V. 1996. A branch-and-reduce approach to global optimization.
Journal of Global Optimization, 8, 107-138.
van den Berg, J. P., 1996. Multiple order pick sequencing in a carousel system: A solvable case
of the rural postman problem, Journal of the Operational Research Society, 47(12): 1504-1515.
van den Berg, J.P., et al., 1998. Forward-reserve allocation in a warehouse with unit-load
replenishments, European Journal of Operational Research, 111, 98-113.
Figure 1. Illustration of the outer-approximation
Figure 2. Illustration of the branch-and-bound procedure
Figure 3. Convergence of the optimal algorithm with 5000 SKUs
Note: The vertical axes are scaled differently in order to clearly show the gaps in
different cases.
Table 1. Summary statistics for the two basic data sets

          Mean     Median   StDev    Minimum  Maximum
S1  ai    18.85    14.5     12.88    1.5      55.5
    bi    17.45    10.56    18.43    0.28     90.1
    Li    5.219    4.74     2.517    0.873    11.3
S2  ai    27.521   15.309   30.49    0.945    186
    bi    927.7    470.4    1238.7   11.2     7661
    Li    0.88145  0.76421  0.41755  0.08929  2.28
Table 2. Computational time of the optimal algorithm in seconds; each cell gives the mean and, in parentheses, the (minimum, maximum) over 50 instances

         50 SKUs            100 SKUs           500 SKUs           1K SKUs             5K SKUs              10K SKUs
S1  V1   0.14 (0.05, 0.25)  0.34 (0.13, 0.56)  4.40 (1.99, 7.76)  8.33 (4.21, 16.54)  64.00 (25.1, 153.9)  135.68 (15.9, 314.1)
    V2   0.07 (0.03, 0.15)  0.12 (0.05, 0.23)  1.08 (0.56, 2)     2.03 (1.17, 3.23)   6.63 (1.88, 15.1)    10.3 (4.04, 37.76)
    V3   0.02 (0.01, 0.05)  0.04 (0.02, 0.07)  0.41 (0.26, 0.53)  0.95 (0.7, 1.26)    1.53 (0.99, 2.13)    3.49 (2.73, 4.79)
S2  V1   0.09 (0.05, 0.16)  0.18 (0.1, 0.29)   2.04 (1.13, 3.35)  3.83 (2.16, 6.96)   18.50 (6.57, 56.79)  31.01 (6.12, 91.14)
    V2   0.05 (0.01, 0.11)  0.10 (0.05, 0.17)  1.07 (0.56, 1.67)  1.65 (0.91, 2.64)   3.84 (1.43, 7.47)    4.71 (3.06, 9.99)
    V3   0.02 (0.01, 0.04)  0.03 (0.01, 0.06)  0.30 (0.21, 0.46)  0.71 (0.56, 0.99)   0.98 (0.72, 1.27)    2.32 (1.79, 3.15)
Table 3. Number of times (out of 50) that the heuristic solution is optimal

         50 SKUs  100 SKUs  500 SKUs  1K SKUs  5K SKUs  10K SKUs
S1  V1   44       40        44        40       45       42
    V2   47       49        50        42       41       43
    V3   50       50        46        47       38       50
S2  V1   48       47        46        47       49       43
    V2   48       50        50        44       47       43
    V3   49       50        50        44       37       39
Table 4. Maximum relative optimality gap (%)

         50 SKUs   100 SKUs  500 SKUs  1K SKUs   5K SKUs   10K SKUs
S1  V1   3.13E-01  2.37E-01  5.50E-03  1.10E-03  5.70E-06  3.32E-05
    V2   4.91E-02  1.20E-02  -         9.19E-06  1.84E-06  1.78E-06
    V3   -         -         5.13E-05  4.40E-06  1.11E-06  -
S2  V1   3.90E-02  5.86E-03  5.39E-04  5.07E-06  7.82E-07  2.81E-04
    V2   7.95E-03  -         -         3.52E-06  5.97E-07  2.53E-05
    V3   3.81E-04  -         -         3.78E-06  7.13E-07  3.68E-06
Table 5. Maximum DI of the optimal versus heuristic solutions

         50 SKUs  100 SKUs  500 SKUs  1K SKUs  5K SKUs  10K SKUs
S1  V1   0.04     0.02      0.004     0.002    0.0002   0.0001
    V2   0.02     0.01      0         0.005    0.0004   0.0002
    V3   0        0.02      0.004     0.001    0.0002   0
S2  V1   0.04     0.01      0.004     0.002    0.0006   0.0005
    V2   0.02     0         0.002     0.001    0.001    0.0004
    V3   0.04     0         0         0.002    0.0004   0.0004