*-*-*-*-*
INSTITUT NATIONAL
DE STATISTIQUE ET D’ECONOMIE APPLIQUEE
*-*-*-*-*
NATIONAL INSTITUTE
OF STATISTICS AND APPLIED ECONOMICS
Traveling Salesman Problem Solving by
Dynamic Programming & GVNS
By
Zakaria MEJDOUL
Naoual SMAILI
Abdessamad AOUISSI
INSEA, Rabat
Master of Research entitled M2SI
January 17, 2021
Academic year: 2020/2021
A dissertation presented in part fulfillment of the requirements of the Degree of Master of
Research entitled Information Systems and Intelligent Systems at the INSEA of Rabat
Summary
Abstract
Introduction
Travelling Salesman Problem with Dynamic Programming
Travelling Salesman Problem with GVNS
Comparison
Conclusion
Bibliography/Webography
Abstract
The Travelling Salesman Problem (TSP) is one of the most widely encountered
problems in the transportation industry, and its optimization would speed up
services and increase customer satisfaction. The problem involves finding the
shortest path that visits n specified locations, starting and ending at the
same place and visiting the other n-1 destinations exactly once. To the layman,
this might seem a relatively simple matter of connecting dots, but that could
not be further from the truth.

The TSP is one of the most significant problems in the history of applied
mathematics. In 1954, three operations researchers (Dantzig, Fulkerson, and
Johnson, the first group to really crack the problem) successfully solved a
TSP instance with 49 US cities to optimality.

Consequently, it is fair to say that the TSP has given birth to a great deal
of significant combinatorial optimization research, and has also helped us
recognize the difficulty of solving discrete problems accurately and precisely.

The TSP's wide applicability (school bus routes, home service calls) is one
contributor to its significance; the other is its difficulty. It is an NP-hard
combinatorial problem, and therefore no polynomial-time algorithm is known
that can solve all instances of the problem. Some instances are easy to state,
yet solving them to optimality may take practically forever. Consequently,
researchers have developed heuristic algorithms that provide strong, though
not necessarily optimal, solutions. In this document, we will show the why and
the how of two methods for the TSP.
Introduction
The travelling salesman problem was formulated by the mathematicians Karl
Menger and Hassler Whitney around 1930 [3]. The problem is that a travelling
salesman wants to visit a large number of cities, and his goal is to find the
shortest path that passes through all cities, visits each city only once, and
finally returns to the starting point. An example of this problem is shown in
the figure below. In part A of the figure, there are 40 points representing
cities; part B shows the optimal path the salesman should follow to visit all
of them. The TSP is an NP-hard problem [3]. The complexity of such problems is
exponential, which does not allow an acceptable execution time: obtaining
exact solutions may require far more time than the lifetime of the system.

Dynamic programming solutions are among the best methods for solving such
problems in terms of execution time, reducing the time complexity from
factorial to exponential order. The drawback of dynamic programming is its
memory consumption: for large problems, the system cannot meet its
requirements. The size and complexity of optimization problems like the
travelling salesman and other real-world problems have attracted the attention
of researchers towards meta-heuristic algorithms and local search methods
inspired by the social intelligence of living creatures [4]. Meta-heuristic
algorithms try to obtain reasonable results in an acceptable time while
consuming minimal memory.

Figure: An example of TSP. A) Location of 40 cities. B) Optimal path for the
travelling salesman.
Several algorithms have been proposed to solve the TSP. Most of these are
meta-heuristic algorithms that compute an approximate solution in a very short
time. The travelling salesman problem is very similar to vehicle routing, and
several algorithms proposed for the vehicle routing problem can also be used
to solve the TSP.

In this document we discuss the Travelling Salesman Problem (TSP), a very
famous NP-hard problem, and take a few attempts at solving it, using dynamic
programming and the metaheuristic GVNS, working through the corresponding
Python implementations.
Goal of the project:
Develop efficient programs for solving the TSP.
Travelling Salesman Problem with Dynamic
Programming
One of the most widely used methods in designing algorithms is dynamic
programming. Like divide-and-conquer, it works on sub-problems; unlike
divide-and-conquer, it pays off precisely when the sub-problems overlap, since
each sub-problem is then solved only once and its result reused. Dynamic
programming is widely used in optimization problems. The basic condition for
applying the method is known as the principle of optimality.

The principle of optimality requires that an optimal solution of the problem
contain optimal solutions of all its sub-problems. In other words, the problem
should be such that upon finding its optimal solution, the optimal solutions
of all sub-problems are also obtained. For example, in finding the shortest
path between two cities, the path between the origin and each node on the
optimal path is itself the shortest path between those two points [5].
In this document, this algorithm is used to solve the TSP. Considering the
principle of optimality and dynamic programming, which sub-problem is suitable
for this problem? Assume that we have started from city 1, that some cities
have already been visited, and that we are now in city j. The best next city
should be selected among those not visited previously.
For a subset of cities S ⊆ {1, 2, ..., n} containing city 1 and city j, let
C(S, j) denote the length of the shortest path that starts at city 1, visits
every city in S, and ends at city j. In this method, C(S, 1) = ∞ for |S| > 1,
because the path cannot have both started and ended at city 1. The
sub-problems are then combined to reach the main problem through the length
function in the following equation:

    C(S, j) = min over i in S \ {j} of { C(S \ {j}, i) + d(i, j) }

In this equation, C(S \ {j}, i) is the length of the shortest path from city 1
to city i that visits all cities of S \ {j}, and d(i, j) is the length of the
path between cities i and j.
This figure shows the pseudo-code of the dynamic programming algorithm for the
TSP. The number of sub-problems in this method is 2^n × n, and each
sub-problem can be solved in linear time. Therefore, the cost of this method
is O(n^2 × 2^n).
Python implementation
Parameters
global instance: Distance matrix of shape (n x n) with the (i, j) entry indicating the
distance from node i to j.
first_city: Used to define the source city where the problem begins.
from functools import lru_cache
from typing import Dict, List, Tuple

def solve_tsp_dynamic_programming(first_city) -> Tuple[List, int]:
    # Solve TSP to optimality with dynamic programming.
    # Get the initial set {0, 1, ..., n-1} as a frozenset, since
    # @lru_cache requires hashable arguments
    N = frozenset(range(0, len(instance)))
    N = N.difference({first_city - 1})
    memo: Dict[Tuple, int] = {}

    # Step 1: get minimum distance
    @lru_cache(maxsize=(len(instance) ** 2) * (2 ** len(instance)))
    def dist(ni, N: frozenset):
        if not N:
            return instance[ni][first_city - 1]
        # Store the costs in the form (nj, dist(nj, N))
        costs = [
            (nj, instance[ni][nj] + dist(nj, N.difference({nj})))
            for nj in N
        ]
        nmin, min_cost = min(costs, key=lambda x: x[1])
        memo[(ni, N)] = nmin
        return min_cost

    best_distance = dist(first_city - 1, N)

    # Step 2: get path with the minimum distance
    ni = first_city - 1
    solution = [first_city]
    while N:
        ni = memo[(ni, N)]
        solution.append(ni + 1)
        N = N.difference({ni})
    solution.append(first_city)
    return solution, best_distance
Returns
solution: The optimal tour as a list of city labels from 1 to n, starting and ending at first_city.
best_distance: The total distance the optimal permutation produces.
Notes
Algorithm: cost of the optimal path
Consider a TSP instance with 3 nodes: {0, 1, 2}.
Let dist(0, {1, 2}) be the distance from 0, visiting all nodes in {1, 2} and
going back to 0. This can be computed recursively as:
    dist(ni, N) = min over nj in N of ( c_{ni, nj} + dist(nj, N - {nj}) )
and
dist(ni, {}) = c_{ni, 0}
With starting point as dist(0, {1, 2, ..., tsp_size}).
The notation N - {nj} is the difference operator, meaning set N without node nj.
Algorithm: compute the optimal path
The previous process returns the distance of the optimal path. To find the
actual path, we need to store in memory the following key/value pairs:
memo[(ni, N)] = nj_min
With nj_min the node in N that provided the smallest value of dist(ni, N).
Then, the process goes backwards starting from
memo[(0, {1, 2, ..., tsp_size})].
In the previous example, suppose memo[(0, {1, 2})] = 1.
Then, look for memo[(1, {2})] = 2.
Then, since the next step would be memo[(2, {})], stop there.
The optimal path would be 0 -> 1 -> 2 -> 0.
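This recursion can be reproduced on a toy instance. The 4-city distance matrix below is an illustrative assumption (its values do not come from this report), and the helper mirrors the dist() function above:

```python
from functools import lru_cache

# Hypothetical symmetric 4-city distance matrix (an assumption for
# illustration; these values do not come from the report)
instance = [
    [0, 10, 15, 20],
    [10, 0, 35, 25],
    [15, 35, 0, 30],
    [20, 25, 30, 0],
]

@lru_cache(maxsize=None)
def dist(ni, N):
    # Shortest path starting at node ni, visiting every node in the
    # frozenset N, and returning to node 0
    if not N:
        return instance[ni][0]
    return min(instance[ni][nj] + dist(nj, N.difference({nj})) for nj in N)

best = dist(0, frozenset({1, 2, 3}))
print(best)  # 80, achieved by the tour 0 -> 1 -> 3 -> 2 -> 0
```

The frozenset argument keeps the subset hashable, which is what allows @lru_cache to memoize the 2^n × n sub-problems.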
Travelling Salesman Problem with GVNS
In order to define GVNS we first need to define VNS.

VNS (Variable Neighborhood Search) is an algorithm based on a local search
with different neighborhood structures (Doerner et al., 2007). These
structures define a set of modifications that can be applied to a solution in
order to create new solutions (Meignan et al., 2012). According to Mladenovic
and Hansen (1997), the basic algorithm can be described as a series of steps
that starts by defining the number and types of neighborhood structures to be
considered. Then a random solution is obtained (Figure 1). With that, for each
iteration and neighborhood structure, the algorithm executes a local search
procedure in order to find a solution better than the current one. If the
algorithm does not find a better solution after n iterations, or after
evaluating all neighbors, the structure is changed and the local search is run
again. This process repeats until all considered neighborhood structures have
been used. The algorithm finishes when a stopping criterion is reached, such
as a number of iterations or a processing-time limit.

When the local search used is the VND (Variable Neighborhood Descent) method,
we obtain the VNS variant called General Variable Neighborhood Search (GVNS).

In the next pages you will find the algorithm and the implementation of the
GVNS.
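Before the individual components are detailed, the overall GVNS loop described above (shaking, then VND, then the neighborhood-change step) can be sketched in Python. The names and signatures here are illustrative, not the report's actual implementation; f, shake, and vnd are placeholders supplied by the caller:

```python
import random
import time

def gvns(x, f, shake, vnd, k_max, t_max):
    # General VNS sketch: repeat shaking + VND local search until
    # t_max seconds have elapsed (f, shake, and vnd are assumed
    # caller-supplied callables, not the report's code)
    best = x
    t0 = time.time()
    while time.time() - t0 < t_max:
        k = 1
        while k <= k_max:
            x1 = shake(best, k)   # random point in the k-th neighborhood
            x2 = vnd(x1)          # VND local search
            if f(x2) < f(best):   # neighborhood change step
                best, k = x2, 1
            else:
                k += 1
    return best
```

As a toy check, minimizing f(x) = (x - 3)^2 over the integers with a random-step shake and a greedy unit-step descent as vnd converges to x = 3.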
General VNS
RVNS
The reduced VNS (RVNS) method is obtained if random points are selected from
Nk(x) and no descent is made. Rather, the values of these new points are
compared with that of the incumbent, and an update takes place in case of
improvement. We also assume that a stopping condition has been chosen, such as
the maximum CPU time allowed tmax or the maximum number of iterations between
two improvements. To simplify the description of the algorithms we always use
tmax below. Therefore, RVNS uses two parameters: tmax and kmax. It is
presented in Algorithm 1.
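A minimal Python sketch of RVNS under these assumptions; the `neighborhoods` argument (a list of functions, each returning one random neighbor) is an illustrative choice, not the report's code:

```python
import random
import time

def rvns(x, f, neighborhoods, t_max):
    # Reduced VNS sketch: draw a random point from N_k(x), no descent;
    # keep it only if it improves the incumbent. `neighborhoods` is an
    # assumed list of functions, each returning one random neighbor.
    t0 = time.time()
    while time.time() - t0 < t_max:
        k = 0
        while k < len(neighborhoods):
            x1 = neighborhoods[k](x)  # random point from N_k(x)
            if f(x1) < f(x):
                x, k = x1, 0          # improvement: restart from N_1
            else:
                k += 1                # no improvement: next structure
    return x
```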
Local search
The variable neighborhood descent (VND) method is obtained if the change of
neighborhoods is performed in a deterministic way. It is presented in
Algorithm 2, where the neighborhoods are denoted Nk, k = 1, ..., kmax.
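A sketch of the deterministic VND loop, under the illustrative assumption that each entry of `neighborhoods` returns the best neighbor of x in that structure:

```python
def vnd(x, f, neighborhoods):
    # Variable Neighborhood Descent sketch: deterministic change of
    # neighborhoods. Each entry of `neighborhoods` is assumed to
    # return the best neighbor of x in that structure.
    k = 0
    while k < len(neighborhoods):
        x1 = neighborhoods[k](x)  # best neighbor in N_k(x)
        if f(x1) < f(x):
            x, k = x1, 0          # improvement: back to the first structure
        else:
            k += 1                # no improvement: next structure
    return x
```

The loop terminates only when no structure can improve x, so the result is a local optimum with respect to all kmax neighborhoods at once.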
Real value of a solution
This function calculates the distance value for a path x.
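The corresponding code figure is not reproduced here; a plain sketch of such a cost function, assuming a global distance matrix `instance` (hypothetical values) and a path given as 1-based city indices, could be:

```python
# Hypothetical 3-city distance matrix (illustrative values only,
# not taken from the report)
instance = [
    [0, 2, 9],
    [2, 0, 6],
    [9, 6, 0],
]

def path_value(x):
    # Total length of path x, given as 1-based city indices,
    # e.g. [1, 2, 3, 1] is the closed tour 1 -> 2 -> 3 -> 1
    return sum(instance[x[i] - 1][x[i + 1] - 1] for i in range(len(x) - 1))

print(path_value([1, 2, 3, 1]))  # 2 + 6 + 9 = 17
```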
Neighborhood Structures
Three local search operators are considered for exploring different solutions:
1. NS_two_opt: the 2-Opt move breaks two arcs in the current solution and
reconnects them in a different way.
2. NS_swapping: this move swaps two nodes in the current route.
3. NS_insertion_before: this move removes node i from its current position in
the route and re-inserts it before a selected node b.
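The three moves can be sketched as list operations on a route; the function names follow the report, but the bodies here are illustrative reconstructions, not its actual code:

```python
def ns_two_opt(x, i, j):
    # 2-Opt: break two arcs by reversing the segment x[i..j]
    # (use interior positions so the fixed endpoints stay in place)
    return x[:i] + x[i:j + 1][::-1] + x[j + 1:]

def ns_swapping(x, i, j):
    # Swap the cities at positions i and j of the route
    y = x[:]
    y[i], y[j] = y[j], y[i]
    return y

def ns_insertion_before(x, i, b):
    # Remove the city at position i and re-insert it before the city
    # that sat at position b in the original route
    y = x[:]
    city = y.pop(i)
    y.insert(b if b < i else b - 1, city)
    return y
```

For example, on the route [1, 2, 3, 4, 5, 1], ns_two_opt with i=1, j=3 reverses the middle segment to give [1, 4, 3, 2, 5, 1].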
Neighborhood Change
Function NeighborhoodChange() compares the incumbent value f(x) with the new
value f(x') obtained from the kth neighborhood. If an improvement is obtained,
the new incumbent is updated and k is reset to its initial value. Otherwise,
the next neighborhood is considered.
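Under these rules, a sketch of the function (argument names are illustrative):

```python
def neighborhood_change(x, x1, k, f):
    # Accept x1 and reset k to 1 on improvement; otherwise keep the
    # incumbent and move on to the next neighborhood
    if f(x1) < f(x):
        return x1, 1
    return x, k + 1
```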
Shaking Methods
This diversification method randomly selects one of the predefined neighborhood
structures and applies it k times (1 < k < kmax, where kmax is the maximum number of
shaking iterations) in the current solution.
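An illustrative sketch, assuming the neighborhood structures are passed in as a list of single-move functions:

```python
import random

def shake(x, k, neighborhoods):
    # Pick one predefined neighborhood move at random and apply it
    # k times to the current solution (illustrative sketch)
    move = random.choice(neighborhoods)
    for _ in range(k):
        x = move(x)
    return x
```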
The neighborhood Nk(x) denotes the set of solutions in the kth neighborhood
of x.
Initialization
This function takes the value of the starting city and builds a random
initial path.
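An illustrative sketch, assuming cities are labeled 1..n and the tour is stored with the starting city repeated at the end, as in the dynamic programming implementation above:

```python
import random

def build_initial_path(first_city, n):
    # Random initial tour sketch: start and end at first_city (1-based)
    # and visit the remaining n-1 cities in shuffled order
    others = [c for c in range(1, n + 1) if c != first_city]
    random.shuffle(others)
    return [first_city] + others + [first_city]
```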
Comparison
We have three neighborhood structures, and the order of these structures has
an impact on the result, so we compare the six possible orderings under
different execution times.
N1: NS_swapping
N2: NS_insertion_before
N3: NS_two_opt
Computational Results:
Result for one minute:
                N1-N2-N3   N1-N3-N2   N2-N1-N3   N2-N3-N1   N3-N1-N2   N3-N2-N1   T(min)
Instance 1          2090       2085       2085       2085       2085       2085        1
Instance 2         33797      33744      33729      33729      33614      33551        1
Instance 3          5811       5811       5811       5811       5811       5811        1
Result for four minutes:
                N1-N2-N3   N1-N3-N2   N2-N1-N3   N2-N3-N1   N3-N1-N2   N3-N2-N1   T(min)
Instance 1          2085       2085       2085       2085       2085       2085        4
Instance 2         33729      33633      33797      33628      33551      33551        4
Instance 3          5811       5811       5811       5811       5811       5811        4
Result for eight minutes:
                N1-N2-N3   N1-N3-N2   N2-N1-N3   N2-N3-N1   N3-N1-N2   N3-N2-N1   T(min)
Instance 1          2085       2085       2085       2085       2085       2085        8
Instance 2         33729      33628      33809      33551      33551      33551        8
Instance 3          5811       5811       5811       5811       5811       5811        8
Note:
From the results of the three tables, we notice that the combination N3-N2-N1
gives the best results.
Conclusion
It is true that dynamic programming gives an exact solution, but it is limited
by memory capacity. To solve a problem with massive data, approximate
solutions are therefore the best option; in our case we used the metaheuristic
GVNS, which is not limited the way dynamic programming is, but we have no
guarantee that it returns the exact (optimal) solution.
Bibliography/Webography
[1] P. Hansen, N. Mladenovic, J. Brimberg, and J. A. Moreno Pérez. "Variable
Neighborhood Search."
[2] E. Damghanijazi and A. Mazidi. "Meta-Heuristic Approaches for Solving
Travelling Salesman Problem."
[3] D. L. Applegate, R. E. Bixby, V. Chvátal, and W. J. Cook. The Traveling
Salesman Problem: A Computational Study. Princeton University Press, 2011.
[4] I. Giagkiozis, R. C. Purshouse, and P. J. Fleming. "An overview of
population-based algorithms for multi-objective optimisation." International
Journal of Systems Science, Vol. 46, No. 9, pp. 1572-1599, 2015.
[5] V. L. De Matos, A. B. Philpott, and E. C. Finardi. "Improving the
performance of stochastic dual dynamic programming." Journal of Computational
and Applied Mathematics, Vol. 290, pp. 196-208, 2015.