Graph Optimization

A common thread that connects all the topics in this unit is the opportunity to optimize (maximize or minimize) some quantity that is associated with a graph. For instance, we may decide to select a path from one node to another based on the weight of the connections between those nodes. The weights refer to some aspect of the situation at hand, like distance, cost, or importance.

9.5 Graph Optimization

The common thread that connects all of the problems in this section is the desire to optimize (maximize or minimize) a quantity that is associated with a graph. We will concentrate most of our attention on two of these problems, the Traveling Salesman Problem and the Maximum Flow Problem. At the close of this section, we will discuss some other common optimization problems.


9.5.1 Weighted Graphs

Definition 9.5.1 Weighted Graph. A weighted graph, (V, E, w), is a graph (V, E) together with a weight function w : E → ℝ. If e ∈ E, w(e) is the weight on edge e.

As you will see in our examples, w(e) is often a cost associated with the edge e; therefore, most weights will be positive.


Example 9.5.2 A Distance Graph. Let V be the set of six capital cities in New England: Boston, Augusta, Hartford, Providence, Concord, and Montpelier. Let E be {{a, b} | a, b ∈ V, a ≠ b}; that is, (V, E) is a complete unordered graph. An example of a weight function on this graph is w(c_1, c_2) = the distance, in miles, from c_1 to c_2.

Many road maps define distance functions as in the following table.


Table 9.5.3 Distances between capital cities in New England

                Augusta   Boston   Concord   Hartford   Montpelier   Providence
Augusta, ME          --      165       148        266          190          208
Boston, MA          165       --        75        103          192           43
Concord, NH         148       75        --        142          117          109
Hartford, CT        266      103       142         --          204           70
Montpelier, VT      190      192       117        204           --          223
Providence, RI      208       43       109         70          223           --
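
To make Definition 9.5.1 and Example 9.5.2 concrete, the distance table can be stored as a weighted graph in a few lines of Python. This is only an illustrative sketch; the dictionary layout and the names distance and w are our own choices, not notation from the text.

# A sketch of the weighted graph (V, E, w) of Example 9.5.2, stored as a
# dictionary of dictionaries.  Each inner entry distance[a][b] is the distance
# in miles between capitals a and b, copied from Table 9.5.3.
distance = {
    "Augusta":    {"Boston": 165, "Concord": 148, "Hartford": 266, "Montpelier": 190, "Providence": 208},
    "Boston":     {"Augusta": 165, "Concord": 75,  "Hartford": 103, "Montpelier": 192, "Providence": 43},
    "Concord":    {"Augusta": 148, "Boston": 75,   "Hartford": 142, "Montpelier": 117, "Providence": 109},
    "Hartford":   {"Augusta": 266, "Boston": 103,  "Concord": 142,  "Montpelier": 204, "Providence": 70},
    "Montpelier": {"Augusta": 190, "Boston": 192,  "Concord": 117,  "Hartford": 204,   "Providence": 223},
    "Providence": {"Augusta": 208, "Boston": 43,   "Concord": 109,  "Hartford": 70,    "Montpelier": 223},
}

def w(c1, c2):
    """Weight function w : E -> R for the complete graph on the six capitals."""
    return distance[c1][c2]

print(w("Boston", "Providence"))   # 43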



9.5.2 The Traveling Salesman Problem

The Traveling Salesman Problem is, given a weighted graph, to find a circuit (e_1, e_2, \ldots, e_n) that visits every vertex at least once and minimizes the sum of the weights, \sum_{i=1}^{n} w(e_i). Any such circuit is called an optimal path.

Some statements of the Traveling Salesman Problem require that the circuit be Hamiltonian. In many applications, the graph in question will be complete and this restriction presents no problem. If the weight on each edge is constant, for example, w(e) = 1, then an optimal path would be any Hamiltonian circuit.


Example 9.5.4 The problem of a Boston salesman. The Traveling Salesman Problem gets its name from the situation of a salesman who wants to minimize the number of miles that he travels in visiting his customers. For example, if a salesman from Boston must visit the other capital cities of New England, then the problem is to find a circuit in the weighted graph of Example 9.5.2. Note that distance and cost are clearly related in this case. In addition, tolls and traffic congestion might also be taken into account.

The search for an efficient algorithm that solves the Traveling Salesman Problem has occupied researchers for years. If the graph in question is complete, there are (n − 1)! different circuits. As n gets large, it is impossible to check every possible circuit. The most efficient algorithms for solving the Traveling Salesman Problem take an amount of time that is proportional to n 2^n. Since this quantity grows so quickly, we can't expect to have the time to solve the Traveling Salesman Problem for large values of n. Most of the useful algorithms that have been developed have to be heuristic; that is, they find a circuit that should be close to the optimal one. One such algorithm is the "closest neighbor" algorithm, one of the earliest attempts at solving the Traveling Salesman Problem. The general idea behind this algorithm is, starting at any vertex, to visit the closest neighbor to the starting point. At each vertex, the next vertex that is visited is the closest one that has not yet been reached. This shortsighted approach typifies heuristic algorithms called greedy algorithms, which attempt to solve a minimization (maximization) problem by minimizing (maximizing) the quantity associated with only the first step.


Algorithm 9.5.5 The Closest Neighbor Algorithm. Let G = (V, E, w) be a complete weighted graph with |V| = n. The closest neighbor circuit through G starting at v_1 is (v_1, v_2, \ldots, v_n), defined by the steps:

(1) V_1 = V − {v_1}.

(2) For k = 2 to n − 1

(a) v_k = the closest vertex in V_{k-1} to v_{k-1}:

w(v_{k-1}, v_k) = \min\{w(v_{k-1}, v) \mid v \in V_{k-1}\}

In case of a tie for closest, v_k may be chosen arbitrarily.

(b) V_k = V_{k-1} − {v_k}

(3) v_n = the only element of V_{n-1}

The cost of the closest neighbor circuit is \sum_{k=1}^{n-1} w(v_k, v_{k+1}) + w(v_n, v_1)
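
Below is a minimal Python sketch of Algorithm 9.5.5, assuming the complete weighted graph is stored as a dictionary of dictionaries like the distance table above; the function name and return value are illustrative choices, not part of the text.

def closest_neighbor_circuit(weights, start):
    unvisited = set(weights) - {start}              # V_1 = V - {v_1}
    circuit = [start]
    while unvisited:                                # choose v_2, ..., v_n in turn
        here = circuit[-1]
        nxt = min(unvisited, key=lambda v: weights[here][v])   # closest unvisited vertex
        circuit.append(nxt)
        unvisited.remove(nxt)                       # V_k = V_{k-1} - {v_k}
    cost = sum(weights[circuit[k]][circuit[k + 1]] for k in range(len(circuit) - 1))
    cost += weights[circuit[-1]][start]             # close the circuit back to v_1
    return circuit, cost

For example, closest_neighbor_circuit(distance, "Boston") visits Providence, Hartford, Concord, Montpelier, and Augusta in that order and reports a total of 727 miles for the round trip.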


Example 9.5.6 A small example. The closest neighbor circuit starting at vertex 1 in Figure 9.5.7 is (1, 3, 2, 4, 1), with a cost of 29. The optimal path is (1, 2, 3, 4, 1), with a cost of 27.


Figure 9.5.7 A small example

Although the closest neighbor circuit is often not optimal, we may be satisfied if it is close to optimal. If C_{opt} and C_{cn} are the costs of optimal and closest neighbor circuits in a graph, then it is always the case that C_{opt} ≤ C_{cn}, or \frac{C_{cn}}{C_{opt}} \geq 1. We can assess how good the closest neighbor algorithm is by determining how small the quantity \frac{C_{cn}}{C_{opt}} gets. If it is always near 1, then the algorithm is good. However, if there are graphs for which it is large, then the algorithm may be discarded. Note that in Example 9.5.6, \frac{C_{cn}}{C_{opt}} = \frac{29}{27} \approx 1.074. A 7 percent increase in cost may or may not be considered significant, depending on the situation.


Example 9.5.8 The One-way Street. A salesman must make stops at vertices A, B, and C, which are all on the same one-way street. The graph in Figure 9.5.9 is weighted by the function w(i, j) equal to the time it takes to drive from vertex i to vertex j.

Figure 9.5.9 Traveling a one-way street

Note that if j is down the one-way street from i, then w(i, j) < w(j, i). The values of C_{opt} and C_{cn} are 20 and 32, respectively. Verify that C_{cn} is 32 by using the closest neighbor algorithm. The value of \frac{C_{cn}}{C_{opt}} = 1.6 is significant in this case, since our salesman would spend 60 percent more time on the road if he used the closest neighbor algorithm.

A more general result relating to the closest neighbor algorithm presumes that the graph in question is complete and that the weight function satisfies the conditions

  • w(x, y) = w(y, x) for all x, y in the vertex set, and
  • w(x, y) + w(y, z) ≥ w(x, z) for all x, y, z in the vertex set.

The first condition is called the symmetry condition and the second is the triangle inequality.


Theorem 9.5.10 If (V, E, w) is a complete weighted graph that satisfies the symmetry and triangle inequality conditions, then

\frac{C_{cn}}{C_{opt}} \leq \frac{\left\lceil \log_2(2n) \right\rceil}{2}


Observation 9.5.11 If |V| = 8, then this theorem says that C_{cn} can be no larger than twice the size of C_{opt}; however, it doesn't say that the closest neighbor circuit will necessarily be that far from an optimal circuit. The quantity \frac{\left\lceil \log_2(2n) \right\rceil}{2} is called an upper bound for the ratio \frac{C_{cn}}{C_{opt}}. It tells us only that things can't be any worse than the upper bound. Certainly, there are many graphs with eight vertices such that the optimal and closest neighbor circuits are the same. What is left unstated in this theorem is whether there are graphs for which the ratio actually equals the upper bound. If there are such graphs, we say that the upper bound is sharp.

The value of \frac{C_{cn}}{C_{opt}} in Example 9.5.8 is 1.6, which is greater than \frac{\left\lceil \log_2(2 \cdot 4) \right\rceil}{2} = 1.5; however, the weight function in this example does not satisfy the conditions of the theorem.


Example 9.5.12 The Unit Square Problem. Suppose a robot is programmed to weld joints on square metal plates. Each plate must be welded at prescribed points on the square. To minimize the time it takes to complete the job, the total distance that a robot's arm moves should be minimized. Let d(P, Q) be the distance between P and Q. Assume that before each plate can be welded, the arm must be positioned at a certain point P0. Given a list of n points, we want to put them in order so that

d(P_0, P_1) + d(P_1, P_2) + \cdots + d(P_{n-1}, P_n) + d(P_n, P_0)

is as small as possible.

The type of problem that is outlined in the example above is of such importance that it is one of the most studied versions of the Traveling Salesman Problem. What follows is the usual statement of the problem. Let [0, 1] = {x ∈ ℝ | 0 ≤ x ≤ 1}, and let S = [0, 1]^2, the unit square. Given n pairs of real numbers (x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n) in S that represent the n vertices of a K_n, find a circuit of the graph that minimizes the sum of the distances traveled in traversing the circuit.

Since the problem calls for a circuit, it doesn't matter which vertex we start at; assume that we will start at (x_1, y_1). Once the problem is solved, we can always change our starting position. A function can most efficiently describe a circuit in this problem. Every bijection f : {1, \ldots, n} → {1, \ldots, n} with f(1) = 1 describes a circuit

(x_1, y_1), (x_{f(2)}, y_{f(2)}), \ldots, (x_{f(n)}, y_{f(n)})

There are (n − 1)! such bijections. Since a circuit and its reversal have the same associated cost, there are \frac{(n-1)!}{2} cases to consider. An examination of all possible cases is not feasible for large values of n.
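
To make the size of the search space concrete, here is a brute-force sketch that fixes the starting point and examines every ordering of the remaining points; the function name is ours, and the loop runs through all (n − 1)! orderings (each circuit and its reversal are both generated), so it is only practical for very small n.

from itertools import permutations
from math import dist   # Euclidean distance between two points (Python 3.8+)

def optimal_circuit(points):
    # Fix the starting point and try every ordering of the remaining points.
    first, rest = points[0], points[1:]
    best_tour, best_cost = None, float("inf")
    for perm in permutations(rest):
        tour = [first, *perm, first]
        cost = sum(dist(tour[i], tour[i + 1]) for i in range(len(tour) - 1))
        if cost < best_cost:
            best_tour, best_cost = tour, cost
    return best_tour, best_cost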

One popular heuristic algorithm is the strip algorithm:


Heuristic 9.5.13 The Strip Algorithm. Given n points in the unit square:

Phase 1:

(1) Divide the square into \left\lceil \sqrt{n/2} \right\rceil vertical strips, as in Figure 9.5.14. Let d be the width of each strip. If a point lies on a boundary between two strips, consider it part of the left-hand strip.

(2) Starting from the left, find the first strip that contains one of the points. Locate the starting point by selecting the first point that is encountered in that strip as you travel from bottom to top. We will assume that the first point is (x_1, y_1).

(3) Alternate traveling up and down the strips that contain vertices until all of the vertices have been reached.

(4) Return to the starting point.

Phase 2:

(1) Shift all strips d/2 units to the right (creating a small strip on the left).

(2) Repeat Steps 1.2 through 1.4 of Phase 1 with the new strips.

When the two phases are complete, choose the shorter of the two circuits obtained.


Figure 9.5.14 The Strip Algorithm

Step 3 of Phase 1 may need a bit more explanation. How do you travel up or down a strip? In most cases, the vertices in a strip will be vertically distributed so that the order in which they are visited is obvious. In some cases, however, the order might not be clear, as in the third strip in Phase 1 of Figure 9.5.14. Within a strip, the order in which you visit the points (if you are going up the strip) is determined as follows: (x_i, y_i) precedes (x_j, y_j) if y_i < y_j or if y_i = y_j and x_i < x_j. In traveling down a strip, replace y_i < y_j with y_i > y_j.
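
The sketch below implements one phase of the strip algorithm under the ordering rule just described. The helper name strip_tour, the second_phase flag, and the handling of boundary points are simplifications of our own, not details prescribed by the heuristic.

from math import ceil, sqrt, dist

def strip_tour(points, second_phase=False):
    n = len(points)
    k = ceil(sqrt(n / 2))                  # number of strips
    d = 1.0 / k                            # width of each strip
    offset = d / 2 if second_phase else 0.0
    strips = {}
    for (x, y) in points:                  # bucket each point by strip index
        strips.setdefault(int((x - offset) // d), []).append((x, y))
    tour, going_up = [], True
    for i in sorted(strips):               # sweep the nonempty strips left to right
        if going_up:                       # (x_i, y_i) precedes (x_j, y_j) if y_i < y_j,
            order = sorted(strips[i], key=lambda p: (p[1], p[0]))    # ties broken by x
        else:                              # going down: replace y_i < y_j with y_i > y_j
            order = sorted(strips[i], key=lambda p: (-p[1], p[0]))
        tour.extend(order)
        going_up = not going_up
    tour.append(tour[0])                   # return to the starting point
    length = sum(dist(tour[j], tour[j + 1]) for j in range(len(tour) - 1))
    return length, tour

# Run both phases and keep the shorter circuit:
# best_length, best_tour = min(strip_tour(pts), strip_tour(pts, second_phase=True))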

The selection of \left \lceil \sqrt{n/2} \right \rceil strips was made in a 1959 paper by Beardwood, Halton, and Hammersley. It balances the problems that arise if the number of strips is too small or too large. If the square is divided into too few strips, some strips may be packed with vertices so that visiting them would require excessive horizontal motion. If too many strips are used, excessive vertical motion tends to be the result. An update on what is known about this algorithm is contained in [39].

Since the construction of a circuit in the square consists of sorting the given points, it should come as no surprise that the strip algorithm requires an amount of time that is roughly a multiple of n log n when n points are to be visited.

The worst case that has been encountered with this algorithm is one in which the circuit obtained has a total distance of approximately \sqrt{2n} (see Sopowit et al.).


9.5.3 Networks and the Maximum Flow Problem

Definition 9.5.15 Network. A network is a simple weighted directed graph that contains two distinguished vertices called the source and the sink, with the properties that the indegree of the source and the outdegree of the sink are both zero, and the source is connected to the sink. The weight function on a network is the capacity function, which has positive weights.

An example of a real situation that can be represented by a network is a city's water system. A reservoir would be the source, while a distribution point in the city to all of the users would be the sink. The system of pumps and pipes that carries the water from source to sink makes up the remaining network. We can assume that the water that passes through a pipe in one minute is controlled by a pump and the maximum rate is determined by the size of the pipe and the strength of the pump. This maximum rate of flow through a pipe is called its capacity and is the information that the weight function of a network contains.


Example 9.5.16 A City Water System. Consider the system that is illustrated in Figure 9.5.17. The numbers that appear next to each pipe indicate the capacity of that pipe in thousands of gallons per minute. This map can be drawn in the form of a network, as in Figure 9.5.18.

 

Figure 9.5.17 City Water System

 

Figure 9.5.18 Flow Diagram for a City's Water Network

Although the material passing through this network is water, networks can also represent the flow of other materials, such as automobiles, electricity, bits, telephone calls, or patients in a health system.


Problem 9.5.19 The Maximum Flow Problem. The Maximum Flow Problem is derived from the objective of moving the maximum amount of water or other material from the source to the sink. To measure this amount, we define a flow as a function f : E → ℝ such that (1) the flow of material through any edge is nonnegative and no larger than its capacity: 0 ≤ f(e) ≤ w(e), for all e ∈ E; and (2) for each vertex other than the source and sink, the total amount of material that is directed into a vertex is equal to the total amount that is directed out:

\begin{align*} \sum_{(x,v) \in E} f(x,v) &= \sum_{(v,y) \in E} f(v,y) \\ \mathrm{Flow \; into \;}v &= \mathrm{Flow \; out \; of \;}v \end{align*}     (9.5.1)

The summation on the left of (9.5.1) represents the sum of the flows through each edge in E that has v as a terminal vertex. The right-hand side indicates that you should add all of the flows through edges that initiate at v.
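
Conditions (1) and (2) can be checked mechanically. The sketch below does so for a small hypothetical network; the edge names and capacities are invented for illustration and are not the water system of Figure 9.5.18.

# A small hypothetical network: edges are (tail, head) pairs mapped to capacities.
capacity = {("source", "a"): 4, ("source", "b"): 3,
            ("a", "b"): 2, ("a", "sink"): 3, ("b", "sink"): 5}

def is_flow(f, w):
    # Condition (1): 0 <= f(e) <= w(e) for every edge e
    if any(not (0 <= f[e] <= w[e]) for e in w):
        return False
    # Condition (2): flow into v equals flow out of v for every interior vertex v
    interior = {v for e in w for v in e} - {"source", "sink"}
    for v in interior:
        flow_in = sum(f[(x, y)] for (x, y) in w if y == v)
        flow_out = sum(f[(x, y)] for (x, y) in w if x == v)
        if flow_in != flow_out:
            return False
    return True

f = {("source", "a"): 3, ("source", "b"): 3, ("a", "b"): 0,
     ("a", "sink"): 3, ("b", "sink"): 3}
print(is_flow(f, capacity))        # True; the value of this flow is 6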


Theorem 9.5.20 Flow out of Source equals Flow in Sink. If f is a flow, then \sum_{(source,v) \in E} f(source,v) = \sum_{(v,sink) \in E} f(v,sink)

Proof. Subtract the right-hand side of (9.5.1) from the left-hand side. The result is:

Flow into v − Flow out of v = 0

Now sum up these differences for each vertex in V' = V − {source, sink}. The result is

\sum_{v \in V'} \left( \sum_{(x,v) \in E} f(x,v) - \sum_{(v,y) \in E} f(v,y) \right ) = 0       (9.5.2)

Now observe that if an edge connects two vertices in V', its flow appears as both a positive and a negative term in (9.5.2). This means that the only positive terms that are not cancelled out are the flows into the sink. In addition, the only negative terms that remain are the flows out of the source. Therefore,

\sum_{(v,sink) \in E} f(v,sink) - \sum_{(source,v) \in E} f(source,v) = 0


Definition 9.5.21 The Value of a Flow. The two values, flow into the sink and flow out of the source, were proved to be equal in Theorem 9.5.20, and this common value is called the value of the flow. It is denoted by V(f). The value of a flow represents the amount of material that passes through the network with that flow.

Since the Maximum Flow Problem consists of maximizing the amount of material that passes through a given network, it is equivalent to finding a flow with the largest possible value. Any such flow is called a maximal flow.

For the network in Figure 9.5.18, one flow is f_1, defined by f_1(e_1) = 25, f_1(e_2) = 20, f_1(e_3) = 0, f_1(e_4) = 25, and f_1(e_5) = 20. The value of f_1, V(f_1), is 45. Since the total flow into the sink can be no larger than 50 (w(e_4) + w(e_5) = 30 + 20), we can tell that f_1 is not very far from the solution. Can you improve on f_1 at all? The sum of the capacities into the sink can't always be obtained by a flow. The same is true for the sum of the capacities out of the source. In this case, the sum of the capacities out of the source is 60, which obviously can't be reached in this network.

A solution of the Maximum Flow Problem for this network is the maximal flow f_2, where f_2(e_1) = 25, f_2(e_2) = 25, f_2(e_3) = 5, f_2(e_4) = 30, and f_2(e_5) = 20, with V(f_2) = 50. This solution is not unique. In fact, there are infinitely many maximal flows for this problem.

There have been several algorithms developed to solve the Maximal Flow Problem. One of these is the Ford and Fulkerson Algorithm (FFA). The FFA consists of repeatedly finding paths in a network called flow augmenting paths until no improvement can be made in the flow that has been obtained.


Definition 9.5.22 Flow Augmenting Path. Given a flow f in a network (V, E), a flow augmenting path with respect to f is a simple path from the source to the sink using edges both in their forward and their reverse directions such that for each edge e in the path, w(e) − f(e) > 0 if e is used in its forward direction and f(e) > 0 if e is used in the reverse direction.


Example 9.5.23 Augmenting City Water Flow. For f_1 in Figure 9.5.18, a flow augmenting path would be (e_2, e_3, e_4) since w(e_2) − f_1(e_2) = 15, w(e_3) − f_1(e_3) = 5, and w(e_4) − f_1(e_4) = 5.

These positive differences represent unused capacities, and the smallest value represents the amount of flow that can be added to each edge in the path. Note that by adding 5 to each edge in our path, we obtain f2, which is maximal. If an edge with a positive flow is used in its reverse direction, it is contributing a movement of material that is counterproductive to the objective of maximizing flow. This is why the algorithm directs us to decrease the flow through that edge.


Algorithm 9.5.24 The Ford and Fulkerson Algorithm.

(1) Define the flow function f_0 by f_0(e) = 0 for each edge e ∈ E.

(2) i = 0.

(3) Repeat:

(a) If possible, find a flow augmenting path with respect to f_i.

(b) If a flow augmenting path exists, then:

(i) Determine

d = min({w(e) − f_i(e) | e is used in the forward direction} ∪ {f_i(e) | e is used in the reverse direction})

(ii) Define f_{i+1} by

f_{i+1}(e) = f_i(e) if e is not part of the flow augmenting path

f_{i+1}(e) = f_i(e) + d if e is used in the forward direction

f_{i+1}(e) = f_i(e) − d if e is used in the reverse direction

(iii) i = i + 1

until no flow augmenting path exists.

(4) Terminate with a maximal flow f_i


List 9.5.25 Notes on the Ford and Fulkerson Algorithm

  1. It should be clear that every flow augmenting path leads to a flow of increased value and that none of the capacities of the network can be violated.
  2. The depth-first search should be used to find flow augmenting paths since it is far more efficient than the breadth-first search in this situation. The depth-first search differs from the breadth-first algorithm in that you sequentially visit vertices until you reach a "dead end" and then backtrack.
  3. There have been networks discovered for which the FFA does not terminate in a finite number of steps. These examples all have irrational capacities. It has been proven that if all capacities are positive integers, the FFA terminates in a finite number of steps. See Ford and Fulkerson, Even, or Berge for details.
  4. When you use the FFA to solve the Maximum Flow Problem by hand it is convenient to label each edge of the network with the fraction \frac{f_{i}(e)}{w(e)}.


Algorithm 9.5.26 Depth-First Search for a Flow Augmenting Path. This is a depth-first search for the sink, initiating at the source. Let E' be the set of directed edges that can be used in producing a flow augmenting path. Add to the network a vertex called start and the edge (start, source).

(1) S = the vertex set of the network.

(2) p = source. Move p along the edge (start, source).

(3) while p is not equal to start or sink:

(a) if an edge in E' exists that takes you from p to another vertex in S:

then set p to be that next vertex and delete the edge from E'

else reassign p to be the vertex that p was reached from (i.e., backtrack).

(4) if p = start:

then no flow augmenting path exists.

else p = sink and you have found a flow augmenting path.
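
The FFA and the depth-first search for an augmenting path can be combined into a short program. The sketch below keeps a table of residual capacities, a standard implementation device that the text does not prescribe, and repeats the augmentation step until the search fails; the names are our own.

def max_flow(capacity, source="source", sink="sink"):
    # residual[x][y] = how much more material can currently be pushed from x to y
    residual = {}
    for (x, y), c in capacity.items():
        residual.setdefault(x, {})[y] = c
        residual.setdefault(y, {}).setdefault(x, 0)   # reverse direction starts at 0

    def augmenting_path():
        # depth-first search from source to sink through edges with positive residual capacity
        stack, seen = [(source, [source])], {source}
        while stack:
            v, path = stack.pop()
            if v == sink:
                return path
            for u, cap in residual[v].items():
                if cap > 0 and u not in seen:
                    seen.add(u)
                    stack.append((u, path + [u]))
        return None                                    # no flow augmenting path exists

    value = 0
    path = augmenting_path()
    while path is not None:
        edges = list(zip(path, path[1:]))
        d = min(residual[x][y] for x, y in edges)      # step 3(b)(i)
        for x, y in edges:                             # step 3(b)(ii)
            residual[x][y] -= d
            residual[y][x] += d
        value += d
        path = augmenting_path()
    return value                                       # V(f) for a maximal flow f

# Using the small hypothetical network from the earlier sketch:
# print(max_flow(capacity))    # 7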


Example 9.5.27 A flow augmenting path going against the flow. Consider the network in Figure 9.5.28, where the current flow, f , is indicated by a labeling of the edges.


Figure 9.5.28 Current Flow

The path (Source, v_2, v_1, v_3, Sink) is a flow augmenting path that allows us to increase the flow by one unit. Note that (v_1, v_3) is used in the reverse direction, which is allowed because f(v_1, v_3) > 0. The value of the new flow that we obtain is 8. This flow must be maximal since the capacities out of the source add up to 8. This maximal flow is defined by Figure 9.5.29.

 

Figure 9.5.29 Updated Flow


9.5.4 Other Graph Optimization Problems

  1. The Minimum Spanning Tree Problem: Given a weighted graph, (V, E, w), find a subset E' of E with the properties that (V, E') is connected and the sum of the weights of the edges in E' is as small as possible. We will discuss this problem in Chapter 10.
  2. The Minimum Matching Problem: Given an undirected weighted graph, (V, E, w), with an even number of vertices, pair up the vertices so that each pair is connected by an edge and the sum of the weights of these edges is as small as possible. A unit square version of this problem has been studied extensively. See [39] for details on what is known about this version of the problem.
  3. The Graph Center Problem: Given a connected, undirected, weighted graph, find a vertex (called a center) in the graph with the property that the distance from the center to every other vertex is as small as possible. "As small as possible" is normally interpreted as minimizing the maximum distance from the center to a vertex.
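
As an illustration of the Graph Center Problem, the sketch below computes all pairwise distances with the Floyd-Warshall algorithm (our choice of method; the text does not prescribe one) and returns a vertex whose maximum distance to the other vertices is smallest.

from math import inf

def graph_center(vertices, weight):
    # All-pairs shortest path distances by Floyd-Warshall: start from the edge
    # weights (infinity where there is no edge) and relax through each vertex k.
    dist = {u: {v: (0 if u == v else weight.get((u, v), inf)) for v in vertices}
            for u in vertices}
    for k in vertices:
        for u in vertices:
            for v in vertices:
                if dist[u][k] + dist[k][v] < dist[u][v]:
                    dist[u][v] = dist[u][k] + dist[k][v]
    # The eccentricity of u is its maximum distance to any other vertex;
    # a center is a vertex of minimum eccentricity.
    eccentricity = {u: max(dist[u][v] for v in vertices if v != u) for u in vertices}
    return min(eccentricity, key=eccentricity.get)

# Example on a small undirected graph (each edge listed in both directions):
V = ["a", "b", "c", "d"]
W = {("a", "b"): 1, ("b", "a"): 1, ("b", "c"): 1, ("c", "b"): 1,
     ("c", "d"): 3, ("d", "c"): 3}
print(graph_center(V, W))   # 'c', whose eccentricity is 3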



Source: Al Doerr and Ken Levasseur, http://faculty.uml.edu/klevasseur/ads-latex/ads.pdf
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License.
