Efficient Algorithms for Geometric Optimization†

Pankaj K. Agarwal‡        Micha Sharir§

April 1, 1998

Abstract

We review the recent progress in the design of efficient algorithms for various problems in geometric optimization. We present several techniques used to attack these problems, such as parametric searching, geometric alternatives to parametric searching, prune-and-search techniques for linear programming and related problems, and LP-type problems and their efficient solution. We then describe a wide range of applications of these and other techniques to numerous problems in geometric optimization, including facility location, proximity problems, statistical estimators and metrology, placement and intersection of polygons and polyhedra, and ray shooting and other query-type problems.
Both authors are supported by a grant from the U.S.-Israeli Binational Science Foundation. Pankaj Agarwal has also been supported by National Science Foundation Grant CCR-93-01259, by an Army Research Office MURI grant DAAH04-96-1-0013, by a Sloan fellowship, and by an NYI award and matching funds from Xerox Corp. Micha Sharir has also been supported by NSF Grants CCR-94-24398 and CCR-9311127, by a Max-Planck Research Award, and by a grant from the G.I.F., the German-Israeli Foundation for Scientific Research and Development.

†A preliminary version of this paper appeared as: P. K. Agarwal and M. Sharir, Algorithmic techniques for geometric optimization, in Computer Science Today: Recent Trends and Developments, Lecture Notes in Computer Science, vol. 1000 (J. van Leeuwen, ed.), 1995, pp. 234-253.

‡Department of Computer Science, Box 90129, Duke University, Durham, NC 27708-0129, USA.

§School of Mathematical Sciences, Tel Aviv University, Tel Aviv 69978, Israel; and Courant Institute of Mathematical Sciences, New York University, New York, NY 10012, USA.
1 Introduction

Combinatorial optimization deals with problems of maximizing or minimizing a function of one or more variables subject to a large number of inequality (and equality) constraints. Many problems can be formulated as combinatorial optimization problems, which has made this a very active area of research during the past half century. In many applications, the underlying optimization problem involves a constant number of variables and a large number of constraints that are induced by a given collection of geometric objects; we refer to such problems as geometric optimization problems. In such cases one expects that faster and simpler algorithms can be developed by exploiting the geometric nature of the problem. Much work has been done on geometric optimization problems during the last twenty years. Many new elegant and sophisticated techniques have been developed and successfully applied to a wide range of geometric optimization problems. The aim of this paper is to survey the main techniques and applications of this kind.

This survey consists of two parts. The first part describes several general techniques that have led to efficient algorithms for a variety of geometric optimization problems, the most notable of which is linear programming. The second part lists many geometric applications of these techniques and discusses some of them in more detail.

The first technique that we present is called parametric searching. Although restricted versions of parametric searching existed earlier (see, e.g., [112]), the full-scale technique was presented by Megiddo in the late 1970s and early 1980s [209, 210]. The technique was originally motivated by so-called parametric optimization problems in combinatorial optimization, and did not receive much attention from the computational geometry community until the late 1980s. In the last seven years, though, it has become one of the major techniques for solving geometric optimization problems efficiently. We outline the technique in detail in Section 2, first exemplifying it on the slope-selection problem [77], and then presenting various extensions of the technique.

Despite its power and versatility, parametric searching has certain drawbacks, which we discuss next. Consequently, there have been several recent attempts to replace parametric searching by alternative techniques, including randomization [15, 54, 73, 199], expander graphs [30, 176, 179, 178], geometric cuttings [18, 49], and matrix searching [124, 125, 126, 127, 131]. We present these alternative techniques in Section 3.

Almost concurrently with the development of the parametric-searching technique, Megiddo devised another ingenious technique for solving linear programming and several related optimization problems [211, 213]. This technique, now known as decimation or prune-and-search, was later refined and extended by Dyer [99], Clarkson [68], and others. The technique can be viewed as an optimized version of parametric searching, in which certain special properties of the problem allow one to improve further the efficiency of the algorithm. For example, this technique yields linear-time deterministic algorithms for linear programming
and for several related problems, including the smallest-enclosing-ball problem, when the dimension is fixed. (However, the dependence of the running time of these algorithms on the dimension is at least exponential.) We illustrate the technique in Section 4 by applying it to linear programming.

In the past decade, randomized algorithms have been developed for a variety of problems in computational geometry and in other fields; see, e.g., the books by Mulmuley [227] and by Motwani and Raghavan [226]. Clarkson [71] and Seidel [248] gave randomized algorithms for linear programming, whose expected time is linear in any fixed dimension, and which are much simpler than their earlier deterministic counterparts. The dependence of the running time of these algorithms on the dimension is better (though still exponential). Actually, Clarkson's technique is rather general, and is also applicable to a variety of other geometric optimization problems. We describe this technique in Section 5.

Another significant advance in linear programming was made about four years ago, when new randomized algorithms for linear programming were obtained independently by Kalai [174], and by Matousek et al. [207, 256] (these two algorithms are essentially dual versions of the same technique). The expected number of arithmetic operations performed by these algorithms is subexponential in the input size, and is still linear in any fixed dimension, so they constitute an important step toward the still open goal of obtaining strongly polynomial algorithms for linear programming. (Recall that the polynomial-time algorithms by Khachiyan [181] and by Karmarkar [175] are not strongly polynomial, as the number of arithmetic operations performed by these algorithms depends on the size of the coefficients of the input constraints.) This new technique is presented in Section 6. The algorithm in [207, 256] is actually formulated in a general abstract framework, which fits not only linear programming but many other problems. Such LP-type problems are also reviewed in Section 6, including the connection, recently noted by Amenta [35, 36], between abstract linear programming and Helly-type theorems.

In the second part of this paper, we survey many geometric applications of the techniques described in the first part. These applications include problems involving facility location (e.g., finding p congruent disks of smallest possible radius whose union covers a given planar point set), geometric proximity (e.g., computing the diameter of a point set in three dimensions), statistical estimators and metrology (e.g., computing the smallest-width annulus that contains a given planar point set), placement and intersection of polygons and polyhedra (e.g., finding the largest similar copy of a convex polygon that fits inside a given polygonal environment), and query-type problems (e.g., the ray-shooting problem, in which we want to preprocess a given collection of objects, so that the first object hit by a query ray can be determined efficiently). Numerous non-geometric optimization problems have also benefited from the techniques presented here (see [22, 74, 124, 141, 229] for a sample of such applications), but we will focus on geometric applications.
Although the common theme of most of the applications reviewed here is that they can be solved efficiently using parametric-searching, prune-and-search, LP-type, or related techniques, each of them requires a problem-specific, and often fairly sophisticated, approach. For example, the heart of a typical application of parametric searching is the design of efficient sequential and parallel algorithms for solving the appropriate problem-specific "decision procedure" (see below for details). We will provide details of these solutions for some of the problems, but will have to suppress them for most of the applications.
PART I: TECHNIQUES

The first part of the survey describes several techniques commonly used in geometric optimization problems. We describe each technique and illustrate it by giving an example.
2 Parametric Searching

We begin by outlining the parametric-searching technique, then illustrate it by applying it to an example, and finally discuss various extensions of parametric searching.
2.1 Outline of the technique

The parametric-searching technique of Megiddo [209, 210] can be described in the following general terms (which are not as general as possible, but suffice for our purposes). Suppose we have a decision problem P(λ) that depends on a real parameter λ and is monotone in λ, meaning that if P(λ_0) is true for some λ_0, then P(λ) is true for all λ < λ_0. Our goal is to find λ*, the maximum λ for which P(λ) is true, assuming such a maximum exists. Suppose further that P(λ) can be solved by a (sequential) algorithm A_s that takes λ and a set of data objects (independent of λ) as input, and that, as a by-product, A_s can also determine whether the given λ is equal to, smaller than, or larger than the desired value λ*. Assume, moreover, that the control flow of A_s is governed by comparisons, each of which amounts to testing the sign of some low-degree polynomial in λ.

Megiddo's technique runs A_s generically at the unknown optimum λ*, maintaining an open interval I that is known to contain λ*. Initially, I is the whole real line. Whenever A_s reaches a branching point that depends on a comparison with an associated polynomial p(λ), it computes all the roots λ_1, λ_2, ... of p and determines P(λ_i) at each root, by running the standard (non-generic) version of A_s with λ = λ_i. If one of the λ_i is equal to λ*, we stop, since we have found the value of λ*. Otherwise, we have determined the open interval
(λ_i, λ_{i+1}) that contains λ*. Since the sign of p(λ) remains the same for all λ ∈ (λ_i, λ_{i+1}), we can compute the sign of p(λ*) by evaluating p at any point of this interval, say at (λ_i + λ_{i+1})/2. The sign of p(λ*) also determines the outcome of the comparison at λ*. We now set I to I ∩ (λ_i, λ_{i+1}), and the execution of the generic A_s is resumed. As we proceed through this execution, each comparison that we resolve constrains further the range where λ* can lie, and we thus obtain a sequence of progressively smaller intervals, each known to contain λ*, until we either reach the end of A_s with a final interval I, or hit λ* at one of the comparisons of A_s. Since the value of P(λ) changes at λ*, the generic algorithm must at some point make a comparison whose associated polynomial vanishes at λ*, which then causes the overall algorithm to terminate with the desired value of λ*.

If A_s runs in time T_s and makes C_s comparisons, then, in general, the cost of the procedure just described is O(C_s T_s), which is generally quadratic in the original complexity. To speed up the execution, Megiddo proposes to implement the generic algorithm by a parallel algorithm A_p (under Valiant's comparison model of computation [266]; see below). If A_p uses P processors and runs in T_p parallel steps, then each parallel step involves at most P independent comparisons; that is, we do not need to know the outcome of any of these comparisons in order to execute the others in the same batch. We can then compute the O(P) roots of all the polynomials associated with these comparisons, and perform a binary search to locate λ* among them, using the (non-generic) A_s at each binary-search step. The cost of simulating a parallel step of A_p is thus O(P + T_s log P) (there is no need to sort the O(P) roots; instead, one can use repeated median finding, whose overall cost is only O(P)), for a total running time of O(P T_p + T_p T_s log P). In most cases, the second term dominates the running time.

This technique can be generalized in many ways. For example, given a concave function f(λ), we can compute the value λ* of λ that maximizes f(λ), provided we have sequential and parallel algorithms for the decision problem (of comparing a specific λ with λ*); these algorithms compute f(λ) and its derivative, from which the relative location of the maximum of f is easy to determine. We can compute λ* even if the decision algorithm A_s cannot distinguish between, say, λ < λ* and λ = λ*; in this case we maintain a half-closed interval I containing λ*, and the right endpoint of the final I gives the desired value λ*. See [14, 78, 263, 264] for these and some other generalizations.

It is instructive to observe that parallelism is used here in a very weak sense, since the overall algorithm remains sequential and just simulates the parallel algorithm. The only feature that we require is that the parallel algorithm perform only a small number of batches of comparisons, and that the comparisons in each batch be independent of one another. We can therefore assume that the parallel algorithm runs in the parallel comparison model of Valiant [266], which measures parallelism only in terms of the comparisons being made, and ignores all other operations, such as bookkeeping, communication between the processors, and data allocation. Moreover, any portion of the algorithm that is independent of λ can be performed sequentially, in any manner we like. These observations simplify the
technique considerably in many cases.
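To make the generic simulation concrete, here is a minimal Python sketch of how a single comparison of the generic algorithm is resolved. It is illustrative only, not the procedure of [209, 210]: the polynomial p, its (assumed exactly precomputed) real roots, and a decision oracle decide(x), which reports whether x is smaller than, equal to, or larger than λ*, are all supplied by the problem at hand.

    def resolve_comparison(p, roots, decide, interval):
        """Resolve the sign of p at the unknown optimum lambda*.

        p        -- callable evaluating the comparison polynomial
        roots    -- sorted real roots of p (assumed precomputed exactly)
        decide   -- oracle: decide(x) is -1, 0, +1 as x <, =, > lambda*
        interval -- open interval (lo, hi) currently known to contain lambda*
        Returns (sign, interval): sign is 0 if some root equals lambda*
        (optimum found), otherwise the sign of p on the shrunken interval.
        """
        lo, hi = interval
        candidates = [r for r in roots if lo < r < hi]
        while candidates:                      # binary search among the roots
            mid = candidates[len(candidates) // 2]
            side = decide(mid)                 # one run of the decision algorithm
            if side == 0:
                return 0, (mid, mid)           # lambda* located exactly
            if side < 0:                       # mid < lambda*: go right
                lo = mid
                candidates = [r for r in candidates if r > mid]
            else:                              # mid > lambda*: go left
                hi = mid
                candidates = [r for r in candidates if r < mid]
        value = p((lo + hi) / 2)               # p has constant sign on (lo, hi)
        return (value > 0) - (value < 0), (lo, hi)

Batching P such comparisons per parallel step, and binary searching over all their roots at once, gives the O(P + T_s log P) cost per step quoted above.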
2.2 An example: the slope-selection problem

As an illustration, consider the slope-selection problem: Given a set of n points in the plane and an integer 1 ≤ k ≤ n(n-1)/2, determine a segment connecting two input points that has the k-th smallest slope among all such segments. Using the duality transform [108], we can formulate this problem as follows: We are given a set L of n nonvertical lines in the plane and an integer 1 ≤ k ≤ n(n-1)/2, and we wish to find an intersection point of two lines of L that has the k-th smallest x-coordinate. (We assume, for simplicity, that the lines are in general position, so that no three lines are concurrent and no two intersection points have the same x-coordinate.) We are thus seeking the k-th leftmost vertex of the arrangement A(L) of the lines of L; see [108, 254] for more details concerning arrangements.

We define P(λ) to be true if at most k vertices of A(L) have x-coordinate smaller than or equal to λ. Obviously, P(λ) is monotone, and λ*, the maximum value of λ for which P(λ) is true, is the x-coordinate of the desired vertex. Once λ* has been computed, the actual vertex is rather easy to find; in fact, the algorithm described below can compute the vertex without any additional work.

In order to apply the parametric-searching technique, we need an algorithm that, given a vertical line x = λ, can compare λ with λ*. Let k_λ be the number of vertices of A(L) whose x-coordinates are less than or equal to λ. If k_λ < k, then clearly λ < λ*. If k_λ ≥ k, then λ > λ* (resp. λ = λ*) if and only if no (resp. at least one) vertex lies on the vertical line x = λ. Let (ℓ_1, ℓ_2, ..., ℓ_n) denote the sequence of lines of L sorted in decreasing order of their slopes, and let π be the permutation for which π(i) is the rank, from bottom to top, of the intercept of ℓ_i with the line x = λ. An easy observation is that two lines ℓ_i, ℓ_j, with i < j, intersect to the left of x = λ if and only if π(i) > π(j). In other words, the number of intersection points to the left of x = λ equals the number of inversions in the permutation π, which can be counted, in O(n log n) time, by a straightforward tree-insertion procedure [183]. Moreover, we can implement this inversion-counting procedure by a parallel sorting algorithm that takes O(log n) parallel steps and uses O(n) processors (e.g., the one in [29]). Hence, we can count the number of inversions in O(n log n) time sequentially, or in O(log n) parallel time using O(n) processors. Plugging these algorithms into the parametric-searching paradigm, we obtain an O(n log³ n)-time algorithm for the slope-selection problem.
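The sequential decision procedure is simple enough to sketch. The following Python function (an illustration under the general-position assumption, not the exact tree-insertion procedure of [183]) counts the arrangement vertices to the left of x = λ by counting inversions with a merge sort; the extra test for a vertex lying exactly on x = λ is omitted.

    def vertices_left_of(lines, lam):
        """Count arrangement vertices strictly to the left of x = lam.

        lines -- (slope, intercept) pairs, assumed in general position
        Two lines cross left of x = lam exactly when their slope order and
        their height order at x = lam disagree, i.e., form an inversion.
        """
        by_slope = sorted(lines, key=lambda ln: -ln[0])   # decreasing slope
        heights = [a * lam + b for a, b in by_slope]      # heights at x = lam

        def sort_count(seq):
            # Merge sort that returns (sorted sequence, inversion count).
            if len(seq) <= 1:
                return seq, 0
            mid = len(seq) // 2
            left, li = sort_count(seq[:mid])
            right, ri = sort_count(seq[mid:])
            merged, inv, i, j = [], li + ri, 0, 0
            while i < len(left) and j < len(right):
                if left[i] <= right[j]:
                    merged.append(left[i]); i += 1
                else:                 # right[j] jumps ahead of len(left)-i items
                    merged.append(right[j]); j += 1
                    inv += len(left) - i
            merged += left[i:] + right[j:]
            return merged, inv

        return sort_count(heights)[1]

Comparing the returned count with k implements the comparison of λ with λ*; replacing the merge sort by a sorting network of depth O(log n) [29] yields the parallel counterpart needed for the generic simulation.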
2.3 Improvements and extensions

Cole [76] observed that in certain applications of parametric searching, including the slope-selection problem, the running time can be improved to O((P + T_s)T_p), as follows. Consider a parallel step of the above generic algorithm. Suppose that, instead of invoking the decision procedure O(log n) times in this step, we call it only O(1) times, say, three times. This will
determine the outcome of 7/8 of the comparisons, and will leave 1/8 of them unresolved (we assume here, for simplicity, that each comparison has only one critical value of λ at which its outcome changes; this is the case in the slope-selection problem). Suppose further that each of the unresolved comparisons can influence only a constant (and small) number, say two, of the comparisons executed at the next parallel step. Then 3/4 of these comparisons can still be simulated generically with the currently available information. This leads to a modified scheme that mixes the parallel steps of the algorithm, since we now have to perform new comparisons together with yet-unresolved old ones. Nevertheless, Cole shows that, if carefully implemented (by assigning an appropriate time-dependent weight to each unresolved comparison, and by choosing the weighted median at each step of the binary search), the number of parallel steps of the algorithm increases only by an additive logarithmic term, which leads to the improvement stated above. An ideal setup for Cole's improvement is when the parallel algorithm is described as a circuit (or network), each of whose gates has constant fan-out. Since sorting can be implemented efficiently by such a network [29], Cole's technique is applicable to problems whose decision procedure is based on sorting. Cole's idea therefore improves the running time of the slope-selection algorithm to O(n log² n). (Note that the only step of the algorithm that depends on λ is the sorting that produces the permutation π. The subsequent inversion-counting step is independent of λ, and can be executed sequentially. In fact, this step can be suppressed altogether, since it does not provide us with any additional information about λ*.)

Using additional machinery, Cole et al. [77] gave an optimal O(n log n)-time solution. They observe that one can compare λ with a value that is far away from λ* in a faster manner, by counting inversions only approximately. This approximation is progressively refined as λ approaches λ* in subsequent comparisons. Cole et al. show that the overall cost of O(log n) calls to the approximating decision procedure is only O(n log n), so this also bounds the running time of the whole algorithm. This technique was subsequently simplified in [49]. Chazelle et al. [61] have shown that the algorithm of [77] can be extended to compute, in O(n log n) time, the k-th leftmost vertex in an arrangement of n line segments. The slope-selection problem is only one of many problems in geometric optimization that have been solved efficiently using parametric searching. See [5, 9, 14, 18, 60, 233] for a sample of other problems, many of which are described in Part II, that benefit from parametric searching.

The parametric-searching technique can be extended to higher dimensions in a natural manner. Suppose we have a d-variate (strictly) concave function F(λ), where λ varies over R^d. We wish to compute the point λ* ∈ R^d at which F(λ) attains its maximum value. Let A_s, A_p be, as above, sequential and parallel algorithms that can compute F(λ_0) for any given λ_0. As above, we run A_p generically at λ*. Each comparison involving λ* now amounts to evaluating the sign of a d-variate polynomial p(λ_1, ..., λ_d), and each parallel step requires resolving P such independent comparisons at λ*. Resolving a comparison is more difficult
because p(λ_1, ..., λ_d) = 0 is now a (d-1)-dimensional variety. Cohen and Megiddo [74] described a recursive procedure for executing a parallel step in the case where the polynomial corresponding to each comparison is a linear function. The total time spent in simulating A_p, using their procedure, is 2^{O(d^2)} T_s (T_p log P)^d. Agarwala and Fernandez-Baca improved the running time slightly by extending Cole's idea to multidimensional parametric searching [22]; see also [213, 229]. The running time was further improved by Agarwal et al. [18] to d^{O(d)} T_s (T_p log P)^d. Later, Toledo [265] extended these techniques to comparisons involving nonlinear polynomials, using Collins's cylindrical algebraic decomposition [79]. The total running time of his procedure is O(T_s (T_p log n)^{2^d - 1}). For the sake of completeness, we present these higher-dimensional extensions in an Appendix.
3 Alternative Approaches to Parametric Searching

Despite its power and versatility, the parametric-searching technique has some shortcomings:

(i) Parametric searching requires the design of an efficient parallel algorithm for the generic version of the decision procedure. This is not always easy, even though it only needs to be done in the weak comparison model, and it often tends to make the overall solution quite complicated and impractical.

(ii) The generic algorithm requires exact computation of the roots of the polynomials p(λ) whose signs determine the outcomes of the comparisons made by the algorithm. In general, the roots of a polynomial cannot be computed exactly, so one has to rely either on numerical techniques for approximating the roots of p(λ), or on computational-algebra techniques for isolating the roots of p(λ) and determining the sign of p(λ) without computing the roots explicitly. Both alternatives are rather expensive.

(iii) Finally, from an aesthetic point of view, the execution of an algorithm based on parametric searching may appear somewhat chaotic. Such an algorithm gives little insight into the problem, and its execution does not resemble any "intuitive" flow of execution for solving the problem.

These shortcomings have led several researchers to look for alternatives to parametric searching for geometric optimization problems. Roughly speaking, parametric searching effectively conducts an implicit binary search over a set Λ = {λ_1, ..., λ_t} of critical values of the parameter λ, to locate the optimum λ* among them. (For example, in the slope-selection problem, the critical values are the n(n-1)/2 x-coordinates of the vertices of the arrangement A(L).) The power of the technique stems from its ability to perform the binary search by generating only a small number of critical values during the search, without computing the entire set Λ explicitly. In this section we describe some alternative ways of performing such a binary search, which also generate only a small number of critical values.
3.1 Randomization

Randomization is a natural approach to performing an implicit binary search over the critical values [199, 267, 275]. Suppose we know that λ* lies in some interval I = [α, β]. Suppose further that we can randomly choose an element λ_0 ∈ I ∩ Λ, where each item is chosen with probability 1/|I ∩ Λ|. Then, by comparing λ* with a few randomly chosen elements of I ∩ Λ (i.e., by executing the decision algorithm at these values), we can shrink I to an interval I′ that is guaranteed to contain λ* and is expected to contain significantly fewer critical values. The difficult part is, of course, choosing a random element from I ∩ Λ. In many cases, a procedure for computing |I ∩ Λ| can be converted into a procedure for generating a random element of I ∩ Λ. For example, in the slope-selection problem, given a set L of n lines in the plane and a vertical strip W = (l, r) × R, an inversion-counting algorithm for computing the number of vertices of A(L) within W can be used to generate a multiset of q random vertices of A(L) ∩ W in time O(n log n + q) [199].

Based on this observation, Matousek [199] obtained the following simple slope-selection algorithm. Each step of the algorithm maintains a vertical strip W(a, b) = {(x, y) | a ≤ x ≤ b} that is guaranteed to contain the k-th leftmost vertex; initially a = -∞ and b = +∞. Let m be the number of vertices of A(L) lying inside W. We repeat the following step until the algorithm terminates. If m ≤ n, the k-th leftmost vertex of A(L) can be computed in O(n log n) time by a sweep-line algorithm (through W). Otherwise, let k′ be the number of vertices lying to the left of the line x = a, and set j = ⌊(k - k′)n/m⌋, j_a = j - ⌊3√n⌋, and j_b = j + ⌊3√n⌋. We choose n random vertices of A(L) lying inside W(a, b). If the k-th leftmost vertex lies in the strip bounded by the j_a-th and the j_b-th leftmost sampled vertices, and that strip contains at most cm/√n vertices of A(L), for some appropriate constant c > 0, we shrink W(a, b) to this strip and repeat; otherwise, we discard the random sample of vertices and draw a new sample. It can be shown, using Chernoff's bound [226], that the expected running time of the above algorithm is O(n log n).

Shafer and Steiger [252] gave a slightly different O(n log n) expected-time algorithm for slope selection. They choose a random subset of u = O(n log n) vertices of A(L). Let a_1, a_2, ..., a_u be the x-coordinates of these vertices. Using the algorithm of Cole et al. [77] for counting the number of inversions approximately, they determine in O(n log n) time the vertical strip W(a_i, a_{i+1}) that contains the k-th leftmost vertex of A(L). They prove that, with high probability, W(a_i, a_{i+1}) contains only O(n) vertices of A(L), and therefore the desired vertex can be computed in an additional O(n log n) time by a sweep-line algorithm. See [90] for yet another randomized slope-selection algorithm. We will mention below a few more applications of this randomized approach.

Clarkson and Shor [73] gave another randomized technique for solving geometric optimization problems. Originally, they proposed the algorithm for computing the diameter of a set of points in R³ (see Section 8.1). Their algorithm was extended by Agarwal and Sharir [15] and by Chan [54] to solve a variety of other geometric optimization problems.
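The interval-shrinking scheme at the heart of this approach can be sketched generically. The Python code below is an illustration, not Matousek's algorithm: it assumes three problem-specific helpers, decide(x) comparing x with λ*, count(lo, hi) returning the number of critical values in an interval, and sample(lo, hi, q) drawing q random critical values from it.

    def randomized_search(decide, count, sample, threshold):
        """Shrink an interval around lambda* by probing random critical values.

        decide(x)         -- returns -1, 0, +1 as x <, =, > lambda*
        count(lo, hi)     -- number of critical values in the open interval
        sample(lo, hi, q) -- q critical values drawn uniformly from the interval
        threshold         -- stop once at most this many critical values remain
        Returns an interval containing lambda*, or (x, x) if some probed
        critical value hits lambda* exactly.
        """
        lo, hi = float("-inf"), float("inf")
        while count(lo, hi) > threshold:
            for x in sorted(sample(lo, hi, 2)):   # a pair of random probes
                side = decide(x)
                if side == 0:
                    return x, x                   # lambda* found exactly
                elif side < 0:                    # x < lambda*: move left end
                    lo = max(lo, x)
                else:                             # x > lambda*: move right end
                    hi = min(hi, x)
        return lo, hi

As in randomized selection, a constant number of probes per round shrinks the number of surviving critical values by a constant factor in expectation, so an expected logarithmic number of decision calls suffices.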
3.2 Expanders and cuttings

In many cases, the above randomized approach can be derandomized, without affecting the asymptotic running time, using techniques based on expanders or on geometric cuttings. Expander graphs are special graphs that can be constructed deterministically (in a rather simple manner), have a linear number of edges, and share many properties with random graphs; see [32] for more details. For example, in the slope-selection problem, we can construct expander graphs whose vertices are the given lines and whose edges correspond to appropriate vertices of the arrangement (each edge corresponds to the intersection of the two lines that it connects). If we search among these vertices for the optimal λ*, we obtain a slab containing λ* and free of any of the expander-induced vertices. One can then show, using properties of expander graphs, that the number of vertices of A(L) within the slab is rather small, so the search for λ* within the slab can proceed in a more efficient manner. This is similar to the randomized solution in [199]. (The precise construction of the expander graph is somewhat involved, and is described in [178].) Although expanders have been used extensively in many areas, including parallel computation, complexity theory, communication networks, and VLSI routing, their applications in computational geometry have been rather sparse so far. Ajtai and Megiddo [30] gave an efficient parallel linear-programming algorithm based on expanders, and later Katz [176] and Katz and Sharir [179, 178] applied expanders to solve several geometric optimization problems, including the slope-selection problem, as just mentioned.

Chazelle et al. [60] developed an O(n log² n)-time deterministic algorithm for the slope-selection problem, using cuttings.¹ The bound was subsequently improved by Bronnimann and Chazelle [49] to O(n log n). For the sake of completeness, we give a brief sketch of the algorithm of Chazelle et al. [60]. The algorithm works in O(log n) stages. At the beginning of the j-th stage, we have a vertical strip W_j, which contains λ*, and a triangulation T_j of W_j. For each triangle △ ∈ T_j, we store the vertices of △, the subset L_△ of lines of L that intersect the interior of △, and the intersection points of the lines of L_△ with the edges of △. We refer to the vertices of T_j and the intersection points of L_△ with the edges of △, over all △ ∈ T_j, as critical points. The algorithm maintains the following two invariants:

(C1) The total number of critical points is at most c_1 n, for some constant c_1 > 0.

(C2) For every triangle △ ∈ T_j, L_△ contains at most n/c_2^j lines of L, where c_2 ≥ 2 is a constant.

By (C1) and (C2), W_j contains O(n) vertices of A(L) for j > log_{c_2} n, so we can then find λ* by a sweep-line algorithm. We set W_1 to be a vertical strip containing all the vertices of A(L), and T_1 consists of a single unbounded triangle, namely W_1 itself.

¹A (1/r)-cutting for a set H of n hyperplanes in R^d is a partition of R^d into O(r^d) simplices with pairwise disjoint interiors, such that the interior of each simplex intersects at most n/r hyperplanes of H. For any given r, a (1/r)-cutting of size O(r^d) can be computed in time O(n r^{d-1}) [58].
Suppose now that we are at the beginning of the j-th stage. For every triangle △ with |L_△| > n/c_2^j, we compute a (1/(4c_2))-cutting Ξ_△ of L_△, clip each triangle τ ∈ Ξ_△ within △, re-triangulate τ ∩ △, and compute the critical points for the new triangulation of W_j. If the total number of critical points after this refinement is at most c_1 n, we move to the next stage. Otherwise, we shrink the strip W_j, as follows. We choose the critical point with the median x-coordinate, say x = λ_m, and, using the decision algorithm described in Section 2.2, determine whether λ* is greater than, smaller than, or equal to λ_m. If λ* = λ_m, we stop; otherwise we shrink W_j to the portion of it lying to the left or to the right of the line x = λ_m, depending on whether λ* is smaller or larger than λ_m. In either case, the new strip contains only half of the critical points. After repeating this procedure a constant number of times, we can ensure that the number of critical points in the current strip is at most c_1 n/4. We set W_{j+1} to be this strip, clip the refined triangulation within W_{j+1}, re-triangulate every clipped triangle, and merge two triangles whenever their union is a triangle intersecting at most n/c_2^{j+1} lines of L. The total number of critical points after these steps can be shown to be at most c_1 n. As shown in [60], each stage takes O(n log n) time, so the overall running time is O(n log² n). Using the same idea as in [77] (of counting the number of inversions approximately), Bronnimann and Chazelle [49] managed to improve the running time to O(n log n).
3.3 Matrix searching

An entirely different alternative to parametric searching was proposed by Frederickson and Johnson [124, 126, 127]; it is based on searching in sorted matrices. The technique is applicable in cases where the set of candidate critical values for the optimum parameter λ* can be represented as an n × n matrix A, each of whose rows and columns is sorted. The size of the matrix is too large for an explicit binary search through its elements, so an implicit search is called for. We assume here that each entry of the matrix A can be computed in O(1) time. We give a brief sketch of this matrix-searching technique. Let us assume that n = 2^k for some k ≥ 0. The algorithm works in two phases. The first phase, which consists of k steps, maintains a collection of disjoint submatrices of A, such that λ* is guaranteed to be an element of one of them. At the beginning of the i-th step, the algorithm has at most B_i = 2^{i+2} - 1 matrices, each of size 2^{k-i+1} × 2^{k-i+1}. The i-th step splits every such matrix into four square submatrices, each of size 2^{k-i} × 2^{k-i}, and discards some of these submatrices, so as to be left with only B_{i+1} matrices. After the k-th step, we are left with O(n) singleton matrices, so we can perform a binary search over these O(n) critical values to find λ*.

The only nontrivial step in the above algorithm is determining which of the submatrices to discard in each step of the first phase, so that at most B_{i+1} matrices are left after the i-th step. After splitting each submatrix, we construct two sets U and V: U is the set of the smallest (i.e., upper-leftmost) elements of the submatrices, and V is the set of their largest (i.e., bottom-rightmost) elements. We choose the median elements
λ_U and λ_V of U and V, respectively. We run the decision algorithm at λ_U and at λ_V, to compare them with λ*. If either of them is equal to λ*, we are done. Otherwise, there are four cases to consider:

(i) If λ_U < λ*, we discard all those matrices whose largest elements are smaller than λ_U.

(ii) If λ_U > λ*, we discard all those matrices whose smallest elements are larger than λ_U; at least half of the matrices are discarded in this case.

(iii) If λ_V < λ*, we discard all those matrices whose largest elements are smaller than λ_V; at least half of the matrices are discarded in this case.

(iv) If λ_V > λ*, we discard all those matrices whose smallest elements are larger than λ_V.

It can be shown that this pruning step retains at most B_{i+1} submatrices [124, 126, 127], as desired. In conclusion, we can find the optimum λ* by executing only O(log n) calls to the decision procedure, so the running time of this matrix-searching technique is O(log n) times the cost of the decision procedure. The technique, when applicable, is both efficient and simple compared to standard parametric searching.

Aggarwal et al. [23, 24, 25, 26] studied a different matrix-searching technique for optimization problems. They gave a linear-time algorithm for computing the minimum (or maximum) element of every row of a totally monotone matrix; a matrix A = {a_{i,j}} is called totally monotone if a_{i_1,j_1} < a_{i_1,j_2} implies a_{i_2,j_1} < a_{i_2,j_2}, for all 1 ≤ i_1 < i_2 ≤ m and 1 ≤ j_1 < j_2 ≤ n. Totally monotone matrices arise in many geometric, as well as non-geometric, optimization problems. For example, the farthest neighbors of all vertices of a convex polygon and the geodesic diameter of a simple polygon can be computed in linear time using such matrices [24, 151].
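The sorted-matrix search is compact enough to sketch in Python. The code below is a simplified illustration of the Frederickson-Johnson scheme: it applies pruning rules (i)-(iv) with the assumed oracle decide(x), but omits the careful bookkeeping that guarantees the B_i bound on the number of surviving submatrices, and it assumes λ* actually occurs as a matrix entry.

    import statistics

    def search_sorted_matrix(entry, n, decide):
        """Locate lambda* among the entries of an n x n sorted matrix.

        entry(i, j) -- matrix entry; rows and columns sorted increasingly
        n           -- matrix size, assumed here to be a power of two
        decide(x)   -- returns -1, 0, +1 as x <, =, > lambda*
        """
        cells = [(0, 0, n)]                       # submatrices: (row, col, size)
        while any(s > 1 for _, _, s in cells):
            quads = []                            # split each cell into four
            for i, j, s in cells:
                if s == 1:
                    quads.append((i, j, 1))
                else:
                    h = s // 2
                    quads += [(i, j, h), (i + h, j, h),
                              (i, j + h, h), (i + h, j + h, h)]
            for corner in ("smallest", "largest"):
                if corner == "smallest":          # upper-left corners (set U)
                    probe = statistics.median_low(
                        entry(i, j) for i, j, s in quads)
                else:                             # lower-right corners (set V)
                    probe = statistics.median_low(
                        entry(i + s - 1, j + s - 1) for i, j, s in quads)
                side = decide(probe)
                if side == 0:
                    return probe                  # probe hits lambda* exactly
                if side < 0:                      # probe < lambda*: rules (i), (iii)
                    quads = [(i, j, s) for i, j, s in quads
                             if entry(i + s - 1, j + s - 1) > probe]
                else:                             # probe > lambda*: rules (ii), (iv)
                    quads = [(i, j, s) for i, j, s in quads
                             if entry(i, j) < probe]
            cells = quads
        values = sorted(entry(i, j) for i, j, _ in cells)
        lo, hi = 0, len(values) - 1               # final binary search
        while lo < hi:
            mid = (lo + hi) // 2
            if decide(values[mid]) < 0:
                lo = mid + 1
            else:
                hi = mid
        return values[lo]

Each of the O(log n) rounds spends two decision calls, plus O(log n) calls in the final binary search, matching the O(log n) bound on oracle calls stated above.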
4 Prune-and-Search Technique and Linear Programming

Like parametric searching, the prune-and-search (or decimation) technique also performs an implicit binary search over the finite set of candidate values for λ*, but, while doing so, it also tries to eliminate input objects that are guaranteed not to affect the value of λ*. Each phase of the technique eliminates a constant fraction of the remaining objects. Therefore, after a logarithmic number of phases, the problem size becomes constant, and the problem can be solved in a final, brute-force step. Because of the "decimation" of input objects, the overall cost of the resulting algorithm remains proportional to the cost of the first pruning phase.

The prune-and-search technique was originally introduced by Megiddo [211, 213], in his development of a linear-time algorithm for linear programming with n constraints in two and three dimensions. Later he extended the approach to obtain an O(2^{2^d} n)-time algorithm for linear programming in R^d. Since then, the prune-and-search technique has been applied to
many other geometric optimization problems. We illustrate the technique by describing Megiddo's two-dimensional linear-programming algorithm.

We are given a set H = {h_1, ..., h_n} of n halfplanes and a vector c, and we wish to minimize cx over the feasible region K = h_1 ∩ h_2 ∩ ... ∩ h_n. Without loss of generality, assume that c = (0, 1) (i.e., we seek the lowest point of K). Let L denote the set of lines bounding the halfplanes of H, and let L⁺ (resp. L⁻) denote the subset of lines ℓ_i ∈ L whose associated halfplane h_i lies below (resp. above) ℓ_i. The algorithm pairs up the lines of L into disjoint pairs (ℓ_1, ℓ_2), (ℓ_3, ℓ_4), ..., so that either both lines in a pair belong to L⁺ or both belong to L⁻. The algorithm computes the intersection points of the lines in each pair, and chooses the median, x_m, of their x-coordinates. Let x* denote the x-coordinate of the optimal (i.e., lowest) point of K (if such a point exists). The algorithm then uses a linear-time decision procedure (whose details are omitted here, though some of them are discussed below) that determines whether x_m = x*, x_m < x*, or x_m > x*. If x_m = x*, we stop, since we have found the optimum. Suppose that x_m < x*. If (ℓ, ℓ′) is a pair of lines, both belonging to L⁻, whose intersection point lies to the left of x_m, then we can discard from further consideration the line with the smaller slope, because that line is known to pass below the optimal point of K. All other cases can be treated in a fully symmetric manner, so we have managed to discard about n/4 lines. We have thus computed, in O(n) time, a subset H′ ⊆ H of about 3n/4 constraints, such that the optimal point of K′ = ∩_{h ∈ H′} h is the same as that of K. We now apply the whole procedure again to H′, and keep repeating this (for O(log n) stages) until either the number of remaining lines falls below some small constant, in which case we solve the problem by brute force (in constant time), or the algorithm hits x* accidentally, in which case it stops right away. (We omit the description of the linear-time decision procedure, and of the handling of the cases in which K is empty or unbounded; see [108, 211, 213] for details.) It is now easy to see that the overall running time of the algorithm is O(n).

Remark. It is instructive to compare this method with the parametric-searching technique in the context of two-dimensional linear programming. In both approaches, the decision procedure aims to compare some given x_0 with the optimal value x*. This is done by computing the maximum and the minimum values of the intercepts of the lines of L⁻ and of L⁺, respectively, with the line x = x_0. A trivial method for computing this maximum and minimum in parallel is in a binary-tree manner: compute the maximum or minimum over pairs of lines, then over pairs of pairs, and so on. Both techniques begin by implementing generically the first parallel step of this decision procedure. The improved performance of the prune-and-search algorithm stems from the realization that (a) there is no need to perform the full binary search over the critical values of the comparisons in that stage, since a single binary-search step suffices (this is similar to Cole's enhancement of parametric searching, mentioned above); and (b) this single comparison allows us to discard a quarter of the given lines, so there is no point in continuing the generic simulation, and it is better to start the whole algorithm from scratch with the surviving lines. From this
point of view, the prune-and-search technique can be regarded as an optimized variant of parametric searching.

This technique can be extended to higher dimensions, although it becomes more complicated, and requires recursive invocations of the algorithm on subproblems in lower dimensions. It yields a deterministic algorithm for linear programming that runs in O(C_d n) time, where C_d is a constant depending on d. One of the difficult steps in higher dimensions is to develop a procedure that can discard a fraction of the input constraints from further consideration by invoking the (d-1)-dimensional linear-programming algorithm a constant number of times; the value of C_d depends on this constant. The original approach by Megiddo [213] gives C_d = 2^{2^d}, which was improved by Clarkson [68] and Dyer [100] to 3^{d^2}. Their procedure can be simplified and improved using geometric cuttings, as follows (see [18, 104]). Let H be the set of hyperplanes bounding the input constraints. Choose r to be a constant, and compute a (1/r)-cutting Ξ. By invoking the (d-1)-dimensional linear-programming algorithm recursively O(r^d) times (at most three times for each hyperplane h supporting a facet of a simplex of Ξ: on h itself, and on two hyperplanes parallel to h, one on each side, lying very close to h), one can determine the simplex of Ξ that contains x*. The constraints whose bounding hyperplanes do not intersect that simplex can be discarded, because they do not determine the optimum value. We then solve the problem recursively with the remaining n/r constraints. Dyer and Frieze [104] (see also Agarwal et al. [18]) have shown that the number of calls to the recursive algorithm can be reduced to O(dr). This yields a d^{O(d)} n-time algorithm for linear programming in R^d. An entirely different algorithm with a similar running time was given by Chazelle and Matousek [74]. It is an open problem whether faster deterministic algorithms can be developed. Although no progress has been made on this front, there have been significant developments on randomized algorithms for linear programming, which we discuss in the next two sections.

Recently there has been considerable interest in parallelizing Megiddo's prune-and-search algorithm. Deng [89] gave an O(log n)-time, O(n)-work algorithm for two-dimensional linear programming in the CRCW model of computation. His algorithm, however, does not extend to higher dimensions. Alon and Megiddo [31] gave a randomized algorithm in the CRCW model that runs, with high probability, in O(1) time using O(n) processors. Ajtai and Megiddo [30] gave an O((log log n)^d)-time deterministic algorithm using O(n) processors in Valiant's model of computation. Goodrich [138] and Sen [250] gave an O((log log n)^{d+2})-time, O(n)-work algorithm in the CRCW model; see also [102].
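The pairing-and-pruning step of the two-dimensional algorithm is short enough to sketch. The Python code below is an illustration under simplifying assumptions, not Megiddo's full algorithm: it handles only constraints of the form y ≥ ax + b (lines of L⁻), and the linear-time decision oracle decide is assumed given.

    def prune_phase(lines, decide):
        """One prune-and-search phase for constraints of the form y >= a*x + b.

        lines  -- list of (a, b) pairs; the feasible region lies above their
                  upper envelope, and we seek its lowest point, at x = x*
        decide -- oracle: decide(x) returns -1, 0, +1 as x <, =, > x*
        Returns a reduced set of lines with the same lowest point,
        discarding about a quarter of them per call.
        """
        survivors, pairs = [], []
        for i in range(0, len(lines) - 1, 2):     # pair the lines up
            (a1, b1), (a2, b2) = lines[i], lines[i + 1]
            if a1 == a2:                          # parallel: lower one redundant
                survivors.append((a1, max(b1, b2)))
            else:
                xc = (b2 - b1) / (a1 - a2)        # intersection x-coordinate
                pairs.append((xc, lines[i], lines[i + 1]))
        if len(lines) % 2:
            survivors.append(lines[-1])           # odd line out survives
        if not pairs:
            return survivors
        xm = sorted(x for x, _, _ in pairs)[len(pairs) // 2]
        side = decide(xm)                         # locate x* relative to xm
        if side == 0:
            return lines                          # xm = x*: optimum found
        for xc, l1, l2 in pairs:
            if (side < 0) == (xc <= xm):
                # The pair meets on the far side of xm from x*, so one of its
                # lines passes below the optimum and can be dropped; tuples
                # compare by slope first, so max/min picks the right line.
                survivors.append(max(l1, l2) if side < 0 else min(l1, l2))
            else:
                survivors.extend([l1, l2])        # cannot prune this pair yet
        return survivors

Repeating prune_phase until a constant number of lines remain, and finishing by brute force, gives the O(n) total running time, since the work per phase shrinks geometrically.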
5 Randomized Algorithms for Linear Programming

Random sampling has become one of the most powerful and versatile techniques in computational geometry, so it is no surprise that this technique has also been successful in solving
many geometric optimization problems. See the book by Mulmuley [227] and the survey papers by Clarkson [69] and by Seidel [249] for applications of the random-sampling technique in computational geometry. In this section, we describe a randomized algorithm for linear programming by Clarkson [71], based on random sampling; the algorithm is actually quite general and can be applied to geometric set-cover and related problems as well [6, 51]. Other randomized algorithms for linear programming, which run in expected linear time for any fixed dimension, have been proposed by Dyer and Frieze [104], Seidel [248], and Matousek et al. [207].

Let H be the set of n constraints. We assign a weight μ(h) ∈ Z to each constraint h; initially μ(h) = 1 for all h ∈ H. For a subset A ⊆ H, let μ(A) = Σ_{h∈A} μ(h). The algorithm works in rounds, each consisting of the following steps. Set r = 6d². If |H| ≤ 6d², we compute the optimal solution using the simplex algorithm. Otherwise, we choose a random sample R ⊆ H with μ(R) = r. (We can regard H as a multiset in which each constraint h appears μ(h) times, and we choose a random multiset R of r constraints.) We compute the optimal solution x_R for R, and the subset V ⊆ H \ R of constraints that x_R violates (that is, the constraints that do not contain x_R). If V = ∅, the algorithm returns x_R. If μ(V) ≤ μ(H)/(3d), we double the weight of each constraint in V; in any case, we then repeat the sampling procedure. See Figure 1 for a pseudocode description of the algorithm.

    function RANDOM_LP(H)
    /* H: n constraints in R^d; initially μ(h) = 1 for all h ∈ H */
    if n ≤ 6d² then
        return Simplex(H)                  /* returns x_H */
    else
        r := 6d²
        repeat
            choose a random multiset R of r constraints of H, by weight μ
            x_R := Simplex(R)
            V := { h ∈ H | x_R violates h }
            if μ(V) ≤ μ(H)/(3d) then
                for all h ∈ V do μ(h) := 2μ(h)
        until V = ∅
        return x_R

Figure 1: Clarkson's randomized LP algorithm.
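For concreteness, the same loop in executable form: the following Python sketch mirrors Figure 1 but is not Clarkson's exact implementation; the base-case solver solve_base and the violation test violates are assumed helpers, constraints are assumed hashable, and random.choices stands in for weighted multiset sampling.

    import random

    def clarkson_lp(H, d, solve_base, violates):
        """Iterative-reweighting sketch of Clarkson's randomized algorithm.

        H          -- list of constraints (assumed hashable)
        d          -- the (combinatorial) dimension of the problem
        solve_base -- solver for small instances, e.g. simplex on the sample
        violates   -- violates(x, h): does point x violate constraint h?
        """
        r = 6 * d * d
        if len(H) <= r:
            return solve_base(H)
        weight = {h: 1 for h in H}
        while True:
            # Sample r constraints with probability proportional to weight.
            R = random.choices(H, weights=[weight[h] for h in H], k=r)
            x = solve_base(R)
            V = [h for h in H if violates(x, h)]
            if not V:
                return x                      # x satisfies every constraint
            if sum(weight[h] for h in V) <= sum(weight.values()) / (3 * d):
                for h in V:                   # successful round: reweight
                    weight[h] *= 2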
Let B be the set of d constraints whose boundaries are incident to the optimal solution. A round is called successful if μ(V) ≤ μ(H)/(3d). Using the fact that R is a random sample, one can argue that each round is successful with probability at least 1/2. Every successful round increases μ(H) by a factor of at most (1 + 1/(3d)), so the total weight μ(H) after kd successful rounds is at most n(1 + 1/(3d))^{kd} < n e^{k/3}. On the other hand, each successful round doubles the weight of at least one constraint of B (it is easily verified that V must contain such a constraint), which implies that after kd successful rounds μ(H) ≥ μ(B) ≥ 2^k. Hence,
after kd successful rounds, 2^k ≤ μ(H) < n e^{k/3}. This implies that the above algorithm terminates in at most 3d ln n successful rounds. Since each round takes O(d^d) time to compute x_R and O(dn) time to compute V, the expected running time of the algorithm is O((d²n + d^{d+1}) log n). By combining this algorithm with a randomized recursive algorithm, Clarkson improved the expected running time to O(d²n) + d^{d/2+O(1)} log n.
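The bound of 3d ln n successful rounds follows from a short calculation, worked out here in LaTeX notation (k denotes the number of successful rounds divided by d, as above):

    % The two bounds on \mu(H) after kd successful rounds give
    2^k \;\le\; \mu(H) \;<\; n\Bigl(1+\tfrac{1}{3d}\Bigr)^{kd} \;<\; n\,e^{k/3}.
    % Taking logarithms,
    k\ln 2 \;<\; \ln n + \tfrac{k}{3}
    \quad\Longrightarrow\quad
    k\Bigl(\ln 2 - \tfrac{1}{3}\Bigr) \;<\; \ln n
    \quad\Longrightarrow\quad
    k \;<\; \frac{\ln n}{\ln 2 - 1/3} \;<\; 3\ln n,
    % so the number kd of successful rounds is less than 3d \ln n.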
6 Abstract Linear Programming

In this section we present an abstract framework that captures linear programming, as well as many other geometric optimization problems, including computing smallest enclosing balls (or ellipsoids) of finite point sets in R^d, computing largest balls (ellipsoids) inscribed in convex polytopes in R^d, computing the distance between polytopes in d-space, general convex programming, and many other problems. Sharir and Welzl [256] and Matousek et al. [207] (see also Kalai [174]) presented a randomized algorithm for optimization problems in this framework, whose expected running time is linear in the number of constraints whenever the combinatorial dimension d (whose precise definition, in this abstract framework, will be given below) is fixed. More importantly, the running time is subexponential in d for many of the LP-type problems, including linear programming. This is the first subexponential "combinatorial" bound for linear programming (a bound that counts the number of arithmetic operations and is independent of the bit complexity of the input), and is a first step toward the major open problem of obtaining a strongly polynomial algorithm for linear programming. The papers by Gartner and Welzl [130] and by Goldwasser [132] also survey the known results on LP-type problems.
6.1 An abstract framework

Let us consider optimization problems specified by a pair (H, w), where H is a finite set and w : 2^H → W is a function into a linearly ordered set (W, ≤); we assume that W has a minimum value -∞. The elements of H are called constraints, and, for G ⊆ H, w(G) is called the value of G. Intuitively, w(G) denotes the smallest value attainable by a certain objective function while satisfying all the constraints of G. The goal is to compute a minimal subset B_H of H with w(B_H) = w(H) (from which, in general, the value of H is easy to determine), assuming the availability of three basic operations, which we specify below. Such a minimization problem is called LP-type if the following two axioms are satisfied:

Axiom 1 (Monotonicity). For any F, G with F ⊆ G ⊆ H, we have w(F) ≤ w(G).