
Automated Traffic and The Finite Size Resonance

J. J. P. Veerman∗ (1,2), B. D. Stošić (3)

(1) Dept. of Math. & Stat., Portland State University, Portland, OR 97201, USA.
(2) Dip. Ing. Inf. & Mat. App., Univ. di Salerno, Salerno, Italy.
(3) Dept. Estad. & Inf., Univ. Federal Rural de Pernambuco, 52171-900, Recife-PE, Brazil.

January 28, 2008

Abstract. We investigate in detail what one might call the canonical (automated) traffic problem: A long string of N + 1 cars (numbered from 0 to N) moves along a one-lane road "in formation" at a constant velocity and with a unit distance between successive cars. Each car monitors the relative velocity and position of only the cars next to it. This information is then fed back to its own engine, which decelerates (brakes) or accelerates according to the information it receives. The question is: What happens when, due to an external influence — a traffic light turning green — the zeroth car (the "leader") accelerates? As a first approximation, we analyze linear(ized) equations and show that in this scenario the traffic flow has a tendency to be stop-and-go, as a result of a finite non-zero response time. We give approximate solutions for the global traffic as a function of all the relevant parameters (the feedback parameters as well as the cruise velocity and so on), and numerically check our findings. We discuss general design principles for these algorithms, that is, how the choice of parameters influences the performance.

1 Introduction

The movement of collections of agents (or 'flocks') with a single leader (and a directed path from it to every agent) can be stabilized over time, as has been shown before (for details see [5] and prior references therein; shorter descriptions are given in [2, 8]). But for large flocks, perturbations in the movement of the leader may nonetheless grow to a considerable size as they propagate throughout the flock, before they die out over time. This is due (in the setting of this study) to a curious resonance phenomenon which we discussed in [6], and which is related to the existence of an intrinsic response latency (or finite response time) over finite distances. We dubbed this phenomenon the "finite size resonance" because it occurs precisely when the number of agents is large but finite. In this paper we apply these ideas to study in detail what one might call the canonical (automated) traffic problem: A long string of N + 1 cars (numbered from 0 to N) moves along a one-lane road "in formation" (or coherently, see [5]) at a constant velocity and with a unit distance between successive cars. Each car monitors the relative velocity and position of only the cars next to it. This information is then fed back to its own engine, which decelerates (brakes) or accelerates according to the information it receives. The question is: What happens when, due to an external influence — a traffic light turning green or red, for example — the zeroth car (the "leader") accelerates or decelerates? The problem translates into a discretized version of a second order PDE, with an external forcing term and rather awkward 'mixed' boundary conditions. Due to these complications, we tackle only the most naive version of this problem. The equations are linear, the masses of the cars are equal, and so on. The literature abounds with studies of more sophisticated linear and nonlinear models (see e.g. [9] and references therein), but a thorough linear analysis seems to be lacking in all of these.
We give ourselves a few natural feedback parameters for the cars, as well as the desired velocity (or cruise speed) and a constant acceleration for the leader, and ask the simple question of how these parameters influence the performance of the system. We expect the phenomena studied here also to be relevant to the movement of large flocks in two or three dimensions (birds, for example). In these cases we expect higher-dimensional versions of the wave-like phenomena described here to occur in flocks that are close to 'in formation' flight (see for example [10]). Our analysis crucially uses recently developed mathematical tools to study the movement of large but finite flocks (see the quoted papers and references therein).

∗ e-mail: [email protected]

Acknowledgements. JJPV gratefully acknowledges grants through the Dip. Ing. Inf. & Mat. App., Univ. di Salerno, Salerno, Italy, and the Ist. 'Mauro Picone' per le Applicazioni del Calcolo, Rome, Italy, that made an extended stay at these institutions possible, as well as the generous hospitality offered by these institutions.

2 The Model

Let us suppose that the flock (i.e., the collection of agents) is traveling along the line and that the individual agents have positions x_i on the real line, where i ∈ {0, · · · , N}. We will take the point of view that these positions represent cars on a one-lane road. Driver i adjusts his acceleration according to a pre-programmed algorithm considering (with equal weight) the positions and velocities of his 'neighbors': one car in front and one car behind him. The zeroth car (or 'leader'), however, does not pay attention to the cars behind it (no-one is in front) and simply accelerates or decelerates according, for example, to the traffic lights it encounters. Under these assumptions, the equations of motion (together with the initial conditions) become:
\[
\forall i \in \{1,\cdots,N-1\}:\qquad
\begin{aligned}
\dot x_i &= u_i\\
\dot u_i &= f\Big[(x_i - h_i) - \tfrac12\big(x_{i-1} - h_{i-1} + x_{i+1} - h_{i+1}\big)\Big]
+ g\Big[u_i - \tfrac12\big(u_{i-1} + u_{i+1}\big)\Big]
\end{aligned}
\]
and
\[
\begin{aligned}
\dot x_N &= u_N\\
\dot u_N &= f\Big[(x_N - h_N) - (x_{N-1} - h_{N-1})\Big] + g\Big[u_N - u_{N-1}\Big]\\
x_0 &= x_0(t) \ \text{given}.
\end{aligned}
\tag{2.1}
\]

The fixed parameters h_i determine the desired relative distance between cars i and i − 1 as h_i − h_{i−1}. The parameters f and g are feedback parameters and are the same for every car. It is advantageous to write these equations in a more compact form. Following [5], Section 6, we introduce x ≡ (x_1, u_1, x_2, u_2, · · · , x_N, u_N)^T. Now let h ≡ (h_1, 0, h_2, 0, · · · , h_N, 0)^T be the vector encoding the desired relative positions of each agent with respect to the others. We can then write the equations of motion (2.1) with a single independent leader more succinctly in vector form as:
\[
\dot x = I_N \otimes A\, x + P \otimes K\,(x - h) + L_0 \otimes K
\begin{pmatrix} x_0(t) - h_0 \\ u_0(t) \end{pmatrix}.
\]
The notation is taken from [5] and we only briefly define it here. I_N is the N-dimensional identity. The N-dimensional matrix P and the N-vector L_0 are obtained from the directed Laplacian L, which is the most important object in this context. It is given by
\[
L = \begin{pmatrix}
0 & 0 & 0 & 0 & \cdots \\
-1/2 & 1 & -1/2 & 0 & \cdots \\
0 & -1/2 & 1 & -1/2 & \cdots \\
 & & \ddots & \ddots & \ddots \\
\cdots & 0 & 0 & -1 & 1
\end{pmatrix}.
\]
The matrix P is obtained by removing the first row and column (corresponding to the leader) from it:
\[
P = \begin{pmatrix}
1 & -1/2 & 0 & 0 & \cdots \\
-1/2 & 1 & -1/2 & 0 & \cdots \\
0 & -1/2 & 1 & -1/2 & \cdots \\
 & & \ddots & \ddots & \ddots \\
\cdots & 0 & 0 & -1 & 1
\end{pmatrix}.
\]
The vector L_0 consists of the last N entries of the first (the leader's) column of L. We also have the constant matrices
\[
A \equiv \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}
\qquad\text{and}\qquad
K \equiv \begin{pmatrix} 0 & 0 \\ f & g \end{pmatrix}.
\]
Thus I_N ⊗ A encodes the 'geodesic' part of the equations, K contains the control parameters, P ⊗ K encodes the interaction of the cars with the appropriate neighbors, and L_0 takes care of their interaction with the independent leader. For more details of the notation, see [5]. It turns out that with z = x − h we obtain the simple linear equation
\[
\dot z = (I_N \otimes A + P \otimes K)\, z + \Gamma(t) := M z + \Gamma(t),
\tag{2.2}
\]
where Γ(t) represents the non-autonomous part of the linear equation (2.1). Since z = 0 corresponds to an in formation solution, the (real parts of the) eigenvalues of M determine whether this in formation motion is stable. As usual, we will interpret these equations as complex equations in which the real parts correspond to physical quantities.
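The block structure of M = I_N ⊗ A + P ⊗ K is easy to assemble numerically. The following sketch (in Python with NumPy; the function name `build_M` is ours, not from the paper) builds M for given feedback parameters and checks the stability criterion that is proved in the next section:

```python
import numpy as np

def build_M(N, f, g):
    """Assemble M = I_N (x) A + P (x) K for a string of N follower cars.

    P is the reduced directed Laplacian of Section 2: interior cars weigh
    both neighbors by 1/2, while the last car watches only the car in front.
    """
    A = np.array([[0.0, 1.0],
                  [0.0, 0.0]])
    K = np.array([[0.0, 0.0],
                  [f,   g  ]])
    P = (np.eye(N)
         - 0.5*np.diag(np.ones(N - 1),  1)
         - 0.5*np.diag(np.ones(N - 1), -1))
    P[-1, -2], P[-1, -1] = -1.0, 1.0   # last row of P is (0, ..., 0, -1, 1)
    return np.kron(np.eye(N), A) + np.kron(P, K)

# for f, g < 0 all eigenvalues of M lie in the open left half plane
M = build_M(20, -1.0, -1.0)
print(np.linalg.eigvals(M).real.max())   # a small negative number
```

The rightmost eigenvalue is only O(N^{-2}) away from the imaginary axis, which already hints at the weak damping discussed later.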

3 Preliminary Results

We first want to ensure that the formation is stable, so that the cars effectively have a tendency to travel together. This means that the eigenvalues of M must have negative real part. The spectrum of the matrix P is given by (see [6], or also Appendix II below):
\[
\Big\{\, 1 - \cos\Big(\frac{(2\ell+1)\pi}{2N}\Big) \,\Big\}_{\ell \in \{0,\cdots,N-1\}}.
\tag{3.1}
\]

According to [2, 5, 8] the eigenvalues of M are given by the solutions of
\[
\nu^2 - \lambda g \nu - \lambda f = 0
\;\Longrightarrow\;
\nu_\pm = \frac12\Big(\lambda g \pm \sqrt{(\lambda g)^2 + 4\lambda f}\Big),
\]
where λ runs through the spectrum of P. Putting together these facts, one easily concludes that:

Proposition 3.1 The system is stabilized if and only if both f and g are strictly smaller than zero. Furthermore (assuming that N is large and f and g negative), the eigenvalues with smallest (in modulus) real part are given by
\[
\nu_{\ell\pm} = -\frac{(2\ell+1)^2\pi^2\,|g|}{16N^2} + O(N^{-4})
\;\pm\; i\,\frac{(2\ell+1)\pi\sqrt{|f|}}{2\sqrt2\,N} + O(N^{-3}).
\]

Proof: The first of these statements can be found in [6]. For the second statement, use the formulae above:
\[
\lambda_\ell = \frac{(2\ell+1)^2\pi^2}{8N^2} + O(N^{-4}).
\]
Substitute this into the formula for the roots. You need to assume that
\[
1 - \frac{(2\ell+1)^2\pi^2 g^2}{32 N^2 |f|} > 0,
\]
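The asymptotics of Proposition 3.1 can be checked directly against the exact roots; a minimal sketch for the slowest mode ℓ = 0 (the variable names are ours):

```python
import numpy as np

N, f, g = 50, -1.0, -1.0

# spectrum of P from Equation (3.1)
lam = 1.0 - np.cos((2*np.arange(N) + 1)*np.pi/(2*N))

# exact roots of nu^2 - lambda*g*nu - lambda*f = 0
nu = 0.5*(lam*g + np.lib.scimath.sqrt((lam*g)**2 + 4*lam*f))

# asymptotic real and imaginary parts of Proposition 3.1 for l = 0
re_asym = -np.pi**2*abs(g)/(16*N**2)
im_asym = np.pi*np.sqrt(abs(f))/(2*np.sqrt(2)*N)
print(nu[0], re_asym + 1j*im_asym)   # nearly identical
```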

which of course is true for N large.

Remark: We note that it would appear generally desirable to have the real parts of these eigenvalues as far away from zero as possible. This would imply cranking up |g|. Note that here, as well as in the cited papers, we assume that f and g are real. The possibility that they may be complex parameters is, to the best of our knowledge, unexplored at the date of this writing. We will from now on always assume that f and g are negative.

When the leader sustains a harmonic motion e^{iωt}, the asymptotic orbit of the k-th car is easily seen to be a_k e^{iωt}. The complex-valued function a_k(ω) is its frequency response.

Proposition 3.2 The frequency response function of the k-th car is given by
\[
a_k(f,g,\omega) = \frac{\mu_+^{N-k} + \mu_-^{N-k}}{\mu_+^{N} + \mu_-^{N}},
\quad\text{where}\quad
\mu_\pm \equiv \gamma \pm \sqrt{\gamma^2 - 1}
\quad\text{and}\quad
\gamma = \frac{f + i\omega g + \omega^2}{f + i\omega g}.
\]

Proof: The full details are in [6]. However, since the result here is slightly more general, we give an outline of the proof. The solution of the full system of equations is naturally given by
\[
z(t) = e^{Mt} z(0) + e^{Mt}\int_0^t e^{-Ms}\,\Gamma(s)\, ds.
\]
Since the system is stabilized, terms like e^{Mt} z(0) tend to zero exponentially fast (though the exponents may be small, see Proposition 3.1). Writing Γ(s) as g_0 · e^{iωs}, where g_0 is a 2N-vector, the integral can be performed, and, again neglecting exponentials, we get
\[
z(t) \longrightarrow -(M - i\omega)^{-1} g_0\, e^{i\omega t}.
\]
(Recall that M − iω is invertible: by Proposition 3.1 all eigenvalues of M have strictly negative real part.) Thus we are allowed to set z_k = a_k · e^{iωt}. This leads to a recursive equation for the a_k,
\[
a_{k-1} - 2\gamma a_k + a_{k+1} = 0,
\]
which in turn gives rise to a characteristic polynomial P(x) = 1 − 2γx + x², whose roots are of course precisely μ±. The hard part of the problem consists in manipulating the boundary conditions. These are given by:
\[
a_0 = 1 \qquad\text{and}\qquad -a_{N-1} + \gamma\, a_N = 0.
\]
We know the two local eigenvectors of the local Laplacian have the form
\[
v_\pm = \begin{pmatrix} 1 \\ \mu_\pm \end{pmatrix}.
\]
Here μ± are the roots of the characteristic equation. Set for the solution s = c_− v_− + c_+ v_+ and solve for c_±. Since we are primarily interested in a_N, we modify this procedure slightly. Set d_1 = c_− μ_−^N and d_2 = c_+ μ_+^N. Now we want to solve for a_N = d_1 + d_2. The boundary conditions can be rewritten as
\[
\begin{pmatrix} \mu_-^{-N} & \mu_+^{-N} \\ \gamma - \mu_-^{-1} & \gamma - \mu_+^{-1} \end{pmatrix}
\begin{pmatrix} d_1 \\ d_2 \end{pmatrix}
= \begin{pmatrix} 1 \\ 0 \end{pmatrix}.
\]
Use the characteristic polynomial to show that
\[
\gamma - \mu_\pm^{-1} = \frac{\mu_\pm - \mu_\pm^{-1}}{2},
\]
and use that μ_±^{-1} = μ_∓. Divide the second row by a common factor (which happens to tend to 0 as ω tends to 0) to get:
\[
\begin{pmatrix} \mu_-^{-N} & \mu_+^{-N} \\ 1 & -1 \end{pmatrix}
\begin{pmatrix} d_1 \\ d_2 \end{pmatrix}
= \begin{pmatrix} 1 \\ 0 \end{pmatrix}.
\]
Denote this last matrix by M; the same matrix with the first row replaced by ones, by M*. Then it is easy to see that
\[
a_N = \det M^* / \det M,
\]
which gives the desired answer. The form of the other response functions (a_k for k ≠ N) follows by noting that
\[
\begin{pmatrix} a_k \\ a_{k+1} \end{pmatrix} = c_+ \mu_+^k v_+ + c_- \mu_-^k v_-
\;\Longrightarrow\;
a_k = d_1\, \mu_-^{-(N-k)} + d_2\, \mu_+^{-(N-k)}.
\]
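The closed form of Proposition 3.2 can be checked against a direct numerical solution of the recursion with its boundary conditions; a sketch (both function names are ours):

```python
import numpy as np

def a_k_closed(k, N, f, g, omega):
    """a_k = (mu+^(N-k) + mu-^(N-k)) / (mu+^N + mu-^N), Proposition 3.2."""
    gamma = (f + 1j*omega*g + omega**2)/(f + 1j*omega*g)
    s = np.lib.scimath.sqrt(gamma**2 - 1.0)
    mu_p, mu_m = gamma + s, gamma - s
    return (mu_p**(N - k) + mu_m**(N - k))/(mu_p**N + mu_m**N)

def a_last_direct(N, f, g, omega):
    """Solve a_{k-1} - 2*gamma*a_k + a_{k+1} = 0 with a_0 = 1, -a_{N-1} + gamma*a_N = 0."""
    gamma = (f + 1j*omega*g + omega**2)/(f + 1j*omega*g)
    A = np.zeros((N, N), dtype=complex)     # unknowns a_1 .. a_N
    b = np.zeros(N, dtype=complex)
    for k in range(1, N):                   # interior recursion, rows 0 .. N-2
        A[k - 1, k - 1] = -2.0*gamma
        A[k - 1, k] = 1.0
        if k >= 2:
            A[k - 1, k - 2] = 1.0
    b[0] = -1.0                             # the known a_0 = 1 moves to the RHS
    A[N - 1, N - 2], A[N - 1, N - 1] = 1.0, -gamma   # trailing-car condition
    return np.linalg.solve(A, b)[-1]

print(a_k_closed(30, 30, -1.0, -1.0, 0.05))
print(a_last_direct(30, -1.0, -1.0, 0.05))   # same value
```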

We end the section by quoting a last result from [6]. Small divisor issues in the expression for a_N given in Proposition 3.2 may cause that expression to grow. Near cancelation in the denominator occurs when |μ_± − e^{±iπ/(2N)}| = O(N^{-2}). This is precisely what happens for certain ω close to zero, and this is called 'near resonance', or 'resonance' for short. Thus we have, in terms of the function a_N of ω (and f and g):

Theorem 3.3 For large flocks, (near) resonance only occurs for |ω| small, with the main resonance located at ω = ω_N, where
\[
\omega_N = \frac{\sqrt{|f|}\,\pi}{2\sqrt2\, N} + O(N^{-3}),
\]
and its peak size is given by
\[
a_N(\omega_N) = \frac{8\sqrt{2|f|}}{\pi^2\,|g|}\, N + O(N^{-1}).
\]

4 Main Results

As stated before, the canonical problem consists of deciding the consequences of having the leader start and go, say with constant acceleration a_0, and continue with constant velocity v_0 as soon as that velocity has been reached:
\[
\ddot x_0(t) = a_0\Big(H(t) - H\big(t - \tfrac{v_0}{a_0}\big)\Big),
\qquad \dot x_0(0) = 0, \qquad x_0(0) = 0,
\]
where H is the Heaviside function. One problem resides in the fact that x_0 is not a sum of harmonic terms; in fact, x_0 is not even integrable. To circumvent this, we consider the second derivative of the equations of motion (2.1). Set y_i ≡ \ddot x_i = \ddot z_i. We have:
\[
\forall i \in \{1,\cdots,N-1\}:\quad
\ddot y_i = f\big(y_i - \tfrac12(y_{i-1} + y_{i+1})\big) + g\big(\dot y_i - \tfrac12(\dot y_{i-1} + \dot y_{i+1})\big),
\qquad
\ddot y_N = f\{y_N - y_{N-1}\} + g\{\dot y_N - \dot y_{N-1}\},
\tag{4.1}
\]
and
\[
y_0 = a_0\Big(H(t) - H\big(t - \tfrac{v_0}{a_0}\big)\Big).
\]
This leads to the equation in vector form:
\[
\dot y = (I_N \otimes A + P \otimes K)\, y + L_0 \otimes K
\begin{pmatrix} y_0(t) \\ \dot y_0(t) \end{pmatrix}
= M y + \Gamma'(t).
\]
Similar to what is done in the proof of Proposition 3.2, we set Γ'(t) = \ddot x_0(t) · g_0, where g_0 is the 2N-vector as defined in that proof. We further define the Fourier transform of \ddot x_0(t):
\[
\ddot x_0(t) = \frac{1}{2\pi}\int q(\omega)\, e^{i\omega t}\, d\omega
\qquad\text{where}\qquad
q(\omega) \equiv a_0 \int_0^{v_0/a_0} e^{-i\omega t}\, dt.
\]
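The window integral q(ω) has the elementary closed form a_0 (1 − e^{−iω v_0/a_0})/(iω), and as v_0/a_0 → 0 it tends to v_0, a fact used later in the stop-and-go limit. A quick numerical check (the helper `q` is ours, not from the paper):

```python
import numpy as np
from scipy.integrate import quad

def q(omega, v0, a0):
    """Closed form of q(omega) = a0 * integral_0^{v0/a0} exp(-i*omega*t) dt."""
    T = v0/a0
    if omega == 0.0:
        return complex(v0)
    return a0*(1.0 - np.exp(-1j*omega*T))/(1j*omega)

v0, a0, w = 0.1, 10.0, 0.3
re, _ = quad(lambda t:  np.cos(w*t), 0.0, v0/a0)
im, _ = quad(lambda t: -np.sin(w*t), 0.0, v0/a0)
print(q(w, v0, a0), a0*(re + 1j*im))   # closed form matches the numerical integral
```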

The strategy is to solve this system, then integrate twice, and add the appropriate Galilean movement.

Proposition 4.1 (Linear Superposition) Let a_k be as given in Proposition 3.2. Then
\[
y_k = \frac{1}{2\pi}\int_{\mathbb R} a_k(\omega)\, q(\omega)\, e^{i\omega t}\, d\omega
\]
solves Equation (4.1), including boundary conditions.

Proof: According to the proof of that Proposition, the a_k satisfy
\[
a_{k-1} - 2\gamma a_k + a_{k+1} = 0,
\qquad a_0 = 1,
\qquad -a_{N-1} + \gamma\, a_N = 0.
\]
Using differentiation under the integral sign and linearity of the integral, one sees that this implies that y = (y_1(t), \dot y_1(t), y_2(t), · · · , \dot y_N(t))^T solves Equation (4.1), including boundary conditions.

The function a_N(ω) is rather complicated (see Figure 4.1). We thus need to find an approximation of it.

Figure 4.1: A close-up of the real and imaginary parts of a_N(ω) for f = g = −1.

This is accomplished in the Theorem below by means of the linear functions
\[
\begin{aligned}
D_{\ell+} &= (-1)^\ell \Big[ -\frac{\pi}{\omega_N}\big(\omega - (2\ell+1)\omega_N\big) + 2i\,(2\ell+1)^2 r_0\, \omega_N^2 N \Big],\\
D_{\ell-} &= (-1)^\ell \Big[ \phantom{-}\frac{\pi}{\omega_N}\big(\omega + (2\ell+1)\omega_N\big) - 2i\,(2\ell+1)^2 r_0\, \omega_N^2 N \Big],
\end{aligned}
\]
where we have set
\[
\omega_N = \frac{\pi}{2}\sqrt{\frac{|f|}{2}}\;\frac1N
\qquad\text{and}\qquad
r_0 = \frac{|g|}{\sqrt2\,|f|^{3/2}}.
\]

Theorem 4.2 The function a_N(ω) can be approximated by
\[
a_N(\omega) = \sum_{\ell=0}^{\ell_0} \Big( \frac{2}{D_{\ell+}} + \frac{2}{D_{\ell-}} \Big).
\tag{4.2}
\]
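The quality of the approximation in Theorem 4.2 is easy to probe numerically: near the main resonance the exact a_N and the sum of 2/D_{ℓ±} terms agree to about a percent. A sketch under the paper's parameters f = g = −1 (the function names are ours):

```python
import numpy as np

def aN_exact(N, f, g, omega):
    """Exact frequency response of the trailing car (Proposition 3.2, k = N)."""
    gamma = (f + 1j*omega*g + omega**2)/(f + 1j*omega*g)
    s = np.lib.scimath.sqrt(gamma**2 - 1.0)
    return 2.0/((gamma + s)**N + (gamma - s)**N)

def aN_approx(N, f, g, omega, l0):
    """Sum of the 2/D_{l,+-} terms of Theorem 4.2, up to l0."""
    wN = 0.5*np.pi*np.sqrt(0.5*abs(f))/N
    r0 = abs(g)/(np.sqrt(2.0)*abs(f)**1.5)
    total = 0.0 + 0.0j
    for l in range(l0 + 1):
        c = 2.0j*(2*l + 1)**2*r0*wN**2*N
        Dp = (-1)**l*(-(np.pi/wN)*(omega - (2*l + 1)*wN) + c)
        Dm = (-1)**l*( (np.pi/wN)*(omega + (2*l + 1)*wN) - c)
        total += 2.0/Dp + 2.0/Dm
    return total

N, f, g = 100, -1.0, -1.0
wN = 0.5*np.pi*np.sqrt(0.5*abs(f))/N
# both magnitudes are close to the Theorem 3.3 peak value 8*sqrt(2|f|)*N/(pi^2*|g|)
print(abs(aN_exact(N, f, g, wN)), abs(aN_approx(N, f, g, wN, 6)))
```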

Remark: The exact sense in which a_N(ω) may be approximated is left out of this statement for the following reason. The numerical reality far exceeds anything that we can prove here, and so it seems reasonable to suppress these technicalities in the statement. However, to the extent that we can specify it, the nature of the approximation can be extracted from the proof below if desired.

Proof: Start with the convention
\[
\mu_\pm(\omega) \equiv \big(1 + r_\pm(\omega)\big)\, e^{i\theta_\pm(\omega)}.
\]
We need to calculate a_N(ω) (see Proposition 3.2). The idea is to find a piecewise approximation for the denominator
\[
D(\omega) \equiv \big(1 + r_+(\omega)\big)^N e^{iN\theta_+(\omega)} + \big(1 + r_-(\omega)\big)^N e^{iN\theta_-(\omega)}.
\]
Then from [6] copy that (using the notation given above):
\[
r_\pm(\omega) = \pm r_0\,\omega^2 + O(\omega^4),
\qquad
\theta_\pm(\omega) = \pm\frac{\pi}{2N}\,\frac{\omega}{\omega_N} + O(\omega^3).
\]
It follows that outside an O(N^{-1/2}) neighborhood of ω = 0 we have that r(ω) is O(N^{-1}) or bigger, and thus a_N(ω) will be small. Within this neighborhood of 0 there are certain values of ω where the two terms of D nearly cancel, and thus where a_N has sharp peaks. This of course happens wherever
\[
N\theta_+(\omega_{N,\ell\pm}) = \pm\frac{(2\ell+1)\pi}{2}.
\]
Observe that the ω_N given in Theorem 3.3 and in Equation (4.2) is indeed the (principal) resonance frequency (see [6]), and furthermore that the harmonics are (neglecting higher order terms) given by:
\[
\omega_{N,\ell\pm} = \pm(2\ell+1)\,\omega_N.
\]
Thus there are O(N^{1/2}) peaks in the O(N^{-1/2}) neighborhood of ω = 0. We thus approximate D in the neighborhood of ω_{N,ℓ±} by a function we call D_{ℓ±}. Use Lemma 6.1, the definition of the ω_{N,ℓ+}, and the fact that θ_− = −θ_+ (because the product of the μ's is 1), to see that for ω ∈ (2ℓ, 2ℓ+2)ω_N:
\[
D_{\ell+} = (-1)^\ell i \Big[ \big(1 + Nr_+ + O(r_+^2)\big)\, e^{iN[\theta_+(\omega) - (2\ell+1)\frac{\pi}{2N}]}
- \big(1 + Nr_- + O(r_-^2)\big)\, e^{-iN[\theta_+(\omega) - (2\ell+1)\frac{\pi}{2N}]} \Big].
\]
Of course for D_{ℓ−} we obtain a very similar expression for ω ∈ (−2ℓ−2, −2ℓ)ω_N:
\[
D_{\ell-} = (-1)^{\ell+1} i \Big[ \big(1 + Nr_+ + O(r_+^2)\big)\, e^{iN[\theta_+(\omega) + (2\ell+1)\frac{\pi}{2N}]}
- \big(1 + Nr_- + O(r_-^2)\big)\, e^{-iN[\theta_+(\omega) + (2\ell+1)\frac{\pi}{2N}]} \Big].
\]
For ω ∈ (2ℓ, 2ℓ+2)ω_N we approximate using D_{ℓ+} and use the second part of Lemma 6.1 to simplify:
\[
e^{iN[\theta_+(\omega) - (2\ell+1)\frac{\pi}{2N}]}
= e^{\frac{i\pi}{2\omega_N}(\omega - (2\ell+1)\omega_N) + N\,O(\omega^3)}
= e^{\frac{i\pi}{2\omega_N}(\omega - (2\ell+1)\omega_N)} + O(\omega^2).
\]
Dropping O(ω²)-terms, we can now write:
\[
\begin{aligned}
D_{\ell+} &= (-1)^\ell i \Big[ \big(1 + N r_0 \omega^2\big)\, e^{\frac{i\pi}{2\omega_N}(\omega - (2\ell+1)\omega_N)}
- \big(1 - N r_0 \omega^2\big)\, e^{-\frac{i\pi}{2\omega_N}(\omega - (2\ell+1)\omega_N)} \Big],\\
D_{\ell-} &= (-1)^{\ell+1} i \Big[ \big(1 + N r_0 \omega^2\big)\, e^{\frac{i\pi}{2\omega_N}(\omega + (2\ell+1)\omega_N)}
- \big(1 - N r_0 \omega^2\big)\, e^{-\frac{i\pi}{2\omega_N}(\omega + (2\ell+1)\omega_N)} \Big].
\end{aligned}
\]
The last step is to restrict ω to the narrow intervals around ω_{N,ℓ±}. For instance, for D_{ℓ+} approximated in the interval ω − ω_{N,ℓ+} ∈ (−const·N^{-3/2}, const·N^{-3/2}), we perform the following steps. Replace ω² by (2ℓ+1)²ω_N² + O(ν/N) + O(ν²), and e^{\frac{iπν}{2ω_N}} = 1 + \frac{iπν}{2ω_N} + O(N²ν²). Retaining only the lowest order, we then obtain:
\[
D_{\ell+} = (-1)^\ell\Big( -\frac{\pi}{\omega_N}\,\nu + 2i(2\ell+1)^2 r_0\, \omega_N^2 N \Big)
+ O(\nu) + O(N^2\nu^2) + O(\omega^2).
\]
(Here we have set ν ≡ ω − ω_{N,ℓ+} = ω − (2ℓ+1)ω_N.) This leads to the conclusion that in the stated interval a_N(ω) is approximated by
\[
a_N(\omega) = \frac{2}{D_{\ell+}}.
\]
A similar calculation can be done for D_{ℓ−}.

Remark: The estimates in Theorem 3.3 are nothing but the locus and the value of the maximum of |2/D_{0+}|.

Figure 4.2: The parameters in this picture are N = 100, f = g = −1. Yellow: |a_N|; red: |a_100 minus the sum of Theorem 4.2 with ℓ_0 = 0|; green: the same with ℓ_0 = 4; blue: with ℓ_0 = 6. The upper bound ℓ_0 in the sum counts the number of peaks of a_N that are well approximated.

Definition 4.3 Under the following conditions we speak of the stop-and-go limit:
• N is very large.
• v_0/a_0 is very small.

• The times considered are less than a few times the fundamental period: t < K · 2π/ω_N, where K is a small positive integer.

To take a formal limit as N → ∞ seems impossible, since the 'final' boundary condition in Equation (2.1) cannot be formulated on an infinite set of cars.

Theorem 4.4 In the stop-and-go limit, the orbit of the trailing car tends to:
\[
x_N(t) = x_N(0) + v_0 t
- \frac{8\sqrt2}{\pi^2\,|f|^{1/2}}\, N v_0
\sum_{\ell=0}^{\infty} (-1)^\ell\, e^{-\frac{(2\ell+1)^2\pi^2|g|}{16N^2}\,t}\,
\frac{\sin\big((2\ell+1)\omega_N t\big)}{(2\ell+1)^2},
\]
where the decay rates and frequencies come from the eigenvalues ν_{ℓ±} of M given in Proposition 3.1.

Proof: From Proposition 4.1 we have that
\[
\ddot x_N = \frac{1}{2\pi}\int_{\mathbb R} a_N(\omega)\, q(\omega)\, e^{i\omega t}\, d\omega.
\]
Theorem 4.2 allows us to replace this with
\[
\ddot x_N \approx \frac{1}{2\pi}\int \sum_{\ell=0}^{\ell_0} \Big( \frac{2}{D_{\ell+}} + \frac{2}{D_{\ell-}} \Big)\, e^{i\omega t}\, q(\omega)\, d\omega.
\]
Switch summation and integration. Note that for ω ≫ 1/√N the function a_N tends to zero exponentially fast, rendering that part of the integral zero. In the stop-and-go limit we can thus set q(ω) = v_0. Furthermore, since the ℓ_0 of Theorem 4.2 is of order √N, it will be big, and we replace it by ∞ (as long as the sum converges, we make an error that tends to 0 as N tends to ∞ by doing so). This yields
\[
\ddot x_N \approx v_0 \sum_{\ell=0}^{\infty} \Big( \frac{1}{2\pi}\int \frac{2 e^{i\omega t}}{D_{\ell-}}\, d\omega
+ \frac{1}{2\pi}\int \frac{2 e^{i\omega t}}{D_{\ell+}}\, d\omega \Big).
\]
Using Lemma 6.2 we conclude (H is the Heaviside function):
\[
\ddot x_N \approx v_0 H(t) \sum_{\ell=0}^{\infty} (-1)^\ell\,
\frac{4\omega_N}{\pi}\, e^{-\frac{2(2\ell+1)^2 r_0 \omega_N^3 N}{\pi}\,t}\,
\sin\big((2\ell+1)\omega_N t\big).
\]
We need to integrate twice. Even doing this term-wise is a rather complicated affair, due to the fact that e^{at} sin(bt) is not conveniently twice integrated. However, the constant 2(2ℓ+1)² r_0 ω_N³ N/π is of order N^{-2}, and for the purpose of integrating we may consider it zero. Thus, performing the integrations, we get
\[
x_N(t) - \text{Galilean} \approx \frac{4 v_0}{\pi\omega_N}
\sum_{\ell=0}^{\infty} (-1)^{\ell+1}\, e^{-\frac{2(2\ell+1)^2 r_0 \omega_N^3 N}{\pi}\,t}\,
\frac{\sin\big((2\ell+1)\omega_N t\big)}{(2\ell+1)^2}.
\tag{4.3}
\]
The right hand side of this equality is an exponentially decaying term. Since we know that the in formation motion is globally attracting (see [5]), the 'Galilean' term of this motion is entirely determined by the asymptotic motion of the leader, and in particular its velocity must be equal to the leader's asymptotic velocity v_0. The initial position must of course be equal to x_N(0) (notice how this solution assumes the leader reaches cruise velocity instantaneously). Substituting the constants and adding the Galilean motion gives the desired result.
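Theorem 4.4 can be compared with a direct integration of the original equations of motion (2.1). The sketch below (our implementation, with N = 50 rather than the paper's N = 100 to keep it fast) starts the leader from rest and checks the headline prediction: the trailing car stays essentially put for about a quarter period T_r = √2 N/√|f| before moving:

```python
import numpy as np
from scipy.integrate import solve_ivp

N, f, g, v0, a0 = 50, -1.0, -1.0, 0.1, 10.0
h = -np.arange(N + 1, dtype=float)        # desired positions: leader at 0, unit spacing

def leader(t):
    """Leader position and velocity: constant acceleration a0 up to cruise speed v0."""
    T = v0/a0
    if t < T:
        return 0.5*a0*t*t, a0*t
    return v0*t - 0.5*v0*T, v0

def rhs(t, y):
    x = np.empty(N + 1); u = np.empty(N + 1)
    x[0], u[0] = leader(t)
    x[1:], u[1:] = y[0::2], y[1::2]
    z = x - h                              # deviation from the desired configuration
    acc = np.empty(N + 1)
    acc[1:N] = (f*(z[1:N] - 0.5*(z[:N-1] + z[2:]))
                + g*(u[1:N] - 0.5*(u[:N-1] + u[2:])))
    acc[N] = f*(z[N] - z[N-1]) + g*(u[N] - u[N-1])
    dy = np.empty(2*N)
    dy[0::2], dy[1::2] = u[1:], acc[1:]
    return dy

y0 = np.empty(2*N); y0[0::2], y0[1::2] = h[1:], 0.0   # cars at rest, in formation
Tr = np.sqrt(2.0)*N/np.sqrt(abs(f))                    # predicted response time
sol = solve_ivp(rhs, (0.0, 2.0*Tr), y0, max_step=0.5, rtol=1e-6, atol=1e-9)
xN = sol.y[2*(N - 1)]                                  # position of the trailing car
print(xN[np.searchsorted(sol.t, 0.5*Tr)] - h[N])       # still near 0 at t = Tr/2
print(xN[-1] - h[N])                                   # roughly v0 * 2 * Tr by t = 2*Tr
```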

Figure 4.3: Comparison with Theorem 4.4. The ℓ_0 = 6 approximation coincides almost perfectly with the numerics (shown together with the ℓ_0 = 0 and ℓ_0 = 4 theoretical curves). In order to exhibit the result better we have plotted the difference x_N(t) − x_0(t) in the case where f = g = −1, N = 100, v_0 = 0.1, and a_0 = 10.

Remark: Also here it is appropriate to say that the approximation is spectacularly better than one might initially guess (see Figure 4.3).

Remark: The expression in Equation (4.3) can also — more suggestively — be written in terms of the eigenvalues given in Proposition 3.1:
\[
x_N(t) - \text{Galilean} \approx \frac{2 v_0}{i\pi\omega_N}
\sum_{\ell=0}^{\infty} \frac{(-1)^{\ell+1}}{(2\ell+1)^2}
\Big( e^{\nu_{\ell+} t} - e^{\nu_{\ell-} t} \Big).
\tag{4.4}
\]

5 Discussion of Results

In this Section we will assume in expressions like Equation (4.3) that the time variable t has a value greater than 0. We can then drop the Heaviside function from the equations.

Definition 5.1 Denote amplitude, leader velocity, and resonant frequency by A, v_0, and ω_N. The response time T_r is the first time at which the trailing car and the leader have the same velocity.

One of the more striking conclusions from numerical experiments (in the stop-and-go limit) is that after the leader starts moving, the last car stands still for about a quarter of a period and then starts moving, even though no delay has been built into these equations. More precisely: the response time (of the last car) is a quarter of the fundamental period. In fact, the response time is very close to linear in the car number: car k will have a response time that equals k/N times that of the last car. In this sense we can speak of a signal velocity v_sig, which is equal to N/T_r. In this situation, the last car tends to stand still for a while, then move fast, and then again tends to stand still (or move very slowly). The following Proposition explains how this comes about mathematically.

Proposition 5.2 Assume the stop-and-go limit. In the reference frame of the leader (that is: seen from a car that goes at constant velocity v_0), the orbit of the trailing car is a sawtooth function (see Figure 4.3) with amplitude A = πv_0/(2ω_N) and velocity v_0.

Proof: In the stop-and-go limit, and neglecting the exponential decay, the orbit of the trailing car (see Theorem 4.4) tends to:
\[
x_N(t) \approx \text{Galilean} + \frac{4v_0}{\pi\omega_N}
\sum_{\ell=0}^{\infty} (-1)^{\ell+1}\,
\frac{\sin\big((2\ell+1)\omega_N t\big)}{(2\ell+1)^2}.
\]
The periodic part of this motion is in fact the familiar sawtooth function. Its amplitude can be determined as follows: at t = π/(2ω_N) each term (−1)^{ℓ+1} sin((2ℓ+1)ω_N t) is minimized and equal to −1. Using that
\[
\sum_{\ell=0}^{\infty} \frac{-1}{(2\ell+1)^2} = \frac{-\pi^2}{8},
\]
the expression for the amplitude follows immediately. Differentiating the periodic part of x_N(t), we obtain its velocity (relative to the leader):
\[
\dot x_N(t) - v_0 \approx \frac{4v_0}{\pi}
\sum_{\ell=0}^{\infty} (-1)^{\ell+1}\,
\frac{\cos\big((2\ell+1)\omega_N t\big)}{2\ell+1}.
\]
This expression has an extremum at t = 0, and so, using that
\[
\sum_{\ell=0}^{\infty} \frac{(-1)^{\ell+1}}{2\ell+1} = \frac{-\pi}{4},
\]
we get that \dot x_N(0) − v_0 = −v_0. In the stop-and-go limit the last car will have a velocity that alternates between the values 0 and 2v_0.

This explains the tendency for traffic to be stop-and-go (and the name of the limit), even though the leader happily sails along at constant velocity v_0, blissfully unaware — as (s)he receives no information — of the troubles others are having. This limit is not altogether physically reasonable, as clearly accelerations tend to become very large in the regions of the velocity changes. Nonetheless, this predicts a tendency for stop-and-go traffic to develop in certain situations.

Exactly the same phenomena will occur when the flock travels at constant velocity and the leader decides to decelerate to a stop. It is easy to see that our model will in this situation give rise to cars traveling backwards, which is unrealistic. However, if one imposes that the cars stop as soon as their velocity has reached zero (and do not go in reverse), we get an estimate for the safe distance cars need to keep, so that the leader can stop without causing collisions. Assume the cars were traveling in formation, and thus that the distance between the leader and the N-th car was given by (see Equation (2.1)) h_0 − h_N. Now suppose the leader starts braking at t = 0, and assume the stop-and-go limit. Everyone has stopped at t = T_r, and
\[
x_0(T_r) - x_N(T_r) = h_0 - h_N - A = h_0 - h_N - \frac{\sqrt2\, v_0\, N}{\sqrt{|f|}}.
\tag{5.1}
\]
Clearly, in the design of this algorithm we want this number to be positive to avoid collisions, even though the stopping distance of a single car (in the stop-and-go limit) approaches zero.
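Equation (5.1) translates into a simple design calculation; a minimal sketch (the helper `safety_margin` is ours, not from the paper):

```python
import numpy as np

def safety_margin(h0_minus_hN, v0, f, N):
    """x_0(T_r) - x_N(T_r) of Equation (5.1): must stay positive to avoid collisions."""
    A = np.sqrt(2.0)*v0*N/np.sqrt(abs(f))   # sawtooth amplitude of Proposition 5.2
    return h0_minus_hN - A

# unit spacing (formation length N) with the parameters of the numerics:
N, v0, f = 100, 0.1, -1.0
print(safety_margin(N, v0, f, N))   # 100 - sqrt(2)*0.1*100, about 85.9
```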

Figure 5.1: The function −Σ_{ℓ∈ℕ} (−1)^ℓ sin((2ℓ+1)ω_N t)/(2ℓ+1)² and its derivative.
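The series plotted in Figure 5.1 is a classical Fourier series: on |ω_N t| ≤ π/2 it sums to a straight line of slope π/4, which is exactly why the trailing car's orbit looks piecewise linear. A quick numerical check (the partial-sum helper is ours):

```python
import numpy as np

def tri_series(x, L=20000):
    """Partial sum of sum_{l>=0} (-1)^l sin((2l+1)x)/(2l+1)^2."""
    l = np.arange(L)
    return float(np.sum((-1.0)**l*np.sin((2*l + 1)*x)/(2*l + 1)**2))

# on |x| <= pi/2 the series equals the linear piece (pi/4)*x of a triangle wave
for x in (0.0, 0.3, 1.0):
    print(x, tri_series(x), np.pi*x/4.0)
```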

We summarize the main dynamical characteristics in the Table below and comment on the design of an algorithm for the automated pilot. While any likely algorithm for designing automated traffic would almost certainly be nonlinear, the linearization of the equations around an in formation solution would nonetheless yield a system similar to the one studied here. The thing to avoid is to have nonlinear terms 'fight' bad linearizations, thereby probably burning up large amounts of precious fuel.

Quantity                        | Symbol    | Value Expressed in Parameters     | Other Relations
--------------------------------|-----------|-----------------------------------|--------------------------------------------
Resonance Freq.                 | ω_N       | (π√|f|)/(2√2) · N^{-1}            | ω_N = Im ν_{0+}
Freq. Resp. Fn. (for last car)  | a_N       | —                                 | Peaks of size ∝ N at ω ∝ 1/N; exponentially (in N) small elsewhere
Peak Ampl. of 1st Res.          | a_N(ω_N)  | (8√(2|f|))/(π²|g|) · N            | See Theorem 3.3
Peak Ampl. Canon. Traffic Probl.| A         | (√2 v_0/√|f|) · N                 | A = πv_0/(2ω_N)
Leading Lyapunov Expon.         | ν_0       | −(π²|g|/16) · N^{-2}              | See Proposition 3.1
Response Time                   | T_r       | (√2/√|f|) · N                     | T_r = A/v_0 = π/(2ω_N)
Conjectured Signal Velocity     | v_sig     | √|f|/√2                           | v_sig = N/T_r

The design of the algorithm consists in choosing the parameters. We note that the most obvious way to implement Equation (5.1) is to keep the desired distances h_i − h_{i+1} (see Equation (2.1)) between cars constant and to require
\[
h_i - h_{i+1} > \frac{\sqrt2\, v_0}{\sqrt{|f|}}.
\]
In the ideal design one would also want A and a_N(ω_N) to be small (see the Table). The role of |f| here is rather complicated, and the choice of its value might depend on what kind of perturbations the system is expected to suffer. The role of g is less complicated, and it appears wise to generally assign it a large (absolute) value. By far the worst drawback of our linear design is the weak attenuation (for large N) of the oscillation, due to the (leading) Lyapunov exponent ν_0, which is ∝ |g|N^{-2} (see Figure 5.2). It is clear that also here we want |g| to be as large as possible to damp oscillations for large N.
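The table can be turned into a small design calculator (our helper; note that for N = 100 and g = −1 it reproduces the decay rate −π²/16 · 10^{-4} ≈ −6.17 · 10^{-5} quoted with Figure 5.2):

```python
import numpy as np

def characteristics(N, f, g, v0):
    """Quantities of the summary table as functions of the design parameters."""
    wN   = np.pi*np.sqrt(abs(f))/(2.0*np.sqrt(2.0)*N)    # resonance frequency
    peak = 8.0*np.sqrt(2.0*abs(f))/(np.pi**2*abs(g))*N   # peak amplitude of 1st resonance
    A    = np.sqrt(2.0)*v0*N/np.sqrt(abs(f))             # amplitude, canonical problem
    nu0  = -np.pi**2*abs(g)/(16.0*N**2)                  # leading Lyapunov exponent
    Tr   = np.sqrt(2.0)*N/np.sqrt(abs(f))                # response time (quarter period)
    vsig = np.sqrt(abs(f))/np.sqrt(2.0)                  # conjectured signal velocity
    return wN, peak, A, nu0, Tr, vsig

wN, peak, A, nu0, Tr, vsig = characteristics(100, -1.0, -1.0, 0.1)
print(nu0)   # about -6.1685e-05
```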

Figure 5.2: Calculation of the decay rate ln(A_{i+2}/A_i)/(T_{i+2} − T_i) of the oscillation. The numerical values converge to the predicted one, ν_0 ≈ −6.2 · 10^{-5} (here f = g = −1).

The "∝ N^{-2}" term in the Lyapunov exponent is ultimately due to the nature of the spectrum of P or L (see Section 3): the exponent is directly related to the smallest eigenvalue of P. For undirected communication graphs this number is sometimes called the Fiedler number, and it is well known that its value is an algebraic measure of the connectivity of the graph (see for example [4]). In this study we rely on extensions of these ideas to directed graphs (see [1] and [6], [3]). The difficulty of course resides in finding graph Laplacians with 'better' eigenvalues that are still compatible with traffic models. It is interesting to note that some of the most naive changes in the Laplacian immediately lead to exponential (instead of linear) growth of trouble for outlying cars (see [7]). Of course, in linear systems exponential behavior is no surprise. The surprise is that linear (in N) growth of perturbations appears to be very hard to beat by linear models. Either exponential or bounded behavior is of course what we are taught to generally expect from linear systems (barring Jordan normal forms). Here we are in a situation where, if the number of individuals is large and variable, the growth of perturbations appears to be at least linear (even though they die out over time). The challenge is to decide whether it is possible to beat linear (in N) growth of perturbations.

6 Appendix I

Lemma 6.1 Let C > 0 be such that for large N we have 0 ≤ Nρ(x), Nρ(y) ≤ C (the range of x and y depends on N). Then
\[
(1 + \rho(x))^N - (1 + N\rho(x)) \le e^{C\rho(x)} - 1 - C\rho(x) = O(\rho^2).
\]

Proof: This can easily be established using the binomial formula. Namely:
\[
(1 + \rho(x))^N - (1 + N\rho(x))
= \sum_{k=2}^{N} \frac{N\cdots(N-k+1)}{N^k}\,\frac{(N\rho(x))^k}{k!}
\le \sum_{k=2}^{N} \frac{(N\rho(x))^k}{k!}.
\]
Last, apply Taylor's Theorem to f(ρ) ≡ e^{Cρ} − 1 − Cρ and use the inequality Nρ < C.

Lemma 6.2 Let H be the Heaviside function. Then
\[
\frac{1}{2\pi}\int
\frac{2\, e^{i\omega t}}
{\pm(-1)^\ell\Big(-\frac{\pi}{\omega_N}\big(\omega \mp (2\ell+1)\omega_N\big) + 2i(2\ell+1)^2 r_0\, \omega_N^2 N\Big)}
\, d\omega
= \pm(-1)^{\ell+1}\,\frac{2i\omega_N}{\pi}\,
e^{-\frac{2(2\ell+1)^2 r_0 \omega_N^3 N}{\pi}\,t}\,
e^{\pm i(2\ell+1)\omega_N t}\, H(t).
\]

Proof: If we simplify this a bit, we get (in one of the two cases):
\[
f(t) \equiv \frac{1}{2\pi}\int_\ell \frac{2\, e^{i\omega t}}{-a(\omega - z) + ib}\, d\omega,
\]
where a, b, and z are strictly positive. Let ℓ be the (oriented) real line and m its translate by 2ib/a. The integrand has its only pole at ω = z + ib/a, halfway between ℓ and m, so the contour integral
\[
\frac{1}{2\pi}\oint_{\ell - m} \frac{2\, e^{i\omega t}}{-a(\omega - z) + ib}\, d\omega
\]
is given by the residue there. The integral along −m is easily evaluated by substituting x for ω, where a(ω − z) = 2ib − a(x − z); this turns it into a multiple of f(−t). But in our context all time signals are 'causal' (they are zero for negative times), so this term gives zero. The residue is evaluated by setting a(ω − z) − ib = ρe^{iφ} and taking the limit as ρ tends to zero. We obtain
\[
f(t) = \frac{1}{2\pi}\, 2\, e^{i(z + \frac{ib}{a})t}
\int_0^{2\pi} \frac{i\rho\, e^{i\phi}}{a\,(-\rho\, e^{i\phi})}\, d\phi
= \frac{-2i}{a}\, e^{-\frac{b}{a}t + izt}\, H(t).
\]

Appendix II (with A. Olvera, IIMAS-UNAM, Mexico)

Proof of Equation (3.1): We need to calculate the numbers λ_ℓ = 1 − r_ℓ, where the r_ℓ (for ℓ ∈ {0, . . . , N−1}) are the eigenvalues of:
\[
I - P = \begin{pmatrix}
0 & 1/2 & 0 & 0 & \cdots \\
1/2 & 0 & 1/2 & 0 & \cdots \\
0 & 1/2 & 0 & 1/2 & \cdots \\
 & & \ddots & \ddots & \ddots \\
\cdots & 0 & 0 & 1 & 0
\end{pmatrix}.
\]
Let v_n be defined for n ∈ {0, . . . , N+1}. Then the equation defining the ℓ-th (ℓ ∈ {0, · · · , N−1}) eigenvector can be rewritten as:
\[
r_\ell\, v_n = \tfrac12\,(v_{n-1} + v_{n+1}) \quad\text{for } n \in \{1,\cdots,N\},
\qquad\text{where } v_0 = 0 \ \text{ and } \ v_{N+1} = v_{N-1}.
\tag{7.1}
\]
The recursion equation (first without the boundary conditions) can naturally be seen as a linear system whose matrix is:
\[
\begin{pmatrix} 0 & 1 \\ -1 & 2r_\ell \end{pmatrix}.
\]
Use the Ansatz that r_ℓ ∈ [−1, 1]; one can then check that the eigenvalues of this last matrix must be conjugates and on the unit circle. It is easy to see that thus
\[
v_n = c\,\big(e^{in\phi_\ell} - e^{-in\phi_\ell}\big)
\qquad\text{and}\qquad
\phi_\ell \ne \sigma\pi \ (\sigma \in \mathbb Z)
\tag{7.2}
\]
solves the recursion together with the first boundary condition. (The extra condition serves to ensure that the v_n are different from zero.) We may take c = 1 without loss of generality. From the recursion in Equation (7.1), we calculate r_ℓ:
\[
r_\ell\, v_n = \tfrac12\big( e^{i(n+1)\phi} + e^{i(n-1)\phi} - e^{-i(n-1)\phi} - e^{-i(n+1)\phi} \big)
= \cos(\phi)\, v_n
\;\Longrightarrow\; r_\ell = \cos\phi_\ell.
\]
The values of r_ℓ are now entirely determined by the second boundary condition of Equation (7.1):
\[
0 = v_{N+1} - v_{N-1}
= e^{i(N+1)\phi} - e^{i(N-1)\phi} + e^{-i(N-1)\phi} - e^{-i(N+1)\phi}
= 4i \sin\phi \,\cos N\phi.
\]
Together with the condition in Equation (7.2), which rules out sin φ = 0, this gives cos Nφ_ℓ = 0 and thus φ_ℓ = (2ℓ+1)π/(2N). (Notice that this justifies the Ansatz.) This yields the desired eigenvalues
\[
\lambda_\ell = 1 - \cos\Big(\frac{(2\ell+1)\pi}{2N}\Big).
\]

Remark: This material is added for completeness. We do not claim originality: in fact, our reasoning is based on the exercises on pages 216-218 of [11].
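The eigenvalues derived above are easy to confirm numerically; a minimal sketch:

```python
import numpy as np

N = 40
P = (np.eye(N)
     - 0.5*np.diag(np.ones(N - 1),  1)
     - 0.5*np.diag(np.ones(N - 1), -1))
P[-1, -2], P[-1, -1] = -1.0, 1.0          # last row of P: (0, ..., 0, -1, 1)

computed  = np.sort(np.linalg.eigvals(P).real)
predicted = np.sort(1.0 - np.cos((2*np.arange(N) + 1)*np.pi/(2*N)))
print(np.max(np.abs(computed - predicted)))   # essentially zero
```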

References

[1] R. Agaev, P. Chebotarev, On the spectra of nonsymmetric Laplacian matrices, Linear Algebra Appl., 399, 157-168, 2005.
[2] John S. Caughman, G. Lafferriere, J. J. P. Veerman, A. Williams, Decentralized Control of Vehicle Formations, Systems & Control Letters, 54, 899-910, 2005.
[3] John S. Caughman, J. J. P. Veerman, Kernels of Directed Graph Laplacians, Electr. J. Comb. 13, Vol 1, R39, 2006.
[4] F. R. K. Chung, Spectral Graph Theory, Providence, RI: Amer. Math. Soc., 1997.
[5] J. J. P. Veerman, John S. Caughman, G. Lafferriere, A. Williams, Flocks and Formations, J. Stat. Phys. 121, Vol 5-6, 901-936, 2005.
[6] J. J. P. Veerman, B. D. Stosic, A. Olvera, Spatial Instabilities and Size Limitations of Flocks, Networks and Heterogeneous Media 2, Vol 4, 647-660, 2007.
[7] R. Manzo, L. Rarità, J. J. P. Veerman, et al., What is the Use of a Rearview Mirror?, In Preparation.
[8] A. Williams, G. Lafferriere, J. J. P. Veerman, Stable Motions of Vehicle Formations, Proc. 44th Conference of Decision and Control, 72-77, 12-15 Dec. 2005.
[9] M. Treiber, A. Hennecke, D. Helbing, Congested traffic states in empirical observations and microscopic simulations, Phys. Rev. E 62, Vol 2, 1805-1824, 2000.
[10] Some beautiful photographs of these phenomena can be found in http://epod.usra.edu/archive/images/sortsolsum-05042006-hw.jpg
[11] G. L. Kotkin, V. G. Serbo, Problemas de Mecánica Clásica, Mir, Moscow, 1980.