Spectral dimension and random walks on the two dimensional uniform spanning tree

Martin T. Barlow* and Robert Masson†

December 23, 2009

* Research partially supported by NSERC (Canada) and by the Peter Wall Institute of Advanced Studies (UBC).
† Research partially supported by NSERC (Canada).

Abstract

We study simple random walk on the uniform spanning tree on Z². We obtain estimates for the transition probabilities of the random walk, the distance of the walk from its starting point after n steps, and exit times of both Euclidean balls and balls in the intrinsic graph metric. In particular, we prove that the spectral dimension of the uniform spanning tree on Z² is 16/13 almost surely.

Keywords: uniform spanning tree, loop erased random walk, random walk on a random graph
Subject Classification: 60G50, 60J10

1 Introduction

A spanning tree of a finite graph G = (V, E) is a connected subgraph of G which is a tree and has vertex set V. A uniform spanning tree in G is a random spanning tree chosen uniformly from the set of all spanning trees. Let Q_n = [−n, n]^d ⊂ Z^d, and write U_{Q_n} for a uniform spanning tree on Q_n. Pemantle [Pem91] showed that the weak limit of U_{Q_n} exists and is connected if and only if d ≤ 4. (He also showed that the limit does not depend on the particular sequence of sets Q_n chosen, and that 'free' or 'wired' boundary conditions give rise to the same limit.) We will be interested in the case d = 2, and will call the limit the uniform spanning tree (UST) on Z² and denote it by U. For further information on USTs, see for example [BLPS01, BKPS04, Lyo98]. The UST can also be obtained as a limit as p, q → 0 of the random cluster model – see [Häg95].

A loop erased random walk (LERW) on a graph is a process obtained by chronologically erasing the loops of a random walk on the graph. There is a close connection between the UST and the LERW. Pemantle [Pem91] showed that the unique path between any two vertices v and w in a UST on a finite graph G has the same distribution as the loop-erasure of a simple random walk on G from v to w. Wilson [Wil96] then proved that a UST could be generated by a sequence of LERWs by the following algorithm. Pick an arbitrary vertex v ∈ G and let T_0 = {v}. Now suppose that we have generated the tree T_k and that T_k does not span. Pick any point w ∈ G \ T_k and let T_{k+1} be the union of T_k and the loop-erasure of a random walk started at w and run until it hits T_k. We continue this process until we generate a spanning tree T_m. Then T_m has the distribution of the UST on G.

We now fix our attention on Z². By letting the root v in Wilson's algorithm go to infinity, one sees that one can obtain the UST U on Z² by first running an infinite LERW from a point x_0 (see Section 2 for the precise definition) to create the first path in U, and then using Wilson's algorithm to generate the rest of U. This construction makes it clear that U is a 1-sided tree: from each point x there is a unique infinite (self-avoiding) path in U.

Both the LERW and the UST on Z² have conformally invariant scaling limits. Lawler, Schramm and Werner [LSW04] proved that the LERW in simply connected domains scales to SLE₂ – Schramm-Loewner evolution with parameter 2. Using the relation between LERW and UST, this implies that the UST has a conformally invariant scaling limit in the sense of [Sch00], where the UST is regarded as a measure on the set of triples (a, b, γ) where a, b ∈ R² ∪ {∞} and γ is a path between a and b. In addition, [LSW04] proves that the UST Peano curve – the interface between the UST and the dual UST – has a conformally invariant scaling limit, which is SLE₈.
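The chronological loop-erasure and Wilson's algorithm described above are straightforward to simulate on a finite graph. The following Python sketch is purely illustrative: it is not taken from the paper, the function names are our own, and it samples the UST of a finite box with a fixed root rather than the infinite-volume tree U.

```python
import random

def loop_erase(path):
    """Chronological loop-erasure of a finite path (list of vertices)."""
    erased = []
    for v in path:
        if v in erased:
            # revisiting v: erase the loop created since the first visit of v
            erased = erased[:erased.index(v) + 1]
        else:
            erased.append(v)
    return erased

def wilson_ust(vertices, neighbours, root):
    """Sample a uniform spanning tree by Wilson's algorithm.

    `vertices` is an iterable of hashable vertices, `neighbours(v)` returns
    the neighbours of v, and `root` is the starting tree T_0 = {root}.
    Returns the tree as a set of undirected edges.
    """
    in_tree = {root}
    edges = set()
    for w in vertices:
        if w in in_tree:
            continue
        # run a simple random walk from w until it hits the current tree ...
        walk = [w]
        while walk[-1] not in in_tree:
            walk.append(random.choice(neighbours(walk[-1])))
        # ... and add its loop-erasure as a new branch
        branch = loop_erase(walk)
        edges.update(frozenset(e) for e in zip(branch, branch[1:]))
        in_tree.update(branch)
    return edges

# Example: UST of an n-by-n box in Z^2, rooted at the origin.
n = 20
box = [(i, j) for i in range(n) for j in range(n)]

def neighbours(v):
    i, j = v
    cand = [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]
    return [u for u in cand if 0 <= u[0] < n and 0 <= u[1] < n]

tree = wilson_ust(box, neighbours, root=(0, 0))
assert len(tree) == n * n - 1  # a spanning tree of n^2 vertices has n^2 - 1 edges
```

The function `loop_erase` implements exactly the chronological erasure L(λ) defined in Section 2, and growing the tree one loop-erased branch at a time is the construction used repeatedly in Sections 2 and 3.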
In this paper we will study properties of the UST U on Z². We have two natural metrics on U: the intrinsic metric given by the shortest path in U between two points, and the Euclidean metric. For x, y ∈ Z² let γ(x, y) be the unique path in U between x and y, and let d(x, y) = |γ(x, y)| be its length. If U_0 is a connected subset of U then we write γ(x, U_0) for the unique path from x to U_0. Write γ(x, ∞) for the path from x to infinity. We define balls in the intrinsic metric by B_d(x, r) = {y : d(x, y) ≤ r} and let |B_d(x, r)| be the number of points in B_d(x, r) (the volume of B_d(x, r)). We write B(x, r) = {y ∈ Z^d : |x − y| ≤ r} for balls in the Euclidean metric, and let B_R = B(R) = B(0, R), B_d(R) = B_d(0, R).

Our goals in this paper are to study the volume of balls in the d metric, to obtain estimates of the degree of 'metric distortion' between the intrinsic and Euclidean metrics, and to study the behaviour of simple random walk (SRW) on U.

To state our results we need some further notation. Let G(n) be the expected number of steps of an infinite LERW started at 0 until it leaves B(0, n). Clearly G(n) is strictly increasing; extend G to a continuous strictly increasing function from [1, ∞) to [1, ∞), with G(1) = 1. Let g(t) be the inverse of G, so that G(g(t)) = t = g(G(t)) for all t ∈ [1, ∞). By [Ken00, Mas09] we have
\[ \lim_{n\to\infty} \frac{\log G(n)}{\log n} = \frac{5}{4}. \tag{1.1} \]
Our first result is on the relation between balls in the two metrics.

Theorem 1.1 (a) There exist constants c, C > 0 such that for all r ≥ 1, λ ≥ 1,
\[ P\bigl( B_d(0, \lambda^{-1}G(r)) \not\subset B(0, r) \bigr) \le C e^{-c\lambda^{2/3}}. \tag{1.2} \]
(b) For all ε > 0, there exist c(ε), C(ε) > 0 and λ_0(ε) ≥ 1 such that for all r ≥ 1 and λ ≥ 1,
\[ P\bigl( B(0, r) \not\subset B_d(0, \lambda G(r)) \bigr) \le C\lambda^{-4/15+\varepsilon}, \tag{1.3} \]
and for all r ≥ 1 and all λ ≥ λ_0(ε),
\[ P\bigl( B(0, r) \not\subset B_d(0, \lambda G(r)) \bigr) \ge c\lambda^{-4/5-\varepsilon}. \tag{1.4} \]

We do not expect any of these bounds to be optimal. In fact, we could improve the exponent in the bound (1.2), but to simplify our proofs we have not tried to find the best exponent that our arguments yield when we have exponential bounds. However, we will usually attempt to find the best exponent given by our arguments when we have polynomial bounds, as in (1.3) and (1.4). The reason we have a polynomial lower bound in (1.4) is that if we have a point w such that |w| = r, then the probability that γ(0, w) leaves the ball B(0, λr) is bounded below by cλ^{-1} (see Lemma 2.6). This in turn implies that the probability that w ∉ B_d(0, λG(r)) is bounded from below by cλ^{-4/5-ε} (Proposition 2.7).

Theorem 1.1 leads immediately to bounds on the tails of |B_d(0, R)|. However, while (1.2) gives a good bound on the upper tail, (1.3) only gives polynomial control on the lower tail. By working harder (see Theorem 3.4) we can obtain the following stronger bound.

Theorem 1.2 Let R ≥ 1, λ ≥ 1. Then
\[ P\bigl( |B_d(0, R)| \ge \lambda g(R)^2 \bigr) \le C e^{-c\lambda^{1/3}}, \tag{1.5} \]
\[ P\bigl( |B_d(0, R)| \le \lambda^{-1} g(R)^2 \bigr) \le C e^{-c\lambda^{1/9}}. \tag{1.6} \]
So in particular there exists C such that for all R ≥ 1,
\[ C^{-1} g(R)^2 \le E\bigl[ |B_d(0, R)| \bigr] \le C g(R)^2. \tag{1.7} \]

We now discuss the simple random walk on the UST U. To help distinguish between the various probability laws, we will use the following notation. For LERW and simple random walk in Z² we will write P^z for the law of the process started at z. The probability law of the UST will be denoted by P, and the UST will be defined on a probability space (Ω, P); we let ω denote elements of Ω.
For the tree U(ω) write x ∼ y if x and y are connected by an edge in U, and for x ∈ Z² let µ_x = µ_x(ω) = |{y : x ∼ y}| be the degree of the vertex x. The random walk on U(ω) is defined on a second space D = (Z²)^{Z_+}. Let X_n be the coordinate maps on D, and for each ω ∈ Ω let P^x_ω be the probability on D which makes X = (X_n, n ≥ 0) a simple random walk on U(ω) started at x. Thus we have P^x_ω(X_0 = x) = 1, and
\[ P^x_\omega(X_{n+1} = y \mid X_n = x) = \frac{1}{\mu_x(\omega)} \quad \text{if } y \sim x. \]
We remark that since the UST U is a subgraph of Z² the SRW X is recurrent. We define the heat kernel (transition density) with respect to µ by
\[ p^\omega_n(x, y) = \mu_y^{-1} P^x_\omega(X_n = y). \tag{1.8} \]
Define the stopping times
\[ \tau_R = \min\{n \ge 0 : d(0, X_n) > R\}, \tag{1.9} \]
\[ \tilde\tau_r = \min\{n \ge 0 : |X_n| > r\}. \tag{1.10} \]
Given functions f and g we write f ≈ g to mean
\[ \lim_{n\to\infty} \frac{\log f(n)}{\log g(n)} = 1, \]
and f ≍ g to mean that there exists C ≥ 1 such that C^{-1} f(n) ≤ g(n) ≤ C f(n), n ≥ 1.

The following summarizes our main results on the behaviour of X. Some more precise estimates, including heat kernel estimates, can be found in Theorems 4.3 – 4.7 in Section 4.

Theorem 1.3 We have, for P-a.a. ω, P^0_ω-a.s.,
\[ p_{2n}(0, 0) \approx n^{-8/13}, \tag{1.11} \]
\[ \tau_R \approx R^{13/5}, \tag{1.12} \]
\[ \tilde\tau_r \approx r^{13/4}, \tag{1.13} \]
\[ \max_{0 \le k \le n} d(0, X_k) \approx n^{5/13}. \tag{1.14} \]

We now explain why these exponents arise. If G is a connected graph, with graph metric d, we can define the volume growth exponent (called by physicists the fractal dimension of G) by
\[ d_f = d_f(G) = \lim_{R\to\infty} \frac{\log |B_d(0, R)|}{\log R}, \]
if this limit exists. Using this notation, Theorem 1.2 and (1.1) imply that d_f(U) = 8/5, P-a.s.

Following work by mathematical physicists in the early 1980s, random walks on graphs with fractal growth of this kind have been studied in the mathematical literature. (Much of the initial mathematical work was done on diffusions on fractal sets, but many of the same results carry over to the graph case.) This work showed that the behaviour of SRW on a (sufficiently regular) graph G can be summarized by two exponents. The first of these is the volume growth exponent d_f, while the second, denoted d_w and called the walk dimension, can be defined by
\[ d_w = d_w(G) = \lim_{R\to\infty} \frac{\log E^0 \tau_R}{\log R} \]
(if this limit exists). Here 0 is a base point in the graph, and τ_R is as defined in (1.9); it is easy to see that if G is connected then the limit is independent of the base point. One finds that d_f ≥ 1, 2 ≤ d_w ≤ 1 + d_f, and that all these values can arise – see [Bar04].

Many of the early papers required quite precise knowledge of the structure of the graph in order to calculate d_f and d_w. However, [BCK05] showed that in some cases it is sufficient to know two facts: the volume growth of balls, and the growth of effective resistance between points in the graph. Write R_eff(x, y) for the effective resistance between points x and y in a graph G – see Section 3 for a precise definition. The results of [BCK05] imply that if G has uniformly bounded vertex degree, and there exist α > 0, ζ > 0 such that
\[ c_1 R^\alpha \le |B_d(x, R)| \le c_2 R^\alpha, \quad x \in G,\ R \ge 1, \tag{1.15} \]
\[ c_1 d(x, y)^\zeta \le R_{\mathrm{eff}}(x, y) \le c_2 d(x, y)^\zeta, \quad x, y \in G, \tag{1.16} \]
then, writing τ^x_R = min{n : d(x, X_n) > R},
\[ p_{2n}(x, x) \asymp n^{-\alpha/(\alpha+\zeta)}, \quad x \in G,\ n \ge 1, \tag{1.17} \]
\[ E^x \tau^x_R \asymp R^{\alpha+\zeta}, \quad x \in G,\ R \ge 1. \tag{1.18} \]
(They also obtained good estimates on the transition probabilities P^x(X_n = y) – see [BCK05, Theorem 1.3].) From (1.17) and (1.18) one sees that if G satisfies (1.15) and (1.16) then d_f = α, d_w = α + ζ. The decay n^{-d_f/d_w} for the transition probabilities in (1.17) can be explained as follows.
If R ≥ 1 and 2n = R^{d_w} then with high probability X_{2n} will be in the ball B(x, cR). This ball has cR^{d_f} ≈ cn^{d_f/d_w} points, and so the average value of p_{2n}(x, y) on this ball will be n^{-d_f/d_w}. Given enough regularity on G, this average value will then be close to the actual value of p_{2n}(x, x).

In the physics literature a third exponent, called the spectral dimension, was introduced; this can be defined by
\[ d_s(G) = -2 \lim_{n\to\infty} \frac{\log P^x_\omega(X_{2n} = x)}{\log 2n} \quad \text{(if this limit exists).} \tag{1.19} \]
This gives the rate of decay of the transition probabilities; one has d_s(Z^d) = d. The discussion above indicates that the three indices d_f, d_w and d_s are not independent, and that given enough regularity in the graph G one expects that
\[ d_s = \frac{2 d_f}{d_w}. \]
For graphs satisfying (1.15) and (1.16) one has d_s = 2α/(α + ζ). Note that if G is a tree and satisfies (1.15) then R_eff(x, y) = d(x, y) and so (1.16) holds with ζ = 1. Thus
\[ d_f = \alpha, \qquad d_w = \alpha + 1, \qquad d_s = \frac{2\alpha}{\alpha+1}. \tag{1.20} \]

For random graphs arising from models in statistical physics, such as critical percolation clusters or the UST, random fluctuations mean that one cannot expect (1.15) and (1.16) to hold uniformly. Nevertheless, provided similar estimates hold with high enough probability, it was shown in [BJKS08] and [KM08] that one can obtain enough control on the properties of the random walk X to calculate d_f, d_w and d_s. An additional contribution of [BJKS08] was to show that it is sufficient to estimate the volume and resistance growth for balls from one base point. In Section 4, we will use these methods to show that (1.20) holds for the UST, namely:

Theorem 1.4 We have for P-a.a. ω
\[ d_f(U) = \frac{8}{5}, \qquad d_w(U) = \frac{13}{5}, \qquad d_s(U) = \frac{16}{13}. \tag{1.21} \]

The methods of [BJKS08] and [KM08] were also used in [BJKS08] to study the incipient infinite cluster (IIC) for high dimensional oriented percolation, and in [KN09] to show the IIC for standard percolation in high dimensions has spectral dimension 4/3. These critical percolation clusters are close to trees and have d_f = 2 in their graph metric. Our results for the UST are the first calculation of these exponents for a two-dimensional model arising from the random cluster model. It is natural to ask about critical percolation in two dimensions, but in spite of what is known via SLE, the values of d_w and d_s appear at present to be out of reach.

The rest of this paper is laid out as follows. In Section 2, we define the LERW on Z² and recall the results from [Mas09, BM09] which we will need. The paper [BM09] gives bounds on M_D, the length of the loop-erasure of a random walk run up to the first exit of a simply connected domain D. However, in addition to these bounds, we require estimates on d(0, w), which by Wilson's algorithm is the length of the loop-erasure of a random walk started at 0 and run up to the first time it hits w; we obtain these bounds in Proposition 2.7. In Section 3, we study the geometry of the two dimensional UST U, and prove Theorems 1.1 and 1.2. In addition (see Proposition 3.6) we show that with high probability the electrical resistance in the network U between 0 and B_d(0, R)^c is greater than R/λ. The proofs of all of these results involve constructing the UST U in a particular way using Wilson's algorithm and then applying the bounds on the lengths of LERW paths from Section 2. In Section 4, we use the techniques from [BJKS08, KM08] and our results on the volume and effective resistance of U from Section 3 to prove Theorems 1.3 and 1.4.
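As a quick consistency check (our own arithmetic, not part of the paper's argument), the exponents appearing in Theorems 1.3 and 1.4 can be recovered from (1.1) and the tree relation (1.20):

```latex
% A sketch of the exponent arithmetic, assuming G(n) \approx n^{5/4} as in (1.1).
% By Theorem 1.2, |B_d(0,R)| \approx g(R)^2 \approx R^{8/5}, so d_f(U) = 8/5;
% since U is a tree, (1.20) applies with \alpha = d_f:
\begin{align*}
  d_f(U) &= \tfrac{8}{5}, &
  d_w(U) &= d_f + 1 = \tfrac{13}{5}, &
  d_s(U) &= \frac{2 d_f}{d_w} = \frac{2 \cdot 8/5}{13/5} = \tfrac{16}{13}.
\end{align*}
% These match Theorem 1.3:
%   \tau_R \approx R^{d_w} = R^{13/5};
%   a Euclidean radius r corresponds to an intrinsic radius G(r) \approx r^{5/4},
%     so \tilde\tau_r \approx (r^{5/4})^{13/5} = r^{13/4};
%   p_{2n}(0,0) \approx n^{-d_s/2} = n^{-8/13};
%   \max_{0 \le k \le n} d(0, X_k) \approx n^{1/d_w} = n^{5/13}.
```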
Throughout the paper, we use c, c′, C, C′ to denote positive constants which may change between each appearance, but do not depend on any variable. If we wish to fix a constant, we will denote it with a subscript, e.g. c_0.

2 Loop erased random walks

In this section, we look at LERW on Z². We let S be a simple random walk on Z², and given a set D ⊂ Z², let
\[ \sigma_D = \min\{j \ge 1 : S_j \in \mathbb{Z}^2 \setminus D\} \]
be the first exit time of the set D, and
\[ \xi_D = \min\{j \ge 1 : S_j \in D\} \]
be the first hitting time of the set D. If w ∈ Z², we write ξ_w for ξ_{{w}}. We also let σ_R = σ_{B(R)} and use a similar convention for ξ_R. The outer boundary of a set D ⊂ Z² is ∂D = {x ∈ Z² \ D : there exists y ∈ D such that |x − y| = 1}, and its inner boundary is ∂_i D = {x ∈ D : there exists y ∈ Z² \ D such that |x − y| = 1}.

Given a path λ = [λ_0, . . . , λ_m] in Z², let L(λ) denote its chronological loop-erasure. More precisely, we let s_0 = max{j : λ(j) = λ(0)}, and for i > 0, s_i = max{j : λ(j) = λ(s_{i−1} + 1)}. Let n = min{i : s_i = m}. Then L(λ) = [λ(s_0), λ(s_1), . . . , λ(s_n)]. We note that by Wilson's algorithm, L(S[0, ξ_w]) has the same distribution as γ(0, w) – the unique path from 0 to w in the UST U. We will therefore use γ(0, w) to denote L(S[0, ξ_w]) even when we make no mention of the UST U.

For positive integers l, let Ω_l be the set of paths ω = [0, ω_1, . . . , ω_k] ⊂ Z² such that ω_j ∈ B_l, j = 1, . . . , k − 1, and ω_k ∈ ∂B_l. For n ≥ l, define the measure µ_{l,n} on Ω_l to be the distribution on Ω_l obtained by restricting L(S[0, σ_n]) to the part of the path from 0 to the first exit of B_l. For a fixed l and ω ∈ Ω_l, it was shown in [Law91] that the sequence µ_{l,n}(ω) is Cauchy. Therefore, there exists a limiting measure µ_l such that
\[ \lim_{n\to\infty} \mu_{l,n}(\omega) = \mu_l(\omega). \]
The µ_l are consistent and therefore there exists a measure µ on infinite self-avoiding paths. We call the associated process the infinite LERW and denote it by Ŝ. We denote the exit time of a set D for Ŝ by σ̂_D. By Wilson's algorithm, Ŝ[0, ∞) has the same distribution as γ(0, ∞), the unique infinite path in U starting at 0. Depending on the context, either notation will be used.

For a set D containing 0, we let M_D be the number of steps of L(S[0, σ_D]). Notice that if D = Z² \ {w} and S is a random walk started at x, then M_D = d(x, w). In addition, if D′ ⊂ D then we let M_{D′,D} be the number of steps of L(S[0, σ_D]) while it is in D′, or equivalently the number of points in D′ that are on the path L(S[0, σ_D]). We let M̂_n be the number of steps of Ŝ[0, σ̂_n]. As in the introduction, we set G(n) = E[M̂_n], extend G to a continuous strictly increasing function from [1, ∞) to [1, ∞) with G(1) = 1, and let g be the inverse of G. It was shown in [Ken00, Mas09] that G(n) ≈ n^{5/4}. In fact, the following is true.

Lemma 2.1 Let ε > 0. Then there exist positive constants c(ε) and C(ε) such that if r ≥ 1 and λ ≥ 1, then
\[ c\lambda^{5/4-\varepsilon} G(r) \le G(\lambda r) \le C\lambda^{5/4+\varepsilon} G(r), \tag{2.1} \]
\[ c\lambda^{4/5-\varepsilon} g(r) \le g(\lambda r) \le C\lambda^{4/5+\varepsilon} g(r). \tag{2.2} \]

Proof. The first equation follows from [BM09, Lemma 6.5]. Note that while the statement there holds only for all r ≥ R(ε), by choosing different values of c and C, one can easily extend it to all r ≥ 1. The second statement follows from the first since g = G^{−1} and G is increasing. □

The following result from [BM09] gives bounds on the tails of M̂_n and of M_{D′,D} for a broad class of sets D and subsets D′ ⊂ D. We call a subset of Z² simply connected if all connected components of its complement are infinite.
Theorem 2.2 [BM09, Theorems 5.8 and 6.7] There exist positive global constants C and c, and given ε > 0, there exist positive constants C(ε) and c(ε) such that for all λ > 0 and all n, the following holds. 1. Suppose that D ⊂ Z2 contains 0, and D ′ ⊂ D is such that for all z ∈ D ′ , there exists a path in D c connecting B(z, n + 1) and B(z, 2n)c (in particular this will hold if D is simply connected and dist(z, D c ) ≤ n for all z ∈ D ′ ). Then 2. For all D ⊃ Bn , 3. 4. P (MD′ ,D > λG(n)) ≤ 2e−cλ . (2.3)  4/5−ε P MD < λ−1 G(n) ≤ C(ε)e−c(ε)λ . (2.4)   cn > λG(n) ≤ Ce−cλ . P M   cn < λ−1 G(n) ≤ C(ε)e−c(ε)λ4/5−ε . P M 8 (2.5) (2.6) We would like to use (2.3) in the case where D = Z2 \ {w} and D ′ = B(0, n) \ {w}. However these choices of D and D ′ do not satisfy the hypotheses in (2.3), so we cannot use Theorem 2.2 directly. The idea behind the proof of the following proposition is to get the distribution on γ(0, w) using Wilson’s algorithm by first running an infinite LERW γ (whose complement is simply connected) and then running a LERW from w to γ. Proposition 2.3 There exist positive constants C and c such that the following holds. Let n ≥ 1 and w ∈ B(0, n). Let Yw = w if γ(0, w) ⊂ B(0, n); otherwise let Yw be the first point on the path γ(0, w) which lies outside B(0, n). Then, P (d(0, Yw ) > λG(n)) ≤ Ce−cλ . (2.7) e = Z2 \ γ. Then D e is the union Proof. Let γ be any infinite path starting from 0, and let D 2 of disjoint simply connected subsets Di of Z ; we can assume w ∈ D1 and let D1 = D. By (2.3), (taking D ′ = Bn ∩ D) there exist C < ∞ and c > 0 such that Pw (MD′ ,D > λG(n)) ≤ Ce−cλ . (2.8) Now suppose that γ has the distribution of an infinite LERW started at 0. By Wilson’s algorithm, if S w is an independent random walk started at w, then γ(0, w) has the same distribution as the path from 0 to w in γ ∪ L(S w [0, σD ]). Therefore, and so, cn + MD′ ,D , d(0, Yw ) = |γ(0, Yw )| ≤ M   cn > (λ/2)G(n) + max Pw (MD′ ,D > (λ/2)G(n)) . P (d(0, Yw ) > λG(n)) ≤ P M D The result then follows from (2.5) and (2.8).  Lemma 2.4 There exists a positive constant C such that for all k ≥ 2, n ≥ 1, and K ⊂ Z2 \ B4kn , the following holds. The probability that L(S[0, ξK ]) reenters Bn after leaving Bkn is less than Ck −1 . This also holds for infinite LERWs, namely   b σkn , ∞) ∩ Bn 6= ∅ ≤ Ck −1 . P S[b (2.9) Proof. The result for infinite LERWs follows immediately by taking K = Z2 \ Bm and letting m tend to ∞. We now prove the result for L(S[0, ξK ]). Let α be the part of the path L(S[0, ξK ]) from 0 up to the first point z where it exits Bkn . Then by the domain Markov property for LERW [Law91], conditioned on α, the rest of L(S[0, ξK ]) has the same distribution as the loop-erasure of a random walk started at z, conditioned on the event {ξK < ξα }. Therefore, it is sufficient to show that for any path α from 0 to ∂Bkn and z ∈ ∂Bkn , Pz {ξn < ξK ξK < ξα } = Pz (ξn < ξK ; ξK < ξα ) ≤ Ck −1 . Pz (ξK < ξα ) 9 (2.10) On the one hand, Pz (ξn < ξK ; ξK < ξα )  ≤ Pz ξkn/2 < ξα max Px (ξn < ξα ) max Pw (σ2kn < ξα ) max Py (ξK < ξα ) . x∈∂i Bkn/2 w∈∂Bn y∈∂B2kn However, by the discrete Beurling estimates (see [LL, Theorem 6.8.1]), for any x ∈ ∂i Bkn/2 and w ∈ ∂Bn , Px (ξn < ξα ) ≤ Ck −1/2 ; Pw (σ2kn < ξα ) ≤ Ck −1/2 . Therefore, Pz (ξn < ξK ; ξK < ξα ) ≤ Ck −1 Pz ξkn/2 < ξα  max Py (ξK < ξα ) . y∈∂B2kn On the other hand, Pz (ξK < ξα ) ≥ Pz (σ2kn < ξα ) min Py (ξK < ξα ) . y∈∂B2kn By the discrete Harnack inequality, max Py (ξK < ξα ) ≤ C min Py (ξK < ξα ) . 
y∈∂B2kn y∈∂B2kn Therefore, in order to prove (2.10), it suffices to show that  Pz (σ2kn < ξα ) ≥ cPz ξkn/2 < ξα . Let B = B(z; kn/2). By [Mas09, Proposition 3.5], there exists c > 0 such that Pz {|arg(S(σB ) − z)| ≤ π/3 σB < ξα } > c. Therefore, Pz (σ2kn < ξα ) ≥ X Py (σ2kn < ξα ) Pz (σB < ξα ; S(σB ) = y) y∈∂B |arg(y−z)|≤π/3 ≥ cPz (σB < ξα ; |arg(S(σB ) − z)| ≤ π/3) ≥ cPz (σB < ξα )  ≥ cPz ξkn/2 < ξα . Remark 2.5 One can also show that there exists δ > 0 such that   b P S[b σkn , ∞) ∩ Bn 6= ∅ ≥ ck −δ . 10  (2.11) As we will not need this bound we only give a sketch of the proof. Since it will not be close to being optimal, we will not try to find the value of δ that the argument yields. First, we have     b σkn , ∞) ∩ Bn 6= ∅ ≥ P S[b b σkn , σ P S[b b4kn ) ∩ Bn 6= ∅ . However, by [Mas09, Corollary 4.5], the latter probability is comparable to the probability that L(S[0, σ16kn ]) leaves Bkn and then reenters Bn before leaving B4kn . Call the latter event F. Partition Z2 into the three cones A1 = {z ∈ Z2 : 0 ≤ arg(z) < 2π/3}, A2 = {z ∈ Z2 : 2π/3 ≤ arg(z) < 4π/3} and A3 = {z ∈ Z2 : 4π/3 ≤ arg(z) < 2π}. Then the event F contains the event that a random walk started at 0 (1) (2) (3) (4) (5) leaves B2kn before leaving A1 ∪ Bn/2 , then enters A2 while staying in B4kn \ Bkn , then enters Bn while staying in A2 ∩ B4kn , then enters A3 while staying in A2 ∩ Bn \ Bn/2 , then leaves B16kn while staying in A3 \ Bn/2 . One can bound the probabilities of the events in steps (1), (3) and (5) from below by ck −β for some β > 0. The other steps contribute terms that can be bounded from below by a constant; combining these bounds gives (2.11). Lemma 2.6 There exists a positive constant C such that for all k ≥ 1 and w ∈ Z2 ,  1 −1 k ≤ P γ(0, w) 6⊂ Bk|w| ≤ Ck −1/3 . 8 (2.12) Proof. We first prove the upper bound. By adjusting the value of C we may assume that k ≥ 4. As in the proof of Proposition 2.3, in order to obtain γ(0, w), we first run an infinite LERW γ started at 0 and then run an independent random walk started at w until it hits γ and then erase its loops. By Wilson’s algorithm, the resulting path from 0 to w has the same distribution as γ(0, w). By Lemma 2.4, the probability that γ reenters Bk2/3 |w| after leaving Bk|w| is less than Ck −1/3 . Furthermore, by the discrete Beurling estimates [LL, Proposition 6.8.1],  Pw σk2/3 |w| < ξγ ≤ C(k 2/3 )−1/2 = Ck −1/3 . Therefore,  P γ(0.w) 6⊂ Bk|w| ≤ Ck −1/3 . To prove the lower bound, we follow the method of proof of [BLPS01, Theorem 14.3] where it was shown that if v and w are nearest neighbors then P (diam γ(v, w) ≥ n) ≥ 11 1 . 8n If w = (w1 , w2 ), let u = (w1 − w2 , w1 + w2 ) and v = (−w2 , w1 ) so that {0, w, u, v} form four vertices of a square of side length |w|. Now consider the sets Q1 = {jw : j = 0, . . . , 2k} Q2 = {2kw + j(u − w) : j = 0, . . . , 2k} Q3 = {jv : j = 0, . . . , 2k} Q4 = {2kv + j(u − v) : j = 0, . . . , 2k} S and let Q = 4i=1 Qi . Then Q consists of 8k lattice points on the perimeter of a square of side length 2k |w|. Let x1 , . . . , x8k be the ordering of these points obtained by letting x1 = 0 and then travelling along the perimeter of the square clockwise. Thus |xi+1 − xi | = |w|. Now consider any spanning tree U on Z2 . If for all i, γ(xi , xi+1 ) stayed in the ball B(xi , k |w|) then the concatenation of these paths would be a closed loop, which contradicts the fact that U is a tree. Therefore, 1 = P (∃i : γ(xi , xi+1 ) 6⊂ B(xi , k |w|)) ≤ 8k X P (γ(xi , xi+1 ) 6⊂ B(xi , k |w|)) . 
i=1 Finally, using the fact that Z2 is transitive and is invariant under rotations by 90 degrees, all the probabilities on the right hand side are equal. This proves the lower bound.  Proposition 2.7 For all ε > 0, there exist c(ε), C(ε) > 0 and λ0 (ε) ≥ 1 such that for all w ∈ Z2 and all λ ≥ 1, P (d(0, w) > λG(|w|)) ≤ C(ε)λ−4/15+ε , (2.13) and for all w ∈ Z2 and all λ ≥ λ0 (ε), P (d(0, w) > λG(|w|)) ≥ c(ε)λ−4/5−ε . (2.14) Proof. To prove the upper bound, let k = λ4/5−3ε . Then by Lemma 2.1, there exists C(ε) < ∞ such that G(k |w|) ≤ C(ε)k 5/4+ε G(|w|) ≤ C(ε)λ1−ε G(|w|). (2.15) Then,   P (d(0, w) > λG(|w|)) ≤ P γ(0, w) 6⊂ Bk|w| + P d(0, w) > λG(|w|); γ(0, w) ⊂ Bk|w| . However, by Lemma 2.6,  P γ(0, w) 6⊂ Bk|w| ≤ Ck −1/3 = Cλ−4/15+ε , (2.16) while by Proposition 2.3 and (2.15),   P d(0, w) > λG(|w|); γ(0, w) ⊂ Bk|w| ≤ P d(0, w) > c(ε)λε G(k |w|); γ(0, w) ⊂ Bk|w| ≤ C exp(−c(ε)λε ). 12 Therefore, P (d(0, w) > λG(|w|)) ≤ C exp(−c(ε)λε ) + Cλ−4/15+ε ≤ C(ε)λ−4/15+ε . (2.17) To prove the lower bound we fix k = λ4/5+ε and assume k ≥ 2 and ε < 1/4. Then by Lemma 2.1, there exists C(ε) < ∞ such that G((k − 1) |w|) ≥ C(ε)−1 k 5/4−ε G(|w|) ≥ C(ε)−1λ1+ε/3 G(|w|). Hence,  P (d(0, w) > λG(|w|)) ≥ P d(0, w) > C(ε)λ−ε/3G((k − 1) |w|) . Now consider the UST on Z2 and recall that γ(0, ∞) and γ(w, ∞) denote the infinite paths starting at 0 and w. We write Z0w for the unique point where these meet: thus γ(Z0w , ∞) = γ(0, ∞)∩γ(w, ∞). Then γ(0, w) is the concatenation of γ(0, Z0w ) and γ(w, Z0w ). By Lemma 2.6,  1 . P γ(0, w) 6⊂ Bk|w| ≥ 8k Therefore,  1 . P γ(0, Z0w ) 6⊂ Bk|w| or γ(w, Z0w ) 6⊂ Bk|w| ≥ 8k By the transitivity of Z2 , the paths γ(0, Z0,−w ) and γ(w, Z0w )−w have the same distribution, and therefore  1 . P γ(0, Z0w ) 6⊂ B(k−1)|w| ≥ 16k Since Z0w is on the path γ(0, ∞), by (2.6),  P d(0, w) > C(ε)λ−ε/3 G((k − 1) |w|)  ≥ P d(0, Z0w ) > C(ε)λ−ε/3G((k − 1) |w|)   −ε/3 c ≥ P M(k−1)|w| > C(ε)λ G((k − 1) |w|); γ(0, Z0w ) 6⊂ B(k−1)|w|    c(k−1)|w| < C(ε)λ−ε/3G((k − 1) |w|) ≥ P γ(0, Z0w ) 6⊂ B(k−1)|w| − P M ≥ 1 − C exp{−cλε/4 }. 16k Finally, since k = λ4/5+ε , the previous quantity can be made greater than c(ε)λ−4/5−ε for λ sufficiently large.  3 Uniform spanning trees We recall that U denotes the UST in Z2 , and we write x ∼ y if x and y are joined by an edge in U. 13 Let E be the quadratic form given by P E(f, g) = 21 x∼y (f (x) − f (y))(g(x) − g(y)), (3.1) If we regard U as an electrical network with a unit resistor on each edge, then E(f, f ) is the energy dissipation when the vertices of Z2 are at a potential f . Set H 2 = {f : Z2 → R : E(f, f ) < ∞}. Let A, B be disjoint subsets of G. The effective resistance between A and B is defined by: Reff (A, B)−1 = inf{E(f, f ) : f ∈ H 2 , f |A = 1, f |B = 0}. (3.2) Let Reff (x, y) = Reff ({x}, {y}), and Reff (x, x) = 0. For general facts on effective resistance and its connection with random walks see [AF, DS84, LP09]. In this section, we establish the volume and effective resistance estimates for the UST U that will be used in the next section to study random walks on U. Theorem 3.1 There exist positive constants C and c such that for all r ≥ 1 and λ > 0, (a)  2/3 P Bd (0, λ−1 G(r)) 6⊂ B(0, r) ≤ Ce−cλ . (3.3) (b)  2/3 P Reff (0, B(0, r)c ) < λ−3 G(r) ≤ Ce−cλ . (3.4) Proof. By adjusting the constants c and C we can assume λ ≥ 4. For k ≥ 1, let −1 −k δk = λ 2 , and ηk = (2k)−1 . Let k0 be the smallest integer such that rδk0 < 1. Set Ak = B(0, r) − B(0, (1 − ηk )r), k ≥ 1. Let Dk be a finite collection of points in Ak such that |Dk | ≤ Cδk−2 and [ B(z, δk r). 
Ak ⊂ z∈Dk Write U1 , U2 , . . . for the random trees obtained by running Wilson’s algorithm (with root 0) with walks first starting at all points in D1 , then adding those points in D2 , and so on. So S Uk is a finite tree which contains ki=1 Di ∪ {0}, and the sequence (Uk ) is increasing. Since rδk0 < 1 we have ∂i B(0, r) ⊂ Ak0 ⊂ Uk0 . We then complete a UST U on Z2 by applying Wilson’s algorithm to the remaining points in Z2 . For z ∈ D1 , let Nz be the length of the path γ(0, z) until it first exits from B(0, r/8). By first applying [Mas09, Proposition 4.4] and then (2.6), so if cr/8 < λ−1 G(r)) ≤ Ce−cλ2/3 , P(Nz < λ−1 G(r)) ≤ CP(M Fe1 = {Nz < λ−1 G(r) for some z ∈ D1 } = 14 [ z∈D1 {Nz < λ−1 G(r)}, then 2/3 P(Fe1 ) ≤ |D1 |Ce−cλ 2/3 ≤ Cδ1−2 e−cλ 2/3 ≤ Cλ2 e−cλ . (3.5) For z ∈ Ak+1 , let Hz be the event that the path γ(z, 0) enters B(0, (1 − ηk )r) before it hits Uk . For k ≥ 1, let [ Hz . Fk+1 = z∈Dk+1 z Let z ∈ Dk+1 and S be a simple random walk started at z and run until its hits Uk . Then by Wilson’s algorithm, for the event Hz to occur, S z must enter B(0, (1 − ηk )r) before it hits Uk . Since each point in Ak is within a distance δk r of Uk , Uk is a connected set, and z is a distance at least (ηk − ηk+1 )r from B(0, (1 − ηk )r), we have P (Hz ) ≤ exp(−c(ηk − ηk+1 )/δk ). Hence −2 P(Fk+1 ) ≤ Cδk+1 exp(−c(ηk − ηk+1 )/δk ) ≤ Cλ2 4k exp(−cλ2k k −2 ). (3.6) Now define G by Gc = Fe1 ∪ so that 2/3 P (Gc ) ≤ Cλe−cλ + ∞ X k0 [ Fk , k=2 2/3 Cλ2 4k exp(−cλ2k k −2 ) ≤ Ce−cλ . (3.7) k=2 Now suppose that ω ∈ G. Then we claim that: (1) For every z ∈ D1 the part of the path γ(0, z) until its first exit from B(0, r/2) is of length greater than λ−1 G(r), (2) If z ∈ Dk for any k ≥ 2 then the path γ(z, 0) hits U1 before it enters B(0, r/2). Of these, (1) is immediate since ω 6∈ Fe1 , while (2) follows by induction on k using the fact that ω 6∈ Fk for any k. Hence, if ω ∈ G, then |γ(0, z)| ≥ λ−1 G(r) for every z ∈ ∂i B(0, r), which proves (a). To prove (b) we use the Nash-Williams bound for resistance [NW59]. For 1 ≤ k ≤ λ−1 G(r) let Γk be the set of z such that d(0, z) = k and z is connected to B(0, r)c by a path in {z} ∪ (U − γ(0, z)). Assume now that the event G holds. Then the Γk are disjoint sets disconnecting 0 and B(0, r)c, and so λ−1 G(r) Reff (0, B(0, r)c) ≥ X |Γk |−1 . k=1 Furthermore, each z ∈ Γk is on a path from 0 to a point in D1 , and so |Γk | ≤ |D1 | ≤ Cδ1−2 ≤ Cλ2 . Hence on G we have Reff (0, B(0, r)c ) ≥ cλ−3 G(r), which proves (b).  A similar argument will give a (much weaker) bound in the opposite direction. We begin with a result we will use to control the way the UST fills in a region once we have constructed some initial paths. 15 Proposition 3.2 There exist positive constants c and C such that for each δ0 ≤ 1 the following holds. Let r ≥ 1, and U0 be a fixed tree in Z2 connecting 0 to B(0, 2r)c with the property that dist(x, U0 ) ≤ δ0 r for each x ∈ B(0, r) (here dist refers to the Euclidean distance). Let U be the random spanning tree in Z2 obtained by running Wilson’s algorithm with root U0 . Then there exists an event G such that −1/3 P(Gc ) ≤ Ce−cδ0 , (3.8) d(x, U0 ) ≤ G(δ0 r); γ(x, U0 ) ⊂ B(0, r). (3.9) (3.10) and on G we have that for all x ∈ B(0, r/2), 1/2 Proof. We follow a similar strategy to that in Theorem 3.1. Define sequences (δk ) and −1/2 (λk ) by δk = 2−k δ0 , λk = 2k/2 λ0 , where λ0 = 5−1 δ0 . For k ≥ 0, let Ak = B(0, 21 (1 + (1 + k)−1 )r), and let Dk ⊂ Ak be such that for k ≥ 1, |Dk | ≤ Cδk−2 , [ B(z, δk r). Ak ⊂ z∈Dk Let U0 = U0 and as before let U1 , U2 , . . 
. be the random trees obtained by performing Wilson’s algorithm with root U0 and starting first at points in D1 , then in D2 etc. Set Mz = d(z, Uk−1 ), z ∈ Dk , Fz = {γ(z, Uk−1 ) 6⊂ Ak−1 }, Mk = max Mz , z∈Dk [ Fz . Fk = z ∈ Dk , z∈Dk For z ∈ Dk , P (Mz > λk G(δk−1 r)) ≤ P (Fz ) + P (Mz > λk G(δk−1 r); Fzc) . (3.11) Since z is a distance at least 21 r(k −1 − (k + 1)−1) from Ack−1, and each point in Ak−1 is within a distance δk−1 r of Uk−1 , −1 −1 −2 P (Fz ) ≤ C exp(−cδk−1 (k −1 − (k + 1)−1 )) ≤ C exp(−cδk−1 k ). (3.12) By (2.3), again using the fact that each point in Ak−1 is within distance δk−1 r of Uk−1 , 2/3 P (Mz > λk G(δk−1r); Fzc ) ≤ C exp(−cλk ). 16 (3.13) So, combining (3.11)–(3.13), for k ≥ 1, h i 2/3 −1 −2 P (Mk > λk G(δk−1 r)) + P (Fk ) ≤ C |Dk | exp(−cδk−1 k ) + exp(−cλk ) . Now let G= ∞ \ Fkc ∩ {Mk ≤ λk G(δk−1 r)}. (3.14) (3.15) k=1 Summing the series given by (3.14), and using the bound |Dk | ≤ cδk−2 , we have h i X −1 k −2 k/3 −1/3 2k c −2 2 exp(−cδ0 2 k ) + exp(−c2 δ0 ) P (G ) ≤ Cδ0 k −1/3 ≤ Cδ0−2 e−cδ0 ′ −1/3 ≤ Ce−c δ0 Using Lemma 2.1 with ε = 1 4 . gives 1/2 1/2 1/2 1/2 λk G(δk−1r) ≤ λk δ0 2−(k−1) G(δ0 r) = 2λ0 δ0 2−k/2 G(δ0 r). So T ∞ X 1/2 1/2 1/2 λk G(δk−1 r) ≤ 5λ0 δ0 G(δ0 r) = G(δ0 r). k=1 S Since B(0, r/2) ⊂ k Ak , we have B(0, r/2) ⊂ k Uk . Therefore on the event G, for any x ∈ 1/2 B(0, r/2), d(x, U0 ) ≤ G(δ0 r). Further, on G, for each z ∈ Dk , we have γ(z, Uk−1 ) ⊂ Ak−1 . Therefore if x ∈ B(0, r/2) the connected component of U − U0 containing x is contained in B(0, r), which proves (3.10).  Theorem 3.3 For all ε > 0, there exist c(ε), C(ε) > 0 and λ0 (ε) ≥ 1 such that for all r ≥ 1 and λ ≥ 1,  P B(0, r) 6⊂ Bd (0, λG(r) ≤ Cλ−4/15+ε , (3.16) and for all r ≥ 1 and all λ ≥ λ0 (ε),  P B(0, r) 6⊂ Bd (0, λG(r) ≥ cλ−4/5−ε . Proof. The lower bound follows immediately from the lower bound in Proposition 2.7. To prove the upper bound, let E ⊂ B(0, 4r) be such that |E| ≤ Cλε/2 and [ B(0, 4r) ⊂ B(z, λ−ε/4 r). z∈E We now let U0 be the random tree obtained by applying Wilson’s algorithm with points in E and root 0. Therefore, by Proposition 2.7, for any z ∈ E,   P d(0, z) > λG(r)/2 ≤ P d(0, z) > cλG(|z|)/2 ≤ C(ε)λ−4/15+ε/2 . 17 Let F = {d(0, z) ≤ λG(r)/2 for all z ∈ E}; then P(F c ) ≤ |E| C(ε)λ−4/15+ε/2 ≤ C(ε)λ−4/15+ε . We have now constructed a tree U0 connecting 0 to B(0, 4r)c and by the definition of the set E, for all z ∈ B(0, 2r), dist (z, U0 ) ≤ λ−ε/4 r. We now use Wilson’s algorithm to produce the UST U on Z2 with root U0 . Let G be the event given by applying Proposition 3.2 (with r replaced by 2r), so that ε/12 P (Gc ) ≤ Ce−cλ . On the event G we have d(x, U0 ) ≤ G(λ−ε/2 r) ≤ λG(r)/2 for all x ∈ B(0, r). Therefore, on the event F ∩ G we have d(x, 0) ≤ λG(r) for all x ∈ B(0, r). Thus,   ε/12 P max d(x, 0) > λG(r) ≤ C(ε)λ−4/15+ε + Ce−cλ ≤ C(ε)λ−4/15+ε . x∈B(0,r)  Theorem 1.1 is now immediate from Theorem 3.1 and Theorem 3.3. While Theorem 3.1 immediately gives the exponential bound (1.5) on the upper tail of |Bd (0, r)| in Theorem 1.2, it only gives a polynomial bound for the lower tail. The following theorem gives an exponential bound on the lower tail of |Bd (0, r)| and consequently proves Theorem 1.2. Theorem 3.4 There exist constants c and C such that if R ≥ 1, λ ≥ 1 then 1/9 P(|Bd (0, R)| ≤ λ−1 g(R)2 ) ≤ Ce−cλ . (3.17) Proof. Let k ≥ 1 and let r = g(R/k 1/2 ), so that R = k 1/2 G(r). Fix a constant δ0 < 1 such that the right side of (3.8) is less than 1/4. Fix a further constant θ < 1, to be chosen later but which will depend only on δ0 . 
We begin the construction of U with an infinite LERW Sb started at 0 which gives the path b σr ] chosen such that Bi = B(zi , r/k) γ0 = U0 = γ(0, ∞). Let zi , i = 1, . . . k be points on S[0.b are disjoint. (We choose these according to some fixed algorithm so that they depend only b σ on the path S[0, br ].) Let b σ2r , ∞) hits more than k/2 of B1 , . . . Bk }, F1 = { S[b b σ F2 = {|S[0, b2r ]| ≥ 1 k 1/2 G(r)}. 2 We have (3.19) 1/3 , (3.20) −ck 1/2 . (3.21) P(F1 ) ≤ Ce−ck P(F2 ) ≤ Ce 18 (3.18) Of these, (3.21) is immediate from (2.5) while (3.20) will be proved in Lemma 3.7 below. If either F1 or F2 occurs, we terminate the algorithm with a ‘Type 1’ or ‘Type 2’ failure. Otherwise, we continue as follows to construct U using Wilson’s algorithm. We define Bj′ = B(zi , θr/k), Bj′′ = B(zi , θ2 r/k). The algorithm is at two ‘levels’ which we call ‘ball steps’ and ‘point steps’. We begin b σ2r , ∞) = ∅. The nth with a list J0 of good balls. These are the balls Bj such that Bj ∩ S[b ball step starts by selecting a good ball Bj from the list Jn−1 of remaining good balls. We then run Wilson’s algorithm with paths starting in Bj′ . The ball step will end either with success, in which case the whole algorithm terminates, or with one of three kinds of failure. In the event of failure the ball Bj , and possibly a number of other balls also, will be labelled ‘bad’, and Jn is defined to be the remaining set of good balls. If more than k 1/2 /4 balls are labelled bad at any one ball step, we terminate the whole algorithm with a ‘Type 3 failure’. Otherwise, we proceed until, if we have tried k 1/2 balls steps without a success, we terminate the algorithm with a ‘Type 4 failure’. We write Un for the tree obtained after n ball steps. After ball step n, any ball Bj in Jn will have the property that Bj′ ∩ Un = Bj′ ∩ U0 . We now describe in detail the second level of the algorithm, which works with a fixed (initially good) ball Bj . We assume that this is the nth ball step (where n ≥ 1), so that we have already built the tree Un−1 . Let D ′ ⊂ B(0, θ2 r/k) satisfy [ B(x, δ0 θ2 r/k). |D ′ | ≤ cδ0−2 , B(0, θ2 r/k) ⊂ x∈D ′ Let Dj = zj + D ′ , so that Dj ⊂ Bj′′ . We now proceed to use Wilson’s algorithm to build the paths γ(w, Un−1 ) for w ∈ Dj . For w ∈ Dj let S w be a random walk started at w. For each w ∈ Dj let Gw be the event that γ(w, Un−1 ) ⊂ Bj′ . If Fw is the event that S w exits from Bj′ before it hits U0 , then P(Gcw ) ≤ P(Fw ) ≤ cθ1/2 . (3.22) Here the first inequality follows from Wilson’s algorithm, while the second is by the discrete Beurling estimates ([LL, Proposition 6.8.1]). Let Mw = d(w, Un−1 ), and Tw be the first time S w hits Un−1 . Then by Wilson’s algorithm and (2.3), −1 P(Mw ≥ θ−1 G(θr/k); Gw ) = P(Mw ≥ θ−1 G(θr/k); L(S w [0, Tw ]) ⊂ Bj′ ) ≤ ce−cθ . We now define sets corresponding to three possible outcomes to this procedure: [ H1,n = Gcw , w∈Dj H2,n = H3,n =  −1 max Mw ≥ θ G(θr/k) ∩ w∈Dj   −1  max Mw < θ G(θr/k) ∩ w∈Dj 19 \ Gw , \ Gw . w∈Dj w∈Dj (3.23) By (3.22), P(H1,n ) ≤ X P(Gw ) ≤ cδ0−2 θ1/2 , (3.24) w∈Dj and by (3.23), P(H2,n ) ≤ X −1 P(Mw ≥ θ−1 G(θr/k); Gw ) ≤ cδ0−2 e−cθ . (3.25) w∈Dj We now choose the constant θ small enough so that each of P(Hi,n ) ≤ therefore P(H3,n ) ≥ 21 . 1 4 for i = 1, 2, and (3.26) Un′ If H3,n occurs then we have constructed a tree which contains Un−1 and Dj . Further, we have that for each point w ∈ Dj , the path γ(w, 0) hits U0 before it leaves Bj′ . Hence, d(w, 0) ≤ Mw + max d(0, z) ≤ 21 k 1/2 G(r) + θ−1 G(θr/k). 
z∈U0 ∩Bj We now use Wilson’s algorithm to fill in the remainder of Bj′ . Let Gn be the event given by applying Proposition 3.2 to the ball Bj′′ with U0 = Un′ . Then −1/3 P(Gcn ) ≤ ce−cδ0 ≤ 1 4 by the choice of δ0 , and therefore P(H3,n ∩ Gn ) ≥ 41 . If this event occurs, then all points 1/2 in B(zj , θ2 r/2k) are within distance G(δ0 θ2 r/k) of Un′ in the graph metric d; in this case we label ball step n as successful, and we terminate the whole algorithm. Then for all z ∈ B(zj , θ2 r/2k), d(0, z) ≤ d(z, Un′ ) + max′ d(w, 0) w∈Un ≤ 1/2 G(δ0 θ2 r/k) 1/2 ≤k + 12 k 1/2 G(r) + θ−1 G(θr/k) G(r), provided that k is large enough. So there exists k0 ≥ 1 such that, provided that k ≥ k0 , if H3,n ∩ Gn occurs then B(zj , θ2 r/2k) ⊂ Bd (0, k 1/2 G(r)). Since R = k 1/2 G(r) ≤ G(k 1/2 r) we have g(R) ≤ k 1/2 r, and therefore |Bd (0, R)| ≥ |B(zj , θ2 r/2k)| ≥ ck −2 r 2 ≥ cg(R)2 /k 3 . (3.27) If H1,n ∪ H2,n ∪ (H3,n ∩ Gcn ) occurs then as soon as we have a random walk S w that ‘misbehaves’ (either by leaving Bj′ before hitting U0 , or by having Mw too large), then we terminate the ball step and mark the ball Bj as ‘bad’. If ω ∈ H2,n only the ball Bj becomes bad, but if ω ∈ H1,n ∪ (H3,n ∩ Gcn ) then S w may hit several other balls Bi′ before it hits Un−1 . Let NwB denote the number of such balls hit by S w . By Beurling’s estimate, the probability that S w enters a ball Bi′ and then exits Bi without hitting U0 is less than cθ1/2 . Since the balls Bi are disjoint, ′ P(NwB ≥ m) ≤ (cθ1/2 )m ≤ e−c m . (3.28) 20 A Type 3 failure occurs if NwB ≥ k 1/2 /4; using (3.28) we see that the probability that a ball step ends with a Type 3 failure is bounded by exp(−ck 1/2 ). If we write F3 for the event that some ball step ends with a Type 3 failure, then since there are at most k 1/2 ball steps, P(F3 ) ≤ k 1/2 exp(−ck 1/2 ) ≤ C exp(−c′ k 1/2 ). (3.29) The final possibility is that k 1/2 ball steps all end in failure; write F4 for this event. Since each ball step has a probability at least 1/4 of success (conditional on the previous steps of the algorithm), we have 1/2 1/2 P(F4 ) ≤ (3/4)k ≤ e−ck . (3.30) Thus either the algorithm is successful, or it ends with one of four types of failure, corresponding to the events Fi , i = 1, . . . 4. By Lemma 3.7 and (3.21), (3.29), (3.30) we have P(Fi ) ≤ C exp(−ck 1/3 ) for each i. Therefore, we have that provided k ≥ k0 , (3.27) holds except on an event of probability C exp(−ck 1/3 ). Taking k = cλ1/3 for a suitable constant c, and adjusting the constant C so that (3.17) holds for all λ completes the proof.  The reason why we can only get a polynomial bound in the Theorem 3.3 is that one cannot get exponential estimates for the probability that γ(0, w) leaves B(0, k |w|) (see Lemma 2.6). However, if we let Ur be the connected component of 0 in U ∩ B(0, r), then the following proposition enables us to get exponential control on the length of γ(0, w) for w ∈ Ur . This will allow us to obtain an exponential bound on the lower tail of Reff (0, Bd (0, R)c ) in Proposition 3.6. Proposition 3.5 There exist positive constants c and C such that for all λ ≥ 1 and r ≥ 1, P (Ur 6⊂ Bd (0, λG(r))) ≤ Ce−cλ . (3.31) Proof. This proof is similar to that of Theorem 3.3. Let E ⊂ B(0, 2r) be such that |E| ≤ Cλ6 and [ B(0, 2r) ⊂ B(z, λ−3 r), z∈E and let U0 be the random tree obtained by applying Wilson’s algorithm with points in E and root 0. 
For each z ∈ E, let Yz be defined as in Proposition 2.3, so that Yz = z if γ(0, z) ⊂ B(0, 2r), and otherwise Yz is the first point on γ(0, z) which is outside B(0, 2r). Let G1 = {d(Yz , 0) ≤ 21 λG(r) for all z ∈ E}. Then by Proposition 2.3, X P(Gc1 ) ≤ P(d(Yz , 0) > 21 λG(2r)) ≤ |E| Ce−cλ ≤ Cλ6 e−cλ . (3.32) z∈E We now complete the construction of U by using Wilson’s algorithm. Then Proposition 3.2 with δ0 = λ−3 implies that there exists an event G2 with −1/3 P (Gc2 ) ≤ e−cδ0 21 = e−cλ , (3.33) and on G2 , max d(x, U0 ) ≤ G(λ−3/2 r). x∈B(0,r) Suppose G1 ∩ G2 occurs, and let x ∈ Ur . Write Zx for the point where γ(x, 0) meets U0 . Since x ∈ Ur , we must have Zx ∈ B(0, r), and γ(Zx , 0) ⊂ B(0, r). As Zx ∈ U0 , there exists z ∈ E such that Zx ∈ γ(0, z). Since G1 occurs, d(0, Zx ) ≤ d(0, Yz ) ≤ 12 λG(r), while since G2 occurs d(x, Zx ) ≤ G(λ−3/2 r). So, provided λ is large enough, d(0, x) ≤ d(0, Zx ) + d(Zx , x) ≤ 21 λG(r) + G(λ−3/2 r) ≤ λG(r). Using (3.32) and (3.33), and adjusting the constant C to handle the case of small λ completes the proof.  Proposition 3.6 There exist positive constants c and C such that for all R ≥ 1 and λ ≥ 1, (a) 2/11 P(Reff (0, Bd (0, R)c ) < λ−1 R) ≤ Ce−cλ ; (3.34) (b) E(Reff (0, Bd (0, R)c )|Bd (0, R)|) ≤ CRg(R)2 . (3.35) Proof. (a) Recall the definition of Ur given before Proposition 3.5, and note that for all r ≥ 1, Reff (0, B(0, r)c) = Reff (0, Urc ). Given R and λ, let r be such that R = λ2/11 G(r). By monotonicity of resistance we have that if Ur ⊂ Bd (0, R), then Reff (0, Bd (0, R)c ) ≥ Reff (0, Urc ). So, writing Bd = Bd (0, R), P(Reff (0, Bdc ) < λ−1 R) = P(Reff (0, Bdc ) < λ−1 R; Ur 6⊂ Bd ) + P(Reff (0, Bdc ) < λ−1 R; Ur ⊂ Bd ) ≤ P(Ur 6⊂ Bd (0, λ2/11 G(r))) + P(Reff (0, Urc ) < λ−9/11 G(r)). By Proposition 3.5, 2/11 P(Ur 6⊂ Bd (0, λ2/11 G(r))) ≤ Ce−cλ , while by (3.4), 2/11 P(Reff (0, Urc ) < λ−9/11 G(r)) ≤ Ce−cλ . This proves (a). (b) Since Reff (0, Bd (0, R)c ) ≤ R, this is immediate from Theorem 1.2.  We conclude this section by proving the following technical lemma that was used in the proof of Theorem 3.4. Lemma 3.7 Let F1 be the event defined by (3.18). Then P(F1 ) ≤ Ce−ck 22 1/3 . (3.36) Proof. Let b = ek 1/3 . Then by Lemma 2.4   b σbr , ∞) ∩ Br 6= ∅ ≤ Cb−1 ≤ Ce−k1/3 . P S[b (3.37) b σ2r , ∞) hits more than k/2 balls then either Sb hits Br after time σ b σ2r , σ If S[b bbr , or S[b bbr ] hits more than k/2 balls. Given (3.37), it is therefore sufficient to prove that b σ2r , σ P(S[b bbr ] hits more than k/2 balls) ≤ Ce−ck 1/3 . (3.38) Let S be a simple random walk started at 0, and let L′ = L(S[0, σ4br ]). Then by [Mas09, Corollary 4.5], in order to prove (3.38), it is sufficient to prove that P(L′ hits more than k/2 balls) ≤ Ce−ck 1/3 . (3.39) Define stopping times for S by letting T0 = σ2r and for j ≥ 1, Rj = min{n ≥ Tj−1 : Sn ∈ B(0, r)}, Tj = min{n ≥ Rj : Sn ∈ / B(0, 2r)}. Note that the balls Bj can only be hit by S in the intervals [Rj , Tj ] for j ≥ 1. Let M = min{j : Rj ≥ σ4br }. Then P(M = j + 1|M > j) = log 2 log(2r) − log(r) = ≥ ck −1/3 . log(4br) − log r log(4b) Hence P(M ≥ k 2/3 ) ≤ C exp(−ck 1/3 ). For each j ≥ 1 let Lj = L(S[0, Tj ]), let αj be the first exit by Lj from B(0, 2r), and βj be the number of steps of Lj . If L′ hits more than k/2 balls then there must exist some j ≤ M such that Lj [αj , βj ] hits more than k/2 balls Bi . (We remark that since the balls Bi are defined in terms of the loop erased walk path, they will depend on Lj [0, αj ]. However, they will be fixed in each of the intervals [Rj , Tj ].) 
Hence, if M ≤ k 2/3 and L′ hits more than k/2 balls then S must hit more than ck 1/3 balls in one of the intervals [Rj , Tj ], without hitting the path Lj [0, αj ]. However, by Beurling’s estimate the probability of this event is less than C exp(−ck 1/3 ). Combining these estimates concludes the proof.  4 Random walk estimates We recall the notation of random walks on the UST given in the introduction. In addition, define P ∗ on Ω × D by setting P ∗ (A × B) = E[1A Pω0 (B)] and extending this to a probability measure. We write ω for elements of D. Finally, we recall the definitions of the stopping 23 times τR and τer from (1.9) and (1.10) and the transition densities pωn (x, y) from (1.8). To avoid difficulties due to U being bipartite, we also define peωn (x, y) = pωn (x, y) + pωn+1 (x, y). (4.1) Throughout this section, we will write C(λ) to denote expressions of the form Cλp and c(λ) to denote expressions of the form cλ−p , where c, C and p are positive constants. As in [BJKS08, KM08] we define a (random) set J(λ): Definition 4.1 Let U be the UST. For λ ≥ 1 and x ∈ Z2 , let J(x, λ) be the set of those R ∈ [1, ∞] such that the following all hold: (1) |Bd (x, R)| ≤ λg(R)2 , (2) λ−1 g(R)2 ≤ |Bd (x, R)|, (3) Reff (x, Bd (x, R)c ) ≥ λ−1 R. Proposition 4.2 For R ≥ 1, λ ≥ 1 and x ∈ Z2 , (a) 1/9 P(R ∈ J(x, λ)) ≥ 1 − Ce−cλ ; (4.2) (b) E(Reff (0, Bd (0, R)c )|Bd (0, R)|) ≤ CRg(R)2 . Therefore conditions (1), (2) and (4) of [KM08, Assumption 1.2] hold with v(R) = g(R)2 and r(R) = R. Proof. (a) is immediate from Theorem 1.2 and Proposition 3.6(a), while (b) is exactly Proposition 3.6(b). We note that since r(R) = R, the condition Reff (x, y) ≤ λr(d(x, y)) in [KM08, Definition 1.1] always holds for λ ≥ 1, so that our definition of J(λ) agrees with that in [KM08].  We will see that the time taken by the random walk X to move a distance R is of order Rg(R)2 . We therefore define F (R) = Rg(R)2 , (4.3) and let f be the inverse of F . We will prove that the heat kernel peT (x, y) is of order g(f (T ))−2 and so we let k(t) = g(f (t))2, t ≥ 1. (4.4) Note that we have f (t)k(t) = f (t)g(f (t))2 = F (f (t)) = t, so 1 1 f (t) = = . k(t) g(f (t))2 t (4.5) Furthermore, since G(R) ≈ R5/4 , we have G(R) ≈ R5/4 , g(R) ≈ R4/5 , f (R) ≈ R5/13 , k(R) ≈ R8/13 , 24 F (R) ≈ R13/5 , R2 G(R) ≈ R13/4 . (4.6) (4.7) We now state our results for the SRW X on U, giving the asymptotic behaviour of d(0, Xn ), the transition densities peωn (x, y), and the exit times τR and τer . We begin with three theorems which follow directly from Proposition 4.2 and [KM08]. The first theorem gives tightness for some of these quantities, the second theorem gives expectations with respect to P, and the third theorem gives ‘quenched’ limits which hold P-a.s. In various ways these results make precise the intuition that the time taken by X to escape from a ball of radius R is of order F (R), that X moves a distance of order f (n) in time n, and that the probability of X returning to its initial point after 2n steps is the same order as 1/|B(0, f (n))|, that is g(f (n))−2 = k(n)−1 . Theorem 4.3 Uniformly with respect to n ≥ 1, R ≥ 1 and r ≥ 1,   Eω0 τR −1 ≤θ →1 P θ ≤ F (R)   Eω0 τer −1 ≤θ →1 P θ ≤ 2 r G(r) P(θ−1 ≤ k(n)pω2n (0, 0) ≤ θ) → 1   1 + d(0, Xn ) P ∗ θ−1 < <θ →1 f (n) as θ → ∞, (4.8) as θ → ∞, (4.9) as θ → ∞, (4.10) as θ → ∞. 
(4.11) Theorem 4.4 There exist positive constants c and C such that for all n ≥ 1, R ≥ 1, r ≥ 1, cF (R) ≤ E(Eω0 τR ) ≤ CF (R), cr 2 G(r) ≤ E(Eω0 τer ) ≤ Cr2 G(r), ck(n)−1 ≤ E(pω2n (0, 0)) ≤ Ck(n)−1 , cf (n) ≤ E(Eω0 d(0, Xn )). (4.12) (4.13) (4.14) (4.15) Theorem 4.5 There exist αi < ∞, and a subset Ω0 with P(Ω0 ) = 1 such that the following statements hold. (a) For each ω ∈ Ω0 and x ∈ Z2 there exists Nx (ω) < ∞ such that (log log n)−α1 k(n)−1 ≤ pω2n (x, x) ≤ (log log n)α1 k(n)−1 , n ≥ Nx (ω). (4.16) In particular, ds (U) = 16/13, P-a.s. (b) For each ω ∈ Ω0 and x ∈ Z2 there exists Rx (ω) < ∞ such that Hence (log log R)−α2 F (R) ≤ Eωx τR ≤ (log log R)α2 F (R), R ≥ Rx (ω), (log log r)−α3 r 2 G(r) ≤ Eωx τer ≤ (log log r)α3 r 2 G(r)2 , r ≥ Rx (ω). log Eωx τR 13 = , R→∞ log R 5 dw (U) = lim 25 13 log Eωx τer = . r→∞ log r 4 lim (4.17) (4.18) (4.19) (c) Let Yn = max0≤k≤n d(0, Xk ). For each ω ∈ Ω0 and x ∈ Z2 there exist N x (ω), Rx (ω) such that Pωx (N x < ∞) = Pωx (Rx < ∞) = 1, and such that (log log n)−α4 f (n) ≤ Yn (ω) ≤ (log log n)α4 f (n), −α4 n ≥ N x (ω), α4 (log log R) F (R) ≤ τR (ω) ≤ (log log R) F (R), (log log r)−α4 r 2 G(r) ≤ τer (ω) ≤ (log log r)α4 r 2 G(r), R ≥ Rx (ω), r ≥ Rx (ω). (4.20) (4.21) (4.22) (d) Let Wn = {X0 , X1 , . . . , Xn } and let |Wn | denote its cardinality. For each ω ∈ Ω0 and x ∈ Z2 , 8 log |Wn | = , Pωx -a.s.. (4.23) lim n→∞ log n 13 The papers [BJKS08, KM08] studied random graphs for which information on ball volumes and resistances were only available from one point. These conditions were not strong enough to bound Eω0 d(0, Xn ) or peωT (x, y) – see [BJKS08, Example 2.6]. Since the UST is stationary, we have the same estimates available from every point x, and this means that stronger conclusions are possible. Theorem 4.6 There exist N0 (ω) with P(N0 < ∞) = 1, α > 0 and for all q > 0, Cq such that Eω0 d(0, Xn )q ≤ Cq f (n)q (log n)αq for n ≥ N0 (ω). (4.24) Further, for all n ≥ 1, E(Eω0 d(0, Xn )q ) ≤ Cq f (n)q (log n)αq . (4.25) Write Φ(T, x, x) = 0, and for x 6= y let Φ(T, x, y) = d(x, y) . G((T /d(x, y))1/2) (4.26) Theorem 4.7 There exists a constant α > 0 and r.v. Nx (ω) with 2 P(Nx ≥ n) ≤ Ce−c(log n) (4.27) such that provided F (T ) ∨ |x − y| ≥ Nx (ω) and T ≥ d(x, y), then writing A = A(x, y, T ) = C(log(|x − y| ∨ F (T )))α ,     A 1 exp − AΦ(T, x, y) ≤ peT (x, y) ≤ exp − A−1 Φ(T, x, y) . Ak(T ) k(T ) (4.28) Remark 4.8 If we had G(n) ≍ n5/4 then since df = 8/5 and dw = 1 + df , we would have Φ(T, x, y) ≍  d(x, y)dw 1/(dw −1) T , (4.29) so that, except for the logarithmic term A, the bounds in (4.28) would be of the same form as those obtained in the diffusions on fractals literature. 26 Before we prove Theorems 4.3 – 4.7, we summarize some properties of the exit times τR . Proposition 4.9 Let λ ≥ 1 and x ∈ Z2 . (a) If R, R/(4λ) ∈ J(x, λ) then c1 (λ)F (R) ≤ Eωx τ (x, R) ≤ C2 (λ)F (R). (4.30) (b) Let 0 < ε ≤ c3 (λ). Suppose that R, εR, c4 (λ)εR ∈ J(x, λ). Then Pωx (τ (x, R) < c5 (λ)F (εR)) ≤ C6 (λ)ε. (4.31) Proof. This follows directly from [BJKS08, Proposition 2.1] and [KM08, Proposition 3.2, 3.5].  Proof of Theorems 4.3, 4.4, and 4.5. All these statements, except those relating to τer , follow immediately from Proposition 4.2 and Propositions 1.3 and 1.4 and Theorem 1.5 of [KM08]. Thus it remains to prove (4.9), (4.13), (4.18) and (4.22). By the stationarity of U it is enough to consider the case x = 0. Recall that Ur denotes the connected component of 0 in U ∩ B(0, r), and therefore τer = min{n ≥ 0 : Xn 6∈ Ur }. 
Let H1 (r, λ) = {Bd (0, λ−1 G(r)) ⊂ Ur ⊂ Bd (0, λG(r)}. On H1 (r, λ) we have τλ−1 G(r) ≤ τer ≤ τλG(r) , (4.32) while by Theorem 3.1 and Proposition 3.5 we have for r ≥ 1, λ ≥ 1, 2/3 P(H1 (r, λ)c ) ≤ e−cλ . The upper bound in (4.9) will follow from (4.13). For the lower bound, on H1 (r, λ) we have, writing R = λ−1 G(r), Eω0 τR F (R) Eω0 τer ≥ · , r 2 G(r) F (R) r 2 G(r) (4.33) while F (R)/r 2G(r) ≥ λ−3 by Lemma 2.1. So P  E 0 τe   E0 τ  ω r ω R c −4 −1 ≤ P(H (r, λ) ) + P , < λ < λ 1 r 2 G(r) F (R) (4.34) and the bound on the lower tail in (4.9) follows from (4.8). We now prove the remaining statements in Theorem 4.5. Let rk = ek , and λk = a(log k)3/2 , and choose a large enough so that X 2/3 exp(−cλk ) < ∞. k 27 Hence by Borel-Cantelli there exists a r.v. K(ω) with P(K < ∞) = 1 such that H1 (rk , λk ) holds for all k ≥ K. So if k is sufficiently large, and α2 is as in (4.17), Eω0 τerk ≤ Eω0 τλk G(rk ) ≤ [log log(λk G(rk ))]α2 λk G(rk )g(λk G(rk ))2 ≤ C(log k)α3 rk2 G(rk ) = C(log log rk )α3 rk2 G(rk ). Since τer is monotone in r, the upper bound in (4.18) follows. A very similar argument gives the lower bound, and also (4.22). It remains to prove (4.13). A general result on random walks (see e.g. [BJKS08], (2.21)) implies that X µx ≤ Cr2 Reff (0, Urc ). Eω0 τer ≤ Reff (0, Urc ) x∈Ur Let z be the first point on the path γ(0, ∞) outside B(0, r). Then Reff (0, Urc ) ≤ d(0, z), and cr+1 ≤ CG(r). Hence since γ(0, ∞) has the law of an infinite LERW, Ed(0, z) ≤ EM E(Eω0 τer ) ≤ Cr2 G(r). For the lower bound, let H2 (r, λ) = {λ−1 G(r), (2λ)−2 G(r) ∈ J(λ)}. Choose λ0 large enough so that P(H1 (λ0 , r) ∩ H2 (λ0 , r)) ≥ 21 . If H2 (r, λ0 ) holds then by Proposition 4.9, writing R = λ−1 0 G(r), Eω0 τR ≥ c(λ0 )Rg(R)2 . So, since Rg(R)2 ≥ c(λ0 )r 2 G(r), EEω0 τer ≥ E(Eω0 τer ; H1 (λ0 , r) ∩ H2 (λ0 , r)) ≥ E(Eω0 τR ; H1 (λ0 , r) ∩ H2 (λ0 , r)) ≥ 21 c(λ0 )Rg(R)2 ≥ c(λ0 )r 2 G(r).  We now turn to the proofs of Theorems 4.6 and 4.7, and begin with a slight simplification of Lemma 1.1 of [BB89]. Lemma 4.10 There exists c0 > 0 such that the following holds. Suppose we have nonnegative r.v. ξi which satisfy, for some t0 > 0, P(ξi ≤ t0 |ξ1 , . . . , ξi−1 ) ≤ 21 . Then P( n X ξi < T ) ≤ exp(−c0 n + T /t0 ). i=1 28 (4.35) Proof. Write Fi = σ(ξ1 , . . . ξi). Let θ = 1/t0 , and let e−c0 = 12 (1 + e−1 ). Then E(e−θξi |Fi−1 ) ≤ P(ξi < t0 |Fi−1 ) + e−θt0 P(ξi > t0 |Fi−1) = P(ξi < t0 |Fi−1)(1 − e−θt0 ) + e−θt0 ≤ 21 (1 + e−θt0 ) = e−c0 . Then P( n X ξi < T ) = P(e−θ Pn i=1 ξi > e−θT ) ≤ eθT E(e−θ Pn i=1 ξi ) ≤ eθT e−nc0 . i=1  We also require the following lemma which is an immediate consequence of the definitions of the functions F and G. Lemma 4.11 Let R ≥ 1, T ≥ 1, and b0 = R . G((T /R)1/2 ) (4.36) Then, R/b0 = G((T /R)1/2 ) = f (T /b0 ), b ≤ b0 ⇔ T /b ≤ F (R/b) ⇔ f (T /b) ≤ R/b. (4.37) (4.38) Also, if θ < 1 and θR ≥ 1, then c7 θ3 F (R) ≤ F (θR) ≤ C8 θ2 F (R), c7 θ 1/2 f (R) ≤ f (θR) ≤ C8 θ 1/3 (4.39) f (R). (4.40) For x ∈ Z2 , let Ax (λ, n) = {ω : R′ ∈ J(y, λ) for all y ∈ B(x, n2 ), 1 ≤ R′ ≤ n2 }. and let A(λ, n) = A0 (λ, n). Proposition 4.12 Let λ ≥ 1 and suppose that 1 ≤ R ≤ n, T ≥ C9 (λ)R, (4.41) and A(λ, n) occurs. Then, Pω0 (τR  < T ) ≤ C10 (λ) exp −c11 (λ) 29 R G((T /R)1/2 )  . (4.42) Proof. In this proof, the constants ci (λ), Ci (λ) for 1 ≤ i ≤ 8 will be as in Proposition 4.9 and Lemma 4.11, and c0 will be as in Lemma 4.10. We work with the probability Pω0 , so that X0 = 0. 
Let b0 = R/G((T /R)1/2 ) be as in (4.36), and define the quantities θ = 41 C8−1 c0 c5 (λ)ε2 , R′ = R/m, ε = (2C6 (λ))−1 , m = ⌊θb0 ⌋, C ∗ (λ) = 2θ−1 , t0 = c5 (λ)F (εR′). We now establish the key facts that we will need about the quantities defined above. We can assume that b0 ≥ C ∗ (λ) for if b0 ≤ C ∗ (λ), then by adjusting the constants C10 (λ) and c11 (λ) we will still obtain (4.42). Therefore, 1 ≤ 21 θb0 ≤ m ≤ θb0 . (4.43) Furthermore, since m/θ ≤ b0 , θR/m = G((T /R)1/2 ) ≥ 1 and θ/ε < 1, we have by Lemma 4.11 that T /m ≤ θ−1 F (θR/m) ≤ C8 θε−2 F (εR/m) ≤ 41 c0 c5 (λ)F (εR/m) = 41 c0 t0 . Therefore, T /t0 < 12 c0 m. (4.44) Finally, we choose C9 (λ) ≥ g(c4(λ)−1 ε−1 θ)2 , so that if T /R ≥ C9 (λ), then G((T /R)1/2 ) ≥ c4 (λ)−1 ε−1 θ, and therefore c4 (λ)εR′ ≥ c4 (λ)εRθ−1 b−1 0 ≥ 1. (4.45) Having established (4.43), (4.44) and (4.45), the proof of the Proposition is straightforward. Let Fn = σ(X0 , . . . , Xn ). Define stopping times for X by T0 = 0, Tk = min{j ≥ Tk−1 : Xj 6∈ Bd (XTk−1 , R′ − 1)}, and let ξk = Tk − Tk−1 . Note that Tm ≤ τR , and that if k ≤ m, then XTk ∈ Bd (0, kR′ ) ⊂ Bd (0, n) ⊂ B(0, n). Therefore, since (4.45) holds and A(λ, n) occurs, we can apply Proposition 4.9 to obtain that Pω0 (ξk < c5 (λ)F (εR′)|Fk−1 ) ≤ C6 (λ)ε = 21 . 30 Hence by Lemma 4.10 and (4.44), Pω0 (τR < T ) ≤ Pω0 ( m X ξi < T ) 1 ≤ exp(−c0 m + T /t0 ) ≤ exp(−c0 /2m)   R . ≤ exp −c11 (λ) G((T /R)1/2 )  Proof of Theorem 4.6 We will prove Theorem 4.6 with T replacing n. Let R = f (T ); we can assume that T is large enough so that R ≥ 2. We also let C9 (λ), C10 (λ) and c11 (λ) be as in Proposition 4.12, and let p > 0 be such that Ci (λ) ≤ Cλp , i = 9, 10 and c11 (λ) ≥ cλ−p . We have Eω0 d(0, XT )q q ≤R + Eω0 ≤ Rq + R ∞ X 1(ek−1 R≤d(0,XT )<ek R) d(0, XT )q k=1 ∞ X kq q  e Pω0 (ek−1 R ≤ d(0, XT ) ≤ ek R). (4.46) k=1 By (4.2) we have 1/9 P(A(λ, n)c ) ≤ 4n3 e−cλ ≤ exp(−cλ1/9 + C log n). (4.47) P Let λk = k 10 . Then k P(A(λk , ek )c ) < ∞, and so by Borel-Cantelli there exists K0 (ω) such that A(λk , ek ) holds for all k ≥ K0 . Furthermore, we have P(K0 ≥ n) ≤ Ce−cn 10/9 . (4.48) Suppose now that k ≥ K0 . To bound the sum (4.46), we consider two ranges of k. If C9 (λk )ek−1 R > T , then we let Ak = Bd (0, ek R)−Bd (0, ek−1R), and by the Carne-Varopoulos bound (see [Car85]), X Pω0 (XT = y) ekq Pω0 (ek−1 R ≤ d(0, XT ) ≤ ek R) ≤ ekq y∈Ak kq ≤e X C exp(−d(0, y)2/2T ) y∈Ak kq ≤ Ce (ek R)2 exp(−(ek−1 R)2 /2T ) ≤ C exp(−C9 (λk )−1 ek R + 2 log(ek R) + kq) ≤ C exp(−ck −10p ek + Cq k). 31 (4.49) On the other hand, if C9 (λk )ek−1 R ≤ T , then we let m = ⌈k + log R⌉, so that ek R ≤ em < e R. Then by Proposition 4.12, k+1 ekq Pω0 (ek−1 R ≤ d(0, XT ) ≤ ek R) ≤ ekq Pω0 (τek−1 R < T )   ek−1 R kq ≤ e C10 (λm ) exp −c11 (λm ) G((e−k+1 T /R)1/2 )   R kq 10p −10p k ≤ e Cm exp −cm e G((T /R)1/2 ) ≤ C(k + log R)10p exp(−c(k + log R)−10p ek + kq). (4.50) Let k1 = 20p log log R. Then if k ≥ k1 , (k + log R)10p ≤ (k + ek/(20p) )10p ≤ Cek/2 . Hence for k ≥ k1 , ekq Pω0 (ek−1 R ≤ d(0, XT ) ≤ ek R) ≤ C exp(−cek/2 + Cq k). (4.51) Let K ′ = K0 ∨ k1 . Then since the series given by (4.49) and (4.51) both converge, ∞ X kq e Pω0 (ek−1 R k ≤ d(0, XT ) ≤ e R) ≤ k=1 ′ −1 K X k=1 K ′q ≤e ekq + Cq + Cq ≤ eK0 q + (log R)20pq + Cq . Hence since R ≤ T , we have that for all T ≥ N0 = ee K0 Eω0 d(0, XT )q ≤ Cq Rq ((log T )q + (log T )20pq ), so that (4.24) holds. Taking expectations in (4.52) and using (4.48) gives (4.25). 
Remark 4.13 It is natural to ask if (4.25) holds without the term in log T, as with the averaged estimates in Theorem 4.4. It seems likely that this is the case; such an averaged estimate was proved for the incipient infinite cluster on regular trees in [BK06, Theorem 1.4(a)]. The key to obtaining such a bound is to control the exit times τ_{e^k R}; this was done above using the events A(λ, n), but this approach is far from optimal. The argument of Proposition 4.12 goes through if only a positive proportion of the points X_{T_k} are at places where the estimate (4.31) can be applied. This idea was used in [BK06] – see the definition of the event G_2(N, R) on page 48. Suppose we say that B_d(x, R) is λ-bad if R ∉ J(x, λ). Then it is natural to conjecture that there exists λ_c such that for λ > λ_c the bad balls fail to percolate on U. Given such a result (and suitable control on the size of the clusters of bad balls) it seems plausible that the methods of this paper and [BK06] would then lead to a bound of the form

E(E^0_ω d(0, X_T)^q) ≤ C_q f(T)^q.

We now use the arguments in [BCK05] to obtain full heat kernel bounds for p_T(x, y) and thereby prove Theorem 4.7. Since the techniques are fairly standard, we only give full details for the less familiar steps.

Lemma 4.14 Suppose A(λ, n) holds. Let x, y ∈ B(0, n). Then

(a) p_T(x, y) ≤ C_{12}(λ) k(T)^{-1}, if 1 ≤ T ≤ F(n).   (4.53)

(b) p̃_T(x, y) ≥ c_{13}(λ) k(T)^{-1}, if 1 ≤ T ≤ F(n) and d(x, y) ≤ c_{14}(λ) f(T).   (4.54)

Proof. If x = y then (a) is immediate from [KM08, Proposition 3.1]. Since p_T(x, y)^2 ≤ p̃_T(x, x) p̃_T(y, y), the general case then follows.

(b) The bound when x = y is given by [KM08, Proposition 3.3(2)]. We also have, by [KM08, Proposition 3.1],

|p̃_T(x, y) - p̃_T(x, z)|^2 ≤ (c/T) d(y, z) p_{2⌊T/2⌋}(x, x).

Therefore, using (a),

p̃_T(x, y) ≥ p̃_T(x, x) - |p̃_T(x, x) - p̃_T(x, y)|
  ≥ c(λ) k(T)^{-1} - (C(λ) d(x, y) T^{-1} k(T)^{-1})^{1/2}
  = c(λ) k(T)^{-1} (1 - (C(λ) d(x, y) T^{-1} k(T))^{1/2}).

Since k(T)/T = f(T)^{-1}, (4.54) follows. □
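For later use we record how the two facts about k(T) that appear above and in the next proof fit together. This is a sketch under the assumptions (consistent with the formulas of this section, but not restated here as definitions) that F(R) = R g(R)^2, that f is the inverse of F, and that k(T) = g(f(T))^2. From F(f(T)) = T we get T = f(T) g(f(T))^2 = f(T) k(T), so that

k(T)/T = f(T)^{-1},

which is the identity used at the end of the proof of Lemma 4.14. Similarly, when r is comparable to f(t), the quantity k(t)^{-1} g(r)^2 = g(r)^2 / g(f(t))^2 is bounded below by a constant depending only on the comparison constants, which is how the chaining argument below passes from (4.61) to (4.62).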
Recall that Φ(T, x, x) = 0, and for x ≠ y,

Φ(T, x, y) = d(x, y) / G((T/d(x, y))^{1/2}).

Proposition 4.15 Suppose that A(λ, n) holds. Let x, y ∈ B(0, n). If d(x, y) ≤ T ≤ F(n), then

(c(λ)/k(T)) exp(-C(λ) Φ(T, x, y)) ≤ p̃_T(x, y) ≤ (C(λ)/k(T)) exp(-c(λ) Φ(T, x, y)).   (4.55)

Proof. Let R = d(x, y). In this proof we take c_{13}(λ) and c_{14}(λ) to be as in (4.54). We will choose a constant C^*(λ) ≥ 2 later. Suppose first that R ≤ T ≤ C^*(λ)R. Then the upper bound in (4.55) is immediate from the Carne-Varopoulos bound. If R + T is even then we have p_T(x, y) ≥ 4^{-T}, and this gives the lower bound.

We can therefore assume that T ≥ C^*(λ)R. The upper bound follows from the bounds (4.53) and (4.42) by the same argument as in [BCK05, Proposition 3.8]. It remains to prove the lower bound in the case when T ≥ C^*(λ)R, and for this we use a standard chaining technique which derives (4.55) from the 'near diagonal lower bound' (4.54). For its use in a discrete setting see for example [BCK05, Section 3.3]. As in Lemma 4.11, we set

b_0 = R / G((T/R)^{1/2}).   (4.56)

If b_0 < 1 then we have from Lemma 4.11 that R ≤ C_8 b_0^{2/3} f(T). If C_8 b_0^{2/3} ≤ c_{14}(λ) then R ≤ c_{14}(λ) f(T) and the lower bound in (4.55) follows from (4.54). We can therefore assume that C_8 b_0^{2/3} > c_{14}(λ). We will choose θ > 2(c_{14}/C_8)^{-3/2} later; this then implies that θb_0 ≥ 2. Let m = ⌊θb_0⌋; we have (1/2)θb_0 ≤ m ≤ θb_0. Let r = R/m, t = T/m; we will require that both r and t are greater than 4. Choose integers t_1, ..., t_m so that |t_i - t| ≤ 2 and Σ_i t_i = T. Choose a chain x = z_0, z_1, ..., z_m = y of points so that d(z_{i-1}, z_i) ≤ 2r, and let B_i = B_d(z_i, r). If x_i ∈ B_i for 1 ≤ i ≤ m then d(x_{i-1}, x_i) ≤ 4r. We choose θ so that we have

p̃_{t_i}(x_{i-1}, x_i) ≥ c_{13}(λ) k(t)^{-1} whenever x_{i-1} ∈ B_{i-1}, x_i ∈ B_i.   (4.57)

By (4.54) it is sufficient for this that

4R/m = 4r ≤ c_{14}(λ) f(t/2) = c_{14}(λ) f(T/(2m)).   (4.58)

Since 2m/θ ≥ b_0, Lemma 4.11 implies that f(θT/(2m)) ≥ θR/(2m), and therefore

4R/m ≤ 8θ^{-1} f(θT/(2m)) ≤ C θ^{-1/3} f(T/(2m)),   (4.59)

and so taking θ = max(2(c_{14}/C_8)^{-3/2}, (C/c_{14}(λ))^3) gives (4.58). The condition T ≥ C^*(λ)R implies that f(T/b_0) = R/b_0 ≥ G(C^*(λ)^{1/2}), so taking C^* large enough ensures that both r and t are greater than 4.

The Chapman-Kolmogorov equations give

p̃_T(x, y) ≥ Σ_{x_1 ∈ B_1} ··· Σ_{x_{m-1} ∈ B_{m-1}} p_{t_1}(x, x_1) μ_{x_1} p_{t_2}(x_1, x_2) μ_{x_2} ··· p_{t_{m-1}}(x_{m-2}, x_{m-1}) μ_{x_{m-1}} p̃_{t_m}(x_{m-1}, y).   (4.60)

Since x_{m-1} ∈ B_{m-1} we have p̃_{t_m}(x_{m-1}, y) ≥ c_{13}(λ) k(t)^{-1} ≥ c_{13}(λ) k(T)^{-1}. Note that exactly one of p_t(x, y) and p_{t+1}(x, y) can be non-zero. Using this, and (4.57), we deduce that for 1 ≤ i ≤ m - 1,

Σ_{x_i ∈ B_i} p_{t_i}(x_{i-1}, x_i) μ_{x_i} ≥ c(λ) k(t)^{-1} g(r)^2.   (4.61)

The choice of m implies that c′(λ) f(t) ≤ r ≤ c(λ) f(t), and therefore k(t)^{-1} g(r)^2 = g(r)^2 / g(f(t))^2 ≥ c(λ). So we obtain

p̃_T(x, y) ≥ k(T)^{-1} c(λ)^m ≥ k(T)^{-1} exp(-c(λ) R / G((T/R)^{1/2})).   (4.62) □

Proof of Theorem 4.7 As in the proof of Theorem 4.6, we have by (4.2)

P(A(λ, n)^c) ≤ 4n^3 e^{-cλ^{1/9}} ≤ exp(-cλ^{1/9} + C log n).

Therefore if we let λ_n = (log n)^{18}, then by Borel-Cantelli, for each x ∈ Z^2 there exists N_x such that A_x(λ_n, n) holds for all n ≥ N_x. Further, we have

P(N_x ≥ n) ≤ C e^{-c(log n)^2}.

Let x, y ∈ Z^2 and T ≥ 1. To apply the bound in Proposition 4.15 we need to find n such that T ≤ F(n), y ∈ B(x, n) and n ≥ N_x. Hence if F(T) ∨ |x - y| ≥ N_x we can take n = F(T) ∨ |x - y|, to obtain (4.55) with constants c(λ_n) ≥ c(log n)^{-18p}. Choosing α suitably then gives (4.28). □

Remark 4.16 If both d(x, y) = R and T are large then, since G(r) ≈ r^{5/4} and d_w = 13/5,

Φ(T, x, y) ≃ R ((T/R)^{1/2})^{-5/4} = R^{13/8} / T^{5/8} = (R^{d_w}/T)^{1/(d_w - 1)}.

Thus the term in the exponent takes the usual form one expects for heat kernel bounds on a regular graph with fractal growth – see the conditions UHK(β) and LHK(β) on page 1644 of [BCK05].

Acknowledgment The first author would like to thank Adam Timar for some valuable discussions on stationary trees in Z^d. The second author would like to thank Greg Lawler for help in proving Lemma 2.4.

References

[AF] D. Aldous and J. Fill. Reversible Markov Chains and Random Walks on Graphs. Book in preparation. http://www.stat.berkeley.edu/~aldous/RWG/book.html.

[Bar04] M.T. Barlow. Which values of the volume growth and escape time exponent are possible for a graph? Rev. Mat. Iberoamericana 20 (2004), no. 1, 1–31.

[BB89] M.T. Barlow and R.F. Bass. The construction of Brownian motion on the Sierpinski carpet. Ann. Inst. H. Poincaré 25 (1989), 225–257.

[BCK05] M.T. Barlow, T. Coulhon and T. Kumagai. Characterization of sub-Gaussian heat kernel estimates on strongly recurrent graphs. Comm. Pure Appl. Math. 58 (2005), 1642–1677.

[BJKS08] M.T. Barlow, A. Járai, T. Kumagai and G. Slade. Random walk on the incipient infinite cluster for oriented percolation in high dimensions. Comm. Math. Phys. 278 (2008), no. 2, 385–431.

[BK06] M.T. Barlow and T. Kumagai. Random walk on the incipient infinite cluster on trees. Illinois J. Math. 50 (2006), no. 1-4, 33–65.

[BM09] M.T. Barlow and R. Masson. Exponential tail bounds for loop-erased random walk in two dimensions. Preprint (2009), arXiv:0910.5015.
[BKPS04] I. Benjamini, H. Kesten, Y. Peres and O. Schramm. Geometry of the uniform spanning forest: transitions in dimensions 4, 8, 12, .... Ann. of Math. (2) 160 (2004), no. 2, 465–491.

[BLPS01] I. Benjamini, R. Lyons, Y. Peres and O. Schramm. Uniform spanning forests. Ann. Probab. 29 (2001), no. 1, 1–65.

[Car85] T.K. Carne. A transmutation formula for Markov chains. Bull. Sci. Math. 109 (1985), 399–405.

[DS84] P.G. Doyle and J.L. Snell. Random Walks and Electric Networks. Mathematical Association of America, Washington DC, 1984. http://xxx.lanl.gov/abs/math/0001057.

[Häg95] O. Häggström. Random-cluster measures and uniform spanning trees. Stoch. Proc. Appl. 59 (1995), 267–275.

[Ken00] R. Kenyon. The asymptotic determinant of the discrete Laplacian. Acta Math. 185 (2000), no. 2, 239–286.

[KN09] G. Kozma and A. Nachmias. The Alexander-Orbach conjecture holds in high dimensions. Invent. Math. 178 (2009), no. 3, 635–654.

[KM08] T. Kumagai and J. Misumi. Heat kernel estimates for strongly recurrent random walk on random media. J. Theoret. Probab. 21 (2008), no. 4, 910–935.

[Law91] G.F. Lawler. Intersections of Random Walks. Probability and its Applications. Birkhäuser Boston Inc., Boston, MA, 1991.

[LL] G.F. Lawler and V. Limic. Random Walk: A Modern Introduction. Book in preparation. http://www.math.uchicago.edu/~lawler/books.html.

[LSW04] G.F. Lawler, O. Schramm and W. Werner. Conformal invariance of planar loop-erased random walks and uniform spanning trees. Ann. Probab. 32 (2004), no. 1B, 939–995.

[Lyo98] R. Lyons. A bird's-eye view of uniform spanning trees and forests. Microsurveys in Discrete Probability (Princeton, NJ, 1997), 135–162, DIMACS Ser. Discrete Math. Theoret. Comput. Sci. 41, Amer. Math. Soc., Providence, RI, 1998.

[LP09] R. Lyons and Y. Peres. Probability on Trees and Networks. Book in preparation. http://mypage.iu.edu/~rdlyons/prbtree/prbtree.html.

[Mas09] R. Masson. The growth exponent for planar loop-erased random walk. Electron. J. Probab. 14 (2009), paper no. 36, 1012–1073.

[NW59] C. St J.A. Nash-Williams. Random walks and electric currents in networks. Proc. Camb. Phil. Soc. 55 (1959), 181–194.

[Pem91] R. Pemantle. Choosing a spanning tree for the integer lattice uniformly. Ann. Probab. 19 (1991), no. 4, 1559–1574.

[PR04] Y. Peres and D. Revelle. Scaling limits of the uniform spanning tree and loop-erased random walk on finite graphs. Preprint (2004), available at http://front.math.ucdavis.edu/0410430.

[Sch00] O. Schramm. Scaling limits of loop-erased random walks and uniform spanning trees. Israel J. Math. 118 (2000), 221–288.

[Sch08] J. Schweinsberg. Loop-erased random walk on finite graphs and the Rayleigh process. J. Theoret. Probab. 21 (2008), no. 2, 378–396.

[Sch09] J. Schweinsberg. The loop-erased random walk and the uniform spanning tree on the four-dimensional discrete torus. Probab. Theory Related Fields 144 (2009), no. 3-4, 319–370.

[Wil96] D.B. Wilson. Generating random spanning trees more quickly than the cover time. In Proceedings of the Twenty-Eighth Annual ACM Symposium on the Theory of Computing (Philadelphia, PA, 1996), 296–303, ACM, New York, 1996.