
On the distribution of rank of a random matrix over a finite field

C. Cooper∗
School of Mathematical Sciences
University of North London
London N7 8DB, UK

September 12, 2005

∗ Research supported by the STORM Research Group

Abstract

Let M = (m_ij) be a random n × n matrix over GF(t) in which each matrix entry m_ij is independently and identically distributed, with Pr(m_ij = 0) = 1 − p(n) and Pr(m_ij = r) = p(n)/(t − 1) for r ≠ 0. If we choose t ≥ 3 and condition on M having no zero rows or columns, then the probability that M is non-singular tends to $c_t = \prod_{j=1}^{\infty}(1 - t^{-j})$, provided p ≥ (log n + d)/n, where d → −∞ slowly.

1 Introduction
For convenience denote the elements of the finite field GF(t) by 0, 1, ..., t − 1. We consider a space of random (m × n)-matrices over GF(t), which we denote by M(m, n, p; t). Let M = (m_ij) be an (m × n)-matrix with entries in GF(t), independently and identically distributed as

$$\Pr(m_{ij} = r) = \begin{cases} 1 - p, & r = 0 \\[1mm] \dfrac{p}{t-1}, & r \in \{1, 2, \ldots, t-1\}. \end{cases} \qquad (1)$$

The simplest case is where p = (t − 1)/t and thus all matrices are equiprobable. Let
m = n. If the first l columns of the matrix M are linearly independent, they span a vector space of dimension l and size t^l. The probability that the next column avoids this space is 1 − t^l/t^n, and thus

$$\Pr(M \text{ is non-singular}) = \prod_{l=1}^{n}\left(1 - \frac{1}{t^l}\right). \qquad (2)$$

In the paper The Rank of Sparse Random Matrices Over Finite Fields, J. Blömer, R. Karp and E. Welzl [BKW] posed a question on the existence of a threshold p(n) for the columns of a random (n × n)-matrix over GF(2) to be linearly independent with constant
probability c. Motivated by this paper, Cooper [CC] proved that we can choose c =
0.28879. This value of c is the limiting value of (2) when t = 2. This result holds
provided p(n) does not tend to either zero or one too rapidly. Specifically, let p0 =
(log n + d(n))/n where d(n) → ∞ arbitrarily slowly. We require p0 ≤ p ≤ 1 − p0 . The
result is also true for d(n) ≥ −(log log n), conditional on no zero rows or columns and
at most one unity (all 1’s) row or column in the matrix.
For finite m, n the number η(m, n, k, t) of (m × n)-matrices of rank n − k is given
by a standard formula (13). Thus the probability of rank n − k in the equiprobable
model is η(m, n, k, t)/tnm . The general asymptotics for m = n − s, s constant, in
the equiprobable model, are obtained by the method of (14),(17) in Section 5 or by a
recurrence argument [KLS],[Ko]. They are given by the following theorem.

Theorem 1 Let M ∈ M(n − s, n, (t − 1)/t; t). Let pn (k, t) be the probability that
rank(M ) = n − s − k, then
$$\lim_{n\to\infty} p_n(k,t) = \pi_s(k,t) = \begin{cases} \displaystyle\prod_{j=s+1}^{\infty}\left(1 - \frac{1}{t^j}\right), & k = 0 \\[2mm] \displaystyle\frac{\prod_{j=s+k+1}^{\infty}\left(1 - \frac{1}{t^j}\right)}{t^{k(s+k)}\,\prod_{j=1}^{k}\left(1 - \frac{1}{t^j}\right)}, & k \ge 1. \end{cases} \qquad (3)$$
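To make (3) concrete, the following sketch (our own helper; the infinite product is truncated at j = 200) evaluates π_s(k, t) and checks that it is a probability distribution in k. For t = 2 and s = 0 it reproduces the constant 0.28879 quoted above.

def pi_s(k, t, s=0, J=200):
    # pi_s(k, t) of (3), with the infinite product truncated at j = J.
    num = 1.0
    for j in range(s + k + 1, J):
        num *= 1 - t ** (-j)
    den = float(t ** (k * (s + k)))
    for j in range(1, k + 1):
        den *= 1 - t ** (-j)
    return num / den

print(pi_s(0, 2))                          # ~0.28879, the constant c of [CC]
print(sum(pi_s(k, 2) for k in range(12)))  # ~1.0, so (3) sums to 1 over k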

It is difficult to do justice to, or even to determine the full extent of the work on random
matrices over finite structures produced by researchers in Russia and the Ukraine. We
can point to the work (among others) of G. V. Balakin, V. F. Kolchin, I. N. Kovalenko,
A. A. Levitskaya, V. I. Masol and M. Savchuk. A survey of many results is given in
[KL].
A remarkable early result of I. N. Kovalenko [K1] shows that the asymptotic distribu-
tion of rank of random matrices in M(n − s, n, p; 2) is invariant for constant p. That
is, given δ > 0, (3) holds for δ ≤ p ≤ 1 − δ.
Another notable early result was that of Balakin [B], who studied random (N × n)-
matrices A over GF(2), where N/n = a + O(1/n) and 0 < a < 1. Balakin proved that

if p = (log n + d)/n, then

$$\Pr(\text{rank of } A \text{ is } N - k) \sim \frac{(ae^{-d})^k}{k!}\, e^{-ae^{-d}}.$$
See also Kolchin [Ko] for a full discussion of this case.
Many of the asymptotic results concerning the rank of random matrices M are given in the book Selected Problems in Probabilistic Combinatorics [KLS] by Kovalenko, Levitskaya and Savchuk. For example, for GF(2) the invariant distribution of Theorem 1 holds for a generalization of M(m, n, p; 2) to the case where the elements of M, although independent, are not identically distributed, but rather Pr(m_ij = 0) = 1 − p_ij, and it is only required that $\frac{\log(\omega_n n)}{n} \le p_{ij} \le 1 - \frac{\log(\omega_n n)}{n}$, where ω_n → ∞. This result also extends to GF(t). The authors also prove results for the case where the matrix entries come from an arbitrary finite ring.
Theorem 2 summarizes results for GF(t), for the case of a random ((n − s) × n)-matrix where (n − s)/n → 1. As we have pointed out above, many of these are established results. The novel results proved in this paper are (in (i)) the sharp threshold (d(n) constant), and (in (ii)) the case of d(n) constant or tending slowly to −∞.

Theorem 2 Let GF(t) be a finite field, t ≥ 3 and t = O(log log n). Let s be a non-negative integer, and let M ∈ M((n − s), n, p; t) be an ((n − s) × n)-matrix with entries independently and identically distributed according to (1). Let c_t = π_s(0, t). Let p = (log n + d(n))/n where d(n) ≥ −log(log n/9t).

(i)
$$\lim_{n\to\infty}\Pr(M \text{ is non-singular}) = \begin{cases} 0, & d(n) \to -\infty \\ c_t\, e^{-2e^{-d}}, & d \text{ constant} \\ c_t, & d(n) \to \infty. \end{cases}$$

(ii) Let C be the event that there are no zero rows or columns in M. For any finite non-negative integer k,

$$\lim_{n\to\infty}\Pr(M \text{ has rank } n - s - k \mid C) = \pi_s(k, t),$$

and in particular

$$\lim_{n\to\infty}\Pr(M \text{ is non-singular} \mid C) = c_t.$$

If s → ∞ the matrix M has rank n − s whp¹ provided the condition C or the condition d(n) → ∞ of Theorem 2 holds. This follows from Corollary 5 of Section 2.

¹ whp: with high probability, i.e. with probability tending to 1 as n → ∞.

If we condition on the event C of the matrix having no zero rows or columns, then the results of Theorem 2 can be shown to hold down to $p_L \sim (\frac{1}{2}\log n + \log\log n)/n$. Below p_L, two or more columns with a single non-zero entry in the same row may occur. We do not pursue the case p → 0 here for limitations of space; see rather [CC1].
A major obstacle to proofs in this field has been the fact that the moments of the number of solutions of the associated system of homogeneous linear equations (Ax = 0) do not satisfy the Carleman condition for unique reconstruction of the probability distribution. However, it was pointed out by a referee that, very recently, Alekseychuk [A1, A2] has proved that the distribution of rank (3) is uniquely determined by the aforementioned moments. This is indeed an interesting development, and should lead to an alternative proof of Theorem 2.

1.1 Basic definitions

The definitions and calculations of the type made in Sections 1 and 2 are common throughout the papers on this subject. However, the details vary, and we repeat them here in order to give a full, as opposed to an indicative, proof.
For simplicity we restrict our initial discussion to (n × n)-matrices. Let M ∈ M(n, n, p; t). Let D = (a_1, ..., a_m) be a matrix consisting of m columns of M and let c = (c_1, ..., c_m) be a sequence of non-zero values from GF(t). The matrix M is singular if and only if there exist c and D such that

$$c_1 a_1 + c_2 a_2 + \cdots + c_m a_m = 0_n, \qquad c_i \ne 0,\ i = 1, \ldots, m. \qquad (4)$$

Because GF(t) is a field, for j ≠ 0 and according to (1),

$$\Pr(a_i j = k) = \Pr(a_i = k j^{-1}) = \Pr(a_i = k).$$

By a simple induction, for fixed non-zero c_i, i = 1, ..., m, the event given in (4) has the same probability as the zero-sum event a_1 + · · · + a_m = 0.
For r ∈ GF(t), the probability ρ_m(r) that row j of D has sum a_{j1} + · · · + a_{jm} = r is

$$\rho_m(r) = \begin{cases} \dfrac{1}{t}\left[1 + (t-1)\left(1 - \dfrac{t}{t-1}\,p\right)^m\right], & r = 0 \\[2mm] \dfrac{1}{t}\left[1 - \left(1 - \dfrac{t}{t-1}\,p\right)^m\right], & r \ne 0. \end{cases} \qquad (5)$$

This result is derived from the recurrence relations

$$\rho_m(r) = (1-p)\,\rho_{m-1}(r) + \frac{p}{t-1}\sum_{s \ne r}\rho_{m-1}(s), \qquad r \in \mathrm{GF}(t),$$
subject to the initial conditions given by (1).
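The closed form (5) can be checked directly against this recurrence. The sketch below (illustrative parameter values; variable names are ours) iterates the recurrence from ρ_0 concentrated at 0 and compares ρ_m(0) and ρ_m(r), r ≠ 0, with (5).

t, p = 5, 0.3
rho = [1.0] + [0.0] * (t - 1)       # rho_0: the empty sum is 0 with probability 1
for m in range(1, 31):
    tot = sum(rho)
    rho = [(1 - p) * rho[r] + (tot - rho[r]) * p / (t - 1) for r in range(t)]
    lam = (1 - t * p / (t - 1)) ** m
    assert abs(rho[0] - (1 + (t - 1) * lam) / t) < 1e-12   # (5), r = 0
    assert abs(rho[1] - (1 - lam) / t) < 1e-12             # (5), r != 0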
Let W_m(M) be the number of linearly dependent m-subsets of columns of M; then

$$EW_m = \binom{n}{m}(t-1)^m\,(\rho_m(0))^n.$$

For r ∈ GF(t) let N(r) be the set of indices i of c for which c_i = r, and let |N(r)| = n_r. The event (4) can be expressed as

$$\sum_{r=1}^{t-1}\ \sum_{i \in N(r)} r\, a_i = 0.$$

A solution c of the form given in (4), which we write as Dc = 0_n, extends naturally to a unique vector y ∈ (GF(t))^n satisfying My = 0_n. We simply put y_i = 0 if i does not index a column of the submatrix D of M, and y_i = c_j if a_j is column i of M. The expected number of such vectors y with partition (n_0, n_1, ..., n_{t−1}) is

$$\binom{n}{n_0}\binom{n-n_0}{n_1,\ldots,n_{t-1}}\,(\rho_{n-n_0}(0))^n. \qquad (6)$$

Let M be an ((n − s) × n)-matrix of rank n − r. Let N = {x : Mx = 0} be the null space of M. We will refer to N as the row null space, as any x ∈ N maps the rows of M to zero. By the matrix rank and nullity theorem (e.g. [Ha], Chapter 5), Rank(M) + Dimension(N(M)) = n. Similarly, the dimension of the column null space {y : yM = 0} is k = r − s.
Let C be the subset of M = M(n − s, n, p; t) consisting of those matrices with no zero rows or columns. Let C ∈ C. To prove Theorem 2, we obtain the distribution of the dimension of the row null space Y(C) of C. For lack of a better method, we are obliged to use a rather indirect approach.

We write
$$C = \begin{pmatrix} A \\ B \end{pmatrix}$$
where A is an ((n − l) × n)-matrix and B = (b_1 · · · b_{l−s})' is an ((l − s) × n)-matrix. We will choose l = log log n.
The layout of the proof of Theorem 2 in this paper is as follows.

(i) In Section 2 we prove the matrix A is non-singular whp.

(ii) In Sections 2.2 and 3 we establish the relevant structure of X(A), the null space of the rows of A.

(iii) In Section 4 we calculate E(W)_k, the expected number of linearly independent k-subsets of the null space Y(C) of C. This is obtained as the number of linearly independent k-subsets of X(A) which also lie within the null space of the rows of B.

(iv) In Section 5 we derive the distribution of the dimension of the row null space Y(C) of C. To find this distribution we use inclusion-exclusion type counting on E(W)_k over the lattice of vector subspaces of the null space of C.

It is easier to consider a subset D of M, with elements C satisfying the condition that A has no zero rows or columns, but B may have zero rows. The probability of those matrices D ∈ D \ C such that B contains a zero row is $o\left(\frac{\log^2 n}{n}\right)$, and the probability of those matrices C ∈ C \ D which have a submatrix A with zero columns which extend to non-zero columns of C is $o\left(\frac{\log^3 n}{n}\right)$. Thus |C| = (1 + o(1))|D|. All subsequent proofs refer to D.

1.2 Joint probability of no zero rows or columns

Let A ∈ M(m, n, p; t) where m = n − l. We wish to condition on the event E that A has no zero rows or columns. We show that the probability of this event is given by

$$\Pr(\text{no zero rows or columns in } A) \sim e^{-2e^{-d}}. \qquad (7)$$

The number of zero rows is binomial, B(m, (1 − p)^n), and thus

$$\Pr(\text{no zero rows}) = \left(1 - (1-p)^n\right)^m \sim e^{-e^{-d}}.$$

If X is the number of zero columns, then for constant k

$$E((X)_k \mid \text{no zero rows}) = \frac{(n)_k\left[(1-p)^k\left(1 - (1-p)^{n-k}\right)\right]^m}{\left(1 - (1-p)^n\right)^m} = (n)_k\,(1-p)^{nk}\left(1 + O(kpe^{-d})\right).$$

Thus if np = log n + d where d = o(log log n), then

$$\Pr(\text{no zero columns} \mid \text{no zero rows}) = \left(1 - (1-p)^n\right)^n\left(1 + O\!\left(\frac{\log^2 n}{n}\right)\right) = e^{-e^{-d}}\left(1 + O\!\left(\frac{\log^3 n}{n}\right)\right).$$
Thus provided we can prove Theorem 2(ii), then for d constant, Theorem 2(i) follows.
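A Monte Carlo sketch of (7) follows (parameter choices are ours, and at this size the finite-n value is still visibly above the limit): with p = (log n + d)/n, the proportion of Bernoulli(p) zero-patterns with no zero row or column should approach exp(−2e^{−d}).

import math, random

def no_zero_line(n, p):
    # True if an n x n Bernoulli(p) zero/non-zero pattern has no zero row or column.
    M = [[random.random() < p for _ in range(n)] for _ in range(n)]
    return all(any(row) for row in M) and all(any(col) for col in zip(*M))

n, d, trials = 300, 1.0, 400
p = (math.log(n) + d) / n
est = sum(no_zero_line(n, p) for _ in range(trials)) / trials
print(est, math.exp(-2 * math.exp(-d)))    # empirical value vs the limit ~0.479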

2 Linear dependencies in the matrix A
Definition 3 Let X(A) = {x : Ax = 0} be the row null space of A. Let x = (x_1, ..., x_n) and let n_i, i = 0, 1, ..., t − 1, be the number of entries x_j, j = 1, ..., n, having value i. If for all x ∈ X \ 0 and for all i = 0, 1, ..., t − 1 we have

$$\frac{n}{t}\left(1 - 4\sqrt{\frac{t\log n}{n}}\right) \le n_i \le \frac{n}{t}\left(1 + 4\sqrt{\frac{t\log n}{n}}\right)$$

we will say the non-zero elements of X are nearly uniform in the elements of GF(t).

In this section we prove the following theorem.

Theorem 4 Let 0 ≤ l ≤ log log n. Let E be the subset of M(n − l, n, p; t) consisting of those matrices A with no zero rows or columns. Let p = (log n + d)/n where d ≥ −log(log n/9t). Let

$$m_2 = \frac{t-1}{t}\,n\left(1 - 4\sqrt{\frac{\log n}{n}}\right), \qquad m_3 = \frac{t-1}{t}\,n\left(1 + 4\sqrt{\frac{\log n}{n}}\right).$$

Let D be a linearly dependent subset of the rows or columns of A, and let |D| = m. The following properties hold whp in E.

(i) There are no linear dependencies in A of size outside the range m_2 < m < m_3.

(ii) The expected number of linear dependencies of size m_2 < m < m_3 among the rows of A is t^{−l}(1 + o(1)), and among the columns of A is t^l(1 + o(1)).

(iii) The non-zero elements of X are nearly uniform in the elements of GF(t).

Thus we have the following corollary.

Corollary 5 Let l → ∞. Then, conditional on E, whp:

(i) The ((n − l) × n)-matrix A is of rank n − l.

(ii) The dimension of X(A) is l.

2.1 Expected number of linear dependencies of the matrix A

The calculations we make here are for rows; similar calculations hold for columns. Let ρ_m(0) = ρ_m, as given by (5). The probability that a subset D = (a_1, ..., a_m) of the rows of A satisfies a_1 + · · · + a_m = 0 is ρ_m^n. The number W_m of linearly dependent m-subsets has expectation

$$EW_m = \binom{n-l}{m}(t-1)^m\,\rho_m^n.$$

The main contribution to EW is from |D| ∼ (t − 1)n/t. When d is constant or d → −∞, linearly dependent sets of constant size are possible. However, for most values of m and d, the results of (e.g.) [CC] apply directly. We state these results in the lemma below.


Lemma 6 (CC) Let the field be GF(t), where $t = o(\sqrt{\log n})$. Let p = c/n where c = log n + d(n) and d(n) > −(log log n).

(i) With high probability no linearly dependent set D of columns of M has

$$\log n \le |D| \le m_2 = n\,\frac{t-1}{t}\left(1 - 4\sqrt{\frac{\log n}{n}}\right) \quad\text{or}\quad |D| \ge m_3 = n\,\frac{t-1}{t}\left(1 + 4\sqrt{\frac{\log n}{n}}\right).$$

(ii) If d(n) = ω → ∞ then with high probability there are no linearly dependent sets
of size 1 ≤ |D| ≤ log n.

Let EW_m^* denote the expected number of linearly dependent subsets conditional on the event E of no zero rows or columns. From (7),

$$EW_m^* \le 2\,EW_m \times e^{2e^{-d}} = EW_m\, O(n^{2/9t}).$$

Case of 1 ≤ m ≤ log n. Consider the column (a_{k1}, ..., a_{km}) of D. Suppose exactly j entries are non-zero. Let P(j) be the conditional probability that the sum of these entries is zero; then P(1) = 0 and

$$P(j) = \frac{1}{t-1}\left(1 - P(j-1)\right), \qquad j \ge 2.$$

This has the solution

$$P(j) = \frac{1}{t-1}\left(1 - \frac{1}{t-1} + \cdots + \frac{(-1)^{j-2}}{(t-1)^{j-2}}\right).$$

Thus we can write ρ_m = ρ_m(0) as

$$\rho_m = (1-p)^m + \binom{m}{2}p^2(1-p)^{m-2}P(2) + \cdots + \binom{m}{j}p^j(1-p)^{m-j}P(j) + \cdots = (1-p)^m + \binom{m}{2}\frac{p^2}{t-1}(1-p)^{m-2}\left(1 + O(mp)\right).$$

For m ≥ 2 let f_m denote the probability that D is zero-sum and has no zero rows,

$$f_m = (\rho_m)^n - \binom{m}{1}(1-p)^n f_{m-1} - \binom{m}{2}(1-p)^{2n} f_{m-2} - \cdots - (1-p)^{mn}$$
$$\le (1-p)^{mn}\left(\frac{\rho_m^n}{(1-p)^{mn}} - 1\right) = (1-p)^{mn}\binom{m}{2}\frac{np^2}{t-1}\left(1 + O(mp)\right) \le \binom{m}{2}\frac{e^{-md}}{n^m}\,\frac{(\log n + d)^2}{n}.$$

The expected number of linearly dependent column subsets containing no zero rows is

$$\sum_{m=2}^{\log n}\binom{n-l}{m}(t-1)^m f_m \le t^2\,\frac{(\log n + d)^2}{n}\,e^{-2d + (t-1)e^{-d}}.$$

Thus the conditional expectation is at most

$$\sum_{m=2}^{\log n} EW_m^* = o(n^{-7/9}).$$

Case of $m_2 = n\,\frac{t-1}{t}\left(1 - 4\sqrt{\frac{\log n}{n}}\right) \le m \le m_3 = n\,\frac{t-1}{t}\left(1 + 4\sqrt{\frac{\log n}{n}}\right)$.

Let $X = \sum_{i=1}^{n-l} X_i$ where X_i is an indicator for the event that row a_i of A is zero, that D is linearly dependent and that A contains no zero columns. Let E(X)_s denote the s-th factorial moment of X. Then

$$\frac{E(X)_s}{s!} = \sum_{j=0}^{s}\binom{n-l-m}{s-j}\binom{m}{j}(1-p)^{ns}\left[\frac{1}{t}\left(1 + (t-1)\left(1 - \frac{t}{t-1}\,p\right)^{m-j}\right) - (1-p)^{n-l-s}\right]^n$$
$$= \frac{e^{-sd}}{s!}\,\frac{1}{t^n}\,e^{-e^{-d}}\left(1 + O\!\left(\frac{\log^{3/2} n}{\sqrt{n}}\right)\right).$$

Thus we can use standard inclusion-exclusion followed by (7) to deduce

$$\Pr(D \text{ is linearly dependent} \mid \text{no zero rows or columns}) \sim \frac{1}{t^n},$$

and

$$\sum_{m=m_2}^{m_3} EW_m^* = (1 + o(1))\sum_{m=m_2}^{m_3}\binom{n-l}{m}\frac{(t-1)^m}{t^n} = (1 + o(1)) \times \frac{1}{t^l},$$

since $\sum_{m=0}^{n-l}\binom{n-l}{m}(t-1)^m = t^{n-l}$ by the binomial theorem, and this sum concentrates on m in the range (m_2, m_3). We note that a similar calculation for the columns of A gives $\sum_{m_2}^{m_3} EW_m^* = t^l(1 + o(1))$.

2.2 The structure of solutions of Ax = 0

Let Ax = 0, and let n_i = |{j : x_j = i, j = 1, ..., n}| be the number of entries in x with value i. We prove that whp all values n_i, i = 0, 1, ..., t − 1, are concentrated within $O(\sqrt{n\log n})$ of the expected number n/t.
We know that whp any dependencies among the columns of A are of size m_2 ≤ m ≤ m_3. Applying (6), the expected number of vectors x satisfying Ax = 0 with partition (n_0, n_1, ..., n_{t−1}), where m = n − n_0, is, for any A ∈ M,

$$\binom{n}{m}\binom{m}{n_1,\ldots,n_{t-1}}\left[\frac{1}{t}\left(1 + (t-1)\left(1 - \frac{t}{t-1}\,p\right)^m\right)\right]^{n-l}$$
$$= O(1)\,e^{(t-1)e^{-d}}\,t^l\,\binom{n}{m}\frac{(t-1)^m}{t^n}\binom{m}{n_1,\ldots,n_{t-1}}\left(\frac{1}{t-1}\right)^{n_1}\cdots\left(\frac{1}{t-1}\right)^{n_{t-1}}.$$

The variables n_i of a multinomial distribution on m objects in t − 1 classes with probabilities 1/(t − 1) are sharply concentrated around m/(t − 1). The probability that some variable n_i falls outside $\frac{m}{t-1}(1 \pm \epsilon)$, where $\epsilon = 4\sqrt{\frac{t\log n}{n}}$, is o(1/n²) by the Hoeffding inequality.

From Corollary 5, |X| ∼ t^l, and l ≤ log log n. Thus the conditional expectation of partitions not satisfying the near-uniformity conditions of Theorem 4(iii) is bounded by

$$t^l\,e^{(t-1)e^{-d}}\left(e^{2e^{-d}}\right)\frac{t^{l+1}}{n^2} = o(1/n).$$

3 Properties of the null space X of the rows of A
The fact that all elements of X \ 0 have about n/t entries of each r in GF (t), combines
with the vector space structure of X to induce a structure on the linearly independent
k-subsets of X . We now describe a partition approach introduced in [KLS].
Denote by V(k) = {u_i : i = 1, ..., t^k} the space of k-vectors over GF(t). Let z_i, i = 1, ..., k, be arbitrary n-vectors and let y be the k × n matrix

$$y = \begin{pmatrix} z_1 \\ \vdots \\ z_k \end{pmatrix} = \begin{pmatrix} z_{11} & z_{12} & \cdots & z_{1n} \\ \vdots & & \vdots \\ z_{k1} & z_{k2} & \cdots & z_{kn} \end{pmatrix} = (\,y_1,\ y_2,\ \cdots,\ y_n\,).$$

Each column of y is an element of V(k), so we can partition the columns of y in terms of the elements u of V(k). This also partitions [n] by

$$S(u) = \{\,j \in [n] : (z_{1j}, \ldots, z_{kj})' = u\,\}.$$

Theorem 7 Let l ≤ log log n / log t. Let the non-zero elements of X be nearly uniform in the elements of GF(t). The following property holds for all linearly independent subsets {z_1, ..., z_k} of X, for all u ∈ V(k) and for all k = 1, 2, ..., l.

Let y be the (k × n)-matrix y = (z_1, ..., z_k)'. The number |S(u)| of occurrences of u in the columns of y satisfies

$$|S(u)| = \frac{n}{t^k}\left(1 + O\!\left(\sqrt{\frac{t^k\log n}{n}}\right)\right).$$

Proof Let J = V(k) \ 0, and let a ∈ J, a = (a_1, ..., a_k). If z_1, ..., z_k ∈ X are linearly independent, then ay = a_1 z_1 + · · · + a_k z_k ∈ X \ 0. Let N(s) be the set of indices j ∈ [n] which satisfy a · y_j = a_1 z_{1j} + · · · + a_k z_{kj} = s. By the near-uniform property of Definition 3, $|N(s)| = \frac{n}{t}\left(1 + O\!\left(\sqrt{t\log n/n}\right)\right)$.

For w ∈ V(k), let T = {a ∈ V(k) : a · w = 0}; then T is a subgroup of (V(k), +). If w ≠ 0 then |T| = t^{k−1}, as T has exactly t cosets, for

$$a \cdot w = j \ \text{and}\ b \cdot w = j \implies (a - b)\cdot w = 0.$$

For fixed u, v with u ≠ v, for how many a ∈ J is a · u = a · v? Call this number λ. Since

$$\{a \in J : a \cdot u = a \cdot v\} = \{a \in V : a \cdot (u - v) = 0\} \cap J,$$

we get λ = |T| − 1 = t^{k−1} − 1.

We consider a table with rows a ∈ J, columns i ∈ [n] and entries a · y_i = s. Let

$$\eta(u) = \sum_{a \in J} |\{i \in [n] : a \cdot y_i = a \cdot u\}|;$$

then

$$\eta(u) = (n - |S(u)|)(t^{k-1} - 1) + |S(u)|(t^k - 1).$$

Any column i of the table which is in S(u) is counted |J| = t^k − 1 times. Any column j ∉ S(u) is in S(v) for some v and coincides with u on λ = t^{k−1} − 1 occasions. In row a of the table, where a · u = s say, the contribution to η(u) is $\frac{n}{t}\left(1 + O\!\left(\sqrt{t\log n/n}\right)\right)$, so that $|S(u)| = \frac{n}{t^k}\left(1 + O\!\left(\sqrt{t^k\log n/n}\right)\right)$ as required. □

4 Estimating the moments of the null space of C


We will first consider the (n × n)-matrix $C = \binom{A}{B}$, C ∈ D, where D is defined in Section 1.1. The matrix A has null space X and the matrix B is a random (l × n)-matrix. Let Y = {x : Cx = 0}. Let (W)_k be the number of linearly independent k-tuples in Y. Now

$$Cx = 0 \iff Ax = 0 \ \text{and}\ Bx = 0 \iff x \in X \ \text{and}\ Bx = 0 \iff (b_1 x = 0) \wedge (b_2 x = 0) \wedge \cdots \wedge (b_l x = 0).$$

We consider k-tuples y = (z_1, ..., z_k) of linearly independent vectors from X such that

$$Bz_1 = \cdots = Bz_k = 0_l. \qquad (8)$$

If we write B = (b_1, ..., b_l)' then (8) is true iff b_i z_j = 0 for all i = 1, ..., l; j = 1, ..., k. Thus (W)_k is the number of linearly independent k-tuples in X such that (8) holds.

Theorem 8 Let D ⊆ M(n, n, p; t) and k ≤ log(log n/3 log log n)/ log t; then E(W)_k ∼ 1.

Proof Let b = (β_1, ..., β_n) be a fixed row of B and let z_i ∈ X, i = 1, ..., k, be linearly independent. Let the (n × k)-matrix y = (z_1, ..., z_k) be written as y = (y_1, ..., y_n)', with rows y_j ∈ V(k). We partition the rows of y according to the vectors of V(k) = (GF(t))^k, which we write as V(k) = {u_i : i = 1, ..., t^k}. Let m = t^k − 1. For convenience we assume that u_{m+1} = 0_k. Then

$$by = 0_k \iff \beta_1 y_1 + \cdots + \beta_n y_n = 0 \iff x_1 u_1 + \cdots + x_{m+1} u_{m+1} = 0 \iff x_1 u_1 + \cdots + x_m u_m = 0, \qquad (9)$$

where $x_i = \sum_{j \in S(u_i)}\beta_j$ and S(u_i) = {j ∈ [n] : y_j = u_i}. By Theorem 7 we know that |S(u_i)| ∼ n/t^k.
Let G = ({(x_1, ..., x_m)}, +) be the finite Abelian group of addition of m-vectors with entries in GF(t); thus |G| = t^m. Let the solutions of (9) be

$$T = \{(x_1, \ldots, x_m) \in G : x_1 u_1 + \cdots + x_m u_m = 0_k\};$$

then T is a subgroup of G. Let c = (c_1, · · · , c_k)'. The cosets of T in G are the sets

$$H(c) = \{(y_1, \ldots, y_m) : y_1 u_1 + \cdots + y_m u_m = c\}.$$

Thus there are |{c}| = t^k cosets and

$$|T| = \frac{t^m}{t^k} = t^{t^k - k - 1}.$$
From (5),

$$\pi(y) = \Pr(by = 0) = \sum_{(x_1,\ldots,x_m)\in T}\ \prod_{j=1}^{m}\frac{1}{t}\left[1 + \lambda(x_j)\left(1 - \frac{t}{t-1}\,p\right)^{|S(u_j)|}\right] = \frac{|T|}{t^m}\exp\left(\alpha\exp\left(-\frac{t}{t-1}\,\frac{n}{t^k}\,p\,(1 + o(1))\right)\right),$$

where $\lambda(x_j) = (t-1)\,\mathbf{1}_{\{x_j = 0\}} - \mathbf{1}_{\{x_j \ne 0\}}$ and −1 ≤ α ≤ t − 1.

The set Ω = {y} of ordered k-tuples of linearly independent vectors in X has size

$$|\Omega| = \nu(k, l) = (t^l - 1)(t^l - t)\cdots(t^l - t^{k-1}).$$
Thus

$$E(W)_k = \sum_{y \in \Omega}(\pi(y))^l \qquad (10)$$
$$= \nu(k, l)\,\frac{1}{t^{kl}}\exp\left(l\alpha\left(\frac{e^{-d}}{n}\right)^{\frac{1+o(1)}{(t-1)t^{k-1}}}\right) = \exp\left(O\!\left(t^{k-l}\right) + O\!\left(lt\left(\frac{\log n}{n}\right)^{\frac{1}{t^k}}\right)\right) = 1 + o\!\left(\frac{1}{\log n}\right). \qquad (11)$$

□

Corollary 9 Let D ⊆ M(n − s, n, p; t) and let k ≤ log(log n/3 log log n)/ log t; then E(W)_k ∼ t^{sk}.

Proof As B is now an ((l − s) × n)-matrix, in (10) we have

$$E(W)_k = \sum_{y\in\Omega}(\pi(y))^{l-s} = t^{sk}\left(1 + o(1/\log n)\right). \qquad \square$$

5 Limiting probability distribution of matrix rank


The initial discussion concerns the distribution (πs (k, t), k ≥ 0) given in (3). The
t-nomial theorem (see [vLW] (p291)) states that, for r ≥ 1,
r
" #
r k
r−1
t(2) xk ,
X
(1 + x)(1 + tx) · · · (1 + t x) = (12)
k=0
k t
" #
r
where the Gaussian coefficients are defined, for t > 0 by = 1 and
0 t

(tr − 1)(tr−1 − 1) · · · (tr−k+1 − 1)


" #
r
= .
k t
(tk − 1)(tk−1 − 1) · · · (t − 1)

The number of (m × n)-matrices over GF(t) with rank n − r is (see [vLW], p. 303)

$$\eta(m, n, r, t) = \begin{bmatrix} n \\ r \end{bmatrix}_t \sum_{l=0}^{n-r}(-1)^l \begin{bmatrix} n - r \\ l \end{bmatrix}_t t^{m(n - (r+l)) + \binom{l}{2}}. \qquad (13)$$
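Formula (13) is easy to sanity-check for small parameters. The sketch below (helper names ours) implements the Gaussian coefficient and η, and verifies that the rank counts sum to t^{mn}, the total number of (m × n)-matrices.

def gauss(r, k, t):
    # Gaussian coefficient [r choose k]_t; always an integer.
    num = den = 1
    for i in range(k):
        num *= t ** (r - i) - 1
        den *= t ** (i + 1) - 1
    return num // den

def eta(m, n, r, t):
    # Number of (m x n)-matrices over GF(t) of rank n - r, as in (13).
    s = sum((-1) ** l * gauss(n - r, l, t)
            * t ** (m * (n - (r + l)) + l * (l - 1) // 2)
            for l in range(n - r + 1))
    return gauss(n, r, t) * s

m, n, t = 4, 5, 3
print(sum(eta(m, n, r, t) for r in range(n + 1)), t ** (m * n))  # the two agree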

When p = (t − 1)/t and all matrices of the space M(m, n, p; t) are equiprobable, we have Pr(Rank = n − r) = η(m, n, r, t)/t^{nm}. It can easily be shown that

$$\pi_0(r, t) = \lim_{n\to\infty}\frac{\eta(n, n, r, t)}{t^{n^2}} = \sum_{l\ge0} c(r, l), \qquad (14)$$

where, using the convention that $\prod_{j=1}^{0}(t^j - 1) = 1$, we have defined c(r, l) by

$$c(r, l) = (-1)^l\,\frac{t^{-\binom{r}{2}}}{\prod_{j=1}^{r}(t^j - 1)}\cdot\frac{t^{-rl}}{\prod_{j=1}^{l}(t^j - 1)} = (-1)^l a(r)\, b(r, l). \qquad (15)$$
If we define

$$\begin{bmatrix} \infty \\ k \end{bmatrix}_z = \frac{1}{(1 - z^k)\cdots(1 - z)}, \qquad |z| < 1,\ k \ge 1,$$

then

$$\prod_{k=0}^{\infty}(1 + z^k x) = \sum_{k=0}^{\infty}\begin{bmatrix} \infty \\ k \end{bmatrix}_z z^{\binom{k}{2}}x^k. \qquad (16)$$

We note that if t > 1 then

$$\frac{1}{\prod_{j=1}^{k}(t^j - 1)} = \begin{bmatrix} \infty \\ k \end{bmatrix}_{(1/t)}\left(\frac{1}{t}\right)^{\binom{k}{2}}\left(\frac{1}{t}\right)^{k}.$$

Thus from (15), (16) and (3),

$$\sum_{l\ge0} c(r, l)\,t^{s(r+l)} = \frac{t^{-\binom{r}{2}}\,t^{rs}}{\prod_{j=1}^{r}(t^j - 1)}\sum_{l\ge0}\begin{bmatrix} \infty \\ l \end{bmatrix}_{(1/t)}\left(\frac{1}{t}\right)^{\binom{l}{2}}\left(\frac{-t^s}{t^{r+1}}\right)^l \qquad (17)$$
$$= \frac{t^{-\binom{r}{2}}\,t^{rs}}{\prod_{j=1}^{r}(t^j - 1)}\prod_{l\ge0}\left(1 + \left(\frac{1}{t}\right)^l\frac{(-t^s)}{t^{r+1}}\right) = \begin{cases} 0, & r < s \\ \pi_s(r - s, t), & r \ge s. \end{cases}$$

We now prove the following lemma which completes the proof of Theorem 2.

Lemma 10 Let r, s be constant, r ≥ s, and let π_s(r, t) be given by (3). Let {C} be a space of random ((n − s) × n)-matrices for which E(W)_k = t^{sk}(1 + o(1)) for k ≤ k* = log(log n/3 log log n)/ log t. Let Y(C) be the row null space of C; then

$$\lim_{n\to\infty}\Pr(\text{dimension of } Y(C) = r) = \pi_s(r - s, t).$$

Proof
Let C be a random matrix over GF(t) with row null space Y(C). We note that the dimension of the vector space Y must be at least s. Let (z_1, ..., z_k) be a k-tuple of linearly independent vectors from Y. Let N_{k,r} be the number of linearly independent k-tuples if the dimension of Y is r. Thus N_{k,r} = 0 if r < k, N_{0,r} = 1 and, for 1 ≤ k ≤ r,

$$N_{k,r} = (t^r - 1)(t^r - t)\cdots(t^r - t^{k-1}).$$

If the probability of dimension r is p(r, t) = p_r we can write

$$E(W)_k = \sum_{r\ge k} N_{k,r}\, p_r. \qquad (18)$$

Thus we have the following system of equations which we wish to solve for p_k:

$$\begin{array}{rcl} 1 &=& p_0 + p_1 + p_2 + \cdots + p_k + \cdots \\ EW &=& N_{1,1}p_1 + N_{1,2}p_2 + \cdots + N_{1,k}p_k + \cdots \\ E(W)_2 &=& N_{2,2}p_2 + \cdots + N_{2,k}p_k + \cdots \\ &\vdots& \\ E(W)_k &=& N_{k,k}p_k + \cdots \end{array} \qquad (19)$$

We wish to prove that p_r = 0 for r < s, and that for r ≥ s, p_r → π_s(r − s, t) as given in (3). We claim that p_r is given by

$$p_r = \sum_{l\ge0} c(r, l)\,E(W)_{r+l}, \qquad (20)$$

where c(r, l) is from (15). To see this, substitute for E(W)_{r+l} from (18) into the sum (20). The coefficient of p_r is c(r, 0)N_{r,r} = 1. If j > r, then the coefficient of p_j is

$$a(r)\left(N_{r,j}\,b(r,0) - N_{r+1,j}\,b(r,1) + \cdots + (-1)^l N_{r+l,j}\,b(r,l) + \cdots + (-1)^{j-r}N_{j,j}\,b(r, j - r)\right).$$

However,

$$b(r, l)\,N_{r+l,j} = (t^j - 1)\cdots(t^j - t^{r+l-1})\,\frac{t^{-rl}}{\prod_{i=1}^{l}(t^i - 1)} = (t^j - 1)\cdots(t^j - t^{r-1})\begin{bmatrix} j - r \\ l \end{bmatrix}_t t^{\binom{l}{2}}.$$

Thus the coefficient of p_j is

$$a(r)\,(t^j - 1)\cdots(t^j - t^{r-1})\sum_{l=0}^{j-r}(-1)^l\begin{bmatrix} j - r \\ l \end{bmatrix}_t t^{\binom{l}{2}},$$

so that for j > r this coefficient is identically zero, from (12) with x = −1.
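The vanishing of the coefficient of p_j for j > r is exactly the alternating identity obtained from (12) at x = −1; with gauss from the sketch after (13), a direct check for small parameters reads:

t = 3
for jr in range(1, 8):   # jr stands for j - r > 0
    s = sum((-1) ** l * gauss(jr, l, t) * t ** (l * (l - 1) // 2)
            for l in range(jr + 1))
    assert s == 0        # (12) at x = -1: the left side contains the factor (1 - 1)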
As we have only estimated E(W)_k for 1 ≤ k ≤ k* we must make a slight adjustment. Let k < k* be fixed and for 0 ≤ i ≤ k let

$$\delta(i) = \sum_{j\ge k+1} N_{i,j}\, p_j.$$

If i < k + 1 ≤ j then N_{k+1,j} = N_{i,j}(t^j - t^i)\cdots(t^j - t^k). Thus, as

$$\delta(k + 1) = E(W)_{k+1} = t^{s(k+1)}(1 + o(1)),$$

we have that the tail δ(i) of the i-th equation is bounded by

$$\delta(i) \le \frac{t^{s(k+1)}(1 + o(1))}{(t^{k+1} - t^i)\cdots(t^{k+1} - t^k)}.$$

Let p = (p_0, p_1, ..., p_k)' and let w = (1 − δ(0), EW − δ(1), ..., E(W)_k − δ(k))'. The system of equations (19) is now expressed as Np = w, where N is a ((k + 1) × (k + 1)) upper triangular matrix with positive entries N_{i,j} on and above the main diagonal. The discussion following (20), concerning the extraction of p_r, 0 ≤ r ≤ k, is still valid, except now

$$p_r = \sum_{l=0}^{k-r} c(r, l)\, w_{r+l}.$$

Thus the difference θ(r) = |p_r − π_s(r − s, t)| is bounded above by

$$\theta(r) \le \left|\sum_{l=k-r+1}^{\infty} c(r, l)\,t^{s(l+r)}\right| + \sum_{l=0}^{k-r}|c(r, l)|\,O\!\left(\frac{t^{s(l+r)}}{\log n}\right) + \sum_{l=0}^{k-r}|c(r, l)|\,\delta(r + l).$$

The first of these error terms is the truncation of the expansion of π_s(r − s, t) in (17) above. The second comes from Theorem 8, where E(W)_k = t^{sk}(1 + O(1/ log n)). The third is the subtracted tails δ(i) of the equations. Provided k → ∞ and r, s are constant, it is straightforward to show that θ(r) → 0, as required. □

6 Bibliography
[A1] A. N. Alekseychuk, On uniqueness of the problem of moments in the class of q-distributions. Discrete Math. Appl. 8.1 (1998) 1-16.
[A2] A. N. Alekseychuk, Conditions for uniqueness of the problem of moments in the
class of q-distributions. Discrete Math. Appl. 9.6 (1999) 615-625.
[B] G. V. Balakin, The distribution of the rank of random matrices over a finite field.
Theory of Probability and its Applications 8.4 (1968) 594-605.
[BKW] J. Blömer, R. Karp and E. Welzl, The rank of sparse random matrices over finite fields, Random Structures and Algorithms 10(4) (1997) 407-419.
[CC] C. Cooper, On the rank of random matrices, Random Structures and Algorithms
(2000) 209-232.
[CC1] C. Cooper, The rank of very sparse random matrices, (2000).
[Ha] G. Hadley, Linear Algebra, Addison-Wesley, 1979.

[Ko] V. F. Kolchin, Random Graphs, Cambridge University Press, Cambridge, Eng-
land, 1999.
[K1] I. N. Kovalenko, On the limit distribution of the number of solutions of a random system of linear equations in the class of Boolean functions. Theory of Probability and its Applications 7.1 (1967) 47-56.
[KL] I. N. Kovalenko and A. A. Levitskaya, Stochastic properties of systems of random
linear equations over finite algebraic structures. Probabilistic Methods in Discrete
Mathematics, (editor V. F. Kolchin) TVP Sci. Publ. 1993.
[KLS] I. N. Kovalenko, A. A. Levitskaya and M. N. Savchuk, Selected Problems in Probabilistic Combinatorics (in Russian), Naukova Dumka, Kyiv, 1986.
[vLW] J. H. van Lint and R. M. Wilson, A Course in Combinatorics, Cambridge University Press, Cambridge, England, 1992.

