
MUCHNIK’S PROOF OF TARSKI-SEIDENBERG

HANS SCHOUTENS

ABSTRACT. These notes arose in an attempt to understand a preprint in English by Semenov entitled Decidability of the Field of Reals regarding a proof due to Andrej Muchnik of the Tarski-Seidenberg algebraic quantifier elimination over the reals. (At present, I know of two Russian sources in which this proof has appeared: [1] and [2].) The method of proof is extremely simple: it consists of determining from the coefficients of a polynomial a finite list of polynomial expressions in these coefficients, such that knowledge of the signs of these expressions yields (in an effective way) knowledge of the sign table of the original function. These expressions in the coefficients are obtained from the original polynomial by three very simple procedures: (Euclidean) division, differentiation, and truncation. As such, this proof is truly an undergraduate proof of a theorem that without doubt belongs to the Pantheon of Mathematics. Moreover, the method extends to include an effective quantifier elimination procedure for any algebraically closed field of characteristic zero, as well as for any real closed field.

1. Sign Diagrams
To a polynomial p(Y ) over the reals in the single variable Y we associate its sign evolution as follows. Let ξ2 < ξ4 < · · · < ξ2d be its real roots (we ignore their multiplicity).
Then the sign of p is constant on each intermediate interval (ξ2i , ξ2i+2 ). So let ξ2i+1 be
any point in that interval and let ξ1 (respectively, ξ2d+1 ) be any number smaller than ξ2
(respectively, bigger than ξ2d ). The elements ξi are called test points. Hence we know the
sign evolution of p once we know the sign for each test point. For a real number a, let
sgn a := 0 if a = 0, let sgn a := +1 if a > 0, and let sgn a := −1 if a < 0. In conclusion,
to each polynomial p we will associate its row of signs
(sgn p(ξ1 ), sgn p(ξ2 ), . . . , sgn p(ξ2d+1 )).
Since we need to deal with several polynomials simultaneously, we no longer insist
that roots are test points with even index. Namely, let L := (p1 , . . . , pm ) be a list of
polynomials in one variable Y with real coefficients (allowing repetition as well as the
zero polynomial) and let Ξ := (ξ1 , . . . , ξn ) be a list of test points. Let D be an m × n-
matrix with entries in {−1, 0, 1}. We will use the entries of L to label the rows of D and
the elements of Ξ to label its columns, that is to say, D(pi , ξj ) is the (i, j)-th entry of D.
We call D a sign diagram for L, if the following holds. There exists an n-tuple of real
numbers Ξ := (ξ1 , . . . , ξn ), with ξ1 < · · · < ξn , such that
• any real root of some non-zero pi is among the ξj , and any two such roots are
separated by at least one other ξj which is not a root of any non-zero polynomial
in L;
• D(pi , ξj ) is the sign of pi (ξj ).

Date: 25.02.04.
Key words and phrases. quantifier elimination, Tarski-Seidenberg, semialgebraic sets.

Hence the row labeled pi contains a row of signs of pi with possibly the signs in some
extra points lying in intervals between its roots. In particular, if pi is identically zero, the
row labeled pi is also identically zero; we do not exclude this case, but we will refer to such
a row as a zero-row. A test point which is a root of some non-zero pi is simply called a root
of L; the remaining test points are called non-roots of L. Two adjacent columns cannot be
equal unless they are both labeled by a non-root. Omitting one of these redundant columns
then yields again a sign diagram for L. In deleting redundant columns in this way we obtain
a unique smallest sign diagram for L, called the reduced sign diagram for L and denoted
diag(L). In particular, if all the pi are constant, then the reduced sign diagram consists of
a single column given by the respective signs of these constants.
1.1. Remark. If D is a sign diagram for L, then by construction, we can find for each y ∈ R
a column in D which gives precisely the signs in y, that is to say, there is a j, such that
D(pi , ξj ) = sgn pi (y), for all i.
Another property of a sign diagram is that there can never be two consecutive zeros in
a row, unless the row is a zero row, and neither can two adjacent non-zero entries in a row
have opposite sign.
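Computationally, a row of signs is just pointwise evaluation at the test points. A minimal sketch (all function names are ours, not from the text), assuming the roots and intermediate test points have already been found:

```python
def sgn(a):
    # the sign function defined above: -1, 0, or +1
    return (a > 0) - (a < 0)

def horner(coeffs, y):
    """Evaluate a polynomial given by its coefficients, highest degree first."""
    acc = 0
    for c in coeffs:
        acc = acc * y + c
    return acc

def sign_row(coeffs, test_points):
    """The row of signs of one polynomial at an increasing list of test points."""
    return [sgn(horner(coeffs, y)) for y in test_points]

# p(Y) = Y^2 - 1 has roots -1 and 1; the test points bracket and interleave them
row = sign_row([1, 0, -1], [-2, -1, 0, 1, 2])  # -> [1, 0, -1, 0, 1]
```

The hard part, of course, is producing the test points in the first place; that is exactly what the algorithm of §4 accomplishes without ever computing a root.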

2. Muchnik Sets
Let A be a domain and let Y be a single variable. Let
p := a_d Y^d + a_{d−1} Y^{d−1} + · · · + a_1 Y + a_0
be a polynomial in A[Y ] of degree d, so that a_d ≠ 0 (we take up the convention that the
zero-polynomial has degree −∞). We will denote by p◦ the result of omitting the highest
degree term of p, that is to say,
p◦ := a_{d−1} Y^{d−1} + · · · + a_1 Y + a_0
(if p is the zero polynomial, then so is p◦ ). Let q := b_e Y^e + · · · + b_1 Y + b_0 be a non-zero
polynomial in A[Y ] with e ≤ d.
2.1. Definition (Pseudo-remainders). Performing Euclidean division on p by q, there are
unique polynomials h and r, with r of degree strictly smaller than e, such that
b_e^{d−e+1} p = hq + r.
We call r the pseudo-remainder of p and q, and denote it rem(p, q).
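Pseudo-division never leaves the coefficient ring, which is what makes rule (R) below effective. A sketch of Definition 2.1 over the integers (coefficient lists written highest degree first; the names trim and prem are ours):

```python
def trim(p):
    """Drop leading zero coefficients, keeping at least one entry."""
    i = 0
    while i < len(p) - 1 and p[i] == 0:
        i += 1
    return p[i:]

def prem(p, q):
    """Pseudo-remainder r with b**(d-e+1) * p = h*q + r, b the leading coefficient of q."""
    p, q = trim(p), trim(q)
    b = q[0]
    scalings = len(p) - len(q) + 1      # the exponent d - e + 1
    r, steps = p[:], 0
    while len(r) >= len(q) and any(r):
        lead = r[0]
        r = [b * c for c in r]          # scale by b once per elimination step
        for i, qc in enumerate(q):      # cancel the top coefficient against q
            r[i] -= lead * qc
        r = trim(r[1:]) if len(r) > 1 else [0]
        steps += 1
    # pad with the unused scalings so exactly b**(d-e+1) multiplies p overall
    return [b ** (scalings - steps) * c for c in r]

# 3^2 * (2Y^2 + 1) = (6Y - 2)(3Y + 1) + 11, so:
assert prem([2, 0, 1], [3, 1]) == [11]
```

Each loop pass multiplies by the leading coefficient b exactly once, so no fractions ever appear; this is the standard pseudo-division trick.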
Let L be a collection of polynomials (including constants) in A[Y ].
2.2. Definition (Muchnik Sets). We say that L is a Muchnik set, if the following three
properties hold.
(T) If p lies in L, then so does p◦ .
(D) If p lies in L, then so does its derivative p′ .
(R) If p and q lie in L, then so does their pseudo-remainder rem(p, q).
2.3. Remark. Any Muchnik set contains the zero polynomial; any finite set of constants
which includes zero is Muchnik. We denote the collection of all the constants of a Much-
nik set L by L0 . Note that the coefficients of all p ∈ L are, up to a positive integer multiple
(some factorial), among the constants L0 . This is a direct consequence of rules (T) and
(D). However, rule (R) will create many more constants which are non-trivial expressions
in the original coefficients.

2.4. Remark. Note that the three rules above are closure conditions, and hence we
can form for each set S of polynomials the smallest Muchnik set containing S, which
we then call the Muchnik closure of S. Since each of the three rules yields a polynomial
of smaller degree, the Muchnik closure of a finite set is again finite. Exploiting this fact
further, we make the following definition.
2.5. Definition (Muchnik Lists). Let L be a Muchnik set and enumerate its elements in
such a way that the degrees are non-decreasing. Any such enumeration will be called a
Muchnik list.
Hence, if L is a Muchnik list, then it starts with 0, followed by the remaining elements
in L0 , listed in some order, and then come the higher degree elements of L.
2.6. Lemma. Let L be a Muchnik list. Then any initial segment of L is again a Muchnik
list.
Proof. We induct on the length of an initial segment M of L. If M has length 1, then
M is just the singleton {0} and the statement is clear (in fact, the statement also holds
for the initial segment L0 as already pointed out). For the general case, suppose M is an
initial segment of L with last element p. By the inductive argument, we know that M \ {p}
is Muchnik and we need to show that so is M . By rule (T), we have p◦ ∈ L. However,
since p◦ has degree strictly less than p, it must be enumerated in the list L before p and
therefore it must occur in M . A similar argument shows that p′ ∈ M . Lastly, if q ∈ M ,
then both rem(p, q) and rem(q, p) have degree less than the degree of p (note that q has
degree at most the degree of p since it is enumerated in L before p). Again we conclude
that these pseudo-remainders must lie in M , since they lie in L by rule (R). 
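Remark 2.4's finiteness claim can be exercised directly: saturating a set under (T), (D) and (R) terminates because every newly produced polynomial has strictly smaller degree. A sketch over the integers (coefficient tuples, highest degree first; all names are our own):

```python
from itertools import product

def trim(p):
    i = 0
    while i < len(p) - 1 and p[i] == 0:
        i += 1
    return tuple(p[i:])

def trunc(p):                       # rule (T): drop the highest-degree term
    return trim(p[1:]) if len(p) > 1 else (0,)

def deriv(p):                       # rule (D): formal derivative
    d = len(p) - 1
    return trim([c * (d - i) for i, c in enumerate(p[:-1])]) if d >= 1 else (0,)

def prem(p, q):                     # rule (R): pseudo-remainder, as in Definition 2.1
    b, scalings = q[0], len(p) - len(q) + 1
    r, steps = list(p), 0
    while len(r) >= len(q) and any(r):
        lead = r[0]
        r = [b * c for c in r]
        for i, qc in enumerate(q):
            r[i] -= lead * qc
        r = list(trim(r[1:])) if len(r) > 1 else [0]
        steps += 1
    return trim([b ** (scalings - steps) * c for c in r])

def muchnik_closure(polys):
    """Smallest set containing polys (and 0) closed under rules (T), (D), (R)."""
    closure = {(0,)} | {trim(p) for p in polys}
    while True:
        new = {f(p) for p in closure for f in (trunc, deriv)}
        new |= {prem(p, q) for p, q in product(closure, closure)
                if any(q) and len(q) <= len(p)}
        if new <= closure:
            return closure
        closure |= new

c = muchnik_closure([(1, 0, -2)])   # closure of {Y^2 - 2} stays finite
```

For Y² − 2 the closure consists of 0, Y² − 2, its truncation −2, its derivative 2Y, the constant 2, and the pseudo-remainder −8 of Y² − 2 by 2Y.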

3. Quantifier Elimination and Decidability


Let A := R[X], where X := (X1 , . . . , XN ). Let L := (p1 , . . . , pm ) be a list of
polynomials pi in R[X][Y ]. For any x ∈ RN , we put
L(x) := (p1 (x, Y ), . . . , pm (x, Y ))
so that L(x) is a list whose entries are polynomials in the single variable Y over R. This
applies in particular to a Muchnik list L and its sublist of constants L0 (with respect to Y ).
Suppose that the sublist L0 has length m0 . In the next section, I will prove the existence of
an effective algorithm, depending on L, with the following property. Let C0 be a column
labeled by L0 (in other words, an (m0 × 1)-matrix) with entries in {−1, 0, 1}. To each
such C0 , the algorithm assigns an m × n-matrix A(C0 ) with entries in {−1, 0, 1} (where
we take the rows to be labeled by the elements in the list L) with the property that
A(diag(L0 (x))) = diag(L(x))
for each x ∈ RN .
Note that diag(L0 (x)) is simply the column with entries the sgn p(x), for p ∈ L0 . In
particular, we see that the first m0 rows of A(C0 ), that is to say, those rows labeled by some
p ∈ L0 , are constant rows. More precisely, the upper m0 × n part of A(C0 ) is just n copies
of the column C0 . Note also that if C0 does not occur as a column diag(L0 (x)) for any x,
then it does not really matter which matrix the algorithm assigns as A(C0 ). However, we
do not know in advance that C0 is not of the form diag(L0 (x)), and one of the tasks of the
algorithm will be to detect this. All this will be explained in the next section.
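Since the columns C0 range over all sign vectors of height m0, there are only finitely many cases for the algorithm to handle. For instance (m0 = 2 is an arbitrary illustrative value):

```python
from itertools import product

m0 = 2  # illustrative length of the constants sublist L0
candidate_columns = list(product((-1, 0, 1), repeat=m0))
# 3**m0 candidate columns; many may never arise as diag(L0(x)) for any x,
# and on those the algorithm is free to output anything
assert len(candidate_columns) == 3 ** m0
```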
Granted we have an algorithm with these properties, we can now prove the celebrated
Tarski-Seidenberg Theorem. Recall that the language of ordered fields consists of the usual

field language together with a symbol for the ordering. More precisely, we have constant
symbols 0 and 1; binary function symbols + for addition, − for subtraction, and · for
multiplication; and one binary relation symbol < for the order relation.
3.1. Theorem (Tarski-Seidenberg). The field of reals admits elimination of quantifiers in
the language of ordered fields.
Proof. By standard arguments it is enough to show that a formula of the form (∃y)ϕ(x, y),
with ϕ(x, y) quantifier free and x := (x1 , . . . , xN ) and y a single variable, is equivalent
with a quantifier free formula. Also, by induction on the number of disjuncts in a dis-
junctive normal form, we may assume that ϕ is a conjunction of formulae of the form
sgn p(x, y) = εp , with p a real polynomial and εp ∈ {−1, 0, 1}. Let P be the collection
of all polynomials that thus occur in ϕ and let L be its Muchnik closure, arranged as a
Muchnik list. Suppose L has length m and its sublist of constants L0 has length m0 . Let
us say that an (m × n)-matrix D with entries in {−1, 0, 1} is ϕ-compatible, if there exists
a column-label ξ, so that D(p, ξ) = εp , for all p ∈ P. Using Remark 1.1, we get, for
x ∈ RN , that (∃y)ϕ(x, y) holds if and only if
diag(L(x)) is ϕ-compatible.
Let C1 , . . . , Cl be an enumeration of all possible columns of height m0 with entries in
{−1, 0, 1}. For i = 1, . . . , l, let ψi (x) be the quantifier free formula which expresses that
(1) diag(L0 (x)) = Ci .
Let A(Ci ) be the matrix obtained from Ci by means of the algorithm from §4. Let I ⊂
{1, . . . , l} be those indices for which A(Ci ) is ϕ-compatible. It follows that
(∃y)ϕ(x, y) ⇐⇒ ⋁_{i∈I} ψi (x).

This concludes the proof of the Theorem, since the right hand side is quantifier free. 
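As a sanity check on what the theorem promises, the formula (∃y) y² + by + c = 0 has the quantifier-free equivalent b² − 4c ≥ 0. The following sketch (our own illustration, not the Muchnik algorithm; has_real_root is a hypothetical helper detecting sign changes on a sampling grid) verifies the equivalence on a grid of integer parameters:

```python
def has_real_root(b, c, lo=-100.0, hi=100.0, samples=2001):
    """Crude check for a real root of y^2 + b*y + c via sign changes on a grid.

    For |b|, |c| <= 5 every real root lies well inside [lo, hi], and the
    integer double-root cases land exactly on the 0.1-spaced grid."""
    prev = lo * lo + b * lo + c
    for k in range(1, samples):
        y = lo + (hi - lo) * k / (samples - 1)
        cur = y * y + b * y + c
        if prev == 0 or prev * cur < 0:
            return True
        prev = cur
    return prev == 0

# Tarski-Seidenberg predicts the quantifier free equivalent b^2 - 4c >= 0
for b in range(-5, 6):
    for c in range(-5, 6):
        assert has_real_root(b, c) == (b * b - 4 * c >= 0)
```

The point of the theorem is that such a quantifier-free equivalent always exists, and the algorithm of §4 produces one effectively rather than by sampling.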
Note that since the algorithm described in the next section is effective, so is the quanti-
fier elimination process in the above proof. In particular, we obtain the following immedi-
ate corollary.
3.2. Corollary. The field of reals is decidable in the language of ordered fields.

4. The Algorithm A to Calculate a Sign Diagram


As before, P is some list of polynomials in R[X, Y ] and L is its Muchnik closure (with
respect to the last variable Y ). Our goal is to algorithmically calculate a sign diagram
for L(x) from diag(L0 (x)), for any point x ∈ RN . Recall that L0 is the initial part of
L consisting of all polynomials not containing Y (the constants with respect to Y ). By
Lemma 2.6, each initial segment M of L is again Muchnik. Therefore, we will build
by induction a sign diagram for each M (x); it will in general be an augmentation of the
previous sign diagram by one more row and several more columns (due to the occurrence
of new roots). In conclusion, it suffices to prove the following lemma.
4.1. Lemma. Let A := R[X], where X := (X1 , . . . , XN ). Let L be a finite Muchnik
list in A[Y ] of length m and let p be a non-constant polynomial in A[Y ]. Suppose that
L+ := L ∪ {p} is again a Muchnik list. There exists an effective algorithm A which
assigns to any (m × n)-matrix C with entries in {−1, 0, 1} an (m + 1) × n′ -matrix A(C)
with entries in {−1, 0, 1} (where n ≤ n′ ), such that for each x ∈ RN , we have that
A(diag(L(x))) = diag(L+ (x)).

Proof. Our goal is twofold. Firstly, given a matrix C with entries −1, 0 or 1, with rows
labeled by the Muchnik list L (from top to bottom) and columns labeled by test points ξ,
we want to assign a matrix C+ := A(C) with one additional row at the bottom, labeled p,
and some additional columns. Secondly, if x ∈ RN and C = diag(L(x)), then we have to
verify that C+ is a sign diagram for L+ (x) (after which we can trim it to become a reduced
sign diagram). At several stages, we might run into an inconsistency of C that excludes it
from being of the form diag(L(x)) for some x, and then we will reject this matrix.
Suppose p has degree d ≥ 1 in Y and let a ≠ 0 be its highest degree coefficient (so that
a ∈ R[X]). Note that d! a, p◦ and p′ all belong to L by definition of Muchnik list. Fix also
x ∈ RN and let ξ be a test point. We will define C+ (p, ξ), depending on various cases.

Case 1. The row in C labeled d! a is not a constant row. This is impossible if C is of the
form diag(L(x)), so we reject this matrix.

Case 2. The row in C labeled d! a is the zero row. If C is of the form diag(L(x)), then
this means that a(x) = 0 and hence p(x, Y ) and p◦ (x, Y ) have the same sign evolution.
Therefore, we let the last row of C+ be a copy of the row labeled p◦ in C, and we are done
in this case.
Hence, we may in addition assume that the row in C labeled d! a is a constant row with
value α ≠ 0. If C only consists of a single column, then we double this column at this
point.

Case 3. The column labeled by ξ is either the first or the last column. If C is of the
form diag(L(x)), then C+ (p, ξ) ought to be the sign of p(x, Y ) at minus or plus infinity
respectively. Therefore, we put C+ (p, ξ) equal to (−1)^d α and α respectively, and we are
done for these columns. So we may assume in addition that ξ is the label of an internal
column.

Case 4. The point ξ represents a root of L, that is to say, there is some q ∈ L such that
C(q, ξ) = 0 but the row labeled q in C is not a zero row. If C is of the form diag(L(x)), then
this means that q(x, ξ) = 0 but q(x, Y ) is not identically zero. Choose a q ∈ L of minimal
possible degree with these properties. Let e be its degree and b its leading coefficient, so
that e! b ∈ L. If the row labeled by e! b in C is not a constant row, we again reject this
matrix, so we may assume that it has constant value β.

Case 5. Suppose β = 0. If C is of the form diag(L(x)), then this means that b(x) = 0.
However, since q = bY^e + q◦ , we get that q◦ (x, ξ) is also zero. Since q◦ ∈ L has degree
less than e, minimality implies that the row in C labeled q ◦ must be a zero row, that is to say,
that q ◦ (x, Y ) is identically zero. However, so is then q(x, Y ), contradiction. Therefore,
we reject any matrix with this property, so that we may assume in addition that β ≠ 0.
Let r be the pseudo-remainder of p by q, so that b^{d−e+1} p = hq + r, for some polynomials h, r ∈ R[X, Y ] with r of degree at most e − 1. If C is of the form diag(L(x)), then
b(x)^{d−e+1} p(x, ξ) = r(x, ξ), since q(x, ξ) = 0. Therefore, we set
C+ (p, ξ) := β^{d−e+1} · C(r, ξ),
and we are done in this case.
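Case 5's sign rule can be checked on a concrete pseudo-division. With p = 2Y² + 1 and q = 3Y + 1 one computes by hand 3²·p = (6Y − 2)q + 11; at the root ξ = −1/3 of q the q-term vanishes, so the sign of p there is read off from the remainder. A sketch using exact rational arithmetic (all names ours):

```python
from fractions import Fraction

# hand-computed pseudo-division: 3^2 * (2Y^2 + 1) = (6Y - 2)(3Y + 1) + 11
def p(y): return 2 * y * y + 1
def q(y): return 3 * y + 1
def h(y): return 6 * y - 2
r, b = 11, 3          # remainder, and leading coefficient of q (d - e + 1 = 2)

# the identity b^2 * p = h*q + r holds at every point
for y in (Fraction(-2), Fraction(0), Fraction(5, 7)):
    assert b ** 2 * p(y) == h(y) * q(y) + r

xi = Fraction(-1, 3)  # the root of q
assert q(xi) == 0
assert b ** 2 * p(xi) == r   # the q-term drops out: sgn p(xi) is determined by r
```

Since β = sgn b(x) is ±1, multiplying the sign of r by β^{d−e+1} recovers the sign of p at ξ, exactly as the rule for C+(p, ξ) prescribes.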
So we may assume that ξ represents a non-root of L. Let ξ− and ξ+ denote the respective
labels of the columns just preceding and just following it. Since in a reduced sign diagram,
roots alternate with non-roots, ξ− and ξ+ must represent roots, and therefore the signs
ε− := C+ (p, ξ− ) and ε+ := C+ (p, ξ+ ) have already been defined.

Case 6. Suppose ε− = ε+ = 0. If C is of the form diag(L(x)), this means that p(x, ξ− ) =
p(x, ξ+ ) = 0. By Rolle's Theorem, there is some η in the open interval
(ξ− , ξ+ ) such that p′ (x, η) = 0. By definition of sign diagram, since p′ ∈ L, its roots must
occur as column labels, and hence η = ξ. By the assumption on ξ, the entire row labeled
p′ in C must then be a zero row, which means that p′ (x, Y ) is identically zero. This in turn
means that p(x, Y ) is constant, and this constant therefore must be zero, a case already
dealt with. In other words, if ε− = ε+ = 0, we reject C.

Case 7. Suppose ε− and ε+ have opposite sign and are non-zero. If C is of the form
diag(L(x)), then this means that p(x, ξ− ) and p(x, ξ+ ) have opposite sign. By the Inter-
mediate Value Theorem, there is some η in the open interval (ξ− , ξ+ ) such that p(x, η) = 0.
There is no reason why this η should agree with ξ, so that we encounter a new situation, in
which an additional root has to be added to the labels. However, since we should alternate
roots with non-roots, we have to replace the single ξ-column of C by three copies of it
and then put in the bottom row of C+ the three values (ε− , 0, ε+ ) to reflect the new sign
behaviour.
Therefore, we are also done in that case, so that we may assume that exactly one among
ε− and ε+ is zero or that they both have the same non-zero sign. I claim that if C is of the form
diag(L(x)), then p(x, Y ) has no root in the open interval (ξ− , ξ+ ). Assuming the claim,
the sign on that interval must therefore be the same everywhere and equal to the sign in
an endpoint which is not a root. In conclusion, our algorithm is sound, if we put C+ (p, ξ)
equal to the non-zero value among ε− , ε+ .
So all that remains is to prove the claim. Suppose towards a contradiction, that there is
some η in the open interval (ξ− , ξ+ ) such that p(x, η) = 0. If one of the endpoints is also
a root, then again by Rolle's Theorem, we would also have a root of p′ in
that interval, and we have already ruled this out. Hence p(x, Y ) has the same sign at the
endpoints of the interval, say, for the sake of argument, that it is positive at both ends. For
p(x, Y ) to become zero, it therefore has to decrease and then again increase. This would
mean that p′ changes sign on that interval and hence in particular must have a root there,
again an impossibility. 

Some further remarks. The proof is in fact independent of the real numbers and holds
equally in any real closed field, since the only non-trivial properties used are the Intermediate
Value Theorem and Rolle's Theorem, both of which hold for polynomials over any real
closed field. Moreover, since the quantifier free formula obtained by this process
depends only on the coefficients of the polynomials in the original formula, we obtain
that any formula defined over some ordered field is equivalent (in its real closure) with a
quantifier free formula defined over that field.
As for the complexity, a rough calculation yields that if a formula χ(x) with one existentially quantified variable consists of l polynomial inequalities, each of degree at most
d (so that the number of polynomials occurring in χ is l), then we end up with a quantifier free
formula Ψ(x) consisting of a number of inequalities of the order O(l^{2^{d−1}} ). More precisely,
the cardinality of L and of L0 is O(m) where m = l^{2^{d−1}} . This leaves us with 3^m possible
sign assignments which might or might not lead to a compatible formula. The algorithm
apparently is polynomial in the degree and the cardinality of L, so that the reduction of χ
to Ψ takes time of the order of a polynomial in m times 3^m .
However, if we fix the degree d and seek an algorithm that determines whether a
sentence χ (with quantifiers) in which the number of quantified variables is bounded and
all occurring polynomials have degree at most d (and coefficients in some effective ordered
field K) is true or false, then this can be done in non-deterministic polynomial time, if we

assume that any arithmetic operation in the field K can be carried out in time O(1). Indeed,
we need only guess non-deterministically a sign assignment for the elements in L0 . All
other constructions are now carried out in a time depending polynomially on l, the number
of inequalities in the sentence.

5. Quantifier Elimination for Algebraically Closed Fields of Characteristic Zero

The techniques of Muchnik sets can also be used to prove quantifier elimination for
C (of course, this is less hard a theorem and admits many other elegant proofs). I will
explain the case for C, but the same argument works for any algebraically closed field of
characteristic zero (the proof unfortunately collapses in positive characteristic). We start
by defining the analogue of a sign diagram in this situation.

Root Diagrams. Let p ∈ C[Y ] with Y a single variable. If p is not the zero polynomial,
then a row of roots for p is simply a row of 0’s and 1’s such that (a) at least one entry is
equal to 1; and (b) the number of entries equal to 0 is equal to the number of distinct roots
of p in C. Any zero-row is a row of roots for the zero polynomial. We label the entries of
a row of roots by some complex numbers ξ with the convention that the entry labeled by
ξ is 0 if and only if p(ξ) = 0. Contrary to the real case, we do not care about the order in
which we list the ξ.
If L is a list of polynomials (allowing repetitions as well as the zero polynomial), then
a root diagram for L is a matrix of 0’s and 1’s with rows labeled by the polynomials p ∈ L
and columns labeled by some complex numbers ξ, having the property that each row is
a row of roots for its label and moreover, there is at least one column in which the only
0's come from zero-rows. If there are several columns of the latter type, then we can safely
delete all but one of them and still have a root diagram. Moreover, any permutation of the columns
yields another root diagram for L. Apart from these permutations and redundant columns,
the root diagram of L is uniquely determined and we will denote it again by diag(L).
As in the case of the reals, it suffices to prove the following analogue of Lemma 4.1 to
obtain an effective quantifier elimination procedure for C (details are left to the reader; I
will only prove the lemma).
5.1. Lemma. Let A = C[X], where X = (X1 , . . . , XN ). Let L be a finite Muchnik list in
A[Y ] of length m and let p be a non-constant polynomial in A[Y ]. Suppose L+ = L ∪ {p}
is again Muchnik. There exists an effective algorithm C 7−→ C+ which assigns to any
(m × n)-matrix C with entries in {0, 1} an (m + 1) × n′ -matrix C+ with entries in {0, 1}
(where n ≤ n′ ), such that for each x ∈ CN , if C = diag(L(x)), then
C+ = diag(L+ (x)).
Proof. Let x ∈ CN . Suppose p has degree d ≥ 1 in Y and let a ∈ C[X] be its highest
degree coefficient. Note that d! a ∈ L by rule (D). If the row in C labeled d! a is not constant, it
should be rejected as in Case 1 of the previous proof. If the row in C corresponding to d! a
is a zero-row, we do exactly as in Case 2 in the previous proof. So we may assume that this
row has constant value 1; if C is of the form diag(L(x)), this means that a(x) 6= 0. Let ξ
be the label of a column of C.
Assume first that the column of C with label ξ contains a 0 other than those 0’s coming
from zero-rows. Let q ∈ L be of minimal degree e with the property that C(q, ξ) = 0 but
the row labeled by q is not a zero-row. If C is of the form diag(L(x)), then this means
that q(x, ξ) = 0, but q(x, Y ) is not identically zero. Let b be the leading coefficient of q.

By the same argument as in Case 5, we may reject the matrix if the row labeled by e! b is
not constant or is a zero-row. In particular, we may assume that C(e! b, ξ) = 1. Letting
r := rem(p, q), we put C+ (p, ξ) := C(r, ξ), and we are done in this case by the same
argument as before.
In the remaining case, when the only zeros in the column labeled by ξ come from zero
rows, we reason as follows. If C is of the form diag(L(x)), then any new root of p must
have multiplicity 1, for otherwise it would also be a root of p′ ∈ L and hence would have
appeared in some column of C. To determine how many new roots p has, we count, this time
with multiplicity, the roots of p so far marked. In other words, if η is a column for which
we already determined C+ (p, η) = 0, then we let e(η) be the multiplicity of that root (and
otherwise we put e(η) := 0). To calculate e(η), we simply look at the successive derivatives
p′ , p′′ , p(3) , . . . of p (which all belong to L, and at least one is a non-zero constant) and
see whether there is also a 0 in the row labeled by these derivatives. More precisely, e(η)
is equal to the smallest l for which C(p(l) , η) = 1. To conclude the construction of C+ ,
let e be the difference between d (=the degree of p) and the sum of the e(η). Add e new
columns to the matrix so far obtained, all with a 0 in the bottom row and a 1 everywhere
else, except in zero-rows. 
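The multiplicity count e(η) used above is easy to phrase computationally: it is the smallest l with p⁽ˡ⁾(η) ≠ 0. A sketch over the integers (function names are ours):

```python
def deriv(coeffs):
    """Formal derivative; coefficients listed highest degree first."""
    d = len(coeffs) - 1
    return [c * (d - i) for i, c in enumerate(coeffs[:-1])] or [0]

def horner(coeffs, y):
    acc = 0
    for c in coeffs:
        acc = acc * y + c
    return acc

def multiplicity(coeffs, eta):
    """Smallest l with p^(l)(eta) != 0; in particular 0 when eta is not a root."""
    l = 0
    while any(coeffs) and horner(coeffs, eta) == 0:
        coeffs = deriv(coeffs)
        l += 1
    return l

# p(Y) = (Y - 1)^2 (Y + 2) = Y^3 - 3Y + 2
assert multiplicity([1, 0, -3, 2], 1) == 2   # double root at 1
assert multiplicity([1, 0, -3, 2], -2) == 1  # simple root at -2
assert multiplicity([1, 0, -3, 2], 0) == 0   # not a root
```

In the algorithm, of course, these vanishing conditions are read off from the rows of C labeled by the derivatives rather than computed by evaluation.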

References
1. A. Semenov, Decision procedures for logical theories, Cybernetics and computer technology 2 (1986), 134–
146 (Russian).
2. A. Shen and N.K. Vereshchagin, Languages and calculi, Moscow Center for Continuous Mathematical Edu-
cation, 2000 (Russian).

Department of Mathematics, NYC College of Technology, CUNY
E-mail address: hschoutens@citytech.cuny.edu
