HIT POLYNOMIALS AND CONJUGATION IN THE STEENROD
ALGEBRA AND ITS DUAL
JUDITH H. SILVERMAN
Abstract. Let A^* be the mod 2 Steenrod algebra of cohomology operations and χ its canonical antiautomorphism. For all positive integers f and k, we show that the excess of the element χ[Sq(2^{k−1}f) · Sq(2^{k−2}f) ⋯ Sq(2f) · Sq(f)] is (2^k − 1)α(f), where α(f) denotes the minimal number of summands in any representation of f as a sum of numbers of the form 2^i − 1. We also interpret this result in purely combinatorial terms. In so doing, we express the Milnor basis representation of the products Sq(a_1) ⋯ Sq(a_n) and χ[Sq(a_1) ⋯ Sq(a_n)] in terms of the cardinalities of certain sets of matrices.
For s ≥ 1, let P_s = F_2[x_1, …, x_s] be the mod 2 cohomology of the s-fold product of RP^∞ with itself, with its usual structure as an A^*-module. A polynomial P ∈ P_s is hit if it is in the image of the action Ā^* ⊗ P_s → P_s, where Ā^* is the augmentation ideal of A^*. We prove that if the integers e, f, and k satisfy e < (2^k − 1)α(f), then for any polynomials E and F of degrees e and f respectively, the product E · F^{2^k} is hit. This generalizes a result of Wood, conjectured by Peterson, and proves a conjecture of Singer and Silverman.
1. Introduction
Let A^* be the mod 2 Steenrod algebra of cohomology operations, multiplicatively generated by the Steenrod squares Sq(f), 1 ≤ f < ∞. If θ ∈ A^*, we denote by θ̂ the image of θ under the canonical antiautomorphism χ of A^*.
For s ≥ 1, let P_s = F_2[x_1, …, x_s] be the mod 2 cohomology of the s-fold product of RP^∞ with itself, with its usual structure as an A^*-module. The excess of an element θ ∈ A^* is given by ex(θ) = min{s : θ(x_1 x_2 ⋯ x_s) ≠ 0 ∈ P_s}. In particular, for f ≥ 1 we have ex(Sq(f)) = f and ex(Ŝq(f)) = α(f) [Kra71], where α(f) denotes the minimal number of summands in any representation of f as a sum of numbers of the form 2^i − 1.
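To make the function α concrete, it can be computed by a minimal coin-change recursion over the values 2^i − 1. The following Python sketch is our own illustration (the name `alpha` is ours, not the paper's):

```python
def alpha(f):
    """alpha(f): minimal number of summands of the form 2^i - 1 (i >= 1)
    in any representation of f as such a sum (illustration only)."""
    INF = float("inf")
    best = [0] + [INF] * f          # best[n] will hold alpha(n)
    for n in range(1, f + 1):
        i = 1
        while 2 ** i - 1 <= n:      # try removing one summand 2^i - 1
            best[n] = min(best[n], best[n - (2 ** i - 1)] + 1)
            i += 1
    return best[f]
```

For instance, alpha(7) = 1 since 7 = 2^3 − 1, while alpha(5) = 3 since 5 = 3 + 1 + 1.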
Given positive integers f and k, define S[k; f] = Sq(2^{k−1}f) · Sq(2^{k−2}f) ⋯ Sq(2f) · Sq(f) as in [Sil95b]. Our first theorem generalizes the above expression for the excess of Ŝq(f) = Ŝ[1; f] to general k. The theorem is stated in full in Section 9; a simplified version reads:
Theorem 1.1. Let f be a positive integer. Then for all positive integers k we have
    ex(Ŝ[k; f]) = (2^k − 1)α(f).
This result was conjectured in [Sil93] and proven for special cases there and in [Sil95b]. The present proof relies on the "stripping" action of the dual Steenrod algebra A_* on A^* and requires a formula for the conjugates of certain products in A_*. A purely combinatorial interpretation of Theorem 1.1 is given in Section 3.
Date: October 11, 1996.
1991 Mathematics Subject Classification. Primary 55S05, 55S10; Secondary 57T05, 11P81.
Key words and phrases. Steenrod algebra, antiautomorphism, stripping, excess, hit polynomials.
A polynomial P ∈ P_s is hit if it is in the image of the action Ā^* ⊗ P_s → P_s, where Ā^* is the augmentation ideal of A^*. The set of hit polynomials is linked to various important questions in algebraic topology and representation theory. For example, it is related to the group Ext_{A^*}^{s}(F_2, F_2), which appears in the E_2-term of the Adams spectral sequence for the stable homotopy of spheres [Sin89]. Information about hit polynomials also yields information about H^*(BO), H^*(MO), and the cobordism classes of closed manifolds [Pet89]. Moreover, the set of hit polynomials is related to the study of simple representations of the general linear group GL(s, F_2). The action of this group on P_s is compatible with the A^*-action, so that the set of hit elements is closed under the action of GL(s, F_2). A theorem in modular representation theory says that all simple representations of GL(s, F_2) occur in composition series of P_s in some degree [Mit85], and Wood has shown that the same is true after dividing out the A^*-action [Woo89a].
In his elegant proof of a conjecture of Peterson [Pet87] concerning hit polynomials, Wood makes use of the excess formula for Ŝq(f), which is the case k = 1 of Theorem 1.1 [Woo89b]. Fueled by the general case of the theorem, his argument implies the following result, which was conjectured in [SS95]:
Theorem 1.2. Let P be a polynomial of the form E · F^{2^k} for some polynomials E and F of degrees e and f respectively, and suppose that e < (2^k − 1)α(f). Then P is hit.
The reader is referred to [Sin91], [Mon94], and [Sil95a] for further discussion of results related to Theorem 1.2.
2. Preliminaries
2.1. Sequences. Denote by S the monoid of sequences of nonnegative integers almost all of which are 0; write 0_S for the trivial element and B(j) for the sequence whose ith coordinate is δ_{i,j}. If T = (t_1, t_2, …) has t_i = 0 for i > L, we identify T with (t_1, …, t_L).
Suppose that T = (t_1, …, t_L) with t_L ≠ 0. Then the length of T is len(T) = L, the degree of T is |T| = Σ_{i=1}^{L} (2^i − 1) t_i, and the excess of T is ex(T) = Σ_{i=1}^{L} t_i. Following [Mon], we say that T ≻_R T′ if T is greater than T′ in the right-lexicographic order.
It will be convenient to write S^* = S ∪ {*}, where * satisfies x + * = * for all x ∈ S^*.
2.2. Milnor bases. The Milnor bases of the Steenrod algebra A^* and its dual A_* are both indexed by elements of S [Mil58]. For T ∈ S, we write M[T] for the corresponding basis element of A^* and ξ[T] for the basis element of A_*. In particular, M[(t)] is the Steenrod square Sq(t) for t ≥ 0. The dual algebra is in fact a polynomial algebra on the generators ξ_j = ξ[B(j)], so that if T = (t_1, …, t_L), then ξ[T] = Π_{i=1}^{L} ξ_i^{t_i}. We further set M[*] = 0 and ξ[*] = 0, and adopt the convention that M[R] = 0 and ξ[R] = 0 if R is a finite sequence of integers at least one of which is negative.
The definitions of length, degree, excess, and right-lexicographic order for elements of S induce similar definitions for elements of both bases and hence for all homogeneous elements of A^* and A_*, where the length (resp. excess) of a sum of basis elements is determined by the maximal length (resp. minimal excess) of the summands, and sums are compared in the right-lexicographic order by comparing their terms in descending order. It is readily
verified that this definition of excess in A^* agrees with the definition of Section 1 based on the A^*-action on polynomial rings (cf. [Kra71]).
Example. [Kra71] The function α(f) of Section 1 may now be described as the least excess of any Milnor basis element of degree f, or alternatively as the excess of the sum of all Milnor basis elements of that degree. Since this sum is equal to Ŝq(f) [Mil58], we find that ex(Ŝq(f)) = α(f) as stated in Section 1.
Let T ∈ S be a sequence and θ ∈ A^*. We say M[T] ∈ θ if M[T] occurs with nontrivial coefficient in the Milnor basis representation of θ. In other words, M[T] ∈ θ ⟺ ⟨ξ[T], θ⟩ = 1, where ⟨ , ⟩ is the pairing relative to the Milnor bases of A^* and A_*. In what follows, we use this criterion in the form
    M[T] ∈ θ̂ ⟺ ⟨ξ̂[T], θ⟩ = 1,  (1)
where ξ̂[T] denotes the image of ξ[T] under the canonical antiautomorphism χ of A_*.
2.3. Stripping. Let ψ be the diagonal map of the Hopf algebra A^*. There is a natural action of the dual Hopf algebra A_* on A^*, in which each ξ ∈ A_* acts via
    D(ξ) : A^* --ψ--> A^* ⊗ A^* --1⊗⟨ξ,·⟩--> A^*.
Alternatively, D(ξ) satisfies
    ⟨ζ · ξ, θ⟩ = ⟨ζ, D(ξ)θ⟩  (2)
for all ζ ∈ A_* and θ ∈ A^*. We refer to D(ξ) as the operation of stripping by ξ. For typographical convenience, we follow [WW96] in expressing D(ξ) in cap-product notation:
    ξ ∩ θ := D(ξ)(θ)
for all ξ ∈ A_* and θ ∈ A^*. Thus (2) becomes ⟨ζ · ξ, θ⟩ = ⟨ζ, ξ ∩ θ⟩, an identity we use in the form
    ⟨ξ[U + V], θ̂⟩ = ⟨ξ̂[U + V], θ⟩ = ⟨ξ̂[U], ξ̂[V] ∩ θ⟩  (3)
for all U, V ∈ S and θ ∈ A^*. In particular, if |T| = |θ| we have ⟨ξ̂[T], θ⟩ = ⟨1, ξ̂[T] ∩ θ⟩, so that
    M[T] ∈ θ̂ ⟺ ξ̂[T] ∩ θ = 1.  (4)
The following results are proven in [CW94] and [Sil95b], respectively.
Proposition 2.1. 1. Let θ ∈ A^*. If j > len(θ), then ξ_j ∩ θ = 0.
2. For all k, f ≥ 1, we have ξ_k ∩ S[k; f] = S[k; f − 1].
Proposition 2.2. For all j, k, f ≥ 1, we have
1. ξ̂_j ∩ Sq(f) = Sq(f − (2^j − 1)).
2. ξ̂_j ∩ S[k; f] = S[k − 1; 2f] · [ξ̂_j ∩ Sq(f)] = S[k − 1; 2f] · Sq(f − (2^j − 1)).
Observation 2.3. Part 1 of Proposition 2.2, which may be restated as "D(ξ̂_j) = D(ξ_1^{2^j − 1}) when restricted to elements of A^* of length 1", follows from Part 1 of Proposition 2.1 and the fact that ξ̂_j ≡ ξ_1^{2^j − 1} modulo elements of A_* of length > 1.
2.4. Application. As an illustration of the technique of stripping, we give a new proof of a theorem from [Sil96] which we will need in the proof of Theorem 1.1.
Theorem 2.4. [Sil96] Let f, L, and k be positive integers such that f < 2^L − 1. Then any sequence T with M[T] ∈ Ŝ[k; f] has length < L.
Proof. Suppose T is a sequence of length ≥ L, so that t_j ≥ 1 for some j ≥ L. By (3) and Part 2 of Proposition 2.2, we have
    ⟨ξ̂[T], S[k; f]⟩ = ⟨ξ̂[T − B(j)], ξ̂_j ∩ S[k; f]⟩
                    = ⟨ξ̂[T − B(j)], S[k − 1; 2f] · [ξ̂_j ∩ Sq(f)]⟩.
But f < 2^j − 1, so ξ̂_j ∩ Sq(f) = 0 by Part 1 of Proposition 2.2. Consequently the inner product ⟨ξ̂[T], S[k; f]⟩ = 0, and we conclude from (1) that M[T] ∉ Ŝ[k; f]. ∎
3. Computing products in A^*
In this section we use the language of stripping to express the question of whether a Milnor basis element appears in χ[Sq(a_1) ⋯ Sq(a_n)] in terms of the cardinality of a set of matrices (cf. Milnor's product formula for M[R] · M[S] [Mil58]). We then interpret Theorem 1.1 in combinatorial terms involving matrices and partitions. The rest of the paper does not draw upon the material in this section.
We begin by giving an explicit description of the effect of stripping products of Steenrod squares by ξ̂_i. The following property of stripping operators is discussed in [Sil95b]: Let ξ ∈ A_* and write ψ_*(ξ) = Σ ξ′ ⊗ ξ″, where ψ_* is the diagonal map of A_*. Then
    ξ̂ ∩ (θ_1 θ_2) = Σ (ξ̂″ ∩ θ_1) · (ξ̂′ ∩ θ_2)  (5)
for all θ_1, θ_2 ∈ A^*.
Definition. Fix integers i, n ≥ 1. A ξ̂_i-strip vector of length n is a sequence (v_1, …, v_n) in which the nonzero elements form exactly the sequence
    (2^{e_1} − 1, 2^{e_2} − 2^{e_1}, …, 2^i − 2^{e_m})
for some m and some sequence 1 ≤ e_1 < e_2 < ⋯ < e_m < i. For example, the ξ̂_2-strip vectors of length 3 are (3,0,0), (0,3,0), (0,0,3), (1,2,0), (1,0,2), and (0,1,2). Write V̂_{n,i} for the set of ξ̂_i-strip vectors of length n. An inductive argument based on (5) and Part 1 of Proposition 2.2 yields the following:
Proposition 3.1. Let i and a_1, …, a_n be nonnegative integers. Then
    ξ̂_i ∩ [Sq(a_1) ⋯ Sq(a_n)] = Σ_{V ∈ V̂_{n,i}} Sq(a_1 − v_1) ⋯ Sq(a_n − v_n).  (6)
Note that the product on the left is not required to be admissible, nor are those on the right claimed to be.
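The strip-vector definition is easy to enumerate by machine. The following Python sketch is our own illustration (the helper `strip_vectors` is ours): it chooses the exponents e_1 < ⋯ < e_m < i and then the positions of the nonzero entries.

```python
from itertools import combinations

def strip_vectors(n, i):
    """All hat-xi_i-strip vectors of length n (illustration of the definition).

    The nonzero entries must read, left to right,
    (2^{e_1} - 1, 2^{e_2} - 2^{e_1}, ..., 2^i - 2^{e_m})
    for some 1 <= e_1 < ... < e_m < i; m = 0 gives the single entry 2^i - 1.
    """
    vectors = set()
    for m in range(0, i):                       # number of intermediate exponents
        for es in combinations(range(1, i), m):
            chain = list(es) + [i]
            values = [2 ** chain[0] - 1] + [
                2 ** chain[r + 1] - 2 ** chain[r] for r in range(len(chain) - 1)
            ]
            # distribute the values into n slots, preserving their order
            for slots in combinations(range(n), len(values)):
                v = [0] * n
                for s, val in zip(slots, values):
                    v[s] = val
                vectors.add(tuple(v))
    return vectors
```

For n = 3 and i = 2 this reproduces exactly the six ξ̂_2-strip vectors listed above.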
We proceed to interpret ⟨ξ̂[T], Sq(a_1) ⋯ Sq(a_n)⟩ as the parity of the cardinality of a certain set of matrices.
Definition. An (ex(T) × n)-matrix R = (r_{ij}) is a {ξ̂[T]; Sq(a_1) ⋯ Sq(a_n)}-matrix if the entries satisfy the following conditions:
  o For 1 ≤ i ≤ L, the rows numbered (Σ_{m=1}^{i−1} t_m) + 1 through Σ_{m=1}^{i} t_m are ξ̂_i-strip vectors.
  o For 1 ≤ j ≤ n, the sum Σ_i r_{ij} of the jth column is equal to a_j.
For example,
    ( 0 1 )
    ( 3 4 )
    ( 7 0 )
is a {ξ̂[(1, 0, 2)]; Sq(10) · Sq(5)}-matrix.
The following characterization of the summands of χ[Sq(a_1) ⋯ Sq(a_n)] follows from (3), (4), and Proposition 3.1 by induction on ex(T):
Proposition 3.2. Let a_1, …, a_n be positive integers, and suppose that T ∈ S with |T| = Σ a_j. Then M[T] appears in χ[Sq(a_1) ⋯ Sq(a_n)] ⟺ there are an odd number of {ξ̂[T]; Sq(a_1) ⋯ Sq(a_n)}-matrices.
A similar characterization of the summands of Sq(a_1) ⋯ Sq(a_n) may be obtained by considering matrices whose rows are ξ_i-strip vectors as described in [CW94] or [Sil95b].
Proposition 3.2 makes possible the following combinatorial interpretation of Theorem 1.1 as stated in Section 1:
Theorem 1.1, combinatorial version. Let f and k be positive integers, and suppose that T ∈ S with |T| = (2^k − 1)f. If Σ t_i < (2^k − 1)α(f), then there are an even number of {ξ̂[T]; S[k; f]}-matrices.
Theorem 2.4 of Section 2.4 may be restated as follows:
Theorem 2.4, combinatorial version. With notation as above, suppose that T is of length L. If f < 2^L − 1, then there are an even number of {ξ̂[T]; S[k; f]}-matrices.
In the case k = 2, the restatements of Theorems 1.1 and 2.4 may in turn be restated as follows: Given f ≥ 1 and an L-tuple G = (g_1, …, g_L) of positive integers such that Σ_{i=1}^{L} (2^{g_i} − 1) = 3f, define H(G) to be the set of L-tuples H = (h_1, …, h_L) for which 0 ≤ h_i ≤ g_i for all i and Σ_{i=1}^{L} (2^{h_i} − 1) = 2f. Elements of H(G) may be regarded as illustrated below in the case f = 7, G = (1, 2, 2, 3, 3), and H = (0, 1, 2, 3, 2). Here the rows of 1's represent the numbers 2^{g_i} − 1 in binary notation, and the boxed portions represent the corresponding numbers 2^{h_i} − 1.
Theorems 1.1 and 2.4, case k = 2. Let f ≥ 1. Then with notation as above, the cardinality of H(G) is even if L < 3α(f) or if 2^{g_m} − 1 > f for some m.
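For k = 2 the set H(G) is small enough to enumerate directly. A Python sketch (ours; the helper `H` simply mirrors the definition above):

```python
from itertools import product

def H(G, f):
    """H(G): all L-tuples (h_1, ..., h_L) with 0 <= h_i <= g_i and
    sum of (2^{h_i} - 1) equal to 2f (illustration of the k = 2 case)."""
    return [h for h in product(*[range(g + 1) for g in G])
            if sum(2 ** hi - 1 for hi in h) == 2 * f]

# the worked example of the text: f = 7, G = (1, 2, 2, 3, 3)
example = H((1, 2, 2, 3, 3), 7)
```

Note that for this G neither hypothesis of the theorem holds (L = 5 ≥ 3α(7) = 3, and 2^{g_m} − 1 ≤ 7 for every m), so no parity is asserted for it; by contrast, for f = 4 and G = (3, 2, 1, 1) we have L = 4 < 3α(4) = 6, and the cardinality of H(G) is indeed even.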
4. Motivation for the Proof of Theorem 1.1
Recall from Sections 1 and 2.2 that α(f) is the minimal number of summands in any representation of f as a sum of numbers of the form 2^i − 1 or, equivalently, the least excess of any Milnor basis element of degree f. The main result of [Sil95b], quoted below as Theorem 9.1, implies that ex(Ŝ[k; f]) ≤ (2^k − 1)α(f) for all k and f. Consequently, to prove the simplified version of Theorem 1.1 as stated in Section 1, it suffices to establish the reverse inequality. In order to motivate the computations to follow, we now give a stripping proof of the obvious inequality ex(Ŝq(f)) ≥ α(f), for the appreciation of which the reader is advised to ignore the second interpretation of α(f) and to regard this function simply as one satisfying α(0) = 0, α(1) = 1, and
    α(f) ≤ α(f − (2^i − 1)) + 1  (7)
whenever f ≥ 2^i − 1. We then indicate the difficulties in extending the argument to show that ex(Ŝ[k; f]) ≥ (2^k − 1)α(f) for k > 1. This exercise is intended to motivate and illustrate techniques rather than to sketch the path actually followed in proving Theorem 1.1.
Proof. Since Ŝq(1) = Sq(1), the claim is true for f = 1. Now fix f > 1 and assume inductively that the result is true for f′ < f. Let M[T] be a Milnor basis element ∈ Ŝq(f) and choose i for which t_i ≥ 1. By (3) and Part 1 of Proposition 2.2, we have
    1 = ⟨ξ̂[T], Sq(f)⟩ = ⟨ξ̂[T − B(i)], ξ̂_i ∩ Sq(f)⟩
                      = ⟨ξ̂[T − B(i)], Sq(f − (2^i − 1))⟩,
and consequently M[T − B(i)] ∈ Ŝq(f − (2^i − 1)). By the inductive hypothesis,
    ex(T) − 1 = ex(T − B(i)) ≥ α(f − (2^i − 1)),
and so ex(T) ≥ α(f) by (7). ∎
This argument rests on the unremarkable fact that if M[T] ∈ Ŝq(f), then t_i ≥ 1 for some i. The analogue of this starting point for general k, namely that t_i ≥ 2^k − 1 for some i, is not the case for arbitrary summands of Ŝ[k; f]. As we show in Proposition 8.2, however, this condition does hold for the sequence W whose associated Milnor basis element M[W] is maximal in right-lexicographic order among Milnor basis elements of minimal excess appearing in Ŝ[k; f]. We therefore have
    1 = ⟨ξ̂[W], S[k; f]⟩ = ⟨ξ̂[W − (2^k − 1)B(i)], ξ̂_i^{2^k − 1} ∩ S[k; f]⟩.  (8)
In order to proceed with the argument, one needs an analogue of Part 1 of Proposition 2.2 to describe the effect of stripping Steenrod operations of length k by ξ̂_i^{2^k − 1} (cf. Observation 2.3). We accomplish this in Section 5 by finding a formula for ξ̂_i^{2^k − 1} modulo elements of A_* of length > k. This formula involves ξ_k^{2^i − 1} along with certain error terms E_n. In Section 7, we further show that the defining property of W implies that the error terms
⟨ξ̂[W − (2^k − 1)B(i)], E_n ∩ S[k; f]⟩ arising from substituting this expression for ξ̂_i^{2^k − 1} in (8) all vanish. Consequently, we find that
    1 = ⟨ξ̂[W − (2^k − 1)B(i)], ξ_k^{2^i − 1} ∩ S[k; f]⟩
      = ⟨ξ̂[W − (2^k − 1)B(i)], S[k; f − (2^i − 1)]⟩,
the latter equality holding by Part 2 of Proposition 2.1. Therefore M[W − (2^k − 1)B(i)] appears in Ŝ[k; f − (2^i − 1)], and the inductive argument goes through as before; we conclude that ex(Ŝ[k; f]) ≥ (2^k − 1)α(f). As observed at the beginning of this section, this inequality confirms the simplified version of Theorem 1.1.
We include this argument to motivate the conjugation formulae of Section 5 and to illustrate the technique of manipulating these conjugation formulae while stripping, which is central to our proof of Proposition 8.2 in Sections 7 and 8 below. Once established, however, this proposition makes possible a more straightforward proof of Theorem 1.1 than the one indicated above. In Section 9, where the theorem is stated in its entirety and proved, we invoke Proposition 8.2 in its full strength to establish not only that W has an entry ≥ 2^k − 1 but that each entry of W is divisible by 2^k − 1. This, along with Theorem 9.1, enables us to bypass the inductive argument on f and, for fixed f, to prove the result for general k directly from the result for k = 1.
5. Conjugation formula
5.1. Formula for ξ̂_i. Let S(k) be the symmetric group with identity Id_k acting on {0, 1, …, k − 1}. For σ ∈ S(k) and i ≥ 0, define
    Z_i(k; σ) = { Σ_{j=0}^{k−1} 2^j B(i + σ(j) − j)   if i + σ(j) − j ≥ 0 for all j;
                  *                                    else }  (9)
and
    X_i(k; σ) = ξ[Z_i(k; σ)] = { Π_{j=0}^{k−1} ξ_{i+σ(j)−j}^{2^j}   if i + σ(j) − j ≥ 0 for all j;
                                 0                                   else }.
Observation 5.1. o Z_i(k; Id_k) = (2^k − 1)B(i), so that X_i(k; Id_k) = ξ_i^{2^k − 1}.
o If X_i(k; σ) ≠ 0, then |X_i(k; σ)| = (2^k − 1)(2^i − 1) independently of σ.
o Any non-* Z_i(k; σ) with σ ≠ Id_k is strictly ≻_R and no greater in excess than (2^k − 1)B(i) = Z_i(k; Id_k).
Finally, define
    X_i(k) = Σ_{σ ∈ S(k)} X_i(k; σ)
and note that
    X_i(1) = X_i(1; Id_1) = ξ_i.  (10)
Lemma 5.2. For k ≥ 1, we have ξ̂_k = X_1(k).
Proof. The proof is by induction on k and makes use of Milnor's recursive formula for the canonical antiautomorphism [Mil58]: ξ̂_1 = ξ_1 and ξ̂_k = Σ_{l=0}^{k−1} ξ_{k−l}^{2^l} ξ̂_l. For k = 1, we have X_1(1) = X_1(1; Id_1) = ξ_1 = ξ̂_1, and for k = 2 we have X_1(2) = X_1(2; Id_2) + X_1(2; (0, 1)) = ξ_1^3 + ξ_2 = ξ̂_2. Suppose now that k ≥ 3 and that the formula holds for k′ < k. Note that if X_1(k; σ) ≠ 0, we must have σ(j) ≥ j − 1 for all j. Consequently if X_1(k; σ) ≠ 0 and l satisfies σ(l) = k − 1, we must have σ(k − 1) = k − 2, σ(k − 2) = k − 3, …, σ(l + 1) = l, so that σ has cycle decomposition of the form (k − 1, k − 2, …, l)τ for some τ ∈ S(l). In this case, we have X_1(k; σ) = ξ_{k−l}^{2^l} X_1(l; τ).
Let S_l(k) = {σ ∈ S(k) : σ(l) = k − 1}. Then
    X_1(k) = Σ_{l=0}^{k−1} Σ_{σ ∈ S_l(k)} X_1(k; σ) = Σ_{l=0}^{k−1} ξ_{k−l}^{2^l} Σ_{τ ∈ S(l)} X_1(l; τ)
           = Σ_{l=0}^{k−1} ξ_{k−l}^{2^l} ξ̂_l = ξ̂_k,
the last line holding by the inductive hypothesis. ∎
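The identity ξ̂_k = X_1(k) can be machine-checked in small cases by implementing polynomial arithmetic over F_2 in the generators ξ_i. The following Python sketch is our own verification code (not part of the paper): a monomial is a tuple of exponents, a polynomial a set of monomials, addition is symmetric difference.

```python
from itertools import permutations
from functools import lru_cache

def trim(m):
    while m and m[-1] == 0:
        m = m[:-1]
    return m

def mono_mul(m1, m2):
    L = max(len(m1), len(m2))
    m1 = m1 + (0,) * (L - len(m1))
    m2 = m2 + (0,) * (L - len(m2))
    return tuple(a + b for a, b in zip(m1, m2))

def poly_mul(p, q):
    out = set()
    for m1 in p:
        for m2 in q:
            out ^= {mono_mul(m1, m2)}   # coefficients live in F_2
    return out

def xi(n, e=1):
    """The one-term polynomial xi_n^e, with the convention xi_0 = 1."""
    if n == 0:
        return {()}
    return {trim((0,) * (n - 1) + (e,))}

@lru_cache(maxsize=None)
def chi_xi(k):
    """hat-xi_k via Milnor's recursion:
    chi(xi_0) = 1 and chi(xi_k) = sum_{l<k} xi_{k-l}^{2^l} * chi(xi_l)."""
    if k == 0:
        return frozenset({()})
    out = set()
    for l in range(k):
        out ^= poly_mul(xi(k - l, 2 ** l), chi_xi(l))
    return frozenset(out)

def X1(k):
    """X_1(k) = sum over sigma in S(k) of prod_j xi_{1+sigma(j)-j}^{2^j};
    a term is dropped unless 1 + sigma(j) - j >= 0 for all j."""
    out = set()
    for sigma in permutations(range(k)):
        if any(1 + sigma[j] - j < 0 for j in range(k)):
            continue
        term = {()}
        for j in range(k):
            term = poly_mul(term, xi(1 + sigma[j] - j, 2 ** j))
        out ^= term
    return frozenset(out)
```

For k ≤ 4 the two sides agree monomial for monomial; for instance ξ̂_2 = ξ_1^3 + ξ_2, encoded as {(3,), (0, 1)}.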
5.2. General formula. Equation (10) and Lemma 5.2 suggest the following generalization, which may be verified for i = 2 and all k by an argument similar to the proof of Proposition 5.5 below:
Conjecture 5.3. χ(X_k(i)) = X_i(k) for all i, k ≥ 1.
For our purposes, however, we require a different generalization of Lemma 5.2.
Definition. For k ≥ 1, let I(k) denote the set of nondecreasing sequences (i_0, i_1, …, i_{k−1}) of positive integers. Now for σ ∈ S(k) and I ∈ I(k), define
    Z_I(k; σ) = { Σ_{j=0}^{k−1} 2^j B(i_{σ(j)} + σ(j) − j)   if i_{σ(j)} + σ(j) − j ≥ 0 for all j;
                  *                                           else }
    X_I(k; σ) = ξ[Z_I(k; σ)] = { Π_{j=0}^{k−1} ξ_{i_{σ(j)}+σ(j)−j}^{2^j}   if i_{σ(j)} + σ(j) − j ≥ 0 for all j;
                                 0                                          else }
    X_I(k) = Σ_{σ ∈ S(k)} ξ[Z_I(k; σ)] = Σ_{σ ∈ S(k)} X_I(k; σ).
We further define
    P_I(k; σ) = { Σ_{j=0}^{k−1} 2^{j+i_0} B(i_{σ(j)} + σ(j) − (j + i_0))   if i_{σ(j)} + σ(j) − (j + i_0) ≥ 0 for all j;
                  *                                                         else }
    R_I(k; σ) = ξ[P_I(k; σ)] = { Π_{j=0}^{k−1} ξ_{i_{σ(j)}+σ(j)−(i_0+j)}^{2^{i_0+j}}   if i_{σ(j)} + σ(j) − (j + i_0) ≥ 0 for all j;
                                 0                                                       else }
    R_I(k) = Σ_{σ ∈ S(k)} ξ[P_I(k; σ)] = Σ_{σ ∈ S(k)} R_I(k; σ).
Observation 5.4. o If I = (i, …, i) ∈ I(k) is a constant sequence, then Z_I(k; σ) = Z_i(k; σ) as defined in (9). Moreover P_I(k; σ) = * for σ ≠ Id_k, and consequently R_I(k) = R_I(k; Id_k) = 1.
o For general I, Z_I(k; Id_k) = Σ_{j=0}^{k−1} 2^j B(i_j).
o Any non-* Z_I(k; σ) with σ ≠ Id_k is strictly ≻_R and no greater in excess than Σ_{j=0}^{k−1} 2^j B(i_j) = Z_I(k; Id_k).
o The degrees of the non-* Z_I(k; σ) and P_I(k; σ) depend only on I and not on σ.
o If I = (i_0, i_1, …, i_{k−1}) ∈ I(k) and i_0 > 1, write I[1] for the sequence (i_0 − 1, i_1 − 1, …, i_{k−1} − 1). Then R_I(k) = [R_{I[1]}(k)]^2.
Proposition 5.5. With notation as above, X̂_I(k) ≡ ξ_k^{2^{i_0} − 1} · R̂_I(k) modulo elements of length > k.
Proof. Fix k. The proof is by induction on i_0.
When i_0 = 1, we have
    ξ_k · R̂_I(k) = Σ_σ ξ_k · ξ̂_{i_{σ(k−1)}+σ(k−1)−k}^{2^k} · Π_{j≠k−1} ξ̂_{i_{σ(j)}+σ(j)−(j+1)}^{2^{j+1}}.  (11)
On the other hand, we have
    X̂_I(k) = Σ_ρ ξ̂_{i_{ρ(0)}+ρ(0)}^{2^0} · Π_{j≠0} ξ̂_{i_{ρ(j)}+ρ(j)−j}^{2^j}.
For each ρ ∈ S(k), define ρ′ by
    ρ′(l) = { ρ(0)      if l = k − 1;
              ρ(l + 1)  if 0 ≤ l ≤ k − 2 }.  (12)
Then
    X̂_I(k) = Σ_{ρ′} ξ̂_{i_{ρ′(k−1)}+ρ′(k−1)} · Π_{l≠k−1} ξ̂_{i_{ρ′(l)}+ρ′(l)−(l+1)}^{2^{l+1}}.  (13)
Observe that by Milnor's formula,
    ξ̂_{i_{ρ′(k−1)}+ρ′(k−1)} ≡ Σ_{n=1}^{k} ξ_n · ξ̂_{i_{ρ′(k−1)}+ρ′(k−1)−n}^{2^n}   modulo terms of length > k.  (14)
From (13), we find that
    X̂_I(k) ≡ Σ_{ρ′} ( Σ_{n=1}^{k} ξ_n · ξ̂_{i_{ρ′(k−1)}+ρ′(k−1)−n}^{2^n} ) · Π_{l≠k−1} ξ̂_{i_{ρ′(l)}+ρ′(l)−(l+1)}^{2^{l+1}}.  (15)
We claim now that the sums in (11) and (15) agree. Indeed, their difference is
    Σ_{ρ′} Σ_{n=1}^{k−1} ξ_n · ξ̂_{i_{ρ′(k−1)}+ρ′(k−1)−n}^{2^n} · Π_{l≠k−1} ξ̂_{i_{ρ′(l)}+ρ′(l)−(l+1)}^{2^{l+1}}
    = Σ_{n=1}^{k−1} Σ_{ρ′} ξ_n · ξ̂_{i_{ρ′(k−1)}+ρ′(k−1)−n}^{2^n} · ξ̂_{i_{ρ′(n−1)}+ρ′(n−1)−[(n−1)+1]}^{2^{(n−1)+1}} · Π_{l≠n−1,k−1} ξ̂_{i_{ρ′(l)}+ρ′(l)−(l+1)}^{2^{l+1}},  (16)
and it is easily verified that the summand in (16) associated to n and ρ′ equals the term associated to n and ρ″, where
    ρ″(l) = { ρ′(l)      if l ≠ n − 1, k − 1;
              ρ′(n − 1)  if l = k − 1;
              ρ′(k − 1)  if l = n − 1 }.
Consequently each term in (16) appears twice. We conclude that the sums in (11) and (15) are equal, and hence that ξ_k · R̂_I(k) and X̂_I(k) are congruent modulo elements of length > k. This establishes the proposition for the case i_0 = 1.
The proof for general I is similar. Suppose that the proposition is true for the nondecreasing sequence (i_0 − 1, i_1 − 1, …, i_{k−1} − 1) =: I[1]. For typographical convenience, let i = i_0. By Observation 5.4, we have
    ξ_k^{2^i − 1} · R̂_I(k) = [ξ_k^{2^{i−1} − 1} · R̂_{I[1]}(k)]^2 · ξ_k
                            ≡ [X̂_{I[1]}(k)]^2 · ξ_k   (modulo length > k, by induction)
                            = ξ_k · Σ_σ [ Π_{j=0}^{k−1} ξ̂_{(i_{σ(j)}−1)+σ(j)−j}^{2^{j+1}} ]
                            = Σ_σ ξ_k · ξ̂_{(i_{σ(k−1)}−1)+σ(k−1)−(k−1)}^{2^k} · Π_{j≠k−1} ξ̂_{(i_{σ(j)}−1)+σ(j)−j}^{2^{j+1}}
                            = Σ_σ ξ_k · ξ̂_{i_{σ(k−1)}+σ(k−1)−k}^{2^k} · Π_{j≠k−1} ξ̂_{i_{σ(j)}+σ(j)−(j+1)}^{2^{j+1}}.  (17)
On the other hand,
    X̂_I(k) = Σ_ρ Π_{j=0}^{k−1} ξ̂_{i_{ρ(j)}+ρ(j)−j}^{2^j}
            = Σ_ρ ξ̂_{i_{ρ(0)}+ρ(0)}^{2^0} · Π_{j≠0} ξ̂_{i_{ρ(j)}+ρ(j)−j}^{2^j}.  (18)
One can now define ρ′ as in (12) and use (14) as in the case i_0 = 1 to establish the congruence of the sums in (17) and (18) modulo elements of length > k. This establishes the inductive step and proves the proposition. ∎
Define
    X̂′_I(k) = Σ_{Id_k ≠ σ ∈ S(k)} X̂_I(k; σ).
We can then rewrite the conclusion of Proposition 5.5 as
    X̂_I(k; Id_k) ≡ X̂′_I(k) + ξ_k^{2^{i_0} − 1} · R̂_I(k)
modulo elements of length > k. When I = (i, …, i) is a constant sequence, we find by Observation 5.4 that
    ξ̂_i^{2^k − 1} ≡ X̂′_I(k) + ξ_k^{2^i − 1};
this is the formula referred to in Section 4.
Recall from Part 1 of Proposition 2.1 that ξ_j ∩ θ = 0 if len(θ) < j, and from Part 2 of the same proposition that ξ_k ∩ S[k; f] = S[k; f − 1]. Proposition 5.5 therefore implies:
Corollary 5.6. 1. If len(θ) < k, then X̂_I(k) ∩ θ = 0 for all I ∈ I(k).
2. If len(θ) = k, then X̂_I(k) ∩ θ = R̂_I(k) ∩ (ξ_k^{2^{i_0} − 1} ∩ θ).
3. In particular, X̂_I(k) ∩ S[k; f] = R̂_I(k) ∩ S[k; f − (2^{i_0} − 1)].
In what follows, we will use Part 3 of the corollary in the form
    X̂_I(k; Id_k) ∩ S[k; f] = (X̂′_I(k) ∩ S[k; f]) + (R̂_I(k) ∩ S[k; f − (2^{i_0} − 1)]).  (19)
6. k-reductions
In this section, we identify a constraint on the Milnor basis elements appearing in Ŝ[k; f]. Here and in Section 8, we make frequent use of the inequality
    2(2^{λ−1} − 1) < 2^λ − 1.  (20)
Definition. Let T ∈ S, and suppose I(1), …, I(n) ∈ I(k) are sequences such that all coefficients of T − Σ_{r=1}^{n} Z_{I(r)}(k; Id_k) are nonnegative. Then T is k-reducible via Ī = {I(1), …, I(n)}. Suppose also that σ_1, …, σ_n ∈ S(k) satisfy P_{I(r)}(k; σ_r) ≠ * for 1 ≤ r ≤ n. Set U = T − Σ_{r=1}^{n} Z_{I(r)}(k; Id_k) and P = Σ_r P_{I(r)}(k; σ_r). Then [U, P] ∈ S × S is the k-reduction of T via Ī and σ̄ = {σ_1, …, σ_n}. We define the degree |[U, P]| to be the sum |U| + |P|; it is easy to check that
Fact 6.1. With notation as above, the degree |[U, P]| = |T| − Σ_r (2^k − 1)(2^{i_0(r)} − 1), where i_0(r) is the 0th coordinate of I(r). In particular, any k-reduction of T has degree ≡ |T| modulo (2^k − 1).
Observation 6.2. o If T is k-reducible via {I(1), …, I(n)} ⊆ I(k), then one may obtain a k-reduction of T by taking σ_r = Id_k for all r. Indeed, if the I(r) are constant sequences, then by Observation 5.4 this is the only corresponding k-reduction of T.
o If t_i ≥ 2^k − 1 for some i, then [T − (2^k − 1)B(i), 0_S] is a k-reduction of T via the constant sequence (i, …, i) ∈ I(k) and the permutation σ = Id_k. (Here 0_S denotes the trivial element of S as in Section 2.1.) Consequently, every T has a k-reduction [U, 0_S] with u_i ≤ 2^k − 2 for all i.
Definition. A sequence T ∈ S is k-irreducible if it fails to be k-reducible via {I} for any I ∈ I(k).
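Since k-reducibility via a single I ∈ I(k) only involves Z_I(k; Id_k) = Σ_j 2^j B(i_j), it is easy to test by machine. A Python sketch (ours; the helper name `is_k_reducible` is our own):

```python
from itertools import combinations_with_replacement

def is_k_reducible(T, k):
    """Is T k-reducible via some single I in I(k)?  We search for a
    nondecreasing I = (i_0 <= ... <= i_{k-1}) of positive integers with
    T - Z_I(k; Id_k) >= 0 coordinatewise, where Z_I(k; Id_k) = sum_j 2^j B(i_j).
    Indices beyond len(T) can never work, so the search is finite."""
    L = len(T)
    for I in combinations_with_replacement(range(1, L + 1), k):
        need = [0] * L
        for j, i in enumerate(I):
            need[i - 1] += 2 ** j       # coordinate i receives 2^j
        if all(t >= m for t, m in zip(T, need)):
            return True
    return False
```

For instance, (1, 2) is 2-reducible (subtract B(1) + 2B(2) = (1, 2)), while (0, 2) is 2-irreducible.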
Lemma 6.3. 1. If U is a k-irreducible sequence of length λ, then
    |U| ≤ |(0, …, 0, 2^k − 2)| = (2^k − 2)(2^λ − 1).
2. Suppose that j ≤ k − 1. With U as above, let U[m] = (0, …, 0, u_m, …, u_λ), and suppose that U[m_j] is j-reducible but U[m_j + 1] is j-irreducible. Then
    |U| ≤ |(0, …, 0, 2^k − 2, 0, …, 0, 2^j − 2)| = (2^k − 2)(2^{m_j} − 1) + (2^j − 2)(2^λ − 1),
where the entry 2^k − 2 occurs in position m_j.
Proof. Part 1 is obvious for either λ = 1 or k = 1. Suppose it is true for λ − 1 and k′ ≤ k, and also for λ and k′ ≤ k − 1. Let U′ = (u_1, …, u_{λ−1}); observe that U′ is also k-irreducible. Suppose first that u_λ ≤ 2^{k−1} − 1. Then by (20) and the inductive hypothesis, |U| < (2^k − 2)(2^λ − 1) as claimed. Suppose now that u_λ = 2^k − 2^j + ε for some 1 ≤ j ≤ k − 1 and some 0 ≤ ε ≤ 2^{j−1} − 1. This time U′ must be j-irreducible, and again (20) and the inductive hypothesis give the required inequality for |U|. The proof of Part 2 is similar. ∎
Corollary 6.4. Fix positive integers k and f, and let T ∈ S satisfy M[T] ∈ Ŝ[k; f]. Then T is k-reducible.
Proof. Let λ = len(T). If T is k-irreducible, then by Lemma 6.3, |T| ≤ (2^k − 2)(2^λ − 1). Since |T| = (2^k − 1)f, we find that f < 2^λ − 1. Theorem 2.4 then implies that len(T) < λ. This contradiction proves the corollary. ∎
Given positive integers k and f, denote by W ∈ S the sequence for which M[W] is maximal in right-lexicographical order among Milnor summands of minimal excess of Ŝ[k; f]. Recall from Section 4 that our goal is to show that w_i ≥ 2^k − 1 for some i; that is, that W is not only k-reducible but k-reducible via a constant sequence. This we accomplish in the next two sections by exploiting the conjugation formula of Section 5.
7. k-Reductions and stripping
In what follows, we assume for typographical convenience that Ī and σ̄ consist respectively of the single sequence I ∈ I(k) and the single permutation σ ∈ S(k), but in fact the same reasoning applies for arbitrary Ī and σ̄.
Let T ∈ S and θ ∈ A^*, and suppose that T is k-reducible via I ∈ I(k). Then by (3) we have
    ⟨ξ[T], θ̂⟩ = ⟨ξ̂[T], θ⟩ = ⟨ξ̂[T − Z_I(k; Id_k)], ξ̂[Z_I(k; Id_k)] ∩ θ⟩
              = ⟨ξ̂[T − Z_I(k; Id_k)], X̂_I(k; Id_k) ∩ θ⟩.  (21)
Fix positive integers k and f, and set θ = S[k; f]. Henceforth we denote by W ∈ S the sequence for which M[W] is maximal in the right-lexicographical order among all summands of minimal excess in the Milnor basis representation of Ŝ[k; f]. If W is k-reducible via I, we have from (1), (19), and (21) that
    1 = ⟨ξ̂[W], S[k; f]⟩ = ⟨ξ̂[W − Z_I(k; Id_k)], X̂′_I(k) ∩ S[k; f]⟩
                         + ⟨ξ̂[W − Z_I(k; Id_k)], R̂_I(k) ∩ S[k; f − (2^{i_0} − 1)]⟩,
so
    1 = Σ_{Id_k ≠ σ ∈ S(k), Z_I(k;σ) ≠ *} ⟨ξ̂[W − Z_I(k; Id_k) + Z_I(k; σ)], S[k; f]⟩  (22)
      + Σ_{P_I(k;σ) ≠ *} ⟨ξ̂[{W − Z_I(k; Id_k)} + P_I(k; σ)], S[k; f − (2^{i_0} − 1)]⟩.  (23)
By Observation 5.4, any non-* Z_I(k; σ) for σ ≠ Id_k is ≻_R and no greater in excess than Z_I(k; Id_k), and so the term W − Z_I(k; Id_k) + Z_I(k; σ) of (22) is ≻_R and no greater in excess than W. By definition of W, then, W − Z_I(k; Id_k) + Z_I(k; σ) does not appear in Ŝ[k; f]. This implies that each summand in (22) vanishes, and consequently that the summation in (23) is nonzero. But this summation is equal to
    Σ ⟨ξ̂[U + P], S[k; f − (2^{i_0} − 1)]⟩,
where the summation is over all k-reductions [U, P] via the sequence I and some σ ∈ S(k). We therefore conclude by (1) and (22)-(23) that M[U + P] appears in Ŝ[k; f − (2^{i_0} − 1)] for some k-reduction [U, P] of W via I.
Observe that if w_i ≥ 2^k − 1 for some i, then I may be taken to be the constant sequence (i, …, i). By Observation 5.4 the only k-reduction of W via I in this case corresponds to σ = Id_k. Since Z_I(k; Id_k) = (2^k − 1)B(i) and P_I(k; Id_k) = 0_S, we find from (23) that M[W − (2^k − 1)B(i)] ∈ Ŝ[k; f − (2^i − 1)]. This is the conclusion we reached when previewing the argument for the simplified version of Theorem 1.1 in Section 4 under the assumption that w_i ≥ 2^k − 1 for some i. In Section 8 below, we prove the stronger result that each entry of W is divisible by 2^k − 1, and this result paves the way for the proof of Theorem 1.1 itself in Section 9.
In the meantime, we note that if W is reducible by any set of sequences Ī = {I(1), …, I(n)}, then the above argument implies that M[U + P] appears in Ŝ[k; f − Σ_r (2^{i_0(r)} − 1)], where [U, P] is a k-reduction of W via Ī and some σ̄ ⊆ S(k), and i_0(r) denotes the 0th coordinate of I(r). We summarize this discussion in the following proposition, in which we deal separately with the constant and nonconstant sequences in Ī. First observe that if T ∈ S is written in the form T = (2^k − 1)G + H for some G, H ∈ S, then by Observation 6.2, [H, 0_S] is the unique k-reduction of T via constant sequences corresponding to the entries of G. Moreover, if M[T] ∈ Ŝ[k; f] (so that |T| = (2^k − 1)f), then |H| = (2^k − 1)j for some integer j by Fact 6.1.
Proposition 7.1. Let M[W] be maximal in the right-lexicographical order among all Milnor basis elements of minimal excess appearing in Ŝ[k; f], and write W = (2^k − 1)G + H for some G, H ∈ S with |H| = (2^k − 1)j. Then M[H] appears in Ŝ[k; j].
Suppose moreover that H is k-reducible via Ī = {I(1), …, I(n)}, and denote by i_0(r) the 0th coordinate of I(r). Then M[U + P] appears in Ŝ[k; j − Σ_r (2^{i_0(r)} − 1)] for some k-reduction [U, P] of H via Ī.
A similar argument yields the analogous conclusion concerning the sequence W̃ whose associated Milnor basis element is maximal in right-lexicographical order among all summands of Ŝ[k; f], regardless of excess.
8. Degrees of k-reductions
It follows from Observation 6.2 that if each coordinate of the sequence T is divisible by 2^k − 1, then [0_S, 0_S] is a k-reduction of T via constant sequences. We now turn our attention to sequences which are not divisible by 2^k − 1.
Lemma 8.1. Let T ∈ S, and let L be the largest index i for which t_i is not divisible by 2^k − 1. Then either |T| < (2^k − 1)(2^L − 1) or T has a k-reduction [U, P] of degree < (2^k − 1)(2^L − 1) with u_L > 0.
Proof. By Observation 6.2, we may assume that t_i ≤ 2^k − 2 for all i, and consequently that T is of length L. Thus by Lemma 6.3, the lemma is true if T is k-irreducible. In what follows, we assume that T is k-reducible.
The lemma is easily verified for L = 1 and all k, and for k = 1 and all L. Suppose then that it is true for L − 1 and k′ ≤ k, and for L and k′ ≤ k − 1. In what follows, we write T′ for (t_1, …, t_{L−1}).
Case I: 1 ≤ t_L ≤ 2^{k−1} − 1. Let [U′, P′] be a k-reduction of T′ of degree < (2^k − 1)(2^{L−1} − 1), which exists by the inductive hypothesis. Then it is easily verified using (20) that [U′ + t_L B(L), P′] is a k-reduction of T satisfying the conditions stated in the lemma.
Case II: 2^{k−1} ≤ t_L ≤ 2^k − 2. Write t_L = 2^k − 2^j + ε for some 1 ≤ j ≤ k − 1 and 0 ≤ ε ≤ 2^{j−1} − 1. Since T is k-reducible and t_L < 2^k − 2^{j−1}, T′ must be j-reducible. We distinguish the cases ε > 0 and ε = 0.
Case IIa: $\varepsilon > 0$. With $T'$ as usual, define $T'[m_j]$ as in Lemma 6.3. By the lemma, we have
\[ |T| = |T'| + t_L(2^L - 1) \ge (2^k - 2)(2^{m_j} - 1) + (2^j - 2)(2^{L-1} - 1) + (2^k - 2^j + \varepsilon)(2^L - 1). \]
Now $t_L \ge 2^k - 2^j$ and $T'[m_j]$ is $j$-reducible, so the sequence $T$ itself is $k$-reducible via some sequence $I \in I(k)$ with $i_0 = m_j$, $i_{j-1} < L$, and $i_l = L$ for $j \le l \le k-1$. Let $[U;P]$ be any $k$-reduction via $I$, and observe that $u_L = t_L - (2^k - 2^j) = \varepsilon \ne 0$. It is easily verified using Fact 6.1 that the degree $|[U;P]|$ satisfies the inequality given in the lemma.
Case IIb: $\varepsilon = 0$. If $T'$ is $(j+1)$-reducible, proceed as in Case IIa, replacing $j$ by $j+1$ when invoking Lemma 6.3 and choosing $I$; this time $u_L = t_L - (2^k - 2^{j+1}) = 2^j$. If $T'$ is $(j+1)$-irreducible, the inductive hypothesis along with (20) implies that $[T; 0_S]$ satisfies the requirements of the lemma. □
Proposition 8.2. Fix positive integers $k$ and $f$, and suppose that $W \in S$ is the sequence whose associated Milnor basis element is maximal in right-lexicographical order among all summands of $\hat{S}[k;f]$ of minimal excess. Then $W = (2^k - 1)\,Q$ for some $Q \in S$ of degree $f$. Similarly, if $\bar{W} \in S$ is the sequence whose associated Milnor basis element is maximal in right-lexicographical order among all summands of $\hat{S}[k;f]$ regardless of excess, then $\bar{W} = (2^k - 1)\,\bar{Q}$ for some $\bar{Q} \in S$ of degree $f$.
Proof. We give the proof for $W$; the argument for $\bar{W}$ is identical. Suppose that the proposition is not true, and let $L$ be the largest index $i$ for which $w_i$ is not divisible by $2^k - 1$. Take $[U;P]$ to be a $k$-reduction of $W$ satisfying the conclusion of Lemma 8.1, namely that $u_L > 0$ and $|[U;P]| < (2^k-1)(2^L - 1)$. Since $|[U;P]| \equiv |W| \equiv 0 \pmod{2^k - 1}$ by Fact 6.1, we can write $|[U;P]| = (2^k - 1)\phi$ for some $\phi < 2^L - 1$. By Proposition 7.1 and Part 4 of Observation 5.4 we may assume, perhaps after adjusting the parameter $\bar{o}$ of the $k$-reduction, that $M[U+P]$ appears in $\hat{S}[k;\phi]$. But Theorem 2.4 with $f = \phi$ then implies that $u_L + p_L = 0$, contradicting the assumption on $[U;P]$. We conclude that indeed $w_i \equiv 0 \pmod{2^k - 1}$ for all $i$. □
9. Proof of Theorem 1.1
Before stating Theorem 1.1 in its entirety and giving a proof, we recall a result of [Sil95b]. Given a positive integer $f$, let $\lambda(f) = \max\{\lambda : 2^\lambda - 1 \le f\}$, and define sequences $R_1(f)$ inductively by
\[ R_1(f) = \begin{cases} B(1) & f = 1 \\ B(\lambda(f)) + R_1\bigl(f - (2^{\lambda(f)} - 1)\bigr) & f > 1. \end{cases} \]
Fix $f$ and write $R_1(f) = (r_1, \dots, r_{\lambda(f)})$. Evidently $|R_1(f)| = f$ and $r_i \le 1$ for all $i \le \lambda(f)$, except that the first non-0 $r_i$ is $\le 2$. One may verify inductively that $R_1(f)$ is both minimal in excess over all Milnor basis elements of degree $f$ and maximal in the right-lexicographical order over all elements in that degree regardless of excess [Gal79]. In particular $\mathrm{ex}(R_1(f)) = \mu(f)$, so that $\mu(f)$ may be computed recursively via $\mu(1) = 1$, $\mu(f) = 1 + \mu\bigl(f - (2^{\lambda(f)} - 1)\bigr)$.

For future reference, we observe that writing $f = |R_1(f)| = \sum_j (2^j - 1)\,r_j$ gives rise to a decomposition of $f$ as the sum of $\mu(f)$ numbers of the form $2^i - 1$, all distinct except possibly the two smallest, in which the number of times each $2^i - 1$ appears is given by $r_i$.
For $k \ge 2$, define $R_k(f) = (2^k - 1)\,R_1(f)$. Then:

Theorem 9.1. [Sil95b] For all positive integers $f$ and $k$, $R_k(f)$ is a summand of $\hat{S}[k;f]$, and so $\mathrm{ex}(\hat{S}[k;f]) \le (2^k - 1)\mu(f)$.
We are now ready to prove:
Theorem 1.1. Let $f$ be a positive integer. Then for all positive integers $k$, the Milnor basis element $M[R_k(f)]$ is both minimal in excess and maximal in right-lexicographical order over all basis elements appearing in $\hat{S}[k;f]$. In particular, $\mathrm{ex}(\hat{S}[k;f]) = (2^k - 1)\mu(f)$.
Proof. The theorem for $k = 1$ follows from Milnor's formula
\[ \widehat{Sq}(f) = \sum_{|T| = f} M[T] \]
and the remarks following the definition of $R_1(f)$. Suppose then that $k > 1$, and let $\bar{W}$ (resp. $W$) $\in S$ be the sequence whose associated Milnor basis element is maximal in right-lexicographical order among all summands of $\hat{S}[k;f]$ (resp. among all summands of $\hat{S}[k;f]$ of minimal excess). By Proposition 8.2, we have $W = (2^k - 1)\,Q$ and $\bar{W} = (2^k - 1)\,\bar{Q}$ for some sequences $Q$, $\bar{Q}$ of degree $f$. But the relations "$\le$ in right-lexicographical order" and "$\le$ in excess" are preserved by termwise multiplication by $2^k - 1$, and so it follows from Theorem 9.1 and Theorem 1.1 for $k = 1$ that $W = \bar{W} = R_k(f)$. □
10. Proof of Theorem 1.2
The $A^*$-action on the polynomial rings $P_s$ is defined by the Cartan formula and the requirement that $Sq(1)x_i = x_i^2$ for all $i$. It follows that for all $f, k \ge 1$ and all polynomials $F$ of degree $f$, we have
\[ S[k;f]\,F = F^{2^k}. \tag{24} \]
Recall from Section 1 that the excess of an element $\theta \in A^*$ satisfies $\mathrm{ex}(\theta) = \min\{s : \theta(x_1 x_2 \cdots x_s) \ne 0 \in P_s\}$. Since linear maps commute with the action of $A^*$, it follows that $\theta(P) = 0$ for any polynomial $P$ in any $P_{s'}$ of degree $< \mathrm{ex}(\theta)$.
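The $A^*$-action described here can be modelled directly, which makes (24) easy to test in small cases. In the sketch below (Python; the set-of-monomials encoding and all function names are ours), a polynomial over $\mathbb{F}_2$ is a set of exponent tuples, $Sq(f)$ is computed from $Sq(x^a) = (x + x^2)^a$ on each variable together with the Cartan formula, and we check $S[k;f]F = F^{2^k}$ for $k = 1, 2$ as well as the vanishing of $Sq(f)$ on polynomials of degree $< f$.

```python
from itertools import product
from math import comb

def sq(f, poly):
    """Sq(f) acting on a polynomial over F_2.

    A polynomial is a set of monomials; a monomial is a tuple of exponents.
    On one variable, Sq(x^a) = (x + x^2)^a = sum_j C(a, j) x^(a + j); the
    Cartan formula extends this multiplicatively to monomials."""
    out = set()
    for mono in poly:
        d = sum(mono)
        choices = [[a + j for j in range(a + 1) if comb(a, j) % 2] for a in mono]
        for t in product(*choices):
            if sum(t) == d + f:    # pick out the degree-(d + f) component
                out ^= {t}         # coefficients live in F_2: xor = add mod 2
    return out

def frobenius(poly, k=1):
    """poly^(2^k); squaring is additive in characteristic 2."""
    return {tuple(2 ** k * a for a in m) for m in poly}

F = {(1, 1, 0), (0, 0, 2)}          # F = x1*x2 + x3^2, degree f = 2

# (24) for k = 1: Sq(f) F = F^2 when deg F = f
assert sq(2, F) == frobenius(F)

# (24) for k = 2: S[2; f] F = Sq(2f) Sq(f) F = F^4
assert sq(4, sq(2, F)) == frobenius(F, 2)

# instability: Sq(f) vanishes on polynomials of degree < f
assert sq(3, F) == set()
```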
Theorem 1.2. Let $P$ be a polynomial of the form $E \cdot F^{2^k}$ for some polynomials $E$ and $F$ of degrees $e$ and $f$ respectively, and suppose that $e < (2^k - 1)\mu(f)$. Then $P$ is hit.
Proof. As mentioned in Section 1, the proof mirrors exactly Reg Wood's proof of the case $k = 1$ [Woo89b]; the main ingredient is the observation that if $U$ and $V$ are polynomials and $\theta \in A^*$, then $U \cdot \theta V \equiv \hat{\theta} U \cdot V$ modulo hit elements. By (24), we have
\[ E F^{2^k} = E \cdot S[k;f]F \equiv \hat{S}[k;f]E \cdot F \quad \text{(modulo hit elements).} \]
But $\mathrm{ex}(\hat{S}[k;f]) = (2^k - 1)\mu(f)$ by Theorem 1.1, so $\hat{S}[k;f]E = 0$ by the definition of excess and the hypothesis on $e$, $f$, and $k$. We conclude that $E F^{2^k}$ is indeed hit. □
A given monomial $M$ can generally be written in the form $E \cdot F^{2^k}$ in several ways, especially if $k$ is allowed to vary, and to each such decomposition the theorem assigns a different inequality to be checked. In what remains of this section, we show that it suffices to consider the obvious decompositions in which $F$ is square-free and the degree in $E$ of each variable $x_i$ is $\le 2^k - 1$. We begin by stating two immediate consequences of the discussion of $R_1(f)$ and $\mu(f)$ in Section 9.
Consequence 10.1.
1. $\mu(f+1) = \mu(f) \pm 1$, accordingly as the first non-0 entry of $R_1(f)$ is $\le 1$ or $= 2$.
2. If $A \le 2^L - 1$ and $L \le j_1 < \dots < j_m$, then $\mu\bigl(A + \sum_{i=1}^{m} (2^{j_i} - 1)\bigr) = \mu(A) + m$.
We shall need the following further properties of $\mu(f)$:
Lemma 10.2.
1. For any positive integers $g$ and $h$, we have $\mu(h) \le \mu(g+h) + g$.
2. For any positive integer $f$, we have $\mu(f) \le 2\mu(2f)$.
Proof. Part 1 follows from the first consequence above by induction on $g$. To prove Part 2, write $f = \sum_{i=1}^{\mu(f)} (2^{j_i} - 1)$ with $j_1 \le j_2 < j_3 < \dots < j_{\mu(f)}$ as in the paragraph following the definition of $R_1(f)$ in Section 9. Let $s = \bigl[\frac{\mu(f)+1}{2}\bigr]$. If $s \le 1$, then $\mu(f) \le 2 < 4 \le 2\mu(2f)$. Suppose then that $s \ge 2$ and observe that
\[ 2\sum_{i=1}^{\mu(f)-s} (2^{j_i} - 1) \ge 2\bigl(\mu(f) - s\bigr) \ge 2(s - 1) \ge s. \]
Therefore we can write $2f$ in the form
\[ 2f = \sum_{i=\mu(f)-s+1}^{\mu(f)} \bigl[2(2^{j_i} - 1) + 1\bigr] + \Bigl(2\sum_{i=1}^{\mu(f)-s} (2^{j_i} - 1) - s\Bigr) = \sum_{i=\mu(f)-s+1}^{\mu(f)} (2^{j_i+1} - 1) + \Bigl(2\sum_{i=1}^{\mu(f)-s} (2^{j_i} - 1) - s\Bigr). \]
This decomposition may be seen to satisfy the conditions of the second consequence above, so we find that
\[ \mu(2f) = s + \mu\Bigl(2\sum_{i=1}^{\mu(f)-s} (2^{j_i} - 1) - s\Bigr) \ge s \overset{\mathrm{def}}{=} \Bigl[\frac{\mu(f)+1}{2}\Bigr], \]
and hence $2\mu(2f) \ge \mu(f)$ as claimed. □
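Both inequalities can be spot-checked numerically. The sketch below (Python; here $\mu$ is computed by a coin-change dynamic program rather than the greedy recursion of Section 9) verifies them for small arguments.

```python
def mu(f):
    """mu(f): minimal number of summands of the form 2^i - 1, by dynamic programming."""
    coins = [2 ** i - 1 for i in range(1, f.bit_length() + 1) if 2 ** i - 1 <= f]
    best = [0] + [None] * f
    for n in range(1, f + 1):
        best[n] = 1 + min(best[n - c] for c in coins if c <= n)
    return best[f]

# Part 1: mu(h) <= mu(g + h) + g for positive g, h
assert all(mu(h) <= mu(g + h) + g for g in range(1, 25) for h in range(1, 25))

# Part 2: mu(f) <= 2 mu(2f); note mu(2f) >= 2 since 2f is even while each 2^i - 1 is odd
assert all(mu(f) <= 2 * mu(2 * f) for f in range(1, 300))
```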
In what follows, we use upper-case letters such as $E$, $F$, and $G$ to denote monomials and the corresponding lower-case letters to denote their respective degrees. If $e < (2^k - 1)\mu(f)$, we shall say that the decomposition $M = E \cdot F^{2^k}$ satisfies the $k$-criterion for being hit.
Proposition 10.3.
1. If the decomposition $(E \cdot G^{2^k}) \cdot H^{2^k}$ satisfies the $k$-criterion, then so does $E \cdot (GH)^{2^k}$.
2. If the decomposition $E \cdot (G^2)^{2^k}$ satisfies the $k$-criterion, then $E \cdot G^{2^{k+1}}$ satisfies the $(k+1)$-criterion.
Proof. These statements follow easily from their counterparts in Lemma 10.2; we prove only the first. By assumption, we have $e + 2^k g < (2^k - 1)\mu(h)$, so that $e < (2^k - 1)\mu(h) - 2^k g$. But
\[ (2^k - 1)\mu(h) - 2^k g < (2^k - 1)\bigl(\mu(h) - g\bigr) \le (2^k - 1)\mu(g + h) \]
by Part 1 of the lemma, so $e < (2^k - 1)\mu(g + h)$ as claimed. □
If the monomial $M$ is a perfect square, then we know from (24) and without recourse to Theorem 1.2 that it is hit. If $M$ is not a square, then it can be uniquely written in the form $M = \prod_{j=0}^{n} (L_j)^{2^{k_j}}$, where each $L_j$ is a nontrivial product of distinct $x_i$ and $0 = k_0 < k_1 < \dots < k_n$. For $1 \le J \le n$, define decompositions
\[ D_J = D_J(M) = \Bigl[\prod_{j < J} (L_j)^{2^{k_j}}\Bigr] \cdot \Bigl[\prod_{j \ge J} (L_j)^{2^{k_j - k_J}}\Bigr]^{2^{k_J}}. \]