The problem of solving polynomial equations in one variable, i.e., equations of the form
\[ x^n + a_1 x^{n-1} + \cdots + a_{n-1} x + a_n = 0, \tag{1} \]
goes back to ancient times. Here by “solving” I mean finding a procedure or a formula which produces a solution $x$ for a given set of coefficients $a_1, \dots, a_n$. The terms “procedure” and “formula” are ambiguous; to get a well-posed problem, we need to specify what kinds of operations we are allowed to perform to obtain $x$ from $a_1, \dots, a_n$. In the simplest setting, we are only allowed to perform the four arithmetic operations: addition, subtraction, multiplication and division. In other words, we are asking if the polynomial (1) has a root which is expressible as a rational function of $a_1, \dots, a_n$. For a general polynomial of degree $n \geqslant 2$, the answer is clearly “no”; this was already known to the ancient Greeks. The focus then shifted to the problem of “solving polynomials in radicals”, where one is allowed to use the four arithmetic operations and radicals of any degree. Here the $m$th radical (or root) of $a$ is a solution $x$ to
\[ x^m = a. \tag{2} \]
Mathematicians attempted to solve polynomial equations this way for centuries, but only succeeded for $n = 2$, $3$ and $4$. It was shown by Ruffini, Abel and Galois in the early 19th century that a general polynomial of degree $n \geqslant 5$ cannot be solved in radicals. This was a ground-breaking discovery. However, the story does not end there.
Suppose we allow one additional operation, namely solving
\[ x^5 + x = a. \tag{3} \]
That is, we start with $a_1, \dots, a_n$, and at each step, we are allowed to enlarge this collection by adding one new number, which is the sum, difference, product or quotient of two numbers in our collection, or a solution to (2) or (3) for any $a$ in our collection. In 1786, Bring showed that every polynomial equation of degree $5$ can be solved using these operations.
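Bring’s theorem reduces the general quintic to the single one-parameter function $a \mapsto x$ defined by (3). As a quick numeric illustration (my own sketch, not part of Bring’s argument), the map $x \mapsto x^5 + x$ is strictly increasing on the reals, so this function is well defined on real inputs and can be evaluated by bisection:

```python
def bring_radical(a, tol=1e-12):
    """Return the unique real t with t**5 + t == a.

    The map t -> t**5 + t is strictly increasing, so there is exactly
    one real solution, and bisection is guaranteed to find it.
    """
    lo, hi = -1.0 - abs(a), 1.0 + abs(a)  # bracket: f(lo) < a < f(hi)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mid**5 + mid < a:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

t = bring_radical(2.0)
print(t**5 + t)  # close to 2.0
```

This one-variable function, together with arithmetic and radicals, suffices to solve every quintic.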
Note that the coefficients of (2) and (3) only depend on one parameter $a$. Thus roots of these equations can be thought of as “algebraic functions” of one variable. By contrast, the coefficients of the general polynomial equation (1) depend on $n$ independent parameters $a_1, \dots, a_n$. With this in mind, we define the resolvent degree $\mathrm{rd}(P)$ of a polynomial $P$ in (1) as the smallest positive integer $d$ such that every root of $P$ can be obtained from $a_1, \dots, a_n$ in a finite number of steps, assuming that at each step we are allowed to perform the four arithmetic operations and evaluate algebraic functions of $d$ variables. Let us denote the largest possible value of $\mathrm{rd}(P)$ by $\mathrm{rd}(n)$, as $P$ ranges over all polynomials of degree $n$. The algebraic form of Hilbert’s 13th problem asks for the value of $\mathrm{rd}(n)$.
The actual wording of the 13th problem is a little different: Hilbert asked for the minimal $d$ one needs to solve every polynomial equation of degree $n$, assuming that at each step one is allowed to perform the four arithmetic operations and apply any continuous (rather than algebraic) function in $d$ variables. Let us denote the maximal possible resolvent degree in this setting by $\mathrm{rd}_c(n)$, where “c” stands for “continuous”. Specifically, Hilbert asked whether or not $\mathrm{rd}_c(7) = 3$. In this form, Hilbert’s 13th problem was solved by Kolmogorov and Arnold in 1957. They showed that, contrary to Hilbert’s expectation, $\mathrm{rd}_c(n) = 1$ for every $n$. In other words, continuous functions in one variable are enough to solve any polynomial equation of any degree. Moreover, any continuous function in $n$ variables can be expressed as a composition of functions of one variable and addition.
In spite of this achievement, Wikipedia lists the 13th problem as “unresolved”. While this designation is subjective, it reflects the view of many mathematicians that Hilbert’s true intention was to ask about $\mathrm{rd}(n)$, not $\mathrm{rd}_c(n)$. They point to the body of work on $\mathrm{rd}(n)$ going back centuries before Hilbert and to Hilbert’s own 20th century writings, where only $\mathrm{rd}(n)$ was considered. Arnold himself was a strong proponent of this point of view [13, pp. 45–46].
Progress on the algebraic form of Hilbert’s 13th problem has been slow. From what I said above, $\mathrm{rd}(n) = 1$ when $n \leqslant 5$; this was known before Hilbert and even before Galois. The value of $\mathrm{rd}(n)$ remains open for every $n \geqslant 6$, and the possibility that $\mathrm{rd}(n) = 1$ for every $n$ has not been ruled out. The best known upper bounds on $\mathrm{rd}(n)$ are of the form $\mathrm{rd}(n) \leqslant n - \alpha(n)$, where $\alpha(n)$ is an unbounded but very slowly growing function of $n$. The list of people who have proved inequalities of this form includes some of the leading mathematicians of the past two centuries: Hamilton, Sylvester, Klein, Hilbert, Chebotarev, Segre, Brauer. Recently, their methods have been refined and their bounds sharpened by Wolfson, Sutherland and Heberle–Sutherland.
There is another reading of the 13th problem, to the effect that Hilbert meant to allow global multi-valued continuous functions; see [2, p. 613]. These behave in many ways like algebraic functions. If we denote the resolvent degree in this sense by $\mathrm{rd}_C(n)$, where “C” stands for “global continuous”, then
\[ 1 = \mathrm{rd}_c(n) \leqslant \mathrm{rd}_C(n) \leqslant \mathrm{rd}(n). \]
As far as I am aware, nothing else is known about $\mathrm{rd}(n)$ or $\mathrm{rd}_C(n)$ for any $n \geqslant 6$.
On the other hand, in recent decades, considerable progress has been made in studying a related but different invariant, the essential dimension. Joe Buhler and I introduced this notion in the late 1990s. In special instances, it came up earlier, e.g., in the work of Kronecker, Klein, Chebotarev, Procesi and Kawamata. Our focus at the time was on polynomials and field extensions. It later became clear that the notion of essential dimension is of interest in other contexts: quadratic forms, central simple algebras, torsors, moduli stacks, representations of groups and algebras, etc. In each case, it poses new questions about the underlying objects and occasionally leads to solutions of pre-existing open problems.
This paper has two goals. The first is to survey some of the research on essential dimension in Sections 2–7. This survey is not comprehensive; it is only intended to convey the flavor of the subject and sample some of its highlights. My second goal is to define the notion of resolvent degree of an algebraic group in Section 8, building on the work of Farb and Wolfson but focusing on connected, rather than finite, groups. The quantity $\mathrm{rd}(n)$ defined above is recovered in this setting as $\mathrm{rd}(\mathrm{S}_n)$, where $\mathrm{S}_n$ is the symmetric group. For more comprehensive surveys of essential dimension and resolvent degree, see [41, 51].
2 Essential dimension of a polynomial
Let $k$ be a base field, $K$ be a field containing $k$ and $A$ be a finite-dimensional $K$-algebra (not necessarily commutative, associative or unital). We say that $A$ descends to an intermediate field $k \subset K_0 \subset K$ if there exists a finite-dimensional $K_0$-algebra $A_0$ such that $A \cong A_0 \otimes_{K_0} K$. Equivalently, recall that, for any choice of a $K$-vector space basis $e_1, \dots, e_n$ of $A$, one can encode multiplication in $A$ into the structure constants $c_{ij}^{h} \in K$ given by $e_i e_j = \sum_{h} c_{ij}^{h} e_h$. Then $A$ descends to $K_0$ if and only if there exists a basis such that all of the structure constants with respect to this basis lie in $K_0$. The essential dimension $\mathrm{ed}_k(A)$ is defined as the minimal value of the transcendence degree $\mathrm{trdeg}_k(K_0)$, where $A$ descends to $K_0$. If the reference to the base field $k$ is clear from the context, we will write $\mathrm{ed}(A)$ in place of $\mathrm{ed}_k(A)$.
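For a toy illustration of structure constants (my own example, not from the text), view $\mathbb{C}$ as a $2$-dimensional $\mathbb{R}$-algebra with basis $e_1 = 1$, $e_2 = i$. All structure constants in this basis are rational, so the algebra descends to $\mathbb{Q}$:

```python
from fractions import Fraction

def mult(u, v):
    """Multiply u = (a, b) and v = (c, d), representing a + bi and c + di."""
    a, b = u
    c, d = v
    return (a * c - b * d, a * d + b * c)

# basis e1 = 1 = (1, 0), e2 = i = (0, 1)
basis = [(Fraction(1), Fraction(0)), (Fraction(0), Fraction(1))]

# structure constants: coordinates of e_i * e_j in the basis (here just the
# coordinate vectors themselves, since the basis is the standard one)
constants = {(i, j): mult(basis[i], basis[j]) for i in range(2) for j in range(2)}
print(constants[(1, 1)])  # e2 * e2 = -e1, i.e. coordinates (-1, 0)
```

Every constant is rational, which exhibits the descent of this algebra to the prime field.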
If $P(x) = x^n + a_1 x^{n-1} + \cdots + a_n$ is a polynomial over $K$, for some $a_1, \dots, a_n \in K$, as in (1), we define $\mathrm{ed}(P)$ as $\mathrm{ed}(A)$, where $A = K[x]/(P(x))$. Note that if $A$ (or equivalently, $P$) is separable over $K$, then $A$ descends to $K_0$ if and only if there exists an element $y \in A$ which generates $A$ as a $K$-algebra and such that the minimal polynomial of $y$ lies in $K_0[x]$.
In classical language, the passage from $x$ to $y$ is called a Tschirnhaus transformation. Note that
\[ y = b_0 + b_1 x + \cdots + b_{n-1} x^{n-1} \tag{4} \]
for some $b_0, \dots, b_{n-1} \in K$. Here the right-hand side is taken modulo $P(x)$. Tschirnhaus’ strategy for solving polynomial equations in radicals by induction on degree was to transform $P$ to a simpler polynomial $Q$, find a root of $Q$ and then recover a root of $P$ from (4) by solving a polynomial equation of degree $n - 1$. In his 1683 paper, Tschirnhaus successfully implemented this strategy for $n = 3$ but made a mistake in implementing it for higher $n$. Tschirnhaus did not know that a general polynomial of degree $\geqslant 5$ cannot be solved in radicals or that his method for solving cubic polynomials had been discovered by Cardano a century earlier.
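The simplest Tschirnhaus transformation is the shift $y = x + a/3$, which eliminates the quadratic term of a cubic $x^3 + a x^2 + b x + c$; this is the first step of Cardano’s method. A sketch in exact arithmetic (standard formulas; the helper name is mine):

```python
from fractions import Fraction

def depress_cubic(a, b, c):
    """Coefficients (p, q) with x^3 + a x^2 + b x + c = y^3 + p y + q
    under the substitution x = y - a/3 (the y^2 term cancels)."""
    a, b, c = Fraction(a), Fraction(b), Fraction(c)
    p = b - a**2 / 3
    q = 2 * a**3 / 27 - a * b / 3 + c
    return p, q

# x^3 + 6x^2 + 11x + 6 = (x + 1)(x + 2)(x + 3); the root x = -2 maps to y = 0
p, q = depress_cubic(6, 11, 6)
print(p, q)  # -1 0
```

Recovering $x$ from a root $y$ of the depressed cubic means inverting the (here linear) transformation, exactly as in the inductive strategy described above.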
This classical result is strengthened as follows.

Theorem 1. Assume $\mathrm{char}(k) = 0$. Then
\[ \mathrm{ed}(5) = 2, \tag{5} \]
and $\mathrm{ed}(n) \leqslant n - 3$, $\mathrm{ed}(n) \geqslant \lfloor n/2 \rfloor$ for every $n \geqslant 5$. In particular,
\[ \lfloor n/2 \rfloor \leqslant \mathrm{ed}(n) \leqslant n - 3 \tag{6} \]
for every $n \geqslant 5$.
I recently learned that a variant of the inequality $\mathrm{ed}(n) \geqslant \lfloor n/2 \rfloor$ was known to Chebotarev as far back as 1943.
The problem of finding the exact value of $\mathrm{ed}(n)$ may be viewed as an analogue of Hilbert’s 13th problem with $\mathrm{rd}(n)$, $\mathrm{rd}_c(n)$ or $\mathrm{rd}_C(n)$ replaced by $\mathrm{ed}(n)$. Since Hilbert specifically asked about degree $7$, the case where $n = 7$ is of particular interest.
Theorem 2. If $\mathrm{char}(k) = 0$, then $\mathrm{ed}(7) = 4$.
The proof of Theorem 2 relies on the same general strategy as Klein’s proof of (5); I will discuss it further in Section 6. Combining Theorem 2 with the inequality from Theorem 1, we can slightly strengthen (6) in characteristic $0$.
Analogous questions can be asked about polynomials that are not separable, assuming $\mathrm{char}(K) = p > 0$. In this setting, the role of the degree is played by the “generalized degree” $(n_0; e_1, \dots, e_r)$. Here $n_0 = \dim_K(A_{\mathrm{sep}})$, where $A_{\mathrm{sep}}$ is the separable closure of $K$ in $A = K[x]/(P(x))$, and $(e_1, \dots, e_r)$ is the so-called type of the purely inseparable algebra $A$ over $A_{\mathrm{sep}}$, defined as follows. Given $a \in A$, let us define the exponent $e(a)$ to be the smallest integer $e$ such that $a^{p^e} \in A_{\mathrm{sep}}$. Then $e_1$ is the largest value of $e(a)$ as $a$ ranges over $A$. Choose an $a_1 \in A$ of exponent $e_1$, and define $e_2$ as the largest value of $e(a)$ as $a$ ranges over elements not lying in $A_{\mathrm{sep}}[a_1]$. Now choose $a_2 \notin A_{\mathrm{sep}}[a_1]$ of exponent $e_2$, and define $e_3$ as the largest value of $e(a)$ over elements not lying in $A_{\mathrm{sep}}[a_1, a_2]$, etc. We stop when $A_{\mathrm{sep}}[a_1, \dots, a_r] = A$. By a theorem of Pickert, the resulting integer sequence satisfies $e_1 \geqslant e_2 \geqslant \cdots \geqslant e_r$ and does not depend on the choice of the elements $a_1, \dots, a_r$. One can now define $\mathrm{ed}(n_0; e_1, \dots, e_r)$ by analogy with $\mathrm{ed}(n)$: it is the maximal value of $\mathrm{ed}(P)$, as $K$ ranges over all field extensions of $k$ and $P$ ranges over all polynomials of generalized degree $(n_0; e_1, \dots, e_r)$. Surprisingly, the case where $r \geqslant 1$ (i.e., the polynomials in question are not separable) turns out to be easier. We refer the reader to the relevant paper, where an exact formula for $\mathrm{ed}(n_0; e_1, \dots, e_r)$ is obtained.
3 Essential dimension of a functor
Following Merkurjev, we will now define essential dimension for a broader class of objects, beyond polynomials or finite-dimensional algebras. Let $k$ be a base field, which we assume to be fixed throughout, and $F$ be a covariant functor from the category of field extensions $K/k$ to the category of sets. Any object $a \in F(K)$ in the image of the natural (“base change”) map $F(K_0) \to F(K)$ is said to descend to $K_0$. The essential dimension $\mathrm{ed}(a)$ is defined as the minimal value of $\mathrm{trdeg}_k(K_0)$, where the minimum is taken over all intermediate fields $k \subset K_0 \subset K$ such that $a$ descends to $K_0$.
For example, consider the functor $\mathrm{Alg}_n$ of $n$-dimensional associative algebras given by
\[ \mathrm{Alg}_n(K) = \{ n\text{-dimensional associative } K\text{-algebras, up to } K\text{-isomorphism} \}. \]
For $A \in \mathrm{Alg}_n(K)$, the new definition of $\mathrm{ed}(A)$ is the same as the definition in the previous section. Recall that, after choosing a $K$-basis for $A$, we can describe $A$ completely in terms of the $n^3$ structure constants $c_{ij}^{h}$. In particular, $A$ descends to the subfield $K_0 = k(c_{ij}^{h})$ of $K$, and consequently, $\mathrm{ed}(A) \leqslant n^3$.
Another interesting example is the functor $\mathrm{Quad}_n$ of non-degenerate $n$-dimensional quadratic forms,
\[ \mathrm{Quad}_n(K) = \{ \text{non-degenerate quadratic forms } q \colon K^n \to K, \text{ up to } K\text{-isomorphism} \}. \]
For simplicity, let us assume that the base field $k$ is of characteristic different from $2$. Under this assumption, a quadratic form $q$ on $V = K^n$ is the same thing as a symmetric bilinear form $b \colon V \times V \to K$. One passes back and forth between $q$ and $b$ using the formulas
\[ q(v) = b(v, v) \quad \text{and} \quad b(v, w) = \tfrac{1}{2}\bigl(q(v + w) - q(v) - q(w)\bigr) \]
for any $v, w \in V$. The form $b$ (or equivalently, $q$) is called degenerate if the linear form $b(v, -) \colon V \to K$ is identically zero for some $0 \neq v \in V$. A variant of the Gram–Schmidt process shows that there exists an orthogonal basis of $V$ with respect to $b$. In other words, in some basis of $V$, $q$ can be written as
\[ q(x_1, \dots, x_n) = a_1 x_1^2 + \cdots + a_n x_n^2 \]
for some $a_1, \dots, a_n$ in $K$. In particular, we have that $q$ descends to $K_0 = k(a_1, \dots, a_n)$, and thus $\mathrm{ed}(q) \leqslant n$. Note that $q$ is non-degenerate if and only if $a_1 \cdots a_n \neq 0$.
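The Gram–Schmidt-style diagonalization is entirely mechanical. Here is a sketch over $\mathbb{Q}$ (my own illustration of the standard symmetric-elimination argument, in exact arithmetic):

```python
from fractions import Fraction

def diagonalize(B):
    """Diagonal entries of a form congruent to the symmetric matrix B,
    obtained by simultaneous row and column operations (char 0 stand-in
    for the characteristic != 2 argument in the text)."""
    n = len(B)
    B = [[Fraction(x) for x in row] for row in B]

    def add_vec(i, j):   # basis change e_i <- e_i + e_j
        for t in range(n):
            B[i][t] += B[j][t]
        for t in range(n):
            B[t][i] += B[t][j]

    def swap_vec(i, j):  # basis change e_i <-> e_j
        B[i], B[j] = B[j], B[i]
        for row in B:
            row[i], row[j] = row[j], row[i]

    for i in range(n):
        if B[i][i] == 0:                       # try to create a nonzero pivot
            for j in range(i + 1, n):
                if B[j][j] != 0:
                    swap_vec(i, j)
                    break
            else:
                for j in range(i + 1, n):
                    if B[i][j] != 0:
                        add_vec(i, j)           # pivot becomes 2*B[i][j] != 0
                        break
        if B[i][i] == 0:
            continue                            # no pivot available in this block
        for j in range(i + 1, n):               # clear row/column i past the pivot
            lam = B[i][j] / B[i][i]
            if lam:                             # basis change e_j <- e_j - lam*e_i
                for t in range(n):
                    B[j][t] -= lam * B[i][t]
                for t in range(n):
                    B[t][j] -= lam * B[t][i]
    return [B[i][i] for i in range(n)]

print(diagonalize([[0, 1], [1, 0]]))  # hyperbolic form xy: <2, -1/2>, i.e. <1, -1> up to squares
```

The returned diagonal entries are the coefficients $a_1, \dots, a_n$ above, exhibiting the descent of $q$ to $k(a_1, \dots, a_n)$.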
Yet another interesting example is provided by the functor $\mathrm{Ell}$ of elliptic curves,
\[ \mathrm{Ell}(K) = \{ \text{elliptic curves over } K, \text{ up to } K\text{-isomorphism} \}. \]
For simplicity, assume that $\mathrm{char}(k) \neq 2$ or $3$. Then every elliptic curve $X$ over $K$ is isomorphic to the plane curve cut out by a Weierstrass equation $y^2 = x^3 + ax + b$ for some $a, b \in K$. Hence, $X$ descends to $K_0 = k(a, b)$ and $\mathrm{ed}(X) \leqslant 2$.
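Concretely (standard formulas, my own code), the curve $y^2 = x^3 + ax + b$ is smooth exactly when $4a^3 + 27b^2 \neq 0$, and the two coefficients $(a, b)$ realize the two parameters in the bound $\mathrm{ed}(X) \leqslant 2$; the $j$-invariant built from them classifies the curve up to twists:

```python
from fractions import Fraction

def j_invariant(a, b):
    """j-invariant of the elliptic curve y^2 = x^3 + a*x + b (characteristic 0)."""
    a, b = Fraction(a), Fraction(b)
    disc = 4 * a**3 + 27 * b**2
    if disc == 0:
        raise ValueError("singular curve: 4a^3 + 27b^2 = 0")
    return 1728 * 4 * a**3 / disc

print(j_invariant(1, 0), j_invariant(0, 1))  # 1728 0
```

The two sample curves are the ones with extra automorphisms, with $j = 1728$ and $j = 0$ respectively.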
Informally, we think of $F$ as specifying the type of algebraic object under consideration (e.g., algebras or quadratic forms or elliptic curves), $F(K)$ as the set of objects of this type defined over $K$, and $\mathrm{ed}(a)$ as the minimal number of parameters required to define $a$. In most cases, essential dimension varies from object to object, and it is natural to consider what happens under a “worst case scenario”, i.e., how many parameters are needed to define the most general object of a given type. This number is called the essential dimension of the functor $F$. That is,
\[ \mathrm{ed}(F) = \max \, \mathrm{ed}(a), \]
as $K$ varies over all fields containing $k$ and $a$ varies over $F(K)$. Note that $\mathrm{ed}(F)$ can be either a non-negative integer or $\infty$. In particular, the arguments above yield
\[ \mathrm{ed}(\mathrm{Alg}_n) \leqslant n^3, \quad \mathrm{ed}(\mathrm{Quad}_n) \leqslant n \quad \text{and} \quad \mathrm{ed}(\mathrm{Ell}) \leqslant 2. \]
One can show that the last two of these inequalities are, in fact, sharp. The exact value of $\mathrm{ed}(\mathrm{Alg}_n)$ is unknown for most $n$; however, its asymptotic behavior for large $n$ is known, and the same is true of $\mathrm{ed}(\mathrm{Lie}_n)$ and $\mathrm{ed}(\mathrm{Comm}_n)$, where $\mathrm{Lie}_n$ and $\mathrm{Comm}_n$ are the functors of $n$-dimensional Lie algebras and commutative algebras, respectively. These asymptotic formulas are deduced from the formulas for the dimensions of the varieties of structure constants for $n$-dimensional associative, Lie and commutative algebras due to Neretin.
This brings us to the functor $F = H^1(-, G)$, where $G$ is an algebraic group defined over $k$. The essential dimension of this functor is a numerical invariant of $G$. This invariant has been extensively studied; it will be our main focus in the next section. The functor $H^1(-, G)$ associates to a field $K$ the set of isomorphism classes of $G$-torsors over $\mathrm{Spec}(K)$. Recall that a $G$-torsor over $K$ is an algebraic variety $X$ with a $G$-action defined over $K$ such that, over the algebraic closure $\overline{K}$, $X$ becomes equivariantly isomorphic to $G$ acting on itself by left translations. If $X$ has a $K$-point $x$, then the map taking $g$ to $g \cdot x$ is, in fact, an isomorphism $G \to X$ over $K$. In this case, the torsor $X$ is called “trivial” or “split”. The interesting (non-trivial) torsors over $K$ have no $K$-points. For example, if $G = \mathbb{Z}/2\mathbb{Z}$ is a cyclic group of order $2$ and $\mathrm{char}(K) \neq 2$, then every $G$-torsor is of the form $X_a$, where $X_a$ is the subvariety of $\mathbb{A}^1$ cut out by the quadratic equation $x^2 = a$ for some $a \in K^*$. Informally, $X_a$ is a pair of points (roots of this equation) permuted by $G$; it is split if and only if these points are defined over $K$ (i.e., $a$ is a complete square in $K$). In fact, $H^1(K, G)$ is in bijective correspondence with $K^*/(K^*)^2$ given by $X_a \mapsto a \, (K^*)^2$, where $K^*$ is the multiplicative group of $K$. Note that, in this example, $H^1(K, G)$ is, in fact, a group. This is the case whenever $G$ is abelian. For a non-abelian algebraic group $G$, $H^1(K, G)$ carries no natural group structure; it is only a set with a marked element (the trivial torsor).
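For a concrete instance of the correspondence $H^1(K, \mathbb{Z}/2\mathbb{Z}) \leftrightarrow K^*/(K^*)^2$ (my own illustration), take $K = \mathbb{F}_7$: the squares form one class and the non-squares the other, so up to isomorphism there is exactly one non-split torsor $x^2 = a$:

```python
def square_classes(p):
    """Partition F_p^* (p an odd prime) into squares and non-squares.
    The torsor x^2 = a splits over F_p exactly when a lies in the first set."""
    squares = {pow(x, 2, p) for x in range(1, p)}
    return squares, set(range(1, p)) - squares

sq, nsq = square_classes(7)
print(sorted(sq), sorted(nsq))  # [1, 2, 4] [3, 5, 6]
```

Each of the two sets is a single coset of the squares, matching the two-element group $\mathbb{F}_7^*/(\mathbb{F}_7^*)^2$.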
For many linear algebraic groups $G$, the functor $H^1(-, G)$ parametrizes interesting algebraic objects. For example, when $G$ is the orthogonal group $\mathrm{O}_n$, $H^1(-, G)$ is the functor $\mathrm{Quad}_n$ we considered above. When $G$ is the projective linear group $\mathrm{PGL}_n$, $H^1(K, G)$ is the set of isomorphism classes of central simple algebras of degree $n$ over $K$. When $G$ is the exceptional group of type $\mathrm{G}_2$, $H^1(K, G)$ is the set of isomorphism classes of octonion algebras over $K$.
4 Essential dimension of an algebraic group
The essential dimension of the functor $H^1(-, G)$ is abbreviated as $\mathrm{ed}(G)$. Here $G$ is an algebraic group defined over $k$. This number is always finite if $G$ is linear but may be infinite if $G$ is an abelian variety. If $G$ is the symmetric group $\mathrm{S}_n$, then
\[ \mathrm{ed}(\mathrm{S}_n) = \mathrm{ed}(n), \tag{8} \]
where $\mathrm{ed}(n)$ is the quantity we defined and studied in Section 2. Indeed, $H^1(K, \mathrm{S}_n)$ is the set of étale algebras of degree $n$ over $K$. Étale algebras of degree $n$ are precisely the algebras of the form $K[x]/(P(x))$, where $P$ is a separable (but not necessarily irreducible) polynomial of degree $n$ over $K$. Thus (8) is just a restatement of the definition of $\mathrm{ed}(n)$.
Another interesting example is the general linear group $\mathrm{GL}_n$. Elements of $H^1(K, \mathrm{GL}_n)$ are the $n$-dimensional vector spaces over $K$. Since there is only one $n$-dimensional $K$-vector space up to $K$-isomorphism, we see that $H^1(K, \mathrm{GL}_n)$ is a single point. In particular, every object of $H^1(-, \mathrm{GL}_n)$ descends to $k$, and we conclude that $\mathrm{ed}(\mathrm{GL}_n) = 0$. I will now give a brief summary of three methods for proving lower bounds on $\mathrm{ed}(G)$ for various linear algebraic groups $G$.
4.1 Cohomological invariants
Let $F$ be a covariant functor from the category of field extensions $K/k$ to the category of sets, as in the previous section. A cohomological invariant of degree $d$ for $F$ is a morphism of functors
\[ F \to H^d(-, M) \]
for some discrete Galois module $M$. In many interesting examples, $M = \mu_n$ is the module of $n$th roots of unity with a natural Galois action (trivial if $k$ contains a primitive $n$th root of unity). The following observation is due to J.-P. Serre.
Theorem 3. Assume that the base field $k$ is algebraically closed. If $a \in F(K)$ has a non-trivial cohomological invariant of degree $d$, then $\mathrm{ed}(a) \geqslant d$.
The proof is an immediate consequence of the Serre vanishing theorem. Cohomological invariants of an algebraic group $G$ (or, equivalently, of the functor $H^1(-, G)$) were introduced by Serre and Rost in the early 1990s, and have been extensively studied since then. These invariants give rise to a number of interesting lower bounds on $\mathrm{ed}(G)$ for various groups $G$; in particular, they yield $\mathrm{ed}(\mathrm{O}_n) \geqslant n$ and $\mathrm{ed}(\mathrm{SO}_n) \geqslant n - 1$ for every $n \geqslant 3$, among several similar inequalities. The first two of these bounds turn out to be exact; the others are the best lower bounds currently known in many cases.
4.2 Finite abelian subgroups
Let $G$ be a reductive group over $k$ and $A$ be a finite abelian subgroup of $G$ of rank $r$.
Note that both parts are vacuous if $A$ lies in a maximal torus of $G$. Indeed, in this case, the centralizer $C_G(A)$ contains a maximal torus of $G$, so the resulting lower bounds on $\mathrm{ed}(G)$ are trivial. In other words, only non-toral finite abelian subgroups of linear algebraic groups are of interest here. These have been much studied and catalogued, starting with the work of Borel in the 1950s. Theorem 4 yields the best known lower bound on $\mathrm{ed}(G)$ in many cases, such as the split simply connected exceptional groups of type $\mathrm{E}_7$ and $\mathrm{E}_8$.
4.3 The Brauer class bound
Consider a linear algebraic group $G$ defined over our base field $k$. Suppose $G$ fits into a central exact sequence of algebraic groups (again, defined over $k$)
\[ 1 \to C \to G \to Q \to 1, \]
where $C$ is diagonalizable over $k$. For every field extension $K/k$, this sequence gives rise to the exact sequence of pointed sets
\[ H^1(K, G) \to H^1(K, Q) \xrightarrow{\ \partial_K\ } H^2(K, C). \]
Every element $t \in H^2(K, C)$ has an index, $\mathrm{ind}(t)$, defined as follows. If $C = \mathbb{G}_m$, then $t$ is a Brauer class over $K$, and $\mathrm{ind}(t)$ denotes the Schur index of $t$, as usual. In general, we consider the character group $X(C)$ whose elements are homomorphisms $C \to \mathbb{G}_m$. Note that $X(C)$ is a finitely generated abelian group and each character $\chi \in X(C)$ induces a homomorphism
\[ \chi_* \colon H^2(K, C) \to H^2(K, \mathbb{G}_m). \]
The index of $t$ is defined as the minimal value of
\[ \sum_{i=1}^{m} \mathrm{ind}\bigl((\chi_i)_*(t)\bigr), \]
as $(\chi_1, \dots, \chi_m)$ ranges over generating sets of $X(C)$. Here each $(\chi_i)_*(t)$ lies in $H^2(K, \mathbb{G}_m) = \mathrm{Br}(K)$, and $\mathrm{ind}$ denotes its Schur index, as above. We now define $\mathrm{ind}(G, C)$ as the maximal index of $t$, where the maximum is taken over all field extensions $K/k$, as $t$ ranges over the image of $\partial_K$ in $H^2(K, C)$.
Theorem 5. (1) $\mathrm{ind}(G, C)$ is the greatest common divisor of $\dim(\rho)$, where $\rho$ ranges over the linear representations of $G$ over $k$ such that the restriction $\rho_{|C}$ is faithful.
(2) Let $p$ be a prime different from $\mathrm{char}(k)$. Assume that the exponent of every element in the image of
\[ \partial_K \colon H^1(K, Q) \to H^2(K, C) \]
is a power of $p$ for every field extension $K/k$. (This is automatic if $C$ is a $p$-group.) Then $\mathrm{ed}(G) \geqslant \mathrm{ind}(G, C) - \dim(G)$.
Part (1) is known as Merkurjev’s index formula. The inequality of part (2) is based on Karpenko’s incompressibility theorem. Part (2) first appeared in special cases and was subsequently proved in full generality.
Theorem 5 is responsible for some of the strongest results in this theory, including the exact formulas for the essential dimension of a finite $p$-group (Theorem 6 below), the essential $p$-dimension of an algebraic torus, and the essential dimension of the spinor groups $\mathrm{Spin}_n$. The latter turned out to increase exponentially in $n$:
\[ \mathrm{ed}(\mathrm{Spin}_n) \geqslant 2^{\lfloor (n-1)/2 \rfloor} - \frac{n(n-1)}{2}. \tag{9} \]
This inequality was first proved in characteristic $0$. The exact value of $\mathrm{ed}(\mathrm{Spin}_n)$ subsequently got pinned down in [10, 18] in characteristic $0$, and later in characteristic $\neq 2$ and in characteristic $2$. For large $n$, inequality (9) is sharp for $n \not\equiv 0$ modulo $4$, and is off by $2^{v_2(n)}$ otherwise. Here $2^{v_2(n)}$ is the largest power of $2$ dividing $n$.
The exponential growth of $\mathrm{ed}(\mathrm{Spin}_n)$ came as a surprise. Previously, the best known lower bounds on $\mathrm{ed}(\mathrm{Spin}_n)$ were linear in $n$ (see [19, Section 7]). Moreover, the exact values of $\mathrm{ed}(\mathrm{Spin}_n)$ for $n \leqslant 14$ obtained by Rost and Garibaldi appeared to suggest that these linear bounds should be sharp. The fact that $\mathrm{ed}(\mathrm{Spin}_n)$ increases exponentially in $n$ has found interesting applications in the theory of quadratic forms. For details, see [10, 18].
5 Essential dimension at a prime p
Once again, fix a base field $k$, and let $F$ be a covariant functor from the category of field extensions $K/k$ to the category of sets. The essential dimension of an object $a \in F(K)$ at a prime $p$, denoted $\mathrm{ed}(a; p)$, is defined as the minimal value of $\mathrm{ed}(a_L)$, where the minimum ranges over all finite field extensions $L/K$ of degree prime to $p$ and $a_L$ denotes the image of $a$ under the natural map $F(K) \to F(L)$. Finally, the essential dimension of $F$ at $p$ is the maximal value of $\mathrm{ed}(a; p)$, as $K$ ranges over all fields containing $k$ and $a$ ranges over $F(K)$. When $F = H^1(-, G)$ for an algebraic group $G$, we write $\mathrm{ed}(G; p)$ in place of $\mathrm{ed}(F; p)$. Once again, if the reference to the base field $k$ is clear from the context, we will abbreviate $\mathrm{ed}_k(a; p)$ as $\mathrm{ed}(a; p)$. By definition, $\mathrm{ed}(a; p) \leqslant \mathrm{ed}(a)$ and $\mathrm{ed}(F; p) \leqslant \mathrm{ed}(F)$.
The reason to consider $\mathrm{ed}(a; p)$ in place of $\mathrm{ed}(a)$ is that the former is often more accessible. In fact, most of the methods we have for proving a lower bound on $\mathrm{ed}(a)$ (respectively, $\mathrm{ed}(F)$) turn out to produce a lower bound on $\mathrm{ed}(a; p)$ (respectively, $\mathrm{ed}(F; p)$) for some prime $p$. For example, the lower bound in Theorem 5(2) is really a lower bound on $\mathrm{ed}(G; p)$. In Theorem 4, one can usually choose $A$ to be a $p$-group, in which case the conclusions of both parts can be strengthened to lower bounds on $\mathrm{ed}(G; p)$. In Theorem 3, if the cohomological invariant is $p$-torsion (which can often be arranged), then $\mathrm{ed}(a; p) \geqslant d$.
This is a special case of a general meta-mathematical phenomenon: many problems concerning algebraic objects (such as finite-dimensional algebras or polynomials or algebraic varieties) over fields can be subdivided into two types. In type 1 problems, we are allowed to pass from the base field $K$ to a finite extension $L/K$ of degree prime to $p$, for some prime $p$, whereas in type 2 problems this is not allowed. For example, given an algebraic variety $X$ defined over $K$, deciding whether or not $X$ has a $0$-cycle of degree $1$ is a type 1 problem (it is equivalent to showing that there is a $0$-cycle of degree prime to $p$, for every prime $p$), whereas deciding whether or not $X$ has a $K$-point is a type 2 problem. As I observed in [51, Section 5], most of the technical tools we have are tailor-made for type 1 problems, whereas many long-standing open questions across several areas of algebra and algebraic geometry are of type 2.
In the context of essential dimension, the problem of computing $\mathrm{ed}(G; p)$ for a given algebraic group $G$ and a given prime $p$ is of type 1, whereas the problem of computing $\mathrm{ed}(G)$ is of type 2. For simplicity, let us assume that $G$ is a finite group. In this case, $\mathrm{ed}(G; p) = \mathrm{ed}(G_p; p)$, where $G_p$ is a Sylow $p$-subgroup of $G$. In other words, the problem of computing $\mathrm{ed}(G; p)$ reduces to the case where $G$ is a $p$-group. In this case, we have the following remarkable theorem of Karpenko and Merkurjev.
Theorem 6. Let $p$ be a prime and $k$ be a field containing a primitive $p$th root of unity. Then, for any finite $p$-group $G$,
\[ \mathrm{ed}(G; p) = \mathrm{ed}(G) = \mathrm{rdim}(G), \]
where $\mathrm{rdim}(G)$ denotes the minimal dimension of a faithful representation of $G$ defined over $k$.
Theorem 6 reduces the computation of $\mathrm{ed}(G)$ to the computation of $\mathrm{rdim}(G)$. For a given finite $p$-group $G$, one can often (though not always) compute $\mathrm{rdim}(G)$ in closed form using the machinery of character theory; see, e.g., [3, 36, 42, 43].
The situation is quite different when computing $\mathrm{ed}(G)$ for an arbitrary finite group $G$. Clearly,