
Worksheet Problem Set M4

Instructions: You may type up or handwrite your work, but it must be neat, professional, and organized, and it must be saved as a PDF file and uploaded to the appropriate Gradescope assignment. Use a scanner or scanning app to convert handwritten work on paper to PDF. I encourage you to type your work using the provided template.
To receive engagement credit, each task below must have a complete solution that represents a good-faith attempt at being correct. If your submission is complete and turned in on time, you will receive full engagement credit for the assignment; all other submissions will receive zero engagement credit. Read the guidelines at Grading Specifications carefully.
To abide by the class's academic honesty policy, your work should represent your own understanding in your own words. If you work with other students, you must clearly indicate who you worked with in your submission. The same is true for tools like generative AI, although I strongly discourage you from using such tools, since you need to build your own understanding here to do well on exams.

1. Computations in GF(4).

Let \(F=\GF(2)\) and let the field \(E=\GF(4)\) be represented as \(F[\alpha]/(\alpha^2+\alpha+1)\text{.}\)

(a)

Let \(u_0,u_1,v_0,v_1 \in F\text{.}\) Show that the product of the elements \(u_0+u_1\alpha\) and \(v_0+v_1\alpha\) in \(E\) is given by
\begin{equation*} (u_0v_0+u_1v_1)+(u_0v_1+u_1v_0+u_1v_1)\alpha \end{equation*}
and so, it can be computed using three additions and four multiplications in \(F\text{.}\)
Solution.
We have
\begin{align*} (u_0+u_1\alpha)(v_0+v_1\alpha) \amp =u_0v_0 + (u_0v_1+u_1v_0)\alpha + (u_1v_1)\alpha^2 \\ \amp= u_0v_0 + (u_0v_1+u_1v_0)\alpha + (u_1v_1)(1+\alpha) \\ \amp= (u_0v_0+u_1v_1)+(u_0v_1+u_1v_0+u_1v_1)\alpha \text{.} \end{align*}
This requires four multiplications to compute the individual terms \(u_0v_0,u_1v_1,u_0v_1\text{,}\) and \(u_1v_0\text{.}\) Then it requires three additions to combine these to find the two coefficients in the product.

(b)

Show that two elements in \(E\) can be multiplied using four additions and three multiplications in \(F\text{.}\)
Hint.
Consider the terms \(u_0v_0, u_1v_1\text{,}\) and \((u_0+u_1)(v_0+v_1)\text{.}\)
Solution.
Notice that
\begin{equation*} (u_0+u_1)(v_0+v_1) = u_0v_0+u_0v_1+u_1v_0+u_1v_1 \end{equation*}
so the coefficient of \(\alpha\) in the product is \((u_0+u_1)(v_0+v_1)+u_0v_0 \text{.}\) Hence we can compute the product using the three multiplications \(u_0v_0, u_1v_1\text{,}\) and \((u_0+u_1)(v_0+v_1)\text{.}\) This then needs four additions: adding \(u_0v_0\) and \(u_1v_1\) to get the constant coefficient, the two sums \(u_0+u_1 \) and \(v_0+v_1\text{,}\) and then the final sum to get the coefficient of \(\alpha\text{.}\)
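Since \(E\) has only 16 pairs of elements, both formulas can be checked exhaustively. The sketch below (illustrative only; the function names are my own) represents \(u_0+u_1\alpha\) as the bit pair \((u_0,u_1)\text{,}\) with addition in \(\GF(2)\) as XOR and multiplication as AND.

```python
# Sketch: verify both product formulas for GF(4) = GF(2)[alpha]/(alpha^2+alpha+1).
# Elements are pairs (u0, u1) meaning u0 + u1*alpha; GF(2) addition is XOR (^).
from itertools import product

def mul4(u, v):
    """Schoolbook product: four multiplications, three additions in GF(2)."""
    u0, u1 = u
    v0, v1 = v
    return ((u0 & v0) ^ (u1 & v1),
            (u0 & v1) ^ (u1 & v0) ^ (u1 & v1))

def mul3(u, v):
    """Three-multiplication product from part (b): three mults, four adds."""
    u0, u1 = u
    v0, v1 = v
    p = u0 & v0                   # u0*v0
    q = u1 & v1                   # u1*v1
    r = (u0 ^ u1) & (v0 ^ v1)     # (u0+u1)(v0+v1)
    return (p ^ q, r ^ p)         # constant coeff, alpha coeff

# exhaustive agreement check over all 16 pairs
assert all(mul4(u, v) == mul3(u, v)
           for u in product((0, 1), repeat=2)
           for v in product((0, 1), repeat=2))
```

For example, `mul3((0, 1), (0, 1))` computes \(\alpha\cdot\alpha=1+\alpha\text{,}\) returned as the pair `(1, 1)`.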

(c)

Show that the square of an element in \(E\) can be computed using one addition in \(F\text{.}\)
Solution.
We have
\begin{equation*} (u_0+u_1\alpha)^2=(u_0^2+u_1^2)+u_1^2 \alpha = (u_0+u_1)+u_1\alpha \text{.} \end{equation*}
This needs only one addition in \(F\) to compute the constant coefficient.

(d)

Show that the mapping \(E\to E\) defined by \(x\mapsto \alpha x^2\) can be computed using only repositioning of elements of \(F\text{,}\) without any arithmetic operations in \(F\text{.}\)
Solution.
We have
\begin{align*} (u_0+u_1\alpha) \mapsto \amp \alpha ((u_0+u_1)+u_1\alpha)\\ \amp = u_0\alpha+u_1\alpha + u_1\alpha^2\\ \amp = u_0\alpha+u_1\alpha + u_1 +u_1\alpha\\ \amp = u_1 + u_0\alpha\text{.} \end{align*}
So this map just flips the coefficients and needs no arithmetic operations.
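Parts (c) and (d) can also be confirmed against the multiplication formula from part (a). A minimal sketch in the same pair representation (function names are my own):

```python
# Sketch: in the (u0, u1) coordinates for GF(4), squaring needs one GF(2)
# addition, and x -> alpha*x^2 is a pure coefficient swap.
def sq(u):
    u0, u1 = u
    return (u0 ^ u1, u1)          # (u0+u1) + u1*alpha

def alpha_sq(u):                  # x -> alpha * x^2
    u0, u1 = u
    return (u1, u0)               # just reposition the coefficients

def mul(u, v):                    # product formula from part (a)
    u0, u1 = u; v0, v1 = v
    return ((u0 & v0) ^ (u1 & v1), (u0 & v1) ^ (u1 & v0) ^ (u1 & v1))

alpha = (0, 1)
for u in [(0, 0), (1, 0), (0, 1), (1, 1)]:
    assert sq(u) == mul(u, u)                 # part (c)
    assert alpha_sq(u) == mul(alpha, sq(u))   # part (d)
```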

2. The Vandermonde Matrix.

Let \(\beta_1, \beta_2, \ldots, \beta_r\) be \(r\) distinct elements of a field \(F\) and let \(V\) be the \(r\times r\) matrix given by
\begin{equation*} V = \begin{pmatrix} 1 \amp 1 \amp \ldots \amp 1\\ \beta_1 \amp \beta_2 \amp \ldots \amp \beta_r \\ \beta_1^2 \amp \beta_2^2 \amp \ldots \amp \beta_r^2 \\ \vdots \amp \vdots \amp \vdots \amp \vdots \\ \beta_1^{r-1} \amp \beta_2^{r-1} \amp \ldots \amp \beta_r^{r-1} \\ \end{pmatrix} \end{equation*}
In this problem you will show that
\begin{equation*} \det(V)= \prod_{1\leq i \lt j \leq r} (\beta_j - \beta_i)\text{.} \end{equation*}

(a)

By careful choice of row operations and cofactor expansion, show that \(\det(V)=\det(A)\) with
\begin{equation*} A = \begin{pmatrix} \beta_2 - \beta_1 \amp \beta_3 - \beta_1 \amp \ldots \amp \beta_r - \beta_1 \\ \beta_2(\beta_2 - \beta_1) \amp \beta_3(\beta_3 - \beta_1) \amp \ldots \amp \beta_r(\beta_r - \beta_1) \\ \beta_2^2(\beta_2 - \beta_1) \amp \beta_3^2(\beta_3 - \beta_1) \amp \ldots \amp \beta_r^2(\beta_r - \beta_1) \\ \vdots \amp \vdots \amp \vdots \amp \vdots \\ \beta_2^{r-2}(\beta_2 - \beta_1) \amp \beta_3^{r-2}(\beta_3 - \beta_1) \amp \ldots \amp \beta_r^{r-2}(\beta_r - \beta_1) \\ \end{pmatrix} \end{equation*}
Solution.
Denote the \(i\)th row of \(V\) by \(V_i\text{.}\) Beginning with \(j=r\) and working down to \(j=2\text{,}\) replace \(V_j\) with \(V_j-\beta_1 V_{j-1}\text{.}\) This gives the matrix
\begin{equation*} \begin{pmatrix} 1 \amp 1 \amp 1 \amp \ldots \amp 1 \\ 0 \amp \beta_2 - \beta_1 \amp \beta_3 - \beta_1 \amp \ldots \amp \beta_r - \beta_1 \\ 0 \amp \beta_2(\beta_2 - \beta_1) \amp \beta_3(\beta_3 - \beta_1) \amp \ldots \amp \beta_r(\beta_r - \beta_1) \\ 0 \amp\beta_2^2(\beta_2 - \beta_1) \amp \beta_3^2(\beta_3 - \beta_1) \amp \ldots \amp \beta_r^2(\beta_r - \beta_1) \\ \vdots \amp \vdots \amp \vdots \amp \vdots \amp \vdots \\ 0 \amp\beta_2^{r-2}(\beta_2 - \beta_1) \amp \beta_3^{r-2}(\beta_3 - \beta_1) \amp \ldots \amp \beta_r^{r-2}(\beta_r - \beta_1) \\ \end{pmatrix} \end{equation*}
which has the same determinant as \(V\text{.}\) Then doing cofactor expansion down the first column shows that \(\det(V)=\det(A)\text{.}\)

(b)

Use induction on \(r\) and the previous part to prove the claim.
Solution.
The claim holds for the base case \(r=2\) since
\begin{equation*} \begin{vmatrix} 1 \amp 1 \\ \beta_1 \amp \beta_2\\ \end{vmatrix}=\beta_2-\beta_1\text{.} \end{equation*}
Now suppose the claim holds for some integer \(r-1\geq 2\text{.}\) From the previous part we have \(\det(V)=\det(A)\) as given above. But \(A\) is obtained from the Vandermonde matrix \(B\) of size \(r-1\) by multiplying its columns by the factors \(\beta_2-\beta_1,\beta_3-\beta_1,\ldots,\beta_r-\beta_1\text{,}\) where
\begin{equation*} B=\begin{pmatrix} 1 \amp 1 \amp \ldots \amp 1 \\ \beta_2 \amp \beta_3 \amp \ldots \amp \beta_r \\ \beta_2^2 \amp \beta_3^2 \amp \ldots \amp \beta_r^2 \\ \vdots \amp \vdots \amp \vdots \amp \vdots \\ \beta_2^{r-2} \amp \beta_3^{r-2} \amp \ldots \amp \beta_r^{r-2} \\ \end{pmatrix}\text{.} \end{equation*}
So we have by determinant properties and the induction hypothesis that
\begin{align*} \det (A) \amp = \left(\prod_{k=2}^r (\beta_k-\beta_1) \right) \det(B)\\ \amp = \left(\prod_{k=2}^r (\beta_k-\beta_1) \right) \prod_{2\leq i \lt j \leq r} (\beta_j - \beta_i)\\ \amp = \prod_{1\leq i \lt j \leq r} (\beta_j - \beta_i)\text{.} \end{align*}
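The product formula can be spot-checked numerically with exact rational arithmetic. The sketch below (my own helper names; the cofactor-expansion determinant is fine for tiny matrices) compares \(\det(V)\) against \(\prod_{i\lt j}(\beta_j-\beta_i)\) for one choice of distinct \(\beta_i\text{:}\)

```python
# Sketch: check det(V) = prod_{i<j} (beta_j - beta_i) for sample rational betas.
from fractions import Fraction

def det(M):
    """Determinant by cofactor expansion along the first row (exact, tiny M)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(len(M)))

def vandermonde(betas):
    """Rows are the powers beta_j^i for i = 0, ..., r-1."""
    return [[b ** i for b in betas] for i in range(len(betas))]

betas = [Fraction(b) for b in (2, 5, -1, 7)]
lhs = det(vandermonde(betas))
rhs = 1
for i in range(len(betas)):
    for j in range(i + 1, len(betas)):
        rhs *= betas[j] - betas[i]
assert lhs == rhs
```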

3. Linearized Polynomials.

Let \(F= \GF(q)\) and \(E = \GF(q^n)\text{.}\) A linearized polynomial over \(E\) with respect to \(F\) is a polynomial over \(E\) of the form
\begin{equation*} a(x) = \sum_{i=0}^t a_i x^{q^i} \end{equation*}
for some \(t\geq 0\text{.}\) For the rest of this problem, assume all linearized polynomials are over \(E\) with respect to \(F\text{.}\) Sometimes we write \(x^{[i]}\) for \(x^{q^i}\) to avoid double superscripts.

(a)

Let \(a(x)\) be a linearized polynomial. Show that the mapping \(\phi: E\to E\) defined by \(x\mapsto a(x)\) is a linear transformation over \(F\text{.}\)
Solution.
Recall that over a field of characteristic \(p\) the maps \(x\mapsto x^{p^m}\) are additive. Here we have \(q=p^s\) for some \(s\geq 1\) and so each power \(q^i=p^{si}\text{.}\) Let \(x,y \in E\) and \(\alpha \in F\text{.}\) Then we have
\begin{align*} \phi(x+y) \amp = a(x+y)\\ \amp =\sum_{i=0}^t a_i (x+y)^{q^i}\\ \amp =\sum_{i=0}^t \left(a_i x^{q^i}+a_i y^{q^i}\right)\\ \amp =a(x)+a(y)\\ \amp =\phi(x)+\phi(y) \end{align*}
and
\begin{align*} \phi(\alpha x) \amp = a(\alpha x) \\ \amp = \sum_{i=0}^t a_i \alpha^{q^i}x^{q^i}\\ \amp =\sum_{i=0}^t a_i\alpha x^{q^i}\\ \amp =\alpha a(x)\\ \amp =\alpha\phi(x) \end{align*}
since \(\alpha\in F=\GF(q)\) implies \(\alpha^q=\alpha\text{.}\) So \(\phi\) is an \(F\)-linear transformation.
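For a concrete instance, take \(q=2\text{,}\) \(E=\GF(4)\text{,}\) and the linearized polynomial \(a(x)=\alpha x + x^2\text{.}\) The sketch below (using the pair representation from Problem 1; names are my own) checks additivity on all of \(E\text{;}\) since the only \(F=\GF(2)\) scalars are \(0\) and \(1\text{,}\) additivity already gives \(F\)-linearity here.

```python
# Sketch: check GF(2)-linearity of the linearized polynomial a(x) = alpha*x + x^2
# on GF(4), with elements as (u0, u1) pairs meaning u0 + u1*alpha.
def mul(u, v):
    u0, u1 = u; v0, v1 = v
    return ((u0 & v0) ^ (u1 & v1), (u0 & v1) ^ (u1 & v0) ^ (u1 & v1))

def add(u, v):
    return (u[0] ^ v[0], u[1] ^ v[1])

alpha = (0, 1)

def a(x):                         # a(x) = alpha*x + x^2
    return add(mul(alpha, x), mul(x, x))

E = [(0, 0), (1, 0), (0, 1), (1, 1)]
for x in E:
    for y in E:
        assert a(add(x, y)) == add(a(x), a(y))   # additivity on all of E
```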

(b)

Show by a counting argument that every linear transformation \(L:E \to E\) over \(F\) can be represented as a mapping \(x\mapsto a(x)\) for some linearized polynomial with \(t \leq n-1\text{.}\)
Solution.
Each \(F\)-linear transformation \(L:E\to E\) is uniquely determined by the images of the \(n\) vectors in a basis of \(E\) as an \(F\)-vector space. There are thus \((q^n)^n=q^{n^2}\) such transformations. On the other hand, there are also \((q^n)^n\) linearized polynomials with \(t\leq n-1\text{,}\) each of which gives rise to an \(F\)-linear transformation. It remains only to show that distinct linearized polynomials give rise to distinct linear transformations. This follows at once because any two polynomials of degree at most \(q^{n-1}\) which agree on all \(q^n\) points of \(E\) must be the same polynomial: otherwise their difference would be a nonzero polynomial with more roots than its degree.
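The count can be verified directly in the smallest case \(q=2\text{,}\) \(n=2\text{:}\) there should be \((q^n)^n = 16\) distinct maps. A sketch (my own helper names, pair representation from Problem 1):

```python
# Sketch: for q = 2, n = 2 (so E = GF(4)), enumerate the linearized polynomials
# a0*x + a1*x^2 and check that all (q^n)^n = 16 of them give distinct maps E -> E.
from itertools import product

def mul(u, v):                    # product in GF(4) = GF(2)[alpha]/(alpha^2+alpha+1)
    u0, u1 = u; v0, v1 = v
    return ((u0 & v0) ^ (u1 & v1), (u0 & v1) ^ (u1 & v0) ^ (u1 & v1))

def add(u, v):
    return (u[0] ^ v[0], u[1] ^ v[1])

E = [(0, 0), (1, 0), (0, 1), (1, 1)]
maps = set()
for a0, a1 in product(E, repeat=2):
    # record the map x -> a0*x + a1*x^2 by its full value table on E
    graph = tuple(add(mul(a0, x), mul(a1, mul(x, x))) for x in E)
    maps.add(graph)
assert len(maps) == 16            # all 16 linearized polynomials act differently
```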

(c)

Let \(a(x)\neq0\) be a linearized polynomial with \(0\leq t \lt n\text{.}\) Fix a basis \(B=(\alpha_1\,\alpha_2\,\ldots\,\alpha_n)\) of \(E\) over \(F\text{,}\) and let \(A\) be an \(n\times n\) matrix representation of the mapping \(E\to E\) defined by \(x\mapsto a(x)\text{;}\) that is, if \(\bu\) is a column vector in \(F^n\) that represents an element \(u \in E\) as \(u=B\bu\text{,}\) then \(A\bu\) is a vector representation of \(a(u)\text{,}\) i.e., \(a(u)=BA\bu\text{.}\) Show that \(\rank(A)\geq n-t\text{.}\)
Hint.
Find an upper bound on the size of the right kernel of \(A\text{.}\)
Solution.
By Rank-Nullity, we have
\begin{equation*} \rank(A)=\dim_F(\im(\phi_a))=n-\dim_F(\ker(\phi_a))\text{.} \end{equation*}
So it suffices to show that
\begin{equation*} \dim_F(\ker(\phi_a))\leq t\text{.} \end{equation*}
But we have
\begin{equation*} x \in \ker(\phi_a) \Leftrightarrow a(x)=0 \end{equation*}
and the polynomial \(a(x)\) has degree at most \(q^{t}\text{,}\) hence at most \(q^t\) roots in \(E\text{.}\) So \(\ker(\phi_a)\) is an \(F\)-subspace with at most \(q^t\) elements; since a subspace of dimension \(d\) has exactly \(q^d\) elements, the dimension of \(\ker(\phi_a)\) over \(F\) is at most \(t\text{,}\) as desired.
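As a small worked instance, take \(a(x)=x+x^2\) over \(\GF(4)\) with \(q=2\text{,}\) \(n=2\text{,}\) \(t=1\text{.}\) The sketch below (my own helper names) builds the matrix of \(x\mapsto a(x)\) in the basis \((1,\alpha)\) and checks \(\rank(A)\geq n-t=1\text{;}\) here the bound is tight, since \(a\) kills all of \(\GF(2)\text{.}\)

```python
# Sketch: for a(x) = x + x^2 over GF(4) (q = 2, n = 2, t = 1), build the matrix
# of x -> a(x) in the basis (1, alpha) and check rank(A) >= n - t = 1.
def mul(u, v):
    u0, u1 = u; v0, v1 = v
    return ((u0 & v0) ^ (u1 & v1), (u0 & v1) ^ (u1 & v0) ^ (u1 & v1))

def a(x):                          # a(x) = x + x^2, coefficients in GF(2)
    x2 = mul(x, x)
    return (x[0] ^ x2[0], x[1] ^ x2[1])

basis = [(1, 0), (0, 1)]           # the basis elements 1 and alpha
cols = [a(b) for b in basis]       # images of the basis give the columns of A
A = [[cols[j][i] for j in range(2)] for i in range(2)]

def rank_gf2(M):
    """Row rank via Gaussian elimination over GF(2)."""
    rows = [r[:] for r in M]
    rank, col = 0, 0
    while rank < len(rows) and col < len(rows[0]):
        piv = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if piv is None:
            col += 1
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [x ^ y for x, y in zip(rows[i], rows[rank])]
        rank += 1
        col += 1
    return rank

assert rank_gf2(A) >= 2 - 1        # the bound rank(A) >= n - t, tight here
```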

4. Linearized Polynomials & Subspaces.

Throughout this problem suppose \(a(x)\) is a linearized polynomial over \(\GF(q^s)\) with respect to \(\GF(q)\text{.}\)

(a)

Suppose the roots of \(a(x)\) lie in the extension field \(\GF(q^n)\) for \(n\geq s\text{.}\) Show that
  1. The roots of \(a(x)\) form a \(\GF(q)\)-subspace of \(\GF(q^n)\text{.}\)
  2. If \(r\) divides \(n\) then the roots of \(a(x)\) which lie in \(\GF(q^r)\) form a \(\GF(q)\)-subspace of \(\GF(q^n)\text{.}\)
Solution.
  1. Since \(\GF(q^s)\) is here a subfield of \(\GF(q^n)\text{,}\) \(a(x)\) is also a linearized polynomial over \(\GF(q^n)\) with respect to \(\GF(q)\text{.}\) So by the previous problem its roots form the kernel of a \(\GF(q)\)-linear transformation of \(\GF(q^n)\text{,}\) i.e. they are a subspace of \(\GF(q^n)\text{.}\)
  2. We have already established that any \(\GF(q)\)-linear combination of roots of \(a(x)\) is also a root of \(a(x)\text{.}\) So it remains only to check that such a combination of a root lying in \(\GF(q^r)\) also remains in \(\GF(q^r)\text{.}\) But this follows at once from the fact that \(\GF(q^r)\) is itself a \(\GF(q)\)-vector space.
    Equivalently, the set of roots of \(a(x)\) lying in the subfield \(\GF(q^r)\) is the intersection of the \(\GF(q)\)-subspaces \(\ker(\phi_a)\) and \(\GF(q^r)\) and thus is also a \(\GF(q)\)-subspace.

(b)

Let \(\beta_1, \beta_2,\ldots, \beta_m\) be elements of \(\GF(q^s)\) which are linearly independent over \(\GF(q)\text{.}\) Show that the \(m\times m\) matrix
\begin{equation*} B = \begin{pmatrix} \beta_1 \amp \beta_2 \amp \ldots \amp \beta_m \\ \beta_1^q \amp \beta_2^q \amp \ldots \amp \beta_m^q \\ \beta_1^{q^2} \amp \beta_2^{q^2} \amp \ldots \amp \beta_m^{q^2} \\ \vdots \amp \vdots \amp \vdots \amp \vdots \\ \beta_1^{q^{m-1}} \amp \beta_2^{q^{m-1}} \amp \ldots \amp \beta_m^{q^{m-1}} \\ \end{pmatrix} \end{equation*}
is invertible by observing that \((a_0\, a_1\, \ldots\, a_{m-1})B=\bzero\) if and only if the \(\beta_i\) are roots of the linearized polynomial with coefficients given by the \(a_i\) and applying the previous problems.
Solution.
As the statement says, we can see by doing the multiplication that \((a_0\, a_1\, \ldots\, a_{m-1})B=\bzero\) if and only if the \(\beta_i\) are roots of the linearized polynomial \(a(x)\) with coefficients given by the \(a_i\text{.}\) Now from the first part the roots of \(a(x)\) in \(\GF(q^s)\) form a \(\GF(q)\)-subspace \(V\) of \(\GF(q^s)\text{.}\) Since the \(\beta_i\) are linearly independent over \(\GF(q)\) we see that \(\dim_{\GF(q)}(V)\geq m\) and so \(|V|\geq q^m\text{.}\) But if \(a(x)\) is nonzero then it has degree at most \(q^{m-1}\) and hence cannot have \(q^m\) roots. So we have \(a(x)=0\text{,}\) i.e. the nullspace of \(B\) is the zero subspace and so \(B\) is invertible.
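A minimal numeric instance: in \(\GF(4)\text{,}\) the elements \(1\) and \(\alpha\) are \(\GF(2)\)-independent, and the corresponding \(2\times 2\) matrix has determinant \(1\cdot\alpha^2 - \alpha\cdot 1 = \alpha^2+\alpha = 1 \neq 0\text{.}\) The sketch below (my own helper names, pair representation from Problem 1) confirms this; note that subtraction equals addition in characteristic 2.

```python
# Sketch: the 2x2 matrix of part (b) for the GF(2)-independent pair 1, alpha
# in GF(4) has nonzero determinant, as claimed.
def mul(u, v):
    u0, u1 = u; v0, v1 = v
    return ((u0 & v0) ^ (u1 & v1), (u0 & v1) ^ (u1 & v0) ^ (u1 & v1))

def add(u, v):
    return (u[0] ^ v[0], u[1] ^ v[1])

one, alpha = (1, 0), (0, 1)
B = [[one,           alpha],
     [mul(one, one), mul(alpha, alpha)]]   # row i holds beta_j^(2^i)
# ad - bc = ad + bc in characteristic 2
det = add(mul(B[0][0], B[1][1]), mul(B[0][1], B[1][0]))
assert det != (0, 0)                        # B is invertible
```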

(c) Subspace Polynomials.

Let \(V\) be a \(\GF(q)\)-subspace of \(\GF(q^n)\text{.}\) Show that the polynomial
\begin{equation*} p_V(x) = \prod_{\gamma \in V} (x-\gamma) \end{equation*}
is a linearized polynomial over \(\GF(q^n)\) with respect to \(\GF(q)\text{.}\)
Hint.
Let \(\{\beta_1,\beta_2,\ldots,\beta_m\}\) be a basis for \(V\) over \(\GF(q)\text{.}\) Then use the previous part to show that the set of \(m\) linear equations
\begin{equation*} \beta_j^{q^m}+\sum_{i=0}^{m-1} a_i \beta_j^{q^i}=0, \qquad 1\leq j \leq m\text{,} \end{equation*}
has a unique solution for \((a_0\, a_1\, \dots\, a_{m-1})\in \GF(q^n)^m\text{.}\) Then verify that \(p_V(x)=x^{q^m}+\sum_{i=0}^{m-1} a_i x^{q^i}\text{.}\)
Solution.
Because \(V \subseteq \GF(q^n)\text{,}\) the coefficients of \(p_V(x)\) certainly lie in \(\GF(q^n)\text{,}\) so we need to show that only the coefficients on the powers \(x^{q^i}\) for \(i=0,\ldots,m\) can be nonzero.
Let \(\{\beta_1,\beta_2,\ldots,\beta_m\}\) be a basis for \(V\) over \(\GF(q)\text{.}\) The set of \(m\) linear equations from the hint corresponds to the matrix equation
\begin{equation*} (a_0\, a_1\, \ldots\, a_{m-1})B=(-\beta_1^{q^m}\, -\beta_2^{q^m}\, \ldots\, -\beta_m^{q^m}) \end{equation*}
and by the previous part since \(B\) is invertible this has a unique solution for \((a_0\, a_1\, \dots\, a_{m-1})\in \GF(q^n)^m\text{.}\) Now we verify that \(p_V(x)=x^{q^m}+\sum_{i=0}^{m-1} a_i x^{q^i}\text{.}\) Since both are monic polynomials of degree \(q^m\) and the roots of \(p_V(x)\) are precisely the elements of \(V\text{,}\) it suffices to check that each element of \(V\) is a root of the latter polynomial. Since the latter is linearized it suffices to check that the basis elements \(\beta_i\) are roots. But this is true by the definition of the \(a_i\text{.}\)
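A tiny concrete case: for the one-dimensional \(\GF(2)\)-subspace \(V=\{0,\alpha\}\) of \(\GF(4)\text{,}\) we get \(p_V(x)=x(x+\alpha)=x^2+\alpha x\text{,}\) which is indeed linearized. The sketch below (my own helper names, pair representation from Problem 1) checks that the product over \(V\) agrees with \(x^2+\alpha x\) at every point of \(\GF(4)\text{:}\)

```python
# Sketch: for V = {0, alpha} inside GF(4), the subspace polynomial
# p_V(x) = prod_{gamma in V} (x - gamma) equals the linearized x^2 + alpha*x.
def mul(u, v):
    u0, u1 = u; v0, v1 = v
    return ((u0 & v0) ^ (u1 & v1), (u0 & v1) ^ (u1 & v0) ^ (u1 & v1))

def add(u, v):
    return (u[0] ^ v[0], u[1] ^ v[1])

zero, alpha = (0, 0), (0, 1)
V = [zero, alpha]                      # a 1-dimensional GF(2)-subspace of GF(4)

def pV(x):                             # evaluate prod_{gamma in V} (x - gamma);
    out = (1, 0)                       # note x - gamma = x + gamma in char 2
    for g in V:
        out = mul(out, add(x, g))
    return out

# p_V agrees with the linearized polynomial x^2 + alpha*x on all of GF(4)
for x in [(0, 0), (1, 0), (0, 1), (1, 1)]:
    assert pV(x) == add(mul(x, x), mul(alpha, x))
```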