
Worksheet Problem Set M5A

Instructions: You may type up or handwrite your work, but it must be neat, professional, and organized, and it must be saved as a PDF file and uploaded to the appropriate Gradescope assignment. Use a scanner or scanning app to convert handwritten work on paper to PDF. I encourage you to type your work using the provided template.
All tasks below must have a complete solution that represents a good-faith attempt at being right to receive engagement credits. If your submission is complete and turned in on time, you will receive full engagement credits for the assignment. All other submissions will receive zero engagement credit. Read the guidelines at Grading Specifications carefully.
To abide by the class’s academic honesty policy, your work should represent your own understanding in your own words. If you work with other students, you must clearly indicate who you worked with in your submission. The same is true for using tools like generative AI, although I strongly discourage you from using such tools, since you need to build your own understanding here to do well on exams.

1. Doubly-Extended GRS Codes.

Let \(C\) be a linear \([n,k=n-r,d]\) code over \(F\) defined by a parity check matrix
\begin{equation*} H=\begin{bmatrix} 1 \amp 1 \amp \ldots \amp 1 \amp 0 \\ \alpha_1 \amp \alpha_2 \amp \ldots \amp \alpha_{n-1} \amp 0 \\ \vdots \amp \vdots \amp \vdots \amp \vdots \amp \vdots \\ \alpha_1^{r-2} \amp \alpha_2^{r-2} \amp \ldots \amp \alpha_{n-1}^{r-2} \amp 0 \\ \alpha_1^{r-1} \amp \alpha_2^{r-1} \amp \ldots \amp \alpha_{n-1}^{r-1} \amp 1 \end{bmatrix}\begin{bmatrix} v_1 \amp 0 \amp \ldots \amp 0 \\ 0 \amp v_2 \amp \ldots \amp 0 \\ \vdots \amp \vdots \amp \ddots \amp \vdots \\ 0 \amp 0 \amp\ldots \amp v_{n} \end{bmatrix} \end{equation*}
where \(\alpha_1,\alpha_2,\dots,\alpha_{n-1}\) are distinct elements of \(F\) and \(v_1,v_2,\dots,v_n\) are nonzero elements of \(F\text{.}\) The code \(C\) is called a doubly-extended GRS code, and the last column in \(H\) is said to correspond to the code locator \(\infty\text{,}\) i.e., the β€œinfinity” element or β€œpoint at infinity”.

(a)

Show that \(C\) is MDS.
Solution.
To show that \(C\) is an MDS code, we must show that its minimum distance is \(d = n - k + 1 = r + 1\text{.}\) By the Singleton bound, we know \(d \le r + 1\text{.}\) It therefore suffices to show that any \(r\) columns of the parity check matrix \(H\) are linearly independent.
Let us factor \(H = H' V\text{,}\) where \(V\) is the \(n \times n\) diagonal matrix with nonzero diagonal entries \(v_1, \dots, v_n\text{,}\) and \(H'\) is the \(r \times n\) matrix:
\begin{equation*} H' = \begin{bmatrix} 1 \amp 1 \amp \ldots \amp 1 \amp 0 \\ \alpha_1 \amp \alpha_2 \amp \ldots \amp \alpha_{n-1} \amp 0 \\ \vdots \amp \vdots \amp \vdots \amp \vdots \amp \vdots \\ \alpha_1^{r-2} \amp \alpha_2^{r-2} \amp \ldots \amp \alpha_{n-1}^{r-2} \amp 0 \\ \alpha_1^{r-1} \amp \alpha_2^{r-1} \amp \ldots \amp \alpha_{n-1}^{r-1} \amp 1 \end{bmatrix} \end{equation*}
Because right-multiplying by a non-singular diagonal matrix \(V\) merely scales the columns by nonzero constants, it preserves linear independence. Thus, we only need to show that any \(r\) columns of \(H'\) are linearly independent.
Case 1: We select \(r\) columns from the first \(n-1\) columns. These columns form a square \(r \times r\) Vandermonde matrix built from the \(r\) distinct elements \(\alpha_{i_1}, \dots, \alpha_{i_r}\text{.}\) Because these elements are distinct, the determinant is nonzero, meaning the columns are linearly independent.
Case 2: We select the last column (column \(n\)) and \(r-1\) columns from the first \(n-1\) columns. Let these correspond to locators \(\alpha_{i_1}, \dots, \alpha_{i_{r-1}}\text{.}\) The resulting \(r \times r\) submatrix \(M\) is:
\begin{equation*} M = \begin{bmatrix} 1 \amp \ldots \amp 1 \amp 0 \\ \alpha_{i_1} \amp \ldots \amp \alpha_{i_{r-1}} \amp 0 \\ \vdots \amp \ddots \amp \vdots \amp \vdots \\ \alpha_{i_1}^{r-2} \amp \ldots \amp \alpha_{i_{r-1}}^{r-2} \amp 0 \\ \alpha_{i_1}^{r-1} \amp \ldots \amp \alpha_{i_{r-1}}^{r-1} \amp 1 \end{bmatrix} \end{equation*}
Expanding the determinant along the last column gives \(\det(M) = 1 \cdot \det(V_{r-1})\text{,}\) where \(V_{r-1}\) is the \((r-1) \times (r-1)\) Vandermonde matrix on the distinct elements \(\alpha_{i_1}, \dots, \alpha_{i_{r-1}}\text{.}\) Since the elements are distinct, \(\det(M) \neq 0\text{,}\) so these columns are also linearly independent.
Because any \(r\) columns of \(H\) are linearly independent, the code \(C\) has minimum distance at least \(r+1 = n-k+1\text{.}\) Thus, \(C\) is MDS.
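As a sanity check (not part of the required written argument), here is a short Python sketch verifying the claim for one small example. The field \(\GF(7)\), the locators, and the multipliers below are arbitrary illustrative choices; the code checks that every set of \(r\) columns of the doubly-extended parity check matrix is linearly independent.

```python
from itertools import combinations

p = 7                        # work over the prime field GF(7)
alphas = [1, 2, 3, 4, 5, 6]  # n-1 = 6 distinct code locators
v = [1, 3, 2, 6, 4, 5, 1]    # nonzero column multipliers v_1..v_n (n = 7)
r = 3                        # redundancy; the code is [7, 4]

# Build H: r x n, whose last column is the "point at infinity" column (0,...,0,1),
# scaled by v_n.
H = [[v[j] * pow(alphas[j], i, p) % p for j in range(len(alphas))] for i in range(r)]
for i in range(r):
    H[i].append(v[-1] if i == r - 1 else 0)

def rank_mod_p(rows, p):
    """Rank of an integer matrix over GF(p) by Gaussian elimination."""
    m = [row[:] for row in rows]
    rank = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(rank, len(m)) if m[i][c] % p), None)
        if piv is None:
            continue
        m[rank], m[piv] = m[piv], m[rank]
        inv = pow(m[rank][c], p - 2, p)   # inverse via Fermat's little theorem
        m[rank] = [x * inv % p for x in m[rank]]
        for i in range(len(m)):
            if i != rank and m[i][c] % p:
                f = m[i][c]
                m[i] = [(x - f * y) % p for x, y in zip(m[i], m[rank])]
        rank += 1
    return rank

n = len(H[0])
# MDS check: every choice of r columns must have full rank r.
all_independent = all(
    rank_mod_p([[H[i][j] for j in cols] for i in range(r)], p) == r
    for cols in combinations(range(n), r)
)
print(all_independent)
```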

(b)

Assuming that \(k\lt n\text{,}\) show that the dual code \(C^{\perp}\) is an \([n,n-k]\) doubly-extended GRS code that can be defined through the same code locators as \(C\text{.}\)
Solution.
The dual code \(C^\perp\) has dimension \(r = n-k\) and its generator matrix is exactly the parity check matrix of \(C\text{,}\) \(H\text{.}\) We want to show that \(C^\perp\) is a doubly-extended GRS code, which means it must have a parity check matrix \(H^\perp\) of the exact same form as \(H\text{,}\) but of dimension \(k \times n\text{,}\) using the identical locators \(\alpha_1, \dots, \alpha_{n-1}, \infty\) and some set of nonzero column multipliers \(w_1, \dots, w_n\text{.}\)
Let \(H^\perp\) be defined as:
\begin{equation*} H^\perp = \begin{bmatrix} 1 \amp 1 \amp \ldots \amp 1 \amp 0 \\ \alpha_1 \amp \alpha_2 \amp \ldots \amp \alpha_{n-1} \amp 0 \\ \vdots \amp \vdots \amp \vdots \amp \vdots \amp \vdots \\ \alpha_1^{k-2} \amp \alpha_2^{k-2} \amp \ldots \amp \alpha_{n-1}^{k-2} \amp 0 \\ \alpha_1^{k-1} \amp \alpha_2^{k-1} \amp \ldots \amp \alpha_{n-1}^{k-1} \amp 1 \end{bmatrix} \begin{bmatrix} w_1 \amp 0 \amp \ldots \amp 0 \\ 0 \amp w_2 \amp \ldots \amp 0 \\ \vdots \amp \vdots \amp \ddots \amp \vdots \\ 0 \amp 0 \amp \ldots \amp w_n \end{bmatrix} \end{equation*}
For \(H^\perp\) to be a valid parity check matrix of \(C^\perp\text{,}\) we require \(H^\perp H^\top = 0\text{:}\) once all the \(w_l\) are shown to be nonzero, \(H^\perp\) has rank \(k\) (by the argument of part (a)), so its null space has dimension \(r = \dim C^\perp\text{,}\) and \(H^\perp H^\top = 0\) then forces this null space to be exactly \(C^\perp\text{.}\) The \((i, j)\)-th entry of \(H^\perp H^\top\) for \(0 \leq i \leq k-1\) and \(0 \leq j \leq r-1\) is given by the dot product of the \(i\)-th row of \(H^\perp\) and the \(j\)-th row of \(H\text{.}\)
By the result we proved for generalized Reed-Solomon codes on \(n-1\) points, there exist nonzero multipliers \(u_1, \dots, u_{n-1}\) such that \(\sum_{l=1}^{n-1} u_lv_l \alpha_l^m = 0\) for all \(0 \leq m \leq n-3\text{.}\) Let us select \(w_l = u_l\) for \(1 \leq l \leq n-1\text{.}\)
If \((i, j) \neq (k-1, r-1)\text{,}\) the exponent of \(\alpha\) in the dot product satisfies \(i + j \leq (k - 1) + (r - 1) - 1 = n - 3\text{.}\) The dot product evaluates to:
\begin{equation*} \sum_{l=1}^{n-1} (w_l v_l) \alpha_l^{i+j} + (\text{last column product}) = \sum_{l=1}^{n-1} u_lv_l \alpha_l^{i+j} + 0 = 0. \end{equation*}
For the single remaining case where \(i = k-1\) and \(j = r-1\text{,}\) the dot product is:
\begin{equation*} \sum_{l=1}^{n-1} (w_l v_l) \alpha_l^{(k-1)+(r-1)} + w_n v_n (1)(1) = \sum_{l=1}^{n-1} u_lv_l \alpha_l^{n-2} + w_n v_n. \end{equation*}
Setting this equal to zero forces \(w_n = -v_n^{-1} \sum_{l=1}^{n-1} u_lv_l \alpha_l^{n-2}\text{.}\) It remains to show that the sum \(\sum_{l=1}^{n-1} u_lv_l \alpha_l^{n-2}\) is nonzero, so that the final column multiplier \(w_n\) is also nonzero.
Suppose to the contrary that this sum is zero. Then, together with the identities \(\sum_{l=1}^{n-1} u_lv_l\alpha_l^m = 0\) for \(0 \leq m \leq n-3\text{,}\) we have
\begin{equation*} \begin{bmatrix} 1 \amp 1 \amp \ldots \amp 1 \\ \alpha_1 \amp \alpha_2 \amp \ldots \amp \alpha_{n-1} \\ \vdots \amp \vdots \amp \vdots \amp \vdots \\ \alpha_1^{n-2} \amp \alpha_2^{n-2} \amp \ldots \amp \alpha_{n-1}^{n-2} \end{bmatrix} \begin{bmatrix} v_1 \amp 0 \amp \ldots \amp 0 \\ 0 \amp v_2 \amp \ldots \amp 0 \\ \vdots \amp \vdots \amp \ddots \amp \vdots \\ 0 \amp 0 \amp \ldots \amp v_{n-1} \end{bmatrix}\begin{bmatrix} u_1 \\ u_2 \\ \vdots \\ u_{n-1} \end{bmatrix}=\begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix} \end{equation*}
But the left two matrices are a Vandermonde matrix on distinct elements and a diagonal matrix with nonzero diagonal entries, so both are invertible. Therefore the vector \(\vec{u}\) is the zero vector, which contradicts the fact that each \(u_i\) is nonzero. So \(w_n\) is also nonzero.
Since all \(w_l\) are nonzero, \(H^\perp\) is a legitimate parity check matrix exhibiting the doubly-extended GRS layout while retaining the identical code locators. Therefore, \(C^\perp\) is an \([n, n-k]\) doubly-extended GRS code.
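The multipliers \(u_l\) invoked above can be checked numerically. One standard choice (an assumption here, not derived in this solution) is \(u_l = \big(v_l \prod_{j\neq l}(\alpha_l - \alpha_j)\big)^{-1}\text{;}\) the sketch below, over the illustrative field \(\GF(7)\), confirms that \(\sum_l u_l v_l \alpha_l^m = 0\) for \(0 \leq m \leq n-3\) while the sum at exponent \(n-2\) is nonzero, which is exactly what the argument for \(w_n\) needs.

```python
p = 7
alphas = [0, 1, 2, 3, 4, 5]   # the n-1 = 6 distinct locators (illustrative)
v = [1, 3, 2, 6, 4, 5]        # the matching nonzero multipliers v_1..v_{n-1}
n = len(alphas) + 1           # doubly-extended length (here 7)

# Candidate dual multipliers: u_l = (v_l * prod_{j != l} (alpha_l - alpha_j))^{-1}.
u = []
for l, a in enumerate(alphas):
    prod = 1
    for j, b in enumerate(alphas):
        if j != l:
            prod = prod * (a - b) % p
    u.append(pow(v[l] * prod % p, p - 2, p))   # inverse mod p via Fermat

# sums[m] = sum_l u_l v_l alpha_l^m mod p, for m = 0 .. n-2.
sums = [sum(u[l] * v[l] * pow(alphas[l], m, p) for l in range(len(alphas))) % p
        for m in range(n - 1)]
# Expect zeros for m <= n-3 and a nonzero value at m = n-2.
print(sums)
```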

2. Code Locators & Column Multipliers of GRS Codes.

Let \(C\) be an \([n,k]\) GRS code over \(F\) with code locators \(\alpha_1,\dots,\alpha_n\) and column multipliers \(v_1, \dots, v_n\text{.}\)

(a)

Fix \(\mu, \nu,\) and \(\eta\) to be elements of \(F\) with \(\mu,\eta\neq 0\text{.}\) Show that \(C\) is identical to the \([n,k]\) GRS code \(C'\) over \(F\) defined by the code locators
\begin{equation*} \alpha_j'=\mu \alpha_j +\nu, \quad j=1,2,\dots,n\text{,} \end{equation*}
and column multipliers
\begin{equation*} v_j'=\eta v_j, \quad j=1,2,\dots,n\text{.} \end{equation*}
(Note: Given \(\mu\text{,}\) there are certain choices of \(\nu\) for which \(C'\) will in fact be singly-extended; still, if \(n\lt|F|\text{,}\) one can select \(\nu\) so that each \(\alpha_j'\) is nonzero, even when \(C\) is singly-extended.)
Hint.
Show that each row in a canonical parity check matrix of \(C'\) can be written as a linear combination of some rows in a canonical parity check matrix of \(C\text{.}\)
Solution.
Letting \(r=n-k\text{,}\) the canonical parity check matrix of \(C\) is:
\begin{equation*} H=\begin{bmatrix} 1 \amp 1 \amp \ldots \amp 1 \\ \alpha_1 \amp \alpha_2 \amp \ldots \amp \alpha_n \\ \vdots \amp \vdots \amp \vdots \amp \vdots \\ \alpha_1^{r-1} \amp \alpha_2^{r-1} \amp \ldots \amp \alpha_n^{r-1} \end{bmatrix}\begin{bmatrix} v_1 \amp 0 \amp \ldots \amp 0 \\ 0 \amp v_2 \amp \ldots \amp 0 \\ \vdots \amp \vdots \amp \ddots \amp \vdots \\ 0 \amp 0 \amp\ldots \amp v_n \end{bmatrix} \end{equation*}
and the canonical parity check matrix of \(C'\) is:
\begin{equation*} H'=\begin{bmatrix} 1 \amp 1 \amp \ldots \amp 1 \\ \mu \alpha_1 +\nu \amp \mu \alpha_2 +\nu \amp \ldots \amp \mu \alpha_n +\nu \\ \vdots \amp \vdots \amp \vdots \amp \vdots \\ (\mu \alpha_1 +\nu)^{r-1} \amp (\mu \alpha_2 +\nu)^{r-1} \amp \ldots \amp (\mu \alpha_n +\nu)^{r-1} \end{bmatrix}\begin{bmatrix} \eta v_1 \amp 0 \amp \ldots \amp 0 \\ 0 \amp \eta v_2 \amp \ldots \amp 0 \\ \vdots \amp \vdots \amp \ddots \amp \vdots \\ 0 \amp 0 \amp\ldots \amp \eta v_n \end{bmatrix} \end{equation*}
Let row \(\ell\) of \(H\text{,}\) for \(0\leq \ell \leq r-1\text{,}\) be
\begin{equation*} \vec{h}_{\ell}=\begin{pmatrix}v_1\alpha_1^{\ell} \amp v_2\alpha_2^{\ell}\amp \dots \amp v_n\alpha_n^{\ell} \end{pmatrix}\text{.} \end{equation*}
For each \(1\leq i \leq n\) and \(0\leq j \leq r-1\) we can use the binomial theorem to write
\begin{equation*} (\mu \alpha_i +\nu)^j = \sum_{\ell=0}^j \binom{j}{\ell} \mu^\ell \alpha_i^\ell \nu^{j-\ell} \end{equation*}
So row \(j\) of \(H'\) for \(0\leq j \leq r-1\) is
\begin{align*} \vec{h'}_j\amp = \begin{pmatrix} \eta v_1 (\mu \alpha_1+\nu)^j\amp \eta v_2 (\mu \alpha_2+\nu)^j\amp \dots \amp \eta v_n (\mu \alpha_n+\nu)^j\end{pmatrix} \\ \amp = \begin{pmatrix} \eta v_1 \left[\sum_{\ell=0}^j \binom{j}{\ell} \mu^\ell \alpha_1^\ell \nu^{j-\ell}\right]\amp \eta v_2 \left[\sum_{\ell=0}^j \binom{j}{\ell} \mu^\ell \alpha_2^\ell \nu^{j-\ell}\right]\amp \dots \amp \eta v_n \left[\sum_{\ell=0}^j \binom{j}{\ell} \mu^\ell \alpha_n^\ell \nu^{j-\ell}\right] \end{pmatrix}\\ \amp = \sum_{\ell=0}^j \eta \binom{j}{\ell} \mu^{\ell}\nu^{j-\ell} \begin{pmatrix} v_1\alpha_1^{\ell} \amp v_2\alpha_2^{\ell}\amp \dots \amp v_n\alpha_n^{\ell}\end{pmatrix} \\ \amp = \sum_{\ell=0}^j \eta \binom{j}{\ell} \mu^{\ell}\nu^{j-\ell} \vec{h}_{\ell} \end{align*}
Therefore the rowspace of \(H'\) is a subset of the rowspace of \(H\text{,}\) and so the nullspace of \(H\) (which is \(C\)) is a subset of the nullspace of \(H'\) (which is \(C'\)). But these sets have the same size, since \(C\) and \(C'\) are GRS codes of the same length and dimension, so they are equal.
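This row-space containment can be confirmed numerically. Below is a sketch over the illustrative field \(\GF(7)\) with arbitrary choices of locators, multipliers, and \(\mu,\nu,\eta\text{;}\) it checks that \(H\) and \(H'\) have equal row spaces (equivalently, equal ranks that match the rank of the stacked matrix), hence equal null spaces.

```python
p = 7
alphas = [0, 1, 2, 3, 4, 6]   # distinct locators in GF(7) (illustrative)
v = [1, 3, 2, 6, 4, 5]        # nonzero multipliers
mu, nu, eta = 3, 5, 2         # mu and eta nonzero; nu arbitrary
k = 2
r = len(alphas) - k           # r = n - k = 4

def grs_pcm(locs, mults, r, p):
    """Canonical GRS parity check matrix: rows mults[j] * locs[j]^i, i = 0..r-1."""
    return [[mults[j] * pow(locs[j], i, p) % p for j in range(len(locs))]
            for i in range(r)]

H = grs_pcm(alphas, v, r, p)
H2 = grs_pcm([(mu * a + nu) % p for a in alphas], [eta * x % p for x in v], r, p)

def rank_mod_p(rows, p):
    """Rank over GF(p) by Gaussian elimination."""
    m = [row[:] for row in rows]
    rank = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(rank, len(m)) if m[i][c] % p), None)
        if piv is None:
            continue
        m[rank], m[piv] = m[piv], m[rank]
        for i in range(len(m)):
            if i != rank and m[i][c] % p:
                f = m[i][c] * pow(m[rank][c], p - 2, p) % p
                m[i] = [(x - f * y) % p for x, y in zip(m[i], m[rank])]
        rank += 1
    return rank

# Equal row spaces <=> rank(H) == rank(H') == rank of the stacked matrix.
same_rowspace = rank_mod_p(H, p) == rank_mod_p(H2, p) == rank_mod_p(H + H2, p)
print(same_rowspace)
```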

(b)

Show that \(C\) is equivalent to an \([n,k]\) GRS code over \(F\) with code locators \(\alpha_1^{-1},\alpha_2^{-1},\dots,\alpha_n^{-1}\) (and with properly selected column multipliers). Verify that this holds also when \(C\) is singly-extended or doubly-extended, i.e., when the code locators of \(C\) include the zero element or the β€œinfinity” element, regarding one to be the multiplicative inverse of the other.
Solution.
Again, let \(r=n-k\text{.}\) For each nonzero, non-infinity code locator \(\alpha_i\text{,}\) define the new column multiplier \(u_i=\alpha_i^{r-1}v_i\text{;}\) if \(\alpha_i\) is zero or infinity, define \(u_i=v_i\text{.}\) Let \(C'\) be the GRS code with the inverted locators and these multipliers. Without loss of generality, suppose the code locators of \(C\) are \(\alpha_1,\alpha_2,\ldots,\alpha_{n-2},0,\infty\text{,}\) so that the code locators of \(C'\) are \(\alpha_1^{-1},\alpha_2^{-1},\dots,\alpha_{n-2}^{-1},\infty,0\text{.}\) Then we have
\begin{equation*} H' = \begin{bmatrix} v_1\alpha_1^{r-1} \amp v_2\alpha_2^{r-1} \amp \ldots \amp v_{n-2}\alpha_{n-2}^{r-1} \amp 0 \amp v_n \\ v_1\alpha_1^{r-2} \amp v_2\alpha_2^{r-2} \amp \ldots \amp v_{n-2}\alpha_{n-2}^{r-2} \amp 0 \amp 0 \\ \vdots \amp \vdots \amp \vdots \amp \vdots \amp \vdots \amp \vdots\\ v_1\alpha_1 \amp v_2\alpha_2 \amp \ldots \amp v_{n-2}\alpha_{n-2} \amp 0 \amp 0 \\ v_1 \amp v_2 \amp \ldots \amp v_{n-2} \amp v_{n-1} \amp 0 \end{bmatrix}\text{.} \end{equation*}
This matrix is the parity check matrix of \(C\) with the order of the rows reversed, so they have the same rowspace and thus the same nullspace, so \(C=C'\text{.}\)
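The row-reversal identity at the heart of this argument is easy to confirm numerically. The sketch below uses only nonzero, finite locators over the illustrative field \(\GF(7)\) (so every inverse exists) and checks that the parity check matrix built from \(\alpha_i^{-1}\) and \(u_i=v_i\alpha_i^{r-1}\) is exactly \(H\) with its rows reversed.

```python
p = 7
alphas = [1, 2, 3, 4, 5, 6]   # nonzero locators, so every inverse exists
v = [1, 3, 2, 6, 4, 5]        # nonzero multipliers
r = 4                         # n - k with n = 6, k = 2

inv = [pow(a, p - 2, p) for a in alphas]                          # alpha_i^{-1}
u = [v[i] * pow(alphas[i], r - 1, p) % p for i in range(len(v))]  # new multipliers

H = [[v[j] * pow(alphas[j], i, p) % p for j in range(len(alphas))] for i in range(r)]
Hprime = [[u[j] * pow(inv[j], i, p) % p for j in range(len(alphas))] for i in range(r)]

# H' should be H with its rows in reverse order, hence the same row space.
rows_reversed = Hprime == H[::-1]
print(rows_reversed)
```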

(c)

[Optional]. Let \(\mu,\nu,\sigma\text{,}\) and \(\tau\) be elements of \(F\) such that \(\mu\tau \neq \sigma\nu\text{.}\) Based on parts 1 and 2, show that \(C\) is identical to an \([n,k]\) GRS code over \(F\) with code locators
\begin{equation*} \alpha_j'=\frac{\mu \alpha_j +\nu}{\sigma \alpha_j +\tau}, \quad j=1,2,\dots,n \end{equation*}
and with properly selected column multipliers. Verify that this holds also when \(C\) is singly-extended or doubly-extended (with \(\alpha_j=\infty\) being mapped to \(\alpha_j'=\mu/\sigma\) and \(\alpha_j=-\tau/\sigma\) to \(\alpha_j'=\infty\)).
Solution.

3. Building a Concatenated Code.

Give a construction of a concatenated code \(C_{\mathrm{cont}}\) which is a \([765,248,\geq 63]_2\) code. This means identifying explicitly the inner code \(C_{\mathrm{in}}\text{,}\) the outer code \(C_{\mathrm{out}}\text{,}\) and the mapping from symbols of \(C_{\mathrm{out}}\) to codewords of \(C_{\mathrm{in}}\text{.}\) Justify that your construction indeed achieves the desired parameters. How many errors can your code correct?
Hint.
Find an appropriate RS code to use as the outer code and an appropriate binary code to use as the inner code, noticing that \(765=51\cdot 15, 248=31\cdot 8\) and \(63=21\cdot 3\text{.}\)
Solution.
Per the hint, we can look for a \([51,31,21]\) RS code for the outer code and a \([15,8,3]\) binary code for the inner code. The inner code has \(2^8\) codewords, matching the choice of field \(\GF(2^8)\) needed for an RS code of length \(51\) to exist (since we need \(51\mid 2^m-1\text{,}\) and \(2^8-1=255=5\cdot 51\)). We could choose any RS code of this length, so let's use a simple narrow-sense code with \(b=1\) and \(\alpha=\beta^5\) for \(\beta\) a primitive element of \(\GF(2^8)\text{.}\) Then the RS code has parity-check matrix
\begin{equation*} H_{\RS}=\begin{bmatrix} 1 \amp \alpha \amp \dots \amp \alpha^{50}\\ 1 \amp \alpha^2 \amp \dots \amp \alpha^{100}\\ \vdots \amp \vdots \amp \vdots \amp \vdots \\ 1 \amp \alpha^{20} \amp \dots \amp \alpha^{1000} \end{bmatrix}\text{.} \end{equation*}
For the inner code we have a variety of options. For instance, we could take any dimension-8 subcode of the \([15,11,3]\) Hamming code by selecting 8 rows from its generator matrix (this subcode has distance at least 3, possibly higher). Or we could explicitly give a parity check matrix for the inner code: a \(7\times 15\) binary matrix with no zero column, no two columns equal, and containing three columns that are linearly dependent (so the distance is exactly 3), for instance
\begin{equation*} H_{in} = \begin{bmatrix} 1 \amp 0 \amp 0 \amp 0 \amp 0 \amp 0 \amp 0 \amp 1 \amp 1 \amp 1 \amp 1 \amp 1 \amp 1 \amp 1 \amp 1 \\ 0 \amp 1 \amp 0 \amp 0 \amp 0 \amp 0 \amp 0 \amp 1 \amp 0 \amp 0 \amp 0 \amp 0 \amp 0 \amp 1 \amp 1 \\ 0 \amp 0 \amp 1 \amp 0 \amp 0 \amp 0 \amp 0 \amp 0 \amp 1 \amp 0 \amp 0 \amp 0 \amp 0 \amp 1 \amp 0 \\ 0 \amp 0 \amp 0 \amp 1 \amp 0 \amp 0 \amp 0 \amp 0 \amp 0 \amp 1 \amp 0 \amp 0 \amp 0 \amp 0 \amp 1 \\ 0 \amp 0 \amp 0 \amp 0 \amp 1 \amp 0 \amp 0 \amp 0 \amp 0 \amp 0 \amp 1 \amp 0 \amp 0 \amp 0 \amp 0 \\ 0 \amp 0 \amp 0 \amp 0 \amp 0 \amp 1 \amp 0 \amp 0 \amp 0 \amp 0 \amp 0 \amp 1 \amp 0 \amp 0 \amp 0 \\ 0 \amp 0 \amp 0 \amp 0 \amp 0 \amp 0 \amp 1 \amp 0 \amp 0 \amp 0 \amp 0 \amp 0 \amp 1 \amp 0 \amp 0 \end{bmatrix}\text{.} \end{equation*}
We fix the map \(\GF(2^8) \to \GF(2)^8\) by using the standard basis \(\{1,\beta,\beta^2,\dots,\beta^7\}\) and sending each symbol \(c\in \GF(2^8)\) to the 8-bit binary vector of coefficients when \(c\) is expressed in this basis. The resulting concatenated code has length \(765=51\cdot 15\text{,}\) dimension \(248=31\cdot 8\text{,}\) and minimum distance at least \(63=21\cdot 3\text{.}\) Therefore, it is a \([765,248,\geq 63]_2\) code. This code can correct up to \(\lfloor (63-1)/2 \rfloor = 31\) errors.
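The claimed properties of the explicit \(H_{in}\) above can be verified mechanically: no zero column and no repeated column give distance at least 3, and full rank 7 gives dimension \(15-7=8\text{.}\) Here is a short Python check of exactly those three facts.

```python
# The 7 x 15 binary parity check matrix H_in proposed for the inner code.
H_in = [
    [1,0,0,0,0,0,0,1,1,1,1,1,1,1,1],
    [0,1,0,0,0,0,0,1,0,0,0,0,0,1,1],
    [0,0,1,0,0,0,0,0,1,0,0,0,0,1,0],
    [0,0,0,1,0,0,0,0,0,1,0,0,0,0,1],
    [0,0,0,0,1,0,0,0,0,0,1,0,0,0,0],
    [0,0,0,0,0,1,0,0,0,0,0,1,0,0,0],
    [0,0,0,0,0,0,1,0,0,0,0,0,1,0,0],
]
cols = [tuple(row[j] for row in H_in) for j in range(15)]

no_zero_col = all(any(c) for c in cols)        # distance > 1
all_distinct = len(set(cols)) == len(cols)     # distance > 2

def rank_gf2(rows):
    """Rank over GF(2), packing each row into an integer bitmask."""
    m = [int("".join(map(str, row)), 2) for row in rows]
    rank = 0
    for bit in reversed(range(15)):
        piv = next((i for i in range(rank, len(m)) if m[i] >> bit & 1), None)
        if piv is None:
            continue
        m[rank], m[piv] = m[piv], m[rank]
        for i in range(len(m)):
            if i != rank and m[i] >> bit & 1:
                m[i] ^= m[rank]
        rank += 1
    return rank

full_rank = rank_gf2(H_in) == 7                # so the code is [15, 15-7] = [15, 8]
print(no_zero_col, all_distinct, full_rank)
```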

4. BCH Code Example.

Let \(C_{\RS}\) be a \([21,17]\) normalized RS code over an extension field \(E\) of \(F=\GF(2)\) and let \(C_{\mathrm{BCH}}\) be the BCH code \(C_{\RS}\cap F^{21}\) over \(F\text{.}\)

(a)

Find the smallest possible size of \(E\text{.}\)
Solution.
We must have \(21 \mid |E|-1\) so that \(E\) contains an element of order 21 to generate the code locators for \(C_{\RS}\text{.}\) Since \(2^m-1\) for \(m=1,\dots,5\) takes the values \(1,3,7,15,31\text{,}\) none of which is divisible by 21, while \(2^6-1=63=3\cdot 21\text{,}\) the smallest such field is \(\GF(2^6)=\GF(64)\text{.}\)
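The divisibility scan is easy to reproduce (a quick sketch; the search bound of 12 is an arbitrary cap):

```python
# Scan field sizes 2^m for the first with 21 | 2^m - 1.
m = next(m for m in range(1, 13) if (2**m - 1) % 21 == 0)
print(m, 2**m)   # the smallest extension is GF(2^6) = GF(64)
```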

(b)

Show that the dimension of \(C_{\mathrm{BCH}}\) is at least \(8\text{.}\)
Solution.
Since we have a normalized RS code, we take \(b=0\) in the definition of RS codes and so if \(\alpha\) is an element of order 21 in \(E\text{,}\) the parity check matrix of \(C_{\RS}\) is
\begin{equation*} H_{\RS} = \begin{bmatrix} 1 \amp 1 \amp \dots \amp 1 \\ 1 \amp \alpha \amp \dots \amp \alpha^{20}\\ 1 \amp \alpha^2 \amp \dots \amp \alpha^{40}\\ 1 \amp \alpha^3 \amp \dots \amp \alpha^{60}\\ \end{bmatrix}\text{.} \end{equation*}
Each row corresponds to a root of the code, but since \(c(\alpha)=0\) implies \(c(\alpha^2)=0\) over \(\F_2\text{,}\) the third row can be eliminated before expanding the symbols into binary column vectors. Further, the first row contributes only one binary equation, since it already involves only binary symbols. Expanding each of the two remaining rows over \(E=\GF(2^6)\) into at most 6 binary rows, the parity check matrix of \(C_{\mathrm{BCH}}\) has at most \(1+6+6=13\) independent rows, so the dimension of \(C_{\mathrm{BCH}}\) is at least \(21-13=8\text{.}\)
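The rank bound can be checked by direct computation. The sketch below builds \(\GF(64)\) as binary polynomials modulo \(x^6+x+1\) (a standard primitive polynomial, an assumption of this sketch), takes \(\alpha=\beta^3\) of order 21, expands every entry of \(H_{\RS}\) into its 6 binary coordinates, and computes the rank over \(\GF(2)\); the rank comes out at most 13 (it may be strictly smaller, which only improves the dimension bound).

```python
MOD = 0b1000011   # x^6 + x + 1, primitive over GF(2), so beta = x generates GF(64)*

def gf_mul(a, b):
    """Carry-less multiplication of 6-bit field elements, reduced mod x^6+x+1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b1000000:
            a ^= MOD
    return r

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

beta = 0b10                 # the class of x; order 63
alpha = gf_pow(beta, 3)     # order 63 / gcd(63, 3) = 21
assert gf_pow(alpha, 21) == 1 and gf_pow(alpha, 7) != 1 and gf_pow(alpha, 3) != 1

# Rows of H_RS are (alpha^{i*j})_{i=0..20} for j = 0..3; expand each GF(64)
# entry into its 6 binary coordinates to get a binary parity-check matrix.
bin_rows = []
for j in range(4):
    for bit in range(6):
        bin_rows.append([gf_pow(alpha, i * j) >> bit & 1 for i in range(21)])

def rank_gf2(rows):
    m = [int("".join(map(str, r)), 2) for r in rows]
    rank = 0
    for bit in reversed(range(21)):
        piv = next((i for i in range(rank, len(m)) if m[i] >> bit & 1), None)
        if piv is None:
            continue
        m[rank], m[piv] = m[piv], m[rank]
        for i in range(len(m)):
            if i != rank and m[i] >> bit & 1:
                m[i] ^= m[rank]
        rank += 1
    return rank

rank = rank_gf2(bin_rows)
print(rank, 21 - rank)   # rank <= 13, so dim(C_BCH) = 21 - rank >= 8
```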

(c)

Show that the minimum distance of \(C_{\mathrm{BCH}}\) is at least \(6\text{.}\)
Solution.
The minimum distance of \(C_{\mathrm{BCH}}\) is at least the minimum distance of \(C_{\RS}\text{,}\) which is \(21-17+1=5\text{.}\) To show that the minimum distance is at least \(6\text{,}\) it suffices to show that there are no codewords of weight exactly 5 in \(C_{\mathrm{BCH}}\text{.}\) To see this, note that every codeword polynomial \(c(x)\in \F_2[x]\) has \(c(1)=0\text{,}\) but \(c(1)\) is the sum of the coefficients of \(c(x)\) mod 2, so every codeword has an even number of nonzero coefficients. An even weight that is at least 5 must be at least \(6\text{.}\)

5. Generator Matrix for a Concatenated Code [Optional].

Let \(C_{\mathrm{out}}\) be an \([N,K]\) code over an extension field \(E=\GF(q^k)\) of \(F=\GF(q)\text{,}\) and let \(C_{\mathrm{in}}\) be an \([n,k]\) code over \(F\text{.}\) Let \(\{\beta_1,\beta_2,\dots,\beta_k\}\) be a basis for \(E\) over \(F\text{.}\) Let \(C_{\mathrm{cont}}\) be the concatenated code with these outer and inner codes and with the mapping from symbols of \(C_{\mathrm{out}}\) to codewords of \(C_{\mathrm{in}}\) defined by writing each symbol in \(E\) as a linear combination of the basis elements with coefficients in \(F\) and replacing it with the corresponding linear combination of codewords of \(C_{\mathrm{in}}\text{.}\)
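To make the symbol-to-codeword mapping concrete, here is a toy sketch with \(E=\GF(4)\) over \(F=\GF(2)\text{,}\) basis \(\{1,\beta\}\text{,}\) and a hypothetical \([3,2]\) inner generator matrix (all choices below are illustrative, not part of the problem). It checks the key fact behind both parts: the map from a symbol to its inner codeword is \(F\)-linear.

```python
# E = GF(4) = {0, 1, b, b+1} encoded as 2-bit ints: bit0 = coeff of 1, bit1 = coeff of b.
# Addition in GF(4) is bitwise XOR, which is all this sketch needs.
G_in = [[1, 0, 1],
        [0, 1, 1]]           # hypothetical generator of a [3, 2] inner code

def coords(s):
    """Symbol of E -> coefficient vector over F in the basis {1, b}."""
    return [s & 1, s >> 1 & 1]

def to_inner(s):
    """Symbol of E -> codeword of C_in: coefficient vector times G_in over GF(2)."""
    a, b = coords(s)
    return [a * G_in[0][j] ^ b * G_in[1][j] for j in range(3)]

# F-linearity: the image of a sum of symbols is the sum of the images.
linear = all(to_inner(s ^ t) == [x ^ y for x, y in zip(to_inner(s), to_inner(t))]
             for s in range(4) for t in range(4))
print(linear)
```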

(a)

Let \(G_{\mathrm{out}}\) be a generator matrix for \(C_{\mathrm{out}}\text{.}\) Argue that the \(kK\times N\) matrix \(G_{\mathrm{out}}'\) obtained by replacing each row \(\br_i\) of \(G_{\mathrm{out}}\) with the \(k\times N\) matrix
\begin{equation*} A_i = \begin{bmatrix} \beta_1 \br_i \\ \beta_2 \br_i \\ \vdots \\ \beta_k \br_i \end{bmatrix} \end{equation*}
has the same rowspace as \(G_{\mathrm{out}}\) and that all of its rows are linearly independent over \(F\text{.}\)
Solution.

(b)

Each symbol \(g_{ij}\) of \(G_{\mathrm{out}}'\) is an element of \(E\) and therefore can be written as a linear combination of the basis elements \(\beta_1,\beta_2,\dots,\beta_k\) with coefficient vector \(\vec{g_{ij}}\) in \(F\text{.}\) By replacing each symbol in \(G_{\mathrm{out}}'\) with the vector \(\vec{g_{ij}}G_{\mathrm{in}}\text{,}\) we can obtain a \(kK\times nN\) matrix \(\tilde{G}\) with entries in \(F\text{.}\) Show that \(\tilde{G}\) is a generator matrix for the concatenated code \(C_{\mathrm{cont}}\text{.}\)
Solution.