
Worksheet Problem Set M5B

Instructions: You may type up or handwrite your work, but it must be neat, professional, and organized, and it must be saved as a PDF file and uploaded to the appropriate Gradescope assignment. Use a scanner or scanning app to convert handwritten work on paper to PDF. I encourage you to type your work using the provided template.
All tasks below must have a complete solution that represents a good-faith attempt at being right to receive engagement credits. If your submission is complete and turned in on time, you will receive full engagement credits for the assignment. All other submissions will receive zero engagement credit. Read the guidelines at Grading Specifications carefully.
To abide by the class’s academic honesty policy, your work should represent your own understanding in your own words. If you work with other students, you must clearly indicate who you worked with in your submission. The same is true for using tools like generative AI, although I strongly discourage you from using such tools, since you need to build your own understanding here to do well on exams.

1. Decoding for a BCH Code.

Let \(\alpha\) be a primitive element in \(E=\GF(2^m)\) and let \(C_{\mathrm{BCH}}\) be a narrow-sense primitive BCH code of length \(n=2^m-1\) over \(F=\GF(2)\) whose parity-check matrix is obtained by representing each entry in
\begin{equation*} \begin{bmatrix} 1 \amp \alpha \amp \alpha^2 \amp \dots \amp \alpha^{n-1}\\ 1 \amp \alpha^3 \amp \alpha^6 \amp \dots \amp \alpha^{3(n-1)} \end{bmatrix} \end{equation*}
as a column vector in \(F^m\text{.}\)

(a)

Show that
\begin{equation*} C_{\mathrm{BCH}}=C_{\RS} \cap F^n \end{equation*}
where \(C_{\RS}\) is an \([n,n-4,5]\) narrow-sense RS code over \(E\text{.}\)
Solution.
An \([n,n-4,5]\) narrow-sense RS code over \(E\) has roots \(\alpha,\alpha^2,\alpha^3,\alpha^4\text{.}\) Now, for any \(c(x)\in F[x]\) we have that \(c(\alpha)=0\) implies \(c(\alpha^2)=0=c(\alpha^4)\) as these are conjugate elements over \(F\text{.}\) So we have
\begin{align*} C_{\BCH} \amp = \{ \vec{c} \in F^n \mid c(\alpha)=c(\alpha^3)=0\}\\ \amp = \{\vec{c}\in F^n \mid c(\alpha)=c(\alpha^2)=c(\alpha^3)=c(\alpha^4)=0\} \\ \amp =C_{\RS}\cap F^n\text{.} \end{align*}

(b)

Suppose that a codeword \(\vec{c}\) of \(C_{\mathrm{BCH}}\) has been transmitted and a word \(\vec{y}\in F^n\) with at most two errors has been received. Let \(\transpose{(S_0\, S_1\, S_2\, S_3)}\) be the syndrome \(H_{\RS}\transpose{\vec{y}}\text{,}\) where \(H_{\RS}\) is the canonical parity-check matrix of the code \(C_{\RS}\) from part (a). Show that \(S_1=S_0^2\) and \(S_3=S_0^4\text{.}\)
Hint.
Remember that \(\vec{y}\) is over \(F\text{.}\)
Solution.
Let \(y(x)\) be the polynomial in \(F[x]\) with coefficients given by \(\vec{y}\text{.}\) Then we have
\begin{equation*} \transpose{(S_0\, S_1\, S_2\, S_3)}= H_{\RS}\transpose{\vec{y}} = \transpose{(y(\alpha)\,\, y(\alpha^2)\,\, y(\alpha^3)\,\, y(\alpha^4))}\text{.} \end{equation*}
It follows immediately from the linearity of the Frobenius automorphism over \(F\) that
\begin{equation*} S_1=y(\alpha^2)=(y(\alpha))^2=S_0^2 \end{equation*}
and
\begin{equation*} S_3=y(\alpha^4)=(y(\alpha))^4=S_0^4\text{.} \end{equation*}
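This conjugacy relation is easy to sanity-check numerically. Below is a minimal Python sketch (my own illustration, not part of the assignment), taking \(m=4\) and building \(\GF(16)\) from the primitive polynomial \(x^4+x+1\text{;}\) the binary word is arbitrary.

```python
# Numeric check of S1 = S0^2 and S3 = S0^4 for a binary received word.
# Illustrative assumptions (not fixed by the worksheet): m = 4, n = 15,
# with GF(16) built from the primitive polynomial x^4 + x + 1.
M, PRIM = 4, 0b10011

def gf_mul(a, b):
    """Multiply in GF(2^M): shift-and-add, reducing by PRIM."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a = (a << 1) ^ (PRIM if a & (1 << (M - 1)) else 0)
        b >>= 1
    return r

def gf_pow(a, k):
    r = 1
    for _ in range(k):
        r = gf_mul(r, a)
    return r

def evaluate(y, x):
    """y(x) for the polynomial with coefficient vector y (Horner's rule)."""
    r = 0
    for c in reversed(y):
        r = gf_mul(r, x) ^ c
    return r

alpha, n = 2, 15                     # alpha is the class of x, encoded 0b0010
y = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0]   # any binary word works

S0 = evaluate(y, alpha)              # y(alpha)
S1 = evaluate(y, gf_pow(alpha, 2))   # y(alpha^2)
S3 = evaluate(y, gf_pow(alpha, 4))   # y(alpha^4)
assert S1 == gf_pow(S0, 2) and S3 == gf_pow(S0, 4)
```

Because squaring is additive in characteristic 2 and fixes the base field, the assertion passes for every binary vector, not just the one shown.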

(c)

Now assume that exactly two errors have occurred. Show that the EEP is the constant polynomial \(\Gamma(x)=S_0\text{.}\)
Solution.
If two errors have occurred then the error vector is \(\vec{e}=\vec{e}_i+\vec{e}_j\text{,}\) with ones in positions \(i\) and \(j\) and zeros elsewhere (\(0\leq i\lt j \leq n-1\)). For this narrow-sense code the code locators and column multipliers are \(\alpha_j=v_j=\alpha^j\text{,}\) and \(-1=1\) in \(E\text{,}\) so we have
\begin{align*} \Gamma(x) \amp =\sum_{j\in J} e_j v_j \prod_{m\in J\setminus\{j\}} (1-\alpha_{m} x) \\ \amp = e_i v_i (1-\alpha_j x)+e_jv_j(1-\alpha_i x)\\ \amp = \alpha^i (1+\alpha^j x)+\alpha^j(1+\alpha^i x)\\ \amp = \alpha^i + \alpha^j + (\alpha^{i+j}+\alpha^{i+j})x\\ \amp = \alpha^i + \alpha^j\\ \amp = S_0\text{.} \end{align*}

(d)

Again assume that exactly two errors have occurred. Derive from the key equation expressions for the coefficients of the ELP \(\Lambda(x)=1+\Lambda_1 x + \Lambda_2 x^2\text{.}\) Write these expressions in terms of \(S_0\) and \(S_2\) only. Compare the result with the polynomial in Decoding Algorithm for the Double-Error-Correcting Alternant Code over GF(2).
Solution.
From the key equation and parts (b) and (c) we have
\begin{align*} S_0 \amp \equiv (1+\Lambda_1 x+ \Lambda_2 x^2)(S_0+S_0^2 x + S_2 x^2 + S_0^4 x^3) \pmod{x^4}\\ \amp \equiv S_0 + (S_0^2+S_0\Lambda_1) x + (S_2+\Lambda_1 S_0^2 + S_0\Lambda_2) x^2 + (S_0^4+\Lambda_1 S_2+\Lambda_2 S_0^2)x^3 \pmod{x^4}\text{.} \end{align*}
From this it follows that
\begin{equation*} \begin{cases} S_0^2+S_0\Lambda_1 = 0 \\ S_2+\Lambda_1 S_0^2 + S_0\Lambda_2 = 0 \\ S_0^4+\Lambda_1 S_2 + \Lambda_2 S_0^2 = 0 \text{.}\end{cases} \end{equation*}
Now we have that \(S_0=\alpha^i+\alpha^j\) is not zero since exactly two errors have occurred. Therefore the first equation gives
\begin{equation*} \Lambda_1=S_0\text{.} \end{equation*}
Substituting this into the second equation gives
\begin{equation*} \Lambda_2 = \frac{S_2 + S_0^3}{S_0}=S_0^2+\frac{S_2}{S_0}. \end{equation*}
The third equation is consistent with these two:
\begin{equation*} S_0^4+ S_0S_2 +S_0(S_2+S_0^3)=0\text{.} \end{equation*}
So the error locator polynomial is \(\Lambda(x)=1+S_0 x + \left(S_0^2+\frac{S_2}{S_0}\right)x^2. \) The polynomial in Decoding Algorithm for the Double-Error-Correcting Alternant Code over GF(2) is the reverse polynomial of this, but this is just because that polynomial is the product \((x+\alpha^i)(x+\alpha^j)\) while this one is \((1+\alpha^i x)(1+\alpha^j x)\text{.}\)
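Putting parts (b) and (d) together gives a complete double-error decoder for the binary BCH code. The sketch below (again my own illustration, with \(m=4\) and \(\GF(16)\) built from \(x^4+x+1\text{;}\) the planted error positions are arbitrary) computes \(S_0\) and \(S_2\text{,}\) forms the ELP coefficients, and recovers the error positions by Chien search.

```python
# Double-error decoding for the [15,7,5] binary BCH code using the ELP
# derived in part (d).  Illustrative assumptions: m = 4 and GF(16) built
# from x^4 + x + 1; the error positions i, j are chosen arbitrarily.
M, PRIM = 4, 0b10011

def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a = (a << 1) ^ (PRIM if a & (1 << (M - 1)) else 0)
        b >>= 1
    return r

def gf_pow(a, k):
    r = 1
    for _ in range(k):
        r = gf_mul(r, a)
    return r

def gf_inv(a):
    return gf_pow(a, 2**M - 2)        # a^14 = a^(-1) in GF(16)*

def evaluate(y, x):
    r = 0
    for c in reversed(y):
        r = gf_mul(r, x) ^ c
    return r

alpha, n = 2, 15
i, j = 4, 11                          # plant two errors (all-zero codeword)
y = [0] * n
y[i], y[j] = 1, 1

S0 = evaluate(y, alpha)               # y(alpha)
S2 = evaluate(y, gf_pow(alpha, 3))    # y(alpha^3)

# ELP coefficients from the key equation: Lambda_1 = S0 and
# Lambda_2 = S0^2 + S2/S0 (S0 != 0 since exactly two errors occurred).
L1 = S0
L2 = gf_pow(S0, 2) ^ gf_mul(S2, gf_inv(S0))

# Chien search: position p is in error iff Lambda(alpha^(-p)) = 0.
def lam(x):
    return 1 ^ gf_mul(L1, x) ^ gf_mul(L2, gf_mul(x, x))

found = [p for p in range(n) if lam(gf_pow(alpha, (n - p) % n)) == 0]
assert found == [i, j]
```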

2. Sugiyama’s Algorithm for Singly-Extended GRS Codes.

Let \(\Lambda(x)\) and \(\Gamma(x)\) be the error locator and evaluator polynomials for the error vector \(\vec{e}\neq\vec{0}\text{.}\) Set \(J=\{j\mid e_j\neq 0\}\) and \(J_0=\{j\in J \mid \alpha_j\neq 0\}\text{.}\) Recall that there is at most one index \(j\) with \(\alpha_j=0\text{.}\) Show the following.

(a)

\(w_H(\vec{e})=|J|\) is equal to \(|J_0|+1\) or \(|J_0|\) depending upon whether or not there is an index \(j\) with \(\alpha_j=0\) and \(e_j\neq 0\text{.}\)
Solution.
If an error occurs at the index \(j\) where \(\alpha_j=0\) then \(j\in J\) and \(j\not\in J_0\) so we have \(|J|=|J_0|+1\text{.}\) Otherwise \(\alpha_j\neq 0\) for all \(j\in J\) so we have \(|J|=|J_0|\text{.}\)

(b)

\(\deg(\Lambda(x))=|J_0|\) and \(\deg(\Gamma(x))\leq |J| -1\text{.}\)
Solution.
Since \((1-0x)=1\) we have
\begin{equation*} \Lambda(x)=\prod_{j\in J}(1-\alpha_j x)=\prod_{j\in J_0}(1-\alpha_j x) \end{equation*}
and so \(\deg(\Lambda(x))=|J_0|\text{.}\) We have
\begin{equation*} \Gamma(x) = \sum_{j\in J} e_j v_j \prod_{m\in J\setminus\{j\}} (1-\alpha_{m} x) \end{equation*}
and each term in this sum has degree at most \(|J|-1\text{,}\) so the sum has degree at most \(|J|-1\) as well. Thus \(\deg(\Gamma(x))\leq |J| -1\text{.}\)

(c)

\(J_0=\{j\mid \alpha_j\neq 0,\Lambda(\alpha_j^{-1})=0\}\text{.}\)
Solution.
This follows from the definitions: for \(j\) with \(\alpha_j\neq 0\text{,}\) we have \(j\in J_0\) if and only if \(e_j\neq 0\text{,}\) if and only if \((1-\alpha_j x)\) is a factor in the definition of \(\Lambda(x)\text{,}\) if and only if \(\Lambda(\alpha_j^{-1})=0\text{,}\) since the \(\alpha_j\) are distinct and so no other factor vanishes at \(\alpha_j^{-1}\text{.}\)

(d)

The index \(j^*\) with \(\alpha_{j^*}=0\) belongs to \(J\) if and only if \(\deg(\Lambda(x))=\deg(\Gamma(x))\text{.}\)
Solution.
First, suppose the index \(j^*\) with \(\alpha_{j^*}=0\) belongs to \(J\text{.}\) Then we have
\begin{equation*} \Gamma(x) =\left( e_{j^*} v_{j^*}\prod_{m\in J_0}(1-\alpha_m x)\right)+\sum_{j\in J_0} e_j v_j \prod_{m\in J_0\setminus\{j\}} (1-\alpha_{m} x)\text{.} \end{equation*}
But only the first term in this sum has degree \(|J_0|\) while the others have degree \(|J_0|-1\text{.}\) Further, the leading coefficient of that first term is \(e_{j^*} v_{j^*}\prod_{m\in J_0} (-\alpha_m)\neq 0 \) since each \(\alpha_m\neq 0\text{.}\) So we have \(\deg(\Gamma(x))=|J_0|=\deg(\Lambda(x))\text{.}\)
Now suppose that the index \(j^*\) with \(\alpha_{j^*}=0\) does not belong to \(J\text{.}\) Then by (a) we have \(|J|=|J_0|\) and so by (b) we have
\begin{equation*} \deg(\Lambda(x))=|J_0| \gt |J_0|-1 \geq \deg(\Gamma(x))\text{.} \end{equation*}
So the degrees are not equal.

(e)

For \(j\in J_0\text{,}\) \(e_j\) is given by Forney’s algorithm. If \(j\in J\setminus J_0\) then
\begin{equation*} e_j=\gamma v_j^{-1} \left( \prod_{i\in J_0} (-\alpha_i) \right)^{-1} \end{equation*}
where \(\gamma\) is the leading coefficient of \(\Gamma(x)\text{.}\)
Solution.
The only possible \(j\in J\setminus J_0\) is the index \(j^*\) with \(\alpha_{j^*}=0\text{.}\) Then from the previous part, the leading coefficient of \(\Gamma(x)\) is
\begin{equation*} \gamma = e_{j^*} v_{j^*}\prod_{m\in J_0} (-\alpha_m)\text{.} \end{equation*}
Solving for \(e_{j^*}\) gives the desired result.

(f)

Use these results to give a version of Sugiyama’s Algorithm for GRS Decoding for singly-extended GRS codes.
Solution.
The only necessary modification is in the step to determine the errors and their values. A modified algorithm is below.
Sugiyama Decoding for Singly-Extended GRS Codes.
  1. Syndrome computation: compute the polynomial \(S(x)=\sum_{\ell=0}^{d-2} S_{\ell} x^{\ell}\) where
    \begin{equation*} S_{\ell}=\sum_{j=1}^n y_j v_j \alpha_j^{\ell}, \quad \ell=0,1,\dots,d-2\text{.} \end{equation*}
  2. Solving the key equation: Apply the extended Euclidean algorithm with inputs
    \begin{equation*} a(x)=x^{d-1} \quad \text{and} \quad b(x)=S(x) \end{equation*}
    to produce
    \begin{equation*} \Lambda(x)=t_h(x) t_h(0)^{-1} \quad \text{and} \quad \Gamma(x)=r_h(x) t_h(0)^{-1}\text{,} \end{equation*}
    where \(h\) is the smallest index such that \(\deg(r_h) \lt \frac{d-1}{2}\text{.}\)
  3. Check degrees: If \(\deg(\Lambda(x))=\deg(\Gamma(x))\) then an error occurred at the index \(j^*\) such that \(\alpha_{j^*}=0\) and the error value at that index is
    \begin{equation*} e_{j^*}=\gamma v_{j^*}^{-1} \left( \prod_{i\in J_0} (-\alpha_i) \right)^{-1} \end{equation*}
    where \(\gamma\) is the leading coefficient of \(\Gamma(x)\text{.}\)
  4. Find other error locations: Use Chien search (or other root-finding algorithm) to find the roots of \(\Lambda(x)\text{.}\)
  5. Find other error values: Use Forney’s algorithm to compute the error values by
    \begin{equation*} e_j = \begin{cases} -\dfrac{\alpha_j}{v_j} \dfrac{\Gamma(\alpha_j^{-1})}{\Lambda'(\alpha_j^{-1})} \amp \text{if } \Lambda(\alpha_j^{-1})=0 \\ 0 \amp \text{otherwise} \end{cases} \end{equation*}
    for \(j=1,2,\dots,n\text{.}\)

(g)

Use your algorithm to decode the received word \(\vec{y}=(\alpha, 0, \alpha, \alpha^5, 0, \alpha^3, \alpha^2, 0)\) for the singly-extended \([8,4,5]_8\) normalized GRS code with \(\alpha_1=0\) and \(\alpha_i=\alpha^i\) where \(\GF(8)=\F_2[\alpha]/(\alpha^3+\alpha+1)\text{.}\)
Solution.
Modifying the code at Sugiyama Euclidean Algorithm gives
\begin{equation*} \Lambda(x)= 1+ x+ \alpha^4 x^2 \qquad \Gamma(x)= \alpha x\text{.} \end{equation*}
The degrees are not equal, so no error occurred at the extended position (where \(\alpha_1=0\)); all errors are at nonzero code locators. The roots of \(\Lambda(x)\) are \(\alpha^6\) and \(\alpha^4\text{,}\) so the errors occurred at positions 3 (with \(\alpha_3=\alpha^3=(\alpha^4)^{-1}\)) and 8 (with \(\alpha_8=\alpha^8=\alpha=(\alpha^6)^{-1}\)), both with value \(\alpha\text{.}\) So the corrected codeword is \(\vec{c}=(\alpha, 0, 0, \alpha^5, 0, \alpha^3, \alpha^2, \alpha)\text{.}\)
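The modified algorithm can be reproduced end to end in Python on this data. The sketch below is one possible implementation (the bit-encoding of field elements and all helper names are my own, not the course's code; the field \(\GF(8)=\F_2[\alpha]/(\alpha^3+\alpha+1)\) and the received word are from the problem): it computes the syndromes with the convention \(0^0=1\) at the extended position, solves the key equation by the extended Euclidean algorithm, applies the degree check from part (d), and finishes with Chien search and Forney's algorithm.

```python
# One possible implementation of the part (f) algorithm, checked on the
# part (g) data.  Elements of GF(8) = F_2[alpha]/(alpha^3 + alpha + 1)
# are encoded as 3-bit ints (alpha = 0b010); polynomials are low-to-high
# coefficient lists.  Helper names are mine, not from the course code.
M, PRIM = 3, 0b1011

def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a = (a << 1) ^ (PRIM if a & (1 << (M - 1)) else 0)
        b >>= 1
    return r

def gf_pow(a, k):
    r = 1
    for _ in range(k):
        r = gf_mul(r, a)
    return r

def gf_inv(a):
    return gf_pow(a, 2**M - 2)

def pdeg(p):
    return max((i for i, c in enumerate(p) if c), default=-1)

def padd(p, q):
    p, q = (p, q) if len(p) >= len(q) else (q, p)
    return [c ^ (q[i] if i < len(q) else 0) for i, c in enumerate(p)]

def pmul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] ^= gf_mul(a, b)
    return out

def pdivmod(a, b):
    a, db = a[:], pdeg(b)
    q = [0] * max(1, len(a) - db)
    inv_lead = gf_inv(b[db])
    while pdeg(a) >= db:
        shift, coef = pdeg(a) - db, gf_mul(a[pdeg(a)], inv_lead)
        q[shift] ^= coef
        for i, c in enumerate(b):
            if c:
                a[i + shift] ^= gf_mul(coef, c)
    return q, a

def peval(p, x):
    r = 0
    for c in reversed(p):
        r = gf_mul(r, x) ^ c
    return r

alpha, n, d = 2, 8, 5
loc = [0] + [gf_pow(alpha, j) for j in range(2, n + 1)]   # alpha_1 = 0

def decode(y):
    # 1. Syndromes S_l = sum_j y_j alpha_j^l (normalized: v_j = 1);
    #    gf_pow(0, 0) = 1 covers the extended position.
    S = [0] * (d - 1)
    for l in range(d - 1):
        for j in range(n):
            S[l] ^= gf_mul(y[j], gf_pow(loc[j], l))
    # 2. Extended Euclid on a(x) = x^(d-1), b(x) = S(x); stop once
    #    deg(r) < (d-1)/2, then normalize by t(0).
    r0, r1, t0, t1 = [0] * (d - 1) + [1], S[:], [0], [1]
    while pdeg(r1) >= (d - 1) // 2:
        quo, rem = pdivmod(r0, r1)
        r0, r1 = r1, rem
        t0, t1 = t1, padd(t0, pmul(quo, t1))
    c0 = gf_inv(t1[0])
    Lam = [gf_mul(c0, c) for c in t1]
    Gam = [gf_mul(c0, c) for c in r1]
    e = [0] * n
    # 3. Degree check: error at the extended position iff
    #    deg(Lambda) = deg(Gamma) (part d); value from part (e).
    if pdeg(Lam) == pdeg(Gam):
        prod = 1
        for j in range(1, n):
            if peval(Lam, gf_inv(loc[j])) == 0:
                prod = gf_mul(prod, loc[j])   # minus signs vanish in char 2
        e[0] = gf_mul(Gam[pdeg(Gam)], gf_inv(prod))
    # 4-5. Chien search + Forney at the nonzero locators.
    dLam = [Lam[i] if i % 2 else 0 for i in range(1, len(Lam))]
    for j in range(1, n):
        x = gf_inv(loc[j])
        if peval(Lam, x) == 0:
            e[j] = gf_mul(loc[j],
                          gf_mul(peval(Gam, x), gf_inv(peval(dLam, x))))
    return [a ^ b for a, b in zip(y, e)]

y = [2, 0, 2, 7, 0, 3, 4, 0]   # (alpha, 0, alpha, alpha^5, 0, alpha^3, alpha^2, 0)
assert decode(y) == [2, 0, 0, 7, 0, 3, 4, 2]
```

Running `decode` on the given word reproduces the values above: the extended Euclidean step yields \(\Lambda(x)=1+x+\alpha^4x^2\) and \(\Gamma(x)=\alpha x\) after normalization, and the degree-check branch is skipped since the degrees differ.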