
Worksheet Weekly Practice 11

Instructions: You may type up or handwrite your work, but it must be neat, professional, and organized, and it must be saved as a PDF file and uploaded to the appropriate Gradescope assignment. Use a scanner or scanning app to convert handwritten work on paper to PDF. I encourage you to type your work using the provided template.
All tasks below must have a complete solution that represents a good-faith attempt at a correct answer in order to receive engagement credit. If your submission is complete and turned in on time, you will receive full engagement credit for the assignment. All other submissions will receive zero engagement credit. Read the guidelines at Grading Specifications carefully.
To abide by the class academic honesty policy, your work should represent your own understanding in your own words. If you work with other students, you must clearly indicate who you worked with in your submission. The same is true for using tools like generative AI, although I strongly discourage you from using such tools, since you need to build your own understanding here to do well on exams.

True/False, Multiple Choice, & Fill-In.

For these problems a justification is not required for credit, but it may be useful for your own understanding to include one. True/False problems should be marked True if the statement is always true, and False otherwise. Multiple choice problems may have more than one correct answer if that is indicated in the problem statement; be sure to select all that apply. Fill-in problems require a short answer such as a number, word, or phrase.

1.

Fill-In: In Sugiyama’s algorithm, the polynomial \(\prod_{j\in J} (1-\alpha_j x)\) is called the \(\fillinmath{XXXXXXXXXXXX}\text{.}\)
Solution.
This is the error-locator polynomial.

2.

True/False: The Welch-Berlekamp algorithm is more efficient than Sugiyama’s algorithm.
Solution.
False; Welch-Berlekamp has complexity \(O(n^3)\) while Sugiyama has complexity \(O(n^2)\text{.}\)

3.

Fill-In: In the Welch-Berlekamp algorithm, the polynomial \(Q(x)\) has degree \(\fillinmath{e+k-1}\text{.}\)

Short Response.

Your responses to these questions should be complete solutions with justifications, as per the Grading Specifications.

4.

For this problem, you will work with the field \(\GF(8)=\F_2[\alpha]/(\alpha^3+\alpha+1)\) and the \([7,3,5]_8\) GRS code \(C\) with code locators \(\alpha_i=\alpha^{i-1}\) and column multipliers \(v_i=1\) for \(i=1,\dots,7\text{.}\) Assume that the vector
\begin{equation*} \vec{y}=(0,\alpha^5,0,1,\alpha^6,0,1) \end{equation*}
is received.
(a)
Use Sugiyama’s Algorithm for GRS Decoding to find the error vector \(\vec{e}\) and intended codeword \(\vec{c}\text{.}\)
Solution.
Computing the syndrome (via the SageMath tool) gives
\begin{equation*} S(x)=\alpha+\alpha x^2+\alpha^4 x^3\text{.} \end{equation*}
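If you would like to double-check this without the SageMath tool, the syndromes are small enough to compute directly. The sketch below is my own plain-Python stand-in (the helper names are assumptions, not the course tool), storing \(\GF(8)\) elements as 3-bit integers with \(\alpha\) represented by 0b010:

```python
# Sketch: verify the syndrome in plain Python.  GF(8) = F_2[a]/(a^3+a+1),
# elements stored as 3-bit integers, with a = 0b010.  Helper names are my
# own; the course's SageMath tool is not shown here.

def gf8_mul(x, y):
    """Multiply in GF(8): carry-less product, then reduce mod a^3 + a + 1."""
    r = 0
    for i in range(3):
        if (y >> i) & 1:
            r ^= x << i
    for i in (4, 3):                  # clear bits 4, 3 using 0b1011 = a^3+a+1
        if (r >> i) & 1:
            r ^= 0b1011 << (i - 3)
    return r

def gf8_pow(x, n):
    r = 1
    for _ in range(n):
        r = gf8_mul(r, x)
    return r

a = 0b010
y = [0, gf8_pow(a, 5), 0, 1, gf8_pow(a, 6), 0, 1]   # received vector
locators = [gf8_pow(a, i) for i in range(7)]        # alpha_i = a^(i-1)

# Syndromes S_j = sum_i v_i * y_i * alpha_i^j with v_i = 1, for j = 0..d-2
S = [0] * 4
for j in range(4):
    for yi, ai in zip(y, locators):
        S[j] ^= gf8_mul(yi, gf8_pow(ai, j))         # addition in GF(8) is XOR

print(S)   # [2, 0, 2, 6], i.e. S(x) = a + a*x^2 + a^4*x^3
```

Here 2, 3, 6, 7, 5 encode \(\alpha,\alpha^3,\alpha^4,\alpha^5,\alpha^6\) respectively, so the output matches \(S(x)=\alpha+\alpha x^2+\alpha^4 x^3\) above.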
Now we apply the extended Euclidean algorithm (again, with the SageMath tool) with \(a(x)=x^{d-1}=x^4\) and \(b(x)=S(x)\) until the remainder has degree strictly lower than \(e=\frac{d-1}{2}=2\text{.}\) This takes two steps and concludes with
\begin{align*} \Lambda(x)\amp=t_2(x)t_2(0)^{-1}=1+\alpha^3 x + x^2\\ \Gamma(x)\amp=r_2(x)t_2(0)^{-1}=\alpha+\alpha^4x\text{.} \end{align*}
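The two Euclidean steps can likewise be reproduced without SageMath. The sketch below (my own assumed helpers; polynomials are coefficient lists, lowest degree first) runs the extended Euclidean algorithm on \(a(x)=x^4\) and \(S(x)\), then normalizes by \(t_2(0)^{-1}\):

```python
# Sketch of the extended Euclidean step over GF(8)[x].  GF(8) = F_2[a]/(a^3+a+1),
# elements as 3-bit ints with a = 0b010; polynomials are coefficient lists,
# lowest degree first.  Helper names are my own assumptions.

def gf8_mul(x, y):
    r = 0
    for i in range(3):
        if (y >> i) & 1:
            r ^= x << i
    for i in (4, 3):
        if (r >> i) & 1:
            r ^= 0b1011 << (i - 3)
    return r

def gf8_inv(x):
    r = 1
    for _ in range(6):            # x^(-1) = x^6, since x^7 = 1 for nonzero x
        r = gf8_mul(r, x)
    return r

def p_add(p, q):                  # characteristic 2: coefficient-wise XOR
    r = [0] * max(len(p), len(q))
    for i, c in enumerate(p):
        r[i] ^= c
    for i, c in enumerate(q):
        r[i] ^= c
    while r and r[-1] == 0:
        r.pop()
    return r

def p_mul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, pc in enumerate(p):
        for j, qc in enumerate(q):
            r[i + j] ^= gf8_mul(pc, qc)
    return r

def p_divmod(num, den):
    num = num[:]
    q = [0] * (len(num) - len(den) + 1)
    inv = gf8_inv(den[-1])
    for i in range(len(num) - len(den), -1, -1):
        c = gf8_mul(num[i + len(den) - 1], inv)
        q[i] = c
        for j, dc in enumerate(den):
            num[i + j] ^= gf8_mul(c, dc)
    rem = num[:len(den) - 1]
    while rem and rem[-1] == 0:
        rem.pop()
    return q, rem

# r_{-1} = x^4, r_0 = S(x) = a + a x^2 + a^4 x^3;  t_{-1} = 0, t_0 = 1
r_prev, r_cur = [0, 0, 0, 0, 1], [2, 0, 2, 6]
t_prev, t_cur = [], [1]
while len(r_cur) - 1 >= 2:        # stop once deg r < e = 2
    q, rem = p_divmod(r_prev, r_cur)
    r_prev, r_cur = r_cur, rem
    t_prev, t_cur = t_cur, p_add(t_prev, p_mul(q, t_cur))

inv0 = gf8_inv(t_cur[0])          # normalize so that Lambda(0) = 1
Lam = [gf8_mul(inv0, c) for c in t_cur]
Gam = [gf8_mul(inv0, c) for c in r_cur]
print(Lam)   # [1, 3, 1] = 1 + a^3 x + x^2
print(Gam)   # [2, 6]    = a + a^4 x
```

The loop runs exactly twice, matching the two steps claimed above, and the normalized outputs agree with \(\Lambda(x)=1+\alpha^3 x+x^2\) and \(\Gamma(x)=\alpha+\alpha^4 x\).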
To find the error locations we find the roots of \(\Lambda(x)\text{.}\) By Chien search they are \(\alpha^2\) and \(\alpha^5\text{,}\) which are the inverses of \(\alpha^5=\alpha_6\) and \(\alpha^2=\alpha_3\) respectively, so the errors occurred in positions 3 and 6. Finally, the error values are computed by Forney’s algorithm as
\begin{align*} e_3\amp=\frac{-\alpha^2}{1}\frac{\alpha+\alpha^4(\alpha^5)}{\alpha^3}=\alpha^3\\ e_6 \amp= \frac{-\alpha^5}{1}\frac{\alpha+\alpha^4(\alpha^2)}{\alpha^3}=1 \end{align*}
and so the intended codeword was \(\vec{c}=(0,\alpha^5,\alpha^3,1,\alpha^6,1,1)\text{.}\)
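The Chien search and Forney step can also be checked in plain Python. In the sketch below (my own assumed helpers, not the course tool), note that the factor \(-\alpha_j\) loses its sign in characteristic 2 and \(\Lambda'(x)=\alpha^3\) since even-power terms vanish under formal differentiation:

```python
# Sketch: Chien search + Forney error evaluation in plain Python.
# GF(8) = F_2[a]/(a^3+a+1), elements as 3-bit ints, a = 0b010 = 2.
# Helper names are my own assumptions.

def gf8_mul(x, y):
    r = 0
    for i in range(3):
        if (y >> i) & 1:
            r ^= x << i
    for i in (4, 3):
        if (r >> i) & 1:
            r ^= 0b1011 << (i - 3)
    return r

def gf8_pow(x, n):
    r = 1
    for _ in range(n):
        r = gf8_mul(r, x)
    return r

def gf8_inv(x):
    return gf8_pow(x, 6)          # x^7 = 1 for nonzero x

def p_eval(p, x):                 # Horner evaluation, coeffs lowest degree first
    r = 0
    for c in reversed(p):
        r = gf8_mul(r, x) ^ c
    return r

Lam  = [1, 3, 1]                  # Lambda(x) = 1 + a^3 x + x^2  (a^3 = 3)
Gam  = [2, 6]                     # Gamma(x)  = a + a^4 x
dLam = [3]                        # Lambda'(x) = a^3 in characteristic 2
y = [0, 7, 0, 1, 5, 0, 1]         # received vector (a^5 = 7, a^6 = 5)

e = [0] * 7
for i in range(7):
    ai = gf8_pow(2, i)            # code locator alpha_{i+1}
    Xi = gf8_inv(ai)
    if p_eval(Lam, Xi) == 0:      # Chien search: alpha_{i+1}^{-1} is a root
        # Forney; the sign on -alpha_j and the v_j = 1 are trivial here
        e[i] = gf8_mul(gf8_mul(ai, p_eval(Gam, Xi)),
                       gf8_inv(p_eval(dLam, Xi)))

c = [yi ^ ei for yi, ei in zip(y, e)]
print(e)   # [0, 0, 3, 0, 0, 1, 0]: errors a^3 in position 3 and 1 in position 6
print(c)   # [0, 7, 3, 1, 5, 1, 1] = (0, a^5, a^3, 1, a^6, 1, 1)
```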
(b)
Use Berlekamp-Welch Decoding for GRS Codes to find the error vector \(\vec{e}\) and intended codeword \(\vec{c}\text{.}\)
Solution.
First, we need to compute the dual multipliers (the multipliers for the generator matrix of the code). Here the code is normalized, primitive, and in characteristic 2, so we have
\begin{equation*} u_i=-\alpha_i/v_i=\alpha_i\text{.} \end{equation*}
Now \(e=\lfloor(d-1)/2\rfloor=2\) and \(e+k-1=2+3-1=4\text{,}\) so our polynomials are \(E(x)=e_0+e_1 x+ x^2\) and \(Q(x)=q_0+q_1x+q_2x^2+q_3x^3+q_4x^4\text{.}\) We need to solve the linear system over \(\GF(8)\) of the seven equations in seven unknowns given by
\begin{equation*} Q(\alpha_i)=\frac{y_i}{u_i}E(\alpha_i) \end{equation*}
for \(i=1,\dots,7\text{.}\)
This system, represented as a matrix equation, has the form
\begin{equation*} \begin{bmatrix} 1 \amp 1 \amp 1 \amp 1 \amp 1 \amp 0 \amp 0 \\ 1 \amp \alpha \amp \alpha^{2} \amp \alpha^{3} \amp \alpha^{4} \amp \alpha^{4} \amp \alpha^{5} \\ 1 \amp \alpha^{2} \amp \alpha^{4} \amp \alpha^{6} \amp \alpha \amp 0 \amp 0 \\ 1 \amp \alpha^{3} \amp \alpha^{6} \amp \alpha^{2} \amp \alpha^{5} \amp \alpha^{4} \amp 1 \\ 1 \amp \alpha^{4} \amp \alpha \amp \alpha^{5} \amp \alpha^{2} \amp \alpha^{2} \amp \alpha^{6} \\ 1 \amp \alpha^{5} \amp \alpha^{3} \amp \alpha \amp \alpha^{6} \amp 0 \amp 0 \\ 1 \amp \alpha^{6} \amp \alpha^{5} \amp \alpha^{4} \amp \alpha^{3} \amp \alpha \amp 1 \\ \end{bmatrix}\begin{bmatrix} q_0 \\ q_1 \\q_2\\ q_3 \\ q_4 \\ e_0 \\ e_1\end{bmatrix} = \begin{bmatrix}0 \\\alpha^{6} \\0 \\\alpha^{3} \\\alpha^{3} \\ 0 \\\alpha^{6} \\ \end{bmatrix} \end{equation*}
Solving the system gives the solution
\begin{equation*} Q(x)=x+\alpha x^2+\alpha x^3+x^4 \qquad E(x)=1+\alpha^3 x + x^2\text{.} \end{equation*}
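As a check on the linear algebra, the sketch below (a plain-Python stand-in with assumed helper names, not the course's SageMath tool) builds the same \(7\times 7\) system and solves it by Gauss-Jordan elimination over \(\GF(8)\):

```python
# Sketch: set up and solve the Welch-Berlekamp linear system by Gauss-Jordan
# elimination over GF(8) = F_2[a]/(a^3+a+1), elements as 3-bit ints, a = 2.
# Helper names are my own assumptions.

def gf8_mul(x, y):
    r = 0
    for i in range(3):
        if (y >> i) & 1:
            r ^= x << i
    for i in (4, 3):
        if (r >> i) & 1:
            r ^= 0b1011 << (i - 3)
    return r

def gf8_pow(x, n):
    r = 1
    for _ in range(n):
        r = gf8_mul(r, x)
    return r

def gf8_inv(x):
    return gf8_pow(x, 6)          # x^7 = 1 for nonzero x

y = [0, 7, 0, 1, 5, 0, 1]         # received vector (a^5 = 7, a^6 = 5)
rows, rhs = [], []
for i in range(7):
    ai = gf8_pow(2, i)                    # alpha_{i+1}; also u_{i+1} = alpha_{i+1}
    ratio = gf8_mul(y[i], gf8_inv(ai))    # y_i / u_i
    # unknowns q0..q4, e0, e1:  Q(ai) + ratio*(e0 + e1*ai) = ratio*ai^2
    rows.append([gf8_pow(ai, j) for j in range(5)]
                + [ratio, gf8_mul(ratio, ai)])
    rhs.append(gf8_mul(ratio, gf8_pow(ai, 2)))

# Gauss-Jordan elimination (this particular system is nonsingular)
A = [row + [b] for row, b in zip(rows, rhs)]
n = 7
for col in range(n):
    piv = next(r for r in range(col, n) if A[r][col] != 0)
    A[col], A[piv] = A[piv], A[col]
    inv = gf8_inv(A[col][col])
    A[col] = [gf8_mul(inv, c) for c in A[col]]
    for r in range(n):
        if r != col and A[r][col] != 0:
            f = A[r][col]
            A[r] = [c ^ gf8_mul(f, pc) for c, pc in zip(A[r], A[col])]

sol = [A[r][n] for r in range(n)]
Q = sol[:5]                # coefficients q0..q4
E = sol[5:] + [1]          # e0, e1, plus the fixed leading coefficient 1
print(Q)   # [0, 1, 2, 2, 1] = x + a x^2 + a x^3 + x^4
print(E)   # [1, 3, 1]       = 1 + a^3 x + x^2
```

Since exactly \(e=2\) errors occurred, the solution is unique, and it matches \(Q(x)\) and \(E(x)\) above.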
Then we compute the message polynomial
\begin{equation*} P(x)=Q(x)/E(x)=x^2+x \end{equation*}
and check that the associated codeword
\begin{equation*} c=(u_iP(\alpha_i))_{i=1}^7=(0,\alpha^5,\alpha^3,1,\alpha^6,1,1) \end{equation*}
has Hamming distance at most \(2\) from \(\vec{y}\text{.}\) It has distance 2, with errors in positions \(3\) and \(6\text{,}\) so we decoded correctly. This matches the result in (a).
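The final division and re-encoding step can also be checked in plain Python (again with my own assumed helpers standing in for the SageMath tool):

```python
# Sketch: recover P = Q/E by polynomial division over GF(8) and rebuild the
# codeword c_i = u_i * P(alpha_i).  GF(8) = F_2[a]/(a^3+a+1), elements as
# 3-bit ints, a = 2; polynomials are coefficient lists, lowest degree first.

def gf8_mul(x, y):
    r = 0
    for i in range(3):
        if (y >> i) & 1:
            r ^= x << i
    for i in (4, 3):
        if (r >> i) & 1:
            r ^= 0b1011 << (i - 3)
    return r

def gf8_pow(x, n):
    r = 1
    for _ in range(n):
        r = gf8_mul(r, x)
    return r

def gf8_inv(x):
    return gf8_pow(x, 6)          # x^7 = 1 for nonzero x

def p_divmod(num, den):
    num = num[:]
    q = [0] * (len(num) - len(den) + 1)
    inv = gf8_inv(den[-1])
    for i in range(len(num) - len(den), -1, -1):
        c = gf8_mul(num[i + len(den) - 1], inv)
        q[i] = c
        for j, dc in enumerate(den):
            num[i + j] ^= gf8_mul(c, dc)
    rem = num[:len(den) - 1]
    while rem and rem[-1] == 0:
        rem.pop()
    return q, rem

def p_eval(p, x):                 # Horner evaluation
    r = 0
    for c in reversed(p):
        r = gf8_mul(r, x) ^ c
    return r

Q = [0, 1, 2, 2, 1]               # x + a x^2 + a x^3 + x^4
E = [1, 3, 1]                     # 1 + a^3 x + x^2
P, rem = p_divmod(Q, E)
assert rem == []                  # the division must be exact

y = [0, 7, 0, 1, 5, 0, 1]
c = [gf8_mul(gf8_pow(2, i), p_eval(P, gf8_pow(2, i))) for i in range(7)]
dist = sum(ci != yi for ci, yi in zip(c, y))
print(P)     # [0, 1, 1] = x + x^2
print(c)     # [0, 7, 3, 1, 5, 1, 1]
print(dist)  # 2 <= e, so the decoding is accepted
```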
(c)
Which method was easier for you to use on this problem?
Solution.
Answers will vary by student comfort with polynomial division, linear systems, and SageMath.