HW 32 — Comparing ML and MMSE Estimators (MBIP Fig. 2.1)
Background
The goal of this assignment is to compare the ML and MMSE estimators on a correlated signal buried in white Gaussian noise.
The unknown signal \(X \in \mathbb{R}^p\) is modeled as a Gaussian random vector with zero mean and covariance matrix \(R_x\), whose \((i,j)\) entry is \([R_x]_{i,j} = \dfrac{1}{1 + ((i-j)/10)^2}\).
This covariance models a smooth signal: nearby components are strongly correlated and distant components are weakly correlated. The observation is \(Y = X + W\), where \(W \sim \mathcal{N}(0, \sigma_w^2 I)\) is independent white Gaussian noise with \(\sigma_w = 0.75\) and \(p = 50\).
The two estimators are the ML estimate, \(\hat X_{\mathrm{ML}} = Y\), and the MMSE estimate, \(\hat X_{\mathrm{MMSE}} = R_x (R_x + \sigma_w^2 I)^{-1} Y\).
Tasks
Part 1 — Generate the Data
- Build the covariance matrix \(R_x\). Construct the \(50 \times 50\) matrix with entries \([R_x]_{i,j} = 1/(1 + ((i-j)/10)^2)\) for \(i, j = 1, \ldots, 50\).
- Generate a realization of \(X\). Draw a single random vector \(X \sim \mathcal{N}(0, R_x)\) using MATLAB's `mvnrnd` function. Transpose as needed so that \(X\) is a column vector.
- Generate noise and observation. Generate \(W \sim \mathcal{N}(0, \sigma_w^2 I)\) and compute \(Y = X + W\).
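As a sketch, the data-generation steps above might look like the following in MATLAB (the variable names `p`, `sigw`, `Rx`, etc. are illustrative choices, not requirements):

```matlab
p = 50;          % signal dimension
sigw = 0.75;     % noise standard deviation

% Covariance matrix: [Rx]_{i,j} = 1 / (1 + ((i-j)/10)^2)
[I, J] = meshgrid(1:p, 1:p);
Rx = 1 ./ (1 + ((I - J)/10).^2);

% One realization of X ~ N(0, Rx), transposed to a column vector
X = mvnrnd(zeros(1, p), Rx)';

% White Gaussian noise and the observation Y = X + W
W = sigw * randn(p, 1);
Y = X + W;
```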
Part 2 — Compute the Estimates
- Compute the ML estimate: \(\hat X_{\mathrm{ML}} = Y\).
- Compute the MMSE estimate: \(\hat X_{\mathrm{MMSE}} = R_x(R_x + \sigma_w^2 I)^{-1} Y\).
Hint: Use MATLAB's backslash operator or `inv()` for the matrix inverse. For numerical stability, prefer `Rx / (Rx + Rw)` or `Rx * ((Rx + Rw) \ eye(p))`, where `Rw = sigw^2 * eye(p)` is the noise covariance \(\sigma_w^2 I\).
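Both estimates are one-liners; a minimal sketch, assuming `Rx`, `Y`, `sigw`, and `p` have been set up as in Part 1 (these names are illustrative):

```matlab
Rw = sigw^2 * eye(p);          % noise covariance, sigma_w^2 * I
Xml   = Y;                     % ML estimate: the observation itself
Xmmse = Rx * ((Rx + Rw) \ Y);  % MMSE estimate: Rx * (Rx + Rw)^{-1} * Y
```

Using the backslash solve `(Rx + Rw) \ Y` avoids forming an explicit inverse, which is the numerically preferred route.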
Part 3 — Plot the results
Create a single figure with four subplots arranged in a 2×2 grid:
- Top-left: Plot of the true signal \(X\). Title: "Original Signal, X".
- Top-right: Plot of the noisy observation \(Y\). Title: "Signal with Noise, Y".
- Bottom-left: Plot of the ML estimate \(\hat X_{\mathrm{ML}}\). Title: "ML Estimate of X".
- Bottom-right: Plot of the MMSE estimate \(\hat X_{\mathrm{MMSE}}\). Title: "MMSE Estimate of X".
Label the \(x\)-axis as "Index" and the \(y\)-axis as "Value" on all panels. Use the same \(y\)-axis limits on all panels for easy comparison.
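One way to sketch the 2×2 figure, assuming the four signals are stored in variables named as in the earlier parts:

```matlab
figure;
titles  = {'Original Signal, X', 'Signal with Noise, Y', ...
           'ML Estimate of X', 'MMSE Estimate of X'};
signals = {X, Y, Xml, Xmmse};
for k = 1:4
    subplot(2, 2, k);
    plot(signals{k});
    title(titles{k});
    xlabel('Index'); ylabel('Value');
    ylim([-3 3]);   % same limits on every panel; adjust after inspecting data
end
```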
Part 4 — Compute MSE
Compute the mean squared error for both estimates, \(\mathrm{MSE} = \frac{1}{p}\sum_{i=1}^{p} (X_i - \hat X_i)^2\), i.e. \(\frac{1}{p}\|X - \hat X\|^2\).
Report both values. Which estimator has lower MSE for your realization?
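The two MSE values can be computed in one line each; a sketch, again assuming the variable names from the earlier parts:

```matlab
mse_ml   = mean((X - Xml).^2);
mse_mmse = mean((X - Xmmse).^2);
fprintf('MSE (ML):   %.4f\n', mse_ml);
fprintf('MSE (MMSE): %.4f\n', mse_mmse);
```

For a single realization the MMSE estimate will typically, though not always, come out lower.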
Part 5 — Explanation
In a few sentences, explain why the MMSE estimate is typically more accurate than the ML estimate for this problem. Your answer should address:
- What prior information does the MMSE estimate use that the ML estimate ignores?
- What is the trade-off the MMSE estimator makes (hint: is it unbiased)?
- Would the MMSE advantage hold if \(\sigma_w \to 0\)? Why or why not?
Hints
- Build \(R_x\) efficiently with vectorized indexing instead of nested loops: `[I, J] = meshgrid(1:p, 1:p); Rx = 1 ./ (1 + ((I-J)/10).^2);`
- `mvnrnd(mu, Sigma)` returns a row vector by default; transpose with `'` to get a column vector.
- Use `subplot(2,2,k)` for the four panels. Set consistent axis limits with `ylim([-3 3])` or similar after inspecting your data.
- The MMSE estimate looks visually smoother than both \(Y\) and \(\hat X_{\mathrm{ML}}\): this is the prior covariance doing its job.