HW 31 — Maximum Likelihood Estimation: Laser Speckle
Background
When coherent light (e.g., a laser) reflects off a surface that is rough relative to the illumination wavelength, the reflected field exhibits laser speckle — a grainy, random intensity pattern visible when you shine a laser at a wall. This is not a measurement error; it is a physical consequence of the coherent superposition of waves scattered from many random surface features.
When we measure the speckle intensity integrated over a finite area — as a camera pixel or the eye does — the measured value \(x_i\) is not the true mean intensity \(\theta\) of the reflected field. Instead, it is well modeled as an exponentially distributed random variable with density \[p(x_i) = \theta\, e^{-\theta x_i}, \qquad x_i \ge 0,\]
where \(\theta\) is the actual intensity of the reflected light (units: W/m²). Suppose we illuminate a surface with a laser and collect \(n\) independent measurements \(X = [x_1, x_2, \ldots, x_n]\) using a single-pixel camera (the camera acquires one pixel at a time, with slight illumination angle variations between shots ensuring independence).
Note: MATLAB's exprnd function takes the mean of the distribution as its input parameter, which for this exponential model is \(1/\theta\). So to generate samples with parameter \(\theta = 2\), use exprnd(1/theta, n, 1).
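As a quick sanity check of the parameterization (a sketch, not part of the required deliverables), samples drawn with exprnd(1/theta, ...) should have a sample mean close to \(1/\theta = 0.5\) for \(\theta = 2\):

```matlab
% Sanity check: exprnd takes the MEAN as its first argument,
% so exprnd(1/theta, ...) draws from the model with parameter theta.
theta = 2;
x = exprnd(1/theta, 1e5, 1);   % 100,000 samples
fprintf('sample mean = %.3f (expected %.3f)\n', mean(x), 1/theta);
```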
Questions
Question 1 — Derive the ML Estimator (written)
Derive an expression for the maximum likelihood estimate \(\hat\theta_{\mathrm{ML}}\) of the true intensity \(\theta\) given \(n\) independent measurements \(X = [x_1, \ldots, x_n]\).
Hint: Because the measurements are independent, the joint likelihood is the product of the individual likelihoods: \[p_\theta(X) = \prod_{i=1}^n p(x_i) = \prod_{i=1}^n \theta\, e^{-\theta x_i}.\] It is easier to maximize the log-likelihood \(\log p_\theta(X)\) than the product directly.
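For reference, the maximization outlined in the hint proceeds as follows (a worked sketch of the standard derivation — you should still show each step in your write-up): \[\log p_\theta(X) = n\log\theta - \theta\sum_{i=1}^{n} x_i, \qquad \frac{d}{d\theta}\log p_\theta(X) = \frac{n}{\theta} - \sum_{i=1}^{n} x_i.\] Setting the derivative to zero and solving gives \(\hat\theta_{\mathrm{ML}} = n \big/ \sum_{i=1}^{n} x_i = 1/\bar{x}\), and the second derivative \(-n/\theta^2 < 0\) confirms this stationary point is a maximum.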
Question 2 — MATLAB Simulation
Assume \(\theta = 2\) W/m². Write a MATLAB script to do the following:
- Generate data and compute estimates. For each value of \(n \in \{1, 2, 3, \ldots, 100\}\):
- Generate 50 independent realizations of \(X_i = [x_1, x_2, \ldots, x_n]\) for \(i = 1, \ldots, 50\). Each \(X_i\) is a vector of \(n\) exponential random variables.
- For each realization, compute the ML estimate \(\hat\theta_{\mathrm{ML}}(X_i)\) using your formula from Question 1.
- Compute the mean and variance of the 50 estimates for this value of \(n\).
- Plot results. Create a figure with two subplots:
- Top panel: mean of \(\hat\theta_{\mathrm{ML}}\) vs. \(n\) (linear scale). Include a horizontal dashed line at \(\theta = 2\).
- Bottom panel: variance of \(\hat\theta_{\mathrm{ML}}\) vs. \(n\) using semilogy (log scale on the \(y\)-axis).
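The steps above can be sketched as follows (one possible structure, assuming the Statistics and Machine Learning Toolbox for exprnd; variable names are illustrative):

```matlab
theta = 2;                  % true parameter (W/m^2)
nReal = 50;                 % realizations per value of n
nVals = 1:100;
estMean = zeros(size(nVals));
estVar  = zeros(size(nVals));

for k = 1:numel(nVals)
    n = nVals(k);
    estimates = zeros(nReal, 1);
    for i = 1:nReal
        X = exprnd(1/theta, n, 1);   % n exponential measurements
        estimates(i) = n / sum(X);   % ML estimate from Question 1
    end
    estMean(k) = mean(estimates);
    estVar(k)  = var(estimates);
end

figure;
subplot(2, 1, 1);
plot(nVals, estMean);
hold on;
yline(theta, '--');          % true value; yline needs R2018b+,
                             % otherwise plot([1 100], [theta theta], '--')
xlabel('n'); ylabel('mean of \theta_{ML}');

subplot(2, 1, 2);
semilogy(nVals, estVar);     % log scale on the y-axis
xlabel('n'); ylabel('variance of \theta_{ML}');
```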
Question 3 — Conclusions
Based on your plots, what can you conclude about the behavior of the ML estimator as the number of measurements \(n\) increases? Address the following:
- Does the mean of the estimates converge to the true value \(\theta = 2\)? What property of the estimator does this reflect?
- How does the variance behave as a function of \(n\)? What does this tell you about the uncertainty in the estimate?
- How many measurements would you need in practice to achieve a reliable estimate of \(\theta\)?