Nonparametric Curve Estimation by Kernel Smoothers: Efficiency of Unbiased Risk Estimate and GCV Selectors
This Demonstration considers one of the simplest nonparametric-regression problems: let f be a smooth real-valued function over the interval [0,1]; recover (or estimate) f when one only knows n approximate values y_i, i = 1, 2, ..., n, that satisfy the model y_i = f(x_i) + σ ε_i, where x_i = i/n and the ε_i are independent, standard normal random variables. Sometimes one assumes that the noise level σ is known. Such a curve-estimation problem is also called a signal-denoising problem.
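The observation model above can be sketched in a few lines of Python (a hypothetical illustration only; the function names and the example curve are assumptions, not part of the Demonstration):

```python
import numpy as np

def simulate_data(f, n=256, sigma=0.3, seed=0):
    """Draw y_i = f(x_i) + sigma * eps_i on the grid x_i = i/n (hypothetical sketch)."""
    rng = np.random.default_rng(seed)
    x = np.arange(1, n + 1) / n      # design points x_i = i/n
    eps = rng.standard_normal(n)     # i.i.d. standard normal noise
    return x, f(x) + sigma * eps

# Example "true curve" (an assumption, chosen for illustration only)
f_true = lambda x: np.sin(2 * np.pi * x)
x, y = simulate_data(f_true, n=256, sigma=0.3)
```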
Perhaps the simplest "solution" to this problem is the classical and widely used kernel-smoothing method, which is a particular case of the Loess method (locally weighted scatterplot smoothing); see the Details below.
In the kernel-smoothing method (as in any similar nonparametric method, e.g. the smoothing-spline method), one or several smoothing parameters have to be appropriately chosen to obtain a "good" estimate of the true curve f(·). For the kernel method, it is known that choosing a good value for the famous bandwidth parameter (see Details) is crucial, much more so than the choice of the class of kernels, which is fixed here.
The curve estimate corresponding to a given value h of the bandwidth parameter is then denoted f_{n,h}. Notice that f_{n,h} depends on f(·), σ, and the ε_i's only through the data y_i.
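As a concrete, simplified example of such a linear smoother, the following sketch implements a Nadaraya–Watson kernel estimate with a Gaussian kernel. This is an assumption for illustration: the Demonstration's actual kernel class and its periodic end conditions are not reproduced here.

```python
import numpy as np

def kernel_smooth(x, y, h):
    """Nadaraya-Watson kernel estimate f_{n,h} at the design points.

    A minimal sketch with a Gaussian kernel; it also returns the smoother
    ("hat") matrix W, so that the fitted values are simply W @ y.
    """
    U = (x[:, None] - x[None, :]) / h   # pairwise scaled distances (x_i - x_j)/h
    W = np.exp(-0.5 * U**2)             # Gaussian kernel weights
    W /= W.sum(axis=1, keepdims=True)   # normalize each row to sum to 1
    return W @ y, W

# A small bandwidth tracks the data closely; a large one oversmooths.
x = np.arange(1, 101) / 100
y = np.sin(2 * np.pi * x) + 0.1 * np.random.default_rng(1).standard_normal(100)
f_hat, W = kernel_smooth(x, y, h=0.05)
```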
Three very popular methods are available for choosing h: cross-validation (also called the "leave-one-out" principle; see PRESS statistic), generalized cross-validation (GCV), and Mallows' C_L (also sometimes denoted by C_p, or UBR for unbiased risk estimate). In fact, cross-validation and GCV coincide in our context, where periodic end conditions are assumed. See [1] for a review, and see [2] for the definition and an analysis of C_L. Notice that GCV (in contrast to the C_L method) does not require that σ be known.
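For any linear smoother with hat matrix W (fitted values ŷ = W y), the GCV and Mallows' C_L criteria can be written down directly. The sketch below uses their standard textbook forms, with a deliberately crude global-averaging smoother as a stand-in; all names here are assumptions, not the Demonstration's code.

```python
import numpy as np

def gcv_score(y, y_hat, W):
    """GCV = (RSS/n) / (1 - tr(W)/n)^2; does not require sigma."""
    n = len(y)
    return np.mean((y - y_hat) ** 2) / (1.0 - np.trace(W) / n) ** 2

def cl_score(y, y_hat, W, sigma):
    """Mallows' C_L (unbiased risk estimate): RSS/n + 2*sigma^2*tr(W)/n - sigma^2."""
    n = len(y)
    return np.mean((y - y_hat) ** 2) + 2.0 * sigma**2 * np.trace(W) / n - sigma**2

# Stand-in smoother: the global average of the data (so tr(W) = 1).
y = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
W = np.full((5, 5), 1.0 / 5)
y_hat = W @ y
print(gcv_score(y, y_hat, W))      # ≈ 3.125
print(cl_score(y, y_hat, W, 1.0))  # ≈ 1.4
```

In practice one evaluates the chosen criterion on a grid of bandwidths and keeps the minimizer.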
This Demonstration provides interactive assessments of the statistical efficiency of the GCV and C_L smoothing-parameter selectors; and this can be done for rather large n (here, n can be taken as large as 2^13 = 8192 with reasonably fast interactivity on a current personal computer).
Here six examples for f(·) (the "true curve") can be tried; f is plotted (in blue) in the third of the three possible views accessible via the tabs. 1. The data and the curve estimate f_{n,h}, where you choose h by the trial-bandwidth slider, show that the choice of the bandwidth is crucial. 2. The two curve estimates given by GCV and C_L are very often so similar that they cannot be distinguished. 3. The third tab displays quantities related to the "truth": the ASE-optimal choice yields the green curve; this is the h-value that minimizes a global discrepancy between f_{n,h} and f, defined by
ASE(h) = (1/n) Σ_{i=1}^{n} (f(x_i) − f_{n,h}(x_i))^2,
where ASE stands for the average of the squared errors. (Note that this ASE-optimal choice is a target that seems unattainable in practice, since we do not know the true curve f.)
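The ASE criterion above is straightforward to compute when the true curve is known, as in a simulation; a minimal sketch (the helper name is an assumption):

```python
import numpy as np

def ase(true_vals, fitted_vals):
    """ASE = (1/n) * sum_i (f(x_i) - f_{n,h}(x_i))^2 over the design grid."""
    true_vals = np.asarray(true_vals, dtype=float)
    fitted_vals = np.asarray(fitted_vals, dtype=float)
    return np.mean((true_vals - fitted_vals) ** 2)

# A fit that is off by 0.1 at every point has ASE close to 0.01.
print(ase([1.0, 2.0, 3.0], [1.1, 2.1, 3.1]))  # ≈ 0.01
```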
In the third view, the curve estimate associated with the automatic C_L choice is again plotted (purple curve): we can then assess the typical efficiency of C_L (or the quite close GCV choice) by comparing this curve estimate with the targeted ASE-optimal curve.
This third view also displays the two associated ASE values, in addition to the one associated with the bandwidth chosen by eye. This shows that it is difficult to choose by eye a trial bandwidth in the first view (thus without the help of the true-curve plot) that turns out to be at least as good as the GCV or C_L choice, where "better" still means a lower ASE value.
Staying in the third view and only increasing the size n of the data set, one typically observes that the C_L curve estimate and the ASE-optimal curve generally become very close, and the two associated ASE distances often become relatively similar. This agrees with the known asymptotic theory; see [3] and the references therein.
The rate at which the relative difference between the C_L (or GCV) bandwidth and the optimal bandwidth converges to zero can be assessed in the additional panel at the top right of the third view, which shows the two kernels associated with the two bandwidths (precisely, only the right-hand half of each kernel, since both are even functions): the difference between the two bandwidths can be scrutinized with much better precision than by looking only at the two associated curve estimates.
By varying the seed that generates the data, you can also observe that in certain cases there remain non-negligible differences between the C_L (or GCV) choice and the optimal bandwidth.