# Estimating a Centered Ornstein-Uhlenbeck Process under Measurement Errors

The problem of estimating the two parameters of a stationary process satisfying the stochastic differential equation dx(t) = -θx(t)dt + √c dw(t), where w follows a standard Wiener process, from observations at n equidistant points of the interval [0,1], has been well studied. This is also the classical problem of fitting an autoregressive time series of order 1 (AR1), the case "n large" yielding the "near unit root" situation. This Demonstration considers the important case where the observations may have additive measurement errors: we assume that these errors are independent, normal random variables with known variance σ².

Recall that θ, assumed positive, is often referred to as the mean reversion speed (here we assume the constant mean of the process is zero). In geostatistics, θ is called the inverse-range parameter. It is well known that the autoregression coefficient ρ in the equivalent AR1 formulation is given by ρ = exp(-θδ), where δ = 1/(n-1).
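
To illustrate this AR1 equivalence, here is a minimal simulation sketch (in Python with NumPy rather than the Demonstration's Wolfram Language; the parameter values are arbitrary):

```python
import numpy as np

# The OU process observed at spacing delta is exactly an AR(1) series:
#   x[k+1] = rho*x[k] + eps[k],  eps[k] ~ N(0, tau2*(1 - rho^2)),
# with rho = exp(-theta*delta) and marginal variance tau2 = c/(2*theta).
theta, c, n = 5.0, 10.0, 50_001
delta = 1.0 / (n - 1)
rho = np.exp(-theta * delta)
tau2 = c / (2.0 * theta)

rng = np.random.default_rng(0)
x = np.empty(n)
x[0] = rng.normal(0.0, np.sqrt(tau2))        # start in the stationary law
innov_sd = np.sqrt(tau2 * (1.0 - rho ** 2))
for k in range(n - 1):
    x[k + 1] = rho * x[k] + innov_sd * rng.normal()

# The empirical lag-1 regression coefficient should be close to rho.
rho_hat = np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])
```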

Here we use the two parameters c (the diffusion coefficient) and τ² = c/(2θ) (recall that τ² is then the marginal variance of the process; see the Details section in the help page for the OrnsteinUhlenbeckProcess function). We restrict ourselves to the case τ = 1 (so that σ is also the noise-to-signal ratio).
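
A sketch of this observation model (Python/NumPy; `noisy_ou_sample` is a hypothetical helper name, not part of the Demonstration):

```python
import numpy as np

# With tau = 1 the diffusion coefficient is c = 2*theta, and each observation
# is y[k] = x[k] + e[k] with e[k] ~ N(0, sigma^2), so sigma/tau = sigma is
# the noise-to-signal (standard deviation) ratio.
def noisy_ou_sample(theta, sigma, n, seed=0):
    """Hypothetical helper: n equidistant noisy observations on [0, 1]."""
    rng = np.random.default_rng(seed)
    delta = 1.0 / (n - 1)
    rho = np.exp(-theta * delta)             # AR(1) coefficient
    x = np.empty(n)
    x[0] = rng.normal()                      # marginal variance tau^2 = 1
    sd = np.sqrt(1.0 - rho ** 2)
    for k in range(n - 1):
        x[k + 1] = rho * x[k] + sd * rng.normal()
    return x + sigma * rng.normal(size=n)    # additive measurement errors

y = noisy_ou_sample(theta=100.0, sigma=0.5, n=10_001)
```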

A simple "solution" to this fitting problem is to neglect the noise, that is, to use the most appealing estimator among those available for the non-noisy case and to substitute the noisy observations, as was studied in [2]. Here as "most appealing" we choose the celebrated maximum likelihood (ML) estimator. Indeed, it is known that this estimator can be exactly and reliably calculated by first solving a simple cubic equation in ρ (see [3] and the references therein), the ML estimate of τ² then being an explicit "Gibbs energy" (a quadratic form whose computation cost is of order n).
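
As an illustration of this noiseless ML fit (Python/NumPy; the cubic-equation route of [3] is not reproduced here — instead the exact stationary Gaussian likelihood is simply profiled over ρ on a grid):

```python
import numpy as np

def gibbs_energy(x, rho):
    """Quadratic form Q(rho); the ML estimate of tau^2 is Q(rho_hat)/n, O(n) cost."""
    resid = x[1:] - rho * x[:-1]
    return x[0] ** 2 + np.dot(resid, resid) / (1.0 - rho ** 2)

def ar1_profile_ml(x, grid_size=2001):
    """Exact stationary AR(1) ML via grid search over rho in (0, 1)."""
    n = len(x)
    rhos = np.linspace(1e-4, 1 - 1e-4, grid_size)
    # Negative profile log-likelihood (constants dropped), minimized over rho;
    # rho > 0 suffices here since the OU formulation gives rho = exp(-theta*delta).
    vals = [n * np.log(gibbs_energy(x, r) / n) + (n - 1) * np.log1p(-r ** 2)
            for r in rhos]
    rho_hat = rhos[int(np.argmin(vals))]
    return rho_hat, gibbs_energy(x, rho_hat) / n

# Check on a simulated stationary AR(1) path with known parameters.
rng = np.random.default_rng(1)
rho0, tau20, n = 0.9, 2.0, 5000
x = np.empty(n)
x[0] = rng.normal(0.0, np.sqrt(tau20))
sd = np.sqrt(tau20 * (1.0 - rho0 ** 2))
for k in range(n - 1):
    x[k + 1] = rho0 * x[k] + sd * rng.normal()
rho_hat, tau2_hat = ar1_profile_ml(x)
```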

On the other hand, as soon as σ > 0, the exact maximization of the correctly specified likelihood criterion (the one that takes into account the noise) is not so easy.
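
Evaluating that correctly specified likelihood is nonetheless tractable, for instance with a Kalman filter over the equivalent state-space form (AR(1) signal plus white noise). A hedged sketch of that evaluation — not the route taken by this Demonstration — follows:

```python
import numpy as np

# With sigma > 0 the observations follow a linear Gaussian state-space model,
# so the exact log-likelihood can be computed by a scalar Kalman filter and
# then maximized numerically (no closed-form/cubic solution as in the
# noiseless case).
def noisy_ar1_loglik(y, rho, tau2, sigma2):
    """Exact Gaussian log-likelihood of y = AR(1) signal + white noise."""
    m, p = 0.0, tau2                     # stationary prior on the state
    ll = 0.0
    for yk in y:
        s = p + sigma2                   # innovation variance
        v = yk - m                       # innovation
        ll -= 0.5 * (np.log(2.0 * np.pi * s) + v * v / s)
        k = p / s                        # Kalman gain
        m_f, p_f = m + k * v, (1.0 - k) * p                    # filter
        m, p = rho * m_f, rho ** 2 * p_f + tau2 * (1.0 - rho ** 2)  # predict
    return ll
```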

This Demonstration considers the recently proposed "CGEM-EV" approach [1]. In short, firstly τ² is simply estimated by the bias-corrected empirical variance, say τ²_EV; secondly, an estimating equation is invoked to estimate c. Precisely, c is searched for so that the conditional mean of the "candidate Gibbs energy" (where we substitute τ²_EV in place of the true τ², so that this conditional mean is a function of c only) is equal to τ²_EV. It is easy to show that these two equations are unbiased, that is, they are true on average when c and τ² are set to their true values (the averaging is ensemble-averaging, i.e. from infinitely repeated simulations of the process and of the noise under the true model). Stronger properties are studied in [1].
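
The first step above can be sketched in one line (Python; `tau2_ev` is an illustrative name):

```python
import numpy as np

# For the centered noisy observations, E[y^2] = tau^2 + sigma^2, so the
# bias-corrected empirical variance (sigma^2 is known) is an unbiased
# estimator of the marginal variance tau^2.
def tau2_ev(y, sigma2):
    return float(np.mean(y ** 2) - sigma2)
```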

Implementation of CGEM-EV is much simpler than exact ML, since it reduces to one-dimensional numerical root finding. A simple fixed-point algorithm is used here. It proves to be reliable (with fast convergence) for all the settings in this Demonstration.
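
A generic driver for such a fixed-point iteration might look as follows (the map F in the example is a toy stand-in for the CGEM-EV update of [1], which is not reproduced here):

```python
import math

# Solve a scalar fixed-point equation c = F(c) by simple iteration.
def fixed_point(F, c0, tol=1e-10, max_iter=200):
    c = c0
    for _ in range(max_iter):
        c_new = F(c)
        if abs(c_new - c) < tol:
            return c_new
        c = c_new
    raise RuntimeError("fixed-point iteration did not converge")

# Toy example: c = sqrt(c + 2) has the root c = 2 (since c^2 - c - 2 = 0),
# and |F'(2)| = 0.25 < 1 guarantees fast linear convergence.
root = fixed_point(lambda c: math.sqrt(c + 2.0), c0=1.0)
```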