next up previous contents
Next: Further Reading Up: Correlator I. Basics Previous: Digital Delay   Contents

Discrete Correlation and the Power Spectral Density

The cross correlation of two signals $s_1(t)$ and $s_2(t)$ is given by

\begin{displaymath}
R_c(\tau) = \left< s_1(t)s_2(t+\tau)\right>
\end{displaymath} (8.5.7)

where $\tau$ is the time delay between the two signals. In the above equation the angle brackets indicate averaging in time. To measure $R_c(\tau)$ in practice, an estimator is defined as
\begin{displaymath}
R(m) = \frac{1}{N}\sum_{n=0}^{N-1}s_1(n)s_2(n+m)\;\;\; 0\le m \le M
\end{displaymath} (8.5.8)

where $m$ denotes the number of samples by which $s_2(n)$ is delayed and $M$ is the maximum delay ($M \ll N$). By definition $R(m)$ is a random variable. The expectation value of $R(m)$ converges to $R_c(\tau=\frac{m}{f_s})$ as $N \rightarrow \infty$. The autocorrelation of the time series $s_1(n)$ is obtained from an equation similar to Eq. 8.5.8, with $s_2(n+m)$ replaced by $s_1(n+m)$.
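The estimator of Eq. 8.5.8 can be sketched as follows. This is a minimal illustration, not a correlator implementation; the function name and the choice to use only the overlapping $N-m$ samples at each lag (equivalent to treating $s_2$ as zero past its end) are our own.

```python
import numpy as np

def cross_correlation(s1, s2, M):
    """Estimate R(m) = (1/N) * sum_{n=0}^{N-1} s1[n] * s2[n+m], 0 <= m <= M.

    Illustrative sketch of Eq. 8.5.8. For the autocorrelation, pass the
    same array as both s1 and s2.
    """
    N = len(s1)
    R = np.zeros(M + 1)
    for m in range(M + 1):
        # s2 is shifted by m samples; only the N - m overlapping terms
        # contribute, but the normalization stays 1/N as in Eq. 8.5.8.
        R[m] = np.dot(s1[:N - m], s2[m:N]) / N
    return R
```

As the text notes, each $R(m)$ is a random variable whose expectation approaches $R_c(m/f_s)$ only as $N \rightarrow \infty$; for finite $N$ the estimate carries statistical noise.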

The correlation function estimated from the quantized samples in general deviates from that measured with infinite amplitude precision. The deviation depends on the true correlation value of the signals. The relationship between the two measurements can be expressed as

\begin{displaymath}
\hat R_c(m/f_s) = F(\hat R(m))
\end{displaymath} (8.5.9)

where $\hat R_c(m/f_s)$ and $\hat R(m)$ are the normalized correlation functions (normalized by the zero-lag correlation in the case of autocorrelation, and by the square root of the product of the zero-lag autocorrelations of the signals $s_1(t)$ and $s_2(t)$ in the case of cross correlation) and $F$ is a correction function. It can be shown that the correction function is monotonic (Van Vleck & Middleton 1966, Cooper 1970, Hagen & Farley 1973, Kogan 1998). For example, the functional dependence for one-bit quantization (the `Van Vleck Correction') is
\begin{displaymath}
\hat R_c(m/f_s) = \sin\left(\frac{\pi}{2}\hat R(m)\right)
\end{displaymath} (8.5.10)

Note that the correction function is non-linear, and hence this correction should be applied before any further operation on the correlation function. If a large number of bits is used for quantization, the correction function is approximately linear over a large range of correlation values.
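The Van Vleck correction of Eq. 8.5.10 can be checked numerically. The sketch below (our own illustration; the signal model, sample size, and seed are arbitrary choices) one-bit quantizes two Gaussian signals with a known correlation coefficient, measures the correlation of the sign streams, and recovers the true value via Eq. 8.5.10.

```python
import numpy as np

rng = np.random.default_rng(42)

# Two zero-mean Gaussian signals with known correlation coefficient rho.
rho = 0.5
N = 200_000
x = rng.standard_normal(N)
y = rho * x + np.sqrt(1.0 - rho**2) * rng.standard_normal(N)

# One-bit quantization retains only the sign of each sample.
x1 = np.sign(x)
y1 = np.sign(y)

# Normalized zero-lag correlation of the one-bit streams; since each
# stream has unit zero-lag autocorrelation, the normalization is just 1/N.
r_hat = np.mean(x1 * y1)

# Van Vleck correction (Eq. 8.5.10): recovers the analog correlation.
r_corrected = np.sin(np.pi / 2.0 * r_hat)
```

The raw one-bit correlation `r_hat` underestimates the true $\rho$ (it measures $\frac{2}{\pi}\arcsin\rho$), while `r_corrected` lands close to 0.5, illustrating why the non-linear correction must be applied before further processing.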

The power spectral density (PSD) of a stationary stochastic process is defined to be the FT of its auto-correlation function (the Wiener-Khinchin theorem). That is, if $R_c(\tau) = \left< s(t)s(t-\tau)\right>$, then the PSD $S_c(f)$ is

\begin{displaymath}
S_c(f)= \int_{-\infty}^{\infty} R_c(\tau)e^{-j2\pi f\tau}\,d\tau
\end{displaymath} (8.5.11)

From the properties of Fourier transforms we have
\begin{displaymath}
R_c(0) = \left< s(t)s(t)\right> = \int_{-\infty}^{\infty} S_c(f)\,df
\end{displaymath} (8.5.12)

i.e. the function $S_c(f)$ is a decomposition of the variance (i.e. `power') of $s(t)$ into different frequency components.

For sampled signals, the PSD is estimated from the Fourier transform of the discrete auto-correlation function. If the signal is also quantized before correlation, then the Van Vleck correction has to be applied prior to taking the DFT. Exactly as before, this estimate of the PSD is related to the true PSD via convolution with the window function.

One could also imagine determining the PSD of a signal $s(t)$ in the following way: take the DFT of the sampled signal $s(n)$ over several stretches of length $N$, average the squared magnitudes of these DFTs together, and use this as an estimate of the PSD. It can be shown that this procedure is exactly equivalent to taking the DFT of the discrete auto-correlation function.
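This equivalence can be demonstrated directly. The sketch below (an illustration with arbitrary segment length and count; it uses circular autocorrelation, which is the form under which the two routes agree exactly) compares the averaged periodogram against the DFT of the averaged autocorrelation.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 64, 50                          # segment length, number of segments
segments = rng.standard_normal((K, N))

# Route 1: average the periodograms |DFT|^2 / N of the K segments.
psd_periodogram = np.mean(np.abs(np.fft.fft(segments, axis=1))**2, axis=0) / N

# Route 2: average the circular autocorrelations, then take a single DFT.
def circular_autocorr(s):
    n = len(s)
    # c[m] = (1/n) * sum_k s[k] * s[(k + m) mod n]
    return np.array([np.dot(s, np.roll(s, -m)) for m in range(n)]) / n

mean_autocorr = np.mean([circular_autocorr(s) for s in segments], axis=0)
psd_correlator = np.fft.fft(mean_autocorr)
```

The two estimates agree to machine precision, which is the sense in which an "FX" approach (DFT first, then average powers) and an "XF" approach (correlate first, then DFT) yield the same PSD estimate.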

The cross power spectrum of two signals is defined as the FT of their cross correlation function, and its estimator is defined in a manner similar to that of the auto-correlation case.
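A minimal sketch of the cross power spectrum estimator follows (the signal model, with one stream being a delayed noisy copy of the other, and all parameter values are our own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1024
common = rng.standard_normal(N)
s1 = common + 0.5 * rng.standard_normal(N)
s2 = np.roll(common, 3) + 0.5 * rng.standard_normal(N)  # delayed copy + noise

# Circular cross-correlation estimate R12[m] = (1/N) sum_n s1[n] s2[(n+m) mod N];
# it peaks at the lag equal to the relative delay (3 samples here).
R12 = np.array([np.dot(s1, np.roll(s2, -m)) for m in range(N)]) / N

# Cross power spectrum: the DFT of the cross-correlation. Unlike the PSD
# it is complex; a pure delay shows up as a linear phase slope with frequency.
S12 = np.fft.fft(R12)
```

The complex phase of the cross power spectrum is what an interferometer ultimately exploits: a residual delay between the two signals appears as a phase gradient across the band.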
