
The Wiener-Khinchin Theorem

So far, we have only asserted that the sum of waves with random phases generates a time-stationary Gaussian signal. We now have to check this. It is convenient to start with a signal defined from $0$ to $T$, and only later take the limit $T\rightarrow \infty$. The usual theory of Fourier series tells us that we can write


\begin{displaymath}E(t)\equiv \sum_n\left(a_n\cos \omega_nt + b_n\sin\omega_nt\right)
\equiv \sum_n r_n \cos(\omega_nt+\varphi_n)\end{displaymath}

where

\begin{displaymath}\omega_n=\frac{2\pi n}{T},\qquad r_n=\sqrt{a_n^2+b_n^2},\qquad
\tan\varphi_n=-b_n/a_n.\end{displaymath}
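As a concrete illustration, here is a minimal numerical sketch of this construction (the duration, number of modes, and amplitudes below are arbitrary illustrative choices, not values from the text). By the central limit theorem, the samples of $E(t)$ should be approximately Gaussian, with variance $\sum_n r_n^2/2$, as derived below.

\begin{verbatim}
import numpy as np

# Minimal sketch: build E(t) as a sum of cosines with random phases.
# T, the mode count, and r_n are illustrative choices, not from the text.
rng = np.random.default_rng(0)

T = 100.0                        # signal duration
t = np.linspace(0.0, T, 10000, endpoint=False)
n = np.arange(1, 501)            # mode numbers
omega_n = 2 * np.pi * n / T      # harmonics of the fundamental 2*pi/T
r_n = np.full(n.size, 1.0 / np.sqrt(n.size))  # keeps the variance finite
phi_n = rng.uniform(0.0, 2 * np.pi, n.size)   # independent random phases

# E(t) = sum_n r_n cos(omega_n t + phi_n)
E = (r_n[:, None] * np.cos(omega_n[:, None] * t + phi_n[:, None])).sum(axis=0)

print("mean ~ 0:", E.mean())
print("variance ~ sum r_n^2 / 2 = 0.5:", E.var())
\end{verbatim}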

Notice that the frequencies come in multiples of the ``fundamental'' $2\pi/T$, which is very small since $T$ is large; hence they form a closely spaced set. We can now compute the autocorrelation


\begin{displaymath}C(\tau)=\langle E(t)E(t+\tau)\rangle = \left\langle \sum_n r_n
\cos(\omega_nt+\varphi_n) \sum_m r_m
\cos(\omega_m(t+\tau)+\varphi_m)\right\rangle \end{displaymath}

The averaging on the right hand side has to be carried out by letting each of the phases $\varphi_k$ vary independently from $0$ to $2\pi$. When we do this, only terms with $m=n$ can survive, and we get


\begin{displaymath}C(\tau)=\sum_n\frac{1}{2}r_n^2\cos \omega_n\tau.\end{displaymath}
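The step which eliminates the cross terms is the elementary phase average

\begin{displaymath}\langle\cos(\omega_nt+\varphi_n)\cos(\omega_m(t+\tau)+\varphi_m)\rangle
=\frac{1}{2}\cos\omega_n\tau~\delta_{mn},\end{displaymath}

since for $m\neq n$ the two phases average to zero independently, while for $m=n$ the product splits into $\frac{1}{2}\cos\omega_n\tau$ plus a term oscillating in $2\varphi_n$, which averages to zero.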

Putting $\tau$ equal to zero, we get the variance


\begin{displaymath}C(0)=\langle E(t)^2 \rangle = \sum_n\frac{1}{2}r_n^2\end{displaymath}

We note that the autocorrelation is independent of $t$ and hence we have checked time stationarity, at least for this statistical property. We now have to face the limit $T\rightarrow \infty$. The number of frequencies in a given range $\Delta \omega$ blows up as


\begin{displaymath}\frac{\Delta\omega}{(2\pi/T)}=\frac{T\Delta\omega}{2\pi}.\end{displaymath}

Clearly, the $r^2_n$ have to scale inversely with $T$ if statistical quantities like $C(\tau)$ are to have a well defined $T\rightarrow \infty$ behaviour. Further, since the number of $r_n$'s even in a small interval $\Delta \omega$ blows up, what is important is their combined effect rather than the behaviour of any individual one. All this motivates the definition


\begin{displaymath}\sum_{\omega <
\omega_n < \omega+\Delta\omega}{\frac{r^2_n}{2}}=2S(\omega)\Delta\omega\end{displaymath}

as $T\rightarrow\infty.$ Physically, $2S(\omega)\Delta\omega$ is the contribution to the variance $\langle E^2(t)\rangle$ from the interval $\omega$ to $\omega+\Delta\omega$. Hence the term ``power spectrum'' for $S(\omega)$. Our basic result for the autocorrelation now reads


\begin{displaymath}C(\tau)=\int^\infty_0 2S(\omega)\cos\omega\tau d\omega =
\int^{+\infty}_{-\infty}S(\omega)e^{-i\omega\tau}d\omega\end{displaymath}

if we define $S(-\omega)=S(\omega)$.

This is the ``Wiener-Khinchin theorem'', stating that the autocorrelation function is the Fourier transform of the power spectrum. It can also be written with the frequency measured in cycles (rather than radians) per second and denoted by $\nu$:


\begin{displaymath}C(\tau)=\int^\infty_0 2P(\nu)\cos (2\pi\nu\tau) d\nu =
\int^{+\infty}_{-\infty}P(\nu)e^{-2\pi i\nu\tau}d\nu\end{displaymath}

and as before, $P(-\nu)=P(\nu)$.

In this particular case of the autocorrelation, we did not use the independence of the $\varphi$'s. Thus the theorem is valid even for a non-Gaussian random process (for which the different $\varphi$'s are not independent). Notice also that we could have averaged over $t$ instead of over all the $\varphi$'s and we would have obtained the same result, viz. that contributions are nonzero only when we multiply a given frequency with itself. One could even argue that the operation of integrating over the $\varphi$'s is summing over a fictitious collection (i.e., an ``ensemble'') of signals, while integrating over $t$ and dividing by $T$ is closer to what we do in practice. The idea that the ensemble average can be realised by the more practical time average is called ``ergodicity'' and, like everything else here, needs a better proof than we have given it. A rigorous treatment would in fact start by worrying about the existence of a well-defined $T\rightarrow \infty$ limit for all statistical quantities, not just the autocorrelation. This is called ``proving the existence of the random process''.

The autocorrelation $C(\tau)$ and the power spectrum $S(\omega)$ could in principle be measured in two different kinds of experiments. In the time domain, one could record samples of the voltage and calculate averages of lagged products to get $C$. In the frequency domain one would pass the signal through a filter admitting a narrow band of frequencies around $\omega$, and measure the average power that gets through.
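The agreement between the two measurements can be checked numerically. The sketch below (the record length, number of lags, and tolerance are arbitrary choices, and white Gaussian noise stands in for the signal) estimates the autocorrelation once from averaged lagged products and once as the inverse Fourier transform of the estimated power spectrum; by the Wiener-Khinchin theorem the two estimates should agree.

\begin{verbatim}
import numpy as np

# Sketch: estimate C(tau) of a noise record in two ways and compare.
rng = np.random.default_rng(1)
N = 1 << 16
x = rng.standard_normal(N)      # white Gaussian noise as a stand-in signal

# Time domain: averages of lagged products, C(tau) = <x(t) x(t+tau)>.
max_lag = 8
C_time = np.array([np.mean(x[:N - lag] * x[lag:]) for lag in range(max_lag)])

# Frequency domain: power spectrum first, then inverse transform.
S = np.abs(np.fft.fft(x))**2 / N
C_freq = np.fft.ifft(S).real[:max_lag]

# The small discrepancy is the circular-vs-linear lag estimate difference.
print(np.allclose(C_time, C_freq, atol=1e-2))
\end{verbatim}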

A simple but instructive application of the Wiener-Khinchin theorem is to a power spectrum which is constant (a ``flat band''), equal to $K$ say, between $\nu_0 -B/2$ and $\nu_0+B/2$. A simple calculation shows that

\begin{displaymath}C(\tau)~=~2KB \left(
\cos(2\pi \nu_0 \tau)\right)\left(\frac{\sin(\pi B \tau)}{\pi B
\tau}\right)\end{displaymath}
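For readers who want to check this, the ``simple calculation'' is the integral over the flat band,

\begin{displaymath}C(\tau)=\int_{\nu_0-B/2}^{\nu_0+B/2} 2K\cos(2\pi\nu\tau)\,d\nu
=\frac{K}{\pi\tau}\Big[\sin(2\pi\nu\tau)\Big]_{\nu_0-B/2}^{\nu_0+B/2}
=2KB\cos(2\pi \nu_0 \tau)\,\frac{\sin(\pi B \tau)}{\pi B \tau},\end{displaymath}

using $\sin X-\sin Y=2\cos\frac{X+Y}{2}\sin\frac{X-Y}{2}$.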

The first factor, $2KB$, is the value at $\tau = 0$: the total power to a radio astronomer, the variance to a statistician. The second factor is an oscillation at the centre frequency. This is easily understood: if the bandwidth $B$ is very small compared to $\nu_0$, the third factor stays close to unity for values of $\tau$ extending over, say, $1/4B$, which is still many cycles of the centre frequency. This approaches the limiting case of a single sinusoidal wave, whose autocorrelation is sinusoidal. The third, sinc-function, factor describes ``bandwidth decorrelation''$^{1.1}$, which occurs when $\tau$ becomes comparable to or larger than $1/B$.

Another important case, in some ways opposite to the preceding one, occurs when $\nu_0=B/2$, so that the band extends from $0$ to $B$. This is a so-called ``baseband''. In this case, the autocorrelation is proportional to a sinc function of $2\pi B \tau$, so the correlation between a pair of voltages measured at an interval of $1/2B$, or any multiple (except zero!) thereof, is zero, a special property of our flat band. We see very clearly that a set of samples measured at this interval of $1/2B$, the so-called ``Nyquist sampling interval'', would actually be statistically independent, since correlations between any pair vanish (this will be clearer after going through Section 1.8). Clearly, this is the minimum number of measurements which would have to be made to reproduce the signal, since if we missed one of them the others would give us no clue about it. As we will now see, it is also the maximum number for this bandwidth!
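A quick numerical check of this decorrelation (the sample rate, bandwidth, and record length below are arbitrary choices): generate noise with a flat band from $0$ to $B$ and compare the correlation of sample pairs spaced by $1/2B$ with pairs at half that spacing, where the sinc factor predicts a correlation of $2/\pi$.

\begin{verbatim}
import numpy as np

# Sketch: baseband noise, flat from 0 to B, sampled much faster than Nyquist.
rng = np.random.default_rng(2)
N, fs, B = 1 << 18, 64.0, 1.0        # samples, sample rate, bandwidth

# Build the signal in the frequency domain: unit amplitude and random
# phase for frequencies up to B, zero outside the band.
freqs = np.fft.rfftfreq(N, d=1.0 / fs)
spectrum = np.where(freqs <= B,
                    np.exp(2j * np.pi * rng.random(freqs.size)), 0.0)
spectrum[0] = 0.0                    # drop the DC term
x = np.fft.irfft(spectrum)
x /= x.std()                         # normalise to unit variance

def corr(lag):                       # correlation at an integer lag
    return np.mean(x[:N - lag] * x[lag:])

nyq = int(fs / (2 * B))              # lag of 1/2B seconds, in samples
print("lag 1/2B:", corr(nyq))        # ~ 0: Nyquist-spaced samples uncorrelated
print("lag 1/4B:", corr(nyq // 2))   # ~ 2/pi = 0.64: still correlated
\end{verbatim}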



Footnotes

$^{1.1}$ also called ``fringe washing'' in Chapter 4.
