
Poisson distribution

If the occurrence of an event is random and rare, then the number of such events is expected to follow a Poisson distribution. The Poisson distribution describes the number of discrete events in a sequence or interval. A nice property of Poisson processes is that the variance of the data equals the expected value, $\mbox{var}(x)=E(x)$ (often seen in counts of storm and hurricane events). If the events are of a Poisson type, the inter-event times $\tau_i - \tau_{i-1}$ are ``exponentially'' distributed, and the transformed counts $\sqrt{x}$ are closer to Gaussian than the counts themselves. The Poisson process assumes that the probability of an event in any short interval is small, so that multiple events do not occur at the same time. The expected number of occurrences depends only on the length of the interval over which they are counted: it does not depend on time or on the previous history.
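As a quick illustration of the variance-equals-mean property, the following Octave/MATLAB sketch (not one of the course scripts; the event rate and record length are arbitrary assumptions) simulates a Poisson process by drawing exponential waiting times and then compares the mean and variance of the counts per unit interval.

% Minimal sketch (assumed parameters): simulate a Poisson process from
% exponential waiting times and check that var(counts) is close to mean(counts).
rate = 0.5;                                 % assumed event rate per unit time
T    = 1e5;                                 % assumed length of the record
dt   = -log(rand(ceil(3*rate*T),1))/rate;   % exponential inter-event times
t    = cumsum(dt);                          % event times
t    = t(t <= T);                           % keep events inside the record
counts = histc(t, 0:T);                     % number of events in each unit interval
counts = counts(1:end-1);                   % drop the edge bin at t = T
fprintf('mean = %.3f  variance = %.3f\n', mean(counts), var(counts))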


  
Figure: An example of a best-fit Poisson distribution to the events whereby the Bergen September temperature is greater than 12.5$^\circ $. It appears that the Bergen temperature is not a Poisson process. [stats_uib_3_5.m]
\begin{figure}\centerline{
\epsfxsize=5in
\epsfysize=5in
\epsffile{figs/stats_uib_3-5.eps}
}
\end{figure}


  
Figure: An example of a best-fit Poisson distribution to the events whereby the daily Oslo rainfall exceeds 30mm. These rainfall events may be randomly distributed in time (Poisson): the fit is not too bad but not excellent either. [stats_uib_3_6.m]
\begin{figure}\centerline{
\epsfxsize=5in
\epsfysize=5in
\epsffile{figs/stats_uib_3-6.eps}
}
\end{figure}


 \begin{displaymath}P_r(X=x)= \frac{\mu^x e^{-\mu}}{x!}.
\end{displaymath} (3.8)

The mathematical description of the Poisson distribution is given in equation 3.8. It is often most convenient to calculate the logarithms of the various factors first and then take the exponential of their sum, since the factorial produces numbers too large for the computer to represent (numerical overflow). In practical terms: $P_r(X=x)= \exp \{ x \ln (\mu) - \mu - \ln(x!) \}$. The calculation of $\ln(x!)$ is described in the Numerical Recipes [] (p.179), and standard routines for it are available in most analytical packages.
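For example, the following Octave/MATLAB fragment (a sketch with an assumed value of $\mu$; the built-in gammaln(x+1) supplies $\ln(x!)$) evaluates equation 3.8 on the logarithmic scale:

% Sketch: evaluate equation 3.8 via logarithms to avoid overflow in x!.
mu = 6.4;                                 % assumed intensity parameter
x  = 0:20;                                % counts at which to evaluate the pmf
logP = x.*log(mu) - mu - gammaln(x+1);    % gammaln(x+1) = ln(x!)
P    = exp(logP);                         % Pr(X = x)
disp([x' P'])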

The intensity parameter $\mu$, which is the average occurrence rate, can be estimated by the method of moments: $\mu = N_{\mbox{\scriptsize occurrence}}/\mbox{(interval)}$. In equation 3.8, $P_r(X=x)$ is the probability of $x$ events occurring, but it can also be interpreted as the expected relative frequency of seeing $x$ events in a given interval.
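A minimal sketch of the method-of-moments fit, assuming a hypothetical vector of event counts per interval (these are made-up numbers, not the data behind the figures):

% Sketch with made-up counts: estimate mu and compute expected frequencies.
counts = [0 1 0 2 1 0 0 3 1 0 0 1 2 0 1];    % hypothetical events per interval
mu     = sum(counts)/numel(counts);          % method-of-moments estimate of mu
x      = 0:max(counts);
P      = exp(x.*log(mu) - mu - gammaln(x+1));% Pr(X = x) from equation 3.8
Efreq  = numel(counts)*P;                    % expected number of intervals with x events
disp([x' P' Efreq'])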

Figure 3.5 shows the September temperature in Bergen (a) and the distribution function for the time intervals between each occasion on which the September temperature exceeds 13.5$^\circ $. On average there are 6.4 years between these warm events. If the warm events were randomly distributed in time, then the distribution of the intervals between them ought to follow a Poisson distribution curve. It is evident from the figure that the empirical distribution function is far from the Poisson curve, which indicates that the warm September events in Bergen are not completely random. The main weakness of this analysis is that the time series is short, so the interval distribution may not be well defined. Figure 3.6 shows a similar analysis for the daily precipitation in Oslo, Norway, for the period 01-Jan-1883 to 31-Jul-1964, and is based on a much larger sample. Now the event in question is daily rainfall greater than 30mm. The mean interval between these downpours is 426 days (a bit more than a year). In this case the observed distribution does resemble the Poisson curve, which suggests that the heavy-precipitation events are not time-dependent from year to year.
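The interval analysis underlying figures 3.5 and 3.6 can be sketched as follows. This is not the course script stats_uib_3_6.m: the rainfall series rr below is synthetic, and only the 30mm threshold is taken from the Oslo example.

% Sketch: find exceedance days, form intervals between successive events,
% and tabulate their empirical distribution.
thresh = 30;                          % threshold in mm
rr     = 100*rand(30000,1).^8;        % hypothetical daily rainfall record
idays  = find(rr > thresh);           % days on which rainfall exceeds thresh
gaps   = diff(idays);                 % intervals (in days) between events
fprintf('mean interval = %.1f days\n', mean(gaps))
[n, centres] = hist(gaps, 20);        % empirical interval distribution
disp([centres' n'])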

