Homogeneous Poisson Process
The Poisson process is one of the most important and widely used stochastic processes in probability theory. It is commonly used to model random points in time or space. In this article we briefly discuss the homogeneous Poisson process.
Poisson Process -
Here we derive the Poisson process as a counting process. Suppose we are observing the number of occurrences of a certain event over a specified period of time. (Time is taken here as an example; we might also consider space, etc.) We can model the occurrences as a Poisson process provided they satisfy the conditions below.
- The numbers of occurrences in disjoint time intervals are independent.
- The probability of a single occurrence during a small time interval is proportional to the length of the interval.
- The probability of more than one occurrence during a small time interval can be neglected.
If we denote the number of occurrences during a time interval of length t by X(t) and the rate of occurrence by \lambda , then
P(X(t)=n) = \frac{e^{-\lambda t}(\lambda t)^n}{n!}
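This formula can be evaluated directly. Below is a minimal sketch (assuming Python with SciPy is available, and using illustrative values \lambda = 2 per unit time and t = 3) that compares the formula above with the Poisson pmf from scipy.stats:

```python
import math
from scipy.stats import poisson

lam, t = 2.0, 3.0            # illustrative rate and interval length (assumed values)
mu = lam * t                 # Poisson parameter for an interval of length t

for n in range(6):
    direct = math.exp(-mu) * mu**n / math.factorial(n)   # e^{-lambda t}(lambda t)^n / n!
    library = poisson.pmf(n, mu)                          # same probability via SciPy
    print(f"P(X(t)={n}) = {direct:.6f}   (scipy: {library:.6f})")
```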
Examples -
Many real-life situations can be modelled using a Poisson process. Consider, for example, the number of accidents on a road. It is easy to see that the three conditions above are satisfied: the numbers of accidents in two disjoint time intervals are independent; it is quite improbable that two or more accidents occur within a very small interval of time; and, intuitively, the probability that an accident occurs during a small interval of time is proportional to the length of that interval. The number of earthquakes in a region can also be modelled using a Poisson process.
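As an illustration, such a process can be simulated by drawing independent exponential inter-arrival times with rate \lambda and counting how many events fall in (0, t]. The following is a minimal sketch, assuming Python with NumPy and SciPy and illustrative values of \lambda = 1.5 accidents per day and t = 2 days; the empirical frequencies should be close to the Poisson probabilities above.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(42)
lam, t, n_runs = 1.5, 2.0, 20_000    # illustrative rate, horizon and number of replications

def count_events(rng, lam, t):
    """Count events in (0, t] by accumulating exponential inter-arrival times."""
    count, clock = 0, rng.exponential(1 / lam)
    while clock <= t:
        count += 1
        clock += rng.exponential(1 / lam)
    return count

samples = np.array([count_events(rng, lam, t) for _ in range(n_runs)])
for n in range(6):
    print(n, round((samples == n).mean(), 4), round(poisson.pmf(n, lam * t), 4))
```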
Derivation -
Now we prove our claim: if X(t) is the number of occurrences in an interval of length t, then
P(X(t)=n) = \frac{e^{-\lambda t}(\lambda t)^n}{n!}
where \lambda is the rate of occurrence.
We will use mathematical induction to prove the statement. First we write the assumptions above in mathematical terms. According to assumption 3, for a small time interval h,
P(X(h) >1) = o(h)
where \frac{o(h)}{h} tends to zero as h tends to zero, or
1-P(X(h)=0) -P(X(h)=1) = o(h) .
Again, if \lambda is the rate of occurrence, then according to assumption 2 we get,
P(X(h)=1) =\lambda h .
Let us take an interval (0, t) and a small interval (t, t+h). We will denote P(X(t)=n) as P_n(t) . So the above equations can be written as,
1-P_0(h) -P_1(h) = o(h)
or
P_0(h) = 1-\lambda h -o(h)
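These small-interval approximations can be sanity-checked against the Poisson probabilities themselves. A minimal sketch (assuming Python with SciPy and an illustrative rate \lambda = 2): as h shrinks, P(X(h)=0) approaches 1-\lambda h , P(X(h)=1) approaches \lambda h , and P(X(h)>1)/h tends to zero.

```python
from scipy.stats import poisson

lam = 2.0                                # illustrative rate (assumed value)
for h in (0.1, 0.01, 0.001):
    mu = lam * h
    p0 = poisson.pmf(0, mu)              # P(X(h) = 0), compare with 1 - lam*h
    p1 = poisson.pmf(1, mu)              # P(X(h) = 1), compare with lam*h
    p_more = 1 - p0 - p1                 # P(X(h) > 1), should be o(h)
    print(f"h={h}: P0={p0:.6f} (~{1 - lam*h:.6f}), "
          f"P1={p1:.6f} (~{lam*h:.6f}), P(X(h)>1)/h={p_more / h:.6f}")
```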
So we have to prove that
P_n(t) = \frac{e^{-\lambda t}(\lambda t)^n}{n!} .
First we will prove the result for n=0 and n=1. Then we will show that if the result is true for n=m then it will be true for n=m+1.
Take the interval (0, t+h). Now,
P_0(t+h)=P_0(t)P_0(h)
(since the numbers of occurrences in the intervals (0, t) and (t, t+h) are independent), or
P_0(t+h) = P_0(t)(1-\lambda h -o(h)) , or \frac{P_0(t+h)-P_0(t)}{h} = -\lambda P_0(t) -\frac{o(h)}{h} .
Taking limit as h tends to zero we get,
P_{0}'(t) = -\lambda P_0(t) .
The solution of above differential equation is,
P_0(t)=ce^{-\lambda t} ,
taking the initial condition P_0(0)=1 we evaluate c=1. Hence,
P_0(t)=e^{- \lambda t} , so our claim is true for n=0.
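This step can be checked symbolically, for instance with SymPy (a minimal sketch; the choice of SymPy is an assumption, not part of the original derivation):

```python
import sympy as sp

t, lam = sp.symbols('t lambda', positive=True)
P0 = sp.Function('P0')

# Solve P0'(t) = -lambda * P0(t) with the initial condition P0(0) = 1
ode = sp.Eq(P0(t).diff(t), -lam * P0(t))
sol = sp.dsolve(ode, P0(t), ics={P0(0): 1})
print(sol)   # expected: Eq(P0(t), exp(-lambda*t))
```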
Now we try to prove it for n=1.
P_1(t+h)=P_1(t)P_0(h) + P_0(t)P_1(h)
(Here we use the fact that the single occurrence must lie in exactly one of the intervals (0, t) and (t, t+h)), or
P_1(t+h) =P_1(t)(1-\lambda h-o(h))+ e^{-\lambda t}(\lambda h) ,
or
\frac{P_1(t+h) -P_1(t)}{h}= -\lambda P_1(t) + \lambda e^{-\lambda t}- \frac{o(h)}{h} .
Again taking limit as h tends to zero,
P_{1}'(t)=-\lambda P_1(t) + \lambda e^{-\lambda t} .
This is a first order linear differential equation and the solution is,
P_1(t)=\lambda t e^{-\lambda t}+c_1 e^{-\lambda t}
where c_1 is a constant. Since P_1(0)=0 , we get
c_1=0 . Hence P_1(t)=\lambda t e^{-\lambda t} , or P_1(t)=\frac{(\lambda t)^1 e^{-\lambda t}}{1!} .
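The n=1 case can be verified the same way, by solving the differential equation for P_1(t) with the initial condition P_1(0)=0 (again a SymPy sketch under the same assumptions):

```python
import sympy as sp

t, lam = sp.symbols('t lambda', positive=True)
P1 = sp.Function('P1')

# Solve P1'(t) = -lambda*P1(t) + lambda*exp(-lambda*t) with P1(0) = 0
ode = sp.Eq(P1(t).diff(t), -lam * P1(t) + lam * sp.exp(-lam * t))
sol = sp.dsolve(ode, P1(t), ics={P1(0): 0})
print(sp.simplify(sol.rhs))   # expected: lambda*t*exp(-lambda*t)
```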
So our claim is true for n=1. We assume that our claim is true for n=m.
We will show that it is true for n=m+1. So,
P_{m+1}(t+h) =P_{m+1}(t)P_0(h) +P_m(t)P_1(h) +\sum_{j=1}^m {P_{m-j}(t)P_{j+1}(h)} ,
(The m+1 occurrences can arise in different ways: m+1 occurrences in (0, t) and none in (t, t+h); m occurrences in (0, t) and 1 occurrence in (t, t+h); or m-j occurrences in (0, t) and j+1 occurrences in (t, t+h) for j=1 to m.) So,
P_{m+1}(t+h) =P_{m+1}(t)(1-\lambda h-o(h)) + \frac{e^{-\lambda t}(\lambda t)^m}{m!}\lambda h +\sum_{j=1}^m P_{m-j}(t)o(h)
Since,
P_{j+1}(h)=o(h) for j \geq 1 .
Or,
\frac{P_{m+1}(t+h)-P_{m+1}(t)}{h}=-\lambda P_{m+1}(t)+\frac{\lambda^{m+1}t^m}{m!}e^{-\lambda t}+\frac{o(h)}{h}\sum_{j=1}^{m}P_{m-j}(t)-\frac{o(h)}{h}P_{m+1}(t)
Taking limit as h goes to zero we have,
P'_{m+1}(t)=-\lambda P_{m+1}(t) + \frac{{\lambda}^{m+1}t^m}{m!}e^{-\lambda t} .
This is again a first order linear differential equation, and its solution is,
P_{m+1}(t)=\frac{(\lambda t)^{m+1}e^{-\lambda t}}{(m+1)!}+c_2 e^{-\lambda t} .
Since P_{m+1}(0)=0 , we get c_2=0 .
So the final result is,
P_{m+1}(t)=\frac{(\lambda t)^{m+1}e^{-\lambda t}}{(m+1)!} .
Hence the result is proved.
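The induction step can also be spot-checked symbolically: for a few concrete values of m, the claimed P_{m+1}(t) should satisfy the differential equation derived above. A minimal SymPy sketch (the specific values of m are illustrative):

```python
import sympy as sp

t, lam = sp.symbols('t lambda', positive=True)

for m in range(5):
    # Claimed solution P_{m+1}(t) = (lambda*t)^{m+1} e^{-lambda*t} / (m+1)!
    P = (lam * t) ** (m + 1) * sp.exp(-lam * t) / sp.factorial(m + 1)
    # Right-hand side of P'_{m+1}(t) = -lambda*P_{m+1}(t) + lambda^{m+1} t^m e^{-lambda*t} / m!
    rhs = -lam * P + lam ** (m + 1) * t ** m * sp.exp(-lam * t) / sp.factorial(m)
    print(m, sp.simplify(P.diff(t) - rhs))   # expected: 0 for every m
```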
Thus we have derived the pmf of the number of occurrences in a Poisson process, which is a Poisson distribution with parameter \lambda t . If the rate \lambda is instead a function of time, we call the process a non-homogeneous Poisson process.