Uniform Distribution: Definition, Formula, and Applications
Uniform distribution is a term used in statistics to describe a type of probability distribution in which every possible outcome has an equal chance of occurring. Because each outcome is equally likely, the probability is constant.
A continuous random variable x is said to have a uniform distribution if its probability density function is defined by
f(x) = 1/(b − a) for a ≤ x ≤ b, and f(x) = 0 otherwise,
where a and b are the two parameters of the distribution, with −∞ < a < b < ∞.
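As a concrete sketch of this density, here is a small Python function (the name uniform_pdf is our own illustrative choice) that returns 1/(b − a) inside [a, b] and 0 outside:

```python
def uniform_pdf(x, a, b):
    """Density of the continuous uniform distribution on [a, b].

    Returns 1/(b - a) inside the interval and 0 outside,
    matching the formula above.
    """
    if a <= x <= b:
        return 1.0 / (b - a)
    return 0.0

# With a = 2 and b = 6, the density is 1/4 everywhere in [2, 6].
print(uniform_pdf(3, 2, 6))  # 0.25
print(uniform_pdf(7, 2, 6))  # 0.0
```

Note that the height of the density shrinks as the interval grows, so the total area under the curve always stays equal to 1.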
For example, if you stood on a street corner and began handing out $100 bills to any lucky passerby, each pedestrian would have an equal chance of receiving the cash: the probability is 1 divided by the total number of possible outcomes (the number of passersby). If, on the other hand, short people and women had a better chance of receiving the $100 bill than the rest of the crowd, the distribution would no longer be uniform.
While the historical origins of the uniform distribution are unknown, the name "uniform" is thought to come from the concept of equiprobability in dice games (note that dice games have a discrete, not continuous, uniform sample space). Equiprobability was discussed in Gerolamo Cardano's Liber de Ludo Aleae, a 16th-century guidebook covering probability calculus in relation to dice.
Types of Uniform Distribution
There are two types: the discrete and the continuous uniform distribution.
Discrete: A discrete uniform distribution is a statistical distribution with a finite number of values, each having an equal probability. The outcomes of rolling a 6-sided die are a classic example: the possible values are 1, 2, 3, 4, 5, or 6, and each of the six numbers has an equal chance of occurring. As a result, each side of the die has a 1/6 probability on every throw. The number of possible values is finite; when rolling a fair die, it is impossible to obtain a value of 1.3, 4.2, or 5.7. When a second die is added and both are thrown, the distribution of the sum is no longer uniform, because the possible sums are not equally likely. The probability distribution of a coin flip is another simple example: there are only two possible outcomes, so the finite number of values is two.
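Sticking with the die example, a short Python simulation (the sample sizes and fixed seed are illustrative choices) can check both claims above: each face of a single die comes up about 1/6 of the time, while the sum of two dice is clearly not uniform:

```python
import random
from collections import Counter

random.seed(0)  # fixed seed so the run is reproducible

# Roll one fair die many times; each face should appear ~1/6 of the time.
rolls = [random.randint(1, 6) for _ in range(60_000)]
counts = Counter(rolls)
for face in range(1, 7):
    print(face, round(counts[face] / len(rolls), 3))

# The sum of two dice is not uniform: 7 (six combinations)
# occurs far more often than 2 (one combination).
sums = Counter(random.randint(1, 6) + random.randint(1, 6)
               for _ in range(60_000))
print(sums[7] > sums[2])  # True
```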
The discrete uniform distribution is also helpful in Monte Carlo simulation, a modeling technique that uses repeated random sampling on a computer to estimate the likelihood of various outcomes. Monte Carlo simulation is frequently used to forecast future events and identify risks.
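The Monte Carlo idea can be illustrated with the classic textbook example of estimating π: draw points uniformly at random in the unit square and count the fraction that land inside the quarter circle of radius 1 (the function name estimate_pi is our own):

```python
import random

random.seed(42)  # illustrative fixed seed for reproducibility

def estimate_pi(n):
    """Estimate pi by sampling n points uniformly in the unit square
    and counting how many land inside the quarter circle of radius 1.

    The fraction inside approximates (pi/4), so we multiply by 4.
    """
    inside = sum(1 for _ in range(n)
                 if random.random() ** 2 + random.random() ** 2 <= 1)
    return 4 * inside / n

print(estimate_pi(100_000))  # roughly 3.14
```

Increasing n tightens the estimate, since the sampling error shrinks proportionally to 1/sqrt(n).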
Continuous: A statistical distribution with an infinite number of equally likely measurable values is known as a continuous uniform distribution (sometimes called a rectangular distribution). Unlike a discrete random variable, a continuous random variable can take any real value within a given range.
A rectangular shape is typical for a continuous uniform distribution. An idealized random number generator is a nice example of a continuous uniform distribution. Every variable has an equal chance of occurring in a continuous uniform distribution, just as it does in a discrete uniform distribution. However, there are an endless number of possible points.
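As a sanity check on this description, one can draw samples from a continuous uniform distribution on an assumed interval, say [2, 6], and confirm that the sample mean approaches the theoretical mean (a + b) / 2:

```python
import random

random.seed(1)  # illustrative fixed seed

# Draw many samples uniformly from the interval [2, 6].
samples = [random.uniform(2, 6) for _ in range(100_000)]

# For a uniform distribution on [a, b], the mean is (a + b) / 2 = 4
# and the variance is (b - a)**2 / 12.
mean = sum(samples) / len(samples)
print(round(mean, 2))  # close to 4.0
```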
Some important properties of the uniform distribution are given below:
The probability is the same for intervals of equal length in any part of the distribution.
The probability assigned to an interval depends on the length of the interval, not on its position.
The pdf of the uniform distribution over the interval [0, 1] is f(x) = 1.
Moreover, a uniform distribution can be defined in an infinite number of ways, one for every choice of interval [a, b].
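The first two properties above can be checked numerically with the uniform CDF; the helper names here (uniform_cdf, interval_prob) are our own illustrative choices:

```python
def uniform_cdf(x, a, b):
    """CDF of the uniform distribution on [a, b]."""
    if x < a:
        return 0.0
    if x > b:
        return 1.0
    return (x - a) / (b - a)

def interval_prob(lo, hi, a=0.0, b=1.0):
    """P(lo <= X <= hi) for X uniform on [a, b]."""
    return uniform_cdf(hi, a, b) - uniform_cdf(lo, a, b)

# Two intervals of the same length have the same probability,
# regardless of where they sit inside [0, 1].
print(round(interval_prob(0.1, 0.3), 10))  # 0.2
print(round(interval_prob(0.6, 0.8), 10))  # 0.2
```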
Uniform vs. Normal Distribution
Probability distributions help you determine the likelihood of a future event. Discrete uniform, binomial, continuous uniform, normal, and exponential are some of the most common probability distributions. The normal distribution, commonly shown as a bell curve, is perhaps the most well-known and widely used. Normal distributions describe continuous data and place most of the data around the mean or average. In a normal distribution, the area under the curve equals one; 68.27% of all data falls within one standard deviation of the mean, 95.45% falls within two standard deviations, and about 99.73% falls within three standard deviations. The frequency of observations decreases as values move farther from the mean. In a uniform distribution, by contrast, the density is flat, so no value is more likely than any other.
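The three normal-distribution percentages quoted above follow directly from the normal CDF. A minimal sketch using only the standard library's math.erf (the helper name normal_within is ours):

```python
import math

def normal_within(k):
    """Probability that a normal random variable falls within k
    standard deviations of its mean: erf(k / sqrt(2))."""
    return math.erf(k / math.sqrt(2))

for k in (1, 2, 3):
    print(k, f"{100 * normal_within(k):.2f}%")
# 1 -> 68.27%, 2 -> 95.45%, 3 -> 99.73%
```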
Some real-life applications are given below.
A conventional deck of cards contains 52 cards in four suits: hearts, diamonds, clubs, and spades. Each suit contains A, 2, 3, 4, 5, 6, 7, 8, 9, 10, J, Q, and K; the deck also includes two jokers. For this example, we'll ignore the jokers and face cards, focusing solely on the number cards (A through 10) that appear in each suit. As a result, we're left with 40 cards, which form a collection of discrete data.
Let's say you want to know how likely it is to draw the 2 of hearts from this modified deck. The chance of drawing the 2 of hearts is 1 in 40, or 2.5%. Because each card is unique, the chance of drawing any particular card in the deck is the same.
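The deck example can be sketched in a few lines of Python (the variable names are illustrative):

```python
# Build the 40-card deck described above: ranks A through 10 in four suits.
suits = ["hearts", "diamonds", "clubs", "spades"]
ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10"]
deck = [(rank, suit) for suit in suits for rank in ranks]

print(len(deck))                # 40
print(1 / len(deck))            # 0.025, i.e. 2.5%
print(("2", "hearts") in deck)  # True
```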
In statistics, a uniform distribution is a probability distribution in which all outcomes are equally likely. A discrete uniform distribution has a finite number of outcomes, while a continuous uniform distribution has an infinite number of equally likely measurable values. The notions of the discrete and continuous uniform distributions, along with the random variables they describe, are foundations of statistical analysis and probability theory.