Bernoulli Distribution Calculator
Calculate probabilities, mean, variance, and standard deviation for Bernoulli trials. Compute P(X=1), P(X=0), expected value, and distribution properties.
The Bernoulli distribution models a single trial with a binary outcome (success/failure).
What Is the Bernoulli Distribution?
The Bernoulli distribution is a discrete probability distribution for a random variable that takes only two possible outcomes: success (coded as 1) with probability p, and failure (coded as 0) with probability 1-p. It's the simplest case of a binomial distribution and models a single trial of a binary experiment.
Named after Swiss mathematician Jacob Bernoulli, this distribution forms the foundation for more complex distributions like binomial, geometric, and negative binomial.
Bernoulli Distribution Formulas
Probability Mass Function
P(X=x) = pˣ(1-p)¹⁻ˣ, x ∈ {0,1}
Binary outcome probability

Mean (Expected Value)
E[X] = p
First moment: the average outcome

Variance
Var(X) = p(1-p)
Spread measure; maximum at p = 0.5

Standard Deviation
σ = √[p(1-p)]
Dispersion: the square root of the variance
Complete Bernoulli Distribution Properties
| Property | Formula | Value for p=0.5 | Interpretation |
|---|---|---|---|
| Probability Mass Function | P(X=x) = pˣ(1-p)¹⁻ˣ | P(0)=0.5, P(1)=0.5 | Binary probability function |
| Mean (Expected Value) | E[X] = p | 0.5 | Average outcome value |
| Variance | Var(X) = p(1-p) | 0.25 | Spread of distribution |
| Standard Deviation | σ = √[p(1-p)] | 0.5 | Typical deviation from mean |
| Skewness | γ₁ = (1-2p)/√[p(1-p)] | 0 | Symmetry measure |
| Kurtosis | γ₂ = [1-6p(1-p)]/[p(1-p)] | -2 | Tail heaviness |
| Moment Generating Function | M(t) = 1-p+peᵗ | 0.5+0.5eᵗ | Generates all moments |
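The table above can be reproduced with a short plain-Python sketch (standard library only; `bernoulli_properties` is a hypothetical helper name, not part of any library):

```python
import math

def bernoulli_properties(p):
    """Moments of a Bernoulli(p) trial; needs 0 < p < 1 for skewness/kurtosis."""
    q = 1 - p
    var = p * q                                    # Var(X) = p(1-p)
    return {
        "pmf": {0: q, 1: p},                       # P(X=x) = p^x (1-p)^(1-x)
        "mean": p,                                 # E[X] = p
        "variance": var,
        "std_dev": math.sqrt(var),                 # sigma = sqrt(p(1-p))
        "skewness": (1 - 2 * p) / math.sqrt(var),  # (1-2p)/sqrt(p(1-p))
        "kurtosis": (1 - 6 * var) / var,           # excess kurtosis
    }

props = bernoulli_properties(0.5)
print(props["variance"], props["skewness"], props["kurtosis"])  # 0.25 0.0 -2.0
```

Plugging in p = 0.5 reproduces the table's column exactly: variance 0.25, skewness 0, excess kurtosis -2.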
Step-by-Step Examples
Example 1: Fair Coin Toss (p = 0.5)
- Define success: Heads (X=1)
- Success probability: p = 0.5
- Failure probability: 1-p = 0.5
- PMF: P(X=1) = 0.5¹ × 0.5⁰ = 0.5
- Mean: E[X] = p = 0.5
- Variance: Var(X) = p(1-p) = 0.5 × 0.5 = 0.25
- Standard Deviation: σ = √0.25 = 0.5
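As a quick check, the fair-coin arithmetic above can be reproduced directly (a minimal sketch in plain Python):

```python
import math

p = 0.5                          # fair coin: success = heads
pmf_heads = p**1 * (1 - p)**0    # P(X=1) = 0.5^1 * 0.5^0
mean = p                         # E[X] = p
variance = p * (1 - p)           # p(1-p)
std_dev = math.sqrt(variance)    # sqrt(0.25)

print(pmf_heads, mean, variance, std_dev)  # 0.5 0.5 0.25 0.5
```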
Example 2: Biased Coin (p = 0.3)
- Success probability: p = 0.3
- Failure probability: 1-p = 0.7
- PMF: P(X=1) = 0.3, P(X=0) = 0.7
- Mean: E[X] = 0.3
- Variance: Var(X) = 0.3 × 0.7 = 0.21
- Standard Deviation: σ = √0.21 ≈ 0.458
- Skewness: γ₁ = (1-0.6)/√0.21 ≈ 0.873
Example 3: Rare Event (p = 0.05)
- Success probability: p = 0.05
- Failure probability: 1-p = 0.95
- PMF: P(X=1) = 0.05, P(X=0) = 0.95
- Mean: E[X] = 0.05
- Variance: Var(X) = 0.05 × 0.95 = 0.0475
- Standard Deviation: σ = √0.0475 ≈ 0.218
- Skewness: γ₁ = (1-0.1)/√0.0475 ≈ 4.129 (highly skewed)
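The same recipe covers both the biased coin and the rare event; the sketch below (with a hypothetical `summarize` helper) reproduces the numbers in Examples 2 and 3:

```python
import math

def summarize(p):
    """Return (mean, variance, std dev, skewness) for Bernoulli(p), 0 < p < 1."""
    var = p * (1 - p)
    return p, var, math.sqrt(var), (1 - 2 * p) / math.sqrt(var)

for p in (0.3, 0.05):  # Example 2 (biased coin), Example 3 (rare event)
    mean, var, sd, skew = summarize(p)
    print(f"p={p}: mean={mean}, var={var:.4f}, sd={sd:.3f}, skew={skew:.3f}")
```

Note how the skewness grows as p moves away from 0.5: the rare-event case is far more lopsided than the mildly biased coin.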
Real-World Applications
Quality Control & Manufacturing
- Defect detection: Probability that a single item is defective
- Pass/fail testing: Whether a product passes quality standards
- Binary classification: Good/bad, acceptable/unacceptable decisions
- Reliability testing: Component works/fails in single test
Medical & Healthcare
- Treatment response: Patient responds/doesn't respond to treatment
- Disease presence: Test positive/negative for disease
- Side effects: Patient experiences/doesn't experience side effect
- Surgery outcome: Successful/unsuccessful procedure
Business & Economics
- Customer conversion: Visitor makes purchase/leaves
- Credit approval: Loan approved/denied
- Market entry: Product launch succeeds/fails
- Investment outcome: Investment gains/loses value
Everyday Life
- Weather forecast: Rain/no rain on specific day
- Commute time: On time/late for work
- Sports outcome: Team wins/loses single game
- Exam result: Pass/fail a test
Relationship to Other Distributions
| Distribution | Relationship to Bernoulli | When to Use |
|---|---|---|
| Binomial | Sum of n independent Bernoulli trials | Multiple trials, count successes |
| Geometric | Number of trials until first success | Waiting time for first success |
| Negative Binomial | Number of trials until r successes | Waiting time for multiple successes |
| Poisson | Limit of binomial (many Bernoulli trials with small p) | Count of rare events in interval |
| Exponential | Continuous analog for waiting times | Continuous time between events |
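The binomial row can be demonstrated empirically: summing n independent Bernoulli trials gives a binomial draw whose sample mean approaches n·p (a simulation sketch, not a library API):

```python
import random

random.seed(0)  # reproducible run

def bernoulli_trial(p):
    """One Bernoulli(p) trial: 1 with probability p, else 0."""
    return 1 if random.random() < p else 0

def binomial_draw(n, p):
    """A Binomial(n, p) draw as the sum of n independent Bernoulli(p) trials."""
    return sum(bernoulli_trial(p) for _ in range(n))

draws = [binomial_draw(10, 0.5) for _ in range(50_000)]
print(sum(draws) / len(draws))  # close to n*p = 5
```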
Special Cases and Properties
Symmetric Case (p = 0.5)
- Maximum variance: 0.25
- Zero skewness: γ₁ = 0 (symmetric distribution)
- Minimum kurtosis: γ₂ = -2 (platykurtic)
- Equal probabilities: P(0) = P(1) = 0.5
- Examples: Fair coin toss, unbiased decision
Extreme Cases
- p → 0: Rare events, high positive skewness
- p → 1: Almost certain, high negative skewness
- p = 0 or p = 1: Degenerate distribution (no randomness)
- Maximum variance: Occurs at p = 0.5 (σ² = 0.25)
- Minimum variance: At p = 0 or p = 1 (σ² = 0)
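A quick numeric scan confirms the variance claims: p(1-p) vanishes at the endpoints and peaks at p = 0.5 (plain-Python sketch):

```python
# Evaluate Var(X) = p(1-p) on a grid of p values from 0 to 1.
ps = [i / 100 for i in range(101)]
variances = [p * (1 - p) for p in ps]

peak = max(range(len(ps)), key=lambda i: variances[i])
print(ps[peak], variances[peak])    # 0.5 0.25
print(variances[0], variances[-1])  # 0.0 at both p=0 and p=1
```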
Frequently Asked Questions (FAQs)
Q: What's the difference between Bernoulli and Binomial distributions?
A: Bernoulli distribution models a single trial with binary outcome. Binomial distribution models the number of successes in n independent Bernoulli trials. Bernoulli is Binomial with n=1.
Q: Why is maximum variance at p=0.5?
A: The variance p(1-p) is maximized at p=0.5: the derivative d/dp [p(1-p)] = 1 − 2p is zero at p = 1/2, and the second derivative is −2 < 0, so this is a maximum (Var = 0.25). Intuitively, p = 0.5 represents maximum uncertainty: we are equally unsure whether the trial will succeed or fail.
Q: Can Bernoulli distribution have more than two outcomes?
A: No, by definition Bernoulli distribution has exactly two possible outcomes. For more than two outcomes, use categorical or multinomial distributions.
Q: What does negative kurtosis mean for Bernoulli?
A: Negative excess kurtosis (platykurtic) means the distribution has lighter tails than a normal distribution. For p=0.5, the excess kurtosis is -2, the smallest value any distribution can attain, indicating very light tails.
Master probability distributions with Toolivaa's free Bernoulli Distribution Calculator, and explore more statistical tools in our Statistics Calculators collection.