Why Correlation Changes Uncertainty Propagation
The standard uncertainty propagation formula:
Δf = √((∂f/∂x · Δx)² + (∂f/∂y · Δy)²)
assumes that x and y are measured independently - that errors in x have nothing to do with errors in y. This assumption holds in many experiments, but not all.
When two variables share a common source of error - for example, when both are measured using the same instrument, or when one is derived from the other - their errors are correlated. Ignoring this correlation can cause you to significantly overestimate or underestimate your propagated uncertainty.
Positive Correlation
x and y both tend to be high together or low together. Example: measuring two lengths with the same ruler that has a calibration error. Both lengths are simultaneously too long or too short.
Effect: uncertainty can be larger than the independent formula suggests.
Negative Correlation
When x is high, y tends to be low, and vice versa. Example: computing f = x − y where x and y are measured with the same instrument. The same calibration error affects both in the same direction - but because you subtract, the errors partially cancel.
Effect: uncertainty can be smaller than the independent formula suggests - sometimes dramatically so.
The Full Formula with Covariance
The complete uncertainty propagation formula for a function f(x, y) is:
Δf² = (∂f/∂x)²·Δx² + (∂f/∂y)²·Δy² + 2·(∂f/∂x)·(∂f/∂y)·Cov(x,y)
The extra term 2·(∂f/∂x)·(∂f/∂y)·Cov(x,y) is the covariance contribution. It vanishes when x and y are independent (Cov(x,y) = 0), recovering the standard formula.
The covariance Cov(x,y) is defined as:
Cov(x,y) = ρ · Δx · Δy
where ρ (rho) is the correlation coefficient, ranging from −1 (perfectly anti-correlated) to +1 (perfectly correlated). For independent variables ρ = 0.
So the full formula can be written as:
Δf² = (∂f/∂x)²·Δx² + (∂f/∂y)²·Δy² + 2·ρ·(∂f/∂x)·(∂f/∂y)·Δx·Δy
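The formula with the covariance term translates directly into code. The sketch below is illustrative (the helper name `propagate` is ours, not from this guide); it evaluates the full formula for given partial derivatives, uncertainties, and a correlation coefficient ρ:

```python
import math

def propagate(dfdx, dfdy, dx, dy, rho=0.0):
    """Combined uncertainty for f(x, y): dfdx and dfdy are the partial
    derivatives evaluated at the measured values, dx and dy the standard
    uncertainties, rho the correlation coefficient. Illustrative helper."""
    var = (dfdx * dx) ** 2 + (dfdy * dy) ** 2 + 2 * rho * dfdx * dfdy * dx * dy
    return math.sqrt(max(var, 0.0))  # clamp tiny negative round-off

# f = x + y: both partial derivatives are +1
independent = propagate(1, 1, 0.5, 0.5, rho=0.0)  # quadrature, ≈ 0.71
correlated = propagate(1, 1, 0.5, 0.5, rho=1.0)   # direct sum, 1.0
```

Setting `rho=0.0` recovers the standard quadrature formula, so the same helper covers both the independent and the correlated case.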
Important Special Cases
Case 1 - Perfect positive correlation (ρ = +1)
For f = x + y:
Δf² = Δx² + Δy² + 2·Δx·Δy = (Δx + Δy)²
Δf = Δx + Δy
With perfect positive correlation, uncertainties add directly - not in quadrature. This is the worst case for a sum.
For f = x − y:
Δf² = Δx² + Δy² − 2·Δx·Δy = (Δx − Δy)²
Δf = |Δx − Δy|
With perfect positive correlation, the difference has nearly zero uncertainty if Δx ≈ Δy - and exactly zero when Δx = Δy, because the correlated errors cancel completely.
Case 2 - Perfect negative correlation (ρ = −1)
For f = x + y:
Δf = |Δx − Δy|
The correlated errors partially cancel in the sum.
For f = x − y:
Δf = Δx + Δy
The correlated errors add directly in the difference - worst case for subtraction.
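All four limiting cases above follow from the full formula. A quick numerical check (a sketch with illustrative values, not the site's calculator):

```python
import math

def delta_f(dfdx, dfdy, dx, dy, rho):
    # Full propagation formula including the covariance term
    var = (dfdx * dx) ** 2 + (dfdy * dy) ** 2 + 2 * rho * dfdx * dfdy * dx * dy
    return math.sqrt(max(var, 0.0))

dx, dy = 0.3, 0.2
sum_pos = delta_f(1, +1, dx, dy, +1)  # f = x + y, rho = +1 → dx + dy = 0.5
dif_pos = delta_f(1, -1, dx, dy, +1)  # f = x − y, rho = +1 → |dx − dy| = 0.1
sum_neg = delta_f(1, +1, dx, dy, -1)  # f = x + y, rho = −1 → |dx − dy| = 0.1
dif_neg = delta_f(1, -1, dx, dy, -1)  # f = x − y, rho = −1 → dx + dy = 0.5
```

Note how the sum with ρ = +1 and the difference with ρ = −1 give the same (worst-case) result, and vice versa.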
Case 3 - No correlation (ρ = 0)
The covariance term vanishes and the formula reduces to the standard:
Δf = √((∂f/∂x · Δx)² + (∂f/∂y · Δy)²)
This is the formula used throughout the rest of these guides and in the calculator.
Common Sources of Correlation in Physics Experiments
Source 1 - Same instrument used for multiple measurements
If you measure lengths x and y using the same ruler, any calibration error in the ruler affects both measurements equally and in the same direction. x and y are positively correlated with ρ ≈ +1 for the systematic component of their errors.
Source 2 - One variable derived from another
If you compute y = f(x) and then use both x and y in a further formula, x and y are fully correlated because y is not independently measured - it is calculated from x. This situation requires special treatment; the simplest approach is to substitute the expression for y and propagate from the original independent variables only.
Source 3 - Shared environmental conditions
Two measurements made in quick succession may share the same temperature, humidity, or electromagnetic environment. If these conditions fluctuate slowly, nearby measurements are positively correlated.
Source 4 - Repeated use of the same standard
If multiple measurements are all calibrated against the same reference standard, and that standard has an unknown offset, all measurements share a common systematic error - positive correlation.
Source 5 - Counting experiments
In particle physics and nuclear counting experiments, if two counts share a common background, the background uncertainty correlates them. Subtracting background from signal requires careful treatment of this correlation.
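To make the shared-background case concrete: if two net counts S₁ = N₁ − B and S₂ = N₂ − B are formed with the same background estimate B, then Cov(S₁, S₂) = Var(B), and combining S₁ and S₂ must include that covariance. A sketch with illustrative (invented) counts, assuming Poisson statistics where Var(count) = count:

```python
# Raw counts (illustrative values); Poisson statistics: variance = count
N1, N2, B = 500.0, 300.0, 100.0
S1, S2 = N1 - B, N2 - B          # net signals sharing one background estimate

# Shared background correlates the two net counts: Cov(S1, S2) = Var(B) = B
cov_s1_s2 = B

# Variance of the combined signal S1 + S2 includes the covariance term
var_sum = (N1 + B) + (N2 + B) + 2 * cov_s1_s2   # Var(Si) = Ni + B
```

Ignoring the covariance here would understate the variance of S₁ + S₂ by 2·Var(B) = 200 counts², a 20% underestimate in this example.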
Worked Example - Measuring a Difference with the Same Instrument
A student measures two temperatures using the same thermometer to find the temperature difference ΔT = T₂ − T₁.
T₁ = 20.0 ± 0.5 °C
T₂ = 35.0 ± 0.5 °C
The thermometer has a calibration uncertainty of 0.5 °C that affects all readings equally (systematic, fully correlated, ρ = +1 for the systematic component).
Case A - Treating as independent (wrong)
ΔT = 35.0 − 20.0 = 15.0 °C
δ(ΔT) = √(0.5² + 0.5²) = √0.50 = 0.71 °C
Result: ΔT = 15.0 ± 0.7 °C
Case B - Accounting for full positive correlation (ρ = +1)
With ∂f/∂T₂ = +1 and ∂f/∂T₁ = −1:
δ(ΔT)² = 0.5² + 0.5² + 2·(+1)·(+1)·(−1)·0.5·0.5
δ(ΔT)² = 0.25 + 0.25 − 0.50 = 0.00
δ(ΔT) = 0 °C
The correlated calibration error cancels completely in the difference. Both temperatures are shifted by the same amount, so the difference is unaffected.
Case C - Realistic (random errors independent, systematic errors fully correlated)
In practice the 0.5 °C uncertainty has two components:
- Random reading uncertainty: ~0.2 °C (independent, ρ = 0)
- Systematic calibration uncertainty: ~0.46 °C (fully correlated, ρ = +1)
δ(ΔT)_random = √(0.2² + 0.2²) = 0.28 °C
δ(ΔT)_systematic = |0.46 − 0.46| = 0 °C (cancels)
Combined: δ(ΔT) ≈ 0.28 °C
Result: ΔT = 15.0 ± 0.3 °C
This is significantly more precise than the naive independent treatment (0.7 °C). Using the same instrument for both measurements is actually advantageous when computing a difference.
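The three cases of the worked example can be reproduced in a few lines. This sketch uses the numbers from the text; the variable names are ours:

```python
import math

# ΔT = T2 − T1 measured with the same thermometer (values from the example)
dT1 = dT2 = 0.5                       # total quoted uncertainty per reading, °C
rand = 0.2                            # independent random component, °C
syst = math.sqrt(dT1**2 - rand**2)    # correlated systematic component, ≈ 0.46 °C

# Case A: naive independent treatment (wrong here)
case_a = math.sqrt(dT1**2 + dT2**2)              # ≈ 0.71 °C

# Case B: fully correlated (rho = +1); partials +1 and −1, so the term cancels
case_b = math.sqrt(dT1**2 + dT2**2 - 2*dT1*dT2)  # 0.0 °C

# Case C: random parts add in quadrature, systematic parts cancel
case_c = math.sqrt(rand**2 + rand**2)            # ≈ 0.28 °C
```

Note that the total uncertainty splits as 0.5² = 0.2² + syst², which is where the ~0.46 °C systematic component in the text comes from.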
How to Estimate ρ in Practice
Determining ρ precisely requires either knowing the error sources in detail or having a large dataset of repeated paired measurements. In practice:
- Independent measurements: ρ = 0. Use the standard propagation formula. This applies when measurements are made with different instruments, at different times, or under genuinely independent conditions.
- Same instrument, systematic dominated: ρ ≈ +1 for the systematic component. Separate your uncertainty budget into random and systematic parts and treat each appropriately.
- Derived variable: ρ = ±1 (fully correlated). Avoid using derived variables alongside their source in propagation - substitute and propagate from independent variables only.
- From data: if you have n repeated paired measurements (xᵢ, yᵢ), estimate ρ using the sample correlation coefficient:
ρ = [Σ(xᵢ − x̄)(yᵢ − ȳ) / (n−1)] / (σₓ · σᵧ)
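In practice you would compute this with a library rather than by hand. The sketch below simulates paired measurements that share a common systematic offset (the data are invented for illustration) and estimates ρ with numpy:

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated paired readings sharing one slowly-varying error source
common = rng.normal(0.0, 0.4, size=200)          # shared systematic fluctuation
x = 10.0 + common + rng.normal(0.0, 0.1, 200)    # reading 1: shared + own noise
y = 20.0 + common + rng.normal(0.0, 0.1, 200)    # reading 2: shared + own noise

cov = np.cov(x, y)[0, 1]                          # sample covariance, n−1 convention
rho = cov / (np.std(x, ddof=1) * np.std(y, ddof=1))
# equivalently: rho = np.corrcoef(x, y)[0, 1]
```

With these noise levels the true correlation is 0.4²/(0.4² + 0.1²) ≈ 0.94, and the sample estimate should land close to that.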
Practical Advice for Lab Reports
For most undergraduate physics experiments, the standard independent propagation formula (ρ = 0) is appropriate and expected. However, you should:
- Check for obvious correlations. If two variables are measured with the same instrument, or if one is derived from the other, flag this in your error analysis.
- Design experiments to avoid problematic correlations. If possible, measure independent quantities with independent instruments. Use a difference or ratio measurement to exploit favourable correlation.
- If correlations are significant, mention them in your error analysis even if you cannot fully quantify them. Noting that 'the calibration uncertainty in the thermometer affects both temperature measurements equally and partially cancels in the difference' demonstrates understanding without requiring full covariance matrix calculation.
- For advanced experiments, use the full covariance formula with your best estimate of ρ, and show that the result is robust by checking the sensitivity to ρ.
Quick Reference
| Situation | ρ | Effect on Δf for sum | Effect on Δf for difference |
|---|---|---|---|
| Independent measurements | 0 | Standard quadrature | Standard quadrature |
| Same instrument (systematic) | +1 | Δf = Δx + Δy (larger) | Δf = \|Δx − Δy\| (smaller) |
| Opposite systematics | −1 | Δf = \|Δx − Δy\| (smaller) | Δf = Δx + Δy (larger) |
| Partial correlation | 0 < ρ < 1 | Between quadrature and direct sum | Between quadrature and cancellation |
Frequently Asked Questions
Q1: Do I need to worry about correlated variables in my undergraduate lab report?
For most experiments, no - independent propagation is correct and expected. However, if you are measuring a difference or ratio using the same instrument for both measurements, or if one variable is derived from another, you should at least note the correlation and discuss its qualitative effect.
Q2: What is covariance and how is it different from correlation?
Covariance Cov(x,y) = ρ·σₓ·σᵧ measures how much x and y vary together in absolute terms - it has units of (units of x)·(units of y). The correlation coefficient ρ = Cov(x,y)/(σₓ·σᵧ) is the dimensionless version, normalised to lie between −1 and +1. In the propagation formula, covariance appears directly; ρ is its normalised equivalent.
Q3: My formula has the same variable appearing twice - for example f = x² − x. How do I propagate this?
Do not treat the two appearances of x as independent. The correct approach is to differentiate directly: ∂f/∂x = 2x − 1, so Δf = |2x − 1| · Δx. There is only one variable x, so there is no correlation issue - just a single partial derivative. The correlation problem arises only when you have two separately measured quantities that share a common error source.
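The direct-differentiation approach for f = x² − x, next to the mistake it avoids (values here are illustrative):

```python
import math

x, dx = 3.0, 0.1
f = x**2 - x                 # 6.0

# Correct: differentiate f directly; df/dx = 2x − 1
df = abs(2 * x - 1) * dx     # |5| · 0.1 = 0.5

# Wrong: treating x² and x as if independently measured
wrong = math.sqrt((2 * x * dx)**2 + dx**2)   # √0.37 ≈ 0.61
```

The "independent" treatment overestimates here, but for other functions it can just as easily underestimate; direct differentiation is always the correct route when only one measured variable is involved.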
Q4: Can correlation make the propagated uncertainty zero?
Yes, in the limiting case of perfect positive correlation (ρ = +1) in a difference measurement where Δx = Δy. The worked example shows this exactly - the calibration error cancels completely in the temperature difference. In practice, random errors are independent so the uncertainty never reaches exactly zero, but it can be much smaller than the naive independent estimate.
Q5: How do I compute covariance from experimental data?
If you have n paired measurements (xᵢ, yᵢ), the sample covariance is: Cov(x,y) = Σ(xᵢ − x̄)(yᵢ − ȳ) / (n−1). In Python: numpy.cov(x, y)[0,1]. In Excel: COVARIANCE.S(x_range, y_range). The correlation coefficient is then ρ = Cov(x,y) / (σₓ · σᵧ).
Q6: Does the calculator on this site handle correlated variables?
The calculator assumes all variables are independent (ρ = 0), which is correct for the vast majority of undergraduate experiments. For correlated variables, use the calculator for the independent contribution and add the covariance correction term manually using the formula shown in Section 2 of this guide.
Continue Learning
Complete Guide
The fundamentals of uncertainty propagation.
Addition & Subtraction
Working with absolute uncertainties.
Multiplication & Division
Master the relative uncertainty rule.
Powers & Exponents
Propagating through non-linear functions.
Significant Figures
How to round and report your results.
Random vs Systematic Error
Understanding different types of experimental error.
Standard Error vs Deviation
When to use σ versus σ/√n in your reports.