The Core Distinction
Every experimental measurement contains error - deviation from the true value. But not all errors are the same. Physics distinguishes two fundamentally different types:
Random errors scatter measurements unpredictably around the true value. They are different every time you measure.
Systematic errors shift every measurement in the same direction by the same amount. They do not scatter - they bias.
This distinction matters enormously because the two types require completely different treatments. Random errors are reduced by averaging. Systematic errors are not - they must be identified and corrected.
Random Error
- Unpredictable
- Varies between measurements
- Scatters around true value
- Reduced by averaging
- Treated by uncertainty propagation
- Example: reaction time when starting a stopwatch
Systematic Error
- Consistent
- Same direction every time
- Shifts all readings
- NOT reduced by averaging
- Must be identified and corrected
- Example: a ruler with a worn end giving readings 1 mm too long
Random Errors in Detail
Random errors arise from unpredictable fluctuations in the measurement process. No matter how careful you are, some randomness is unavoidable in any real experiment.
Common sources of random error:
- Human reaction time (starting/stopping a stopwatch)
- Reading a scale between graduation marks (interpolation)
- Electrical noise in sensitive instruments
- Vibration or air currents affecting a balance
- Natural variability in the quantity being measured
The defining characteristic of random errors is that they are equally likely to be positive or negative. If you plot a histogram of many repeated measurements, you typically see a roughly normal (Gaussian) distribution centred on the true value.
Because random errors scatter symmetrically, taking the average of many measurements reduces the uncertainty in the mean. Specifically, if a single measurement has uncertainty Δx, the uncertainty in the mean of n measurements is Δx/√n.
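The 1/√n behaviour is easy to check numerically. The sketch below (with an assumed true value and spread) simulates many repeated experiments and shows that the spread of the mean of 25 readings is roughly a fifth of the spread of a single reading:

```python
import random
import statistics

random.seed(42)

TRUE_VALUE = 10.0  # hypothetical true value of the measured quantity
SIGMA = 0.5        # assumed spread of a single measurement (random error)

def spread_of_mean(n, trials=2000):
    """Simulate many experiments of n readings each and return the
    standard deviation of the resulting means."""
    means = [statistics.fmean(random.gauss(TRUE_VALUE, SIGMA) for _ in range(n))
             for _ in range(trials)]
    return statistics.stdev(means)

s1 = spread_of_mean(1)    # spread of a single reading, ≈ SIGMA
s25 = spread_of_mean(25)  # spread of the mean of 25 readings, ≈ SIGMA / √25
print(f"single reading: {s1:.3f}   mean of 25: {s25:.3f}  (expected {SIGMA/5:.3f})")
```

Averaging 25 readings cuts the random uncertainty by a factor of √25 = 5, exactly as the Δx/√n rule predicts.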
Systematic Errors in Detail
Systematic errors are consistent biases that affect every measurement in the same way. They do not average out - taking more measurements just gives you more precise estimates of the wrong value.
Common sources of systematic error:
- Zero error: instrument reads non-zero when it should read zero (e.g. a balance not tared, a voltmeter with offset)
- Calibration error: instrument scale is slightly wrong throughout its range
- Parallax: reading a scale from an angle rather than straight on
- Environmental factors: temperature affecting a measuring tape, air resistance ignored in a free-fall experiment
- Model error: using an oversimplified theoretical model that does not match reality
- Loading error: the measuring instrument changes the quantity being measured (e.g. a voltmeter drawing current)
Systematic errors are insidious because they are invisible in your data. If all your measurements are consistently wrong in the same direction, your data will look clean and repeatable - but the answer will still be wrong.
How to Identify Which Type You Have
The key diagnostic question is: if I repeat the measurement many times, do the readings scatter, or do they cluster tightly around a wrong value?
- Scatter → random error dominates
- Tight clustering but wrong answer → systematic error dominates
- Both → both types are present (the most common situation)
| Characteristic | Random Error | Systematic Error |
|---|---|---|
| Direction | Varies (±) | Always same direction |
| Magnitude | Varies each time | Consistent |
| Effect on mean | Cancels with averaging | Persists in mean |
| Visible in data | Yes - as scatter | No - data looks clean |
| Reduced by averaging | Yes | No |
| Shown in uncertainty | Yes | Only if modelled |
| Fix | Average more readings | Identify and correct |
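The diagnostic above can be seen in simulated data. This sketch (with assumed values for the true quantity, scatter, and bias) builds one random-dominated dataset and one systematic-dominated dataset; only the first is visibly "noisy", yet the second is further from the truth:

```python
import random
import statistics

random.seed(1)
TRUE = 9.81  # hypothetical true value (m/s²)

# Random error dominates: readings scatter widely around the true value
random_data = [random.gauss(TRUE, 0.30) for _ in range(50)]

# Systematic error dominates: tight cluster, offset by a constant bias of +0.25
systematic_data = [random.gauss(TRUE + 0.25, 0.02) for _ in range(50)]

for name, data in [("random", random_data), ("systematic", systematic_data)]:
    mean = statistics.fmean(data)
    scatter = statistics.stdev(data)
    print(f"{name:10s} mean={mean:.3f}  scatter={scatter:.3f}  offset={mean - TRUE:+.3f}")
```

Notice that the systematic dataset would *look* better in a lab book (tiny scatter) while giving the more wrong mean, which is exactly why systematic error is the harder type to catch.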
Accuracy vs Precision
These two terms are often confused but have distinct meanings in experimental physics:
Precision refers to the reproducibility of measurements - how tightly clustered repeated measurements are. High precision means small random error.
Accuracy refers to how close the measurements are to the true value. High accuracy means small systematic error.
It is possible to be precise but inaccurate (all measurements cluster tightly but around the wrong value - systematic error dominates), or accurate but imprecise (measurements scatter widely but average to the correct value - random error dominates).
High precision, high accuracy
Small random error, small systematic error. Ideal experimental result.
High precision, low accuracy
Small random error, large systematic error. Data looks clean but answer is wrong. Most dangerous - easy to miss.
Low precision, high accuracy
Large random error, small systematic error. Measurements scatter but average to correct value. Fix: take more measurements.
Low precision, low accuracy
Large random error, large systematic error. Both types of error are significant. Fundamental experimental problems need addressing.
How to Reduce Each Type
Reducing Random Error
- Take repeated measurements and calculate the mean. The uncertainty in the mean is σ/√n where σ is the standard deviation and n is the number of measurements.
- Use more precise instruments with finer graduations.
- Control environmental variables (temperature, vibration, air currents) to reduce their contribution.
- Improve experimental technique - for example, use a light gate instead of a stopwatch to eliminate reaction time error.
- Use automated data collection where possible to remove human variability.
Reducing Systematic Error
- Calibrate instruments against a known standard before use.
- Check for and correct zero errors before every measurement.
- Use a control experiment or blank measurement to characterise the background.
- Read scales straight on to eliminate parallax.
- Use more sophisticated models that account for effects you previously ignored (air resistance, thermal expansion, etc.).
- Cross-check results using a completely independent measurement method.
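Zero-error correction is the simplest of these fixes in practice. A minimal sketch, assuming a hypothetical balance that reads +0.12 g with nothing on the pan:

```python
# Zero (offset) error: the balance reads +0.12 g with the pan empty,
# so every raw reading is shifted up by that amount. (Assumed values.)
ZERO_READING = 0.12

raw_readings = [5.37, 5.35, 5.38, 5.36]  # grams, all biased by the offset

# Averaging alone would NOT remove the bias; subtracting the measured
# zero reading does.
corrected = [r - ZERO_READING for r in raw_readings]
mean_corrected = sum(corrected) / len(corrected)
print(f"corrected mass ≈ {mean_corrected:.3f} g")
```

Note the order of operations does not matter here, but the correction step itself is essential: the mean of the raw readings (5.365 g) is precisely wrong.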
How to Handle Each Type in a Lab Report
Random errors are handled through uncertainty propagation. Assign an uncertainty to each measured quantity, propagate it through your formula using the rules on this site, and report the final result with its propagated uncertainty. This is the standard treatment and what the calculator on this site is designed for.
Systematic errors require a different approach:
- Identify potential sources of systematic error in your experimental setup.
- Estimate the magnitude of each systematic effect where possible.
- Correct for it if you can (e.g. subtract a measured zero error).
- If you cannot correct for it, discuss it in your error analysis section and estimate its likely effect on your result.
- Never propagate systematic errors through the uncertainty formula as if they were random - the formula assumes random, independent errors.
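As an illustration of keeping the two treatments separate, here is a sketch for a hypothetical resistance measurement R = V/I: the random uncertainties go through the standard quadrature formula, while a suspected voltmeter offset is estimated and reported separately rather than folded into the propagation:

```python
import math

# Measured values with their RANDOM uncertainties (assumed numbers)
V, dV = 12.0, 0.1    # volts
I, dI = 2.40, 0.02   # amperes

# R = V / I, so relative random uncertainties add in quadrature:
# (dR/R)² = (dV/V)² + (dI/I)²   — valid only for independent random errors
R = V / I
dR = R * math.sqrt((dV / V) ** 2 + (dI / I) ** 2)
print(f"R = {R:.3f} ± {dR:.3f} Ω  (random)")

# A suspected +0.05 V voltmeter offset (SYSTEMATIC, assumed size) is
# estimated and discussed separately, not put through the formula above:
systematic_shift = 0.05 / I
print(f"possible systematic shift in R ≈ +{systematic_shift:.3f} Ω")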
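As an illustration of keeping the two treatments separate, here is a sketch for a hypothetical resistance measurement R = V/I: the random uncertainties go through the standard quadrature formula, while a suspected voltmeter offset is estimated and reported separately rather than folded into the propagation:

```python
import math

# Measured values with their RANDOM uncertainties (assumed numbers)
V, dV = 12.0, 0.1    # volts
I, dI = 2.40, 0.02   # amperes

# R = V / I, so relative random uncertainties add in quadrature:
# (dR/R)² = (dV/V)² + (dI/I)²   — valid only for independent random errors
R = V / I
dR = R * math.sqrt((dV / V) ** 2 + (dI / I) ** 2)
print(f"R = {R:.3f} ± {dR:.3f} Ω  (random)")

# A suspected +0.05 V voltmeter offset (SYSTEMATIC, assumed size) is
# estimated and discussed separately, not put through the formula above:
systematic_shift = 0.05 / I
print(f"possible systematic shift in R ≈ +{systematic_shift:.3f} Ω")
```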
Worked Example - Identifying Errors in a Pendulum Experiment
A student measures the period of a simple pendulum to determine g, the acceleration due to gravity.
Setup: Pendulum of length L = 0.800 ± 0.002 m, period measured by timing 20 complete oscillations with a stopwatch.
Random errors present:
- Reaction time starting and stopping the stopwatch (~0.2 s per reading, reduced to ~0.014 s per period by timing 20 oscillations)
- Slight variation in the amplitude of each swing
- Reading the length of the pendulum between millimetre marks
Systematic errors present:
- The pendulum formula T = 2π√(L/g) assumes small angle oscillations. If the amplitude is large, the true period is slightly longer - systematic overestimate of T, leading to underestimate of g.
- Measuring L to the centre of mass of the bob - if L is measured to the top of the bob, all measurements are consistently too short.
- Air resistance slightly lengthens the period - systematic overestimate of T.
Treatment:
- Random errors → propagate through the formula using uncertainty propagation.
- Systematic errors → discuss and estimate. The small angle error can be estimated from the large-angle correction formula. The air resistance effect is small for a dense bob and can be noted as negligible.
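The random-error part of the treatment can be sketched numerically. Assuming a hypothetical timing of 35.9 ± 0.3 s for the 20 oscillations (the ±0.3 s representing reaction time at start and stop), g follows from T = 2π√(L/g) rearranged to g = 4π²L/T², with relative uncertainties combining as (Δg/g)² = (ΔL/L)² + (2ΔT/T)²:

```python
import math

L, dL = 0.800, 0.002        # pendulum length (m), from the setup above
t20, dt20 = 35.9, 0.3       # time for 20 oscillations (s) — assumed values
T, dT = t20 / 20, dt20 / 20 # single period and its uncertainty

# g = 4π²L/T² ; the power of -2 on T doubles its relative uncertainty
g = 4 * math.pi ** 2 * L / T ** 2
dg = g * math.sqrt((dL / L) ** 2 + (2 * dT / T) ** 2)
print(f"g = {g:.2f} ± {dg:.2f} m/s²")
```

The systematic effects (large-angle correction, length measured to the wrong point, air resistance) do not appear in this calculation; they belong in the written error analysis, as described above.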
Quick Reference
| Property | Random Error | Systematic Error |
|---|---|---|
| Also called | Statistical error, precision error | Bias, accuracy error |
| Direction | Random (±) | Consistent (always + or always −) |
| Reduced by averaging | Yes (by 1/√n) | No |
| Appears in propagation | Yes | Only if explicitly modelled |
| How to fix | More measurements, better instruments | Calibration, correction |
| Shown in lab report | As ± uncertainty | As discussion in error analysis |
Frequently Asked Questions
Q1. What is the difference between random and systematic error?
Random errors are unpredictable fluctuations that scatter measurements around the true value - they vary each time you measure. Systematic errors are consistent biases that shift every measurement in the same direction by the same amount. Random errors are reduced by averaging; systematic errors are not.
Q2. Can systematic error be positive or negative?
Yes. A systematic error can make all your measurements too high (positive bias) or too low (negative bias). What defines it as systematic is that the direction and approximate magnitude are consistent across all measurements, not that it is always positive.
Q3. How do I know if my experiment has systematic error?
Compare your result with a known accepted value. If your result is consistently higher or lower than the accepted value by more than your random uncertainty, systematic error is likely present. You can also look for it by changing experimental conditions - if the result shifts when you change something that should not affect it, you have found a systematic effect.
Q4. Should I include systematic error in my uncertainty calculation?
Not using the standard propagation formula, which assumes random errors. If you can estimate the magnitude of a systematic effect, report it separately - for example, as a systematic uncertainty - and combine it with the random uncertainty using quadrature if your lab requires a combined uncertainty.
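If your lab does ask for a single combined number, the quadrature combination mentioned above is a one-liner. A sketch with assumed uncertainty values:

```python
import math

random_unc = 0.08      # from propagating random errors (assumed value)
systematic_unc = 0.05  # estimated size of an uncorrected systematic effect

# Preferred: report the two components separately
print(f"result: 9.78 ± {random_unc} (random) ± {systematic_unc} (systematic)")

# If a single number is required, combine in quadrature:
combined = math.sqrt(random_unc ** 2 + systematic_unc ** 2)
print(f"combined uncertainty ≈ {combined:.3f}")
```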
Q5. Does random error affect accuracy or precision?
Precision. Random error causes measurements to scatter, reducing reproducibility (precision). Systematic error affects accuracy - it shifts the result away from the true value regardless of how precise the measurements are.
The calculator handles random uncertainty propagation automatically. Use it to quantify your random errors - then apply the guidance above to identify and discuss the systematic errors in your experiment.
Open the Calculator →

Continue Learning
Complete Guide
The fundamentals of uncertainty propagation.
Addition & Subtraction
Working with absolute uncertainties.
Multiplication & Division
Master the relative uncertainty rule.
Powers & Exponents
Propagating through non-linear functions.
Significant Figures
How to round and report your results.
Standard Error vs Deviation
When to use σ versus σ/√n in your reports.
Correlated Variables
Handling dependencies with covariance.