Knowing What You Are Doing With Errors Can Help You Take Better Measurements
Every experiment conducted is surrounded by the possibility of errors occurring. Experimenting without any form of error occurring is close to impossible. To stay ahead, scientists have found that identifying and categorizing likely errors before an experiment helps form a more reliable plan for designing it.
On a broad scale, errors can be classified into systematic and random errors. Random errors are unavoidable because they arise from unpredictable fluctuations in the quantity being measured or in the measurement process itself. Systematic errors, by contrast, are best explained as errors caused by a fault in the measuring device: every measurement taken with that device will be wrong by the same amount.
Let us take the example of a scientist trying to note down wind speeds. Wind speed is a varying quantity and may rise or fall at different points in time. Repeated measurements will yield different readings, but they fluctuate around the true value.
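To see how averaging tames random error, here is a minimal Python sketch with made-up numbers: each simulated reading fluctuates randomly around a hypothetical true wind speed, and the mean of many readings lands close to it.

```python
import random
import statistics

random.seed(42)  # fixed seed so the sketch is repeatable

TRUE_SPEED = 12.0  # hypothetical true wind speed in m/s

# Each reading is the true value plus a random fluctuation.
readings = [TRUE_SPEED + random.gauss(0, 0.5) for _ in range(100)]

mean_speed = statistics.mean(readings)
print(f"mean of 100 readings: {mean_speed:.2f} m/s")
```

Individual readings may be off by a sizable fraction of a metre per second, but their average sits much closer to the true value.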
Systematic errors can be classified into two subcategories:
1. Offset errors: these occur when an instrument is not set to 0 before use, for example a weighing scale that does not read zero with nothing on it. Every reading taken with that scale carries the same offset.
2. Scale factor errors: these are proportional to the true value. A stretched-out measuring tape, for example, will always give readings that are off in proportion to how far the tape has stretched.
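Both subcategories can be undone once the fault is characterized. A minimal Python sketch, assuming a simple hypothetical error model (`measured = true * scale + offset`) with made-up correction values:

```python
def correct_reading(raw, offset=0.0, scale=1.0):
    """Undo a known offset and scale-factor error.

    Assumed error model: measured = true * scale + offset.
    """
    return (raw - offset) / scale

# Offset error: the scale reads 5 with nothing on it,
# so a raw reading of 15 corresponds to a true value of 10.
print(correct_reading(15.0, offset=5.0))

# Scale-factor error: a tape stretched 2% long reads 102
# for a true length of 100.
print(correct_reading(102.0, scale=1.02))
```

The correction only works because the error is systematic: the same offset and scale apply to every reading, so characterizing them once fixes them all.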
Similarly, consider the electronic noise in the circuit of an electrical instrument, or irregular readings taken manually from a scale marked in millimeters. Each repeated reading yields a different value, but the values cluster around the true value. These are random errors.
Random errors are hard to avoid because no measurement can be perfect. When measuring a varying quantity, it is hard to stop the change from occurring while you take a reading, no matter how precise your measuring device may be. In most cases, averaging many repeated readings will yield a value close to the true value.
How many times have you ended up using a faulty scale? With nothing on it, it reads 5; a true value of 10 reads as 15, and so on. This sort of consistent error, arising from a fault in the measuring device, is referred to as a systematic error.
The easiest way to tackle systematic errors is to ensure that your devices are properly calibrated. This matters because systematic errors introduce a constant bias into the mean or median of experimental data, and no statistical analysis of the data alone can detect that bias.
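A short Python sketch, using made-up numbers, illustrates why calibration rather than statistics is the cure: averaging a thousand biased readings cancels the random noise almost perfectly but leaves the constant bias fully intact.

```python
import random
import statistics

random.seed(0)  # fixed seed so the sketch is repeatable

TRUE_VALUE = 10.0
BIAS = 5.0  # hypothetical constant offset from a faulty instrument

# Each reading carries both random noise and the constant bias.
readings = [TRUE_VALUE + BIAS + random.gauss(0, 0.2) for _ in range(1000)]

mean = statistics.mean(readings)
# Averaging removes the noise, but the mean is still off by BIAS.
print(f"mean reading: {mean:.2f}  (true value: {TRUE_VALUE})")
```

Nothing in the readings themselves reveals the bias; only comparison against a calibrated reference would expose it.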
The accuracy and precision of an instrument are directly tied to these two types of error.
The precision of a measurement is how closely several measurements of the same quantity agree with each other. The precision is limited by random errors. It is usually determined by repeating the measurements. The accuracy of a measurement is how close the measurement is to the true value of the quantity being measured. The accuracy of measurements is often reduced by systematic errors, which are difficult to detect even for experienced research workers.
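These two ideas can be quantified directly: the spread (standard deviation) of repeated readings reflects precision, while the distance of their mean from the true value reflects accuracy. A sketch with invented data sets:

```python
import statistics

TRUE_VALUE = 50.0  # hypothetical true value

# Two invented sets of repeated measurements.
precise_but_biased = [52.1, 52.0, 52.2, 52.1, 52.0]  # tight cluster, off target
accurate_but_noisy = [49.0, 51.5, 50.2, 48.8, 50.6]  # scattered, centred on target

for name, data in [("precise but biased", precise_but_biased),
                   ("accurate but noisy", accurate_but_noisy)]:
    spread = statistics.stdev(data)                   # precision: agreement among readings
    error = abs(statistics.mean(data) - TRUE_VALUE)   # accuracy: closeness to the true value
    print(f"{name}: spread={spread:.2f}, error from true value={error:.2f}")
```

The first set would suggest a systematic error (small spread, large offset); the second a dominant random error (large spread, small offset).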
Quantifying the level of uncertainty in your measurements is a crucial part of measurement science. No measurement will be perfect, and understanding the limitations helps to ensure that you don’t draw unwarranted conclusions because of them.
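One common way to quantify that uncertainty is to report the mean of repeated readings together with the standard error of the mean. A minimal sketch with made-up readings:

```python
import math
import statistics

# Made-up repeated measurements of the same quantity.
readings = [9.8, 10.1, 9.9, 10.2, 10.0, 9.7, 10.1, 9.9]

mean = statistics.mean(readings)
# Standard error of the mean: sample standard deviation / sqrt(n).
sem = statistics.stdev(readings) / math.sqrt(len(readings))

# Report the best estimate together with its uncertainty.
print(f"measurement: {mean:.2f} ± {sem:.2f}")
```

Stating the result as "mean ± standard error" makes the limitation explicit, so readers can judge which conclusions the data actually support.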