In our daily lives we carry out many activities that, without our realizing it, have great scientific significance. Chief among them (and the subject of this article) is measurement, something we do every day: when we ask for a glass of water, when we check the time, when we weigh ourselves, among other occasions.
What does it mean to measure?
To measure is to compare a magnitude (a measurable property or attribute of a body) with another taken as a reference, called a standard, and to express how many times the former contains the latter. Keep in mind that in Physics a standard is a magnitude with a well-defined and well-known value that is used as a reference for measurement.
Normally, when we make a measurement, its value is accompanied by a unit.
What is a unit?
A unit is, in general, a standardized quantity of a certain physical magnitude; by standardized we mean that it takes its value from a standard. For example, suppose we want to measure the width of a box using the span of our hand; the span will be our measurement standard. If the width of the box is 4 spans, this measurement tells us that the distance from the right end of the box to the left end is 4 times the reference length called a "span".
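The idea of measuring as counting how many times a unit fits into a quantity can be sketched in a few lines of code; the numbers below (a 20 cm span and an 80 cm box) are hypothetical, chosen only to reproduce the "4 spans" example:

```python
# A minimal sketch of measurement as comparison (hypothetical values).
span_length_cm = 20.0   # our reference "unit": one hand span, in cm
box_width_cm = 80.0     # the quantity we want to measure

# Measuring means counting how many times the unit fits in the quantity.
width_in_spans = box_width_cm / span_length_cm
print(width_in_spans)   # 4.0 -> "the box is 4 spans wide"
```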
Now, can you imagine a standard called "hand span"? Measurements would vary considerably, since the span of the hand changes from person to person. For this reason, units of measurement must be valid, reproducible and invariant, so that they can be used and understood worldwide regardless of place and application, thus avoiding confusion, mistakes and difficulties.
Currently, the International System of Units (SI) defines the units of measurement worldwide. Only three countries have not adopted it as their primary or sole system: the United States, Liberia and Myanmar (Burma).
Types of measurements
Measurements can be direct or indirect:
Direct measurements are those obtained by directly comparing (with the help of measuring instruments) the unknown quantity of the physical magnitude with another known or standardized one. For example, the measurement of a person's height, the time elapsed between two events, among others.
Indirect measurements are needed when it is not possible to measure the value of a quantity directly with an instrument, so we must resort to calculations that relate variables that can be measured directly. For example, speed, pressure, density, volume, among others.
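A short sketch of an indirect measurement: density is not read off an instrument but computed from two direct measurements. The mass and volume below are hypothetical values chosen for illustration:

```python
# Indirect measurement sketch: density is computed, not read directly.
mass_g = 27.0       # measured directly with a balance (hypothetical)
volume_cm3 = 10.0   # measured directly, e.g. by water displacement (hypothetical)

# The indirect quantity is obtained from a relation between direct ones.
density = mass_g / volume_cm3   # g/cm^3
print(density)                  # 2.7
```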
If you have ever had to make several measurements of a magnitude (or know someone who has), you may have noticed that not all the values are equal. Surely you have asked yourself: which one is correct? Why do I get different values?
Normally, the readings obtained from the measurements are not exactly the same, even when they are taken by the same person, with the same method and in the same environment. No matter how much care is taken in the entire measurement process, it is impossible to express the result of the measurement as exact, that is, every measurement has an error.
The term error in Physics refers to the numerical difference between the measured value and the true value.
Kinds of measurement errors
Gross errors consist of mistakes in the reading and recording of information. They are commonly caused by the observer, whether due to lack of visual acuity, tiredness, emotional disturbance, carelessness, incorrect use of measuring instruments, poor position when reading (parallax error), among others.
Systematic errors are so called because they repeat systematically (constantly), with the same value and in the same direction, in all measurements made under the same conditions. They are caused by the measuring instruments themselves: poor calibration, wear, connection problems, etc. These errors can be corrected using mathematical equations.
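Because a systematic error shifts every reading by the same amount, a simple calibration equation can remove it. A minimal sketch, assuming a hypothetical scale that consistently reads 0.5 kg high:

```python
# Sketch of correcting a systematic error with a calibration equation.
offset_kg = 0.5  # hypothetical constant offset of a miscalibrated scale

def corrected(reading_kg):
    # A systematic error shifts every reading by the same amount,
    # so subtracting the known offset recovers the true value.
    return reading_kg - offset_kg

print(corrected(70.5))  # 70.0
```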
Random errors, also called stochastic, casual or circumstantial errors, are due to accidental effects. Their causes lie mainly in the environmental conditions in which the measurement is made; these include temperature, humidity, dust, vibrations, noise, etc.
These errors can occur in one measurement and not in another. Repeating the measurement as many times as possible is therefore a good way to correct them, since the average value is more reliable than any single reading. The greater the number of measurements, the closer the average value will be to the true value of the magnitude, since the random errors of the individual measurements compensate one another.
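The compensation of random errors by averaging can be sketched directly; the five readings below are hypothetical, scattered around a true length of 25.0 cm:

```python
# Random errors scatter readings around the true value, so averaging
# repeated readings tends to cancel them (hypothetical readings, in cm).
readings_cm = [25.1, 24.9, 25.2, 24.8, 25.0]

average = sum(readings_cm) / len(readings_cm)
print(average)  # close to the true value, 25.0
```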
Measurement error calculation
Random errors are calculated using statistical theory, developed by Gauss, which gives optimal results for a large number of measurements. It is also applied to small numbers of measurements, on the assumption that it remains valid there. A set of measurements is considered large when it contains 10 or more readings.
Average value: x̄ = (x1 + x2 + ... + xn)/n. Statistically, it represents the value closest to the true value.
Absolute error: Δxi = xi − x̄, the difference between each measurement and the average value. It is an indicator of the imprecision of the measurement.
Relative error: εr = |Δx|/x̄, the quotient between the absolute error and the average value (expressed as an absolute value, regardless of the sign of the absolute error). It is an indicator of the quality of a measurement.
Percentage error: ε% = εr × 100, the relative error multiplied by one hundred and expressed as a percentage.
For example, if we obtain a relative error of 0.002 in the measurement of a length, it means that there is an error of 2 mm per metre, and the measurement carries a 0.2% error.
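The arithmetic of that worked example can be checked in a few lines, taking a measurement of about one metre as the reference value:

```python
# Reproducing the worked example: a relative error of 0.002 in a length.
average_value_m = 1.0   # the measurement is about one metre
relative_error = 0.002

# Rearranging the quotient: absolute error = relative error * average value.
absolute_error_m = relative_error * average_value_m
percent_error = relative_error * 100

print(absolute_error_m * 1000)  # 2.0 -> 2 mm per metre
print(percent_error)            # 0.2 -> a 0.2 % error
```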
The final result of the measurement of a magnitude can be written as: x = x̄ ± Δx
Where the symbol ± determines the limits within which the magnitude of the measurement lies:
- The “+” sign indicates the upper limit of the measurement (error by excess) and the “-” sign the lower limit (error by defect).
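Putting the whole procedure together, a final result in the average ± error form can be sketched as follows; the three readings of a length in cm are hypothetical:

```python
# Sketch of reporting a final result as average ± absolute error,
# using hypothetical repeated measurements of a length in cm.
readings = [12.1, 12.3, 12.2]

average = sum(readings) / len(readings)                  # the mean value
max_abs_error = max(abs(r - average) for r in readings)  # largest deviation

# The result names the interval within which the magnitude lies.
print(f"L = ({average:.1f} \u00b1 {max_abs_error:.1f}) cm")
```

Running it reports the length as 12.2 cm with limits 0.1 cm above (excess) and below (defect).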