Measurement of Uncertainty

The corresponding state of knowledge is best described by means of a probability distribution over the set of possible values for the measurand. This probability distribution incorporates all the information available about the measurand, and indeed expresses how well one believes one knows the measurand's true value (a felicitous interpretation due to Dr. Charles Ehrlich, of NIST), and fully characterizes how the degree of this belief varies over that set of possible values. It is this distribution that imparts meaning to the parameter that is chosen to quantify measurement uncertainty. When the measurand is a vector, rather than a scalar quantity, or when it is a quantity of even greater complexity (for example, a function, as in the transmittance spectrum of an optical filter), then the parameter that expresses measurement uncertainty will be a suitable generalization or analog of the standard deviation.
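To make this concrete, here is a minimal sketch, in Python with invented numbers, of how such a state of knowledge can be represented by samples from a probability distribution: the standard deviation serves as the uncertainty parameter for a scalar measurand, and the covariance matrix as its analog for a (two-component) vector measurand.

```python
import numpy as np

rng = np.random.default_rng(1)

# Scalar measurand: the state of knowledge is a probability distribution
# over possible values; here a normal distribution centred on a hypothetical
# best estimate 10.03 with standard deviation 0.02 (all numbers invented).
values = rng.normal(10.03, 0.02, size=100_000)
print(values.mean(), values.std(ddof=1))  # best estimate and standard uncertainty

# Vector measurand: the analog of the standard deviation is the covariance
# matrix of the joint distribution over possible value pairs.
xy = rng.multivariate_normal([1.0, 2.0], [[0.04, 0.01], [0.01, 0.09]], size=100_000)
print(np.cov(xy, rowvar=False))
```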


The basic idea is that truth refers to the attribution of a quantity value to a modeled quantity, while uncertainty characterizes both such attribution and the degree of similarity between the model of that quantity and the actual, empirical quantity, and therefore also their combination, i.e., the attribution of a quantity value to the empirical quantity.

A simple, but paradigmatic, example comes from the way in which the problem of measuring the area of a paper sheet is solved. A twofold idealization is customarily introduced, according to which the object under measurement is modeled as a rectangle, and the measurand is defined in terms of a general quantity, area, of that object, which can be evaluated by positive real numbers and is not influenced by other quantities.

The solution is then obtained by measuring the lengths of the two sides of the sheet and taking their product. Hence, in this model the object under measurement has an individual area, whose true value is obtained by multiplying the lengths of its sides. This conclusion shows that the realist and the instrumentalist standpoints are not required to be thought of as mutually exclusive. (This role has already been acknowledged in Kuhn and Suppes. For an extensive discussion of this issue in the recent epistemological debate see Psillos, where scientific realism is defended, and van Fraassen, where a strong version of the opposite position, constructive empiricism, is developed.)
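Returning to the paper-sheet model, a minimal numeric sketch (side lengths and uncertainties are invented for illustration) shows how the model true value of the area follows from the two side measurements, with a first-order propagated uncertainty:

```python
import numpy as np

# Paper-sheet example: the sheet is modeled as a rectangle, so the model
# true value of the area is the product of the side lengths. The side
# estimates and standard uncertainties below are hypothetical.
a, u_a = 0.2971, 0.0002  # metres
b, u_b = 0.2105, 0.0002  # metres

area = a * b
# First-order propagation for a product of independent inputs:
# u(A)^2 = (b*u_a)^2 + (a*u_b)^2
u_area = np.hypot(b * u_a, a * u_b)
print(f"area = {area:.6f} m^2, standard uncertainty = {u_area:.6f} m^2")
```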

Still, this position, despite its interest, constitutes only a partial solution to the problem of the relations between error and uncertainty in measurement, and this for two reasons: first, because the recent developments of the operational perspective in metrology have introduced new motivations for casting doubt on the concept of a unique true value; second, because the analysis can be deepened to show that in the very case of measurement the concept of true value has application with reference not only to modeled quantities, but also to actual quantities, i.e., to quantities such as indications and reference standards.

The definition of uncertainty of measurement [...] is, however, not inconsistent with other concepts of uncertainty of measurement, such as (i) a measure of the possible error in the estimated value of the measurand as provided by the result of a measurement; (ii) an estimate characterizing the range of values within which the true value of a measurand lies.

Nevertheless, whichever concept of uncertainty is adopted, an uncertainty component is always evaluated using the same data and related information. Hence, the concept of error can be avoided here and the philosophical assumptions concerning truth and true values discharged: there is no need to refer to a supposed true value, since there is no possibility of evaluating the distance between the estimated value and the true value.

This subject will not be discussed further here. This assumption leads to a single rule for combining all components of uncertainty — the so-called law of propagation of uncertainty — which is thus applicable to components obtained by both Type A and Type B evaluations.
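As an illustration of this single rule, consider a sketch with invented data: a Type A component evaluated statistically from repeated indications and a Type B component derived from an assumed rectangular distribution enter the combination in exactly the same way.

```python
import numpy as np

# Hypothetical repeated indications for a Type A evaluation:
obs = np.array([9.99, 10.02, 10.01, 10.00, 10.03, 9.98])
u_a = obs.std(ddof=1) / np.sqrt(obs.size)  # standard uncertainty of the mean

# Type B evaluation: suppose a certificate states only that a correction
# lies within +/- 0.01; a rectangular distribution is then assumed, whose
# standard deviation is the half-width divided by sqrt(3).
u_b = 0.01 / np.sqrt(3)

# The law of propagation of uncertainty treats both components alike;
# with unit sensitivity coefficients and independence they combine as:
u_c = np.hypot(u_a, u_b)
print(u_a, u_b, u_c)
```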

Regarding point (2), the fact that the information obtained by measurement is supposed to be related to a measurand introduces an unavoidable interpretive component into the measurement problem, which then requires the measurand to be specified.

This implies giving a description of both the object under measurement and the environment expected when the measurement takes place. Hence, in principle a measurand cannot be completely described without an infinite amount of information. The consequence is that the measurand, now defined as the quantity intended to be measured (JCGM 200, 2.3), can never be specified with complete exactness, so that some definitional uncertainty always remains.

These three points highlight some significant tensions in the GUM approach. The concept of definitional uncertainty is introduced, but then deprived of any operative import, since the definitional uncertainty is simply assumed to be negligible with respect to the other components of measurement uncertainty. It is surprising to declare both that the true value is eliminable, since unknowable, and identical with the best estimate of the value, which is evidently known.

Alternatively, it is surprising to declare both that the true value is the value of the measurand, identical with the best estimate of the value of the measurand, and that this value is not truly representing the measurand. In addition, there is a further price to be paid for this emphasis on the operational side of the problem.

Still, accuracy is customarily listed among the features of measuring instruments, and a numerical value for it is indicated. The ways out which are sometimes adopted to solve the puzzle, e.g., [...]. As a synthesis: the introduction of definitional uncertainty, which leads us to improve our initial picture, where a model true value is introduced, can be viewed as a positive contribution of the GUM; but the apparently deflationist strategy underlying the elimination of the concept of true value, instead of clarifying the frame, seems to obscure important characteristics of the measurement process and to neglect the fact that not all quantity values are unknowable, as is shown, e.g., by the operative true values of indications discussed below.

Three basic conditions [...]. This condition prevents a never-ending recursive process: were the output quantity subject to measurement in its turn, a further transducer would then be required (a measuring instrument whose input quantity is length, in the case of a spring), and for its output quantity this condition should apply. The function dout corresponds to the evidential component of measurement.

Let us analyze it a little more thoroughly. In any measuring instrument there must be a point at which the relation between a quantity and a quantity value is assumed as given, as a pure datum in an operative sense, i.e., accepted without being itself the result of a further measurement.

In fact, measuring instruments are designed so as to make the mapping from indications to indication values straightforward, being typically implemented as a process of pattern recognition, performed by human beings or technological devices: the observation of the coincidence of marks, the classification of an electric quantity to a quantized level to which a digital code is associated, the counting of the right answers in a test, and so on.
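A minimal sketch of one such mapping, a hypothetical analog-to-digital classification of an electric quantity to a quantized level (the function name and parameters are illustrative assumptions, not drawn from any source):

```python
def indication_value(voltage: float, full_scale: float = 10.0, bits: int = 12) -> int:
    """Classify an electric quantity into a quantized level and return its digital code."""
    levels = 2 ** bits
    code = round(voltage / full_scale * (levels - 1))
    return min(max(code, 0), levels - 1)  # clamp to the representable codes

print(indication_value(3.21))  # -> 1314 with these hypothetical defaults
```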

In this way, the information conveyed by the mapping from indications to indication values is the best one which can be achieved by means of the instrument.

Accordingly, the obtained indication value can properly be said to be the true quantity value of the corresponding indication. Of course, this is a revisable, operative truth, so that the term operative true quantity value could be adopted to denote this entity, which has to be systematically distinguished from the model true quantity value introduced before. On the other hand, such operative true values are indication values, not measurand ones (length values instead of force values, in the case of a spring).

Hence, such truth is still not sufficient for measurement. This problem is solved by means of instrument calibration.

Interestingly, the description of the instrument operation is the same for measurement and for calibration: the instrument interacts with an object, and an indication is obtained as the result of the transduction of an input quantity of the object. Still in a simplified model, the conditions for calibration are as follows. (For example, Bentley (p. [...]) writes that «in an ideal measurement system, the measured value would be equal to the true value». The same statement applies to the term «indication»; in the current document such a distinction is made.)

The function dref corresponds to the evidential component of calibration. Like MC2, it supposes the operative availability of unproblematic values, assigned by convention or through a chain of responsibility delegation, typically guaranteed by a calibration hierarchy (sometimes called a traceability chain) from a primary measurement standard. It is in this way that in a calibrated measuring instrument the empirical component is reliably linked to the evaluation component.
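The following sketch illustrates this linkage for the spring example, under the assumption of a linear transduction law; the reference values and indications are invented, and an ordinary least-squares fit stands in for the calibration procedure.

```python
import numpy as np

# Calibration sketch for the spring example: standards of known force
# (reference values assigned through a traceability chain) are applied,
# the length indications are recorded, and a map from indication values
# to measurand values is fitted. All data are hypothetical.
ref_force = np.array([0.0, 1.0, 2.0, 3.0, 4.0])              # newtons
indications = np.array([0.100, 0.120, 0.141, 0.159, 0.180])  # metres

slope, intercept = np.polyfit(indications, ref_force, 1)  # assumed linear law

def measured_force(length_m: float) -> float:
    """Apply the calibration map: indication value -> measurand value."""
    return slope * length_m + intercept

print(measured_force(0.150))  # roughly 2.5 N with the data above
```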

This point is delicate. In some presentations the whole problem of measurement is introduced by assuming that the input of the measuring system is the true measurand value (see, e.g., [...]). This is an overly simplified, and actually misleading, position: being empirical devices, measuring instruments interact with quantities of objects, not with values.

Any criticism of true values based on this assumption is thus well founded, but, as we are going to argue, this does not imply that truth, and therefore error, must vanish from the scope of measurement. Indeed, neither calibration nor measurement is aimed at producing information on the metrological capability of a measuring instrument.

To this goal two basic processes in particular can be designed:
- an input quantity qin is repeatedly applied to the measuring instrument, and a scale statistic (e.g., the standard deviation) of the resulting indication values is computed;
- an input quantity qin whose value v is known is applied, and a location statistic (e.g., the mean) of the resulting measured values is compared with v.
The simplified model presented here refers to the operatively much more frequent situation in which the object under measurement and the measurement standard are not required to be simultaneously present, under the hypothesis that the measuring instrument is stable enough to maintain in measurement the behavior which was characterized in calibration.

In this sense, the model is about measurement performed as an asynchronous comparison of the object under measurement and the measurement standard. Under these assumptions, any non-null value of the given scale statistic has to be considered as an indicator of errors in the transduction behavior of the measuring instrument.

The second process is more demanding, since it requires not only the stability of qin but also the knowledge of its value v, as typically obtained by means of a measurement standard, together with the calibration of the measuring instrument.

Under these assumptions, any difference between v and the location statistic has to be considered as an indicator of errors introduced by the fact that the calibration information is not correct anymore.
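Both verification processes can be sketched in a few lines (hypothetical readings; the scale and location statistics are here taken to be the sample standard deviation and the sample mean):

```python
import numpy as np

# Sketch of the two verification processes (data hypothetical).
readings = np.array([4.98, 5.03, 5.01, 4.99, 5.02, 5.00])

# Process 1: q_in held stable; a non-null scale statistic indicates
# dispersion errors in the transduction behavior.
scale_stat = readings.std(ddof=1)

# Process 2: the value v of q_in is known from a measurement standard;
# a difference between v and a location statistic indicates that the
# calibration information is no longer correct.
v = 5.00
bias = readings.mean() - v

print(f"scale statistic: {scale_stat:.4f}, location offset: {bias:+.4f}")
```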

Still, we have also highlighted that (1) it is possible to be uncertain as to the model to choose in order to analyze a given portion of the world, and that (2) it is possible to admit true values with respect to indication values and reference values. Accordingly, the diagram presented above (the modeled quantity is uncertainly similar to the empirical quantity; the true value of the quantity is referred to a value) does not provide a sufficiently general account of the significance of the concepts of true value and uncertainty in measurement: some uncertainty affects the modeled quantity, and the concept of truth can be applied to the values of some actual quantities too. As a conclusion, the following synthesis, calling for further developments, can be offered.

In a different context (ISO 5725), building on measurement precision and trueness, measurement accuracy is thought of as an overall indicator, which summarizes the information conveyed by both precision and trueness. How such information can be synthesized is outside the scope of this paper. Since such quantities are produced by a designed process, it appears legitimate to consider the values assigned to them as their operative true values.

In addition, if the simplified model of measurement introduced so far is embedded in a more realistic context, the role of models in measurement has to be taken into account. The idea is that the quality of measurement results is affected not only by measurement errors but also by other causes, which in the specific context of the given measurement are not empirically controllable and therefore can be evaluated only on the basis of given interpretive hypotheses.

The resulting effects are expressed in terms of uncertainty in measurement results. Among such other causes are the following (these are just hints: each of them would require a much more thorough analysis). The working standards exploited in instrument calibration are customarily calibrated in their turn, through a calibration hierarchy. While each step of this process is a calibration, the trueness-related errors might not combine linearly.

Hence, instead of trying to conceive a complex super-model including the whole calibration hierarchy, the expert knowledge of the resulting effects can be exploited, and elicited in terms of an uncertainty of the quantity value of the working standard. The transduction behavior of the measuring instrument is not perfectly characterized. If the transducer is assumed to obey a parametric law (typically: that it is linear, i.e., specified by a gain and an offset), the imperfect knowledge of the parameter values contributes a further uncertainty to the result.
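A sketch of this contribution, assuming a hypothetical linear law with invented parameter estimates and uncertainties:

```python
import numpy as np

# Sketch: the transducer is assumed to obey a linear law,
#   force = gain * length + offset,
# whose parameters are known only imperfectly from calibration.
# All estimates and standard uncertainties are hypothetical.
gain, u_gain = 50.0, 0.5       # N/m and its uncertainty
offset, u_offset = -5.0, 0.05  # N and its uncertainty
length = 0.150                 # m, an indication value

force = gain * length + offset
# First-order propagation, treating the parameter estimates as independent:
u_force = np.hypot(length * u_gain, u_offset)
print(force, u_force)  # 2.5 N with a propagated uncertainty of ~0.09 N
```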

The transducer exploited in measurement is not perfectly selective: the transduction process is perturbed by several influence quantities, so that its output depends not only on the stated input quantity. While in principle each influence quantity might be measured in its turn, and its effects properly characterized and then eliminated, this would lead to a never-ending recursion, since in the measurement of an influence quantity some further influence quantities should be taken into account.

Typically, some simplifying hypotheses are assumed on the effects of the influence quantities, expressed as an uncertainty of the indication value, as in the sketch below.
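For instance (all coefficients invented), a linear temperature sensitivity with a stated uncertainty can stand in for a full characterization of an influence quantity:

```python
# Sketch: an influence quantity (say, ambient temperature) perturbs the
# transduction. Rather than measuring it exactly (which would restart the
# regress), a simplifying hypothesis is assumed: a linear sensitivity with
# a stated uncertainty, folded into the indication's uncertainty budget.
# All coefficients are hypothetical.
indication = 10.000
k_temp = 0.002           # assumed sensitivity per kelvin
dT, u_dT = 1.5, 0.5      # estimated temperature deviation and its uncertainty

corrected = indication - k_temp * dT   # simplified correction
u_influence = abs(k_temp) * u_dT       # residual uncertainty of the hypothesis
print(corrected, u_influence)
```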

Moreover, the fact has to be admitted that the input quantity of the transducer might not be the measurand, i.e., the quantity intended to be measured. In such a case a model has to be adopted to infer the measurand value from the available information and, of course, this model could be acknowledged as implying some simplifications, whose effects can be expressed as an uncertainty on the stated measurand value. The metrological model of the measurand is the source of an important kind of uncertainty, which cannot be eliminated by experimental means.

If it is impossible to define the quantity that is intended to be measured, then it is, even in principle, impossible to determine its quantity value. These examples highlight that measurement uncertainty can be taken as an encompassing concept by means of which the quality of measurement results is expressed, taking into account both the effects of measurement errors and the approximations due to measurement-related models.


