A metrological reflection on uncertainty in the use of maps instead of global parameters

Recently, there has been an abnormally extensive use of global parameters, not only in economic fields but also in scientific ones such as meteorology and climate science. They are supposed to convey clearly the most significant meaning of the changes in numerical parameters. However, synthetic global parameters may miss the complexity of the issue that they are intended to qualify, thereby trivializing their meaning. The paper argues that the effect goes beyond trivialization: the evaluation of a map, e.g. an Earth map, which is often contrasted with such parameters, is less sensitive to the uncertainty that shall always be associated with any type of information, in particular numerical information, when an uncertainty evaluation is attached to the parameter value. In most cases, and especially when the map shows a great variety of situations, a visual examination offers, even to a scientist, different methods for a more reliable evaluation. General cases are reported to exemplify this statement.


Introduction
Recently, there has been an abnormally extensive use of global parameters, not only in economic fields but also in scientific ones such as meteorology and climate science, often with some political intent. They are supposed to convey, in a clear and simple-to-understand way, the most significant meaning of the changes in numerical parameters.
When informing about global trends and related parameters, the common practice consists in summarizing them via global parameters, typically supposed to represent the mean numerical value of a large dataset. In previous papers, the author has already discussed the issue that the evolution in time of some of these parameters is affected by an inconsistently low evaluation of their uncertainty, a "quality" factor that shall be associated with every piece of human knowledge, which is always partial and often insufficient. This shall be done in accordance with the methodology required by the science of measurement, in particular by metrology, whose main objective is to study ways of improving the precision of measurement results as the phenomena become better understood [1].
Here, the reflections are restricted to a comparison between the common tool above, the use of global parameters, and a different tool for the same evaluation purpose: the use of maps of the parameters in question over the whole spatial extension of the Earth's surface, examined against the uncertainty factor. The author has already used this tool in a couple of studies [2, 3] and checked its efficacy.

The use of global parameters for meteorological parameters
The international body involved in determining the meteorological parameters is the WMO (World Meteorological Organization), which collects hundreds of parameters with a measurement frequency that can reach several times per hour, storing them in what today can be considered a Big Data database. To cite three popular parameters: the ground mean temperature (SAMT, the Surface Average Mean Temperature), the snow/ice coverage and the ocean level variation, all of which are extremely complicated to obtain [4]. Millions of weather stations are sparsely distributed over the whole land surface, while different methods are used for liquid surfaces. Nevertheless, the mean distance between stations is large enough to require subsequent mathematical/geometrical interpolation, adding computed points to the original network in order to create a sufficiently dense one. For each of these additional points, the uncertainty necessarily associated with the data of each measurement station shall be integrated (typically increased) with the evaluation of the interpolation uncertainty.
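As a concrete illustration of this propagation step, the sketch below interpolates a value between stations by inverse-distance weighting and combines the propagated station uncertainties in quadrature with an interpolation term. The weighting scheme, the station data and the `sigma_interp` value are illustrative assumptions, not the actual WMO procedure.

```python
import math

def idw_interpolate(stations, query, power=2.0, sigma_interp=0.3):
    """Inverse-distance-weighted interpolation at a grid point, propagating
    station uncertainties and adding an assumed interpolation uncertainty.

    stations: list of (x, y, value, u) tuples, u = standard uncertainty
    query:    (x, y) coordinates of the point to be interpolated
    """
    weights, values, uncs = [], [], []
    for (x, y, v, u) in stations:
        d = math.hypot(query[0] - x, query[1] - y)
        if d == 0.0:
            return v, u  # query coincides with a station: no interpolation
        w = d ** -power
        weights.append(w); values.append(v); uncs.append(u)
    wsum = sum(weights)
    value = sum(w * v for w, v in zip(weights, values)) / wsum
    # Propagate station uncertainties through the weighted mean, then
    # combine in quadrature with the interpolation uncertainty itself.
    u_prop = math.sqrt(sum((w / wsum * u) ** 2 for w, u in zip(weights, uncs)))
    u_total = math.sqrt(u_prop ** 2 + sigma_interp ** 2)
    return value, u_total
```

Note that the interpolated point always carries an uncertainty at least as large as the interpolation term, consistent with the "typically increased" remark above.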

Overall, a database of values can be obtained, each value having an associated uncertainty, that can be numerically treated to get, e.g., a mean value (like the SAMT; see [5] for a discussion of it). It is affected by uncertainty, as reported, e.g., by the IPCC [6]: their detailed statistical procedure for the calculation is not specifically reported and does not include an associated Uncertainty Budget (UB), as normally supplied in other fields and always in metrology [7]. A fit of the database is often performed, but the only uncertainty component it can output is the parameter called the "standard deviation" (s.d.) of the data, or its double at 97.5%. It only provides an indication of the consistency of the fitting trend and guides toward the "best fit", i.e. the one providing the minimum s.d. However, the fit takes into account only the value of each data point, not its associated uncertainty. Non-parametric methods should be used to take the full information into account, yet they are rarely used. In all cases, a fit does not provide the combined uncertainty of any summary parameter, but only a part of the random uncertainty components.
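The limitation described above can be made concrete with a minimal sketch: an ordinary least-squares straight-line fit uses only the data values and reports a residual s.d., while a weighted fit also uses the declared per-point uncertainties and can therefore reach a different result. The dataset and its uncertainties are invented for illustration.

```python
import math

def fit_line(x, y, u=None):
    """Least-squares straight-line fit y = a + b*x.
    If per-point standard uncertainties u are given, weights 1/u^2 are
    used; otherwise all points get equal weight (ordinary least squares)."""
    w = [1.0] * len(x) if u is None else [1.0 / ui**2 for ui in u]
    S = sum(w)
    Sx = sum(wi * xi for wi, xi in zip(w, x))
    Sy = sum(wi * yi for wi, yi in zip(w, y))
    Sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    Sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    delta = S * Sxx - Sx**2
    a = (Sxx * Sy - Sx * Sxy) / delta
    b = (S * Sxy - Sx * Sy) / delta
    return a, b

x = [0.0, 1.0, 2.0, 3.0]
y = [0.1, 1.1, 1.9, 3.1]
u = [0.05, 0.05, 0.5, 0.5]      # first two points far more accurate

a_ols, b_ols = fit_line(x, y)    # ignores u entirely
a_wls, b_wls = fit_line(x, y, u) # uses the declared uncertainties
# Residual s.d. of the unweighted fit: a dispersion measure only,
# blind to the per-point uncertainty budget.
res = [yi - (a_ols + b_ols * xi) for xi, yi in zip(x, y)]
sd = math.sqrt(sum(r * r for r in res) / (len(x) - 2))
```

The residual s.d. reported by the unweighted fit says nothing about the declared uncertainties, which is precisely why it cannot stand in for a combined uncertainty.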

Plotting the data into a map
Plotting the data on a map does more than merely pass from a mathematical type of representation to a geometrical one, the latter being generally more significant for the "geometrical minds" of scientists as opposed to the "mathematical minds".
In such a representation, the measured values of the parameter(s) of interest are superposed on the basic geographical information (the map). The most efficient way, in view of the subsequent analysis, is generally not to use a continuous gradation of the map colours to represent values, but to discretise the values into (small) steps. The result is that the colour map is formed of small quadrangles, most often squares, of uniform colour, each representing the same (small) range of values, to which the "central" value of the measured range is attributed.
Except in exceptional cases, the latter operation makes the uncertainty associated with the value of each single element of area practically ineffective: in specific cases, it would be enough to increase the surface of those elements accordingly.
In contrast to what is done in Section 2, a picture of the spatial distribution of the dataset is obtained instead of only a tabular representation of the measured values at each single place. It might be considered a form of averaging that bypasses the numerical values of the uncertainty, and this is sufficient in meteorology for the mostly qualitative analyses subsequently made of the obtained results. A geometrical examination is generally much more informative than a table of numbers, and the approximation necessarily produced by the discretisation is generally sufficient to compensate for the lack of a numerical indication of the uncertainty of the original data.
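The discretisation step described above can be sketched as follows: each measured value is assigned to the centre of its value bin, as done when building a stepped-colour map. The bin width is an assumed illustrative choice; published anomaly maps typically use steps of a few tenths of a degree.

```python
import math

def discretise(values, step=0.5):
    """Assign each measured value to the centre of its (small) value bin,
    as done when building a stepped-colour map. `step` is the bin width,
    an illustrative assumption."""
    out = []
    for v in values:
        k = math.floor(v / step)           # index of the bin containing v
        out.append((k + 0.5) * step)       # centre of [k*step, (k+1)*step)
    return out
```

Once binned this way, any per-value uncertainty smaller than the bin width no longer affects the map, which is the effect discussed above.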
Conversely, getting back from the map to a list of numerical values is always possible, with due precautions, via informatics means, as done by the author in a couple of other cases [2, 3]. This is an additional bonus for the curious scientist who likes or needs to retrieve the numerical values for his or her mathematical analyses.
When using maps, on the other hand, the right map representation shall also be chosen, since not all are equivalent in all cases [8].

Some examples showing why a map conveys more information than an analytical treatment of the database
Fig. 1 shows a set of several World maps of different types, where the colours indicate the distribution of recent levels of temperature variation against a previous reference period: not all use the same reference, nor do all refer to the same end year of the period.
Notice that normally all the maps use the Mercator/Robinson-type representation of the Earth, as can be inferred from the large size of the Polar regions. That means that the map does not represent portions of the Earth's surface proportionally, with differences that are clearly listed in Table 1. There may be a limitation in correctly comparing the real proportion of surfaces showing different temperature variations [3]. The equation for Peters' surface corrections is y = -0.0001x^2 + 0.0201x - 0.0117, where x is the latitude from 0° to 90° and y is the corrected proportion in Peters' projection, for an s.d. better than 0.1°.
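For reference, the quadratic correction quoted above can be evaluated directly; the function below simply implements the stated fit, with no assumptions beyond the coefficients given in the text.

```python
def peters_correction(lat_deg):
    """Evaluate the quadratic fit y = -0.0001*x^2 + 0.0201*x - 0.0117,
    giving the corrected surface proportion in Peters' projection for
    latitude x in degrees (valid for 0..90), as quoted in the text."""
    x = lat_deg
    return -0.0001 * x**2 + 0.0201 * x - 0.0117
```

At x = 90° the fit returns approximately 0.987, i.e. close to the full proportion, consistent with its role as a cumulative surface correction.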
This can be a reason for the variety of distributions shown in the maps, but it can also arise from differences in the datasets. Then, in cases like that of the SAMT, the correct map type should be used, i.e. Peters' or the recent Equal Earth one [9, 10], instead of the conventionally used Mercator or Robinson ones, which remain the standard in the scientific literature.
A first lesson learned from this figure is that there is no sufficiently univocal estimate of the patterns and of their changes, a feature that only the use of maps and their comparison can clearly show.
In particular, the location of the minima (even negative ones, i.e. meaning a lowering of temperature) and of the maxima is not univocally determined, especially for the latter: an extended maximum in the Arctic Pole region is made evident by the melting of the sea ice, but it may look much more extended than it actually is when an incorrect representation of the surface ratios (see above) is used.
A first comparison, for two NOAA [11] maps, is reported in Fig. 1a, showing the consistency of two evaluations made 2 years apart but using references 10 years apart or much closer: the first evaluates year 2020 against years 1981-2010, while the second evaluates year 2022 against years 1991-2020.
The next comparison, in Fig. 1b, is for HadCRUT [12], reported in a single map for 2020, which can be compared with one of the NOAA maps.
The third comparison, in Fig. 1c, is for NASA [13], reported in four maps. The first two maps are for 2022: one with reference to a short old period, 1951-1955, the other with reference to a wide period, as in Fig. 1a, namely 1981-2000. The third is for 2020 with reference to the period 1981-2000. The last is taken from a NASA video built as a sequence of the map changes averaged every 5 years: here the map is for the period 2018-2022.
The aim of this paper is not to discuss the evident differences for similar periods, but only to provide evidence of the fact that the distribution of the surface temperature is a kind of evaluation that is difficult to obtain from a single global parameter.
In such a situation, the importance of the uncertainty of the single measured points constituting the database is strongly limited and can normally be disregarded, a useful feature when the uncertainty evaluation is controversial. In principle, data can also be retrieved back from each map, though they are often affected by a larger uncertainty due to the coarser discretisation of their geographical coordinates.
Nevertheless, an important and useful fact is that the (relative) values of the surfaces reported with the same colour, i.e. with the same temperature, the one assigned to that colour, can be retrieved. That possibility has been tested by the author in two circumstances: one for the global snow/ice covering of the surface [2], and the other for the iso-colour surfaces, using the pixel as the unit, for comparing the SAMT value obtained with the one found in the literature, which in that case required using values from an iso-surface (Peters') projection corrected from a Robinson one [3]. Table 1 reports the comparison for the two representations.
In addition to sufficient expertise in measurement science, such a method also requires sufficient expertise in computer graphics. Based on it, the SAMT can be retrieved with an uncertainty estimated within about ±5%. Differences of about 20% from the published values are possible for different maps: e.g., against the current SAMT value considered the reference, +1.1 °C, a difference of 20% means a maximum spread of +(0.9-1.3) °C, which still falls within the real uncertainty of the SAMT that a metrological analysis considers more reliable, [+1.1 ± (0.5-1)] °C, based on WMO indications [4, 13].
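The pixel-counting retrieval described above can be sketched as a pixel-weighted mean over iso-colour surfaces. The colour-to-value mapping and the counts below are purely illustrative; note that pixel counts are a valid area proxy only on an equal-area projection such as Peters'.

```python
def map_weighted_mean(pixel_counts):
    """Retrieve a global mean value from a stepped-colour map by counting,
    for each colour, the number of pixels (a surface-area proxy on an
    equal-area projection) and averaging the value assigned to each
    colour, weighted by those counts.

    pixel_counts: dict mapping a colour's assigned value (e.g. temperature
                  anomaly in °C) to its pixel count; illustrative data only.
    """
    total = sum(pixel_counts.values())
    return sum(v * n for v, n in pixel_counts.items()) / total
```

For example, a map whose iso-colour surfaces split evenly between the +0.5 °C and +1.5 °C bins yields a retrieved mean of +1.0 °C.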

Conclusions
The retrieval of several global parameters can be achieved starting from maps of the distribution of the relevant parameter over the full spatial extension of the map, with an uncertainty comparable to the one attributed to the parameters by a direct analysis of their database. However, all too often the latter is affected by under-estimation, due to the use of evaluation methods not considered acceptable by measurement science and by the metrological community, or is controversial.
In fact, maps also allow a much more extensive and complete analysis of the collected information, qualitative and quantitative, such as its distribution over the whole extension of the map: this especially concerns the extent of non-uniformity of the values and the possible relations/reasons for it. In the reported example of the SAMT, that non-uniformity is particularly evident, so that the scientific meaning of the global parameter can become significantly weak and rather irrelevant, irrespective of a possibly controversial uncertainty affecting the usual dataset-based analysis. It is particularly important to avoid that situation when the analysis is directed at making forecasts and decisions [14, 15].

Table 1
Comparison of the position of the latitudes on a linear scale and on Peters' scale (a)
(a) The Peters projection is basically the projection of a vertical circle arc onto the radius

Independent scientist, former Research Director in Metrology at the National Research Council, Rome, Italy
frpavese@gmail.com