Detection and compensation of systematic errors in a null-screen corneal topographer with artificial intelligence

Author(s):  
Andrés Peña ◽  
Manuel Campos-García

2021 ◽  
Vol 1 (1) ◽  
Author(s):  
Patrick Henz

Abstract
“In civilized life, law floats in a sea of ethics”, a quote by the former Chief Justice of the United States, Earl Warren. In a democratic society, the constitution defines the country’s values, and the laws define the preferred, or at least still tolerated, behavior, making deviations sanctionable. As society is in continuous flux, driven in part by scientific and technical developments, the law always lags behind. Until regulations can catch up, ethics has to lead society. In less democratic societies, the water becomes polluted or even poisoned, and ethical behavior may run against the ruling law. As for the latter, Robin Hood was a thief, but for most of the population a hero. The less transparent the water, the more difficult it is to adapt the law to new developments. This includes direct corruption, but also unfaithful lobbying. This article discusses the “nature” of Artificial Intelligence, including the risks it poses, and who is responsible for systematic errors, from a moral as well as a legal point of view.


Author(s):  
Moritz von Zahn ◽  
Stefan Feuerriegel ◽  
Niklas Kuehl

Abstract
Contemporary information systems make widespread use of artificial intelligence (AI). While AI offers various benefits, it can also be subject to systematic errors, whereby people from certain groups (defined by gender, age, or other sensitive attributes) experience disparate outcomes. In many AI applications, disparate outcomes confront businesses and organizations with legal and reputational risks. To address these risks, technologies for so-called “AI fairness” have been developed, by which AI is adapted such that mathematical constraints for fairness are fulfilled. However, the financial costs of AI fairness are unclear. Therefore, the authors develop AI fairness for a real-world use case from e-commerce, where coupons are allocated according to clickstream sessions. In their setting, the authors find that AI fairness successfully manages to adhere to fairness requirements while reducing the overall prediction performance only slightly. However, they also find that AI fairness results in an increase in financial cost. In this way, the paper’s findings contribute to designing information systems on the basis of AI fairness.
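The fairness constraint described above can be made concrete with a small sketch. The following is not the authors' method or data; it is an illustrative post-processing step for one common constraint (demographic parity): picking a group-specific decision threshold so that two groups receive coupons at the same rate. All scores and thresholds are invented.

```python
# Hedged sketch of a demographic-parity post-processing step.
# Scores, groups, and thresholds are illustrative, not from the study.

def positive_rate(scores, threshold):
    """Fraction of sessions that would receive a coupon at this threshold."""
    return sum(s >= threshold for s in scores) / len(scores)

def equalize_rates(scores_a, scores_b, base_threshold):
    """Pick a threshold for group B so that group B's coupon rate does not
    exceed group A's rate at base_threshold (demographic parity)."""
    target = positive_rate(scores_a, base_threshold)
    best = max(scores_b) + 1.0  # fallback: rate 0 if no threshold qualifies
    for t in sorted(set(scores_b)):
        if positive_rate(scores_b, t) <= target:
            best = t
            break
    return best

# Illustrative click-propensity scores for two sensitive groups
group_a = [0.9, 0.8, 0.4, 0.3, 0.2]
group_b = [0.7, 0.6, 0.5, 0.2, 0.1]

t_b = equalize_rates(group_a, group_b, base_threshold=0.75)
print(positive_rate(group_a, 0.75), positive_rate(group_b, t_b))  # equal rates
```

The financial cost the paper measures arises exactly here: equalizing rates can withhold coupons from high-propensity sessions (or grant them to low-propensity ones), trading revenue for the fairness constraint.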


1978 ◽  
Vol 48 ◽  
pp. 7-29
Author(s):  
T. E. Lutz

This review paper deals with the use of statistical methods to evaluate the systematic and random errors associated with trigonometric parallaxes. First, the systematic errors which arise when trigonometric parallaxes are used to calibrate luminosity systems are discussed. Next, the determination of the external errors of parallax measurement is reviewed, and observatory corrections are discussed. Schilt’s point is emphasized: since the causes of these systematic differences between observatories are not known, the computed corrections cannot be applied appropriately. However, modern parallax work is sufficiently accurate that observatory corrections must be determined if full use is to be made of the potential precision of the data. To this end, it is suggested that a prior experimental design is required; past experience has shown that accidental overlap of observing programs will not suffice to determine meaningful observatory corrections.
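A minimal sketch of what an observatory correction is, and why sparse overlap is a problem: a zero-point offset between two observatories can be estimated from the mean parallax difference over the stars both have measured. The star names, parallax values, and observatory labels below are invented for illustration; this is not taken from the review.

```python
# Hedged sketch: estimating an observatory zero-point correction from
# overlapping measurements. All values are invented (parallaxes in arcsec).

measurements = {
    # star: {observatory label: measured parallax}
    "star1": {"A": 0.050, "B": 0.054},
    "star2": {"A": 0.120, "B": 0.125},
    "star3": {"A": 0.080, "B": 0.083},
}

def observatory_offset(meas, ref="A", other="B"):
    """Mean parallax difference (other - ref) over stars measured by both.
    With only a few accidentally overlapping stars, this estimate is
    dominated by random errors -- Schilt's objection in quantitative form."""
    diffs = [m[other] - m[ref] for m in meas.values()
             if ref in m and other in m]
    return sum(diffs) / len(diffs)

print(round(observatory_offset(measurements), 4))
```

The standard error of this offset shrinks only as the square root of the number of common stars, which is why the review argues for deliberately designed overlap rather than accidental overlap of observing programs.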


1988 ◽  
Vol 102 ◽  
pp. 215
Author(s):  
R.M. More ◽  
G.B. Zimmerman ◽  
Z. Zinamon

Autoionization and dielectronic attachment are usually omitted from rate equations for the non-LTE average-atom model, causing systematic errors in predicted ionization states and electronic populations for atoms in hot dense plasmas produced by laser irradiation of solid targets. We formulate a method by which dielectronic recombination can be included in average-atom calculations without conflict with the principle of detailed balance. The essential new feature in this extended average-atom model is a treatment of the strong correlations of electron populations induced by the dielectronic attachment process.
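The detailed-balance requirement mentioned above can be illustrated with a toy two-state system, which is not the authors' extended average-atom model: for a single capture/autoionization channel pair, the steady-state population must make the forward and reverse flows cancel. The rate values below are invented.

```python
# Hedged illustration (not the paper's model): detailed balance for one
# dielectronic-capture / autoionization channel pair. Rates are invented.

def steady_state(capture_rate, autoionization_rate):
    """Steady-state fraction of ions in the captured (doubly excited) state
    for a two-state system with the given forward/reverse rates."""
    return capture_rate / (capture_rate + autoionization_rate)

n2 = steady_state(capture_rate=2.0, autoionization_rate=6.0)
# At steady state the capture flow out of the ground state equals the
# autoionization flow back into it.
print(n2, (1 - n2) * 2.0, n2 * 6.0)
```

Adding capture to the rate equations without the matching reverse process breaks this cancellation, which is the inconsistency the extended model is designed to avoid.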


Author(s):  
W.J. de Ruijter ◽  
Sharma Renu

Established methods for the measurement of lattice spacings and angles of crystalline materials include x-ray diffraction, microdiffraction and HREM imaging. Structural information from HREM images is normally obtained off-line with the traveling table microscope or by the optical diffractogram technique. We present a new method for the precise measurement of lattice vectors from HREM images using an on-line computer connected to the electron microscope. It has already been established that an image of crystalline material can be represented by a finite number of sinusoids. The amplitude and phase of these sinusoids are affected by the microscope transfer characteristics, which are strongly influenced by the settings of defocus, astigmatism and beam alignment. However, the frequency of each sinusoid is solely a function of the overall magnification and of the periodicities present in the specimen. After proper calibration of the overall magnification, lattice vectors can therefore be measured unambiguously from HREM images.

Measurement of lattice vectors is a statistical parameter estimation problem similar to the amplitude, phase and frequency estimation of sinusoids in one-dimensional signals as encountered, for example, in radar, sonar and telecommunications. It is important to properly model the observations, the systematic errors and the non-systematic errors. The observations are modelled as a sum of two-dimensional sinusoids; in the present study the components of the frequency vector of the sinusoids are the only parameters of interest. Non-systematic errors in recorded electron images are described as white Gaussian noise. The most important systematic error is geometric distortion. Lattice vectors are measured using a two-step procedure. First, a coarse search is carried out with a Fast Fourier Transform on the image section of interest. Prior to Fourier transformation the image section is multiplied by a window which gradually falls off to zero at the edges. The user indicates the periodicities of interest interactively by selecting spots in the digital diffractogram. Second, a fine search for each selected frequency is implemented using a bilinear interpolation which depends on the window function. It is possible to refine the estimate even further using non-linear least-squares estimation; the first two steps provide the proper starting values for the numerical minimization (e.g. Gauss-Newton). This third step increases the precision by 30%, to the highest theoretically attainable (the Cramér-Rao lower bound).

In the present studies we use a Gatan 622 TV camera attached to the JEM 4000EX electron microscope. Image analysis is implemented on a MicroVAX II computer equipped with a powerful array processor and real-time image processing hardware. The typical precision, defined as the standard deviation of the distribution of measurement errors, is found to be <0.003 Å measured on single-crystal silicon and <0.02 Å measured on small (10-30 Å) specimen areas. These values are about ten times larger than predicted by theory. Furthermore, the measured precision is observed to be independent of the signal-to-noise ratio (determined by the number of averaged TV frames). Evidently, the precision is restricted by geometric distortion, mainly caused by the TV camera. For this reason, we are replacing the Gatan 622 TV camera with a modern high-grade CCD-based camera system. Such a system not only has negligible geometric distortion, but also a high dynamic range (>10,000) and high resolution (1024×1024 pixels). The geometric distortion of the projector lenses can be measured and corrected through re-sampling of the digitized image.
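The coarse-search step described above can be sketched in a few lines: taper the image section with a window, take its 2-D FFT, and read the lattice frequency off the strongest non-DC peak. The synthetic "HREM image" below is a single two-dimensional sinusoid with a known frequency; the window-dependent fine interpolation and the least-squares refinement are omitted.

```python
# Hedged sketch of the coarse search: windowed 2-D FFT and peak pick.
# The input image is synthetic, not an HREM micrograph.

import numpy as np

def coarse_frequency(image):
    """Return the (fy, fx) frequency, in cycles/pixel, of the strongest
    non-DC peak in the windowed power spectrum."""
    n = image.shape[0]
    win = np.hanning(n)
    windowed = image * np.outer(win, win)   # taper edges to reduce leakage
    spectrum = np.abs(np.fft.fft2(windowed))
    spectrum[0, 0] = 0.0                    # suppress the DC term
    iy, ix = np.unravel_index(np.argmax(spectrum), spectrum.shape)
    freqs = np.fft.fftfreq(n)
    return freqs[iy], freqs[ix]

n = 128
y, x = np.mgrid[0:n, 0:n]
# One lattice periodicity: 8 pixels along x, 16 pixels along y
image = np.cos(2 * np.pi * (0.125 * x + 0.0625 * y))

fy, fx = coarse_frequency(image)
print(abs(fy), abs(fx))  # recovers 0.0625 and 0.125 cycles/pixel
```

The FFT alone only locates the peak to the nearest frequency bin (1/n cycles/pixel), which is why the fine search and the Gauss-Newton refinement in the text are needed to approach the Cramér-Rao bound.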


Author(s):  
David L. Poole ◽  
Alan K. Mackworth
