High-resolution wavefront control: adaptive optics and image processing applications

Author(s): Mikhail A. Vorontsov

Author(s): E. L. Buhle ◽ U. Aebi

Conventional transmission electron microscope (CTEM) bright-field images are formed by a combination of relatively high-resolution elastically scattered electrons together with unscattered and inelastically scattered electrons; the inelastically scattered electrons cause a loss of both contrast and spatial resolution in the image. In electron spectroscopic imaging (ESI) on the Zeiss EM902, the transmitted electrons are dispersed into their various energy components by a magnetic prism spectrometer; a slit placed in the image plane of the prism then selects electrons of a given energy loss for image formation. The purpose of this study was to compare CTEM and ESI images of ordered protein arrays recorded on a Zeiss EM902. Digital image processing was employed to analyze the average unit-cell morphologies in the two types of images.
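The averaging step lends itself to a compact illustration. Below is a minimal sketch of one common way to extract an average unit cell from an image of an ordered array: filtering the Fourier transform so that only the strongest lattice reflections are retained. The function name, the peak-selection threshold, and the synthetic test image are illustrative assumptions, not the authors' actual processing pipeline.

    import numpy as np

    def lattice_average(image, peak_frac=0.1):
        """Lattice-filter an image of a 2D crystal: keep only the strongest
        Fourier reflections so every unit cell is replaced by the array average.
        (Illustrative sketch; threshold-based peak selection is an assumption.)"""
        F = np.fft.fft2(image - image.mean())        # remove the DC term first
        power = np.abs(F)
        mask = power >= peak_frac * power.max()      # crude lattice-peak selection
        return np.real(np.fft.ifft2(np.where(mask, F, 0.0))) + image.mean()

    # Usage on a synthetic noisy lattice (16-pixel period)
    y, x = np.mgrid[0:256, 0:256]
    lattice = np.cos(2 * np.pi * x / 16) * np.cos(2 * np.pi * y / 16)
    noisy = lattice + 0.5 * np.random.default_rng(0).standard_normal(lattice.shape)
    averaged = lattice_average(noisy)

Because the filtering keeps only periodic signal, the noise that differs from cell to cell is suppressed, which is what makes the average unit-cell morphology comparable between the CTEM and ESI images.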


Perception ◽ 1986 ◽ Vol 15 (4) ◽ pp. 373-386
Author(s): Nigel D Haig

For recognition of a target, there must be some form of comparison between the image of that target and a stored representation of it. In the case of faces there must be a very large number of such stored representations, yet human beings seem able to perform the comparisons at phenomenal speed. It is possible that faces are memorised by fitting unusual features, or combinations of features, onto a bland prototypical face; such a data-compression technique would help to explain our computational speed. If humans do function in this fashion, it is necessary to ask what features distinguish one face from another, and also what features form the basic set of the prototypical face. The distributed-apertures technique was further developed in an attempt to answer both questions. Four target faces, stored in an image-processing computer, were each divided into 162 contiguous squares that could be displayed in their correct positions in any combination of 24 or fewer squares. Each observer was required to judge which of the four target faces was displayed during a 1 s presentation, and the proportion of correct responses for each individual square was computed. The resulting response distributions, displayed as brightness maps, give a vivid impression of the relative saliency of each feature square, both for the individual targets and for all of them combined. The results, while broadly confirming previous work, contain some very interesting and surprising details about the differences between the target faces.
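A minimal sketch of the scoring step follows: given a record of which squares were shown on each trial and whether the response was correct, the per-square saliency is the proportion of correct responses among the trials that included that square. The synthetic trial generator below (including the choice of "diagnostic" squares) is purely illustrative; only the final scoring mirrors the analysis described above.

    import numpy as np

    N_SQUARES, N_TRIALS, MAX_SHOWN = 162, 5000, 24
    rng = np.random.default_rng(1)

    shown = np.zeros((N_TRIALS, N_SQUARES), dtype=bool)
    correct = np.zeros(N_TRIALS, dtype=bool)
    diagnostic = np.zeros(N_SQUARES)
    diagnostic[:10] = 0.8            # assumption: squares 0-9 carry the identity cues

    for t in range(N_TRIALS):
        k = rng.integers(1, MAX_SHOWN + 1)                  # show 1..24 squares
        idx = rng.choice(N_SQUARES, size=k, replace=False)
        shown[t, idx] = True
        p = 0.25 + 0.75 * min(diagnostic[idx].sum(), 1.0)   # 4-AFC: chance is 0.25
        correct[t] = rng.random() < p

    # Saliency: fraction of correct trials among those where each square appeared.
    n_shown = shown.sum(axis=0)
    saliency = (shown & correct[:, None]).sum(axis=0) / np.maximum(n_shown, 1)
    # Reshaped onto the face grid, 'saliency' is the brightness map described above.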


Geophysics ◽ 1996 ◽ Vol 61 (4) ◽ pp. 1115-1127
Author(s): Igor B. Morozov ◽ Scott B. Smithson

We address three aspects of the stacking-velocity determination problem: (1) the development of a new high-resolution velocity-determination technique, (2) the choice of an optimal velocity trial scenario, and (3) a unified approach to comparing the time-velocity spectra produced by various methods. We present a class of high-resolution coherency measures providing five to eight times better velocity resolution than conventional measures. These measures are based on the rigorous theory of statistical hypothesis testing and on the statistics of directional data. In its original form, our method analyzes only the phase distributions of the data, making careful spherical-divergence corrections and other normalization procedures unnecessary. In addition to the statistical measure, we develop an “instantaneous” version of the conventional coherency measure; it is based on the concept of the trace envelope, eliminating the need for an averaging procedure. Finally, we design a hybrid high-resolution coherency measure that combines the instantaneous and statistical measures. Carrying out a systematic comparison of various coherency measures, we present a simple estimate of the attainable velocity resolution. Based on this estimate, we define an optimal velocity grid that provides uniform coverage of all details of the time-velocity spectrum. To facilitate quantitative comparisons of different coherency functions, we develop a unified normalization approach based on techniques known in image processing. The described methods are tested on synthetic and field data; in both cases we obtained a remarkable improvement in time-velocity resolution. The methods are general, simple to implement, and robust and reliable in application.
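As a rough illustration of the phase-based idea, the sketch below scores a trial stacking velocity by the alignment of instantaneous phases (taken from the analytic signal) across traces along the hyperbolic moveout, using the mean resultant length from directional statistics. Windowing, the hypothesis-test calibration, and all names here are simplifying assumptions rather than the published algorithm.

    import numpy as np
    from scipy.signal import hilbert

    def ricker(t, f=25.0):
        """Ricker wavelet of peak frequency f (Hz), used only for the demo gather."""
        a = (np.pi * f * t) ** 2
        return (1.0 - 2.0 * a) * np.exp(-a)

    def phase_coherency(gather, offsets, dt, t0, v):
        """Mean resultant length of instantaneous phases picked along the
        hyperbolic moveout t(x) = sqrt(t0^2 + (x/v)^2); 1 = perfect alignment."""
        phases = np.angle(hilbert(gather, axis=1))   # instantaneous phase per trace
        t = np.sqrt(t0 ** 2 + (offsets / v) ** 2)
        idx = np.clip(np.round(t / dt).astype(int), 0, gather.shape[1] - 1)
        phi = phases[np.arange(len(offsets)), idx]
        return np.abs(np.exp(1j * phi).mean())

    # Usage: synthetic CMP gather with one reflection at t0 = 0.8 s, v = 2500 m/s
    dt, n_samples = 0.004, 500
    offsets = np.arange(0.0, 2000.0, 100.0)
    times = np.arange(n_samples) * dt
    gather = np.array([ricker(times - np.sqrt(0.8 ** 2 + (x / 2500.0) ** 2))
                       for x in offsets])

    for v in (2000.0, 2500.0, 3000.0):
        print(f"v = {v:6.0f} m/s  R = {phase_coherency(gather, offsets, dt, 0.8, v):.3f}")

Because the score depends only on phase, trace-to-trace amplitude differences (e.g., uncorrected spherical divergence) do not enter, which is the practical appeal of the phase-only formulation noted above.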

