Determination of the fracture toughness by automatic image processing

1996 ◽  
Vol 78 (1) ◽  
pp. 35-44 ◽  
Author(s):  
J. Stampfl ◽  
S. Scherer ◽  
M. Berchthaler ◽  
M. Gruber ◽  
O. Kolednik
Author(s):  
B. Roy Frieden

Despite the skill and determination of electro-optical system designers, the images acquired using their best designs often suffer from blur and noise. The aim of an “image enhancer” such as myself is to improve these poor images, usually by digital means, such that they better resemble the true “optical object” input to the system. This problem is notoriously “ill-posed”: any direct approach to inverting the image data suffers strongly from the presence of even a small amount of noise in the data. In fact, the fluctuations engendered in neighboring output values tend to be strongly negatively correlated, so that the output oscillates spatially up and down, with large amplitude, about the true object. What can be done about this situation? As we shall see, various concepts taken from statistical communication theory have proven to be of real use in attacking this problem. We offer below a brief summary of these concepts.
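The noise amplification the abstract describes can be seen in a few lines. The following is a minimal 1-D sketch (illustrative only, not the paper's method, and all data here are made up): direct Fourier-domain inversion of a Gaussian blur explodes a tiny additive noise, while a small Wiener-style regularization term, of the kind statistical communication theory motivates, keeps the restoration close to the true object.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical true "optical object": a smooth 1-D signal.
n = 256
x = np.linspace(0.0, 1.0, n)
obj = np.exp(-((x - 0.5) ** 2) / 0.01)

# Gaussian point-spread function (the system blur), applied in Fourier space.
psf = np.exp(-((x - 0.5) ** 2) / 0.002)
psf /= psf.sum()
H = np.fft.fft(np.fft.ifftshift(psf))
image = np.real(np.fft.ifft(np.fft.fft(obj) * H))

# Add a *tiny* amount of noise, then attempt direct inversion.
noisy = image + 1e-6 * rng.standard_normal(n)
restored = np.real(np.fft.ifft(np.fft.fft(noisy) / H))

# Direct inversion: the small noise is enormously amplified and the
# output oscillates with large amplitude about the true object.
err_direct = np.max(np.abs(restored - obj))

# Wiener-style regularized inversion: conj(H) / (|H|^2 + eps) caps the
# gain at high frequencies where H is nearly zero.
restored_w = np.real(
    np.fft.ifft(np.fft.fft(noisy) * np.conj(H) / (np.abs(H) ** 2 + 1e-9))
)
err_wiener = np.max(np.abs(restored_w - obj))

print(f"direct-inversion error:  {err_direct:.3e}")
print(f"regularized error:       {err_wiener:.3e}")
```

The direct error is astronomically large while the regularized error stays small, which is exactly the ill-posedness the abstract warns about.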


Author(s):  
Stuart McKernan

For many years the concept of quantitative diffraction contrast experiments might have consisted of the determination of dislocation Burgers vectors using the g·b = 0 invisibility criterion from several different 2-beam images. Since the advent of the personal computer revolution, the computing power available for image-processing and image-simulation calculations is enormous and ubiquitous. Several programs now exist to simulate diffraction contrast images using various approximations. The most common approximations are the use of only 2 beams or a single systematic row to calculate the image contrast, or the use of a column approximation. The growing body of literature comparing experimental and simulated images shows that very close agreement between the two can be obtained, provided the choice of parameters and the assumptions made in the calculation are handled properly. The simulation of images of defects in materials has, in many cases, therefore become a tractable problem.
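The g·b = 0 criterion mentioned above reduces, in its simplest form (a screw dislocation in a 2-beam condition), to a dot product: the dislocation is invisible when the diffraction vector g is perpendicular to the Burgers vector b. A toy check, with a hypothetical FCC-type Burgers vector and a few candidate g vectors (the specific vectors are illustrative, not from the abstract):

```python
import numpy as np

# Hypothetical Burgers vector direction, e.g. (a/2)[1 -1 0] in an FCC metal.
# Only the direction matters for the g.b = 0 test, so the a/2 factor is dropped.
b = np.array([1, -1, 0])

# Candidate diffraction vectors g (hkl indices).
for g in ([1, 1, 1], [2, 0, 0], [0, 2, 0], [1, 1, -1]):
    gb = int(np.dot(g, b))
    verdict = "visible" if gb != 0 else "invisible (g.b = 0)"
    print(f"g = {g}: g.b = {gb:+d} -> {verdict}")
```

Finding two non-collinear g vectors that both give invisibility pins down the direction of b, which is why several different 2-beam images are needed.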


Author(s):  
Weiping Liu ◽  
John W. Sedat ◽  
David A. Agard

Any real-world object is three-dimensional. The principle of tomography, which reconstructs the 3-D structure of an object from its 2-D projections at different view angles, has found application in many disciplines. Electron microscopic (EM) tomography of non-ordered structures (e.g., subcellular structures in biology and non-crystalline structures in materials science) has been practiced sporadically over the last twenty years or so. As vital as the 3-D structural information is, and with no alternative 3-D imaging technique to compete in its high-resolution range, the technique to date remains the province of a brave few: its tedious tasks have prevented it from becoming a routine tool. One keyword in promoting its popularity is automation. Data collection has been automated in our lab, which can routinely yield a data set of over 100 projections in a matter of a few hours. Now the image-processing part is also automated. Such automation makes the job easier, faster, and better.
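The tomographic principle the abstract invokes can be sketched in a few lines. This is a toy demonstration, shown in 2-D (projections are 1-D) for brevity rather than the 3-D case the authors treat, and it uses plain unfiltered back-projection, not the authors' automated pipeline; the phantom and angles are invented for illustration.

```python
import numpy as np
from scipy.ndimage import rotate

# Hypothetical toy phantom: a bright square in a 64x64 field.
n = 64
phantom = np.zeros((n, n))
phantom[24:40, 24:40] = 1.0

# Projection view angles in degrees.
angles = np.arange(0, 180, 5)

# Forward model: each projection is a line integral (column sum)
# of the object rotated to that view angle.
projections = [
    rotate(phantom, a, reshape=False, order=1).sum(axis=0) for a in angles
]

# Unfiltered back-projection: smear each projection back across the
# image plane at its angle and accumulate. The result is blurred
# (no ramp filter) but clearly recovers the object's location.
recon = np.zeros((n, n))
for a, p in zip(angles, projections):
    recon += rotate(np.tile(p, (n, 1)), -a, reshape=False, order=1)
recon /= len(angles)

print("center value:", recon[32, 32])
print("corner value:", recon[4, 4])
```

With only 36 views the reconstruction is coarse; the value of collecting 100+ projections, as the automated data collection described above does, is precisely to suppress these angular-sampling artifacts.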


2005 ◽  
Vol 96 (8) ◽  
pp. 924-932
Author(s):  
M. Tarafder ◽  
Swati Dey ◽  
S. Sivaprasad ◽  
S. Tarafder ◽  
M. Nasipuri
