A characterization of log-space computable functions

1973 ◽  
Vol 5 (3) ◽  
pp. 26-29 ◽  
Author(s):  
John Lind ◽  
Albert R. Meyer
2014 ◽  
Vol 9 ◽  
Author(s):  
Shalom Lappin

Classical intensional semantic frameworks, like Montague’s Intensional Logic (IL), identify intensional identity with logical equivalence. This criterion of co-intensionality is excessively coarse-grained, and it gives rise to several well-known difficulties. Theories of fine-grained intensionality have been proposed to avoid this problem. Several of these provide a formal solution to the problem, but they do not ground this solution in a substantive account of intensional difference. Applying the distinction between operational and denotational meaning, developed for the semantics of programming languages, to the interpretation of natural language expressions offers the basis for such an account. It permits us to escape some of the complications generated by the traditional modal characterization of intensions.
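The operational/denotational contrast the abstract appeals to can be made concrete with a small programming example. The sketch below is illustrative only and not drawn from the paper: two procedures that denote the same function from n to n! but differ operationally in how they compute it, which is the kind of fine-grained difference a procedural account of intensions can exploit.

```python
# Two programs with the same denotation (the factorial function)
# but different operational meanings (different computation procedures).

def factorial_recursive(n: int) -> int:
    """Unfolds n * (n-1) * ... * 1 via recursive calls."""
    return 1 if n <= 1 else n * factorial_recursive(n - 1)

def factorial_iterative(n: int) -> int:
    """Accumulates the product in a loop, using a single stack frame."""
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

# Denotationally identical: they agree on every input ...
assert all(factorial_recursive(n) == factorial_iterative(n) for n in range(10))
# ... yet operationally distinct: one builds a call stack of depth n, the
# other runs in constant stack space.  On a fine-grained view, such
# procedural differences can individuate meanings that a coarse-grained
# (logical-equivalence) criterion collapses.
```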


2013 ◽  
Vol 21 (3) ◽  
pp. 289-294
Author(s):  
Apoloniusz Tyszka

Abstract: Let Eₙ = {xᵢ = 1, xᵢ + xⱼ = xₖ, xᵢ ∙ xⱼ = xₖ : i, j, k ∊ {1, …, n}}. We present two algorithms. The first accepts as input any computable function f : ℕ → ℕ and returns a positive integer m(f) and a computable function g which, to each integer n ≥ m(f), assigns a system S ⊆ Eₙ such that S is satisfiable over the integers and each integer tuple (x₁, …, xₙ) that solves S satisfies x₁ = f(n). The second accepts as input any computable function f : ℕ → ℕ and returns a positive integer w(f) and a computable function h which, to each integer n ≥ w(f), assigns a system S ⊆ Eₙ such that S is satisfiable over the non-negative integers and each tuple (x₁, …, xₙ) of non-negative integers that solves S satisfies x₁ = f(n).
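As a concrete reading of the definition, the sketch below (a hypothetical illustration, not the paper's algorithm) enumerates the equations that make up Eₙ for a given n and checks whether a candidate tuple satisfies a chosen subsystem S.

```python
from itertools import product

def E(n):
    """Enumerate the equations of E_n as tagged tuples of 1-based variable indices.
    ('one', i)        stands for  x_i = 1
    ('add', i, j, k)  stands for  x_i + x_j = x_k
    ('mul', i, j, k)  stands for  x_i * x_j = x_k
    """
    eqs = [('one', i) for i in range(1, n + 1)]
    eqs += [('add', i, j, k) for i, j, k in product(range(1, n + 1), repeat=3)]
    eqs += [('mul', i, j, k) for i, j, k in product(range(1, n + 1), repeat=3)]
    return eqs

def satisfies(xs, system):
    """Check whether the tuple xs = (x_1, ..., x_n) solves every equation in the system."""
    x = lambda i: xs[i - 1]
    for eq in system:
        if eq[0] == 'one' and x(eq[1]) != 1:
            return False
        if eq[0] == 'add' and x(eq[1]) + x(eq[2]) != x(eq[3]):
            return False
        if eq[0] == 'mul' and x(eq[1]) * x(eq[2]) != x(eq[3]):
            return False
    return True

# Example subsystem of E_3: {x_1 = 1, x_1 + x_1 = x_2, x_2 * x_2 = x_3}
# forces the solution (1, 2, 4) over the integers.
S = [('one', 1), ('add', 1, 1, 2), ('mul', 2, 2, 3)]
assert S[0] in E(3) and satisfies((1, 2, 4), S)
assert not satisfies((1, 2, 5), S)
```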


1976 ◽  
Vol 41 (1) ◽  
pp. 199-214 ◽  
Author(s):  
Donald A. Alton

The theory of computational complexity deals with those functions which can be computed subject to certain restrictions on the resources (for instance, time or memory) available for computation. Blum [5] gave an axiomatic characterization of some of the properties which should be possessed by a measure of computational complexity and established the existence of speed-upable functions—computable functions which fail to possess optimal programs in a particularly strong sense. Recursion theorists tend to like such functions, and people concerned with the specifics of real computing tend to consider such functions somewhat pathological. In Theorem 2 we show that such pathology is rampant: there is a great diversity of behavior among the collections of “run-times” of different functions which do not possess optimal programs, where such diversity is gauged by certain algebraic criteria which have computational significance. Roughly speaking, these algebraic criteria concern the ways in which various functions can be intermixed to satisfy requirements that certain functions can or cannot be computed more easily than certain other functions. (More detailed motivation for the relevance of these algebraic notions is given later.) Specifically, in Theorem 2 we generalize the embeddability theorem of McCreight and Meyer [12] (discussed below) by making speed-upable functions responsible for the embedding.


Author(s):  
B. L. Soloff ◽  
T. A. Rado

Mycobacteriophage R1 was originally isolated from a lysogenic culture of M. butyricum. The virus was propagated on a leucine-requiring derivative of M. smegmatis, 607 leu−, isolated by nitrosoguanidine mutagenesis of type strain ATCC 607. Growth was accomplished in a minimal medium containing glycerol and glucose as carbon sources and enriched by the addition of 80 μg/ml L-leucine. Bacteria in early logarithmic growth phase were infected with virus at a multiplicity of 5, and incubated with aeration for 8 hours. The partially lysed suspension was diluted 1:10 in growth medium and incubated for a further 8 hours. This permitted stationary phase cells to re-enter logarithmic growth and resulted in complete lysis of the culture.
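The multiplicity of infection (MOI) of 5 quoted above fixes the ratio of phage particles to bacterial cells. The short sketch below shows the routine calculation of how much phage stock such an infection would require; the cell density and titer are hypothetical placeholders, not values from this study.

```python
def phage_volume_ml(cells_per_ml: float, culture_ml: float,
                    moi: float, titer_pfu_per_ml: float) -> float:
    """Volume of phage stock needed to infect a culture at a given MOI.

    MOI = (phage particles added) / (bacterial cells present), so the
    required PFU is MOI * total cells, and the volume follows from the titer.
    """
    total_cells = cells_per_ml * culture_ml
    required_pfu = moi * total_cells
    return required_pfu / titer_pfu_per_ml

# Hypothetical numbers: 10 ml of culture at 2e8 cells/ml, stock at 1e10 PFU/ml.
print(phage_volume_ml(2e8, 10, 5, 1e10))  # -> 1.0 ml of stock gives MOI 5
```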


Author(s):  
A.R. Pelton ◽  
A.F. Marshall ◽  
Y.S. Lee

Amorphous materials are of current interest due to their desirable mechanical, electrical and magnetic properties. Furthermore, crystallizing amorphous alloys provides an avenue for discerning sequential and competitive phases, thus allowing access to otherwise inaccessible crystalline structures. Previous studies have shown the benefits of using AEM to determine crystal structures and compositions of partially crystallized alloys. The present paper will discuss the AEM characterization of crystallized Cu-Ti and Ni-Ti amorphous films.

Cu60Ti40: The amorphous alloy Cu60Ti40, when continuously heated, forms a simple intermediate, macrocrystalline phase which then transforms to the ordered, equilibrium Cu3Ti2 phase. However, contrary to what one would expect from kinetic considerations, isothermal annealing below the isochronal crystallization temperature results in direct nucleation and growth of Cu3Ti2 from the amorphous matrix.


Author(s):  
B. H. Kear ◽  
J. M. Oblak

A nickel-base superalloy is essentially a Ni/Cr solid solution hardened by additions of Al (Ti, Nb, etc.) to precipitate a coherent, ordered phase. In most commercial alloy systems, e.g. B-1900, IN-100 and Mar-M200, the stable precipitate is Ni3(Al,Ti) γ′, with an L12 structure. In Alloy 901 the normal precipitate is metastable Ni3Ti γ′; the stable phase has a hexagonal D024 structure. In Alloy 718 the strengthening precipitate is metastable γ″, which has a body-centered tetragonal D022 structure.

Precipitate Morphology: In most systems the ordered γ′ phase forms by a continuous precipitation reaction, which gives rise to a uniform intragranular dispersion of precipitate particles. For zero γ/γ′ misfit, the γ′ precipitates assume a spheroidal morphology.
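The γ/γ′ misfit mentioned at the end is conventionally defined from the lattice parameters of the two phases. The snippet below illustrates that standard definition only; the lattice parameters shown are placeholders, not data from this paper.

```python
def gamma_gamma_prime_misfit(a_gamma: float, a_gamma_prime: float) -> float:
    """Unconstrained lattice misfit: delta = 2 (a_gp - a_g) / (a_gp + a_g).

    Near-zero misfit favors spheroidal gamma-prime; larger misfit drives
    cuboidal or plate-like precipitate shapes.
    """
    return 2.0 * (a_gamma_prime - a_gamma) / (a_gamma_prime + a_gamma)

# Placeholder lattice parameters in nanometers.
print(f"{gamma_gamma_prime_misfit(0.3570, 0.3585):+.4%}")  # roughly +0.42 %
```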


Author(s):  
R. E. Herfert

Studies of the nature of a surface, either metallic or nonmetallic, have in the past been limited by the instrumentation available for these measurements. Optical microscopy, replica transmission electron microscopy, electron or X-ray diffraction, and optical or X-ray spectroscopy have provided the means of surface characterization. Actually, some of these techniques are not purely surface-sensitive; the depth of penetration may be a few thousandths of an inch. Within the last five years, instrumentation has become available which now makes it practical for us to study the outer few hundred ångströms of a surface and characterize it completely from a chemical, physical, and crystallographic standpoint. The scanning electron microscope (SEM) provides a means of viewing the surface of a material in situ at magnifications as high as 250,000X.
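To put those depth scales in perspective, the quick calculation below (an illustrative aside, not from the article) compares the few-thousandths-of-an-inch sampling depth of the older techniques with the few-hundred-ångström regime probed by the newer surface instruments.

```python
ANGSTROMS_PER_INCH = 2.54e8  # 1 inch = 2.54 cm = 2.54e8 angstroms

old_depth_in = 0.002                      # "a few thousandths of an inch"
old_depth_A = old_depth_in * ANGSTROMS_PER_INCH
new_depth_A = 300                         # "the outer few hundred angstroms"

print(f"older techniques sample ~{old_depth_A:,.0f} angstroms")
print(f"surface techniques sample ~{new_depth_A} angstroms")
print(f"ratio: ~{old_depth_A / new_depth_A:,.0f}x deeper")
```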


Author(s):  
D. F. Blake ◽  
L. F. Allard ◽  
D. R. Peacor

Echinodermata is a phylum of marine invertebrates which has been extant since Cambrian time (ca. 500 m.y. before present). Modern examples of echinoderms include sea urchins, sea stars, and sea lilies (crinoids). The endoskeletons of echinoderms are composed of plates or ossicles (Fig. 1) which are, with few exceptions, porous single crystals of high-magnesian calcite. Despite their single-crystal nature, fracture surfaces do not exhibit the near-perfect {10.4} cleavage characteristic of inorganic calcite. This paradoxical mix of biogenic and inorganic features has prompted much recent work on echinoderm skeletal crystallography. Furthermore, fossil echinoderm hard parts comprise a volumetrically significant portion of some marine limestone sequences. The ultrastructural and microchemical characterization of modern skeletal material should lend insight into: (1) the nature of the biogenic processes involved, for example, the relationship of Mg heterogeneity to morphological and structural features in modern echinoderm material, and (2) the nature of the diagenetic changes undergone by their ancient, fossilized counterparts. In this study, high resolution TEM (HRTEM), high voltage TEM (HVTEM), and STEM microanalysis are used to characterize the ultrastructural and microchemical composition of skeletal elements of the modern crinoid Neocrinus blakei.


Author(s):  
Simon Thomas

Trends in the technology development of very large scale integrated circuits (VLSI) have been in the direction of higher density of components with smaller dimensions. The scaling down of device dimensions has occurred not only laterally but also in depth. Such efforts in miniaturization bring with them new developments in materials and processing. Successful implementation of these efforts is, to a large extent, dependent on a proper understanding of material properties, process technologies and reliability issues, through adequate analytical studies. Analytical instrumentation technology has, fortunately, kept pace with the basic requirements of devices with lateral dimensions in the micron/submicron range and depths of the order of nanometers. Often, newer analytical techniques have emerged, or the more conventional techniques have been adapted, to meet the more stringent requirements. As such, a variety of analytical techniques are available today to aid an analyst in the effort of VLSI process evaluation. Generally such analytical efforts are divided into the characterization of materials, the evaluation of processing steps, and the analysis of failures.

