Transformative emotional sequence: Towards a common principle of change.

2012 ◽  
Vol 22 (2) ◽  
pp. 109-136 ◽  
Author(s):  
Hans Welling
Author(s):  
Ben Seymour ◽  
Ivo Vlaev ◽  
Irma Kurniawan ◽  
Julia Trommershäuser ◽  
Ray Dolan ◽  
...  

2011 ◽  
Vol 13 (3) ◽  
pp. 227-234 ◽  
Author(s):  
Helena W. Morrison ◽  
Charles A. Downs

Scientists and clinicians frequently use immunological methods (IMs) to investigate complex biological phenomena. Commonly used IMs include immunocytochemistry (IC), enzyme-linked immunosorbent assays (ELISA) and flow cytometry. Each of these methodologies exploits a common principle of IMs: the binding of an antibody to its antigen. Scientists continue to develop new methodologies, such as high-throughput immunohistochemistry (IHC) and in vivo imaging techniques, which exploit antibody-antigen binding to more accurately answer complex research questions involving single cells up to whole organ systems. The purpose of this paper is to discuss established and evolving IMs and to illustrate the application of these methods to nursing research.


2019 ◽  
Vol 5 (1) ◽  
pp. 427-449 ◽  
Author(s):  
Alison I. Weber ◽  
Kamesh Krishnamurthy ◽  
Adrienne L. Fairhall

Adaptation is a common principle that recurs throughout the nervous system at all stages of processing. This principle manifests in a variety of phenomena, from spike frequency adaptation, to apparent changes in receptive fields with changes in stimulus statistics, to enhanced responses to unexpected stimuli. The ubiquity of adaptation leads naturally to the question: What purpose do these different types of adaptation serve? A diverse set of theories, often highly overlapping, has been proposed to explain the functional role of adaptive phenomena. In this review, we discuss several of these theoretical frameworks, highlighting relationships among them and clarifying distinctions. We summarize observations of the varied manifestations of adaptation, particularly as they relate to these theoretical frameworks, focusing throughout on the visual system and making connections to other sensory systems.
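The spike frequency adaptation mentioned above can be made concrete with a short simulation. The following is a minimal sketch, not taken from the review: a leaky integrate-and-fire neuron with a spike-triggered adaptation current, whose successive inter-spike intervals lengthen under constant drive. All parameter values and the function name are illustrative assumptions.

```python
import numpy as np

def simulate_lif_adaptation(I=1.5, T=200.0, dt=0.1,
                            tau_m=10.0, tau_w=100.0, b=0.05,
                            v_th=1.0, v_reset=0.0):
    """Return spike times (ms) for a constant input current I.

    v is the membrane potential, w an adaptation variable that is
    incremented by b at each spike and decays with time constant tau_w;
    w subtracts from the drive, so firing slows as spikes accumulate.
    (Illustrative model, not the review's.)
    """
    n = int(T / dt)
    v, w = 0.0, 0.0
    spikes = []
    for i in range(n):
        dv = (-v - w + I) / tau_m      # leak + adaptation + drive
        dw = -w / tau_w                # adaptation decays slowly
        v += dt * dv
        w += dt * dw
        if v >= v_th:                  # spike: reset and adapt
            spikes.append(i * dt)
            v = v_reset
            w += b                     # spike-triggered adaptation
    return spikes

spikes = simulate_lif_adaptation()
isis = np.diff(spikes)                 # inter-spike intervals (ms)
# Adaptation stretches successive intervals: early ISIs are shorter
# than late ones.
```

Because w builds up across spikes, the drive seen by the neuron effectively weakens over time, reproducing the progressive slowing that characterizes spike frequency adaptation.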


The object of the general investigation, of which the commencement is given in this paper, is to determine the relative composition of the various resins which occur in nature, and to trace the analogies they exhibit in their constitution; and also to ascertain how far they may be regarded as being derived from one common principle, and whether they admit of being all represented by one or more general formulæ. The chemical investigation of the resin of mastic shows that this substance consists of two resins; the one soluble, and acid; the other insoluble, and having no acid properties. The formulæ expressing the analysis of each of these are given by the author. He also shows that a series of analyses may be obtained which do not indicate the true constitution of a resin. The soluble resin, when exposed to the prolonged action of a heat exceeding 300° Fahr., is partly converted into a resin containing three, and partly into one containing five equivalent parts of oxygen, the proportion of carbon remaining constant. The same resin combines with bases, so as to form four series of salts; which, in the case of oxide of lead, consist of equivalents of resin and of oxide in the proportions, respectively, of two to one; three to two; one to one; and one to two. This soluble resin in combining with bases does not part with any of its oxygen; but if any change takes place in its constitution, it consists in the hydrogen being replaced by an equivalent proportion of a metal; and formulæ are given representing the salts of lead on this theoretical view. By boiling the resin in contact with ammonia and nitrate of silver, or perhaps with nitrate of ammonia, it is converted into a resin which forms a bisalt with oxide of silver, in which there is also an apparent replacement of hydrogen by silver.


Author(s):  
Luca Bagnato ◽  
Antonio Punzo

Abstract Many statistical problems involve the estimation of a (d × d) orthogonal matrix Q. Such an estimation is often challenging due to the orthonormality constraints on Q. To cope with this problem, we use the well-known PLU decomposition, which factorizes any invertible (d × d) matrix as the product of a (d × d) permutation matrix P, a (d × d) unit lower triangular matrix L, and a (d × d) upper triangular matrix U. Thanks to the QR decomposition, we find the formulation of U when the PLU decomposition is applied to Q. We call the result the PLR decomposition; it produces a one-to-one correspondence between Q and the d(d − 1)/2 entries below the diagonal of L, which are advantageously unconstrained real values. Thus, once the decomposition is applied, regardless of the objective function under consideration, we can use any classical unconstrained optimization method to find the minimum (or maximum) of the objective function with respect to L. For illustrative purposes, we apply the PLR decomposition in common principal components analysis (CPCA) for the maximum likelihood estimation of the common orthogonal matrix when a multivariate leptokurtic-normal distribution is assumed in each group. Compared to the commonly used normal distribution, the leptokurtic-normal has an additional parameter governing the excess kurtosis; this makes the estimation of Q in CPCA more robust against mild outliers. The usefulness of the PLR decomposition in leptokurtic-normal CPCA is illustrated by two biometric data analyses.
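The core idea of the abstract above — an unconstrained parameterization of an orthogonal matrix via the strict lower triangle of a unit lower triangular L — can be sketched numerically. This is our own illustrative construction consistent with the abstract, not the authors' code: QR-factorizing L as L = Q_L R gives P L R⁻¹ = P Q_L, an orthogonal matrix obtained from d(d − 1)/2 free real parameters. The function name and sign convention are assumptions.

```python
import numpy as np

def orthogonal_from_free_params(theta, d, P=None):
    """Map d(d-1)/2 unconstrained reals to a (d x d) orthogonal matrix.

    Illustrative sketch of the PLR idea: fill the strict lower triangle
    of a unit lower triangular L with theta, then orthonormalize via QR.
    """
    L = np.eye(d)
    L[np.tril_indices(d, k=-1)] = theta    # free, unconstrained entries
    Q_L, R = np.linalg.qr(L)               # L = Q_L R, so P L R^{-1} = P Q_L
    s = np.sign(np.diag(R))                # fix signs so the map is
    Q_L = Q_L * s                          # well defined (R diag > 0)
    if P is None:
        P = np.eye(d)
    return P @ Q_L

d = 4
theta = np.random.default_rng(0).normal(size=d * (d - 1) // 2)
Q = orthogonal_from_free_params(theta, d)
assert np.allclose(Q.T @ Q, np.eye(d))     # Q is orthogonal by construction
```

Because theta ranges over all of R^{d(d−1)/2} with no constraint, any off-the-shelf unconstrained optimizer can search over theta while the resulting matrix stays exactly orthogonal, which is the practical payoff the abstract describes.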


2018 ◽  
Vol 116 (2) ◽  
pp. 670-678 ◽  
Author(s):  
John Cowgill ◽  
Vadim A. Klenchin ◽  
Claudia Alvarez-Baron ◽  
Debanjan Tewari ◽  
Alexander Blair ◽  
...  

Despite sharing a common architecture with archetypal voltage-gated ion channels (VGICs), hyperpolarization- and cAMP-activated ion (HCN) channels open upon hyperpolarization rather than depolarization. The basic motions of the voltage sensor and pore gates are conserved, implying that these domains are inversely coupled in HCN channels. Using structure-guided protein engineering, we systematically assembled an array of mosaic channels that display the full complement of voltage-activation phenotypes observed in the VGIC superfamily. Our studies reveal that the voltage sensor of the HCN channel has an intrinsic ability to drive pore opening in either direction and that the extra length of the HCN S4 is not the primary determinant for hyperpolarization activation. Tight interactions at the HCN voltage sensor–pore interface drive the channel into an hERG-like inactivated state, thereby obscuring its opening upon depolarization. This structural element in synergy with the HCN cyclic nucleotide-binding domain and specific interactions near the pore gate biases the channel toward hyperpolarization-dependent opening. Our findings reveal an unexpected common principle underpinning voltage gating in the VGIC superfamily and identify the essential determinants of gating polarity.


2019 ◽  
Vol 12 (10) ◽  
pp. 5519-5534 ◽  
Author(s):  
Marie Lothon ◽  
Paul Barnéoud ◽  
Omar Gabella ◽  
Fabienne Lohou ◽  
Solène Derrien ◽  
...  

Abstract. In the context of a network of sky cameras installed on atmospheric multi-instrumented sites, we present an algorithm named ELIFAN, which aims to estimate the cloud cover amount from full-sky visible daytime images with a common principle and procedure. ELIFAN was initially developed for a self-made full-sky image system presented in this article and adapted to a set of other systems in the network. It is based on red-to-blue ratio thresholding for the distinction of cloudy and cloud-free pixels of the image and on the use of a cloud-free sky library, without taking account of aerosol loading. Both an absolute (without the use of a cloud-free reference image) and a differential (based on a cloud-free reference image) red-to-blue ratio thresholding are used. An evaluation of the algorithm based on a 1-year-long series of images shows that the proposed algorithm performs very well for most of the images, correctly processing about 97 % of them outside the sunrise and sunset transitions. During those transition periods, however, ELIFAN has great difficulty processing the images appropriately, due to a large difference in color composition and potential confusion between cloud-free and cloudy sky at that time. This issue also impacts the library of cloud-free images. Besides this, the library also reveals some limitations during daytime, with the possible presence of very small and/or thin clouds; the latter, however, have only a small impact on the cloud cover estimate. The two thresholding methodologies, the absolute and the differential red-to-blue ratio thresholding processes, agree very well, with differences usually below 8 % except in sunrise-sunset periods and in some specific conditions. The use of the cloud-free image library generally gives better results than the absolute process; in particular, it better detects thin cirrus clouds. The absolute thresholding process sometimes turns out to be better, however, for example in some cases in which the sun is hidden by a cloud.
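The absolute red-to-blue ratio thresholding described above can be sketched in a few lines. This is a hedged illustration of the principle only: the actual ELIFAN threshold value, masking of the sun and horizon, and the cloud-free library logic are not reproduced here, and the threshold 0.8 is our assumption.

```python
import numpy as np

def cloud_cover_absolute(rgb, threshold=0.8):
    """Estimate cloud cover from an (H, W, 3) RGB sky image in [0, 1].

    Clouds scatter red and blue roughly equally (R/B near 1), while
    clear sky is blue-dominated (R/B well below 1), so pixels whose
    red-to-blue ratio exceeds the threshold are classified as cloudy.
    (Illustrative sketch; threshold value is assumed.)
    """
    red = rgb[..., 0].astype(float)
    blue = rgb[..., 2].astype(float)
    ratio = red / np.maximum(blue, 1e-6)   # avoid division by zero
    cloudy = ratio > threshold
    return cloudy.mean()                   # cloudy fraction of the image

# Synthetic test image: clear blue sky on the left half,
# a white cloud on the right half.
img = np.zeros((10, 10, 3))
img[:, :5] = [0.2, 0.4, 0.9]               # clear: R/B ~ 0.22
img[:, 5:] = [0.9, 0.9, 0.9]               # cloud: R/B ~ 1.0
print(cloud_cover_absolute(img))           # -> 0.5
```

The differential variant the abstract mentions would instead compare each pixel's ratio against the corresponding pixel of a cloud-free reference image from the library, which is what helps with thin cirrus.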

