Deconstructing multivariate decoding for the study of brain function

2017 ◽  
Author(s):  
Martin N. Hebart ◽  
Chris I. Baker

Abstract
Multivariate decoding methods were developed originally as tools to enable accurate predictions in real-world applications. The realization that these methods can also be employed to study brain function has led to their widespread adoption in the neurosciences. However, prior to the rise of multivariate decoding, the study of brain function was firmly embedded in a statistical philosophy grounded on univariate methods of data analysis. In this way, multivariate decoding for brain interpretation grew out of two established frameworks: multivariate decoding for predictions in real-world applications, and classical univariate analysis based on the study and interpretation of brain activation. We argue that this led to two confusions, one reflecting a mixture of multivariate decoding for prediction or interpretation, and the other a mixture of the conceptual and statistical philosophies underlying multivariate decoding and classical univariate analysis. Here we attempt to systematically disambiguate multivariate decoding for the study of brain function from the frameworks it grew out of. After elaborating these confusions and their consequences, we describe six, often unappreciated, differences between classical univariate analysis and multivariate decoding. We then focus on how the common interpretation of what is signal and noise changes in multivariate decoding. Finally, we use four examples to illustrate where these confusions may impact the interpretation of neuroimaging data. We conclude with a discussion of potential strategies to help resolve these confusions in interpreting multivariate decoding results, including the potential departure from multivariate decoding methods for the study of brain function.

Highlights
- We highlight two sources of confusion that affect the interpretation of multivariate decoding results.
- One confusion arises from the dual use of multivariate decoding for predictions in real-world applications and for interpretation in terms of brain function.
- The other confusion arises from the different statistical and conceptual frameworks underlying classical univariate analysis and multivariate decoding.
- We highlight six differences between classical univariate analysis and multivariate decoding, and differences in the interpretation of signal and noise.
- These confusions are illustrated in four examples revealing assumptions and limitations of multivariate decoding for interpretation.
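
The distinction the abstract draws between classical univariate analysis and multivariate decoding can be made concrete with a small sketch. The example below is illustrative only and not taken from the paper; the synthetic data, variable names, and parameters are assumptions chosen for the sketch.

```python
# Hypothetical illustration (not from the paper): a classical univariate
# analysis tests each voxel separately, while multivariate decoding asks
# whether the whole activity pattern predicts the experimental condition.
import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 50

# Two conditions whose mean activation differs only weakly, but in every voxel.
cond_a = rng.normal(0.0, 1.0, (n_trials, n_voxels))
cond_b = rng.normal(0.1, 1.0, (n_trials, n_voxels))
X = np.vstack([cond_a, cond_b])
y = np.array([0] * n_trials + [1] * n_trials)

# Classical univariate analysis: a separate t-test per voxel.
t_vals, p_vals = stats.ttest_ind(cond_a, cond_b, axis=0)
print("voxels with p < 0.05 (uncorrected):", int((p_vals < 0.05).sum()))

# Multivariate decoding: cross-validated prediction from the full pattern.
decoder = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(decoder, X, y, cv=5).mean()
print(f"cross-validated decoding accuracy: {accuracy:.2f}")
```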

Algorithms ◽  
2021 ◽  
Vol 14 (7) ◽  
pp. 197
Author(s):  
Ali Seman ◽  
Azizian Mohd Sapawi

In the conventional k-means framework, seeding is the first step toward optimization before the objects are clustered. Random seeding raises two main issues: the clustering results may be less than optimal, and a different clustering result may be obtained on every run. In real-world applications, optimal and stable clustering is highly desirable. This report introduces a new clustering algorithm, the zero k-approximate modal haplotype (Zk-AMH) algorithm, which uses a simple and novel seeding mechanism known as zero-point multidimensional spaces. The Zk-AMH provides cluster optimality and stability, thereby resolving the aforementioned issues. Notably, the Zk-AMH algorithm yielded identical mean, maximum, and minimum scores over 100 runs, with a standard deviation of zero, demonstrating its stability. Additionally, when the Zk-AMH algorithm was applied to eight datasets, it achieved the highest mean scores for four datasets, produced an approximately equal score for one dataset, and yielded marginally lower scores for the other three datasets. With its optimality and stability, the Zk-AMH algorithm could be a suitable alternative for developing future clustering tools.
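
As a rough illustration of the seeding issue described above (not the authors' Zk-AMH algorithm, which clusters categorical, haplotype-style data), the sketch below contrasts random seeding, which can yield a different k-means solution on every run, with a deterministic seed matrix, which gives identical results across runs; the fixed seeds here merely stand in for the zero-point idea.

```python
# Hedged sketch: random vs. deterministic seeding in ordinary k-means.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=5, cluster_std=2.0, random_state=7)
k = 5

# Random seeding: each run starts from different centroids and may converge
# to a different local optimum.
random_inertias = [
    KMeans(n_clusters=k, init="random", n_init=1, random_state=r).fit(X).inertia_
    for r in range(10)
]
print("random seeding, inertia std over 10 runs:", np.std(random_inertias))

# Deterministic seeding: the same (illustrative) seed matrix every run,
# so the result never varies between runs.
fixed_seeds = np.zeros((k, X.shape[1])) + np.arange(k)[:, None]
fixed_inertias = [
    KMeans(n_clusters=k, init=fixed_seeds, n_init=1).fit(X).inertia_
    for _ in range(10)
]
print("deterministic seeding, inertia std over 10 runs:", np.std(fixed_inertias))
```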


Author(s):  
Kannan Balasubramanian ◽  
Mala K.

Zero knowledge protocols provide a way of proving that a statement is true without revealing anything other than the correctness of the claim. They have many practical applications in cryptography: while some applications exist only at the specification level, a line of research has produced real-world applications. Zero knowledge protocols, also referred to as zero knowledge proofs, are a type of protocol in which one party, called the prover, tries to convince the other party, called the verifier, that a given statement is true. Sometimes the statement is that the prover possesses a particular piece of information; this special case is called a zero-knowledge proof of knowledge. Formally, a zero-knowledge proof is a type of interactive proof.
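
As a concrete illustration of the prover-verifier interaction described above, here is a minimal, insecure toy sketch of a Schnorr-style interactive proof of knowledge of a discrete logarithm; the tiny group parameters, names, and single-round structure are illustrative assumptions rather than material from the chapter.

```python
# Toy Schnorr-style zero-knowledge proof of knowledge (insecure toy parameters).
import secrets

p, q, g = 23, 11, 2          # g generates the order-11 subgroup of Z_23* (toy values)
x = 7                        # prover's secret
y = pow(g, x, p)             # public value; the prover claims to know x with y = g^x mod p

def prover_commit():
    r = secrets.randbelow(q)
    return r, pow(g, r, p)   # keep r secret, send the commitment t = g^r mod p

def prover_respond(r, c):
    return (r + c * x) % q   # response s; on its own it reveals nothing about x

def verifier_check(t, c, s):
    return pow(g, s, p) == (t * pow(y, c, p)) % p   # accept iff g^s = t * y^c mod p

# One round of the interaction; repeating rounds shrinks a cheater's chance of success.
r, t = prover_commit()
c = secrets.randbelow(q)     # verifier's random challenge
s = prover_respond(r, c)
print("verifier accepts:", verifier_check(t, c, s))
```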


1977 ◽  
Vol 7 (4) ◽  
pp. 285-293 ◽  
Author(s):  
Robert D. Dycus

The effect of proposal appearance on technical evaluation scoring was examined experimentally. Two mock proposals were prepared—one from the A Corporation and the other from the B Corporation. Each proposal was prepared in two versions—a “nice” appearing version (stylized “logoed” pages, offset two-color printing, heavy paper stock, plastic 19-ring spiral binding), and a “poor” appearing version (single-spaced typed pages, xerox reproduction, cheap transparent plastic cover, staple binding). The proposals were scored against a set of eight evaluation questions by twenty-eight experienced government evaluators in a 2 × 2 factorial design experiment. No statistically significant effects of appearance on evaluation scoring were detected. A general model is presented that describes impression in terms of proposal appearance versus proposal thought content. The experiment is interpreted in terms of this model, and “real-world” applications of the model are discussed.


Author(s):  
Darren J. Croton

Abstract
The Hubble constant, H0, or its dimensionless equivalent, “little h”, is a fundamental cosmological property that is now known to an accuracy better than a few per cent. Despite its cosmological nature, little h commonly appears in the measured properties of individual galaxies. This can pose unique challenges for users of such data, particularly with survey data. In this paper we show how little h arises in the measurement of galaxies, how to compare like-properties from different datasets that have assumed different little h cosmologies, and how to fairly compare theoretical data with observed data, where little h can manifest in vastly different ways. This last point is particularly important when observations are used to calibrate galaxy formation models, as calibrating with the wrong (or no) little h can lead to disastrous results when the model is later converted to the correct h cosmology. We argue that in this modern age little h is an anachronism, being one of the least uncertain parameters in astrophysics, and we propose that observers and theorists instead treat this uncertainty like any other. We conclude with a ‘cheat sheet’ of nine points that should be followed when dealing with little h in data analysis.
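
As a hedged sketch (not code from the paper) of comparing like-properties measured under different assumed little-h values, the helper below rescales a quantity whose h-dependence is a simple power law; the appropriate exponent depends on how the quantity was derived (distances typically scale as h^-1, volumes as h^-3, luminosities as h^-2), so it must be checked against each dataset's conventions.

```python
def rescale_little_h(value, h_exponent, h_assumed, h_target):
    """Re-express `value`, computed assuming `h_assumed`, under `h_target`,
    for a quantity that scales as h**h_exponent (e.g. distance: -1,
    volume: -3, luminosity: -2)."""
    return value * (h_target / h_assumed) ** h_exponent

# Illustrative numbers only: a distance computed assuming h = 0.70,
# re-expressed in a cosmology with h = 0.67.
d_h70 = 142.9   # Mpc
d_h67 = rescale_little_h(d_h70, -1, 0.70, 0.67)
print(f"{d_h67:.1f} Mpc assuming h = 0.67")
```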


Author(s):  
Xuanfei Zhang

This study compares common applications of the hedonic pricing model for valuing ecosystem services across Europe, the United States, and China. An analysis of the various factors that affect housing prices shows that cultural and historical backgrounds play a role in these real-world applications.
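
As a generic, hedged sketch of the hedonic pricing idea (not the study's model or data), the example below regresses log housing price on structural attributes and an environmental amenity, so the amenity coefficient approximates the implicit price of the ecosystem service; all variables and numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
area = rng.uniform(40, 200, n)          # floor area in m^2 (illustrative)
rooms = rng.integers(1, 6, n)           # number of rooms (illustrative)
park_dist = rng.uniform(0.1, 5.0, n)    # distance to nearest park in km (illustrative)

# Synthetic log prices: proximity to green space raises the price.
log_price = (10 + 0.008 * area + 0.05 * rooms - 0.06 * park_dist
             + rng.normal(0, 0.1, n))

# Ordinary least squares fit of the hedonic equation.
design = np.column_stack([np.ones(n), area, rooms, park_dist])
coef, *_ = np.linalg.lstsq(design, log_price, rcond=None)
print("implicit (log-price) effect of park distance per km:", round(coef[3], 3))
```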


Author(s):  
Jie Wen ◽  
Zheng Zhang ◽  
Yong Xu ◽  
Bob Zhang ◽  
Lunke Fei ◽  
...  

Multi-view clustering aims to partition data collected from diverse sources and is based on the assumption that all views are complete. However, this assumption is rarely satisfied in many real-world applications, giving rise to the incomplete multi-view learning problem. Existing attempts at this problem still have the following limitations: 1) the underlying semantic information of the missing views is commonly ignored; 2) the local structure of the data is not well explored; 3) the importance of different views is not effectively evaluated. To address these issues, this paper proposes a Unified Embedding Alignment Framework (UEAF) for robust incomplete multi-view clustering. In particular, a locality-preserved reconstruction term is introduced to infer the missing views such that all views can be naturally aligned. A consensus graph is adaptively learned and embedded via reverse graph regularization to guarantee a common local structure across the multiple views, which in turn can further align the incomplete and inferred views. Moreover, an adaptive weighting strategy is designed to capture the importance of different views. Extensive experimental results show that the proposed method can significantly improve clustering performance in comparison with several state-of-the-art methods.
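
One ingredient the abstract mentions, adaptive view weighting, can be loosely illustrated with a common auto-weighting heuristic in which views that fit the data poorly receive smaller weights; this is not the UEAF objective or its optimization, and the matrices, losses, and weighting rule below are illustrative assumptions only.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n, k = 200, 3
view1 = rng.normal(size=(n, 10))        # one view of the same samples (illustrative)
view2 = rng.normal(size=(n, 10)) * 5.0  # a noisier view (illustrative)
views = [view1, view2]

# Per-view clustering loss (k-means inertia), then weights proportional to
# 1 / sqrt(loss), normalized to sum to one -- a standard auto-weighting heuristic.
losses = np.array([KMeans(n_clusters=k, n_init=10).fit(v).inertia_ for v in views])
weights = 1.0 / np.sqrt(losses)
weights /= weights.sum()
print("adaptive view weights:", np.round(weights, 3))
```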


Author(s):  
Parvin Shaikh

Organizational culture refers to the common beliefs and values present in an organization that guide the behavior of its members. Organizational culture affects the way people and groups interact with each other, with clients, and with stakeholders. The main purpose of this paper is to study the OCTAPACE culture at two organizations in the hospitality sector in Nagpur. The paper also aims to find out whether there are differences in the culture of the two organizations. The OCTAPACE profile instrument developed by Udai Pareek was used to study the cultural ethos at the selected organizations. Data analysis was done using SPSS. Findings indicate that both organizations scored within the normative values on five dimensions (Collaboration, Trust, Autonomy, Proaction, Confrontation); one organization had excellent scores on two dimensions (Authenticity, Experimenting), while the other scored below the lowest normative value on Openness.


2015 ◽  
Vol 24 (03) ◽  
pp. 1550003 ◽  
Author(s):  
Armin Daneshpazhouh ◽  
Ashkan Sami

The task of semi-supervised outlier detection is to find the instances that are exceptional from other data, using some labeled examples. In many applications, such as fraud detection and intrusion detection, this issue becomes more important. Most existing techniques are unsupervised; semi-supervised approaches, on the other hand, use both negative and positive instances to detect outliers. However, in many real-world applications, very few positive labeled examples are available. This paper proposes an innovative approach to address this problem. The proposed method works as follows. First, some reliable negative instances are extracted by a kNN-based algorithm. Afterwards, fuzzy clustering using both negative and positive examples is utilized to detect outliers. Experimental results on real data sets demonstrate that the proposed approach outperforms the previous unsupervised state-of-the-art methods in detecting outliers.
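
The two-step pipeline described above can be sketched roughly as follows; the kNN threshold, the use of a Gaussian mixture in place of the paper's fuzzy clustering, and all other details are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (300, 2)),      # mostly normal points
               rng.normal(6, 1, (10, 2))])      # a few true outliers
positives = X[-3:]                              # the few labeled outlier examples

# Step 1: reliable negatives = points far from every labeled positive example.
nn = NearestNeighbors(n_neighbors=1).fit(positives)
dist_to_pos, _ = nn.kneighbors(X)
reliable_neg = X[dist_to_pos.ravel() > np.quantile(dist_to_pos, 0.5)]

# Step 2: soft clustering seeded by both groups; high membership in the
# component initialized from the positives flags a point as an outlier.
means_init = np.vstack([reliable_neg.mean(axis=0), positives.mean(axis=0)])
gm = GaussianMixture(n_components=2, means_init=means_init, random_state=0).fit(X)
outlier_membership = gm.predict_proba(X)[:, 1]
print("flagged outliers:", int((outlier_membership > 0.5).sum()))
```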


AI Magazine ◽  
2008 ◽  
Vol 29 (4) ◽  
pp. 25 ◽  
Author(s):  
Jorge A. Baier ◽  
Sheila A. McIlraith

Automated Planning is an old area of AI that focuses on the development of techniques for finding, as quickly as possible, a plan that achieves a given goal from a given set of initial states. In most real-world applications, users of planning systems have preferences over the multitude of plans that achieve a given goal. These preferences make it possible to distinguish plans that are more desirable from those that are less desirable. Planning systems should therefore be able to construct high-quality plans, or at the very least they should be able to build plans of reasonably good quality given the resources available. In the last few years we have seen a significant amount of research focused on developing rich and compelling languages for expressing preferences over plans. At the same time, we have seen the development of planning techniques that aim at finding high-quality plans quickly, exploiting some of the ideas developed for classical planning. In this paper we review the latest developments in automated preference-based planning. We also review various approaches for preference representation and the main practical approaches developed so far.


2019 ◽  
Vol 27 (3) ◽  
pp. 341-356
Author(s):  
Paula Droege

Since the introduction of new technologies, the deluge of neuroscientific data has been overwhelming. On one hand, this new information has produced remarkable breakthroughs in our understanding of brain function and development, as well as lifesaving treatments for trauma and disease. On the other hand, the lure of, and reward for, explanations of mental phenomena in terms of simple, manipulable brain processes have led to questionable research methodologies and unsubstantiated claims. A more fundamental issue is raised by the attempt to explain consciousness by means of information, as proposed by the Information Integration Theory (IIT). While the models produced by this massive computation of data will no doubt improve our understanding of brain function and capacity, a strict information-processing approach cannot address the problem of meaning. A solution to this problem demands an evolutionary, developmental, and dynamic account of an organism in its environment. Data analysis will play a role in this inclusive explanatory program, but data alone are insufficient for explanation.

