Modelling Early Word Acquisition through Multiplex Lexical Networks and Machine Learning

2019, Vol 3 (1), pp. 10
Author(s): Massimo Stella

Early language acquisition is a complex cognitive task. Recent data-informed approaches have shown that children do not learn words uniformly at random but rather follow specific strategies based on the associative representation of words in the mental lexicon, a conceptual system enabling human cognitive computing. Building on this evidence, the current investigation combines machine learning techniques, psycholinguistic features (i.e., frequency, length, polysemy and class) and multiplex lexical networks, representing the semantics and phonology of the mental lexicon, with the aim of predicting the normative acquisition of 529 English words by toddlers between 22 and 26 months. Classification using logistic regression based on the four psycholinguistic features achieves the best baseline cross-validated accuracy of 61.7% when half of the words have been acquired. Adding network information through multiplex closeness centrality enhances accuracy (up to 67.7%) more than adding multiplex neighbourhood density/degree (62.4%), multiplex PageRank versatility (63.0%), or the best single-layer network metric, free association degree (65.2%). Multiplex closeness operationalises the structural relevance of words for semantic and phonological information flow. These results indicate that, when considered in conjunction with language norms, the whole, global, multi-level flow of information and structure of the mental lexicon influences word acquisition more than single-layer or local network features of words. The highlighted synergy of multiplex lexical structure and psycholinguistic norms opens new ways for understanding human cognition and language processing through powerful and data-parsimonious cognitive computing approaches.
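The abstract's modelling pipeline, logistic regression over psycholinguistic features with and without a network-derived feature, compared by cross-validated accuracy, can be sketched as follows. All data here are synthetic stand-ins: the feature values, the "multiplex closeness" scores, and the acquisition labels are invented for illustration, not the study's CDI norms or lexical networks.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical dataset: one row per word, four psycholinguistic features
# (frequency, length, polysemy, word class) plus a multiplex closeness
# score; label = whether the word is acquired by a given month.
rng = np.random.default_rng(0)
n_words = 529
X_psycho = rng.normal(size=(n_words, 4))   # synthetic feature values
closeness = rng.random(n_words)            # synthetic stand-in for multiplex closeness
y = (0.8 * closeness + 0.3 * X_psycho[:, 0]
     + rng.normal(scale=0.5, size=n_words)) > 0.5

# Baseline classifier: psycholinguistic features only.
base_acc = cross_val_score(
    LogisticRegression(max_iter=1000), X_psycho, y, cv=5).mean()

# Augmented classifier: same model plus the network feature.
X_full = np.column_stack([X_psycho, closeness])
full_acc = cross_val_score(
    LogisticRegression(max_iter=1000), X_full, y, cv=5).mean()
```

Comparing `base_acc` and `full_acc` mirrors the study's question of whether network structure adds predictive information beyond the language norms alone.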

2021, Vol 11 (1)
Author(s): Andrew K. C. Wong, Pei-Yuan Zhou, Zahid A. Butt

Machine learning has made impressive advances in many applications akin to human cognition for discernment. However, success has been limited in the area of relational datasets, particularly for data with low volume, imbalanced groups, and mislabeled cases, with outputs that typically lack transparency and interpretability. The difficulties arise from the subtle overlapping and entanglement of functional and statistical relations at the source level. Hence, we have developed the Pattern Discovery and Disentanglement System (PDD), which is able to discover explicit patterns from data of various sizes and with imbalanced groups, and to screen out anomalies. We present herein four case studies on biomedical datasets to substantiate the efficacy of PDD. It improves prediction accuracy and facilitates transparent interpretation of discovered knowledge through an explicit representation framework, the PDD Knowledge Base, that links the sources, the patterns, and individual patients. Hence, PDD promises broad and ground-breaking applications in genomic and biomedical machine learning.


Author(s): Joel Weijia Lai, Candice Ke En Ang, U. Rajendra Acharya, Kang Hao Cheong

Artificial intelligence in healthcare employs machine learning algorithms to emulate human cognition in the analysis of complicated or large sets of data. Specifically, artificial intelligence taps into the ability of computer algorithms and software, within allowable thresholds, to make deterministic approximate conclusions. In comparison to traditional technologies in healthcare, artificial intelligence enhances the process of data analysis without the need for human input, producing nearly equally reliable, well-defined output. Schizophrenia is a chronic mental health condition that affects millions worldwide, with impairment in thinking and behaviour that may be significantly disabling to daily living. Multiple artificial intelligence and machine learning algorithms have been utilized to analyze the different components of schizophrenia, such as prediction of disease and assessment of current prevention methods. These are carried out in the hope of assisting with diagnosis and providing viable options for affected individuals. In this paper, we review the progress of the use of artificial intelligence in schizophrenia.


2018, Vol 8 (4), pp. 34
Author(s): Vishal Saxena, Xinyu Wu, Ira Srivastava, Kehan Zhu

The ongoing revolution in deep learning is redefining the nature of computing, driven by the increasing volume of pattern classification and cognitive tasks. Specialized digital hardware for deep learning still holds its predominance due to the flexibility offered by software implementation and the maturity of algorithms. However, it is increasingly desired that cognitive computing occur at the edge, i.e., on hand-held devices that are energy constrained, which is energy prohibitive when employing digital von Neumann architectures. Recent explorations in digital neuromorphic hardware have shown promise, but offer a lower neurosynaptic density than is needed for scaling to applications such as intelligent cognitive assistants (ICA). Large-scale integration of nanoscale emerging memory devices with Complementary Metal Oxide Semiconductor (CMOS) mixed-signal integrated circuits can herald a new generation of neuromorphic computers that transcend the von Neumann bottleneck for cognitive computing tasks. Such hybrid Neuromorphic System-on-a-Chip (NeuSoC) architectures promise machine learning capability at chip-scale form factor and several orders of magnitude improvement in energy efficiency. Practical demonstration of such architectures has been limited because the performance of emerging memory devices falls short of the behavior expected from idealized memristor-based analog synapses, or weights, and novel machine learning algorithms are needed to take advantage of the actual device behavior. In this article, we review the challenges involved and present a pathway to realize large-scale mixed-signal NeuSoCs, from device arrays and circuits to spike-based deep learning algorithms with 'brain-like' energy efficiency.


2021, Vol 10 (1)
Author(s): Elena Goi, Xi Chen, Qiming Zhang, Benjamin P. Cumming, Steffen Schoenhardt, ...

Optical machine learning has emerged as an important research area that, by leveraging the advantages inherent to optical signals, such as parallelism and high speed, paves the way for a future where optical hardware can process data at the speed of light. In this work, we present such optical devices for data processing in the form of single-layer nanoscale holographic perceptrons trained to perform optical inference tasks. We experimentally show the functionality of these passive optical devices in the example of decryptors trained to perform optical inference of single or whole classes of keys through symmetric and asymmetric decryption. The decryptors, designed for operation in the near-infrared region, are nanoprinted on complementary metal-oxide-semiconductor chips by galvo-dithered two-photon nanolithography with axial nanostepping of 10 nm [1,2], achieving a neuron density of >500 million neurons per square centimetre. This power-efficient commixture of machine learning and on-chip integration may have a transformative impact on optical decryption [3], sensing [4], medical diagnostics [5] and computing [6,7].


2021, pp. 875529302110423
Author(s): Zoran Stojadinović, Miloš Kovačević, Dejan Marinković, Božidar Stojadinović

This article proposes a new framework for rapid earthquake loss assessment based on a machine learning damage classification model and a representative sampling algorithm. A random forest classification model predicts a damage probability distribution that, combined with an expert-defined repair cost matrix, enables the calculation of the expected repair costs for each building and, in aggregate, of direct losses in the earthquake-affected area. The proposed building representation does not include explicit information about the earthquake and the soil type. Instead, such information is implicitly contained in the spatial distribution of damage. To capture this distribution, a sampling algorithm, based on K-means clustering, is used to select a minimal number of buildings that represent the area of interest in terms of its seismic risk, independently of future earthquakes. To observe damage states in the representative set after an earthquake, the proposed framework utilizes a local network of trained damage assessors. The model is updated after each damage observation cycle, thus increasing the accuracy of the current loss assessment. The proposed framework is exemplified using the 2010 Kraljevo, Serbia earthquake dataset.
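The core loss-assessment step described above, a per-building damage probability distribution combined with a repair cost matrix to yield expected repair costs and an aggregate loss, can be sketched as follows. The building features, damage labels, and per-state costs below are invented placeholders, not the paper's Kraljevo dataset or its expert-defined cost matrix:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical building inventory: feature vectors (e.g. construction year,
# storeys, material) and observed damage states from a past earthquake.
rng = np.random.default_rng(1)
n_buildings, n_features, n_states = 300, 6, 4   # states: none/slight/moderate/severe
X = rng.normal(size=(n_buildings, n_features))   # synthetic building attributes
y = rng.integers(0, n_states, size=n_buildings)  # synthetic observed damage states

# Random forest predicts a probability distribution over damage states.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
proba = clf.predict_proba(X)                     # shape (n_buildings, n_states)

# Assumed per-state repair cost for a building (placeholder values).
repair_cost = np.array([0.0, 5e3, 25e3, 120e3])

# Expected repair cost per building, and the aggregate direct loss.
expected_cost = proba @ repair_cost
total_loss = expected_cost.sum()
```

The expectation `proba @ repair_cost` is the framework's key design choice: losses are computed from the full predicted distribution rather than from a single most-likely damage state, so classification uncertainty propagates into the loss estimate.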


2018, Vol 15 (1), pp. 199-215
Author(s): Thomas Edward Marshall, Sherwood Lane Lambert

This paper presents a cognitive computing model, based on artificial intelligence (AI) technologies, supporting task automation in the accounting industry. Drivers and consequences of task automation, globally and in accounting, are reviewed. A framework supporting cognitive task automation is discussed. The paper recognizes essential differences between cognitive computing and data analytics. Cognitive computing technologies that support task automation are incorporated into a model delivering federated knowledge. The impact of task automation on accounting job roles and the resulting creation of new accounting job roles supporting innovation are presented. The paper develops a hypothetical use case of building a cloud-based intelligent accounting application design, defined as cognitive services, using machine learning based on AI. The paper concludes by recognizing the significance of future research into task automation in accounting and suggests the federated knowledge model as a framework for future research into the process of digital transformation based on cognitive computing.


Author(s): Yingxu Wang, Yousheng Tian, Kendal Hu

Towards the formalization of ontological methodologies for dynamic machine learning and semantic analyses, a new form of denotational mathematics known as concept algebra is introduced. Concept algebra (CA) is a denotational mathematical structure for formal knowledge representation and manipulation in machine learning and cognitive computing. CA provides a rigorous knowledge modeling and processing tool, which extends informal, static, and application-specific ontological technologies to a formal, dynamic, and general mathematical means. An operational semantics for the calculus of CA is formally elaborated using a set of computational processes in real-time process algebra (RTPA). A case study is presented on how machines, cognitive robots, and software agents may mimic the key ability of human beings to autonomously manipulate knowledge in generic learning using CA. This work demonstrates the expressive power and wide range of applications of CA for both humans and machines in cognitive computing, semantic computing, machine learning, and computational intelligence.


Author(s): Yingxu Wang, Bernard Widrow, Lotfi A. Zadeh, Newton Howard, Sally Wood, ...

The theme of IEEE ICCI*CC'16 on Cognitive Informatics (CI) and Cognitive Computing (CC) was cognitive computers, big data cognition, and machine learning. CI and CC constitute a contemporary field not only for basic studies on the brain, computational intelligence theories, and denotational mathematics, but also for engineering applications in cognitive systems towards deep learning, deep thinking, and deep reasoning. This paper reports a set of position statements presented in the plenary panel (Part I) of IEEE ICCI*CC'16 at Stanford University. The summary is contributed by invited panelists who are among the world's renowned scholars in the transdisciplinary field of CI and CC.


2020, Vol 2020, pp. 1-12
Author(s): Saad Awadh Alanazi, M. M. Kamruzzaman, Madallah Alruwaili, Nasser Alshammari, Salman Ali Alqahtani, ...

COVID-19 presents an urgent global challenge because of its contagious nature, frequently changing characteristics, and the lack of a vaccine or effective medicines. A model for measuring and preventing the continued spread of COVID-19 is urgently required to provide smart health care services. This requires advanced intelligent computing such as artificial intelligence, machine learning, deep learning, cognitive computing, cloud computing, fog computing, and edge computing. This paper proposes a model for predicting COVID-19 using the SIR model and machine learning for smart health care and the well-being of the citizens of KSA. Knowing the number of susceptible, infected, and recovered cases each day is critical for mathematical modeling to identify the behavioral effects of the pandemic. The model forecasts the situation for the upcoming 700 days and predicts whether COVID-19 will spread in the population or die out in the long run. Mathematical analysis and simulation results are presented as a means to forecast the progress of the outbreak and its possible end under three scenarios: "no actions," "lockdown," and "new medicines." The effects of interventions such as lockdown and new medicines are compared with the "no actions" scenario. The lockdown case delays the peak point by decreasing the infection rate and affects the area equality rule of the infected curves. New medicines, in turn, have a significant impact on the infected curve, decreasing the number of infected people over time. Available forecast data on COVID-19 using simulations predict that the highest level of cases might occur between 15 and 30 November 2020, and that the virus might be fully under control only after June 2021. The reproductive rate shows that measures such as government lockdowns and isolation of individuals are not enough to stop the pandemic. This study recommends that authorities apply, as soon as possible, a strict long-term containment strategy to reduce the epidemic size successfully.
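The scenario comparison described above can be illustrated with a minimal discrete-time SIR simulation. The transmission rate, recovery rate, population size, and initial conditions below are assumed values for illustration, not the paper's fitted parameters; a lockdown is modelled, as in the abstract, simply as a lower transmission rate:

```python
# Minimal Euler-step SIR model:
#   dS/dt = -beta*S*I/N,  dI/dt = beta*S*I/N - gamma*I,  dR/dt = gamma*I
def simulate_sir(beta, gamma, s0, i0, r0, days):
    n = s0 + i0 + r0
    s, i, r = float(s0), float(i0), float(r0)
    history = []
    for _ in range(days):
        new_inf = beta * s * i / n   # new infections this day
        new_rec = gamma * i          # new recoveries this day
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        history.append((s, i, r))
    return history

# "No actions" vs. "lockdown" (lower beta); parameters are assumptions.
no_action = simulate_sir(beta=0.30, gamma=0.10, s0=34_000_000, i0=100, r0=0, days=700)
lockdown = simulate_sir(beta=0.15, gamma=0.10, s0=34_000_000, i0=100, r0=0, days=700)

# Day on which the infected count peaks in each scenario.
peak_day_no_action = max(range(700), key=lambda d: no_action[d][1])
peak_day_lockdown = max(range(700), key=lambda d: lockdown[d][1])
```

Running this shows the qualitative behaviour the abstract describes: lowering the transmission rate both delays the infection peak and reduces its height.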


2019, Vol 23 (4), pp. 740-751
Author(s): Alexis Hervais-Adelman, Laura Babcock

Simultaneous interpreting is a complex cognitive task that requires the concurrent execution of multiple processes: listening, comprehension, conversion of a message from one language to another, speech production, and self-monitoring. This requires the deployment of an array of linguistic and cognitive control mechanisms that must coordinate the various brain systems implicated in handling these tasks. How the brain handles this challenge remains an open question, and recent brain imaging investigations have begun to complement the theories based on behavioural data. fMRI studies have shown that simultaneous interpreting engages a network of brain regions encompassing those implicated in speech perception and production, language switching, self-monitoring, and selection. Structural imaging studies have been carried out that also indicate modifications to a similar set of structures. In the present paper, we review the extant data and propose an integrative model of simultaneous interpreting that piggybacks on existing theories of multilingual language control.

