event representations — Recently Published Documents

TOTAL DOCUMENTS: 95 (five years: 20)
H-INDEX: 13 (five years: 0)

Sensors ◽ 2021 ◽ Vol 22 (1) ◽ pp. 3
Author(s): Giacomo Frisoni, Gianluca Moro, Giulio Carlassare, Antonella Carbonaro

The automatic extraction of biomedical events from the scientific literature has drawn keen interest in recent years, as it recovers complex, semantically rich graph-structured interactions otherwise buried in text. However, very few works address learning embeddings or similarity metrics for event graphs. This gap leaves biological relations unlinked and prevents the application of machine learning techniques that could promote discoveries. Taking advantage of recent deep graph kernel solutions and pre-trained language models, we propose Deep Divergence Event Graph Kernels (DDEGK), an unsupervised inductive method that maps events into low-dimensional vectors while preserving their structural and semantic similarities. Unlike most other systems, DDEGK operates at the graph level and requires no task-specific labels, feature engineering, or known correspondences between nodes. To this end, our solution compares each event against a small set of anchor events, trains cross-graph attention networks to draw pairwise alignments (bolstering interpretability), and employs transformer-based models to encode continuous attributes. Extensive experiments were conducted on nine biomedical datasets. We show that the learned event representations can be effectively employed in tasks such as graph classification, clustering, and visualization, and also facilitate downstream semantic textual similarity. Empirical results demonstrate that DDEGK significantly outperforms other state-of-the-art methods.
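As a rough illustration of the anchor idea described in this abstract (not DDEGK itself, whose divergence scores are learned with cross-graph attention networks), the sketch below embeds an event graph as its vector of distances from a few anchor graphs. A simple Jaccard distance over edge triples stands in for the learned divergence, and all graphs and triples are invented toy examples.

```python
# Sketch: anchor-based graph embedding. Each event graph is a set of
# (head, relation, tail) triples; its embedding is the vector of
# divergences from a small, fixed set of anchor graphs, so every
# graph maps to the same fixed-length vector regardless of its size.

def jaccard_divergence(edges_a, edges_b):
    """1 - |intersection| / |union| over two edge sets (0 = identical)."""
    union = edges_a | edges_b
    if not union:
        return 0.0
    return 1.0 - len(edges_a & edges_b) / len(union)

def embed(graph_edges, anchors):
    """Map a graph to its vector of divergences from each anchor graph."""
    return [jaccard_divergence(graph_edges, a) for a in anchors]

# Toy biomedical event graphs (hypothetical triples, for illustration).
anchors = [
    {("Regulation", "Theme", "p53")},
    {("Binding", "Theme", "BRCA1"), ("Binding", "Theme2", "BARD1")},
]
event = {("Regulation", "Theme", "p53"), ("Regulation", "Cause", "MDM2")}
vec = embed(event, anchors)  # one coordinate per anchor
```

Because the embedding depends only on comparisons against the anchors, unseen graphs can be mapped without retraining, which is what makes the approach inductive.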


2021 ◽ Vol 118 (51) ◽ pp. e2119670118
Author(s): Lynn Nadel

The question of why our conceptions of space and time are intertwined with memory in the hippocampal formation is at the forefront of much current theorizing about this brain system. In this article, I argue that animals bridge spatial and temporal gaps through the creation of internal models that allow them to act on the basis of things that exist in a distant place and/or existed at a different time. The hippocampal formation plays a critical role in these processes by stitching together spatiotemporally disparate entities and events. It does this by 1) constructing cognitive maps that represent extended spatial contexts, incorporating and linking aspects of an environment that may never have been experienced together; 2) creating neural trajectories that link the parts of an event, whether or not they occur in close temporal proximity, enabling the construction of event representations even when elements of that event were experienced at quite different times; and 3) using these maps and trajectories to simulate possible futures. As a function of these hippocampally driven processes, our subjective senses of both space and time are interwoven constructions of the mind, much as the philosopher Immanuel Kant postulated.


2021
Author(s): Marcos P. S. Gôlo, Rafael G. Rossi, Ricardo M. Marcacini

Events are phenomena that occur at a specific time and place, and detecting them makes it possible to extract useful knowledge from these events. Event detection is a multimodal task, since events have textual, geographical, and temporal components. Most multimodal research in the literature represents events by concatenating these components. Such approaches rely on multi-class or binary learning, which intensifies the user's labeling effort: the user must label event classes even when there is no interest in detecting them. In this paper, we present Triple-VAE, an approach that learns a unified representation from textual, spatial, and density modalities through a variational autoencoder, one of the state-of-the-art techniques in representation learning. Triple-VAE obtains event representations suitable for one-class classification, where users provide labels only for events of interest, thereby reducing the labeling effort. We carried out an experimental evaluation with ten real-world event datasets, four multimodal representation methods, and five evaluation metrics. Triple-VAE outperforms the other three representation methods on all datasets, with statistically significant differences. Triple-VAE therefore proves promising for representing events in the one-class event detection scenario.
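To make the one-class setting concrete (labels only for events of interest, no labels for everything else), here is a minimal sketch that is deliberately not Triple-VAE: it fuses hand-picked modality features by plain concatenation, the baseline this abstract argues a learned latent space improves on, and flags new events by distance to the centroid of the labeled positives. All feature names, values, and the radius are hypothetical.

```python
# Sketch: one-class event detection over concatenated modality features.
# The user labels only positive examples (events of interest); anything
# far from their centroid is rejected, with no negative labels needed.
import math

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

def distance(a, b):
    """Euclidean distance between two vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def one_class_detector(positives, radius):
    """Return a predicate: is this vector within `radius` of the positives?"""
    c = centroid(positives)
    return lambda x: distance(x, c) <= radius

# Each event vector concatenates [textual score, latitude, longitude,
# temporal-density score] — a toy stand-in for the three modalities.
flood_events = [[0.9, -23.5, -46.6, 0.8], [0.8, -23.6, -46.7, 0.7]]
is_flood = one_class_detector(flood_events, radius=1.0)
```

A learned unified representation such as Triple-VAE's replaces the raw concatenation above with a latent vector, so that distances in the space better reflect event similarity across modalities.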


2021

Event structures are central to research in Linguistics and Artificial Intelligence: people can easily refer to changes in the world, identify their participants, distinguish relevant information, and form expectations of what can happen next. Part of this process rests on mechanisms similar to narratives, which are at the heart of information sharing. But it remains difficult to automatically detect events or to automatically construct stories from such event representations. This book explores how to handle today's massive news streams and provides multidimensional, multimodal, and distributed approaches, such as automated deep learning, to capture the events and narrative structures involved in a 'story'. This overview of the current state of the art in event extraction, temporal and causal relations, and storyline extraction aims to establish a new multidisciplinary research community with a common terminology and research agenda. Graduate students and researchers in natural language processing, computational linguistics, and media studies will benefit from this book.


2021
Author(s): Johannes Mahr, Joshua D. Greene, Daniel L. Schacter

A prominent feature of mental event (i.e. ‘episodic’) simulations is their temporality: human adults can generate episodic representations directed towards the past or the future. The ability to entertain event representations with different temporal orientations allows these representations to play various cognitive roles. Here, we investigated how the temporal orientation of imagined events relates to the contents (i.e. ‘what is happening’) of these events. Is the temporal orientation of an episode part of its contents? Or are the processes for assigning temporality to an event representation distinct from those generating its contents? In three experiments (N = 360), we asked participants to generate and later recall a series of imagined events differing in (1) location (indoors vs. outdoors), (2) time of day (daytime vs. nighttime), (3) temporal orientation (past vs. future), and (4) weekday (Monday vs. Friday). We then tested to what extent successful recall of episodic content (i.e. (1) and (2)) would predict recall of temporality and/or weekday information. Results showed that while recall of temporal orientation was predicted by content recall, weekday recall was not. However, temporal orientation was only weakly integrated with episodic contents. This finding suggests that episodic simulations are unlikely to be intrinsically temporal in nature. Instead, similar to other forms of temporal information, temporal orientation might be determined from such contents by reconstructive post-retrieval processes. These results have implications for how the human ability to ‘mentally travel’ in time is cognitively implemented.


Entropy ◽ 2021 ◽ Vol 23 (7) ◽ pp. 843
Author(s): Peter Gärdenfors

The aim of this article is to provide an evolutionarily grounded explanation of central aspects of the structure of language. It begins with an account of the evolution of human causal reasoning. A comparison between humans and non-human primates suggests that human causal cognition is based on reasoning about the underlying forces involved in events, while other primates hardly understand external forces. This is illustrated by an analysis of the causal cognition required for early hominin tool use. Second, this thinking about forces in causation motivates a model of human event cognition. A mental representation of an event contains two vectors, representing a cause and a result, as well as entities such as agents, patients, instruments and locations. The fundamental connection between event representations and language is that declarative sentences express events (or states). The event structure also explains why sentences are constituted of noun phrases and verb phrases. Finally, the components of the event representation show up in language, where causes and effects are expressed by verbs, agents and patients by nouns (modified by adjectives), locations by prepositions, etc. Thus, the evolution of the complexity of mental event representations also provides insight into the evolution of the structure of language.
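The two-vector event model summarized above can be sketched as a small data structure: two vectors (the force exerted by the cause and the resulting change of state) plus thematic roles, with a crude mapping back to a sentence. Field names, the toy vectors, and the `describe` rendering are illustrative assumptions, not taken from the article.

```python
# Sketch of a Gärdenfors-style event representation: a force vector
# (the cause), a result vector (the change in the patient), and the
# thematic roles that surface as noun phrases and prepositions.
from dataclasses import dataclass

@dataclass
class Event:
    force: tuple        # cause vector, e.g. direction/magnitude of the action
    result: tuple       # result vector, the patient's change of state
    agent: str          # surfaces as the subject noun phrase
    patient: str        # surfaces as the object noun phrase
    instrument: str = None
    location: str = None  # surfaces as a prepositional phrase

    def describe(self):
        """Crude rendering of the event as a declarative sentence."""
        parts = [self.agent, "acts on", self.patient]
        if self.instrument:
            parts += ["with", self.instrument]
        if self.location:
            parts += ["at", self.location]
        return " ".join(parts)

# Toy instance echoing the hominin tool-use example in the abstract.
e = Event(force=(1.0, 0.0), result=(0.0, -1.0),
          agent="hominin", patient="nut", instrument="stone", location="camp")
```

The point of the structure is the correspondence the abstract draws: the verb phrase carries the force/result vectors, while agent, patient, instrument, and location fill the noun and prepositional slots of the sentence.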

