schema learning
Recently Published Documents


TOTAL DOCUMENTS: 26 (FIVE YEARS: 0)
H-INDEX: 7 (FIVE YEARS: 0)

2018 ◽ Vol 115 (14) ◽ pp. E3313-E3322
Author(s): Kevin J. P. Woods ◽ Josh H. McDermott

The cocktail party problem requires listeners to infer individual sound sources from mixtures of sound. The problem can be solved only by leveraging regularities in natural sound sources, but little is known about how such regularities are internalized. We explored whether listeners learn source “schemas”—the abstract structure shared by different occurrences of the same type of sound source—and use them to infer sources from mixtures. We measured the ability of listeners to segregate mixtures of time-varying sources. In each experiment a subset of trials contained schema-based sources generated from a common template by transformations (transposition and time dilation) that introduced acoustic variation but preserved abstract structure. Across several tasks and classes of sound sources, schema-based sources consistently aided source separation, in some cases producing rapid improvements in performance over the first few exposures to a schema. Learning persisted across blocks that did not contain the learned schema, and listeners were able to learn and use multiple schemas simultaneously. No learning was evident when schemas were presented in the task-irrelevant (i.e., distractor) source. However, learning from task-relevant stimuli showed signs of being implicit, in that listeners were no more likely to report that sources recurred in experiments containing schema-based sources than in control experiments containing no schema-based sources. The results implicate a mechanism for rapidly internalizing abstract sound structure, facilitating accurate perceptual organization of sound sources that recur in the environment.
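The transformations described above, transposition and time dilation applied to a common template, can be sketched as follows. The (frequency, duration) note representation and the `transform` helper are illustrative assumptions for exposition, not the stimulus-generation code used in the study.

```python
# Sketch (hypothetical representation): a schema is a template melody,
# and each schema-based source is derived from it by transposition
# (scaling frequencies) and time dilation (scaling durations).
# The abstract structure -- relative pitch and relative timing -- is preserved.

def transform(template, semitones=0, dilation=1.0):
    """Apply transposition and time dilation to a template melody.

    template: list of (frequency_hz, duration_s) note tuples.
    """
    ratio = 2 ** (semitones / 12)  # equal-tempered pitch-shift factor
    return [(f * ratio, d * dilation) for f, d in template]

# Hypothetical three-note template and one acoustic variant of it.
template = [(440.0, 0.25), (494.0, 0.25), (523.0, 0.5)]
variant = transform(template, semitones=3, dilation=1.2)
```

Note that the frequency ratios between successive notes are identical in `template` and `variant`; only the absolute pitch and tempo differ, which is the sense in which the transformations preserve abstract structure.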


2018
Author(s): Catherine Chen ◽ Qihong Lu ◽ Andre Beukers ◽ Chris Baldassano ◽ Kenneth Norman

2017 ◽ Vol 5 ◽ pp. 233-246
Author(s): Bhavana Dalvi Mishra ◽ Niket Tandon ◽ Peter Clark

Our goal is to construct a domain-targeted, high-precision knowledge base (KB), containing general (subject, predicate, object) statements about the world, in support of a downstream question-answering (QA) application. Despite recent advances in information extraction (IE) techniques, no suitable resource for our task exists; existing resources are either too noisy, too named-entity centric, or too incomplete, and typically have not been constructed with a clear scope or purpose. To address these limitations, we have created a domain-targeted, high-precision knowledge extraction pipeline, leveraging Open IE, crowdsourcing, and a novel canonical schema learning algorithm (called CASI), that produces high-precision knowledge targeted to a particular domain, in our case elementary science. To measure the KB’s coverage of the target domain’s knowledge (its “comprehensiveness” with respect to science), we measure recall with respect to an independent corpus of domain text, and show that our pipeline produces output with over 80% precision and 23% recall with respect to that target, a substantially higher coverage of tuple-expressible science knowledge than other comparable resources. We have made the KB publicly available.
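The precision/recall evaluation described above can be sketched as set overlap between extracted tuples and a reference set. The tuple sets below are invented for illustration; they are not drawn from the actual KB or the science corpus used in the paper.

```python
# Sketch: evaluating a KB of (subject, predicate, object) tuples against
# a reference set of tuples expressible in an independent domain corpus.
# Precision = fraction of extracted tuples that are correct;
# recall ("comprehensiveness") = fraction of reference tuples recovered.

def precision_recall(extracted, reference):
    """Compute precision and recall of extracted tuples vs. a reference set."""
    extracted, reference = set(extracted), set(reference)
    tp = len(extracted & reference)  # true positives: tuples in both sets
    precision = tp / len(extracted) if extracted else 0.0
    recall = tp / len(reference) if reference else 0.0
    return precision, recall

# Hypothetical example data.
kb = {("butterfly", "develop from", "caterpillar"),
      ("leaf", "perform", "photosynthesis"),
      ("moon", "orbit", "earth")}
corpus = {("butterfly", "develop from", "caterpillar"),
          ("leaf", "perform", "photosynthesis"),
          ("root", "absorb", "water"),
          ("moon", "orbit", "earth")}

p, r = precision_recall(kb, corpus)  # p = 1.0, r = 0.75
```

In practice the paper judges precision by human assessment of sampled tuples rather than exact set membership, so this exact-match sketch only conveys the shape of the metric, not the full evaluation protocol.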


Author(s): Jürgen Sturm ◽ Christian Plagemann ◽ Wolfram Burgard
