Covariate-Profile Similarity Weighting and Bagging Studies with the Study Strap: Multi-Study Learning for Human Neurochemical Sensing

2019
Author(s):
Gabriel Loewinger
Prasad Patil
Kenneth Kishida
Giovanni Parmigiani

Prediction settings with multiple studies have become increasingly common. Ensembling models trained on individual studies has been shown to improve replicability in new studies. Motivated by a groundbreaking new technology in human neuroscience, we introduce two generalizations of multi-study ensemble predictions. First, while existing methods weight ensemble elements by cross-study prediction performance, we extend weighting schemes to also incorporate covariate similarity between the training data and the target validation study. Second, we introduce a hierarchical resampling scheme to generate pseudo-study replicates (“study straps”) and ensemble classifiers trained on these rather than on the original studies themselves. We demonstrate analytically that existing methods are special cases. Through a tuning parameter, our approach forms a continuum between merging all training data and training with existing multi-study ensembles. Leveraging this continuum helps accommodate different levels of between-study heterogeneity. Our methods are motivated by the application of voltammetry in humans. This technique records electrical brain measurements and converts the signals into neurotransmitter concentration estimates using a prediction model. Using this model in practice presents a cross-study challenge, for which we show marked improvements after applying our methods. We verify our methods in simulations and provide the studyStrap R package.
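The two generalizations described above can be illustrated with a minimal NumPy sketch. This is a toy under stated assumptions, not the authors' studyStrap implementation: the covariate "profile" of a study is taken here to be simply its vector of feature means, and the similarity kernel is assumed Gaussian with a single bandwidth tuning parameter.

```python
import numpy as np

def covariate_similarity_weights(train_studies, target_X, bandwidth=1.0):
    """Weight each training study by how close its covariate profile
    (here simply the vector of feature means) is to the target study's."""
    target_profile = target_X.mean(axis=0)
    dists = np.array([np.linalg.norm(X.mean(axis=0) - target_profile)
                      for X in train_studies])
    w = np.exp(-(dists ** 2) / (2 * bandwidth ** 2))  # Gaussian kernel
    return w / w.sum()

def study_strap_sample(train_studies, rng=None):
    """One pseudo-study replicate ("study strap"): draw studies with
    replacement, then bootstrap rows within each drawn study
    (a hierarchical resampling scheme)."""
    rng = np.random.default_rng(rng)
    drawn = rng.integers(0, len(train_studies), size=len(train_studies))
    rows = [train_studies[i][rng.integers(0, len(train_studies[i]),
                                          size=len(train_studies[i]))]
            for i in drawn]
    return np.vstack(rows)

def ensemble_predict(models, weights, target_X):
    """Similarity-weighted average of the ensemble members' predictions."""
    preds = np.stack([m.predict(target_X) for m in models])
    return weights @ preds
```

In this sketch, a training study whose covariate means sit close to the target study's receives a larger ensemble weight, and classifiers would be fit to `study_strap_sample` outputs rather than to the original studies.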

2019
Vol 28 (4)
pp. 993-1005
Author(s):
Gitte Keidser
Nicole Matthews
Elizabeth Convery

Purpose The aim of this study was to examine how hearing aid candidates perceive user-driven and app-controlled hearing aids and the effect these concepts have on traditional hearing health care delivery. Method Eleven adults (3 women, 8 men), recruited among 60 participants who had completed a research study evaluating an app-controlled, self-fitting hearing aid for 12 weeks, participated in a semistructured interview. Participants were over 55 years of age and had varied experience with hearing aids and smartphones. A template analysis was applied to the data. Results Five themes emerged from the interviews: (a) prerequisites to the successful implementation of user-driven and app-controlled technologies, (b) benefits and advantages of user-driven and app-controlled technologies, (c) barriers to the acceptance and use of user-driven and app-controlled technologies, (d) beliefs that age is a significant factor in how well people will adopt new technology, and (e) consequences that flow from the adoption of user-driven and app-controlled technologies. Specifically, suggested benefits of the technology included fostering empowerment and providing cheaper and more discreet options, while challenges included a lack of technological self-efficacy among older adults. Training and support were emphasized as necessary for successful adaptation and were suggested to be a focus of audiologic services in the future. Conclusion User perceptions of user-driven and app-controlled hearing technologies challenge the audiologic profession to provide adequate support and training for use of the technology, and manufacturers to make the technology more accessible to older people.


2019
Vol 12 (2)
pp. 120-127
Author(s):
Wael Farag

Background: In this paper, a Convolutional Neural Network (CNN) that learns safe driving behavior and smooth steering manoeuvring is proposed as an enabler of autonomous driving technologies. The training data are collected from a front-facing camera, together with the steering commands issued by an experienced driver driving in traffic on urban roads. Methods: These data are then used to train the proposed CNN to perform what is called “Behavioral Cloning”. The proposed behavioral cloning CNN is named “BCNet”, and its deep seventeen-layer architecture was selected after extensive trials. BCNet is trained using the Adam optimization algorithm, a variant of the Stochastic Gradient Descent (SGD) technique. Results: The paper describes the development and training process in detail and presents the image processing pipeline used in the development. Conclusion: Extensive simulations show that the proposed approach successfully clones the driving behavior embedded in the training data set.
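The training setup the abstract describes, mapping camera frames to steering commands and optimizing with Adam, can be sketched in PyTorch. The seventeen-layer BCNet architecture is not specified in the abstract, so the small network below is a hypothetical stand-in with the same input/output contract, and the frame size and learning rate are assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical behavioral-cloning CNN: maps a front-camera frame to a
# single steering command. This is NOT the paper's 17-layer BCNet,
# only a minimal network with the same contract.
class SteeringCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(48, 1))

    def forward(self, x):
        return self.head(self.features(x))

model = SteeringCNN()
# Adam, as named in the abstract, as the SGD variant
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

frames = torch.randn(8, 3, 66, 200)   # a batch of (dummy) camera frames
steering = torch.randn(8, 1)          # recorded steering commands
loss = nn.functional.mse_loss(model(frames), steering)
optimizer.zero_grad(); loss.backward(); optimizer.step()
```

Regressing the steering angle with a mean-squared-error loss against the expert driver's commands is the standard behavioral-cloning formulation.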


2021
Vol 139 (1)
pp. 32-58
Author(s):
Orietta Da Rold

Abstract In this essay, I offer a brief history of manuscript cataloguing and some observations on the innovations this practice has introduced, especially in digital form. This history reveals that as the cataloguing of medieval manuscripts developed over time, so did the research needs it served. What were often considered traditional cataloguing practices had to be mediated to accommodate new scholarly advances, posing interesting questions, for example, about what new technologies can bring to this discussion. In the digital age, in particular, how do digital catalogues interact with their analogue counterparts? What skills and training are required of scholars interacting with this new technology? To this end, I will consider the importance of the digital environment in enabling a more flexible approach to cataloguing. I will also discuss new insights into digital projects, especially the experience accrued by The Production and Use of English Manuscripts 1060 to 1220 project, and then propose that in the future cataloguing should be adaptable and shareable, and make full use of the different approaches to manuscripts generated by collaboration between scholars and librarians or the work of postgraduate students and early career researchers.


Data Science
2021
pp. 1-21
Author(s):
Caspar J. Van Lissa
Andreas M. Brandmaier
Loek Brinkman
Anna-Lena Lamprecht
Aaron Peikert
...

Adopting open science principles can be challenging, requiring conceptual education and training in the use of new tools. This paper introduces the Workflow for Open Reproducible Code in Science (WORCS): a step-by-step procedure that researchers can follow to make a research project open and reproducible. This workflow intends to lower the threshold for adoption of open science principles. It is based on established best practices, and can be used either in parallel to, or in the absence of, top-down requirements by journals, institutions, and funding bodies. To facilitate widespread adoption, the WORCS principles have been implemented in the R package worcs, which offers an RStudio project template and utility functions for specific workflow steps. This paper introduces the conceptual workflow, discusses how it meets different standards for open science, and addresses the functionality provided by the R implementation, worcs. This paper is primarily targeted towards scholars conducting research projects in R whose research involves academic prose, analysis code, and tabular data. However, the workflow is flexible enough to accommodate other scenarios, and offers a starting point for customized solutions. The source code for the R package and manuscript, and a list of examples of WORCS projects, are available at https://github.com/cjvanlissa/worcs.


2021
Vol 7 (3)
pp. 59
Author(s):
Yohanna Rodriguez-Ortega
Dora M. Ballesteros
Diego Renza

With the exponential growth of high-quality fake images in social networks and media, it is necessary to develop recognition algorithms for this type of content. One of the most common types of image and video editing consists of duplicating areas of the image, known as the copy-move technique. Traditional image processing approaches manually look for patterns related to the duplicated content, limiting their use in mass data classification. In contrast, approaches based on deep learning have shown better performance and promising results, but they present generalization problems, with a high dependence on training data and the need for appropriate selection of hyperparameters. To overcome this, we propose two deep learning approaches: a model with a custom architecture and a model based on transfer learning. In each case, the impact of the depth of the network is analyzed in terms of precision (P), recall (R) and F1 score. Additionally, the problem of generalization is addressed with images from eight different open access datasets. Finally, the models are compared in terms of evaluation metrics, and training and inference times. The transfer-learning model based on VGG-16 achieves metrics about 10% higher than the custom-architecture model; however, it requires approximately twice as much inference time as the latter.


1988
Vol 32 (13)
pp. 760-764
Author(s):
Robert F. Randolph

Leaders of task-oriented production groups play an important role in their group's functioning and performance. That role also evolves as groups mature and learn to work together more smoothly. The present study uses a functional analysis of the evolving role of supervisors of underground coal mining crews to evaluate the impact of supervisors' characteristics and behaviors on their crews' efficiency and safety, and makes recommendations for improving supervisory selection and training. Data were gathered from a sample of 138 supervisors at 13 underground coal mines. Detailed structured observations of the supervisors indicated that most of their time was spent attending to hardware and paperwork, while comparatively little time was spent on person-to-person “leadership”. The findings indicate that while group needs changed over time, the supervisors' behaviors typically did not keep pace and probably restricted group performance.


Author(s):
Yukiko Inoue
Suzanne Bell

Bill Gates stated in a speech, “In all areas of the curriculum, teachers must teach an information-based inquiry process to meet the demands of the Information Age. This is the challenge for the world’s most important profession. Meeting this challenge will be impossible unless educators are willing to join the revolution and embrace the new technology tools available.” … Every educator looks at the integration of technology, and its challenges, from a different perspective. Technology coordinators view the problems of insufficient hardware, software, and training as major obstacles. Teachers consider the lack of time to develop technology-based lessons a concern. Administrators identify teachers’ lack of experience using technology in instruction as yet another challenge. Teachers and administrators, however, can and are beginning to overcome these barriers with effective leadership, proper training, planning, and a commitment to enhancing teaching and learning using technologies. (Shelly, Cashman, Gunter, & Gunter, 2004, pp. 6.10-6.11)

