Progressive Teaching Improvement For Small Scale Learning: A Case Study in China

2020 ◽  
Vol 12 (8) ◽  
pp. 137
Author(s):  
Bo Jiang ◽  
Yanbai He ◽  
Rui Chen ◽  
Chuanyan Hao ◽  
Sijiang Liu ◽  
...  

Learning data feedback and analysis have been widely investigated across education, especially for large-scale remote learning scenarios such as Massive Open Online Courses (MOOCs). On-site teaching and learning remains the mainstream form for most teachers and students, yet learning data analysis for such small-scale scenarios is rarely studied. In this work, we first develop a novel user interface, a WeChat mini program inspired by the rating mechanisms of popular shopping websites, to progressively collect students’ feedback after each class of a course. The collected data are then pre-processed and visualized for teachers. We also propose a novel artificial neural network model that progressively predicts study performance. These predictions are reported to teachers to guide the next class and further teaching improvement. Experimental results show that the proposed neural network model outperforms other state-of-the-art machine learning methods, reaching a precision of 74.05% on a 3-class classification task at the end of the term.
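The abstract reports precision on a 3-class task. As a minimal sketch of how such a score might be computed (macro-averaged precision is one common choice; the paper's exact averaging scheme and labels are not specified, so the data below are purely illustrative):

```python
# Hedged sketch: macro-averaged precision for a 3-class classification task.
# The labels/predictions here are made up for illustration.
import numpy as np

def macro_precision(y_true, y_pred, n_classes=3):
    """Average per-class precision: TP / (TP + FP) for each class."""
    precisions = []
    for c in range(n_classes):
        predicted_c = (y_pred == c)
        if predicted_c.sum() == 0:
            continue  # class never predicted; skip to avoid division by zero
        tp = np.sum((y_pred == c) & (y_true == c))
        precisions.append(tp / predicted_c.sum())
    return float(np.mean(precisions))

y_true = np.array([0, 1, 2, 1, 0, 2, 2, 1])
y_pred = np.array([0, 1, 1, 1, 0, 2, 2, 0])
print(round(macro_precision(y_true, y_pred), 4))  # 7/9 on this toy data
```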

2021 ◽  
Vol 10 (9) ◽  
pp. 25394-25398
Author(s):  
Chitra Desai

Deep learning models have demonstrated improved efficacy in image classification since the ImageNet Large Scale Visual Recognition Challenge began in 2010. Image classification in computer vision has been further advanced by the advent of transfer learning. Training a model on a huge dataset demands substantial computational resources and adds considerable cost to learning. Transfer learning reduces that cost and helps avoid reinventing the wheel. Several pretrained models, such as VGG16, VGG19, ResNet50, InceptionV3, and EfficientNet, are widely used. This paper demonstrates image classification using the pretrained deep neural network model VGG16, which is trained on images from the ImageNet dataset. After obtaining the convolutional base model, a new deep neural network classifier based on a fully connected network is built on top of it for image classification. This classifier uses features extracted from the convolutional base model.
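The described setup — a frozen VGG16 convolutional base with a new fully connected classifier on top — can be sketched in Keras roughly as follows. The input size, head layer sizes, and class count (10) are assumptions, not the paper's exact configuration:

```python
# Hedged sketch of transfer learning with a VGG16 convolutional base.
# Layer sizes and n_classes are illustrative assumptions.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

def build_classifier(n_classes=10, pretrained=True):
    # VGG16 without its fully connected top layers; weights="imagenet"
    # downloads the pretrained ImageNet weights on first use.
    conv_base = VGG16(
        weights="imagenet" if pretrained else None,
        include_top=False,
        input_shape=(224, 224, 3),
    )
    conv_base.trainable = False  # freeze the base; only the new head trains

    # New fully connected classifier on top of the extracted features.
    model = models.Sequential([
        conv_base,
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_classifier(pretrained=False)  # pass True to load ImageNet weights
print(model.output_shape)  # (None, 10)
```

Freezing the base means only the small head is optimized, which is what keeps the computational cost of training low.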


2021 ◽  
Vol 11 (21) ◽  
pp. 10366
Author(s):  
César Córcoles ◽  
Germán Cobo ◽  
Ana-Elena Guerrero-Roldán

A variety of tools are available to collect, process and analyse learning data obtained from the clickstream generated by students watching learning resources in video format. There is also some literature on the uses of such data to better understand and improve the teaching-learning process. Most of it focuses on large-scale learning scenarios, such as MOOCs, where videos are watched hundreds or thousands of times. We have developed a solution to collect clickstream analytics data in smaller scenarios, much more common in primary, secondary and higher education, where videos are watched tens or hundreds of times, and to analyse whether the solution helps teachers improve the learning process. We have deployed it in a real scenario and collected real data. Furthermore, we have processed the data, presented them visually to teachers in those scenarios, and collected and analysed the teachers' perception of the data's usefulness. We conclude that teachers perceive the collected data as useful for improving the teaching and learning process.
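One common way to turn raw video clickstream events into something a teacher can read is a per-second view heatmap showing which parts of a video are rewatched or skipped. A minimal sketch of that idea follows; the event names and aggregation scheme are assumptions, not the authors' actual schema:

```python
# Hedged sketch: aggregating video player clickstream events into a
# per-second view-count heatmap. Event vocabulary is an assumption.
from collections import Counter

def watched_seconds(events):
    """Given (event, video_time) pairs, list the seconds watched.
    A 'play' opens a watched interval; 'pause'/'seek'/'end' closes it."""
    watched, start = [], None
    for event, t in events:
        if event == "play":
            start = t
        elif start is not None:  # pause, seek or end closes the interval
            watched.extend(range(int(start), int(t)))
            start = None
    return watched

# Two illustrative viewing sessions of the same video.
sessions = [
    [("play", 0), ("pause", 10), ("play", 5), ("end", 12)],  # rewatched 5-10
    [("play", 0), ("seek", 8), ("play", 30), ("end", 35)],   # skipped 8-30
]
heatmap = Counter(s for session in sessions for s in watched_seconds(session))
print(heatmap[6])  # 3: viewed twice in session 1 plus once in session 2
```

Even at tens of views, peaks in such a heatmap can point a teacher to segments students rewatch (possibly confusing) or skip (possibly redundant).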


Author(s):  
Jedediah M. Singer ◽  
Scott Novotney ◽  
Devin Strickland ◽  
Hugh K. Haddox ◽  
Nicholas Leiby ◽  
...  

Abstract: Engineered proteins generally must possess a stable structure in order to achieve their designed function. Stable designs, however, are astronomically rare within the space of all possible amino acid sequences. As a consequence, many designs must be tested computationally and experimentally in order to find stable ones, which is expensive in terms of time and resources. Here we report a neural network model that predicts protein stability based only on sequences of amino acids, and demonstrate its performance by evaluating the stability of almost 200,000 novel proteins. These include a wide range of sequence perturbations, providing a baseline for future work in the field. We also report a second neural network model that is able to generate novel stable proteins. Finally, we show that the predictive model can be used to substantially increase the stability of both expert-designed and model-generated proteins.
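A model that predicts stability "based only on sequences of amino acids" needs the sequence turned into numbers first. One plausible featurization (an assumption here, not necessarily the authors' encoding) is a one-hot matrix over the 20 canonical residues:

```python
# Hedged sketch: one-hot encoding an amino-acid sequence as network input.
# The encoding choice and max_len are illustrative assumptions.
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # 20 canonical residues
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def one_hot(seq, max_len=50):
    """Encode a protein sequence as a (max_len, 20) one-hot matrix,
    zero-padded on the right; residues past max_len are truncated."""
    x = np.zeros((max_len, 20), dtype=np.float32)
    for i, aa in enumerate(seq[:max_len]):
        x[i, AA_INDEX[aa]] = 1.0
    return x

x = one_hot("MKTAYIAKQR")  # a made-up 10-residue sequence
print(x.shape, int(x.sum()))  # (50, 20) 10
```

A stability predictor would then map such fixed-size matrices to a scalar stability score, which is what makes screening ~200,000 candidate sequences computationally cheap relative to experiments.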

