Building Biofoundry India: Challenges and Path Forward

2021 ◽  
Author(s):  
Binay Panda ◽  
Pawan K Dhar

ABSTRACT A biofoundry is a place where biomanufacturing meets automation. The highly modular structure of a biofoundry helps accelerate the design-build-test-learn (DBTL) workflow to deliver products quickly and in a streamlined fashion. In this perspective, we describe our efforts to build Biofoundry India (BI) and where we see the facility adding substantial value in supporting research, innovation, and entrepreneurship. We describe three key areas of our focus: harnessing the potential of non-expressing parts of sequenced genomes, using deep learning in pathway reconstruction, and synthesizing enzymes and metabolites. Towards the end, we describe specific challenges in building such a facility in India and the path forward to mitigate some of them by working with other biofoundries worldwide.

Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 3068
Author(s):  
Soumaya Dghim ◽  
Carlos M. Travieso-González ◽  
Radim Burget

The use of image processing tools, machine learning, and deep learning approaches has become very useful and robust in recent years. This paper introduces the detection of Nosema disease, which is considered one of the most economically significant diseases today. This work shows a solution for recognizing and identifying Nosema cells among the other objects in a microscopic image. Two main strategies are examined. The first strategy uses image processing tools to extract the most valuable information and features from the dataset of microscopic images. Then, machine learning methods are applied, such as an artificial neural network (ANN) and a support vector machine (SVM), to detect and classify the Nosema disease cells. The second strategy explores deep learning and transfer learning. Several approaches were examined, including a convolutional neural network (CNN) classifier and several transfer learning methods (AlexNet, VGG-16 and VGG-19), which were fine-tuned and applied to the object sub-images in order to distinguish Nosema images from the other object images. The best accuracy, 96.25%, was reached by the pre-trained VGG-16 network.
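The transfer-learning strategy described here keeps the pre-trained convolutional base frozen and retrains only a small classifier head on the extracted features. A minimal numpy sketch of that final step, with random features standing in for the frozen VGG-16 outputs (all dimensions and data are hypothetical, not the paper's pipeline):

```python
import numpy as np

# Toy stand-in for transfer learning's last stage: the pretrained base is
# frozen, so each image reduces to a fixed feature vector, and only a
# softmax head is trained on top via gradient descent on cross-entropy.
rng = np.random.default_rng(0)
n, d, k = 200, 32, 2               # samples, feature dim, classes (hypothetical)
W_true = rng.normal(size=(d, k))
X = rng.normal(size=(n, d))        # "frozen" features from the base network
y = (X @ W_true).argmax(axis=1)    # synthetic labels

W = np.zeros((d, k))               # trainable head weights
for _ in range(300):
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)          # softmax probabilities
    grad = X.T @ (p - np.eye(k)[y]) / n        # cross-entropy gradient
    W -= 0.5 * grad

accuracy = ((X @ W).argmax(axis=1) == y).mean()
```

In the actual pipeline the feature vectors would come from the truncated pre-trained network, and fine-tuning may also unfreeze the top convolutional layers; only the frozen-base case is sketched here.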


2021 ◽  
Author(s):  
Matheus Xavier Sampaio ◽  
Regis Pires Magalhães ◽  
Ticiana Linhares Coelho da Silva ◽  
Lívia Almada Cruz ◽  
Davi Romero de Vasconcelos ◽  
...  

Automatic Speech Recognition (ASR) is an essential task for many applications such as automatic caption generation for videos, voice search, voice commands for smart homes, and chatbots. Given the increasing popularity of these applications and the advances in deep learning models for transcribing speech into text, this work evaluates the performance of commercial ASR solutions that use deep learning models: Facebook Wit.ai, Microsoft Azure Speech, and Google Cloud Speech-to-Text. The results demonstrate that the evaluated solutions differ only slightly; however, Microsoft Azure Speech outperformed the other analyzed APIs.
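ASR systems of this kind are conventionally scored by word error rate (WER): the word-level edit distance between the reference transcript and the hypothesis, divided by the reference length. A minimal sketch of the metric (an illustration, not the evaluation code used in this work):

```python
def wer(ref: str, hyp: str) -> float:
    """Word error rate: Levenshtein distance over words / reference length."""
    r, h = ref.split(), hyp.split()
    dp = list(range(len(h) + 1))          # dp[j] = distance(r[:i], h[:j])
    for i in range(1, len(r) + 1):
        prev_diag, dp[0] = dp[0], i
        for j in range(1, len(h) + 1):
            cur = dp[j]
            cost = 0 if r[i - 1] == h[j - 1] else 1
            dp[j] = min(dp[j] + 1,        # drop a reference word (deletion)
                        dp[j - 1] + 1,    # extra hypothesis word (insertion)
                        prev_diag + cost) # match or substitution
            prev_diag = cur
    return dp[len(h)] / max(len(r), 1)

ref = "the cat sat on the mat"
hyp = "the cat sit on mat"
# one substitution ("sat" -> "sit") and one deletion ("the"): WER = 2/6
```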


Author(s):  
S. Arokiaraj ◽  
Dr. N. Viswanathan

With the advent of the Internet of Things (IoT), human activity (HA) recognition has found growing application in health care, both in diagnosis and in clinical processes. These devices must be aware of human movements to provide better aid in clinical applications as well as in users' daily activities. With machine learning and deep learning algorithms, HA recognition systems have also improved significantly in recognition accuracy. However, most existing models still need improvement in terms of accuracy and computational overhead. In this research paper, we propose a BAT-optimized Long Short-Term Memory network (BAT-LSTM) for effective recognition of human activities using real-time IoT systems. The data are collected from invasively implanted IoT devices. The proposed BAT-LSTM is then deployed to extract temporal features, which are used to classify human activities. Nearly 10,0000 samples were collected and used to evaluate the proposed model. For validation of the proposed framework, accuracy, precision, recall, specificity and F1-score are chosen as metrics, and a comparison is made with other state-of-the-art deep learning models. The findings show that the proposed model outperforms the other learning models and is well suited to HA recognition.
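The LSTM component summarizes a window of sensor readings into a fixed-length temporal feature vector that the classifier consumes. A minimal numpy sketch of a single LSTM cell rolled over one window (the BAT hyperparameter optimization is not shown, and all dimensions are hypothetical):

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step. W, U, b pack the input, forget, cell and
    output gates, hence the 4 * hidden-size second dimension."""
    z = x @ W + h @ U + b
    i, f, g, o = np.split(z, 4)
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    c_new = sig(f) * c + sig(i) * np.tanh(g)   # gated memory update
    h_new = sig(o) * np.tanh(c_new)            # exposed hidden state
    return h_new, c_new

rng = np.random.default_rng(1)
n_in, n_hid, T = 6, 8, 10                      # sensor dims / window length
W = rng.normal(scale=0.1, size=(n_in, 4 * n_hid))
U = rng.normal(scale=0.1, size=(n_hid, 4 * n_hid))
b = np.zeros(4 * n_hid)

h = c = np.zeros(n_hid)
for t in range(T):                             # roll over one sensor window
    h, c = lstm_step(rng.normal(size=n_in), h, c, W, U, b)
# h is the temporal feature vector handed to the activity classifier
```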


Author(s):  
G. Touya ◽  
F. Brisebard ◽  
F. Quinton ◽  
A. Courtial

Abstract. Visually impaired people cannot use classical maps but can learn to use tactile relief maps. These tactile maps are crucial at school for learning geography and history alongside the other students. They are produced manually by professional transcriptors in a very long and costly process. A platform able to generate tactile maps from maps scanned from geography textbooks could be extremely useful to these transcriptors, to speed up their production. As a first step towards such a platform, this paper proposes a method to infer the scale and content of a map from its image. We used convolutional neural networks trained with a few hundred maps from French geography textbooks, and the results are promising both for inferring labels about the content of the map (e.g. "there are roads, cities and administrative boundaries") and for inferring the extent of the map (e.g. a map of France or of Europe).


2021 ◽  
Author(s):  
Jaydip Sen ◽  
Sidra Mehtab ◽  
Gourab Nath

Prediction of the future movement of stock prices has been the subject of much research. On one hand, proponents of the Efficient Market Hypothesis claim that stock prices cannot be predicted; on the other hand, there are propositions illustrating that, if appropriately modeled, stock prices can be predicted with a high level of accuracy. There is also a gamut of literature on technical analysis of stock prices, where the objective is to identify patterns in stock price movements and profit from them. In this work, we propose a hybrid approach for stock price prediction using five deep learning-based regression models. We select the NIFTY 50 index values of the National Stock Exchange (NSE) of India over the period December 29, 2014 to July 31, 2020. Based on the NIFTY data from December 29, 2014 to December 28, 2018, we build two regression models using <i>convolutional neural networks</i> (CNNs) and three regression models using <i>long short-term memory</i> (LSTM) networks for predicting the <i>open</i> values of the NIFTY 50 index records for the period December 31, 2018 to July 31, 2020. We adopted a multi-step prediction technique with <i>walk-forward validation</i>. The parameters of the five deep learning models are optimized using the grid-search technique so that the validation losses of the models stabilize with an increasing number of epochs and the training and validation accuracies converge. Extensive results are presented on various metrics for all the proposed regression models. The results indicate that while both the CNN- and LSTM-based regression models are very accurate in forecasting the NIFTY 50 <i>open</i> values, the CNN model that uses the previous week's data as input is the fastest in execution. On the other hand, the encoder-decoder convolutional LSTM model, which uses the previous two weeks' data as input, is found to be the most accurate in its forecasting results.
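Walk-forward validation, used here for multi-step prediction, always fits on the past and tests on the block of time steps immediately after it, then slides forward. A minimal sketch of the splitting logic (an illustration with hypothetical sizes, not the authors' code):

```python
def walk_forward_splits(n, train_size, horizon):
    """Yield (train_indices, test_indices) pairs that move forward in time:
    each model is fit on all data up to t and asked to predict the next
    `horizon` steps, so the test window never precedes the training data."""
    t = train_size
    while t + horizon <= n:
        yield list(range(0, t)), list(range(t, t + horizon))
        t += horizon

splits = list(walk_forward_splits(n=20, train_size=10, horizon=5))
# first split trains on steps 0..9 and predicts steps 10..14;
# the next trains on steps 0..14 and predicts steps 15..19
```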


2020 ◽  
Author(s):  
Saeed Nosratabadi ◽  
Amir Mosavi ◽  
Puhong Duan ◽  
Pedram Ghamisi ◽  
Filip Ferdinand ◽  
...  

Abstract This paper provides the state of the art of data science in economics. Advances in data science are investigated through a novel taxonomy of applications and methods, in three individual classes: deep learning models, ensemble models, and hybrid models. Application domains include the stock market, marketing, e-commerce, corporate banking, and cryptocurrency. The PRISMA method, a systematic literature review methodology, is used to ensure the quality of the survey. The findings reveal that the trend is toward hybrid models, as more than 51% of the reviewed articles applied a hybrid model. It is also found that, based on the RMSE accuracy metric, hybrid models had higher prediction accuracy than other algorithms. The trend is nevertheless expected to move toward the advancement of deep learning models.


Author(s):  
Hari Kishan Kondaveeti ◽  
Gonugunta Priyatham Brahma ◽  
Dandhibhotla Vijaya Sahithi

Deep learning (DL), a part of machine learning (ML), comprises a contemporary technique for processing images and analyzing big data with promising outcomes. Deep learning methods are successfully being used in various sectors to obtain better results. Agriculture is one of the sectors that could benefit from deep learning techniques, since current agricultural techniques cannot keep up with the rapid growth in population. This chapter discusses recent trends in the application of deep learning techniques in the agricultural sector and surveys the research efforts that employ them. The implemented models are also analyzed and compared with other existing models.


Author(s):  
Iaakov Exman

The unrelenting trend toward larger and larger software systems and datasets has made software comprehensibility an increasingly difficult problem. However, a tacit consensus that human understanding of software is essential for most software-related activities has stimulated software developers to embed comprehensibility in their systems' design. On the other hand, recent empirical successes of deep learning neural networks in several application areas seem to challenge that tacit consensus: is software comprehensibility a necessity, or just superfluous? This introductory paper, to the 2020 special issue on Theoretical Software Engineering, offers reasons justifying our standpoint on this controversy. The paper also points to specific techniques enabling human understanding of software systems relevant to this issue's papers.


2021 ◽  
Vol 7 (4) ◽  
pp. 65
Author(s):  
Daniel Silva ◽  
Armando Sousa ◽  
Valter Costa

Object recognition represents the ability of a system to identify objects, humans or animals in images. Within this domain, this work presents a comparative analysis among different classification methods aiming at Tactode tile recognition. The covered methods include: (i) machine learning with HOG and SVM; (ii) deep learning with CNNs such as VGG16, VGG19, ResNet152, MobileNetV2, SSD and YOLOv4; (iii) matching of handcrafted features with SIFT, SURF, BRISK and ORB; and (iv) template matching. A dataset was created to train learning-based methods (i and ii), and with respect to the other methods (iii and iv), a template dataset was used. To evaluate the performance of the recognition methods, two test datasets were built: tactode_small and tactode_big, which consisted of 288 and 12,000 images, holding 2784 and 96,000 regions of interest for classification, respectively. SSD and YOLOv4 were the worst methods for their domain, whereas ResNet152 and MobileNetV2 showed that they were strong recognition methods. SURF, ORB and BRISK demonstrated great recognition performance, while SIFT was the worst of this type of method. The methods based on template matching attained reasonable recognition results, falling behind most other methods. The top three methods of this study were: VGG16 with an accuracy of 99.96% and 99.95% for tactode_small and tactode_big, respectively; VGG19 with an accuracy of 99.96% and 99.68% for the same datasets; and HOG and SVM, which reached an accuracy of 99.93% for tactode_small and 99.86% for tactode_big, while at the same time presenting average execution times of 0.323 s and 0.232 s on the respective datasets, being the fastest method overall. This work demonstrated that VGG16 was the best choice for this case study, since it minimised the misclassifications for both test datasets.
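Method (iv), template matching, scores every placement of the template over the image and keeps the best-scoring position. A toy sketch using the sum of squared differences as the score (production pipelines typically use normalized cross-correlation, e.g. OpenCV's matchTemplate):

```python
def match_template(image, template):
    """Slide `template` over `image` and return the top-left (row, col)
    offset with the smallest sum of squared differences."""
    H, W = len(image), len(image[0])
    h, w = len(template), len(template[0])
    best, best_pos = float("inf"), None
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            ssd = sum((image[y + dy][x + dx] - template[dy][dx]) ** 2
                      for dy in range(h) for dx in range(w))
            if ssd < best:
                best, best_pos = ssd, (y, x)
    return best_pos

img = [[0, 0, 0, 0],
       [0, 9, 8, 0],
       [0, 7, 9, 0],
       [0, 0, 0, 0]]
tpl = [[9, 8],
       [7, 9]]
# the template occurs exactly at row 1, column 1
```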


2019 ◽  
Author(s):  
Cheng Yang ◽  
Man Zhou ◽  
Haoling Xie ◽  
Huaiqiu Zhu

Long non-coding RNAs (lncRNAs, length above 200 nt) exert crucial biological roles and have been implicated in cancers [1,2]. To characterize newly discovered transcripts, one major issue is to distinguish lncRNAs from mRNAs. Since experimental methods are time-consuming and costly, computational methods are preferred for large-scale lncRNA identification. In a recent study, Amin et al. [3] evaluated three deep-learning-based lncRNA identification tools (i.e., lncRNAnet [4], LncADeep [5], and lncFinder [6]) and concluded "The LncADeep PR (precision recall) curve is just above the no-skill model and LncADeep showed poor overall performance". This surprising conclusion is based on the authors' use of a non-default setting of LncADeep. In fact, LncADeep has two models, one for full-length transcripts and the other for transcripts including partial-length ones. Being aware of the difficulty of assembling full-length transcripts from RNA-seq data, LncADeep's default model is the one for transcripts including partial-length ones. However, according to the results posted on Amin et al.'s website, the authors used LncADeep's full-length model, while claiming to use the default setting, to identify lncRNAs from the GENCODE dataset, which is composed of full- and partial-length transcripts. Thus, in their evaluation, the performance of LncADeep was underestimated. In this correspondence, we have tested LncADeep's default setting (i.e., the model for transcripts including partial-length ones) on the datasets used by Amin et al. [3], and LncADeep achieved overall the best performance compared with the other tools' results reported by Amin et al.
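The disputed comparison hinges on precision-recall evaluation. For reference, precision and recall at a fixed decision threshold are computed as follows (a minimal sketch, not the benchmark code; labels use 1 for lncRNA):

```python
def precision_recall(labels, predictions):
    """Precision = TP/(TP+FP), recall = TP/(TP+FN); a PR curve traces
    these values as the classifier's decision threshold is swept."""
    tp = sum(1 for y, p in zip(labels, predictions) if y and p)
    fp = sum(1 for y, p in zip(labels, predictions) if not y and p)
    fn = sum(1 for y, p in zip(labels, predictions) if y and not p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

A "no-skill" model that predicts positives at random sits at a precision equal to the positive-class fraction, which is the baseline the quoted PR-curve claim refers to.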

