Deep Learning Enabled Nanophotonics

Author(s):  
Lujun Huang ◽  
Lei Xu ◽  
Andrey E. Miroshnichenko

Deep learning has become a vital approach to solving big-data-driven problems, and it has found tremendous applications in computer vision and natural language processing. More recently, deep learning has been widely used to optimise the performance of nanophotonic devices, for which conventional computational approaches may require substantial computation time and computational resources. In this chapter, we briefly review the recent progress of deep learning in nanophotonics. We give an overview of applications of the deep learning approach to optimising various nanophotonic devices, including multilayer structures, plasmonic/dielectric metasurfaces and plasmonic chiral metamaterials. Nanophotonics can also directly serve as an ideal platform for mimicking optical neural networks based on nonlinear optical media, which in turn may enable high-performance photonic chips that cannot be realised with conventional design methods.

2020 ◽  
Author(s):  
Shabir Moosa ◽  
Abbes Amira ◽  
Sabri Boughorbel

Abstract
Background: The data explosion caused by unprecedented advancements in the field of genomics is constantly challenging the conventional methods used in the interpretation of the human genome. The demand for robust algorithms in recent years has brought huge success to the field of Deep Learning (DL) in solving many difficult tasks in image, speech and natural language processing by automating the manual process of architecture design. This has been fuelled by the development of new DL architectures. Yet genomics poses unique challenges, as we expect DL to provide superhuman intelligence that easily interprets the human genome.
Methods: We adapted a differential architecture search method to the interpretation of biological sequences and applied it to the splice-site recognition task on DNA sequences, discovering new high-performance convolutional architectures in an automated manner. The discovered architecture was benchmarked on CPU and multiple GPU architectures in terms of computation time and classification performance.
Results: Our experimental evaluation demonstrated that the discovered architecture outperformed fixed baseline architectures for the classification of splice sites. Benchmarking of execution time and precision for the architecture search and evaluation process showed better performance on recently released GPU models.
Conclusions: We applied a differential architecture search mechanism to splice-site classification on raw DNA sequences and discovered new models that perform better than major baseline models. The results show the potential of using this automated architecture search mechanism for solving various problems in the genomics domain.
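The core idea behind such differentiable architecture search can be sketched as a continuous relaxation over candidate operations: each edge of the network computes a softmax-weighted mixture of operations, and the mixing weights are optimised by gradient descent alongside the ordinary model weights. A minimal NumPy sketch (the operations, shapes and weights here are illustrative stand-ins, not the paper's actual search space):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Candidate operations on a 1-D signal (stand-ins for conv/pooling variants).
def op_identity(x):
    return x

def op_avg3(x):
    p = np.pad(x, 1, mode="edge")       # 3-tap moving average
    return (p[:-2] + p[1:-1] + p[2:]) / 3.0

def op_zero(x):
    return np.zeros_like(x)

OPS = [op_identity, op_avg3, op_zero]

def mixed_op(x, alpha):
    """Continuous relaxation: the output is a softmax-weighted sum over all
    candidate ops, so the architecture weights alpha become differentiable
    parameters that ordinary gradient descent can optimise."""
    w = softmax(alpha)
    return sum(wi * op(x) for wi, op in zip(w, OPS))

x = np.array([1.0, 2.0, 3.0, 4.0])
alpha = np.array([2.0, 0.0, -2.0])      # learnable architecture parameters
y = mixed_op(x, alpha)

# After the search, discretise: keep only the highest-weighted operation.
best = OPS[int(np.argmax(alpha))].__name__
```

In the full method, the architecture parameters are typically updated on validation loss while the operations' own weights are updated on training loss; the sketch shows only the mixing step.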


2021 ◽  
Author(s):  
Shujun He ◽  
Baizhen Gao ◽  
Rushant Sabnis ◽  
Qing Sun

Abstract
Much work has been done to apply machine learning and deep learning to genomics tasks, but these applications usually require extensive domain knowledge, and the resulting models provide very limited interpretability. Here we present the Nucleic Transformer, a conceptually simple yet effective and interpretable model architecture that excels at a variety of DNA/RNA tasks. The Nucleic Transformer processes nucleic acid sequences with self-attention and convolutions, two deep learning techniques that have proved dominant in the fields of computer vision and natural language processing. We demonstrate that the Nucleic Transformer can be trained in both a supervised and an unsupervised fashion, without much domain knowledge, to achieve high performance with limited amounts of data on E. coli promoter classification, viral genome identification, and prediction of the degradation properties of COVID-19 mRNA vaccine candidates. Additionally, we showcase the extraction of promoter motifs from learned attention and show how direct visualization of self-attention maps assists informed decision-making with deep learning models.
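The self-attention mechanism at the heart of such a model can be illustrated on one-hot-encoded DNA. A minimal single-head sketch in NumPy (the weights are random here; in the actual model they are learned, and it is the attention map that gets visualised for motif extraction):

```python
import numpy as np

BASES = "ACGT"

def one_hot(seq):
    """Encode a DNA string as a (length, 4) one-hot matrix."""
    x = np.zeros((len(seq), 4))
    x[np.arange(len(seq)), [BASES.index(b) for b in seq]] = 1.0
    return x

def self_attention(x, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    a = np.exp(scores - scores.max(axis=-1, keepdims=True))
    a = a / a.sum(axis=-1, keepdims=True)   # rows sum to 1: attention map
    return a @ v, a

rng = np.random.default_rng(0)
d = 8
Wq, Wk, Wv = (rng.standard_normal((4, d)) * 0.5 for _ in range(3))

x = one_hot("TTGACAATTAATCATC")             # toy 16-base sequence
out, attn = self_attention(x, Wq, Wk, Wv)
# out: (16, 8) contextualised base features; attn: (16, 16) map showing
# which positions attend to which -- the quantity inspected for motifs.
```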


2020 ◽  
Vol 10 (10) ◽  
pp. 3634
Author(s):  
Huynh Thanh Thien ◽  
Pham-Viet Tuan ◽  
Insoo Koo

Recently, simultaneous wireless information and power transfer (SWIPT) systems, which can efficiently supply both throughput and energy, have emerged as a potential research area in fifth-generation (5G) systems. In this paper, we study a multi-user, single-input single-output (SISO) SWIPT system. First, we solve the transmit-power optimization problem, which yields the strategy that minimises transmit power while satisfying the signal-to-interference-plus-noise ratio (SINR) and harvested-energy requirements needed for the receiver circuits to operate, in SWIPT systems whose receivers are equipped with a power-splitting structure. Although optimization algorithms can achieve relatively high performance, they often entail a significant number of iterations, which raises computation-cost and latency issues for real-time applications. Therefore, we provide a deep learning-based approach as a promising solution to this challenging issue. The deep learning architectures used in this paper include one type of Deep Neural Network (DNN), the Feed-Forward Neural Network (FFNN), and three types of Recurrent Neural Network (RNN): the Layer Recurrent Network (LRN), the Nonlinear AutoRegressive network with eXogenous inputs (NARX), and Long Short-Term Memory (LSTM). Through simulations, we show that the deep learning approaches can approximate a complex optimization algorithm that optimizes transmit power in SWIPT systems with much less computation time.
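The appeal of the learning-based approach is that, once a network has been trained offline on solutions produced by the iterative optimiser, inference reduces to a handful of matrix multiplications. A minimal feed-forward sketch in NumPy (the layer sizes, input features and softplus output are illustrative assumptions, not the paper's actual configuration):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

class FFNN:
    """Tiny feed-forward network: trained offline to imitate the iterative
    power optimiser, then used for fast inference at run time."""
    def __init__(self, sizes, rng):
        self.W = [rng.standard_normal((a, b)) * np.sqrt(2.0 / a)
                  for a, b in zip(sizes[:-1], sizes[1:])]
        self.b = [np.zeros(b) for b in sizes[1:]]

    def forward(self, x):
        for W, b in zip(self.W[:-1], self.b[:-1]):
            x = relu(x @ W + b)
        z = x @ self.W[-1] + self.b[-1]
        return np.log1p(np.exp(z))  # softplus keeps predicted power >= 0

rng = np.random.default_rng(1)
# Hypothetical input: per-user channel gains, SINR targets, harvest targets.
net = FFNN([6, 32, 32, 2], rng)          # 2 users -> 2 transmit powers
features = rng.uniform(0.1, 1.0, size=6)
power = net.forward(features)            # one forward pass, no iterations
```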


Nanophotonics ◽  
2020 ◽  
Vol 9 (5) ◽  
pp. 1041-1057 ◽  
Author(s):  
Sunae So ◽  
Trevon Badloe ◽  
Jaebum Noh ◽  
Jorge Bravo-Abad ◽  
Junsuk Rho

Abstract
Deep learning has become the dominant approach in artificial intelligence for solving complex data-driven problems. Originally applied almost exclusively in computer-science areas such as image analysis and natural language processing, deep learning has rapidly entered a wide variety of scientific fields including physics, chemistry and materials science. Very recently, deep neural networks have been introduced to the field of nanophotonics as a powerful way of obtaining the nonlinear mapping between the topology and composition of arbitrary nanophotonic structures and their associated functional properties. In this paper, we discuss recent progress in the application of deep learning to the inverse design of nanophotonic devices, mainly focusing on the three existing learning paradigms: supervised, unsupervised, and reinforcement learning. Deep-learning-based forward modelling, i.e. how artificial intelligence learns to solve Maxwell's equations, is also discussed, along with an outlook on this rapidly evolving research area.


Symmetry ◽  
2020 ◽  
Vol 12 (12) ◽  
pp. 1939
Author(s):  
Jun Wei Chen ◽  
Xanno K. Sigalingging ◽  
Jenq-Shiou Leu ◽  
Jun-Ichi Takada

In recent years, Chinese has become one of the most popular languages globally, and the demand for automatic Chinese sentence correction has gradually increased. Such research can be applied to Chinese language learning to reduce the cost of learning and the feedback time, and to help writers check for wrong words. The traditional way to perform Chinese sentence correction is to check whether each word exists in a predefined dictionary; however, this kind of method cannot deal with semantic errors. As deep learning has become popular, artificial neural networks can be applied to understand a sentence's context and correct semantic errors. However, many issues remain to be discussed: the accuracy and the computation time required to correct a sentence are still lacking, so deep learning-based Chinese sentence correction systems may not yet be ready for large-scale commercial applications. Our goal is to obtain a model with better accuracy and computation time. By combining a recurrent neural network with Bidirectional Encoder Representations from Transformers (BERT), a recently popular model known for its high performance but slow inference speed, we introduce a hybrid model that can be applied to Chinese sentence correction, improving both the accuracy and the inference speed. Among the results, BERT-GRU obtained the highest BLEU score in all experiments. The inference speed of the original transformer-based model can be improved by 1131% with beam-search decoding in the 128-word experiment, and greedy decoding can also be improved by 452%. The longer the sequence, the larger the improvement.
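The greedy-versus-beam-search trade-off behind those speed figures can be sketched with a toy next-token model: greedy decoding calls the model once per step, while beam search multiplies that cost by roughly the beam width in exchange for (usually) higher-scoring sequences. A minimal sketch with a hypothetical deterministic `step_fn` standing in for the trained model:

```python
import numpy as np

def greedy_decode(step_fn, start, max_len):
    """One model call per step: fastest, but may miss better sequences."""
    seq, score = [start], 0.0
    for _ in range(max_len):
        probs = step_fn(seq)
        t = int(np.argmax(probs))
        seq.append(t)
        score += np.log(probs[t])
    return seq, score

def beam_decode(step_fn, start, max_len, beam=3):
    """Keep the `beam` best partial sequences; roughly beam-times the cost."""
    beams = [([start], 0.0)]
    for _ in range(max_len):
        cand = []
        for seq, score in beams:
            probs = step_fn(seq)
            for t, p in enumerate(probs):
                cand.append((seq + [t], score + np.log(p)))
        cand.sort(key=lambda item: item[1], reverse=True)
        beams = cand[:beam]
    return beams[0]

def step_fn(seq):
    """Toy deterministic next-token distribution over a 5-token vocabulary."""
    logits = np.cos(np.arange(5) * (sum(seq) + 1))
    e = np.exp(logits)
    return e / e.sum()

g_seq, g_score = greedy_decode(step_fn, 0, max_len=5)
b_seq, b_score = beam_decode(step_fn, 0, max_len=5, beam=3)
```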


Author(s):  
Rene Avalloni de Morais ◽  
Baidya Nath Saha

Deep learning algorithms have made dramatic progress in the areas of natural language processing and automatic human speech recognition. However, the accuracy of deep learning algorithms depends on the amount and quality of the data, and training deep models requires high-performance computing resources. Against this backdrop, this paper addresses an end-to-end speech recognition system in which we fine-tune the Mozilla DeepSpeech architecture using two different datasets: the LibriSpeech clean dataset and the Harvard speech dataset. We train Long Short-Term Memory (LSTM)-based deep Recurrent Neural Network (RNN) models on the Google Colab platform using its GPU resources. Extensive experimental results demonstrate that the Mozilla DeepSpeech model can be fine-tuned on different audio datasets to recognize speech successfully.
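The LSTM cell underlying such DeepSpeech-style acoustic models can be written out in a few lines. A minimal NumPy sketch of one recurrent step, unrolled over a sequence of audio-feature frames (the dimensions and random weights are illustrative, not DeepSpeech's actual configuration):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One step of a standard LSTM cell: all four gates are computed
    jointly as one affine map, then split."""
    z = x @ W + h @ U + b                 # shape (4 * hidden,)
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    c_new = f * c + i * np.tanh(g)        # cell state: gated memory update
    h_new = o * np.tanh(c_new)            # hidden state passed onward
    return h_new, c_new

rng = np.random.default_rng(0)
n_in, n_hid = 26, 16                      # e.g. 26 acoustic features per frame
W = rng.standard_normal((n_in, 4 * n_hid)) * 0.1
U = rng.standard_normal((n_hid, 4 * n_hid)) * 0.1
b = np.zeros(4 * n_hid)

h, c = np.zeros(n_hid), np.zeros(n_hid)
frames = rng.standard_normal((10, n_in))  # 10 toy audio frames
for x in frames:                          # unroll over time
    h, c = lstm_step(x, h, c, W, U, b)
```

The gating structure is what lets the model carry context across long audio sequences; frameworks such as TensorFlow implement exactly this cell, fused and batched.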


Author(s):  
Tony Hey ◽  
Keith Butler ◽  
Sam Jackson ◽  
Jeyarajan Thiyagalingam

This paper reviews some of the challenges posed by the huge growth of experimental data generated by the new generation of large-scale experiments at UK national facilities at the Rutherford Appleton Laboratory (RAL) site at Harwell near Oxford. Such ‘Big Scientific Data’ comes from the Diamond Light Source and Electron Microscopy Facilities, the ISIS Neutron and Muon Facility and the UK's Central Laser Facility. Scientists are now increasingly required to use advanced machine learning and other AI technologies both to automate parts of the data pipeline and to help make new scientific discoveries in the analysis of their data. For commercially important applications, such as object recognition, natural language processing and automatic translation, deep learning has made dramatic breakthroughs. Google's DeepMind has used deep learning to develop its AlphaFold tool for protein-folding prediction and, remarkably, has achieved some spectacular results for this specific scientific problem. Can deep learning be similarly transformative for other scientific problems? After a brief review of some initial applications of machine learning at RAL, we focus on challenges and opportunities for AI in advancing materials science. Finally, we discuss the importance of developing some realistic machine learning benchmarks using Big Scientific Data coming from several different scientific domains. We conclude with some initial examples of our ‘scientific machine learning’ benchmark suite and of the research challenges these benchmarks will enable. This article is part of a discussion meeting issue ‘Numerical algorithms for high-performance computational science’.


Author(s):  
Sumit Kaur

Abstract: Deep learning is an emerging research area in the machine learning and pattern recognition field, presented with the goal of moving machine learning closer to one of its original objectives: artificial intelligence. It tries to mimic the human brain, which is capable of processing and learning from complex input data and of solving many kinds of complicated tasks well. Deep learning (DL) is based on a set of supervised and unsupervised algorithms that attempt to model higher-level abstractions in data and learn hierarchical representations for classification. In recent years, it has attracted much attention due to its state-of-the-art performance in diverse areas such as object perception, speech recognition, computer vision, collaborative filtering and natural language processing. This paper presents a survey of different deep learning techniques for remote sensing image classification.


2020 ◽  
Vol 2 (3) ◽  
pp. 1007-1023 ◽  
Author(s):  
Ravi S. Hegde

We review recent progress in the application of Deep Learning (DL) techniques for photonic nanostructure design and provide a perspective on current limitations and fruitful directions for further development.

