Inverse Design for Silicon Photonics: From Iterative Optimization Algorithms to Deep Neural Networks

2021 ◽  
Vol 11 (9) ◽  
pp. 3822
Author(s):  
Simei Mao ◽  
Lirong Cheng ◽  
Caiyue Zhao ◽  
Faisal Nadeem Khan ◽  
Qian Li ◽  
...  

Silicon photonics is a low-cost and versatile platform for a wide range of applications. When designing silicon photonic devices, the light-matter interaction within their complex subwavelength geometries is difficult to treat analytically, so numerical simulations are the dominant approach. To make the design process more time-efficient and to push device performance toward its physical limits, various methods have been proposed over the past few years to shape the geometry of the silicon platform for specific applications. In this review paper, we summarize design methodologies for silicon photonics, covering both iterative optimization algorithms and deep neural networks. We discuss iterative optimization methods across scenarios of increasing degrees of freedom: empirical structures, QR-code-like structures, and irregular structures. We also review inverse design approaches assisted by deep neural networks, which generate multiple devices of similar structure much faster than iterative optimization and are therefore well suited when large numbers of optical components are needed. Finally, we discuss applications of inverse design methodology to optical neural networks. This review aims to help readers choose the most suitable design methodology for a given scenario.
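As a concrete illustration of the QR-code-like design scenario mentioned in this abstract, the sketch below implements direct binary search (DBS), a standard iterative optimization loop for pixelated silicon structures. The figure-of-merit function evaluate_fom is a hypothetical placeholder standing in for a full electromagnetic solver (e.g., an FDTD run); only the flip-and-keep-if-improved loop reflects the general technique.

import numpy as np

def evaluate_fom(pattern):
    """Hypothetical stand-in for a full electromagnetic simulation
    that returns a scalar figure of merit (higher is better)."""
    target = np.indices(pattern.shape).sum(axis=0) % 2  # dummy target pattern
    return -np.abs(pattern - target).mean()

def direct_binary_search(shape=(20, 20), max_sweeps=10, seed=0):
    rng = np.random.default_rng(seed)
    pattern = rng.integers(0, 2, size=shape)  # 0 = unetched, 1 = etched pixel
    best_fom = evaluate_fom(pattern)
    for _ in range(max_sweeps):
        improved = False
        # Visit pixels in random order; flip each, keep only improvements.
        for idx in rng.permutation(pattern.size):
            i, j = np.unravel_index(idx, shape)
            pattern[i, j] ^= 1                # toggle one pixel
            fom = evaluate_fom(pattern)
            if fom > best_fom:
                best_fom, improved = fom, True
            else:
                pattern[i, j] ^= 1            # revert the flip
        if not improved:                      # converged: full sweep, no gain
            break
    return pattern, best_fom

Each full sweep costs one simulation per pixel, which is exactly the time burden that motivates the neural-network-assisted approaches reviewed later in the paper.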

2021 ◽  
pp. 1-1
Author(s):  
Keisuke Kojima ◽  
Mohammad H. Tahersima ◽  
Toshiaki Koike-Akino ◽  
Devesh K. Jha ◽  
Yingheng Tang ◽  
...  

2021 ◽  
Author(s):  
Rajendra P. ◽  
Hanumantha Ravi. P. V. N. ◽  
Gunavardhana Naidu T.

Author(s):  
Derya Soydaner

In recent years, we have witnessed the rise of deep learning, and deep neural networks have proved successful in many areas. However, optimizing these networks has become more difficult as they grow deeper and datasets grow larger, and more advanced optimization algorithms have accordingly been proposed. In this study, optimization algorithms widely used for deep learning are examined in detail. To this end, these algorithms, known as adaptive gradient methods, are implemented for both supervised and unsupervised tasks. Their behavior during training and their results on four image datasets, namely MNIST, CIFAR-10, Kaggle Flowers, and Labeled Faces in the Wild, are compared against basic optimization algorithms, pointing out the differences.
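As a reference point for the adaptive gradient methods this study examines, here is a minimal NumPy sketch of Adam, one of the most widely used of these algorithms. The toy quadratic objective is purely illustrative; the update rule itself is the standard published one.

import numpy as np

def adam_step(params, grads, state, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: per-parameter step sizes adapt to running
    estimates of the gradient's first moment (m) and second moment (v)."""
    m, v, t = state
    t += 1
    m = beta1 * m + (1 - beta1) * grads
    v = beta2 * v + (1 - beta2) * grads**2
    m_hat = m / (1 - beta1**t)   # bias correction for the first moment
    v_hat = v / (1 - beta2**t)   # bias correction for the second moment
    params = params - lr * m_hat / (np.sqrt(v_hat) + eps)
    return params, (m, v, t)

# Toy usage: minimize f(w) = ||w||^2, whose gradient is 2w.
w = np.array([3.0, -2.0])
state = (np.zeros_like(w), np.zeros_like(w), 0)
for _ in range(500):
    w, state = adam_step(w, 2 * w, state, lr=0.05)
print(w)  # approaches [0, 0]

The bias-correction terms are what distinguish Adam from plain RMSProp with momentum: without them, the early steps would be biased toward zero because m and v start at zero.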


2021 ◽  
Vol 20 (5s) ◽  
pp. 1-25
Author(s):  
Elbruz Ozen ◽  
Alex Orailoglu

As deep learning algorithms are widely adopted, an increasing number of them are deployed in embedded application domains with strict reliability constraints. The expenditure of significant resources to satisfy the performance requirements of deep neural network accelerators has thinned out the margins for delivering safety in embedded deep learning applications, precluding the adoption of conventional fault-tolerance methods. The potential of exploiting the inherent resilience characteristics of deep neural networks nevertheless remains unexplored, offering a promising low-cost path toward safety in embedded deep learning applications. This work demonstrates such exploitation by combining a reduction of the vulnerability surface through the proper design of quantization schemes with the shaping of the parameter distributions at each layer through appropriate training methods, thus delivering deep neural networks of high resilience purely through algorithmic modifications. Unequaled error-resilience characteristics can thus be injected into safety-critical deep learning applications, tolerating substantial bit error rates at absolutely zero hardware, energy, and performance cost while further improving the error-free model accuracy.
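The paper's actual contribution lies in the quantization and training design; as background, the sketch below only shows how such bit-error resilience is typically measured, by injecting random bit flips into quantized weight memory. The 8-bit symmetric quantization and the error rate are illustrative assumptions, not the paper's specific scheme.

import numpy as np

def quantize_int8(weights):
    """Uniform symmetric 8-bit quantization of a weight tensor."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -128, 127).astype(np.int8)
    return q, scale

def inject_bit_flips(q_weights, bit_error_rate, rng):
    """Flip each stored bit independently with the given probability,
    mimicking random hardware faults in the weight memory."""
    bits = np.unpackbits(q_weights.view(np.uint8))
    flips = rng.random(bits.size) < bit_error_rate
    corrupted = np.packbits(bits ^ flips).view(np.int8)
    return corrupted.reshape(q_weights.shape)

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)
q, scale = quantize_int8(w)
q_faulty = inject_bit_flips(q, bit_error_rate=1e-3, rng=rng)
# Mean degradation of the dequantized weights under faults:
print(np.abs(q_faulty * scale - w).mean())

Running such an injection campaign over a trained model's weights, and measuring the resulting accuracy drop, is the standard way to quantify the kind of resilience the paper claims to deliver.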


2020 ◽  
Vol 117 (44) ◽  
pp. 27162-27170
Author(s):  
Adityanarayanan Radhakrishnan ◽  
Mikhail Belkin ◽  
Caroline Uhler

Identifying computational mechanisms for memorization and retrieval of data is a long-standing problem at the intersection of machine learning and neuroscience. Our main finding is that standard overparameterized deep neural networks trained using standard optimization methods implement such a mechanism for real-valued data. We provide empirical evidence that 1) overparameterized autoencoders store training samples as attractors and thus iterating the learned map leads to sample recovery, and that 2) the same mechanism allows for encoding sequences of examples and serves as an even more efficient mechanism for memory than autoencoding. Theoretically, we prove that when trained on a single example, autoencoders store the example as an attractor. Lastly, by treating a sequence encoder as a composition of maps, we prove that sequence encoding provides a more efficient mechanism for memory than autoencoding.
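A minimal sketch of the attractor mechanism described here, under illustrative assumptions (random 32-dimensional "samples" in place of the paper's real images, and a small fully connected autoencoder): train an overparameterized network to near-zero reconstruction loss, then iterate the learned map from a perturbed input. Per the paper's finding, the iterates should fall back onto a stored training sample, though convergence in this toy setup is not guaranteed.

import torch
import torch.nn as nn

torch.manual_seed(0)

# A few "training samples" to memorize (random vectors stand in for images).
X = torch.randn(3, 32)

# Overparameterized autoencoder: far more parameters than samples.
model = nn.Sequential(
    nn.Linear(32, 512), nn.Tanh(),
    nn.Linear(512, 512), nn.Tanh(),
    nn.Linear(512, 32),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5000):                      # train to near-zero loss
    opt.zero_grad()
    loss = ((model(X) - X) ** 2).mean()
    loss.backward()
    opt.step()

# Iterate the learned map from a perturbed input; if the training sample
# is an attractor, the iterates converge back onto it.
with torch.no_grad():
    z = X[0] + 0.3 * torch.randn(32)
    for _ in range(100):
        z = model(z)
    print(torch.norm(z - X[0]))            # small => sample 0 recovered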


2021 ◽  
Vol 35 (11) ◽  
pp. 1336-1337
Author(s):  
Clayton Fowler ◽  
Sensong An ◽  
Bowen Zheng ◽  
Hong Tang ◽  
Hang Li ◽  
...  

This paper presents a deep learning approach for the inverse design of metal-insulator-metal metasurfaces for hyperspectral imaging applications. Deep neural networks are able to account for the complex interactions between electromagnetic waves and metastructures, efficiently producing design solutions that would be difficult to obtain by other methods. Since electromagnetic spectra are sequential in nature, recurrent neural networks are especially well suited to relating such spectra to structural parameters.
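To illustrate why a recurrent network fits this task, here is a hedged sketch in which a GRU reads a spectrum as a sequence of samples and regresses a set of structural parameters. The class name, the parameter count, and the dummy data shapes are all assumptions for illustration, not the paper's architecture; in practice the training pairs would come from electromagnetic simulations of candidate metasurfaces.

import torch
import torch.nn as nn

class SpectrumToStructure(nn.Module):
    """GRU that consumes a spectrum point by point and regresses
    structural parameters; a hypothetical stand-in for the paper's
    inverse-design network."""
    def __init__(self, n_params=4, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_params)

    def forward(self, spectra):               # spectra: (batch, n_points, 1)
        _, h = self.rnn(spectra)              # final hidden state summarizes
        return self.head(h[-1])               # (batch, n_params)

model = SpectrumToStructure()
spectra = torch.rand(8, 200, 1)               # 8 dummy spectra, 200 samples each
params = model(spectra)                       # predicted geometry parameters
print(params.shape)                           # torch.Size([8, 4])

Because the GRU processes the spectrum sequentially, it naturally captures correlations between neighboring wavelength samples, which is the property the abstract highlights.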


2021 ◽  
Author(s):  
Sangyun Oh ◽  
Hyeonuk Sim ◽  
Sugil Lee ◽  
Jongeun Lee

ACS Photonics ◽  
2018 ◽  
Vol 5 (4) ◽  
pp. 1365-1369 ◽  
Author(s):  
Dianjing Liu ◽  
Yixuan Tan ◽  
Erfan Khoram ◽  
Zongfu Yu
