Web-based real-time multiview 3D display system

2003 ◽  
Author(s):  
Young-Gyoo Park ◽  
Kyung-Hoon Bae ◽  
Sang-Tae Lee ◽  
Eun-Soo Kim

2003 ◽  
Author(s):  
Hoonjong Kang ◽  
Chung-Hyun Ahn ◽  
Chieteuk Ahn ◽  
Seung-Hyun Lee

2013 ◽  
Vol 52 (34) ◽  
pp. 8411 ◽  
Author(s):  
Do-Hyeong Kim ◽  
Munkh-Uchral Erdenebat ◽  
Ki-Chul Kwon ◽  
Ji-Seong Jeong ◽  
Jae-Won Lee ◽  
...  

2018 ◽  
Vol 35 (3) ◽  
pp. 303-321
Author(s):  
Ran Liu ◽  
Mingming Liu ◽  
Yanzhen Zhang ◽  
Dehao Li ◽  
Yangting Zheng

Author(s):  
Ying Yuan ◽  
Xiaorui Wang ◽  
Yang Yang ◽  
Hang Yuan ◽  
Chao Zhang ◽  
...  

Abstract Full-chain system performance characterization is very important for the optimal design of an integral imaging three-dimensional (3D) display system. In this paper, the acquisition and display of a 3D scene are treated as a complete light-field information transmission process. A full-chain performance characterization model of an integral imaging 3D display system is established, which uses the 3D voxel, the image depth, and the field of view of the reconstructed images as the 3D display quality evaluation indicators. Unlike most previous work, which relies on the ideal integral imaging model, the proposed full-chain characterization model accounts for the diffraction effects and optical aberrations of the microlens array, the sampling effect of the detector, 3D image data scaling, and the human visual system, and can therefore accurately describe the actual transmission and convergence characteristics of the 3D light field. The relationships between the key parameters of an integral imaging 3D display system and the 3D display quality evaluation indicators are analyzed and discussed through simulation experiments. The results will be helpful for the optimal design of high-quality integral imaging 3D display systems.
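The evaluation indicators named in the abstract (voxel size, image depth, field of view) can be related to microlens-array parameters through standard integral-imaging optics. The sketch below is an illustration only, not the paper's model: it uses textbook relations (the Airy diffraction-spot diameter and the elemental-lens viewing angle) with illustrative parameter values that are assumptions, not values from the paper.

```python
from math import atan, degrees

def diffraction_spot_diameter(wavelength_m: float, focal_length_m: float,
                              lens_pitch_m: float) -> float:
    """Airy-disk diameter at the lenslet focal plane: 2.44 * lambda * f / D,
    taking the lens pitch as the aperture diameter D."""
    return 2.44 * wavelength_m * focal_length_m / lens_pitch_m

def lateral_voxel_size(lens_pitch_m: float, spot_m: float) -> float:
    """The lateral voxel size is limited by whichever is larger: the
    lens-pitch sampling interval or the diffraction spot."""
    return max(lens_pitch_m, spot_m)

def viewing_angle_deg(lens_pitch_m: float, gap_m: float) -> float:
    """Standard integral-imaging viewing angle: 2 * arctan(p / (2g)),
    where g is the gap between the display panel and the lens array."""
    return degrees(2.0 * atan(lens_pitch_m / (2.0 * gap_m)))

# Illustrative numbers (assumed): 550 nm light, 1 mm pitch, 3 mm gap/focal length.
spot = diffraction_spot_diameter(550e-9, 3e-3, 1e-3)   # ~4.0 micrometres
voxel = lateral_voxel_size(1e-3, spot)                 # pitch-limited: 1 mm
fov = viewing_angle_deg(1e-3, 3e-3)                    # ~18.9 degrees
```

With these example numbers the lateral voxel size is pitch-limited rather than diffraction-limited, which is the usual situation the full-chain model must capture when trading resolution against depth and field of view.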


Sensors ◽  
2021 ◽  
Vol 21 (12) ◽  
pp. 4045
Author(s):  
Alessandro Sassu ◽  
Jose Francisco Saenz-Cogollo ◽  
Maurizio Agelli

Edge computing is the best approach for meeting the exponential demand and the real-time requirements of many video analytics applications. Since most recent advances in extracting information from images and video rely on computation-heavy deep learning algorithms, there is a growing need for solutions that allow new models to be deployed and used on scalable and flexible edge architectures. In this work, we present Deep-Framework, a novel open-source framework for developing edge-oriented real-time video analytics applications based on deep learning. Deep-Framework has a scalable multi-stream architecture based on Docker that abstracts away from the user the complexity of cluster configuration, service orchestration, and GPU resource allocation. It provides Python interfaces for integrating deep learning models developed with the most popular frameworks, as well as high-level APIs based on standard HTTP and WebRTC interfaces for consuming the extracted video data on clients running in browsers or on any other web-based platform.
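The abstract says extracted video data is consumed over standard HTTP/WebRTC interfaces, but does not specify the message format. As a minimal sketch, assuming a hypothetical per-frame JSON payload (the field names `stream_id`, `frame`, `detections`, `label`, and `confidence` are illustrative assumptions, not Deep-Framework's actual schema), a browser-side or script-side consumer might filter analytics results like this:

```python
import json

def parse_detections(payload: str, min_confidence: float = 0.5) -> list:
    """Parse one hypothetical per-frame analytics message and keep only
    detections at or above the confidence threshold."""
    msg = json.loads(payload)
    return [d for d in msg.get("detections", [])
            if d.get("confidence", 0.0) >= min_confidence]

# Example message (assumed shape, for illustration only).
sample = ('{"stream_id": "cam-0", "frame": 42, "detections": '
          '[{"label": "person", "confidence": 0.91}, '
          '{"label": "dog", "confidence": 0.30}]}')
kept = parse_detections(sample)  # keeps only the high-confidence "person" hit
```

In practice such a payload would arrive from an HTTP polling endpoint or a WebRTC data channel; the filtering step is the same either way.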

