Object-based rate allocation with spatio-temporal trade-offs

2002 ◽  
Vol 38 (19) ◽  
pp. 1088 ◽  
Author(s):  
Jeong-Woo Lee ◽  
Anthony Vetro ◽  
Yao Wang ◽  
Yo-Sung Ho

Author(s):  
Gaurav Chaurasia ◽  
Arthur Nieuwoudt ◽  
Alexandru-Eugen Ichim ◽  
Richard Szeliski ◽  
Alexander Sorkine-Hornung

We present an end-to-end system for real-time environment capture, 3D reconstruction, and stereoscopic view synthesis on a mobile VR headset. Our solution allows the user to use the cameras on their VR headset as their eyes to see and interact with the real world while still wearing the headset, a feature often referred to as Passthrough. The central challenge when building such a system is the choice and implementation of algorithms under the strict compute, power, and performance constraints imposed by the target user experience and mobile platform. A key contribution of this paper is a complete description of a corresponding system that performs temporally stable passthrough rendering at 72 Hz with only 200 mW power consumption on a mobile Snapdragon 835 platform. Our algorithmic contributions for enabling this performance include the computation of a coarse 3D scene proxy on the embedded video encoding hardware, followed by a depth densification and filtering step, and finally stereoscopic texturing and spatio-temporal up-sampling. We provide a detailed discussion and evaluation of the challenges we encountered, as well as algorithm and performance trade-offs in terms of compute and resulting passthrough quality. The described system is available to users as the Passthrough+ feature on Oculus Quest. We believe that by publishing the underlying system and methods, we provide valuable insights to the community on how to design and implement real-time environment sensing and rendering on heavily resource-constrained hardware.
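The three-stage pipeline described above (coarse scene proxy, depth densification and filtering, then texturing and up-sampling) can be illustrated schematically. The sketch below is not the paper's implementation: it assumes sparse block-matching disparities are already available (e.g., repurposed from encoder motion estimation), converts them to depth with the pinhole stereo relation Z = f·B/d, and densifies by simple diffusion. The function name and all parameter values are illustrative.

```python
import numpy as np

def sparse_to_dense_depth(h, w, samples, focal=300.0, baseline=0.064, iters=2):
    """Densify sparse disparity samples into a smooth depth map.

    samples: list of (row, col, disparity) tuples from block matching,
    e.g. motion vectors repurposed as stereo disparities (illustrative).
    """
    depth = np.zeros((h, w))
    known = np.zeros((h, w), dtype=bool)
    for r, c, disp in samples:
        depth[r, c] = focal * baseline / max(disp, 1e-3)  # pinhole stereo: Z = f*B/d
        known[r, c] = True
    # Crude densification: iterative diffusion that keeps known samples fixed.
    dense = depth.copy()
    dense[~known] = depth[known].mean() if known.any() else 0.0
    for _ in range(iters * max(h, w)):
        padded = np.pad(dense, 1, mode="edge")
        neigh = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                 padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        dense = np.where(known, depth, neigh)  # never overwrite known depths
    return dense
```

A real system would replace the diffusion loop with an edge-aware filter and run the up-sampling per eye, but the structure (sparse measurements in, dense depth out) is the same.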


Author(s):  
Niels Svane ◽  
Troels Lange ◽  
Sara Egemose ◽  
Oliver Dalby ◽  
Aris Thomasberger ◽  
...  

Traditional monitoring (e.g., in-water surveys) of eelgrass meadows and perennial macroalgae in coastal areas is time- and labor-intensive, requires extensive equipment, and yields data with low temporal resolution. Further, divers and Remotely Operated Vehicles (ROVs) have a low spatial extent, covering only small fractions of full systems. The inherent heterogeneity of eelgrass meadows and macroalgae assemblages in these coastal systems makes interpolation and extrapolation of observations complicated and, as such, methods that collect data on larger spatial scales whilst retaining high spatial resolution are required to guide management. Recently, the utilization of Unoccupied Aerial Vehicles (UAVs) has gained popularity in ecological sciences due to their ability to rapidly collect large amounts of area-based and georeferenced data, making it possible to monitor the spatial extent and status of submerged aquatic vegetation (SAV) communities with limited equipment requirements compared to ROV or diver surveys. This paper focuses on the increased value provided by UAV-based data collection (visual/Red-Green-Blue imagery) and Object-Based Image Analysis for gaining an improved understanding of eelgrass recovery. It is demonstrated that delineation and classification of two species of SAV (Fucus vesiculosus and Zostera marina) is possible, with an error matrix indicating 86–92% accuracy. Classified maps also highlighted the increasing biomass and areal coverage of F. vesiculosus as a potential stressor to eelgrass meadows. Further, the authors derive a statistically significant conversion of percentage cover to biomass (R2 = 0.96 for Fucus vesiculosus, R2 = 0.89 for Zostera marina total biomass, and R2 = 0.94 for above-ground biomass (AGB) alone; p < 0.001). The results provide an example of mapping SAV cover and biomass and a tool for spatio-temporal analyses to enhance the understanding of eelgrass ecosystem dynamics.
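The reported cover-to-biomass conversion is a regression fit with an associated R2. A minimal sketch of such a fit is shown below; the function name and units are illustrative, not the authors' code:

```python
import numpy as np

def fit_cover_to_biomass(cover_pct, biomass_g):
    """Least-squares line mapping percentage cover to biomass.

    Returns (slope, intercept, r_squared), where r_squared is the
    coefficient of determination of the fitted line.
    """
    cover = np.asarray(cover_pct, dtype=float)
    biomass = np.asarray(biomass_g, dtype=float)
    slope, intercept = np.polyfit(cover, biomass, 1)
    predicted = slope * cover + intercept
    ss_res = np.sum((biomass - predicted) ** 2)   # residual sum of squares
    ss_tot = np.sum((biomass - biomass.mean()) ** 2)  # total sum of squares
    return slope, intercept, 1.0 - ss_res / ss_tot
```

Applied per species (and separately to above-ground biomass), fits like this yield the R2 values quoted in the abstract.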


2001 ◽  
Vol 01 (03) ◽  
pp. 507-526 ◽  
Author(s):  
TONG LIN ◽  
HONG-JIANG ZHANG ◽  
QING-YUN SHI

In this paper, we present a novel scheme for video content representation that exploits spatio-temporal information. A pseudo-object-based shot representation containing more semantics is proposed to measure shot similarity, and a force-competition approach is proposed to group shots into scenes based on content coherence between shots. Two color-object content descriptors, Dominant Color Histograms (DCH) and Spatial Structure Histograms (SSH), are introduced. To represent temporal content variations, a shot is segmented into several subshots of coherent content, and the shot similarity measure is formulated as a subshot similarity measure that supports shot retrieval. With this shot representation, the scene structure can be extracted by analyzing the splitting and merging force competitions at each shot boundary. Experimental results on real-world sports video show that the proposed approach achieves the best performance on average recall (AR) and average normalized modified retrieval rank (ANMRR) for video shot retrieval, and experiments on MPEG-7 test videos show promising results for the proposed scene extraction algorithm.
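A minimal sketch of histogram-based subshot matching of the kind the abstract describes, assuming histogram intersection as the similarity measure and best-pair matching across subshots (both are assumptions for illustration; the paper's exact formulation may differ):

```python
import numpy as np

def histogram_intersection(h1, h2):
    """Similarity in [0, 1] between two color histograms (normalized first)."""
    h1 = np.asarray(h1, dtype=float)
    h2 = np.asarray(h2, dtype=float)
    h1 = h1 / h1.sum()
    h2 = h2 / h2.sum()
    return float(np.minimum(h1, h2).sum())

def shot_similarity(subshots_a, subshots_b):
    """Shot similarity as the best-matching pair of subshot histograms,
    so a shot matches if any of its coherent segments matches."""
    return max(histogram_intersection(a, b)
               for a in subshots_a for b in subshots_b)
```

Taking the maximum over subshot pairs means a long shot with varying content can still retrieve a short shot that matches only one of its segments, which is the motivation for subshot-level matching.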


2018 ◽  
Vol 89 ◽  
pp. 828-839 ◽  
Author(s):  
Daniela Braun ◽  
Alexander Damm ◽  
Lars Hein ◽  
Owen L. Petchey ◽  
Michael E. Schaepman

2018 ◽  
Vol 75 (6) ◽  
pp. 1849-1863 ◽  
Author(s):  
Thomas Kiørboe ◽  
André Visser ◽  
Ken H Andersen

Trait-based ecology merges evolutionary with classical population and community ecology and is a rapidly developing branch of ecology. It describes ecosystems as consisting of individuals rather than species, and characterizes individuals by a few key traits that are interrelated through trade-offs. The fundamental rationale is that the spatio-temporal distribution of organisms and their functional role in ecosystems depend on their traits rather than on their taxonomic affiliation. The approach respects that interactions occur between individuals, not between species or populations, and in trait-based models ecosystem structure emerges from interactions between individuals and with the environment, rather than being prescribed. It offers an alternative to classical species-centric approaches and has the potential to describe complex ecosystems in simple ways and to assess the effects of environmental change on ecosystem structure and function. Here, we describe the components of the trait-based approach and apply it to describe and model marine ecosystems. Our description is illustrated with multiple examples of life in the ocean, from unicellular plankton to fish.
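As a toy illustration of the trait-based idea (interactions governed by traits rather than species identity), the sketch below encodes a size-selection kernel commonly used in size-structured marine models: a predator's feeding preference peaks at a preferred predator:prey mass ratio. The kernel form and parameter values here are illustrative, not taken from the paper:

```python
import numpy as np

def feeding_preference(pred_mass, prey_mass, beta=100.0, sigma=1.0):
    """Log-normal size-selection kernel.

    Preference of a predator of mass pred_mass for prey of mass
    prey_mass, maximal when pred_mass / prey_mass equals the
    preferred predator:prey mass ratio beta; sigma sets selectivity.
    """
    log_ratio = np.log(pred_mass / prey_mass)
    return np.exp(-((log_ratio - np.log(beta)) ** 2) / (2 * sigma ** 2))
```

With such a kernel, who eats whom emerges from a single trait (body mass) instead of a prescribed species-by-species interaction matrix, which is the sense in which trait-based model structure "emerges" rather than being specified.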


2020 ◽  
Vol 12 (22) ◽  
pp. 3798
Author(s):  
Lei Ma ◽  
Michael Schmitt ◽  
Xiaoxiang Zhu

Recently, time-series of optical satellite data have been frequently used in object-based land-cover classification. This poses a significant challenge to object-based image analysis (OBIA) owing to the presence of complex spatio-temporal information in the time-series data. This study evaluates object-based land-cover classification in the northern suburbs of Munich using time-series of optical Sentinel data. Using a random forest classifier as the backbone, experiments were designed to analyze the impact of the segmentation scale, features (including spectral and temporal features), categories, and the frequency and acquisition timing of optical satellite images. Based on our analyses, the following findings are reported: (1) Optical Sentinel images acquired over four seasons can make a significant contribution to the classification of agricultural areas, even though this contribution varies between spectral bands for the same period. (2) The use of time-series data alleviates the issue of identifying the “optimal” segmentation scale. The findings of this study provide a more comprehensive understanding of the effects of classification uncertainty on object-based dense multi-temporal image classification.
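A minimal sketch of the general setup, stacking one feature per season into a temporal feature vector per object and fitting a random forest. It assumes scikit-learn's RandomForestClassifier; the class profiles and feature values below are invented for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical per-object features: one NDVI-like value per season
# (spring, summer, autumn, winter), stacked into a temporal vector.
crop   = rng.normal([0.2, 0.8, 0.5, 0.1], 0.05, size=(50, 4))   # strong summer peak
meadow = rng.normal([0.5, 0.6, 0.55, 0.4], 0.05, size=(50, 4))  # flatter profile
X = np.vstack([crop, meadow])
y = np.array([0] * 50 + [1] * 50)  # 0 = crop, 1 = meadow

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
```

The temporal profile (here, the summer peak) is what separates the two classes even where single-date spectra overlap, which is the abstract's point about multi-seasonal acquisitions aiding agricultural classification.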

