Waveguide-Based Chemical and Spectroelectrochemical Sensor Platforms

2019 ◽  
Vol 19 (6) ◽  
pp. 109-117
Author(s):  
Brooke M. Beam ◽  
Adam Simmonds ◽  
Peter A. Veneman ◽  
Erin Ratcliff ◽  
Sergio B. Mendes ◽  
...  
2005 ◽  
Author(s):  
Ryan Giedd ◽  
Kartik Ghosh ◽  
Matt Curry ◽  
Rishi Patel ◽  
Paul Durham ◽  
...  

2021 ◽  
pp. 138077
Author(s):  
Clara Pérez-Ràfols ◽  
Keying Guo ◽  
Maria Alba ◽  
Rou Jun Toh ◽  
Núria Serrano ◽  
...  

2013 ◽  
Vol 32 (3) ◽  
Author(s):  
Sayandev Chatterjee ◽  
Samuel A. Bryan ◽  
Carl J. Seliskar ◽  
William R. Heineman

Author(s):  
Benjamin Babjak ◽  
Sandor Szilvasi ◽  
Peter Volgyesi ◽  
Ozgur Yapar ◽  
Prodyot K. Basu

2009 ◽  
Vol 9 (3) ◽  
pp. 1865-1871 ◽  
Author(s):  
Gökhan Demirel ◽  
Zakir Rzaev ◽  
Süleyman Patir ◽  
Erhan Pişkin

Author(s):  
O. Hasler ◽  
S. Nebiker

Abstract. Estimating the pose of a mobile robotic platform is a challenging task, especially when the pose must be expressed in a global or local reference frame and the estimation has to be performed while the platform is moving. While the position of a platform can be measured directly via modern tachymetry or with a global navigation satellite system (GNSS), the absolute orientation of the platform is harder to derive. Most often, only the relative orientation is estimated, using sensors mounted on the robotic platform such as an IMU, one or more cameras, a laser scanner, or a combination of these; a sensor fusion of the relative orientation and the absolute position is then performed. In this work, an additional approach is presented: first, an image-based relative pose estimation is performed on frames from a panoramic camera using a state-of-the-art visual odometry implementation. Second, the position of the platform in a reference system is estimated using motorized tachymetry. Finally, the absolute orientation is calculated using a visual marker placed in the space in which the robotic platform is moving. The marker can be detected in the camera frame, and since the position of this marker is known in the reference system, the absolute pose can be estimated. To improve the absolute pose estimate, a sensor fusion is conducted. Results with a Lego model train as the mobile platform show that the trajectories of the absolute pose, calculated independently with four different markers, deviate by less than 0.66 degrees 50% of the time and that the average difference is less than 1.17 degrees. The implementation is based on the popular Robot Operating System (ROS).
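The core geometric step of the marker-based orientation recovery can be sketched in a few lines: given the platform position from tachymetry and the known marker position (both in the reference frame), plus the bearing at which the camera sees the marker, the absolute heading follows from a single angle difference. This is a minimal 2D illustration with assumed names (`absolute_yaw`, `marker_bearing_cam`); the paper's actual pipeline works on full poses and fuses several markers.

```python
import math

def absolute_yaw(platform_xy, marker_xy, marker_bearing_cam):
    """Recover the platform's absolute heading in the reference frame.

    platform_xy        -- platform position from tachymetry (reference frame)
    marker_xy          -- known marker position (reference frame)
    marker_bearing_cam -- bearing of the detected marker relative to the
                          camera's forward axis (radians, CCW positive)
    """
    dx = marker_xy[0] - platform_xy[0]
    dy = marker_xy[1] - platform_xy[1]
    bearing_ref = math.atan2(dy, dx)        # marker bearing in the reference frame
    yaw = bearing_ref - marker_bearing_cam  # absolute heading of the platform
    # wrap the result to (-pi, pi]
    return math.atan2(math.sin(yaw), math.cos(yaw))

# Platform at the origin sees a marker at (0, 1) ninety degrees to its left,
# so the platform itself must be facing along the +x axis (yaw = 0).
yaw = absolute_yaw((0.0, 0.0), (0.0, 1.0), math.pi / 2)
```

In the fused system, each detected marker yields one such absolute heading observation, which can then be combined with the relative orientation from visual odometry.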


Author(s):  
M. Weinmann ◽  
M. Weinmann

<p><strong>Abstract.</strong> In this paper, we address the semantic interpretation of urban environments on the basis of multi-modal data in the form of RGB color imagery, hyperspectral data and LiDAR data acquired from aerial sensor platforms. We extract radiometric features from the given RGB color imagery and hyperspectral data, and we also consider transformations to potentially more suitable data representations. For the RGB color imagery, these are achieved via color invariants, normalization procedures or specific assumptions about the scene. For the hyperspectral data, we apply techniques for dimensionality reduction and feature selection as well as a transformation to multispectral Sentinel-2-like data of the same spatial resolution. Furthermore, we extract geometric features describing the local 3D structure from the given LiDAR data. The defined feature sets are provided separately and in different combinations as input to a Random Forest classifier. To assess the potential of the different feature sets and their combinations, we present results achieved on the MUUFL Gulfport Hyperspectral and LiDAR Airborne Data Set.</p>
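The feature-combination experiment described above can be sketched with scikit-learn: per-point radiometric and geometric feature sets are stacked column-wise and fed to a Random Forest. The arrays below are synthetic stand-ins (the MUUFL Gulfport data and the paper's actual feature extractors are not reproduced here); only the combination-and-classify pattern is illustrated.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 600

# Stand-ins for the extracted feature sets (real ones come from RGB /
# hyperspectral imagery and from local 3D structure in the LiDAR data).
radiometric = rng.normal(size=(n, 8))
geometric = rng.normal(size=(n, 4))
# Toy labels correlated with one feature from each modality.
labels = (radiometric[:, 0] + geometric[:, 0] > 0).astype(int)

# Combine the feature sets column-wise, as one of the tested input variants.
X = np.hstack([radiometric, geometric])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

Running the same loop with `radiometric` or `geometric` alone as `X` reproduces the paper's comparison of separate versus combined feature sets.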


2014 ◽  
Vol 557 ◽  
pp. 012034
Author(s):  
T T Toh ◽  
S W Wright ◽  
M E Kiziroglou ◽  
P D Mitcheson ◽  
E M Yeatman
