A CNN-SIFT Hybrid Pedestrian Navigation Method Based on First-Person Vision

2018 ◽  
Vol 10 (8) ◽  
pp. 1229 ◽  
Author(s):  
Qi Zhao ◽  
Boxue Zhang ◽  
Shuchang Lyu ◽  
Hong Zhang ◽  
Daniel Sun ◽  
...  

The emergence of new wearable technologies, such as action cameras and smart glasses, has driven the use of the first-person perspective in computer applications. This field is now attracting the attention and investment of researchers aiming to develop methods to process first-person vision (FPV) video. Current approaches combine different image features with quantitative methods to accomplish specific objectives, such as object detection, activity recognition, user–machine interaction, etc. FPV-based navigation is necessary in special areas where Global Positioning System (GPS) signals or other radio-based positioning methods are blocked, and it is especially helpful for visually impaired people. In this paper, we propose a hybrid structure with a convolutional neural network (CNN) and local image features to achieve FPV pedestrian navigation. A novel end-to-end trainable global pooling operator, called AlphaMEX, has been designed to improve the scene classification accuracy of CNNs. A scale-invariant feature transform (SIFT)-based tracking algorithm is employed for movement estimation and trajectory tracking of the person through each frame of FPV images. Experimental results demonstrate the effectiveness of the proposed method. The top-1 error rate of the proposed AlphaMEX-ResNet outperforms the original ResNet (k = 12) by 1.7% on the ImageNet dataset. The CNN-SIFT hybrid pedestrian navigation system reaches 0.57 m average absolute error, which is adequate accuracy for pedestrian navigation. Both positions and movements can be well estimated by the proposed pedestrian navigation algorithm with a single wearable camera.
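The abstract does not give the AlphaMEX formula, but it describes a trainable global pooling operator that replaces plain average/max pooling. A minimal illustrative stand-in, assuming a log-mean-exp pooling whose base interpolates between global average pooling and global max pooling (the paper's exact parametrisation may differ):

```python
import numpy as np

def alpha_pool(fmap, alpha=2.0):
    """Smooth global pooling over the spatial dims of an (H, W, C) feature map.

    Log-mean-exp with base ``alpha``: interpolates between global average
    pooling (alpha -> 1) and global max pooling (alpha -> inf). In a CNN,
    alpha would be a learnable parameter trained end to end; this sketch is
    an assumption, not the paper's exact AlphaMEX operator.
    """
    h, w, c = fmap.shape
    flat = fmap.reshape(h * w, c)                      # flatten spatial dims
    return np.log(np.mean(alpha ** flat, axis=0)) / np.log(alpha)

# Toy 2x2 single-channel feature map with activations 0..3.
fmap = np.array([[[0.0], [1.0]], [[2.0], [3.0]]])
soft = alpha_pool(fmap, alpha=10.0)     # between the mean (1.5) and max (3.0)
sharp = alpha_pool(fmap, alpha=1000.0)  # approaches the max
```

Raising `alpha` sharpens the pooling toward the maximum activation, which is why a learnable sharpness can outperform a fixed average or max pool on scene classification.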

2021 ◽  
Vol 11 (4) ◽  
pp. 521
Author(s):  
Jonathan Erez ◽  
Marie-Eve Gagnon ◽  
Adrian M. Owen

Investigating human consciousness based on brain activity alone is a key challenge in cognitive neuroscience. One of its central facets, the ability to form autobiographical memories, has been investigated through several fMRI studies that have revealed a pattern of activity across a network of frontal, parietal, and medial temporal lobe regions when participants view personal photographs, as opposed to when they view photographs from someone else’s life. Here, our goal was to attempt to decode when participants were re-experiencing an entire event, captured on video from a first-person perspective, relative to a very similar event experienced by someone else. Participants were asked to sit passively in a wheelchair while a researcher pushed them around a local mall. A small wearable camera was mounted on each participant, in order to capture autobiographical videos of the visit from a first-person perspective. One week later, participants were scanned while they passively viewed different categories of videos; some were autobiographical, while others were not. A machine-learning model was able to successfully classify the video categories above chance, both within and across participants, suggesting that there is a shared mechanism differentiating autobiographical experiences from non-autobiographical ones. Moreover, the classifier brain maps revealed that the fronto-parietal network, mid-temporal regions and extrastriate cortex were critical for differentiating between autobiographical and non-autobiographical memories. We argue that this novel paradigm captures the true nature of autobiographical memories, and is well suited to patients (e.g., with brain injuries) who may be unable to respond reliably to traditional experimental stimuli.
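The abstract reports that a machine-learning model classified video categories above chance under cross-validation. The authors' actual classifier and data are not specified here, so the following is only a minimal sketch of the general approach, using synthetic "voxel pattern" data and a leave-one-out nearest-centroid decoder in plain NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic voxel patterns (illustrative only): 20 trials per video category,
# 50 features, with a small mean shift between categories so decoding is
# above chance but not trivially perfect.
n, d = 20, 50
auto = rng.normal(0.3, 1.0, (n, d))    # "autobiographical" trials
other = rng.normal(-0.3, 1.0, (n, d))  # "non-autobiographical" trials
X = np.vstack([auto, other])
y = np.array([1] * n + [0] * n)

def loo_nearest_centroid(X, y):
    """Leave-one-out cross-validated nearest-centroid decoding accuracy."""
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i          # hold out trial i
        c1 = X[mask & (y == 1)].mean(axis=0)   # class centroids from the rest
        c0 = X[mask & (y == 0)].mean(axis=0)
        pred = 1 if np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0) else 0
        correct += pred == y[i]
    return correct / len(y)

acc = loo_nearest_centroid(X, y)  # well above the 0.5 chance level here
```

Chance level for two balanced categories is 0.5, so any decoder whose cross-validated accuracy reliably exceeds that (assessed against a permutation or binomial null in practice) indicates that the two conditions evoke distinguishable activity patterns.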


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Doerte Kuhrt ◽  
Natalie R. St. John ◽  
Jacob L. S. Bellmund ◽  
Raphael Kaplan ◽  
Christian F. Doeller

Advances in virtual reality (VR) technology have greatly benefited spatial navigation research. By presenting space in a controlled manner, changing aspects of the environment one at a time or manipulating the gain from different sensory inputs, the mechanisms underlying spatial behaviour can be investigated. In parallel, a growing body of evidence suggests that the processes involved in spatial navigation extend to non-spatial domains. Here, we leverage these VR advances to test whether participants can navigate abstract knowledge. We designed a two-dimensional quantity space, presented through a head-mounted display, in which participants navigated using a first-person perspective paradigm. To investigate the effect of physical movement, we divided participants into two groups: one walking and rotating on a motion platform, the other using a gamepad to move through the abstract space. We found that both groups learned to navigate using a first-person perspective and formed accurate representations of the abstract space. Interestingly, navigation in the quantity space resembled behavioural patterns observed in navigation studies using environments with natural visuospatial cues. Notably, both groups demonstrated similar patterns of learning. Taken together, these results imply that both self-movement and remote exploration can be used to learn the relational mapping between abstract stimuli.
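The abstract does not state how navigation accuracy in the quantity space was scored; a natural measure (an assumption here, with entirely invented coordinates) is the Euclidean distance between the location a participant navigates to and the true target in the normalised two-dimensional space:

```python
import numpy as np

# Hypothetical 2D quantity space: each axis encodes one stimulus quantity,
# normalised to [0, 1]. The coordinates below are illustrative, not data
# from the study.
targets = np.array([[0.2, 0.8], [0.5, 0.5], [0.9, 0.1]])
responses = np.array([[0.25, 0.7], [0.45, 0.55], [0.8, 0.2]])

# Per-trial placement error = Euclidean distance in the abstract space.
errors = np.linalg.norm(responses - targets, axis=1)
mean_error = errors.mean()
```

A score like this lets accuracy be compared directly between the motion-platform and gamepad groups, since both produce endpoints in the same normalised space.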


Philosophies ◽  
2021 ◽  
Vol 6 (1) ◽  
pp. 5
Author(s):  
S. J. Blodgett-Ford

The phenomenon and ethics of “voting” will be explored in the context of human enhancements. “Voting” will be examined for enhanced humans with moderate and extreme enhancements. Existing patterns of discrimination in voting around the globe could continue substantially “as is” for those with moderate enhancements. For extreme enhancements, voting rights could be challenged if the very humanity of the enhanced was in doubt. Humans who were not enhanced could also be disenfranchised if certain enhancements become prevalent. Voting will be examined using a theory of engagement articulated by Professor Sophie Loidolt that emphasizes the importance of legitimization and justification by “facing the appeal of the other” to determine what is “right” from a phenomenological first-person perspective. Seeking inspiration from the Universal Declaration of Human Rights (UDHR) of 1948, voting rights and responsibilities will be re-framed from a foundational working hypothesis that all enhanced and non-enhanced humans should have a right to vote directly. Representative voting will be considered as an admittedly imperfect alternative or additional option. The framework in which voting occurs, as well as the processes, temporal cadence, and role of voting, requires participation from as diverse a group of humans as possible. Voting rights delivered by fiat to enhanced or non-enhanced humans who were excluded from participation in the design and ratification of the governance structure are not legitimate. Applying and extending Loidolt’s framework, we must recognize the urgency that demands the impossible, with openness to that universality in progress (or universality to come) that keeps being constituted from the outside.

