Evaluation of Bag of Visual Words for Category Level Object Recognition

Author(s):  
K. S. Sujatha ◽  
G. M. Karthiga ◽  
B. Vinod

Object recognition in large-scale image collections has become an important application in machine vision. Recent advances in object and image recognition show that the Bag-of-Visual-Words (BoVW) approach is well suited to image classification problems. In this work, the effect of different parameters on the BoVW approach is explored, evaluating recognition performance in terms of accuracy, precision, and F1 measure on 8 classes of real-world objects commonly used in restaurant applications. The system presented here is based on a visual vocabulary: features are extracted, clustered, trained, and evaluated on an image database of 1600 images of different categories. To validate the obtained results, a performance evaluation on vehicle datasets was carried out using SURF and SIFT descriptors with K-means and K-medoid clustering and a KNN classifier. Among these combinations, SURF with K-means performs best.
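The pipeline this abstract describes (extract local descriptors, cluster them into a visual vocabulary, encode each image as a word histogram) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the hand-rolled k-means stands in for the paper's K-means/K-medoid clustering, and random arrays stand in for real SURF/SIFT descriptors, which in practice would come from a library such as OpenCV.

```python
import numpy as np

def build_vocabulary(descriptors, k=8, iters=10, seed=0):
    # Naive k-means over all local descriptors pooled across images.
    # `descriptors` is a list of (n_i, d) arrays, one per image.
    rng = np.random.default_rng(seed)
    X = np.vstack(descriptors).astype(float)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each descriptor to its nearest center, then update centers.
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers

def encode(image_descriptors, centers):
    # Bag-of-visual-words encoding: histogram of nearest visual words,
    # L1-normalized so images with different descriptor counts compare fairly.
    labels = np.argmin(
        ((image_descriptors[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    hist = np.bincount(labels, minlength=len(centers)).astype(float)
    return hist / hist.sum()
```

The resulting histograms would then be fed to a classifier such as the KNN used in the abstract.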

Author(s):  
Billy Peralta ◽  
Luis Alberto Caro

Generic object recognition algorithms usually require complex classification models because of intrinsic difficulties arising from problems such as changes in pose, lighting conditions, or partial occlusions. Decision trees present an inexpensive alternative for classification tasks and offer the advantage of being simple to understand. On the other hand, a common scheme for object recognition is based on the appearances of visual words, also known as the bag-of-words method. Although multiple co-occurrences of visual words are more informative with respect to visual classes, a comprehensive evaluation of such combinations is infeasible because it would result in a combinatorial explosion. In this paper, we propose to obtain multiple co-occurrences of visual words using a variant of the CLIQUE subspace-clustering algorithm in order to improve the object recognition performance of simple decision trees. Experiments on standard object datasets show that our method improves classification accuracy on generic objects compared to traditional decision tree techniques, reaching accuracy comparable to ensemble techniques. In future work, we plan to evaluate other variants of decision trees and to apply other subspace-clustering algorithms.
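The core idea of augmenting bag-of-words vectors with selected word co-occurrences can be sketched as below. Note the hedge: the paper selects co-occurrences with a CLIQUE variant; this sketch substitutes a much simpler frequency-based pair selection purely to illustrate the feature construction, which avoids the combinatorial explosion by keeping only a few pairs. The augmented features would then feed any standard decision-tree learner.

```python
import numpy as np
from itertools import combinations

def cooccurrence_features(presence, top_pairs=5):
    # presence: (n_images, n_words) binary indicators of visual-word presence.
    # Stand-in for the paper's CLIQUE-based search: rank all word pairs by
    # how often they co-occur, keep the top few, and append the AND of each
    # kept pair as a new feature column.
    n_words = presence.shape[1]
    pairs = list(combinations(range(n_words), 2))
    freq = [(presence[:, i] * presence[:, j]).sum() for i, j in pairs]
    keep = [pairs[idx] for idx in np.argsort(freq)[::-1][:top_pairs]]
    extra = np.column_stack(
        [presence[:, i] * presence[:, j] for i, j in keep])
    return np.hstack([presence, extra]), keep
```

A decision tree trained on the augmented matrix can then split directly on informative word pairs instead of rediscovering them one word at a time.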


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
José L. Hernández-Ramos ◽  
Georgios Karopoulos ◽  
Dimitris Geneiatakis ◽  
Tania Martin ◽  
Georgios Kambourakis ◽  
...  

During 2021, different worldwide initiatives have been established for the development of digital vaccination certificates to ease the restrictions associated with the COVID-19 pandemic for vaccinated individuals. Although diverse technologies can be considered for the deployment of such certificates, the use of blockchain has been suggested as a promising approach due to its decentralization and transparency features. However, the proposed solutions often lack realistic experimental evaluation that could help determine possible practical challenges in deploying a blockchain platform for this purpose. To fill this gap, this work introduces a scalable, blockchain-based platform for the secure sharing of COVID-19 or other disease vaccination certificates. As an indicative use case, we emulate a large-scale deployment by considering the countries of the European Union. The platform is evaluated through extensive experiments measuring computing resource usage, network response time, and bandwidth. Based on the results, the proposed scheme shows satisfactory performance across all major evaluation criteria, suggesting that it can set the pace for real implementations. Vis-à-vis the related work, the proposed platform is novel, especially through the prism of a large-scale, full-fledged implementation and its assessment.
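A common design in such certificate platforms, and a plausible reading of the abstract, is to anchor only a digest of each certificate on-chain while the personal data stays off-ledger; verification then recomputes the digest and checks membership. The sketch below is a toy illustration of that pattern, not the paper's architecture: the `Ledger` class stands in for a smart contract, and the canonical-JSON hashing is an assumption.

```python
import hashlib
import json

def cert_digest(cert: dict) -> str:
    # Canonicalize the certificate (sorted JSON keys) and hash it with
    # SHA-256; only this digest would be written to the blockchain.
    payload = json.dumps(cert, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

class Ledger:
    # Toy append-only registry standing in for an on-chain smart contract.
    def __init__(self):
        self._digests = set()

    def register(self, cert: dict) -> None:
        self._digests.add(cert_digest(cert))

    def verify(self, cert: dict) -> bool:
        # A certificate verifies only if its digest was registered;
        # any tampering changes the digest and fails the check.
        return cert_digest(cert) in self._digests
```

Because verification needs only the digest, a verifier never has to read personal data from the ledger itself.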


2021 ◽  
Author(s):  
David Miralles ◽  
Guillem Garrofé ◽  
Calota Parés ◽  
Alejandro González ◽  
Gerard Serra ◽  
...  

Abstract The cognitive connection between the senses of touch and vision is probably the best-known case of cross-modality. Recent discoveries suggest that the mapping between both senses is learned rather than innate. This evidence opens the door to a dynamic cross-modality that allows individuals to adaptively develop within their environment. Mimicking this aspect of human learning, we propose a new cross-modal mechanism that allows artificial cognitive systems (ACS) to adapt quickly to unforeseen perceptual anomalies generated by the environment or by the system itself. In this context, visual recognition systems have advanced remarkably in recent years thanks to the creation of large-scale datasets together with the advent of deep learning algorithms. However, such advances have not occurred in the haptic modality, mainly due to the lack of two-handed dexterous datasets that allow learning systems to process the tactile information of human object exploration. This data imbalance limits the creation of synchronized multimodal datasets that would enable the development of cross-modality in ACS during object exploration. In this work, we use a multimodal dataset recently generated from tactile sensors placed on a collection of objects that capture haptic data from human manipulation, together with the corresponding visual counterpart. Using these data, we create a cross-modal learning transfer mechanism capable of detecting both sudden and permanent anomalies in the visual channel while maintaining visual object recognition performance, by retraining the visual modality for a few minutes using haptic information. Here we show the importance of cross-modality in perceptual awareness and its ecological capability to self-adapt to different environments.
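One simple way to picture the anomaly-detection side of such a mechanism: if the haptic channel can predict what the visual embedding of an object should look like, a visual-channel anomaly shows up as a drift between the observed and haptic-predicted embeddings. The sketch below is purely illustrative and not the authors' method; the cosine-distance test and the threshold value are assumptions.

```python
import numpy as np

def visual_anomaly(visual_emb, haptic_pred_emb, threshold=0.5):
    # Flag an anomaly in the visual channel when the observed visual
    # embedding drifts away from the embedding predicted from haptics.
    # Uses cosine distance; the 0.5 threshold is illustrative only.
    v = visual_emb / np.linalg.norm(visual_emb)
    h = haptic_pred_emb / np.linalg.norm(haptic_pred_emb)
    return (1.0 - float(v @ h)) > threshold
```

In the abstract's terms, a flagged anomaly would then trigger the brief haptic-driven retraining of the visual modality.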

