Towards Native Code Offloading Platforms for Image Processing in Mobile Applications: A Case Study

Author(s):  
Guillermo Valenzuela ◽  
Andres Neyem ◽  
Jose I. Benedetto ◽  
Jaime Navon ◽  
Pablo Sanabria ◽  
...  
Author(s):  
José Rouillard

Designing and developing multimodal mobile applications requires substantial expertise from researchers and industrial engineers. It is crucial to be able to rapidly develop prototypes for smartphones and tablet devices in order to test and evaluate mobile multimedia solutions, without necessarily being an expert in signal processing (image processing, object recognition, sensor processing, etc.). This chapter follows the development process of a scientific experiment in which a mobile application is used to determine which modality (touch, voice, QR code) users prefer for entering expiration dates of food products. The mobile application is designed and generated with the AppInventor framework. The benefits and limitations of this visual tool are presented through the “Pervasive Fridge” case study, and the resulting prototype is discussed.
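Each of the three modalities ultimately yields an expiration date as a text string that the application must normalize before storing it. As a minimal sketch of that normalization step (the input formats, the function name, and the modality-to-format mapping are assumptions for illustration, not details from the chapter):

```python
from datetime import date, datetime

# Hypothetical string formats the three modalities might produce:
# touch widget -> "2024-05-31", voice -> "31 May 2024", QR code -> "31/05/2024"
FORMATS = ["%Y-%m-%d", "%d %B %Y", "%d/%m/%Y"]

def parse_expiration(text: str) -> date:
    """Normalize an expiration-date string from any input modality
    into a single canonical date object."""
    for fmt in FORMATS:
        try:
            return datetime.strptime(text.strip(), fmt).date()
        except ValueError:
            continue
    raise ValueError(f"Unrecognized expiration date: {text!r}")

print(parse_expiration("31/05/2024"))  # -> 2024-05-31
```

Funneling all modalities through one parser keeps the comparison between input methods fair: only the capture step differs, while validation and storage are shared.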


Author(s):  
Abel Méndez-Porras ◽  
Jorge Alfaro-Velasco ◽  
Marcelo Jenkins ◽  
Alexandra Martínez Porras

Context: Mobile applications support a set of user-interaction features that are independent of the application logic. Rotating the device, scrolling, or zooming are examples of such features. Some bugs in mobile applications can be attributed to user-interaction features. Objective: This paper proposes and evaluates a bug analyzer based on user-interaction features that uses digital image processing to find bugs. Method: Our bug analyzer detects bugs by comparing the similarity between images taken before and after a user interaction. SURF, an interest point detector and descriptor, is used to compare the images. To evaluate the bug analyzer, we conducted a case study with 15 randomly selected mobile applications. First, we identified user-interaction bugs by manually testing the applications. Images were captured before and after applying each user-interaction feature. Then, image pairs were processed with SURF to obtain interest points, from which a similarity percentage was computed, to finally decide whether there was a bug. Results: We performed a total of 49 user-interaction feature tests. Manual testing found 17 bugs, whereas image processing detected 15. Conclusions: 8 of the 15 mobile applications tested had bugs associated with user-interaction features. Our bug analyzer based on image processing detected 88% (15 out of 17) of the user-interaction bugs found through manual testing.
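The decision step described above, computing a similarity percentage from matched interest points and thresholding it to flag a bug, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the 70% threshold is a hypothetical value, and the match counts are assumed to come from a SURF matching stage (e.g., via OpenCV's `xfeatures2d` module) run beforehand.

```python
def similarity_percentage(matched: int, total_before: int) -> float:
    """Fraction of the 'before' image's interest points that found a
    match in the 'after' image, expressed as a percentage."""
    if total_before == 0:
        return 0.0
    return 100.0 * matched / total_before

def is_user_interaction_bug(matched: int, total_before: int,
                            threshold: float = 70.0) -> bool:
    """Flag a bug when too few interest points survive the interaction.
    The 70.0 threshold is a hypothetical value for illustration."""
    return similarity_percentage(matched, total_before) < threshold

# Example: 320 of 400 interest points matched after rotating the device.
print(is_user_interaction_bug(320, 400))  # 80% similarity -> False (no bug)
```

Separating the similarity metric from the thresholding decision makes it easy to calibrate the threshold against manually labeled bugs, as the case study's comparison between manual and automated detection would require.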


2020 ◽  
Author(s):  
Tim Opperwall ◽  
Ben Holter ◽  
Simon Yardley ◽  
...  
