Cross-Device Computation Coordination for Mobile Collocated Interactions with Wearables

Sensors ◽  
2019 ◽  
Vol 19 (4) ◽  
pp. 796 ◽  
Author(s):  
Hyoseok Yoon ◽  
Choonsung Shin

Mobile devices, wearables and Internet-of-Things devices are packed into ever smaller form factors with ever smaller batteries, yet they face demanding applications such as big data analysis, data mining, machine learning, augmented reality and virtual reality. To meet such high demands in the multi-device ecology, multiple devices should communicate collectively to share computation burdens and stay energy-efficient. In this paper, we present a cross-device computation coordination method for scenarios of mobile collocated interactions with wearables. We formally define the cross-device computation coordination problem and propose a method for solving it. Lastly, we demonstrate the feasibility of our approach through experiments and exemplar cases using 12 commercial Android devices with varying computation capabilities.
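The coordination idea in the abstract can be illustrated with a toy sketch: offload each task to the collocated device that can run it fastest while keeping some battery headroom. The scoring rule, the 20% battery threshold and the device figures below are illustrative assumptions, not the formulation or the data from the paper.

```python
# Hypothetical sketch of cross-device computation coordination:
# assign each task to the collocated device with the highest speed
# that still has battery headroom. Not the authors' formulation.

def coordinate(tasks, devices):
    """tasks: list of (name, cost); devices: dict name -> {'speed', 'battery'}."""
    assignment = {}
    for name, cost in sorted(tasks, key=lambda t: -t[1]):  # heaviest first
        # prefer devices with enough battery left (illustrative 20% floor)
        candidates = [d for d, s in devices.items() if s['battery'] > 20]
        if not candidates:
            candidates = list(devices)  # fall back: no device has headroom
        best = max(candidates, key=lambda d: devices[d]['speed'])
        assignment[name] = best
        # charge an illustrative energy cost proportional to cost / speed
        devices[best]['battery'] -= cost / devices[best]['speed']
    return assignment

devices = {
    'phone':   {'speed': 10.0, 'battery': 80.0},
    'watch':   {'speed': 2.0,  'battery': 50.0},
    'glasses': {'speed': 4.0,  'battery': 30.0},
}
tasks = [('render_AR', 40.0), ('sync', 5.0), ('classify', 20.0)]
print(coordinate(tasks, devices))
```

With these numbers every task lands on the phone, the fastest device with headroom; draining its battery below the floor would push later tasks to the watch or glasses.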

2021 ◽  
Vol 2021 ◽  
pp. 1-12 ◽  
Author(s):  
Jui-Chan Huang ◽  
Po-Chang Ko ◽  
Cher-Min Fong ◽  
Sn-Man Lai ◽  
Hsin-Hung Chen ◽  
...  

With the increase in the number of online shopping users, customer loyalty is directly related to product sales. This research explores the statistical modeling and simulation of online shopping customer loyalty based on machine learning and big data analysis, mainly using a machine learning clustering algorithm to simulate customer loyalty. A k-means interactive mining algorithm based on a Hash structure is invoked to perform data mining on the multidimensional hierarchical tree of corporate credit risk; the support thresholds for the different levels of data mining are continuously adjusted according to specific requirements, and effective association rules are selected until satisfactory results are obtained. After credit risk assessment and early-warning modeling for the enterprise, an initial preselected model is obtained. The information to be collected is first fetched by a web crawler from the target website into a temporary web page database, where it goes through a series of preprocessing steps such as completion, deduplication, analysis, and extraction to ensure that each crawled page is parsed correctly and to avoid incorrect data caused by network errors during crawling. The correctly parsed data are stored for subsequent data cleaning and data analysis. To parse HTML documents, a Java program first sets the subject keyword and URL and parses the HTML from the obtained file or string by analyzing the structure of the website; it then uses CSS selectors to find the web page list information, retrieves the data, and stores it in Elements. In the overall fit test of the model, the root mean square error of approximation (RMSEA) value is 0.053, between 0.05 and 0.08. The results show that the model designed in this study achieves a relatively good fit, strengthens customers' perception of shopping websites, and indicates that relationship trust plays the greater role in maintaining customer loyalty.
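The clustering step described above can be illustrated with a minimal plain-Python k-means. The paper's Hash-structure-based interactive variant is not reproduced here, and the two loyalty-related features (purchase frequency, average spend) and the sample points are hypothetical.

```python
# Minimal k-means sketch for grouping shoppers by loyalty-related
# features. Features and sample points are illustrative assumptions.
import random

def kmeans(points, k, iters=50, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # pick k distinct points as seeds
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest center (squared distance)
            i = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
            clusters[i].append(p)
        # move each center to the mean of its cluster (keep it if empty)
        centers = [
            tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers, clusters

# two obvious groups: occasional low spenders vs. frequent high spenders
points = [(1, 20), (2, 25), (1, 30), (9, 200), (10, 220), (8, 180)]
centers, clusters = kmeans(points, k=2)
print(sorted(len(c) for c in clusters))  # → [3, 3]
```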


2021 ◽  
Vol 8 (32) ◽  
pp. 22-38 ◽  
Author(s):  
José Manuel Amigo

Concepts like Machine Learning, Data Mining and Artificial Intelligence have become part of our daily life. This is mostly due to the incredible advances made in computation (hardware and software), the increasing capability to generate and store all types of data and, especially, the societal and economic benefits generated by the analysis of such data. Meanwhile, Chemometrics has played an important role since the late 1970s, analyzing data within the natural sciences (and especially in Analytical Chemistry). Yet, even though all of the abovementioned terms run strongly parallel and are familiar to most of us, it is still difficult to clearly define or differentiate the meanings of Machine Learning, Data Mining, Artificial Intelligence, Deep Learning and Chemometrics. This manuscript sheds some light on the definitions of Machine Learning, Data Mining, Artificial Intelligence and Big Data Analysis, defines their application ranges and seeks an application space within the field of analytical chemistry (a.k.a. Chemometrics). The manuscript is full of personal, sometimes probably subjective, opinions and statements. Therefore, all opinions here are open for constructive discussion with the only purpose of Learning (like the Machines do nowadays).


2019 ◽  
Vol 25 (7) ◽  
pp. 1783-1801 ◽  
Author(s):  
Shu-hsien Liao ◽  
Yi-Shan Tasi

Purpose
In the retailing industry, the database records the time and place at which a retail transaction is completed. E-business processes increasingly adopt databases from which in-depth customer and sales knowledge can be obtained through big data analysis. Specific big data analysis on a database system allows a retailer to design and implement business process management (BPM) that maximizes profits, minimizes costs and satisfies customers within a business model. Thus, research on big data analysis for BPM in retailing is a critical issue, and this paper aims to discuss it.
Design/methodology/approach
This paper develops a database and ER model and uses cluster analysis, the C&R tree and the Apriori algorithm to illustrate big data analysis/data mining results for generating business intelligence and process management, thereby obtaining customer knowledge from the case firm's database system.
Findings
Big data analysis/data mining results such as customer profiles, product/brand display classifications and product/brand sales associations can be used to propose alternatives to the case firm for store layout and bundling-sales business process and management development.
Originality/value
This paper is an example of developing BPM from a database model and big data/data mining, based on insights from big data analysis applications, for store layout and bundling sales in the retailing industry.
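The association-rule step can be sketched as a minimal Apriori-style pass, restricted here to size-2 itemsets for brevity. The baskets and the support/confidence thresholds are illustrative assumptions, not the case firm's data.

```python
# Apriori-style association mining over retail baskets, limited to
# pair itemsets. Baskets and thresholds are illustrative assumptions.
from itertools import combinations

def frequent_pairs(transactions, min_support):
    """Return {(a, b): support} for item pairs meeting min_support."""
    n = len(transactions)
    counts = {}
    for t in transactions:
        for pair in combinations(sorted(set(t)), 2):
            counts[pair] = counts.get(pair, 0) + 1
    return {p: c / n for p, c in counts.items() if c / n >= min_support}

def rules(transactions, min_support=0.4, min_conf=0.7):
    """Emit (antecedent, consequent, support, confidence) rules."""
    n = len(transactions)
    item_count = {}
    for t in transactions:
        for i in set(t):
            item_count[i] = item_count.get(i, 0) + 1
    out = []
    for (a, b), sup in frequent_pairs(transactions, min_support).items():
        for x, y in ((a, b), (b, a)):  # try both rule directions
            conf = sup * n / item_count[x]
            if conf >= min_conf:
                out.append((x, y, round(sup, 2), round(conf, 2)))
    return out

baskets = [['beer', 'chips'], ['beer', 'chips', 'salsa'],
           ['chips', 'salsa'], ['beer', 'chips'], ['soda']]
print(rules(baskets))
```

Rules such as "salsa → chips" would feed the bundling-sales and store-layout proposals the abstract describes.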


Author(s):  
Cerene Mariam Abraham ◽  
Mannathazhathu Sudheep Elayidom ◽  
Thankappan Santhanakrishnan

Background: Machine learning is one of the most popular research areas today. It relates closely to the field of data mining, which extracts information and trends from large datasets.
Aims: The objective of this paper is to (a) illustrate big data analytics for the Indian derivative market and (b) identify trends in the data.
Methods: Based on input from experts in the equity domain, the data are verified statistically using data mining techniques. Specifically, ten years of daily derivative data are used for training and testing purposes. The methods adopted for this work include model generation using ARIMA and the Hadoop framework, which comprises mapping and reducing stages for big data analysis.
Results: The results of this work are the observation of a trend indicating the rise and fall of prices in derivatives, the generation of a time-series similarity graph and the plotting of the frequency of temporal data.
Conclusion: Big data analytics is an underexplored topic in the Indian derivative market, and the results of this paper can be used by investors to earn both short-term and long-term benefits.
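A toy stand-in for the mapping and reducing stages, written in plain Python rather than on Hadoop, counts rise versus fall days over a hypothetical sample of daily closes (the price series is invented, not the paper's data):

```python
# Illustrative map/reduce-style pass over daily closes to count
# rise vs. fall days, mirroring Hadoop's mapping and reducing stages.
from itertools import groupby

def mapper(prev_close, close):
    # emit one (key, 1) pair per day; ties count as 'fall' for simplicity
    return ('rise' if close > prev_close else 'fall', 1)

def reducer(pairs):
    # group by key and sum the counts, as a Hadoop reducer would
    out = {}
    for key, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
        out[key] = sum(v for _, v in group)
    return out

closes = [100.0, 101.5, 99.8, 102.0, 103.1, 102.5]
pairs = [mapper(a, b) for a, b in zip(closes, closes[1:])]
print(reducer(pairs))  # → {'fall': 2, 'rise': 3}
```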

