Differential Privacy Preserving of Training Model in Wireless Big Data with Edge Computing

2020 ◽ Vol 6 (2) ◽ pp. 283-295 ◽ Author(s): Miao Du, Kun Wang, Zhuoqun Xia, Yan Zhang

2020 ◽ Vol 17 (9) ◽ pp. 50-65 ◽ Author(s): Mengnan Bi, Yingjie Wang, Zhipeng Cai, Xiangrong Tong

2016 ◽ Vol 40 (4) ◽ Author(s): Chi Lin, Zihao Song, Houbing Song, Yanhong Zhou, Yi Wang, ...

2018 ◽ Vol 56 (8) ◽ pp. 62-67 ◽ Author(s): Miao Du, Kun Wang, Yuanfang Chen, Xiaoyan Wang, Yanfei Sun

2021 ◽ Vol 21 (3) ◽ pp. 1-17 ◽ Author(s): Wu Chen, Yong Yu, Keke Gai, Jiamou Liu, Kim-Kwang Raymond Choo

In existing ensemble learning algorithms (e.g., random forest), each base learner samples from and trains on the entire dataset. This is impractical in many real-world applications and incurs additional computational cost. To improve efficiency, we propose a decentralized framework, Multi-Agent Ensemble, which leverages edge computing to support ensemble learning while balancing restricted data access (each learner sees only a small sub-dataset) against accuracy. Specifically, network edge nodes (learners) perform classification and prediction in our framework. Data is distributed across multiple base learners, which exchange information through an interaction mechanism to improve prediction. The proposed approach thus relies on a distributed training model rather than conventional centralized learning. Experimental evaluations on 20 real-world datasets suggest that Multi-Agent Ensemble outperforms other ensemble approaches in accuracy even though each base learner requires fewer samples (i.e., a significant reduction in computation cost).
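The abstract does not detail the authors' interaction mechanism, but the core idea, base learners each trained on a small disjoint sub-dataset, with predictions aggregated across learners, can be sketched minimally. The decision-stump learners and majority-vote aggregation below are illustrative assumptions, not the paper's implementation:

```python
from collections import Counter

def train_stump(subset):
    """Fit a 1-D decision stump (threshold classifier) using only one
    learner's local sub-dataset of (x, label) pairs."""
    best = None  # (num_correct, threshold, polarity)
    for thresh in sorted({x for x, _ in subset}):
        for polarity in (1, -1):
            correct = sum(
                (1 if polarity * (x - thresh) >= 0 else 0) == y
                for x, y in subset
            )
            if best is None or correct > best[0]:
                best = (correct, thresh, polarity)
    _, thresh, polarity = best
    return lambda x: 1 if polarity * (x - thresh) >= 0 else 0

def majority_vote(learners, x):
    """Aggregate the base learners' class votes into one prediction."""
    return Counter(f(x) for f in learners).most_common(1)[0][0]

# Toy data: label is 1 exactly when x > 0.5.
data = [(i / 60, int(i / 60 > 0.5)) for i in range(60)]

# Partition the data across 5 edge learners; each trains on its own
# small, disjoint sub-dataset rather than on the full set.
k = 5
parts = [data[i::k] for i in range(k)]
learners = [train_stump(p) for p in parts]

print(majority_vote(learners, 0.9))  # -> 1
print(majority_vote(learners, 0.1))  # -> 0
```

Each learner here touches only 12 of the 60 samples, mirroring the framework's access restriction, while the vote across learners recovers accuracy on the full input range.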


2021 ◽ Author(s): Longxiang Gao, Tom H. Luan, Bruce Gu, Youyang Qu, Yong Xiang
