Deep Learning versus Spectral Techniques for Frequency Estimation of Single Tones: Reduced Complexity for Software-Defined Radio and IoT Sensor Communications

Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2729
Author(s):  
Hind R. Almayyali ◽  
Zahir M. Hussain

Despite the increasing role of machine learning in various fields, very few works have considered artificial intelligence for frequency estimation (FE). This work presents a comprehensive analysis of a deep-learning (DL) approach for frequency estimation of single tones. A DL network with two layers having a few nodes can estimate frequency more accurately than well-known classical techniques can. While filling the gap in the existing literature, the study is comprehensive, analyzing errors under different signal-to-noise ratios (SNRs), numbers of nodes, and numbers of input samples, as well as under missing SNR information. DL-based FE is not significantly affected by SNR bias or number of nodes. A DL-based approach can work properly using a minimal number of input samples N at which classical methods fail. DL could use as few as two layers with two or three nodes each, with complexity of O(N) compared with discrete Fourier transform (DFT)-based FE with O(N log2 N) complexity. Furthermore, DL requires a smaller N. Therefore, DL can significantly reduce FE complexity, memory cost, and power consumption, which is attractive for resource-limited systems such as some Internet of Things (IoT) sensor applications. Reduced complexity also opens the door for hardware-efficient implementation using short-word-length (SWL) arithmetic or time-efficient software-defined radio (SDR) communications.
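The abstract's complexity comparison can be illustrated with a minimal sketch (not the authors' implementation): a classical DFT-based estimator picks the peak bin of the magnitude spectrum in O(N log N), while the forward pass of a small two-layer network over N input samples is dominated by one 3×N matrix-vector product, i.e. O(N). The network weights below are placeholders; in the paper's approach they would be learned from training data.

```python
import numpy as np

def dft_frequency_estimate(x, fs=1.0):
    """Classical FE: peak of the DFT magnitude spectrum, O(N log N)."""
    spectrum = np.abs(np.fft.rfft(x))
    return np.argmax(spectrum) * fs / len(x)

def dl_frequency_estimate(x, W1, b1, W2, b2):
    """Forward pass of a hypothetical two-layer network with 3 hidden
    nodes: the W1 @ x product (3xN weights) dominates, giving O(N)."""
    h = np.tanh(W1 @ x + b1)      # hidden layer, shape (3,)
    return float(W2 @ h + b2)     # scalar frequency estimate

# Clean tone at f0 = 8/64 cycles/sample: the DFT peak recovers it exactly.
N, f0 = 64, 8 / 64
x = np.cos(2 * np.pi * f0 * np.arange(N))
print(dft_frequency_estimate(x))  # 0.125
```

With untrained (random) weights the DL estimate is of course meaningless; the point of the sketch is only the per-estimate cost, which is linear in N versus N log N for the DFT route.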


Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3936
Author(s):  
Yannis Spyridis ◽  
Thomas Lagkas ◽  
Panagiotis Sarigiannidis ◽  
Vasileios Argyriou ◽  
Antonios Sarigiannidis ◽  
...  

Unmanned aerial vehicles (UAVs) in the role of flying anchor nodes have been proposed to assist the localisation of terrestrial Internet of Things (IoT) sensors and provide relay services in the context of the upcoming 6G networks. This paper considered the objective of tracing a mobile IoT device of unknown location, using a group of UAVs that were equipped with received signal strength indicator (RSSI) sensors. The UAVs employed measurements of the target’s radio frequency (RF) signal power to approach the target as quickly as possible. A deep learning model performed clustering in the UAV network at regular intervals, based on a graph convolutional network (GCN) architecture, which utilised information about the RSSI and the UAV positions. The number of clusters was determined dynamically at each instant using a heuristic method, and the partitions were determined by optimising an RSSI loss function. The proposed algorithm retained the clusters that approached the RF source more effectively, removing the rest of the UAVs, which returned to the base. Simulation experiments demonstrated the improvement of this method compared to a previous deterministic approach, in terms of the time required to reach the target and the total distance covered by the UAVs.
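The abstract does not specify the exact GCN architecture, but a single graph-convolution propagation step in the common form H' = ReLU(D^(-1/2)(A + I)D^(-1/2) H W) can be sketched as follows, with node features standing in for UAV positions and RSSI readings (all names and values are illustrative):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step: H' = ReLU(A_hat @ H @ W), where
    A_hat is the symmetrically normalised adjacency with self-loops."""
    A_tilde = A + np.eye(A.shape[0])           # add self-loops
    d = A_tilde.sum(axis=1)                    # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt  # symmetric normalisation
    return np.maximum(A_hat @ H @ W, 0.0)      # ReLU

# Five UAVs; features per node = [x, y, RSSI] (illustrative values)
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
H = np.random.default_rng(1).normal(size=(5, 3))
W = np.random.default_rng(2).normal(size=(3, 4))
print(gcn_layer(A, H, W).shape)  # (5, 4)
```

Stacking such layers lets each UAV's embedding mix in information from its neighbours, which the paper then partitions into clusters by optimising an RSSI loss.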


Author(s):  
Mahmood Alzubaidi ◽  
Haider Dhia Zubaydi ◽  
Ali Bin-Salem ◽  
Alaa A Abd-Alrazaq ◽  
Arfan Ahmed ◽  
...  

2021 ◽  
pp. 102685
Author(s):  
Parjanay Sharma ◽  
Siddhant Jain ◽  
Shashank Gupta ◽  
Vinay Chamola

Author(s):  
Yuanrui Dong ◽  
Peng Zhao ◽  
Hanqiao Yu ◽  
Cong Zhao ◽  
Shusen Yang

The emerging edge-cloud collaborative Deep Learning (DL) paradigm aims at improving the performance of practical DL implementations in terms of cloud bandwidth consumption, response latency, and data privacy preservation. Focusing on bandwidth-efficient edge-cloud collaborative training of DNN-based classifiers, we present CDC, a Classification Driven Compression framework that reduces bandwidth consumption while preserving the classification accuracy of edge-cloud collaborative DL. Specifically, to reduce bandwidth consumption on resource-limited edge servers, we develop a lightweight autoencoder with classification guidance, providing compression with classification-driven feature preservation, which allows edge servers to upload only the latent code of raw data for accurate global training on the cloud. Additionally, we design an adjustable quantization scheme that adaptively pursues the tradeoff between bandwidth consumption and classification accuracy under different network conditions, where only fine-tuning is required for rapid compression-ratio adjustment. Results of extensive experiments demonstrate that, compared with DNN training on raw data, CDC consumes 14.9× less bandwidth with an accuracy loss of no more than 1.06%, and compared with DNN training on data compressed by an autoencoder without guidance, CDC introduces at least 100% lower accuracy loss.

