hardware cost
Recently Published Documents


TOTAL DOCUMENTS

154
(FIVE YEARS 22)

H-INDEX

10
(FIVE YEARS 1)

Electronics ◽  
2021 ◽  
Vol 10 (22) ◽  
pp. 2823
Author(s):  
Maarten Vandersteegen ◽  
Kristof Van Beeck ◽  
Toon Goedemé

Quantization of neural networks has been one of the most popular techniques to compress models for embedded (IoT) hardware platforms with highly constrained latency, storage, memory-bandwidth, and energy specifications. Limiting the number of bits per weight and activation has been the main focus in the literature. To avoid major degradation of accuracy, common quantization methods introduce additional scale factors to adapt the quantized values to the diverse data ranges present in full-precision (floating-point) neural networks. These scales are usually kept in high precision, requiring the target compute engine to support a few high-precision multiplications, which is not desirable due to the larger hardware cost. Little effort has yet been invested in trying to avoid high-precision multipliers altogether, especially in combination with 4-bit weights. This work proposes a new quantization scheme, based on power-of-two quantization scales, that performs on par with uniform per-channel quantization using full-precision 32-bit quantization scales, while using only 4-bit weights. This is done through the addition of a low-precision lookup table that translates stored 4-bit weights into nonuniformly distributed 8-bit weights for internal computation. All our quantized ImageNet CNNs achieved or even exceeded the Top-1 accuracy of their full-precision counterparts, with ResNet18 exceeding its full-precision model by 0.35%. Our MobileNetV2 model achieved state-of-the-art performance with only a slight drop in accuracy of 0.51%.
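The key idea of power-of-two quantization scales is that rescaling becomes a cheap bit shift instead of a high-precision multiply. A minimal NumPy sketch of per-channel quantization with scales snapped to powers of two is shown below; the function name, window of 4-bit signed values, and structure are illustrative assumptions, not the paper's exact scheme (which additionally uses a lookup table from 4-bit to nonuniform 8-bit weights).

```python
import numpy as np

def quantize_pow2(weights, n_bits=4):
    """Per-channel quantization with power-of-two scales (illustrative sketch).

    The per-channel scale is rounded to the nearest power of two so a
    compute engine can rescale with a bit shift rather than a
    high-precision multiplication."""
    qmax = 2 ** (n_bits - 1) - 1          # e.g. 7 for 4-bit signed
    # One scale per output channel (axis 0).
    reduce_axes = tuple(range(1, weights.ndim))
    max_abs = np.max(np.abs(weights), axis=reduce_axes, keepdims=True)
    max_abs = np.maximum(max_abs, 1e-12)  # avoid log2(0) for dead channels
    scale = max_abs / qmax
    # Snap each scale to the nearest power of two: 2**round(log2(s)).
    pow2_scale = 2.0 ** np.round(np.log2(scale))
    q = np.clip(np.round(weights / pow2_scale), -qmax - 1, qmax)
    return q.astype(np.int8), pow2_scale

# Example: quantize a random conv-like weight tensor (out_ch, in_ch, kh, kw).
w = np.random.randn(8, 3, 3, 3).astype(np.float32)
q, s = quantize_pow2(w)
w_hat = q * s   # dequantized approximation of w
```

Because each scale is an exact power of two, the multiplication `q * s` maps to a shift in fixed-point hardware, which is the hardware-cost saving the abstract refers to.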


2021 ◽  
Vol 2113 (1) ◽  
pp. 012012
Author(s):  
Xiaoya Quan

Abstract UAV base stations (UAVBSs) have been proposed as a revolutionary new architecture for 5G networks. UAVBSs can be deployed as access points to provide wireless services to users in emergency scenarios. However, it is challenging to solve the highly coupled problem of UAVBS deployment and power allocation. Meanwhile, hybrid analog and digital beamforming is leveraged to reduce the hardware cost of beamforming in 5G networks. In this work, we first use the k-means algorithm to solve the 3D placement of UAVBSs by exploiting the optimal coverage altitude. Next, the power allocation problem is solved using the difference-of-two-convex-functions (D.C.) programming algorithm. Furthermore, the quality of service (QoS) for each user is guaranteed by adjusting the transmitted power. Finally, extensive experiments are conducted to demonstrate the feasibility of the proposed algorithm.
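The k-means placement step can be sketched as follows: users are clustered in the horizontal plane, and each UAVBS is placed at its cluster centroid, lifted to the coverage altitude. This is a minimal illustration under assumed names and a fixed altitude parameter; the paper's optimal-altitude derivation and the D.C. power-allocation step are not reproduced here.

```python
import numpy as np

def place_uavbs(users, k, altitude, iters=50, seed=0):
    """Sketch of k-means 3D placement for UAV base stations.

    users:    (n, 2) array of ground-user positions.
    Returns a (k, 3) array of UAVBS positions at the given altitude."""
    rng = np.random.default_rng(seed)
    centers = users[rng.choice(len(users), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each user to the nearest UAVBS center.
        d = np.linalg.norm(users[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its assigned users.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = users[labels == j].mean(axis=0)
    # Lift the 2D centroids to 3D at the coverage altitude.
    return np.column_stack([centers, np.full(k, altitude)])

# Example: 60 users scattered in a 100 m square, served by 3 UAVBSs.
rng = np.random.default_rng(1)
users = rng.random((60, 2)) * 100.0
sites = place_uavbs(users, k=3, altitude=50.0)
```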


Sensors ◽  
2021 ◽  
Vol 21 (20) ◽  
pp. 6807
Author(s):  
Yong Xie ◽  
Yili Guo ◽  
Sheng Yang ◽  
Jian Zhou ◽  
Xiaobai Chen

The introduction of various networks into automotive cyber-physical systems (ACPS) brings great challenges to the security protection of ACPS functions; the auto industry recommends adopting the hardware security module (HSM)-based multicore ECU to secure in-vehicle networks while meeting the delay constraint. However, this approach incurs significant hardware cost. Consequently, this paper aims to reduce the hardware cost of security enhancement by proposing two efficient design space exploration (DSE) algorithms, namely, the stepwise decreasing-based heuristic algorithm (SDH) and the interference balancing-based heuristic algorithm (IBH), which explore task assignment, task scheduling, and message scheduling to minimize the number of required HSMs. Experiments on both synthetic and real data sets show that the proposed SDH and IBH are superior to the state-of-the-art algorithm, and their advantage becomes more pronounced as the percentage of security-critical tasks increases. For synthetic data sets, the hardware cost is reduced by 61.4% and 45.6% on average for IBH and SDH, respectively; for real data sets, the hardware cost is reduced by 64.3% and 54.4% on average for IBH and SDH, respectively. Furthermore, IBH is better than SDH in most cases, and the runtime of IBH is two to three orders of magnitude smaller than that of SDH and the state-of-the-art algorithm.


Author(s):  
Hongyi Liu ◽  
Xiangao Qi ◽  
Yuqing Lou ◽  
Liang Qi ◽  
Zuo-Wei Yeh ◽  
...  
Keyword(s):  

Author(s):  
Hadjer Benmeziane ◽  
Kaoutar El Maghraoui ◽  
Hamza Ouarnoughi ◽  
Smail Niar ◽  
Martin Wistuba ◽  
...  

There is no doubt that making AI mainstream by bringing powerful, yet power-hungry, deep neural networks (DNNs) to resource-constrained devices requires an efficient co-design of algorithms, hardware, and software. The increased popularity of DNN applications deployed on a wide variety of platforms, from tiny microcontrollers to data centers, has resulted in multiple questions and challenges related to constraints introduced by the hardware. In this survey on hardware-aware neural architecture search (HW-NAS), we present some of the existing answers proposed in the literature for the following questions: "Is it possible to build an efficient DL model that meets the latency and energy constraints of tiny edge devices?" and "How can we reduce the trade-off between the accuracy of a DL model and its ability to be deployed on a variety of platforms?". The survey provides a new taxonomy of HW-NAS and assesses the hardware cost estimation strategies. We also highlight the challenges and limitations of existing approaches and potential future directions. We hope that this survey will help to fuel research towards efficient deep learning.


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4628
Author(s):  
Xiaowen Teng ◽  
Guangsheng Zhou ◽  
Yuxuan Wu ◽  
Chenglong Huang ◽  
Wanjing Dong ◽  
...  

The three-dimensional reconstruction method using an RGB-D camera strikes a good balance between hardware cost and point cloud quality. However, due to the limitations of its inherent structure and imaging principle, the acquired point cloud suffers from problems such as heavy noise and difficult registration. This paper proposes a 3D reconstruction method using Azure Kinect to solve these inherent problems. Color images, depth images, and near-infrared images of the target are captured from six perspectives by the Azure Kinect sensor against a black background. The binarization result of the 8-bit infrared image is multiplied with the RGB-D image alignment result provided by Microsoft, which removes ghosting and most of the background noise. A neighborhood extreme filtering method is proposed to filter out the abrupt points in the depth image, by which the floating noise points and most of the outlier noise are removed before generating the point cloud; a pass-through filter then eliminates the rest of the outlier noise. An improved method based on the classic iterative closest point (ICP) algorithm is presented to merge the point clouds from multiple views. By continuously reducing both the size of the down-sampling grid and the distance threshold between corresponding points, the point clouds of each view are registered three times in succession, until the integral color point cloud is obtained. Extensive experiments on rapeseed plants show that the success rate of point cloud registration is 92.5%, the point cloud accuracy obtained by this method is 0.789 mm, a complete scan takes 302 seconds, and color restoration is good. Compared with a laser scanner, the proposed method achieves comparable reconstruction accuracy at a significantly higher reconstruction speed, while the hardware cost of building an automatic scanning system is much lower. This research demonstrates a low-cost, high-precision 3D reconstruction technology with the potential to be widely used for non-destructive measurement of rapeseed and other crop phenotypes.
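The neighborhood extreme filtering idea can be sketched as a local outlier test on the depth image: a pixel whose depth deviates sharply from its neighborhood is treated as a floating or abrupt noise point and invalidated before the point cloud is generated. The window size, threshold, and use of the local median below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def neighborhood_extreme_filter(depth, win=3, thresh=30):
    """Illustrative sketch of neighborhood extreme filtering on a depth image.

    A pixel is marked invalid (set to 0) when its depth differs from the
    median of its win-by-win neighborhood by more than `thresh` depth
    units, removing isolated floating/abrupt noise points."""
    h, w = depth.shape
    r = win // 2
    out = depth.copy()
    padded = np.pad(depth.astype(np.float64), r, mode="edge")
    for i in range(h):
        for j in range(w):
            block = padded[i:i + win, j:j + win]
            ref = np.median(block)       # local reference depth
            if abs(float(depth[i, j]) - ref) > thresh:
                out[i, j] = 0            # invalidate the abrupt point
    return out

# Example: a flat 500-unit depth plane with one floating noise point.
depth = np.full((10, 10), 500, dtype=np.uint16)
depth[5, 5] = 900
clean = neighborhood_extreme_filter(depth, win=3, thresh=30)
```

The isolated spike at (5, 5) is removed while the surrounding plane is untouched; a pass-through (range) filter on the resulting point cloud would then discard the remaining outliers, as the abstract describes.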


2021 ◽  
Vol 5 (3) ◽  
pp. 1-39
Author(s):  
Shrey Baheti ◽  
Shreyas Badiger ◽  
Yogesh Simmhan

Internet of Things (IoT) deployments have been growing manifold, encompassing sensors, networks, edge, fog, and cloud resources. Despite the intense interest from researchers and practitioners, most do not have access to large-scale IoT testbeds for validation. Simulation environments that allow analytical modeling are a poor substitute for evaluating software platforms or application workloads in realistic computing environments. Here, we propose a virtual environment for validating Internet of Things at large scales (VIoLET), an emulator for defining and launching large-scale IoT deployments within cloud VMs. It allows users to declaratively specify container-based compute resources that match the performance of native IoT compute devices using Docker. These can be inter-connected by complex topologies on which bandwidth and latency rules are enforced. Users can configure synthetic sensors for data generation as well. We also incorporate models for CPU resource dynamism, and for failure and recovery of the underlying devices. We offer a detailed comparison of VIoLET's compute and network performance between the virtual and physical deployments, evaluate its scaling with deployments of up to 1,000 devices and 4,000 device-cores, and validate its ability to model resource dynamism. Our extensive experiments show that the performance of the virtual IoT environment accurately matches the expected behavior, with deviation levels within what is seen in actual physical devices. It also scales to 1,000s of devices at a modest cloud computing cost of under 0.15% of the actual hardware cost, per hour of use, with minimal management effort. This IoT emulation environment fills an essential gap between IoT simulators and real deployments.


Author(s):  
Yuancan Lin ◽  
Lei Xie ◽  
Chuyu Wang ◽  
Yanling Bu ◽  
Sanglu Lu

As an important indicator of infusion monitoring in clinical treatment, the drip rate is expected to be monitored in an accurate and real-time manner. However, state-of-the-art drip rate monitoring schemes either suffer from high maintenance or incur high hardware cost. In this paper, we propose DropMonitor, an RFID-based approach to perform mm-level sensing for infusion drip rate monitoring. By attaching a pair of batteryless RFID tags to the drip chamber, we can estimate the drip rate by capturing the RF-signals reflected from the vibrating liquid surface caused by the falling droplets. Particularly, we use the sensing tag to perceive the liquid surface vibration in the drip chamber and further derive the drip rate for infusion monitoring. Moreover, to sufficiently mitigate the multi-path interference from the surrounding human activities, we use the reference tag to perceive the multi-path signals from the indoor environment. By computing the difference of the RF-signals from the tag pair, we cancel the multi-path interference and extract the drip-rate-related signals. We have implemented a prototype system and evaluated its performance in real applications. The experiment results show that DropMonitor can accurately estimate the infusion drip rate, and the average relative error of drip rate estimation is below 1% for conventional cases. In this way, considering the essential sampling rates of each tag, DropMonitor is able to monitor the drip rate for over a dozen infusion bottles/bags in parallel with one COTS RFID system.
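The tag-pair differencing idea can be sketched as follows: subtracting the reference tag's signal stream from the sensing tag's cancels the common-mode multipath component, leaving the drip-induced vibration, whose dominant frequency gives the drip rate. The simple sinusoidal signal model, sampling rate, and function names below are illustrative assumptions, not the paper's exact processing pipeline.

```python
import numpy as np

def estimate_drip_rate(sense_phase, ref_phase, fs):
    """Sketch of drip-rate estimation from a sensing/reference tag pair.

    sense_phase, ref_phase: phase streams (radians) sampled at fs Hz.
    Differencing cancels multipath interference common to both tags;
    the dominant spectral peak of the residual is the drip frequency."""
    diff = np.unwrap(sense_phase) - np.unwrap(ref_phase)
    diff = diff - diff.mean()                 # remove DC offset
    spectrum = np.abs(np.fft.rfft(diff))
    freqs = np.fft.rfftfreq(len(diff), d=1.0 / fs)
    spectrum[0] = 0.0                         # ignore any residual DC
    f_drip = freqs[spectrum.argmax()]         # dominant vibration frequency
    return f_drip * 60.0                      # drips per minute

# Example: a 2 Hz drip vibration buried in shared 0.5 Hz multipath.
fs = 40.0                                     # tag read rate (assumed)
t = np.arange(0, 10, 1 / fs)
multipath = 0.3 * np.sin(2 * np.pi * 0.5 * t) # seen by both tags
sense = 0.5 * np.sin(2 * np.pi * 2.0 * t) + multipath
rate = estimate_drip_rate(sense, multipath, fs)
```

After differencing, only the 2 Hz drip component remains, so the estimate lands at about 120 drips per minute despite the stronger low-frequency interference.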

