Fault Detection, Diagnosis, and Isolation Strategy in Li-Ion Battery Management Systems of HEVs Using 1-D Wavelet Signal Analysis

2020 ◽  
Author(s):  
Nicolae Tudoroiu ◽  
Mohammed Zaheeruddin ◽  
Roxana-Elena Tudoroiu ◽  
Sorin Mihai Radu

Nowadays, the wavelet transformation and the 1-D wavelet technique provide valuable tools for signal processing, design, and analysis in a wide range of applications: industrial control systems, audio, image, and video compression, signal denoising, interpolation, image zooming, texture analysis, time-scale feature extraction, multimedia, electrocardiogram signal analysis, and financial prediction. Aware of this vast applicability of the 1-D wavelet as a feature-extraction tool, this paper takes advantage of its ability to extract different patterns from signal data sets collected from healthy and faulty input-output signals. This capability is beneficial for developing various techniques, such as coding, signal processing (denoising, filtering, reconstruction), prediction, diagnosis, and detection and isolation of defects. The proposed case study extends the applicability of these techniques to detect failures in the battery management control system, such as failures of the sensors measuring the current, voltage, and temperature inside an HEV rechargeable battery, as an alternative to Kalman filtering estimation techniques. Simulation results obtained on the MATLAB R2020a software platform demonstrate the effectiveness of the proposed scheme in terms of detection accuracy, computation time, and robustness against measurement uncertainty.
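The fault-detection idea described above can be illustrated with a minimal sketch: a single-level 1-D Haar decomposition splits a sensor window into approximation (trend) and detail (high-frequency) coefficients, and abnormal detail energy flags a faulty measurement. The wavelet family, decomposition depth, threshold, and the toy voltage traces are illustrative assumptions, not the paper's exact design.

```python
# Sketch: one-level 1-D Haar wavelet decomposition as a feature extractor
# for sensor-fault detection (illustrative assumptions throughout).
import math

def haar_dwt(signal):
    """One level of the Haar DWT: returns (approximation, detail) coefficients."""
    a, d = [], []
    for i in range(0, len(signal) - 1, 2):
        s, t = signal[i], signal[i + 1]
        a.append((s + t) / math.sqrt(2))   # low-pass: local average (trend)
        d.append((s - t) / math.sqrt(2))   # high-pass: local difference
    return a, d

def detail_energy(signal):
    """Energy of the detail coefficients; spikes for abrupt sensor faults."""
    _, d = haar_dwt(signal)
    return sum(c * c for c in d)

def is_faulty(signal, threshold):
    """Flag a measurement window whose high-frequency energy is abnormal."""
    return detail_energy(signal) > threshold

# A healthy, slowly varying voltage trace vs. one with a sensor dropout:
healthy = [3.70, 3.70, 3.69, 3.69, 3.68, 3.68, 3.67, 3.67]
faulty  = [3.70, 3.70, 3.69, 0.00, 0.00, 3.68, 3.67, 3.67]
```

In practice one would decompose over several levels (e.g. with a dedicated wavelet library) and tune the threshold per sensor, but the residual-energy principle is the same.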

2019 ◽  
Vol 31 (6) ◽  
pp. 844-850 ◽  
Author(s):  
Kevin T. Huang ◽  
Michael A. Silva ◽  
Alfred P. See ◽  
Kyle C. Wu ◽  
Troy Gallerani ◽  
...  

OBJECTIVE Recent advances in computer vision have revolutionized many aspects of society but have yet to find significant penetrance in neurosurgery. One proposed use for this technology is to aid in the identification of implanted spinal hardware. In revision operations, knowing the manufacturer and model of previously implanted fusion systems up front can facilitate a faster and safer procedure, but this information is frequently unavailable or incomplete. The authors present one approach for the automated, high-accuracy classification of anterior cervical hardware fusion systems using computer vision. METHODS Patient records were searched for those who underwent anteroposterior (AP) cervical radiography following anterior cervical discectomy and fusion (ACDF) at the authors' institution over a 10-year period (2008–2018). These images were cropped and windowed to include just the cervical plating system, then labeled with the appropriate manufacturer and system according to the operative record. A computer vision classifier was constructed using the bag-of-visual-words technique and KAZE feature detection. Accuracy and validity were tested using an 80%/20% training/testing pseudorandom split over 100 iterations. RESULTS A total of 321 images were isolated, containing 9 different ACDF systems from 5 different companies. The correct system was identified as the top choice in 91.5% ± 3.8% of the cases and as one of the top 2 or top 3 choices in 97.1% ± 2.0% and 98.4% ± 1.3% of the cases, respectively. Performance persisted despite the inclusion of variable sizes of hardware (i.e., 1-level, 2-level, and 3-level plates), and stratification by hardware size did not improve performance. CONCLUSIONS A computer vision algorithm was trained to classify at least 9 different types of anterior cervical fusion systems using relatively sparse data sets and was demonstrated to perform with high accuracy. This represents one of many potential clinical applications of machine learning and computer vision in neurosurgical practice.
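The bag-of-visual-words encoding used by this classifier can be sketched in a few lines: each local descriptor (in the paper, a KAZE feature; extracting those requires an image library such as OpenCV) is assigned to its nearest codebook word, and the image is summarized as a normalized word histogram that any standard classifier can consume. The codebook and descriptor values below are toy assumptions.

```python
# Sketch of the bag-of-visual-words encoding step with toy 2-D descriptors.
import math

def nearest_word(descriptor, codebook):
    """Index of the closest visual word (Euclidean distance)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(range(len(codebook)), key=lambda i: dist(descriptor, codebook[i]))

def bovw_histogram(descriptors, codebook):
    """Normalized histogram of visual-word occurrences for one image."""
    counts = [0] * len(codebook)
    for d in descriptors:
        counts[nearest_word(d, codebook)] += 1
    total = sum(counts) or 1
    return [c / total for c in counts]

codebook = [(0.0, 0.0), (1.0, 1.0), (0.0, 1.0)]          # 3 "visual words"
descriptors = [(0.1, 0.0), (0.9, 1.1), (0.2, 0.1), (0.0, 0.8)]
hist = bovw_histogram(descriptors, codebook)              # one image's features
```

In a full pipeline the codebook itself is learned by clustering descriptors from the training images, and the histograms feed a conventional classifier.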


2021 ◽  
Vol 13 (3) ◽  
pp. 1522 ◽ 
Author(s):  
Raja Majid Ali Ujjan ◽  
Zeeshan Pervez ◽  
Keshav Dahal ◽  
Wajahat Ali Khan ◽  
Asad Masood Khattak ◽  
...  

In modern network infrastructure, Distributed Denial of Service (DDoS) attacks are considered severe network security threats. For conventional network security tools, it is extremely difficult to distinguish between the high traffic volume of a DDoS attack and a large number of legitimate users accessing a targeted network service or resource. Although these attacks have been widely studied, few works collect and analyse truly representative characteristics of DDoS traffic. Current research mostly focuses on DDoS detection and mitigation with predefined DDoS data sets, which are often hard to generalise to various network services and legitimate users' traffic patterns. In order to deal with considerably large DDoS traffic flows in Software Defined Networking (SDN), in this work we propose a fast and effective entropy-based DDoS detection scheme. We deploy a generalised entropy calculation, combining Shannon and Rényi entropy, to identify the distributed features of DDoS traffic; it also helps the SDN controller deal effectively with heavy malicious traffic. To reduce network traffic overhead, we collect data-plane traffic with signature-based Snort detection. We then analyse the collected traffic for entropy-based features to improve the detection accuracy of two deep learning models: a Stacked Auto-Encoder (SAE) and a Convolutional Neural Network (CNN). This work also investigates the trade-off between the SAE and CNN classifiers in terms of accuracy and false positives. Quantitative results demonstrate that the SAE achieved a relatively higher detection accuracy of 94% with only 6% false-positive alerts, whereas the CNN classifier achieved an average accuracy of 93%.
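The entropy intuition behind this detector is easy to demonstrate: legitimate traffic spreads over many destinations (high entropy), while a DDoS flood concentrates on one victim (entropy collapses). The sketch below computes Shannon entropy and Rényi entropy of order α over per-window destination-IP frequencies; the window contents and any detection threshold are assumptions, not the paper's exact parameters.

```python
# Sketch: Shannon and Renyi entropy over destination-IP frequencies.
import math
from collections import Counter

def shannon_entropy(items):
    n = len(items)
    return -sum((c / n) * math.log2(c / n) for c in Counter(items).values())

def renyi_entropy(items, alpha=2.0):
    """Renyi entropy of order alpha (alpha != 1); alpha -> 1 recovers Shannon."""
    n = len(items)
    s = sum((c / n) ** alpha for c in Counter(items).values())
    return math.log2(s) / (1.0 - alpha)

# Legitimate traffic: many distinct destinations -> high entropy.
normal = ["10.0.0.%d" % (i % 8) for i in range(64)]
# Attack traffic: one victim dominates -> entropy collapses.
attack = ["10.0.0.1"] * 60 + ["10.0.0.2"] * 4
```

A controller can then raise an alert when the windowed entropy drops below a calibrated baseline, which is far cheaper than deep inspection of every flow.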


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Jiawei Lian ◽  
Junhong He ◽  
Yun Niu ◽  
Tianze Wang

Purpose Current popular image processing technologies based on convolutional neural networks involve heavy computation, high storage cost and low accuracy for tiny defect detection, which conflicts with the high real-time performance, high accuracy, and limited computing and storage resources required by industrial applications. Therefore, an improved YOLOv4, named YOLOv4-Defect, is proposed to solve these problems. Design/methodology/approach On the one hand, this study performs multi-dimensional compression on the feature extraction network of YOLOv4 to simplify the model, and improves the model's feature extraction ability through knowledge distillation. On the other hand, a prediction scale with a more detailed receptive field is added to optimize the model structure, which improves the detection performance for tiny defects. Findings The effectiveness of the method is verified on the public data sets NEU-CLS and DAGM 2007, and on a steel ingot data set collected in an actual industrial setting. The experimental results demonstrate that the proposed YOLOv4-Defect method greatly improves recognition efficiency and accuracy while reducing the size and computational cost of the model. Originality/value This paper proposes an improved YOLOv4, named YOLOv4-Defect, for surface defect detection, which is conducive to application in industrial scenarios with limited storage and computing resources, and meets the requirements of high real-time performance and precision.
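The knowledge-distillation step mentioned above can be sketched with its core loss: the compressed "student" network is trained to match the temperature-softened output distribution of the full "teacher". The temperature value and the toy logits below are illustrative assumptions; the paper's exact distillation setup (e.g. feature-level distillation) is not specified in this abstract.

```python
# Sketch: softened-softmax KL loss at the heart of knowledge distillation.
import math

def softmax(logits, temperature=1.0):
    exps = [math.exp(z / temperature) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """KL divergence between softened teacher and student distributions."""
    p = softmax(teacher_logits, temperature)   # teacher's soft targets
    q = softmax(student_logits, temperature)   # student's predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher  = [4.0, 1.0, 0.2]
matched  = [4.0, 1.0, 0.2]   # a student that perfectly mimics the teacher
diverged = [0.2, 1.0, 4.0]   # a student that disagrees with the teacher
```

Minimizing this loss (usually mixed with the ordinary detection loss) transfers the teacher's "dark knowledge" about inter-class similarity into the smaller model.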


MycoKeys ◽  
2018 ◽  
Vol 39 ◽  
pp. 29-40 ◽  
Author(s):  
Sten Anslan ◽  
R. Henrik Nilsson ◽  
Christian Wurzbacher ◽  
Petr Baldrian ◽  
Leho Tedersoo ◽  
...  

Along with recent developments in high-throughput sequencing (HTS) technologies, and the resulting fast accumulation of HTS data, there has been a growing need for tools for HTS data processing and communication. In particular, a number of bioinformatics tools have been designed for analysing metabarcoding data, each with specific features, assumptions and outputs. To evaluate the potential effect of the choice of bioinformatics workflow on the results, we compared the performance of different analysis platforms on two contrasting high-throughput sequencing data sets. Our analysis revealed that the computation time, the quality of error filtering and hence the output of a specific bioinformatics process largely depend on the platform used. Our results show that none of the bioinformatics workflows appears to perfectly filter out the accumulated errors when generating Operational Taxonomic Units (OTUs), although PipeCraft, LotuS and PIPITS performed better than QIIME2 and Galaxy for the tested fungal amplicon data set. We conclude that the output of each platform requires manual validation of the OTUs by examining the taxonomy assignment values.


2007 ◽  
Vol 46 (03) ◽  
pp. 324-331 ◽  
Author(s):  
P. Jäger ◽  
S. Vogel ◽  
A. Knepper ◽  
T. Kraus ◽  
T. Aach ◽  
...  

Summary Objectives: Pleural thickenings, as a biomarker of exposure to asbestos, may evolve into malignant pleural mesothelioma. For its early stage, pleurectomy with perioperative treatment can reduce morbidity and mortality. The diagnosis is based on a visual investigation of CT images, which is a time-consuming and subjective procedure. Our aim is to develop an automatic image processing approach to detect and quantitatively assess pleural thickenings. Methods: We first segment the lung areas and identify the pleural contours. A convexity model is then used together with a Hounsfield unit threshold to detect pleural thickenings. The assessment of the detected thickenings is based on a spline-based model of the healthy pleura. Results: Tests were carried out on 14 data sets from three patients. In all cases, pleural contours were reliably identified and pleural thickenings detected. PC-based computation times were 85 min for a data set of 716 slices, 35 min for 401 slices, and 4 min for 75 slices, resulting in an average computation time of about 5.2 s per slice. Visualizations of the pleurae and the detected thickenings were provided. Conclusion: Results obtained so far indicate that our approach is able to assist physicians in the tedious task of finding and quantifying pleural thickenings in CT data. In the next step, our system will undergo an evaluation in a clinical setting using routine CT data to quantify its performance.
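The two detection cues named in the Methods, a Hounsfield-unit threshold and a convexity model, can be illustrated with a toy sketch: voxels are first gated to a soft-tissue HU window, and contour points where the pleural boundary turns inward (negative cross product of successive segments) mark the corners of a candidate thickening. The HU window and the 2-D contour below are illustrative assumptions, not the paper's calibrated values.

```python
# Sketch: HU soft-tissue gate plus a simple discrete convexity test.

SOFT_TISSUE_HU = (-50, 100)   # assumed HU window for pleural soft tissue

def is_soft_tissue(hu):
    return SOFT_TISSUE_HU[0] <= hu <= SOFT_TISSUE_HU[1]

def concave_points(contour):
    """Indices where the boundary turns inward (negative cross product of
    successive segments), i.e. corners of a candidate thickening."""
    hits = []
    for i in range(1, len(contour) - 1):
        (x0, y0), (x1, y1), (x2, y2) = contour[i - 1], contour[i], contour[i + 1]
        cross = (x1 - x0) * (y2 - y1) - (y1 - y0) * (x2 - x1)
        if cross < 0:
            hits.append(i)
    return hits

# A mostly straight pleural boundary with one inward notch at index 2;
# the convexity test flags the two corners (indices 1 and 3) of the notch:
contour = [(0, 0), (1, 0), (2, -1), (3, 0), (4, 0)]
```

The paper's spline model of the healthy pleura then quantifies how far each flagged region deviates from the expected smooth boundary.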


Author(s):  
Srinivas Bachu ◽  
N. Ramya Teja

Due to the advancement of multimedia and the requirement to communicate it over networks, video compression has received much attention among researchers. One popular video coding scheme is scalable video coding, an extension of the H.264/AVC standard. A major drawback of H.264 is that it performs an exhaustive search over the inter-layer prediction modes to obtain the best rate-distortion performance. To reduce the computational overhead of this exhaustive mode prediction process, this paper presents a new technique for inter prediction mode selection based on fuzzy holoentropy. The proposed scheme uses pixel values and the probabilistic distribution of pixel symbols to decide the mode. Adaptive mode selection is introduced by analyzing the pixel values of the current block to be coded against those of a motion-compensated reference block using fuzzy holoentropy. The adaptively selected mode decision reduces computation time without affecting the visual quality of the frames. The proposed scheme is evaluated on five videos, and the analysis shows that it achieves overall high performance, with values of 41.367 dB and 0.992 for PSNR and SSIM, respectively.
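The abstract does not define fuzzy holoentropy itself, so the sketch below uses plain Shannon entropy of the residual between the current block and the motion-compensated reference block as a simplified stand-in for the same early-decision idea: a near-zero-entropy residual means the blocks already match, letting the encoder skip the exhaustive mode search. The threshold and toy blocks are assumptions.

```python
# Sketch: entropy-based early mode decision (simplified stand-in for the
# paper's fuzzy-holoentropy criterion).
import math
from collections import Counter

def residual_entropy(current, reference):
    """Shannon entropy of the pixel-wise residual histogram."""
    residual = [c - r for c, r in zip(current, reference)]
    n = len(residual)
    return -sum((k / n) * math.log2(k / n) for k in Counter(residual).values())

def select_mode(current, reference, threshold=1.0):
    """Cheap early decision: 'SKIP' when the blocks already match well."""
    return "SKIP" if residual_entropy(current, reference) < threshold else "SEARCH"

static_block = [10, 10, 12, 12, 10, 10, 12, 12]   # unchanged content
moving_block = [10, 55, 3, 90, 17, 60, 2, 88]     # genuinely new content
ref          = [10, 10, 12, 12, 10, 10, 12, 12]
```

Skipping the full search for low-entropy residual blocks is where the computation-time savings reported in the abstract come from.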


Author(s):  
Tu Renwei ◽  
Zhu Zhongjie ◽  
Bai Yongqiang ◽  
Gao Ming ◽  
Ge Zhifeng

Unmanned Aerial Vehicle (UAV) inspection has become one of the main methods for transmission line inspection, but it still has shortcomings such as slow detection speed, low efficiency, and poor performance in low-light environments. To address these issues, this paper proposes a deep learning detection model based on You Only Look Once (YOLO) v3. On the one hand, the neural network structure is simplified: the three feature maps of YOLO v3 are pruned to two to meet the specific detection requirements. Meanwhile, the K-means++ clustering method is used to calculate the anchor values for the data set to improve detection accuracy. On the other hand, 1000 sets of power tower and insulator images are collected, which are inverted and scaled to expand the data set, and fully augmented with different illumination conditions and viewing angles. The experimental results show that the model based on the improved YOLO v3 improves detection accuracy by 6.0%, reduces FLOPs by 8.4%, and increases detection speed by about 6.0%.
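The anchor-computation step can be sketched as clustering bounding-box (width, height) pairs. YOLO anchor tools typically use a 1-IoU distance and the randomized D²-weighted seeding of k-means++; for a reproducible sketch, the code below uses plain Euclidean distance and a deterministic farthest-point variant of the seeding, followed by standard Lloyd iterations. The toy boxes are assumptions.

```python
# Sketch: anchor boxes via farthest-point-seeded k-means over (w, h) pairs.

def dist2(a, b):
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def kmeans_anchors(boxes, k, iters=20):
    centers = [boxes[0]]
    while len(centers) < k:   # deterministic farthest-point seeding
        centers.append(max(boxes, key=lambda b: min(dist2(b, c) for c in centers)))
    for _ in range(iters):    # Lloyd refinement
        groups = [[] for _ in range(k)]
        for b in boxes:
            groups[min(range(k), key=lambda i: dist2(b, centers[i]))].append(b)
        centers = [
            (sum(b[0] for b in g) / len(g), sum(b[1] for b in g) / len(g))
            if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return sorted(centers)

# Toy boxes from two obvious size clusters (small insulators, tall towers):
boxes = [(10, 12), (11, 13), (9, 11), (60, 118), (62, 121), (58, 124)]
anchors = kmeans_anchors(boxes, k=2)
```

The resulting cluster centers become the prior box sizes, so the network starts its regression from shapes that actually occur in the inspection imagery.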


Author(s):  
M. McDermott ◽  
S. K. Prasad ◽  
S. Shekhar ◽  
X. Zhou

Discovery of interesting paths and regions in spatio-temporal data sets is important to many fields, such as the earth and atmospheric sciences, GIS, public safety and public health, both as a goal in itself and as a preliminary step in a larger series of computations. This discovery is usually an exhaustive procedure that quickly becomes extremely time-consuming under traditional paradigms and hardware, and given the rapidly growing sizes of today's data sets, it is quickly outpacing the growth of computational capacity. In our previous work (Prasad et al., 2013a) we achieved a 50-fold speedup over a sequential implementation using a single GPU. Here, we achieved near-linear speedup over that result on interesting-path discovery by using Apache Hadoop to distribute the workload across multiple GPU nodes. Leveraging the parallel architecture of GPUs, we drastically reduced the computation time of a 3-dimensional spatio-temporal interest region search on a single tile of normalized difference vegetation index (NDVI) data for Saudi Arabia, and saw an almost linear speedup in compute performance by distributing this workload across several GPUs with a simple MapReduce model. This increased processing speed 10-fold over the comparable sequential implementation while simultaneously increasing the amount of data processed by a factor of 384, allowing us to process the entirety of the selected data set instead of a constrained window.
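The MapReduce pattern described above can be reduced to its essentials: each tile of a gridded data set is mapped to its locally most "interesting" cell (the per-GPU-node work in the paper), and a reduce step merges the per-tile winners into the global result. The interest score here is simply the cell value, standing in for an NDVI statistic; the tiny grids are toy assumptions.

```python
# Sketch: map each tile to its local best cell, reduce to the global best.
from functools import reduce

def map_tile(tile):
    """Map step: find the max-score cell within one tile."""
    (r, c), grid = tile
    return max(
        (grid[i][j], (r + i, c + j))
        for i in range(len(grid)) for j in range(len(grid[0]))
    )

def reduce_results(a, b):
    """Reduce step: keep the globally best (score, cell) pair."""
    return a if a[0] >= b[0] else b

tiles = [
    ((0, 0), [[0.1, 0.3], [0.2, 0.4]]),
    ((0, 2), [[0.9, 0.5], [0.6, 0.7]]),   # the global maximum lives here
    ((2, 0), [[0.8, 0.2], [0.1, 0.3]]),
]
score, cell = reduce(reduce_results, map(map_tile, tiles))
```

In the distributed system, `map_tile` runs on a GPU per Hadoop worker and only the tiny (score, cell) summaries cross the network, which is why the workload scales almost linearly with node count.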


Author(s):  
C. J. Rolls ◽  
W. ElMaraghy ◽  
H. ElMaraghy

Abstract Reverse engineering (RE) may be defined as the process of generating computer-aided design (CAD) models from existing or prototype parts. The process has been used in industry for many years, and its implementation has increased markedly in the past few years, primarily due to the introduction of rapid part digitization technologies. Current industrial applications include CAD model construction from artisan geometry, such as in automotive body styling, the generation of custom fits to human surfaces, and quality control. This paper summarizes the principles of operation behind many commercially available part digitization technologies, and discusses techniques involved in part digitization using a coordinate measuring machine (CMM) and a laser scanner. An overall error characterization of the laser scanning digitization process is presented for a particular scanner. This is followed by a discussion of the merits and considerations involved in generating combined data sets with characteristics indicative of the design intent of specific part features. Issues in facilitating the assembly, or registration, of the different types of data into a single point set are discussed.
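The registration problem mentioned at the end can be illustrated with its simplest step: translating one digitized point set so that its centroid coincides with the other's. A full registration would also solve for rotation (e.g. with an ICP-style algorithm); this translation-only sketch and its toy CMM/scanner points are illustrative assumptions.

```python
# Sketch: centroid-alignment translation, the simplest registration step
# for merging CMM and laser-scanner point sets.

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def align_by_centroid(source, target):
    """Translate `source` so its centroid matches `target`'s centroid."""
    cs, ct = centroid(source), centroid(target)
    shift = tuple(t - s for s, t in zip(cs, ct))
    return [tuple(p[i] + shift[i] for i in range(len(p))) for p in source]

cmm_points     = [(0.0, 0.0), (2.0, 0.0), (1.0, 2.0)]   # reference set
scanner_points = [(5.0, 5.0), (7.0, 5.0), (6.0, 7.0)]   # same part, offset
merged = cmm_points + align_by_centroid(scanner_points, cmm_points)
```

After such alignment the combined point set can be fed to surface-fitting tools, with each source contributing the regions it captured best (e.g. CMM accuracy on datums, scanner density on freeform surfaces).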


Author(s):  
Lifang Zhou ◽  
Guang Deng ◽  
Weisheng Li ◽  
Jianxun Mi ◽  
Bangjun Lei

Current state-of-the-art detectors achieve impressive detection accuracy through the use of deep learning. However, most such detectors cannot detect objects in real time due to their heavy computational cost, which limits their wide application. Although some one-stage detectors are designed to accelerate detection, their speed is still unsatisfactory for tasks on high-resolution remote sensing images. To address this problem, a lightweight one-stage approach based on YOLOv3, named Squeeze-and-Excitation YOLOv3 (SE-YOLOv3), is proposed in this paper. The proposed algorithm maintains high efficiency and effectiveness simultaneously. To reduce the number of parameters and increase the descriptive power of the features, two customized modules, lightweight feature extraction and attention-aware feature augmentation, are embedded, which exploit global information and suppress redundant features, respectively. To achieve scale invariance, a spatial pyramid pooling method is used to aggregate local features. Evaluation experiments on two remote sensing image data sets, DOTA and NWPU VHR-10, show that the proposed approach achieves more competitive detection performance with less computational consumption.
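The squeeze-and-excitation idea behind the attention-aware module can be sketched without a deep learning framework: globally average-pool each channel ("squeeze"), pass the descriptors through a tiny fully connected bottleneck ending in a sigmoid gate ("excitation"), then rescale the channels so informative ones are emphasized and redundant ones suppressed. The two-layer gate and hand-picked weights below are assumptions; the real module learns them.

```python
# Sketch: a squeeze-and-excitation block on toy feature maps.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def se_block(feature_maps, w1, w2):
    """feature_maps: list of channels, each a 2-D list of activations."""
    # Squeeze: one descriptor per channel via global average pooling.
    squeezed = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
                for ch in feature_maps]
    # Excitation: small FC bottleneck (ReLU) ending in per-channel sigmoid gates.
    hidden = [max(0.0, sum(s * w for s, w in zip(squeezed, col))) for col in w1]
    gates = [sigmoid(sum(h * w for h, w in zip(hidden, col))) for col in w2]
    # Re-weight: scale every activation in a channel by that channel's gate.
    return [[[v * g for v in row] for row in ch]
            for ch, g in zip(feature_maps, gates)]

fmap = [[[1.0, 1.0], [1.0, 1.0]],    # an informative channel
        [[0.0, 0.0], [0.0, 0.0]]]    # a silent channel
w1 = [[1.0, -1.0]]                   # squeeze dim 2 -> hidden dim 1
w2 = [[2.0], [-2.0]]                 # hidden dim 1 -> one gate per channel
out = se_block(fmap, w1, w2)
```

Because the gating uses globally pooled statistics, the block injects exactly the kind of global information the abstract credits for suppressing redundant features, at negligible parameter cost.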

