Small Sample Underwater Target Recognition Based on Mobilenet_YOLOV4 Algorithm

CONVERTER ◽  
2021 ◽  
pp. 359-372
Author(s):  
Jun Zhang, Xiaohong Peng, Zixiang Liang, Rongfa Chen, Zhao Li

Objectives: Underwater target recognition relies on simulation robots or manual acquisition of seabed image data, so sampling is costly, the sample data obtained are limited, image quality is poor, and little data is available for training. Methods: To address this problem, this paper improves the YOLOv4 algorithm by modifying its feature-extraction backbone network, and proposes three YOLOv4 variants based on different MobileNet backbone networks to test underwater target recognition in the small-sample case. Real seabed images are used as the original training data, and data distinct from the training set are used for prediction. Results: Compared with the original YOLOv4 algorithm under the same conditions, the MobilenetV1_YOLOV4 algorithm achieves the best mAP (86.04%) and FPS (52); in addition, histogram equalization is used to enhance the images, which serves as a further supplementary recognition step for missed targets and reduces the miss rate. Conclusions: The algorithm balances lightweight design and accuracy, and provides support for underwater target recognition in marine operation development and aquaculture.
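The histogram-equalization step mentioned above can be sketched in a few lines. This is an illustrative, pure-Python version for a flat list of 8-bit grayscale pixels; the function name and the toy "underwater" patch are ours, not the paper's.

```python
# Minimal sketch of histogram equalization for an 8-bit grayscale image
# stored as a flat list of pixel values. Low-contrast inputs are remapped
# through the normalized cumulative histogram to spread intensities out.

def equalize_histogram(pixels, levels=256):
    """Map each pixel through the normalized cumulative histogram."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # cumulative distribution function
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    # classic formula: round((cdf - cdf_min) / (n - cdf_min) * (levels - 1))
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
            for p in pixels]

# A dim, low-contrast patch: values crowded into [50, 55]
patch = [50, 50, 51, 52, 53, 53, 54, 55]
print(equalize_histogram(patch))  # spans the full 0..255 range
```

In the paper's pipeline the equalized image is fed through the detector a second time, so targets washed out by poor underwater lighting get another chance to be found.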

2021 ◽  
Vol 2068 (1) ◽  
pp. 012025
Author(s):  
Jian Zheng ◽  
Zhaoni Li ◽  
Jiang Li ◽  
Hongling Liu

Abstract It is difficult to detect anomalies in big data using traditional methods because big data is massive and disordered. Common methods divide big data into several small samples and then analyze these divided small samples. However, this increases the complexity of the segmentation algorithms, and it is difficult to control the risk of data segmentation. To address this, we propose a neural network approach based on the Vapnik risk model. Firstly, the sample data is randomly divided into small data blocks. Then, a neural network learns these divided small sample data blocks. To reduce the risks in the data-segmentation process, the Vapnik risk model is used to supervise the data segmentation. Finally, the proposed method is verified on historical electricity price data from Mountain View, California. The results show that our method is effective.


Sensors ◽  
2020 ◽  
Vol 20 (6) ◽  
pp. 1724
Author(s):  
Zilu Ying ◽  
Chen Xuan ◽  
Yikui Zhai ◽  
Bing Sun ◽  
Jingwen Li ◽  
...  

Since Synthetic Aperture Radar (SAR) targets are full of coherent speckle noise, traditional deep learning models have difficulty effectively extracting key features of the targets and suffer from high computational complexity. To solve this problem, an effective lightweight Convolutional Neural Network (CNN) model incorporating transfer learning is proposed to better handle SAR target recognition tasks. In this work, firstly we propose the Atrous-Inception module, which combines atrous convolution and the inception module to obtain rich global receptive fields while strictly controlling the parameter amount and realizing a lightweight network architecture. Secondly, a transfer learning strategy is used to effectively transfer prior knowledge from the optical, non-optical, and hybrid optical/non-optical domains to SAR target recognition tasks, thereby improving the model's recognition performance on small-sample SAR target datasets. Finally, the model constructed in this paper achieves a recognition rate of 97.97% on the ten-class MSTAR dataset under standard operating conditions, reaching mainstream target-recognition performance. Meanwhile, the method presented in this paper shows strong robustness and generalization performance on small, randomly sampled SAR target datasets.
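The atrous (dilated) convolution at the heart of the Atrous-Inception module trades no extra weights for a wider receptive field. A 1-D illustration (names and data are ours): a kernel of size k with dilation rate d covers d*(k-1)+1 inputs while keeping only k weights.

```python
# Illustrative 1-D atrous (dilated) convolution, valid padding.

def atrous_conv1d(signal, kernel, dilation=1):
    k = len(kernel)
    span = dilation * (k - 1) + 1          # effective receptive field
    out = []
    for i in range(len(signal) - span + 1):
        out.append(sum(kernel[j] * signal[i + j * dilation] for j in range(k)))
    return out

x = [1, 2, 3, 4, 5, 6, 7, 8]
print(atrous_conv1d(x, [1, 0, -1], dilation=1))  # receptive field 3
print(atrous_conv1d(x, [1, 0, -1], dilation=2))  # receptive field 5, same 3 weights
```

Stacking branches with different dilation rates, inception-style, is how the module collects multi-scale context without inflating the parameter count.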


Symmetry ◽  
2021 ◽  
Vol 13 (9) ◽  
pp. 1673
Author(s):  
Aili Wang ◽  
Chengyang Liu ◽  
Dong Xue ◽  
Haibin Wu ◽  
Yuxiao Zhang ◽  
...  

Although hyperspectral data provide rich feature information and are widely used in many fields, labelled samples are still scarce. Training a classifier on small sample data remains a major challenge for deep-learning-based hyperspectral image (HSI) classification. Recently, mining the relationships between samples has proved effective for training on small samples. However, this strategy requires high computational power, which increases the difficulty of training the network model. This paper proposes a modified depthwise separable relational network to deeply capture the similarity between samples. In addition, in order to effectively mine the similarity between samples, the feature vectors of support samples and query samples are symmetrically spliced. Based on the metric distance between the symmetric structures, the model's dependence on samples can be effectively reduced. Firstly, to improve the training efficiency of the model, depthwise separable convolution is introduced to reduce its computational cost. Secondly, the Leaky-ReLU function effectively activates all neurons in each layer of the neural network, improving training efficiency. Finally, a cosine annealing learning-rate schedule is introduced to keep the model from falling into local optima and to enhance its robustness. Experimental results on two widely used hyperspectral remote sensing image data sets (Pavia University and Kennedy Space Center) show that, compared with seven other advanced classification methods, the proposed method achieves better classification accuracy under limited training samples.
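The cosine annealing schedule the paper adopts is a one-line formula. A sketch in the usual SGDR form, with illustrative hyperparameter values (the paper does not report its lr_max/lr_min):

```python
import math

# Cosine annealing: lr(t) = lr_min + 0.5*(lr_max - lr_min)*(1 + cos(pi*t/T)).
# The rate starts at lr_max, decays smoothly, and lands at lr_min at step T.

def cosine_annealing(t, T, lr_max=0.1, lr_min=0.001):
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t / T))

T = 100
schedule = [cosine_annealing(t, T) for t in range(T + 1)]
print(schedule[0], schedule[T // 2], schedule[T])
```

The slow initial decay and fast mid-phase drop are what help the optimizer escape shallow local optima before settling, which is the robustness benefit the abstract refers to.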


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Yin’e Zhang ◽  
Yong Ping Liu

The prevention and control of navel orange pests and diseases is an important measure to ensure the yield of navel oranges. Aiming at the slow speed, strong subjectivity, demanding professional-knowledge requirements, and high cost of existing identification methods for navel orange pests and diseases, this paper proposes an identification method based on the fusion of DenseNet and an attention mechanism (DCPSNET), which improves the traditional deep dense network DenseNet model to realize accurate and efficient identification of navel orange diseases and pests. Because data on navel orange pests and diseases are difficult to collect, this article uses image enhancement technology to expand the dataset. The experimental results show that, in the case of small samples, compared with the traditional model, the DCPSNET model can accurately identify images of different types of navel orange diseases and pests, and the accuracy of identifying six types of navel orange diseases and pests on the test set is as high as 96.90%. The method proposed in this paper has high recognition accuracy, realizes the intelligent recognition of navel orange diseases and pests, and also provides a way toward high-precision recognition on small sample data sets.
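The dataset-expansion step can be sketched with simple geometric transforms: each labelled leaf image yields several training samples. The transforms below are our illustrative choice (the paper does not list its exact augmentations), and images are modelled as tiny 2-D integer grids.

```python
# Expand a small labelled image set with flipped and rotated copies.

def flip_h(img):
    """Horizontal mirror."""
    return [row[::-1] for row in img]

def rotate90(img):
    """Rotate 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def augment(img):
    """Return the original plus three simple variants."""
    return [img, flip_h(img), rotate90(img), rotate90(rotate90(img))]

leaf = [[1, 2],
        [3, 4]]
for variant in augment(leaf):
    print(variant)
```

A 4x expansion like this is often enough to make a small-sample training run viable, though real pipelines usually add brightness and crop jitter as well.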


Author(s):  
Honghui Yang ◽  
Shuzhen Yi

To solve the high-dimensional, small-sample-size classification problem in underwater target recognition, a new feature fusion method is proposed based on multi-kernel sparsity-preserving multi-set canonical correlation analysis. The multi-set canonical correlation analysis algorithm is used to quantitatively analyze the correlation of multi-domain features and to remove redundant and noisy features, in order to achieve multi-domain feature fusion. The multi-kernel sparsity-preserving projection algorithm is used to constrain the sparse reconstruction of the extracted multi-domain feature samples, which enhances the classification ability of the features. Results of applying real radiated-noise datasets to underwater target recognition experiments show that the new method can effectively remove redundant and noisy features, achieve the fusion of multi-domain underwater target features, and improve the recognition accuracy of underwater targets.
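A full multi-set CCA is beyond a short sketch, but the redundancy-removal idea it relies on, measuring correlation between feature dimensions and dropping near-duplicates, can be shown in miniature. The threshold and toy feature vectors below are illustrative assumptions, not the paper's procedure.

```python
import math

def pearson(a, b):
    """Pearson correlation between two equal-length feature vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def drop_redundant(features, threshold=0.95):
    """Keep a feature only if no already-kept feature nearly duplicates it."""
    kept = []
    for i, f in enumerate(features):
        if all(abs(pearson(f, features[j])) < threshold for j in kept):
            kept.append(i)
    return kept

feats = [
    [1.0, 2.0, 3.0, 4.0],   # feature 0
    [2.0, 4.0, 6.1, 8.0],   # ~2x feature 0: redundant, dropped
    [4.0, 1.0, 3.0, 2.0],   # weakly correlated: kept
]
print(drop_redundant(feats))
```

CCA generalizes this pairwise view: it finds projections of whole feature *sets* that are maximally correlated, so shared information is fused once instead of carried redundantly.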


2001 ◽  
Vol 2 (1) ◽  
pp. 28-34 ◽  
Author(s):  
Edward R. Dougherty

In order to study the molecular biological differences between normal and diseased tissues, it is desirable to perform classification among diseases and stages of disease using microarray-based gene-expression values. Owing to the limited number of microarrays typically used in these studies, serious issues arise with respect to the design, performance and analysis of classifiers based on microarray data. This paper reviews some fundamental issues facing small-sample classification: classification rules, constrained classifiers, error estimation and feature selection. It discusses both unconstrained and constrained classifier design from sample data, and the contributions to classifier error from constrained optimization and lack of optimality owing to design from sample data. The difficulty with estimating classifier error when confined to small samples is addressed, particularly estimating the error from training data. The impact of small samples on the ability to include more than a few variables as classifier features is explained.
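One of the review's central points, estimating classifier error from the same small sample used for training, can be made concrete with leave-one-out cross-validation on a toy nearest-mean classifier. Everything here (the classifier, the 1-D data) is an illustration of the issue, not a method from the review.

```python
# Leave-one-out error estimate for a nearest-mean classifier on 1-D data.

def nearest_mean_predict(train, x):
    """train: list of (value, label); classify x by the closer class mean."""
    means = {}
    for label in {l for _, l in train}:
        vals = [v for v, l in train if l == label]
        means[label] = sum(vals) / len(vals)
    return min(means, key=lambda l: abs(x - means[l]))

def loo_error(samples):
    """Hold each sample out in turn, train on the rest, count mistakes."""
    errors = 0
    for i, (x, y) in enumerate(samples):
        rest = samples[:i] + samples[i + 1:]
        if nearest_mean_predict(rest, x) != y:
            errors += 1
    return errors / len(samples)

data = [(0.1, 'a'), (0.3, 'a'), (0.2, 'a'), (0.9, 'b'), (1.1, 'b'), (1.0, 'b')]
print(loo_error(data))
```

With n this small, the estimate can only take values in multiples of 1/n and has high variance across resamples, which is exactly the difficulty the review discusses for microarray studies.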


1994 ◽  
Vol 33 (02) ◽  
pp. 180-186 ◽  
Author(s):  
H. Brenner ◽  
O. Gefeller

Abstract: The traditional concept of describing the validity of a diagnostic test neglects the presence of chance agreement between test result and true (disease) status. Sensitivity and specificity, as the fundamental measures of validity, can thus only be considered in conjunction with each other to provide an appropriate basis for the evaluation of the capacity of the test to discriminate truly diseased from truly undiseased subjects. In this paper, chance-corrected analogues of sensitivity and specificity are presented as supplemental measures of validity, which pay attention to the problem of chance agreement and offer the opportunity to be interpreted separately. While recent proposals of chance-correction techniques, suggested by several authors in this context, lead to measures which are dependent on disease prevalence, our method does not share this major disadvantage. We discuss the extension of the conventional ROC-curve approach to chance-corrected measures of sensitivity and specificity. Furthermore, point and asymptotic interval estimates of the parameters of interest are derived under different sampling frameworks for validation studies. The small sample behavior of the estimates is investigated in a simulation study, leading to a logarithmic modification of the interval estimate in order to hold the nominal confidence level for small samples.
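For orientation, the quantities under discussion can be computed from a 2x2 validation table. The paper defines its own chance-corrected analogues of sensitivity and specificity; the sketch below instead shows Youden's index, a familiar prevalence-independent chance-corrected summary, purely as a point of comparison. The table counts are made up.

```python
# Sensitivity, specificity, and Youden's index from a 2x2 validation table.

def validity_measures(tp, fn, fp, tn):
    se = tp / (tp + fn)      # sensitivity: P(test+ | diseased)
    sp = tn / (tn + fp)      # specificity: P(test- | not diseased)
    youden = se + sp - 1     # 0 for a chance-level test, 1 for a perfect one
    return se, sp, youden

se, sp, j = validity_measures(tp=90, fn=10, fp=20, tn=80)
print(se, sp, j)
```

Note that a test answering at random (se = sp = 0.5) scores exactly 0, which is the sense in which such an index is "chance-corrected" without involving disease prevalence.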


SLEEP ◽  
2021 ◽  
Vol 44 (Supplement_2) ◽  
pp. A177-A177
Author(s):  
Jaejin An ◽  
Dennis Hwang ◽  
Jiaxiao Shi ◽  
Amy Sawyer ◽  
Aiyu Chen ◽  
...  

Abstract Introduction Trial-based tele-obstructive sleep apnea (OSA) cost-effectiveness analyses have often been inconclusive due to small sample sizes and short follow-up. In this study, we report the cost-effectiveness of Tele-OSA using a larger sample from a 3-month trial that was augmented with 2.75 additional years of epidemiologic follow-up. Methods The Tele-OSA study was a 3-month randomized trial conducted in Kaiser Permanente Southern California that demonstrated improved adherence in patients receiving automated feedback messaging regarding their positive airway pressure (PAP) use when compared to usual care. At the end of the 3 months, participants in the intervention group pseudo-randomly either stopped or continued receiving messaging. This analysis included those participants who had moderate-severe OSA (Apnea Hypopnea Index >=15) and compared the cost-effectiveness of 3 groups: 1) no messaging, 2) messaging for 3 months only, and 3) messaging for 3 years. Costs were derived by multiplying medical service use from electronic medical records by costs from Federal fee schedules. Effects were average nightly hours of PAP use. We report the incremental cost per incremental hour of PAP use as well as the fraction acceptable. Results We included 256 patients with moderate-severe OSA (Group 1, n=132; Group 2, n=79; Group 3, n=45). Group 2, which received the intervention for 3 months only, had the highest costs and fewest hours of use and was dominated by the other two groups. Average 1-year costs for groups 1 and 3 were $6035 (SE, $477) and $6154 (SE, $575), respectively; average nightly hours of PAP use were 3.07 (SE, 0.23) and 4.09 (SE, 0.42). Compared to no messaging, messaging for 3 years had an incremental cost ($119, p=0.86) per incremental hour of use (1.02, p=0.03) of $117. For a willingness-to-pay (WTP) of $500 per year ($1.37/night), 3-year messaging has a 70% chance of being acceptable.
Conclusion Long-term Tele-OSA messaging was more effective than no messaging for PAP use outcomes and was highly likely to be cost-effective at an acceptable willingness-to-pay threshold. Epidemiologic evidence suggests that this greater use will yield both clinical and additional economic benefits. Support (if any) The Tele-OSA study was supported by AASM Foundation SRA Grant #: 104-SR-13
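The headline incremental cost-effectiveness ratio can be reproduced directly from the abstract's own figures: a $119 incremental annual cost ($6154 vs $6035) divided by a 1.02-hour incremental nightly PAP use (4.09 vs 3.07) gives roughly $117 per incremental hour.

```python
# Incremental cost-effectiveness ratio (ICER): extra cost per extra unit of effect.

def icer(delta_cost, delta_effect):
    return delta_cost / delta_effect

# 3-year messaging (group 3) vs no messaging (group 1), figures from the abstract
print(round(icer(6154 - 6035, 4.09 - 3.07)))
```

Comparing this ratio against a willingness-to-pay threshold (here $500/year, i.e. $1.37/night) is what yields the reported 70% probability of acceptability.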


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1429
Author(s):  
Gang Hu ◽  
Kejun Wang ◽  
Liangliang Liu

Facing the complex marine environment, it is extremely challenging to conduct underwater acoustic target feature extraction and recognition using ship-radiated noise. In this paper, firstly, taking the one-dimensional time-domain raw signal of the ship as the input of the model, a new deep neural network model for underwater target recognition is proposed. Depthwise separable convolution and time-dilated convolution are used for passive underwater acoustic target recognition for the first time. The proposed model realizes automatic feature extraction from the raw ship-radiated noise data and applies temporal attention in the process of underwater target recognition. Secondly, measured data are used to evaluate the model, and cluster analysis and visualization analysis are performed on the features extracted from the model. The results show that the extracted features exhibit good intra-class aggregation and inter-class separation. Furthermore, cross-fold validation is used to verify that the model does not overfit, demonstrating its generalization ability. Finally, the model is compared with traditional underwater acoustic target recognition methods, and its accuracy is significantly improved, by 6.8%.
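A back-of-the-envelope count shows why the depthwise separable convolutions in this model are cheap. For c_in input channels, c_out output channels, and kernel size k (1-D case, biases ignored), a standard convolution needs c_in*c_out*k weights, while the separable version needs c_in*k (depthwise) plus c_in*c_out (pointwise). The channel and kernel sizes below are illustrative, not the paper's.

```python
# Parameter counts for standard vs depthwise separable 1-D convolution.

def standard_params(c_in, c_out, k):
    return c_in * c_out * k

def separable_params(c_in, c_out, k):
    return c_in * k + c_in * c_out   # depthwise + pointwise

c_in, c_out, k = 64, 128, 9
print(standard_params(c_in, c_out, k))   # 73728
print(separable_params(c_in, c_out, k))  # 8768
```

An eight-fold reduction at this layer size is typical, which is what makes raw-waveform input tractable for a deep model.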

