Recognition and Classification of Concrete Cracks under Strong Interference Based on Convolutional Neural Network

2021 ◽  
Vol 38 (3) ◽  
pp. 1001-1007
Author(s):  
Ningyu Zhao ◽  
Yang Jiang ◽  
Yi Song

This paper proposes the UmNet model based on a convolutional neural network (CNN), aiming to improve the recognition and classification of concrete cracks against backgrounds complicated by construction seams and seepage traces. The model was derived from the well-known CNN AlexNet. Without changing the receptive field, large convolutional kernels were replaced with small ones to reduce the parameters, deepen the network, and increase the number of nonlinear transforms. Next, a convolutional block attention module (CBAM) was introduced to highlight the key information in images and focus on high-weight channels. Finally, a batch normalization (BN) layer and L2 regularization were added, and the number of nodes in the fully connected layer was reduced. A series of comparative experiments were carried out on three datasets, D, P, and W. The results show that the proposed UmNet surpassed AlexNet in recognition accuracy on D, P, and W by 3.74%, 3.17%, and 5.74%, respectively, and reduced the number of parameters by 75.04%. Therefore, our model is an effective means to recognize and classify concrete cracks under strong interference.
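The kernel-replacement step can be illustrated with back-of-the-envelope arithmetic (a plain-Python sketch, not the authors' code; the 64-channel width is an assumed example): stacking two 3×3 convolutions covers the same 5×5 receptive field as one 5×5 kernel while using fewer weights and adding an extra nonlinearity.

```python
def stacked_receptive_field(kernel_sizes):
    # Receptive field of stacked stride-1 convolutions: r = 1 + sum(k_i - 1)
    r = 1
    for k in kernel_sizes:
        r += k - 1
    return r

def conv_params(kernel, channels):
    # Weight count for one conv layer mapping `channels` -> `channels` maps (no bias)
    return kernel * kernel * channels * channels

channels = 64  # assumed width, for illustration only
assert stacked_receptive_field([5]) == stacked_receptive_field([3, 3]) == 5
large = conv_params(5, channels)       # 102400 weights for one 5x5 layer
small = 2 * conv_params(3, channels)   # 73728 weights for two stacked 3x3 layers
# the stacked pair saves ~28% of the weights at the same receptive field
```

The same receptive-field identity underlies the VGG-style decomposition the abstract describes.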

In medical science, the brain tumor is among the most common and aggressive diseases, with risk factors that have been confirmed by research. A brain tumor is the anomalous development of cells inside the brain. One conventional strategy to identify brain tumors is to review MRI images of the patient's brain. In this paper, we designed a Convolutional Neural Network (CNN) to determine whether an image contains a tumor or not. We designed five different CNNs and examined each design on the basis of its convolution, max-pooling, and flattening layers and its activation functions. In each design we varied the layers, i.e., using different pooling layers in designs 2 and 4, using different activation functions in designs 2 and 3, and adding more fully connected layers in design 5. We examined the results, compared them across designs, and selected the best of the five based on accuracy. Using our convolutional neural network, design 3 achieved, in the best case, a training accuracy of 99.99% and a validation accuracy of 92.34% at 100 epochs.
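The pooling variation among the designs can be sketched with a toy 2×2 pooling pass in plain Python (an illustrative sketch, not the authors' code; the feature-map values are made up) showing how max- and average-pooling summarize the same feature map differently:

```python
def pool2x2(fmap, mode="max"):
    # 2x2 non-overlapping pooling over a 2-D feature map (list of rows)
    op = max if mode == "max" else (lambda xs: sum(xs) / len(xs))
    out = []
    for r in range(0, len(fmap) - 1, 2):
        out.append([op([fmap[r][c], fmap[r][c + 1],
                        fmap[r + 1][c], fmap[r + 1][c + 1]])
                    for c in range(0, len(fmap[0]) - 1, 2)])
    return out

fmap = [[1, 3, 2, 4],
        [5, 6, 1, 0],
        [7, 2, 9, 8],
        [4, 4, 3, 1]]
pool2x2(fmap, "max")  # [[6, 4], [7, 9]]
pool2x2(fmap, "avg")  # [[3.75, 1.75], [4.25, 5.25]]
```

Max-pooling keeps the strongest response in each window; average-pooling smooths it, which is the trade-off the compared designs probe.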


Author(s):  
N. Devi

Abstract: This paper focuses on the task of recognizing handwritten Hindi characters using a Convolutional Neural Network (CNN)-based approach. The recognized characters can then be stored digitally in the computer or used for other purposes. The dataset used is obtained from the UC Irvine Machine Learning Repository and contains 92,000 images divided into a training set (80%) and a test set (20%). It contains different forms of handwritten Devanagari characters written by different individuals, which can be used to train and test handwritten text recognizers. The proposed model contains four CNN layers followed by three fully connected layers for recognition. Grayscale handwritten character images are used as input. Filters are applied to the images to extract different features at each layer; this is done by the convolution operation. The two other main operations involved are pooling and flattening. The output of the CNN layers is fed to the fully connected layers. Finally, the probability score of each character is determined, and the character with the highest probability score is shown as the output. A recognition accuracy of 98.94% is obtained. Similar models exist for this purpose, but the proposed model achieved better performance and accuracy than some of the earlier models. Keywords: Devanagari characters, Convolutional Neural Networks, Image Processing
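The final scoring step the abstract describes — one probability per character, with the arg-max shown as output — is the standard softmax over the last fully connected layer's outputs; a minimal sketch (the logits are hypothetical, not values from the paper):

```python
import math

def softmax(logits):
    # Subtract the max before exponentiating for numerical stability
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

# hypothetical logits for three Devanagari character classes
logits = [2.0, 0.5, -1.0]
probs = softmax(logits)
predicted = max(range(len(probs)), key=probs.__getitem__)  # class index 0
```

The probabilities sum to one, so the highest logit always yields the highest score.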


Author(s):  
Salsa Bila ◽  
Anwar Fitrianto ◽  
Bagus Sartono

Beef is a food ingredient that has a high selling value. Such high prices lead some people to manipulate sales in markets or other shopping venues, such as by mixing beef and pork. The difference between pork and beef lies in the color and texture of the meat, but many people do not yet understand these differences. Besides raising awareness of the differences between the two types of meat, another solution is to create a technology that can recognize and differentiate pork and beef. That is what underlies this research to build a system that can classify the two types of meat. Convolutional Neural Network (CNN) is one of the Deep Learning methods and a development of Artificial Intelligence that can be applied to classify images. Several regularization techniques, including Dropout, L2, and Max-Norm, were applied to the model and compared to obtain the best classification results and to predict new data accurately. The highest accuracy of 97.56% was obtained from the CNN model by applying the Dropout technique with a rate of 0.7, supported by hyperparameters such as the Adam optimizer, 128 neurons in the fully connected layer, the ReLU activation function, and 3 fully connected layers. The selection of the model is also underpinned by its low error rate of only 0.111. Keywords: Beef and Pork, Model, Classification, CNN
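The winning Dropout rate of 0.7 can be illustrated with a minimal inverted-dropout sketch in plain Python (a toy under stated assumptions, not the authors' implementation):

```python
import random

def dropout(activations, rate=0.7, training=True, seed=0):
    """Inverted dropout: zero each unit with probability `rate` and scale
    survivors by 1/(1 - rate) so the expected activation is unchanged."""
    if not training:
        return list(activations)
    rng = random.Random(seed)
    keep = 1.0 - rate
    return [a / keep if rng.random() < keep else 0.0 for a in activations]

out = dropout([1.0] * 1000, rate=0.7)
kept = sum(1 for a in out if a != 0.0)   # roughly 300 of 1000 units survive
mean = sum(out) / len(out)               # close to 1.0 in expectation
```

At inference (`training=False`) the layer is an identity, which is why no extra rescaling is needed at test time.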


2020 ◽  
Vol 35 (33) ◽  
pp. 2043002 ◽  
Author(s):  
Fedor Sergeev ◽  
Elena Bratkovskaya ◽  
Ivan Kisel ◽  
Iouri Vassiliev

Classification of processes in heavy-ion collisions in the CBM experiment (FAIR/GSI, Darmstadt) using neural networks is investigated. Fully-connected neural networks and a deep convolutional neural network are built to identify quark–gluon plasma simulated within the Parton-Hadron-String Dynamics (PHSD) microscopic off-shell transport approach for central Au+Au collisions at a fixed energy. The convolutional neural network outperforms the fully-connected networks and reaches 93% accuracy on the validation set; only the remaining 7% of collisions are incorrectly classified.


2020 ◽  
Vol 224 (1) ◽  
pp. 191-198
Author(s):  
Xinliang Liu ◽  
Tao Ren ◽  
Hongfeng Chen ◽  
Yufeng Chen

SUMMARY In this paper, convolutional neural networks (CNNs) were used to distinguish between tectonic and non-tectonic seismicity. The proposed CNN consisted of seven convolutional layers with small kernels and one fully connected layer, relying only on the acoustic waveform without manually extracted features. For a single station, the accuracy of the model was 0.90, and the event-level accuracy reached 0.93. The proposed model was tested using data from January 2019 to August 2019 in China, where the event-level accuracy reached 0.92, showing that the proposed model can distinguish between tectonic and non-tectonic seismicity.
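Operating directly on the waveform means the first layer is a 1-D convolution; a toy 'valid' 1-D convolution in plain Python (a sketch, not the authors' network; the step signal and kernel are made up) shows how a small kernel responds to a feature in a waveform:

```python
def conv1d(signal, kernel):
    # 'valid' 1-D convolution (cross-correlation), as applied to raw waveforms
    n, k = len(signal), len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(n - k + 1)]

# a small difference kernel responding to the onset in a step-like waveform
wave = [0, 0, 0, 1, 1, 1]
edges = conv1d(wave, [-1, 0, 1])  # [0, 1, 1, 0] — peaks around the step
```

Stacking several such layers with small kernels, as the paper does, lets the network learn discriminative waveform features without manual engineering.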


2020 ◽  
Vol 10 (11) ◽  
pp. 3956 ◽  
Author(s):  
Fan Li ◽  
Hong Tang ◽  
Shang Shang ◽  
Klaus Mathiak ◽  
Fengyu Cong

Heart sounds play an important role in the diagnosis of cardiac conditions. Due to the low signal-to-noise ratio (SNR), it is problematic and time-consuming for experts to discriminate different kinds of heart sounds. Thus, objective classification of heart sounds is essential. In this study, we combined a conventional feature engineering method with deep learning algorithms to automatically classify normal and abnormal heart sounds. First, 497 features were extracted from eight domains. Then, we fed these features into the designed convolutional neural network (CNN), in which the fully connected layers that are usually used before the classification layer were replaced with a global average pooling layer to obtain global information about the feature maps and avoid overfitting. Considering the class imbalance, class weights were set in the loss function during the training process to improve the classification algorithm’s performance. Stratified five-fold cross-validation was used to evaluate the performance of the proposed method. The mean accuracy, sensitivity, specificity, and Matthews correlation coefficient observed on the PhysioNet/CinC Challenge 2016 dataset were 86.8%, 87%, 86.6%, and 72.1%, respectively. The proposed algorithm’s performance achieves an appropriate trade-off between sensitivity and specificity.
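The two design choices the abstract highlights — global average pooling in place of fully connected layers, and class weights in the loss — can be sketched in plain Python (illustrative only; the feature maps, probabilities, and weights below are made up):

```python
import math

def global_average_pool(feature_maps):
    # Collapse each 2-D feature map to a single scalar (its mean)
    return [sum(sum(row) for row in fmap) / (len(fmap) * len(fmap[0]))
            for fmap in feature_maps]

def weighted_bce(y_true, y_prob, class_weights):
    # Binary cross-entropy with per-class weights to counter class imbalance
    total = 0.0
    for t, p in zip(y_true, y_prob):
        total += -class_weights[t] * (t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(y_true)

pooled = global_average_pool([[[1, 2], [3, 4]], [[0, 0], [0, 8]]])  # [2.5, 2.0]
# upweight the minority "abnormal" class (label 1) by 2x
loss = weighted_bce([1, 0], [0.9, 0.2], {0: 1.0, 1: 2.0})
```

Replacing dense layers with a single scalar per map drastically cuts parameters, which is the overfitting argument the study makes.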


2020 ◽  
Author(s):  
Pu-Yun Kow ◽  
Li-Chiu Chang ◽  
Fi-John Chang

As living standards have improved, people have become increasingly concerned about air pollution. Taiwan faces the same problem, especially in the southern region, so it is crucial to rapidly provide reliable air quality information. This study intends to classify air quality images in areas of interest into categories such as "high pollution", "moderate pollution", or "low pollution", and further aims at a finer classification of air quality, i.e., 5-6 categories. To achieve this goal, we propose a hybrid model (CNN-FC) that integrates a convolutional neural network (CNN) and a fully-connected neural network for classifying the concentrations of PM2.5 and PM10 as well as the air quality index (AQI). Although implemented in many fields, regression classification has rarely been applied to air pollution problems. Image regression classification is useful to air pollution research, especially when some of the (more sophisticated) air quality detectors are malfunctioning. The hourly air quality datasets collected at Station Linyuan of Kaohsiung City in southern Taiwan form the case study for evaluating the applicability and reliability of the proposed CNN-FC approach. A total of 3549 datasets containing images (photos) and monitored data of PM2.5, PM10, and AQI are used to train and validate the constructed model. The proposed CNN-FC approach performs image regression classification by extracting important characteristics from images. The results demonstrate that the proposed CNN-FC model provides a practical and reliable approach to accurate image regression classification. The main breakthrough of this study is the image classification of several pollutants using only a single shallow CNN-FC model. Keywords: PM2.5 forecast; image classification; deep learning; convolutional neural network; fully-connected neural network; Taiwan
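The pollution-level categories the study mentions map naturally onto AQI bands; a plain-Python binning sketch (the six bands follow the common 0-500 AQI scale and the labels are assumptions here, not taken from the paper):

```python
def aqi_category(aqi):
    # Hypothetical six-level binning over the usual 0-500 AQI scale
    bands = [(50, "low pollution"),
             (100, "moderate pollution"),
             (150, "unhealthy for sensitive groups"),
             (200, "high pollution"),
             (300, "very high pollution")]
    for upper, label in bands:
        if aqi <= upper:
            return label
    return "hazardous"

aqi_category(42)   # "low pollution"
aqi_category(180)  # "high pollution"
```

A classifier trained on images then only has to predict one of these discrete bands rather than an exact concentration.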


BioResources ◽  
2021 ◽  
Vol 16 (3) ◽  
pp. 4986-4999
Author(s):  
Ziyu Zhao ◽  
Xiaoxia Yang ◽  
Zhedong Ge ◽  
Hui Guo ◽  
Yucheng Zhou

To prevent the illegal trade of precious wood in circulation, a wood species identification method based on a convolutional neural network (CNN), namely the PWoodIDNet (Precise Wood Specifications Identification) model, is proposed. In this paper, the PWoodIDNet model for the identification of rare tree species is constructed to reduce network parameters by decomposing convolutional kernels, prevent overfitting, enrich the diversity of features, and improve the performance of the model. The results showed that PWoodIDNet effectively improves the generalization ability, the characterization of detailed features, and the recognition accuracy of wood identification. PWoodIDNet was used to identify microscopic images of 16 kinds of wood, reaching an identification accuracy of 99%, higher than that of several existing classical convolutional neural network models. In addition, the analysis verified the feasibility and effectiveness of PWoodIDNet as a wood identification method, which can provide a new direction and technical solution for the field of wood identification.


2020 ◽  
Vol 2020 (4) ◽  
pp. 4-14
Author(s):  
Vladimir Budak ◽  
Ekaterina Ilyina

The article proposes a classification of lenses with different symmetrical beam angles and offers a scale as a spotlight palette. A collection of spotlight images was created and classified according to the proposed scale. An analysis of 788 existing lenses and reflectors with different LEDs and COBs was carried out, and the dependence of the axial light intensity on the beam angle was obtained. Transfer training of a new deep convolutional neural network (CNN) based on the pre-trained GoogleNet was performed using this collection. Grad-CAM analysis showed that the trained network correctly identifies the features of objects. This work allows us to classify arbitrary spotlights with an accuracy of about 80%. Thus, a lighting designer can determine the class of a spotlight and the corresponding type of lens with its technical parameters using this new CNN-based model.


Author(s):  
P.L. Nikolaev

This article deals with a method of binary classification of images with small text on them. Classification is based on the fact that the text can have two orientations: it can be positioned horizontally and read from left to right, or it can be turned 180 degrees so that the image must be rotated to read the sign. This type of text can be found on the covers of a variety of books, so when recognizing covers, it is necessary first to determine the orientation of the text before directly recognizing it. The article suggests the development of a deep neural network for determining the text position in the context of book cover recognition. The results of training and testing a convolutional neural network on synthetic data, as well as examples of the network functioning on real data, are presented.
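The two possible orientations differ by a 180-degree rotation, which for a pixel grid is just reversing the row order and each row; a minimal sketch (not from the article):

```python
def rotate180(image):
    # Reverse row order, then reverse each row: a 180-degree turn of the grid
    return [row[::-1] for row in image[::-1]]

img = [[1, 2, 3],
       [4, 5, 6]]
rotate180(img)  # [[6, 5, 4], [3, 2, 1]]
```

A classifier that flags the upside-down class lets the pipeline apply `rotate180` once before passing the cover to text recognition.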

