Reduction of Defect Misclassification of Electronic Board Using Multiple SVM Classifiers

2014 ◽  
Vol 2 (1) ◽  
pp. 25-36 ◽  
Author(s):  
Takuya Nakagawa ◽  
Yuji Iwahori ◽  
M. K. Bhuyan

This paper proposes a new method to improve classification accuracy through multi-class classification with multiple SVMs. The proposed approach classifies true and pseudo defects, adding features to reduce misclassification. The approach consists of two steps. First, straight lines are detected in the inspection image with the Hough transform, and their condition is judged from the gradient. More than 80% of AOI images contain a margin line between the base part and the lead part, with both running in the same direction. When the detected lines run in almost the same direction, a shifted copy of the inspection image is generated and used as the reference image. When the detected lines run in different directions (fewer than 20% of AOI images), the reference image is generated manually. Once the reference image is prepared, the difference between the inspection image and the reference image is taken. This extracts the defect candidate region with high accuracy, and features are extracted to distinguish defects from foreign material. Second, the selected features are learned with multiple SVMs and each sample is classified into a class. When multiple classes receive the same number of votes, the sample is treated as difficult to classify. Real experiments show that the proposed approach classifies efficiently, with higher accuracy than previous approaches.
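The tie-handling rule of the second step can be sketched in a few lines. This is a minimal stdlib-only illustration of vote counting with tie detection, not the paper's implementation; the function name and input format are ours, and the feature extraction and SVM training are not reproduced:

```python
from collections import Counter

def vote_with_tie_detection(predictions):
    """Tally the votes cast by the pairwise SVM classifiers and flag
    ties as 'difficult' cases, as the paper's second step describes.

    predictions: list of class labels, one per pairwise SVM.
    Returns (winning_label, is_difficult).
    """
    counts = Counter(predictions)
    ranked = counts.most_common()
    top_label, top_votes = ranked[0]
    # If a second class received the same top vote count, the sample
    # is marked as difficult to classify rather than force-assigned.
    is_difficult = len(ranked) > 1 and ranked[1][1] == top_votes
    return top_label, is_difficult
```

With a clear majority, e.g. `["true", "true", "pseudo"]`, the flag is `False`; with an even split, e.g. `["true", "pseudo"]`, the flag becomes `True` and the judgment is deferred.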

1979 ◽  
Vol 7 (1) ◽  
pp. 31-39
Author(s):  
G. S. Ludwig ◽  
F. C. Brenner

Abstract An automatic tread gaging machine has been developed. It consists of three component systems: (1) a laser gaging head, (2) a tire handling device, and (3) a computer that controls the movement of the tire handling machine, processes the data, and computes the least-squares straight line from which a wear rate may be estimated. Experimental tests show that the machine has good repeatability. In comparisons with measurements obtained by a hand gage, the automatic machine gives smaller average groove depths. The differences between measurements taken before and after a period of wear are the same for both methods. Wear rates estimated from the slopes of straight lines fitted to both sets of data are not significantly different.


Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 194
Author(s):  
Sarah Gonzalez ◽  
Paul Stegall ◽  
Harvey Edwards ◽  
Leia Stirling ◽  
Ho Chit Siu

The field of human activity recognition (HAR) often utilizes wearable sensors and machine learning techniques in order to identify the actions of the subject. This paper considers the activity recognition of walking and running while using a support vector machine (SVM) that was trained on principal components derived from wearable sensor data. An ablation analysis is performed in order to select the subset of sensors that yield the highest classification accuracy. The paper also compares principal components across trials to assess the similarity of the trials. Five subjects were instructed to perform standing, walking, running, and sprinting on a self-paced treadmill, and the data were recorded while using surface electromyography sensors (sEMGs), inertial measurement units (IMUs), and force plates. When all of the sensors were included, the SVM had over 90% classification accuracy using only the first three principal components of the data with the classes of stand, walk, and run/sprint (combined run and sprint class). It was found that sensors that were placed only on the lower leg produce higher accuracies than sensors placed on the upper leg. There was a small decrease in accuracy when the force plates were ablated, but the difference may not be operationally relevant. Using only accelerometers without sEMGs was shown to decrease the accuracy of the SVM.
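The dimensionality-reduction step can be illustrated schematically. The sketch below computes the leading principal component by power iteration on the covariance matrix, using only the standard library; it is not the authors' pipeline (which used full PCA feeding an SVM), and the data layout is assumed to be one row per sample:

```python
def first_principal_component(rows, iters=200):
    """Estimate the leading principal component of the data via power
    iteration on the sample covariance matrix (stdlib-only sketch).

    rows: list of equal-length feature vectors, one per sample.
    Returns a unit vector along the direction of greatest variance.
    """
    n, d = len(rows), len(rows[0])
    # Mean-centre each feature column.
    means = [sum(r[j] for r in rows) / n for j in range(d)]
    x = [[r[j] - means[j] for j in range(d)] for r in rows]
    # Sample covariance matrix (d x d).
    cov = [[sum(x[i][a] * x[i][b] for i in range(n)) / (n - 1)
            for b in range(d)] for a in range(d)]
    # Power iteration: repeatedly apply cov and renormalise.
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    return v
```

Projecting each sample onto the first few such components yields the low-dimensional features on which a classifier like an SVM can then be trained.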


1878 ◽  
Vol 28 (2) ◽  
pp. 633-671 ◽  
Author(s):  
Alexander Macfarlane

The experiments to which I shall refer were carried out in the physical laboratory of the University during the late summer session. I was ably assisted in conducting the experiments by three students of the laboratory,—Messrs H. A. Salvesen, G. M. Connor, and D. E. Stewart. The method which was used of measuring the difference of potential required to produce a disruptive discharge of electricity under given conditions, is that described in a paper communicated to the Royal Society of Edinburgh in 1876 in the names of Mr J. A. Paton, M. A., and myself, and was suggested to me by Professor Tait as a means of attacking the experimental problems mentioned below. The above sketch which I took of the apparatus in situ may facilitate the description of the method. The receiver of an air-pump, having a rod capable of being moved air-tight up and down through the neck, was attached to one of the conductors of a Holtz machine in such a manner that the conductor of the machine and the rod formed one conducting system. Projecting from the bottom of the receiver was a short metallic rod, forming one conductor with the metallic parts of the air-pump, and by means of a chain with the uninsulated conductor of the Holtz machine. Brass balls and discs of various sizes were made to order, capable of being screwed on to the ends of the rods. On the table, and at a distance of about six feet from the receiver, was a stand supporting two insulated brass balls, the one fixed, the other having one degree of freedom, viz., of moving in a straight line in the plane of the table. The fixed insulated ball A was made one conductor with the insulated conductor of the Holtz and the rod of the receiver, by means of a copper wire insulated with gutta percha, having one end stuck firmly into a hole in the collar of the receiver, and having the other fitted in between the glass stem and the hollow in the ball, by which it fitted on to the stem tightly. 
A thin wire similarly fitted in between the ball B and its insulating stem connected the ball with the insulated half ring of a divided ring reflecting electrometer.


Author(s):  
Toplica Stojanović ◽  
Slobodan Goranović ◽  
Aleksandar Šakanović ◽  
Darko Stojanović

To determine the level of specific performance and technical-tactical efficiency of young players at different levels of competition, and whether the level of competition can serve as an indicator of differences in these abilities, research was conducted on a sample of young football players aged 14 to 16 from eight clubs, half of them competing at the higher and the other half at the lower level of competition. The sample of measuring instruments consisted of 13 tests evaluating five factors: four factors of specific endurance (starting endurance, stamina in maintaining a shallow formation, endurance during fast dribbling, and ball-pressing endurance) and the technical-tactical efficiency of the players. The results showed that young players at the higher level of competition had significantly greater technical-tactical efficiency, as well as better specific performance in tests involving curvilinear movement and dribbling, and control and passing of the ball in motion; no difference was recorded for straight-line movements and sprints.


2020 ◽  
Vol 4 ◽  
pp. 65-71
Author(s):  
E.A. Veshkin ◽  
◽  
V.I. Postnov ◽  
V.V. Semenychev ◽  
E.V. Krasheninnikova ◽  
...  

The change in the microhardness over the thickness of samples made of EDT-69N binder cured in vacuum and at atmospheric pressure at temperatures from 130 to 170°C was investigated. It was found that the microhardness varies through the sample thickness according to a parabolic law, with the maximum values reached at mid-thickness of the cross-section. With an increase in the molding temperature, the microhardness in the middle section of the sample increases from 222 MPa at a molding temperature of 130°C to 410 MPa during molding at 170°C. At the critical molding temperature (170°C), the microhardness in all zones of the specimen cross-section (subsurface, intermediate, and core) levels off, and the parabolic dependence degenerates into a straight line. It is shown that the scratch method (sclerometry) is sufficiently sensitive to the state of samples cured at different temperatures. With an increase in the molding temperature, the width of the sclerometric grooves decreases. At the critical molding temperature of 170°C, the groove width stabilizes and becomes constant throughout the sample thickness. To characterize the difference in the microhardness of the cured binder across the sample volume, it is proposed to use a dimensionless "coefficient of volume anisotropy," which can take a positive, negative or zero value. With an increase in the curing temperature of the binder and, accordingly, with an increase in the microhardness of the sample, the coefficient of volume anisotropy decreases, and when the samples are molded at the critical temperature, it goes to zero, which indicates the absence of anisotropy.
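The parabolic law and the zero-at-no-anisotropy behaviour can be illustrated schematically. Note that the abstract does not state formulas: the functional form below, the coefficient definition, and all numeric values are our assumptions for illustration only, not the authors':

```python
def microhardness_profile(z, t, h_max, k):
    """Assumed parabolic hardness-vs-depth law with the maximum at
    mid-thickness z = t/2, matching the described profile shape.
    h_max (MPa) and curvature k are illustrative values only."""
    return h_max - k * (z - t / 2) ** 2

def volume_anisotropy(h_surface, h_core):
    """One plausible dimensionless anisotropy measure (our assumed
    form, not the paper's definition): the relative difference
    between core and subsurface microhardness. It is zero when the
    profile is flat, i.e. when the parabola degenerates to a line."""
    return (h_core - h_surface) / h_core
```

Under this assumed form, a flat profile (curvature k = 0, equal surface and core hardness) yields a zero coefficient, mirroring the behaviour reported at the critical molding temperature.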


1993 ◽  
Vol 39 (5) ◽  
pp. 766-772 ◽  
Author(s):  
K Emancipator ◽  
M H Kroll

Abstract Quantitative measures of the nonlinearity of an analytical method are defined as follows: the "(dimensional) nonlinearity" of a method is the square root of the mean of the square of the deviation of the response curve from a straight line, where the straight line is chosen to minimize the nonlinearity. The "relative nonlinearity" is defined as the dimensional nonlinearity divided by the difference between the maximum and minimum assayed values. These definitions may be used to develop practical criteria for linearity that are still objective. Calculation of the nonlinearity requires a method of curve-fitting. In this article, we use polynomial regression to demonstrate calculations, but the definition of nonlinearity also accommodates alternative nonlinear regression procedures.
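The definitions translate directly into code. The sketch below uses an ordinary least-squares straight line, which minimizes the root-mean-square deviation as the definition requires; it demonstrates the linear case only (the paper also fits higher-order polynomials), and the function name is ours:

```python
def nonlinearity(xs, ys):
    """Dimensional nonlinearity: RMS deviation of the response from
    the least-squares straight line. Relative nonlinearity: the same
    quantity divided by the assayed range (max - min of ys)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Ordinary least-squares slope and intercept.
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    # RMS deviation of the data from the fitted line.
    resid = [y - (slope * x + intercept) for x, y in zip(xs, ys)]
    dim_nl = (sum(r * r for r in resid) / n) ** 0.5
    rel_nl = dim_nl / (max(ys) - min(ys))
    return dim_nl, rel_nl
```

For perfectly linear data both measures are zero; any curvature in the response raises them, independent of the units via the relative form.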


1993 ◽  
Vol 39 (3) ◽  
pp. 405-413 ◽  
Author(s):  
M H Kroll ◽  
K Emancipator

Abstract The measure of linearity is an important part of the evaluation of a method. According to the NCCLS guidelines (Document EP6-P), results of a linearity experiment are fit to a straight line and judged linear either by visual evaluation, which is subjective, or by the lack-of-fit test. This approach depends on the precision of the method, is not necessarily conclusive, and fails to be quantitative. We define linearity as a measure of how well a first-order (linear) polynomial fits the data compared with a higher-order (nonlinear) polynomial. The major property of a linear polynomial is that the first derivative is a constant. The nonlinearity of a method can be measured by the difference between these two polynomials (first-order and higher-order) at specific values or, as an average, the root-mean difference. This approach is independent of the precision of the assay and is conclusive, quantitative, and objective.


2017 ◽  
Vol 23 (1) ◽  
pp. 55-71 ◽  
Author(s):  
Yang Xiao ◽  
Zhiyun Ouyang ◽  
Zhiming Zhang ◽  
Chaofan Xian

The quality of Landsat images in humid areas is considerably degraded by haze in terms of their spectral response pattern, which limits their usefulness in the visible and near-infrared bands. A variety of haze removal algorithms have been proposed to correct these unsatisfactory illumination effects caused by haze contamination. The purpose of this study was to illustrate the difference between two major algorithms (the improved homomorphic filtering (HF) and the virtual cloud point (VCP)) in their effectiveness at handling spatially varying haze contamination, and to evaluate the impacts of haze removal on land cover classification. A case study exploiting large quantities of Landsat TM images under both clear and hazy conditions in the most humid areas of China showed that both haze removal algorithms perform well in processing Landsat images contaminated by haze. The outcome of VCP appears more similar to the reference images than that of HF. Moreover, Landsat images with VCP haze removal can improve classification accuracy effectively in comparison to those without haze removal, especially in cloud-contaminated areas.


2019 ◽  
Vol 20 (1) ◽  
Author(s):  
Jianghui Wen ◽  
Yeshu Liu ◽  
Yu Shi ◽  
Haoran Huang ◽  
Bing Deng ◽  
...  

Abstract Background Long non-coding RNA (lncRNA) is closely related to many biological activities. Since its sequence structure is similar to that of messenger RNA (mRNA), it is difficult to distinguish between the two based only on sequence biometrics. Therefore, it is particularly important to construct a model that can effectively identify lncRNA and mRNA. Results First, the difference in the k-mer frequency distribution between lncRNA and mRNA sequences is considered in this paper, and the sequences are transformed into k-mer frequency matrices. Moreover, k-mers with more distinct types are screened by relative entropy. The classification model for lncRNA and mRNA sequences is then built by inputting the k-mer frequency matrix and training a convolutional neural network. Finally, the optimal k-mer combination of the classification model is determined and compared with other machine learning methods in humans, mice and chickens. The results indicate that the proposed model has the highest classification accuracy. Furthermore, the recognition ability of this model is verified on single sequences. Conclusion We established a classification model for lncRNA and mRNA based on k-mers and a convolutional neural network. The classification accuracy of the model with 1-mers, 2-mers and 3-mers was the highest, with an accuracy of 0.9872 in humans, 0.8797 in mice and 0.9963 in chickens, which is better than those of the random forest, logistic regression, decision tree and support vector machine.
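The k-mer frequency features that feed such a network can be sketched as follows. This is a stdlib-only illustration of counting k-mer frequencies over a fixed alphabet, not the authors' implementation (their pipeline also screens k-mers by relative entropy and stacks frequencies into a matrix):

```python
from collections import Counter
from itertools import product

def kmer_frequencies(seq, k):
    """Frequency vector of all 4**k possible k-mers in a nucleotide
    sequence, in fixed lexicographic order so vectors from different
    sequences are directly comparable."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = max(len(seq) - k + 1, 1)  # number of k-mer windows
    return [counts[m] / total for m in kmers]
```

Concatenating the vectors for 1-mers, 2-mers and 3-mers gives a 4 + 16 + 64 = 84-dimensional feature per sequence, the kind of combined representation the abstract reports as most accurate.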


1887 ◽  
Vol 41 (246-250) ◽  
pp. 442-443

The result obtained in the paper on the cell of the honey bee, read November 26, 1885, by which the side of one of the lozenges composing the cell was found to be three times the difference between the two parallel edges forming the sides of one of the trapeziums of the prism, gives a very simple method for constructing the figure as follows. On a straight line take a part AD, and lay off DC equal to twice AD, from D erect a perpendicular, and with radius AC = 3DA cut off DP; AC and AP are sides of the lozenge ACEP, which fulfils the required conditions. It is manifest that from this lozenge the remaining two lozenges and also the six trapeziums can be immediately constructed.
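The construction is easy to verify numerically. Taking AD = 1 gives DC = 2 and AC = 3; P lies on the perpendicular at D with AP = AC, and the angle of the lozenge at A then has cosine 1/3, the known rhombus angle of the bee cell (about 70°32′). A short check (the coordinate placement is ours):

```python
import math

# Place A at the origin with the base line along the x-axis:
# AD = 1, DC = 2*AD, so C lies at distance AC = 3*AD from A.
A, D, C = (0.0, 0.0), (1.0, 0.0), (3.0, 0.0)

# Cut off DP on the perpendicular at D with radius AP = AC = 3.
DP = math.sqrt(3.0 ** 2 - 1.0 ** 2)   # = 2*sqrt(2)
P = (1.0, DP)

# Both AC and AP are sides of the lozenge ACEP, so they must be equal,
# and the angle at A should be the bee-cell rhombus angle, cos = 1/3.
AC = C[0] - A[0]
AP = math.hypot(P[0] - A[0], P[1] - A[1])
cos_A = (C[0] * P[0] + C[1] * P[1]) / (AC * AP)
```

Here `AP` comes out exactly 3 and `cos_A` exactly 1/3, confirming that the lozenge ACEP has the proportions required of the honeycomb cell.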

