A visual weld edge recognition method based on light and shadow feature construction using directional lighting

2016 ◽  
Vol 24 ◽  
pp. 19-30 ◽  
Author(s):  
Jinle Zeng ◽  
Baohua Chang ◽  
Dong Du ◽  
Yuxiang Hong ◽  
Yirong Zou ◽  
...  
2021 ◽  
Vol 11 (6) ◽  
pp. 2759
Author(s):  
Shidian Ma ◽  
Weifeng Fang ◽  
Haobin Jiang ◽  
Mu Han ◽  
Chenxu Li

At present, implementations of autonomous valet parking (AVP) do not achieve information interaction between parking spaces and vehicles, so accurate parking-space perception cannot be obtained when the localization accuracy of the search is imprecise. In addition, when camera vision is used to identify parking spaces, recognition of traditional features such as parking lines and parking corners is susceptible to lighting and environmental conditions. In particular, when a nearby vehicle partially occupies the target parking space, it is difficult to determine whether that space is a valid empty space. This paper proposes a parking-space recognition method based on parking-space features in the AVP scenario. Multi-dimensional features containing the parking-space information are constructed, and cameras are used to extract the features' contours, locate their positions, and recognize them. A new similarity calculation formula is proposed to recognize stained features through a template matching algorithm. Based on the relative position between a feature and its parking space, valid empty parking spaces and their boundaries are identified. Experimental results show that, compared with recognition of traditional parking lines and corners, this method can identify valid empty parking spaces even under complex lighting conditions and when spaces are partially occupied by adjacent vehicles, which simplifies the recognition algorithm and improves the reliability of parking-space identification.
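The template-matching step described above can be sketched with a zero-mean normalized cross-correlation (NCC) score. Note this is a common baseline similarity measure, not the paper's own formula, and the function names are illustrative:

```python
import numpy as np

def ncc_score(patch, template):
    """Zero-mean normalized cross-correlation between an image patch and a
    template; invariant to uniform brightness and contrast changes."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def match_template(image, template):
    """Slide the template over the image and return the best-scoring
    top-left corner (row, col) together with its NCC score."""
    ih, iw = image.shape
    th, tw = template.shape
    best, best_pos = -1.0, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            s = ncc_score(image[y:y+th, x:x+tw], template)
            if s > best:
                best, best_pos = s, (y, x)
    return best_pos, best
```

In practice a library routine such as OpenCV's `cv2.matchTemplate` would replace the double loop; the sketch above only shows the scoring idea.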


The deaf, dumb, and blind community and the general public face a real communication difficulty. Advances in automated sign recognition attempt to break down this communication barrier. Our contribution considers a recognition method based on the Microsoft Kinect, convolutional neural networks (CNNs), and GPU acceleration. The CNNs automate the process of feature construction rather than relying on intricate handcrafted features, and achieve a high level of accuracy in recognizing gestures and sign language. We also created additional modules to make communication easier for persons with diverse abilities, so that with this approach members of this community can communicate like anyone else.
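As a minimal illustration of how a CNN constructs features automatically, the basic building blocks (convolution, ReLU, max-pooling) can be sketched in plain NumPy. This is a generic sketch of the operations, not the project's actual network:

```python
import numpy as np

def conv2d(x, k):
    """Valid 2-D cross-correlation of a single-channel image x with kernel k;
    the kernel weights are what a CNN learns instead of handcrafted features."""
    kh, kw = k.shape
    h, w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = (x[i:i+kh, j:j+kw] * k).sum()
    return out

def relu(x):
    """Elementwise non-linearity: negative responses are clipped to zero."""
    return np.maximum(x, 0.0)

def max_pool(x, s=2):
    """Downsample by keeping the maximum in each s-by-s block."""
    h, w = x.shape[0] // s, x.shape[1] // s
    return x[:h*s, :w*s].reshape(h, s, w, s).max(axis=(1, 3))
```

A real gesture-recognition network would stack many such layers (with learned kernels) in a framework like PyTorch or TensorFlow and run them on the GPU.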


Geophysics ◽  
2021 ◽  
pp. 1-88
Author(s):  
Yingjie Zhu ◽  
Wanyin Wang ◽  
Colin Farquharson ◽  
Jinming Huang ◽  
Minghua Zhang ◽  
...  

Gravity and magnetic data have unique advantages for studying the lateral extents of geological bodies. One class of edge-recognition methods, the maximum-edge-recognition methods, uses the extreme values of a filtered field to locate the edges of geological bodies. These methods include the total horizontal derivative, the analytic signal amplitude, the theta map, and the normalized standard deviation, all of which are first-order derivative-based techniques. There are also higher-order derivative-based methods derived from the first-order filters, for example, the total horizontal derivative of the tilt angle. We present an edge-recognition filter based on the idea of normalized vertical derivatives of the existing methods. For each maximum-edge-recognition method, we first calculate its nth-order vertical derivative and then use thresholding to locate its peaks. The peak values are subsequently normalized by the values of the original maximum-edge-recognition method. Testing on synthetic and real data shows that the normalized vertical derivatives of the maximum-edge-recognition methods offer higher accuracy, better lateral resolution, and greater interpretability than existing techniques, and are thus a worthwhile addition to the set of edge-detection tools for potential-field data.
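Two of the ingredients above can be sketched for a regularly gridded field: the total horizontal derivative (THDR), whose maxima trace body edges, and an nth-order vertical derivative computed with the standard wavenumber-domain filter |k|^n. This is a minimal sketch of the standard operators, not the paper's full normalized filter:

```python
import numpy as np

def total_horizontal_derivative(field, dx=1.0, dy=1.0):
    """THDR = sqrt((df/dx)^2 + (df/dy)^2); its maxima lie over the
    lateral edges of the causative geological bodies."""
    gy, gx = np.gradient(field, dy, dx)  # axis 0 = y, axis 1 = x
    return np.sqrt(gx**2 + gy**2)

def vertical_derivative(field, dx=1.0, dy=1.0, order=1):
    """n-th order vertical derivative via the wavenumber-domain filter
    |k|^n, the standard FFT-based operator for potential-field data."""
    ny, nx = field.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, dy)
    KX, KY = np.meshgrid(kx, ky)
    k = np.sqrt(KX**2 + KY**2)
    return np.real(np.fft.ifft2(np.fft.fft2(field) * k**order))
```

The paper's method would then threshold the peaks of the vertical derivative of a chosen edge filter and normalize them by the original filter's values.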


2019 ◽  
Vol 6 (1) ◽  
pp. 1
Author(s):  
Yuli Anwar

Revenue and cost recognition is one of the most important tasks for an entity; the timing and method of recognition must follow the rules of the Financial Accounting Standards. Revenue and cost recognition at PT. EMKL Jelutung Subur, located in Pangkalpinang, Bangka Belitung province, is performed on the accrual basis, and its influence on company profits can be seen every year. This research is useful for gathering data and information for preparing this thesis, improving the author's knowledge, and comparing accepted theories against the facts applied in the field. The results show that PT. EMKL Jelutung Subur has applied the accrual-basis method of revenue and cost recognition consistently, so that its profit figures are accurate and accountable enough to be used in developing this kind of expedition business into a better company. This accuracy holds because all revenues received and costs spent have clear evidence and are recorded in the proper period. The evaluation also reveals one element missing from the revenue and cost recognition practiced by PT. EMKL Jelutung Subur: charging customers who temporarily use the storage service. Some customers keep their goods in the warehouse for a long time, which increases the costs of loading, warehouse maintenance, damaged goods, and shrinkage. If the storage service were charged to customers, PT. EMKL Jelutung Subur would earn additional revenue to cover these expenses.
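The accrual-versus-cash distinction at the heart of the abstract can be illustrated with a small sketch; the transaction fields and function name here are hypothetical, not taken from the thesis:

```python
from datetime import date

def recognize_revenue(transactions, period_start, period_end, basis="accrual"):
    """Sum revenue for a reporting period. Under the accrual basis, revenue
    counts in the period it is earned (service_date); under the cash basis,
    in the period payment is received (payment_date)."""
    key = "service_date" if basis == "accrual" else "payment_date"
    return sum(t["amount"] for t in transactions
               if period_start <= t[key] <= period_end)
```

A freight-forwarding job performed in January but paid in February would thus appear in January's profit under the accrual basis, matching the practice the thesis evaluates.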


2020 ◽  
Vol 64 (4) ◽  
pp. 40404-1-40404-16
Author(s):  
I.-J. Ding ◽  
C.-M. Ruan

Abstract With rapid developments in techniques related to the internet of things, smart service applications such as voice-command-based speech recognition and smart care applications such as context-aware-based emotion recognition will gain much attention and potentially be a requirement in smart home or office environments. In such intelligence applications, identity recognition of the specific member in indoor spaces will be a crucial issue. In this study, a combined audio-visual identity recognition approach was developed. In this approach, visual information obtained from face detection was incorporated into acoustic Gaussian likelihood calculations for constructing speaker classification trees to significantly enhance the Gaussian mixture model (GMM)-based speaker recognition method. This study considered the privacy of the monitored person and reduced the degree of surveillance. Moreover, the popular Kinect sensor device containing a microphone array was adopted to obtain acoustic voice data from the person. The proposed audio-visual identity recognition approach deploys only two cameras in a specific indoor space for conveniently performing face detection and quickly determining the total number of people in the specific space. Such information pertaining to the number of people in the indoor space obtained using face detection was utilized to effectively regulate the accurate GMM speaker classification tree design. Two face-detection-regulated speaker classification tree schemes are presented for the GMM speaker recognition method in this study—the binary speaker classification tree (GMM-BT) and the non-binary speaker classification tree (GMM-NBT). The proposed GMM-BT and GMM-NBT methods achieve excellent identity recognition rates of 84.28% and 83%, respectively; both values are higher than the rate of the conventional GMM approach (80.5%). 
Moreover, as the extremely complex calculations of face recognition in general audio-visual speaker recognition tasks are not required, the proposed approach is rapid and efficient with only a slight increment of 0.051 s in the average recognition time.
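A simplified sketch of the underlying idea: score test features against one Gaussian model per enrolled speaker, with a candidate list (the people counted by the cameras) narrowing the search. A single diagonal-covariance Gaussian is the one-component special case of a GMM, and the candidate-list restriction is a simplification of the paper's GMM-BT/GMM-NBT tree schemes; all function names are illustrative:

```python
import numpy as np

def fit_gaussian(feats):
    """Fit a diagonal-covariance Gaussian to a speaker's acoustic feature
    vectors, shape (n_frames, n_dims) - e.g. MFCC frames."""
    mu = feats.mean(axis=0)
    var = feats.var(axis=0) + 1e-6  # floor variances for stability
    return mu, var

def avg_log_likelihood(feats, model):
    """Average per-frame log-likelihood of the features under the model."""
    mu, var = model
    ll = -0.5 * (np.log(2.0 * np.pi * var) + (feats - mu) ** 2 / var)
    return float(ll.sum(axis=1).mean())

def identify(models, feats, candidates=None):
    """Return the enrolled speaker whose model best explains the features;
    `candidates` (people seen by face detection) restricts the search."""
    names = candidates if candidates is not None else list(models)
    return max(names, key=lambda n: avg_log_likelihood(feats, models[n]))
```

Restricting the candidate set is what lets the face-detection information regulate the speaker search without requiring full face recognition.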


Author(s):  
Zixuan Liu ◽  
Dan Niu ◽  
Qi Li ◽  
Xisong Chen ◽  
Li Ding ◽  
...  
