Strut Diameter Uncertainty Prediction by Deep Neural Network for Additively Manufactured Lattice Structures

Author(s):  
Recep M. Gorguluarslan ◽  
Gorkem Can Ates ◽  
Olgun Utku Gungor ◽  
Yusuf Yamaner

Abstract Additive manufacturing (AM) introduces geometric uncertainties on the fabricated strut members of lattice structures. These uncertainties result in deviations between the modeled and fabricated geometries of struts. This research studies the use of deep neural networks (DNNs) to accurately predict the statistical parameters of the effective strut diameters, accounting for the AM-introduced geometric uncertainties with a small training dataset under constant process parameters. For the training data, struts with specific angle and diameter values are fabricated by the material extrusion process. The geometric uncertainties are quantified using random field theory based on the spatial strut radius measurements obtained from microscope images of the fabricated struts. The uncertainties are propagated to the effective diameters of the struts using a stochastic upscaling technique. The relationship between the modeled strut diameter and the characterized statistical parameters of the effective diameters is used as the training data to establish a DNN model. The validation results show that the DNN model can predict the statistical parameters of the effective diameters of struts modeled with angles and diameters different from those used in the training data with good accuracy, even when the training dataset is small. Developing such a DNN model with a small dataset allows designers to use the fabricated results in design optimization processes without requiring additional experiments.
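A minimal sketch, not the authors' code, of the kind of regression network the abstract describes: a small fully connected model maps the modeled strut parameters (diameter and angle) to the mean and standard deviation of the effective diameter. The layer sizes and training pairs below are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Inputs: [modeled diameter (mm), printing angle / 90]; outputs: [mean, std].
model = nn.Sequential(
    nn.Linear(2, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 2),
)

# Hypothetical training pairs standing in for the characterized statistical
# parameters obtained from the stochastic upscaling of microscope data.
x = torch.tensor([[0.8, 30 / 90], [1.0, 45 / 90], [1.2, 60 / 90], [1.5, 90 / 90]])
y = torch.tensor([[0.74, 0.03], [0.93, 0.04], [1.14, 0.04], [1.43, 0.05]])

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for _ in range(2000):          # small dataset, so many cheap epochs
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

# Predict statistical parameters for an unseen diameter/angle combination.
print(model(torch.tensor([[1.1, 50 / 90]])))
```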

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Ryoya Shiode ◽  
Mototaka Kabashima ◽  
Yuta Hiasa ◽  
Kunihiro Oka ◽  
Tsuyoshi Murase ◽  
...  

Abstract The purpose of the study was to develop a deep learning network for estimating and constructing highly accurate 3D bone models directly from actual X-ray images and to verify its accuracy. The data used were 173 computed tomography (CT) images and 105 actual X-ray images of a healthy wrist joint. To compensate for the small size of the dataset, digitally reconstructed radiography (DRR) images generated from CT were used as training data instead of actual X-ray images. At test time, DRR-like images were generated from the actual X-ray images and fed to the network, making high-accuracy estimation of a 3D bone model from a small dataset possible. The 3D shapes of the radius and ulna were estimated from actual X-ray images with accuracies of 1.05 ± 0.36 and 1.45 ± 0.41 mm, respectively.
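A minimal sketch of the DRR idea the abstract relies on: a digitally reconstructed radiograph approximates an X-ray by integrating CT attenuation along the ray direction. A real pipeline would use calibrated projection geometry and proper ray casting; this parallel-projection version with a placeholder volume only illustrates the concept.

```python
import numpy as np

ct_volume = np.random.rand(64, 128, 128)   # placeholder CT volume (z, y, x)

# Parallel projection along z: line integral of attenuation per detector pixel.
line_integrals = ct_volume.sum(axis=0)

# Map integrals to image intensities via Beer-Lambert-style attenuation.
drr_image = 1.0 - np.exp(-line_integrals / line_integrals.max())
```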


Author(s):  
Recep M. Gorguluarslan ◽  
O. Utku Gungor

Abstract In this study, the influence of the spatial variability of geometric uncertainties on the strut members of lattice structures fabricated by additive manufacturing is investigated. Individual struts are fabricated with various printing angles and diameters using a material extrusion process and PLA material. The diameter values of the fabricated samples are measured along the printing and radial directions at each layer under an optical microscope. Spatial correlations are characterized from the measurements using the experimental autocorrelation function. Candidate autocorrelation functions are fitted to the measured data to identify the best-fitting one for each diameter parameter, and the corresponding correlation lengths are evaluated for the random field. The applicability of the Karhunen-Loeve expansion (KLE) is investigated to reduce the dimensionality of the random field discretization. The results show that the diameters of the strut members at each layer are spatially dependent and that the KLE method gives a good representation of the random field.
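A minimal sketch, under assumed parameters, of a truncated Karhunen-Loeve expansion for a 1D random field of layerwise diameter deviations. The exponential autocorrelation kernel, grid size, correlation length, and variance below are illustrative, not the paper's fitted values.

```python
import numpy as np

n, corr_len, sigma = 100, 5.0, 0.05         # layers, correlation length, std
z = np.arange(n)[:, None]                   # positions along the print direction
cov = sigma**2 * np.exp(-np.abs(z - z.T) / corr_len)   # exponential kernel

eigvals, eigvecs = np.linalg.eigh(cov)      # spectral decomposition
order = np.argsort(eigvals)[::-1]           # sort modes by energy
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Truncate: keep the fewest modes capturing 95% of the field's variance.
m = np.searchsorted(np.cumsum(eigvals) / eigvals.sum(), 0.95) + 1

xi = np.random.randn(m)                     # independent standard normals
field = eigvecs[:, :m] @ (np.sqrt(eigvals[:m]) * xi)   # one field realization
```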


2020 ◽  
Vol 13 (10) ◽  
pp. 5459-5480
Author(s):  
Willem J. Marais ◽  
Robert E. Holz ◽  
Jeffrey S. Reid ◽  
Rebecca M. Willett

Abstract. Current cloud and aerosol identification methods for multispectral radiometers, such as the Moderate Resolution Imaging Spectroradiometer (MODIS) and Visible Infrared Imaging Radiometer Suite (VIIRS), employ multichannel spectral tests on individual pixels (i.e., fields of view). The use of spatial information in cloud and aerosol algorithms has been primarily through statistical parameters, such as nonuniformity tests of surrounding pixels, with cloud classification provided by multispectral microphysical retrievals such as phase and cloud top height. With these methodologies there is uncertainty in identifying optically thick aerosols, since aerosols and clouds have similar spectral properties in coarse-spectral-resolution measurements. Furthermore, identifying cloud regimes (e.g., stratiform, cumuliform) from spectral measurements alone is difficult, since low-altitude cloud regimes have similar spectral properties. Recent advances in computer vision using deep neural networks provide a new opportunity to better leverage the coherent spatial information in multispectral imagery. Using machine learning techniques combined with a new methodology to create the necessary training data, we demonstrate improvements in the discrimination between clouds and severe aerosols and an expanded capability to classify cloud types. The labeled training dataset was created from an adapted NASA Worldview platform that provides an efficient user interface to assemble a human-labeled database of cloud and aerosol types. The convolutional neural network (CNN) labeling accuracy for aerosols and cloud types was quantified using independent Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) and MODIS cloud and aerosol products. By harnessing CNNs with a unique labeled dataset, we demonstrate improved identification of aerosols and distinct cloud types from MODIS and VIIRS images compared to a per-pixel spectral and standard deviation thresholding method. The paper concludes with case studies that compare the CNN methodology results with the MODIS cloud and aerosol products.
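A minimal sketch, assuming PyTorch, of the kind of CNN classifier described above: a multispectral image patch is mapped to cloud/aerosol class scores. The band count, patch size, and class count are placeholders, not the paper's architecture.

```python
import torch
import torch.nn as nn

num_bands, num_classes = 16, 5              # assumed radiometer bands and class types

cnn = nn.Sequential(
    nn.Conv2d(num_bands, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, num_classes),
)

patch = torch.randn(1, num_bands, 64, 64)   # one 64x64 multispectral patch
logits = cnn(patch)                         # per-class scores over the patch
```

Unlike the per-pixel spectral tests it is compared against, the convolutions aggregate the surrounding spatial context of each field of view.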


2020 ◽  
Author(s):  
Stefanie

As a student, I learn with the help of teachers, and the teacher plays a crucial role in our lives. A wonderful instructor is able to teach a student with appropriate teaching materials. Therefore, in this project, I explore a teaching strategy called learning to teach (L2T), in which a teacher model provides high-quality training samples to a student model. However, one major problem of L2T is that the teacher model selects only a subset of the training dataset as the final training data for the student. A learning-to-teach small-data learning strategy (L2TSDL) is proposed to solve this problem. In this strategy, the teacher model calculates an importance score for every training sample, helping the student make use of all training samples. To demonstrate the advantage of the proposed approach over L2T, I take the training of different deep neural networks (DNNs) on an image classification task as an example and show that L2TSDL achieves good performance on both large and small datasets.
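A minimal sketch of the importance-scoring idea described above: rather than discarding samples, every sample's loss is weighted by a teacher-assigned score. The softmax-based weighting below is a placeholder, not the strategy's actual scoring rule.

```python
import torch
import torch.nn.functional as F

def weighted_student_loss(student_logits, labels, teacher_scores):
    """Per-sample cross-entropy scaled by teacher importance scores."""
    per_sample = F.cross_entropy(student_logits, labels, reduction="none")
    # Normalize scores so the average weight is 1 (assumed convention).
    weights = torch.softmax(teacher_scores, dim=0) * len(teacher_scores)
    return (weights * per_sample).mean()

logits = torch.randn(8, 10)                 # student outputs for 8 samples
labels = torch.randint(0, 10, (8,))
scores = torch.randn(8)                     # hypothetical teacher scores
loss = weighted_student_loss(logits, labels, scores)
```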


Sensors ◽  
2020 ◽  
Vol 20 (3) ◽  
pp. 825 ◽  
Author(s):  
Fadi Al Machot ◽  
Mohammed R. Elkobaisi ◽  
Kyandoghere Kyamakya

Due to significant advances in sensor technology, studies toward activity recognition have gained interest and maturity in the last few years. Existing machine learning algorithms have demonstrated promising results by classifying activities whose instances have already been seen during training. Activity recognition methods based on real-life settings should cover a growing number of activities in various domains, whereby a significant part of the instances will not be present in the training dataset. However, covering all possible activities in advance is a complex and expensive task. Concretely, we need a method that can extend the learning model to detect unseen activities without prior knowledge regarding sensor readings about those previously unseen activities. In this paper, we introduce an approach that leverages sensor data to discover new unseen activities which were not present in the training set. We show that sensor readings can lead to promising results for zero-shot learning, whereby the necessary knowledge can be transferred from seen to unseen activities by using semantic similarity. The evaluation conducted on two datasets extracted from the well-known CASAS datasets shows that the proposed zero-shot learning approach achieves a high performance in recognizing unseen (i.e., not present in the training dataset) new activities.
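A minimal sketch of zero-shot transfer by semantic similarity as described above: an unseen activity is matched to the semantically closest seen activity. The label embeddings are hypothetical stand-ins for a real semantic embedding model.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical semantic embeddings for seen activity labels.
seen = {"cooking": np.array([0.9, 0.1, 0.2]),
        "sleeping": np.array([0.1, 0.9, 0.1])}

# Embedding of an activity never observed during training, e.g. "baking".
unseen_embedding = np.array([0.8, 0.2, 0.3])

closest = max(seen, key=lambda k: cosine(seen[k], unseen_embedding))
print(closest)    # knowledge transfers from the nearest seen activity
```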


2018 ◽  
Vol 57 (04) ◽  
pp. 220-229
Author(s):  
Tung-I Tsai ◽  
Yaofeng Zhang ◽  
Gy-Yi Chao ◽  
Cheng-Chieh Tsai ◽  
Zhigang Zhang

Summary Background: Radiotherapy has serious side effects and thus requires prudent and cautious evaluation. However, obtaining protein expression profiles is expensive and time-consuming, making it necessary to develop a theoretical and rational procedure for predicting the radiotherapy outcome for bladder cancer when working with limited data. Objective: A procedure for estimating the performance of radiotherapy is proposed in this research. The population domain (range of the population) of proteins and the relationships among proteins are considered to increase prediction accuracy. Methods: This research uses modified extreme value theory (MEVT), which estimates the population domain of proteins, together with correlation coefficients and prediction intervals to overcome the lack of knowledge regarding relationships among proteins. Results: When the size of the training dataset was 5 samples, the mean absolute percentage error (MAPE) was 31.6200%; MAPE fell to 13.5505% when the number of samples was increased to 30. The standard deviation (SD) of the forecasting error fell from 3.0609% for 5 samples to 1.2415% for 30 samples. These results show that the proposed procedure yields accurate and stable results and is suitable for use with small datasets. Conclusions: The results show that considering the relationships among proteins is necessary when predicting the outcome of radiotherapy.
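A minimal sketch of the two reported error measures, MAPE and the standard deviation of the forecasting error; the arrays are placeholders, not the study's protein data.

```python
import numpy as np

actual = np.array([0.82, 0.61, 0.75, 0.90])      # observed outcomes (placeholder)
predicted = np.array([0.78, 0.70, 0.73, 0.85])   # model predictions (placeholder)

pct_error = np.abs((actual - predicted) / actual) * 100
mape = pct_error.mean()          # accuracy of the procedure
sd = pct_error.std(ddof=1)       # stability of the procedure across samples
```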


2021 ◽  
Vol 11 (8) ◽  
pp. 3301
Author(s):  
Pamir Ghimire ◽  
Igor Jovančević ◽  
Jean-José Orteu

We present a method to train a deep-network-based feature descriptor to calculate discriminative local descriptions from renders and corresponding real images with similar geometry. We are interested in using such descriptors for automatic industrial visual inspection, whereby the inspection camera has been coarsely localized with respect to a relatively large mechanical assembly and the presence of certain components needs to be checked against the reference computer-aided design (CAD) model. We aim to perform the task by comparing the real inspection image with the render of the textureless 3D CAD using the learned descriptors. The descriptor was trained to capture geometric features while staying invariant to the image domain. Patch pairs for training the descriptor were extracted in a semisupervised manner from a small dataset of 100 pairs of real images and corresponding renders that were manually finely registered, starting from a relatively coarse localization of the inspection camera. Due to the small size of the training dataset, the descriptor network was initialized with weights from classification training on ImageNet. A two-step training is proposed to address the problem of domain adaptation. The first step, "bootstrapping", is a classification training that provides good initial weights for the second step, triplet-loss training, which yields weights for extracting discriminative features comparable using the l2 distance. The descriptor was tested for comparing renders and real images through two approaches: finding local correspondences between the images through nearest neighbor matching, and transforming the images into Bag of Visual Words (BoVW) histograms. We observed that learning a robust cross-domain descriptor is feasible, even with a small dataset, and such features might be of interest for CAD-based inspection of mechanical assemblies and related applications such as tracking or finely registered augmented reality. To the best of our knowledge, this is the first work that reports learning local descriptors for comparing renders with real inspection images.
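A minimal sketch of the triplet-loss step named above: an anchor patch from a render, a positive from the registered real image, and a negative from elsewhere are pushed to satisfy an l2-distance margin. The descriptor dimensionality and margin are assumptions; the embedding network itself is omitted.

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet loss on l2 distances between descriptor vectors."""
    d_pos = F.pairwise_distance(anchor, positive)   # render vs. registered real
    d_neg = F.pairwise_distance(anchor, negative)   # render vs. unrelated patch
    return F.relu(d_pos - d_neg + margin).mean()

a, p, n = (torch.randn(16, 128) for _ in range(3))  # batch of 128-D descriptors
loss = triplet_loss(a, p, n)
```

Minimizing this loss makes descriptors from corresponding render/real patches closer than non-corresponding ones, so nearest neighbor matching under the l2 distance becomes meaningful across the two image domains.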


2021 ◽  
Author(s):  
Recep M. Gorguluarslan ◽  
Gorkem Can Ates ◽  
O. Utku Gungor ◽  
Yusuf Yamaner

Abstract Additive manufacturing introduces geometric uncertainties on the fabricated strut members of lattice structures. These uncertainties lead to deviations between the simulated and the fabricated mechanical performance. Although these uncertainties can be characterized and quantified with methods in the existing literature, generating the large number of samples of the quantified uncertainties needed in the computer-aided design of lattice structures for different strut diameters and angles requires high experimental effort and computational cost. The use of deep neural network models to accurately predict the samples of uncertainties is studied in this research to address this issue. For the training data, the geometric uncertainties on the fabricated struts introduced by the material extrusion process are characterized from microscope measurements using random field theory. These uncertainties are propagated to the effective diameters of the strut members using a stochastic upscaling technique. The relationship between the deterministic strut model parameters, namely the model diameter and angle, and the effective diameter with propagated uncertainties is established through a deep neural network model. The validation data results show accurate predictions for the effective diameter when the model parameters are given as inputs. Thus, the proposed model has the potential to use the fabricated results in design optimization processes without requiring computationally expensive repetitive simulations.
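A minimal sketch of how samples of the effective diameter could be drawn once a network has predicted its statistical parameters from the model diameter and angle; the normal distribution and the numeric values below are assumptions for illustration.

```python
import numpy as np

def sample_effective_diameters(mean, std, n_samples=1000, rng=None):
    """Draw Monte Carlo samples of the effective diameter for design studies."""
    rng = rng or np.random.default_rng()
    return rng.normal(mean, std, n_samples)

# Hypothetical DNN outputs for a strut modeled with d = 1.0 mm at 45 degrees.
samples = sample_effective_diameters(mean=0.93, std=0.04)
```

Sampling from the predicted parameters replaces the repeated fabrication or simulation runs that would otherwise be needed for each new diameter/angle combination.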


2021 ◽  
Vol 32 (2) ◽  
pp. 20-25
Author(s):  
Efraim Kurniawan Dairo Kette

In pattern recognition, the k-Nearest Neighbor (kNN) algorithm is the simplest non-parametric algorithm. Because of this simplicity, the choice of model cases and the quality of the training data itself usually determine the kNN algorithm's classification performance. Therefore, this article proposes a sparse correlation weight model, combined with the Training Data Set Cleaning (TDC) method by Classification Ability Ranking (CAR), called the CAR classification method based on Coefficient-Weighted kNN (CAR-CWKNN), to improve kNN classifier performance. Correlation weights in Sparse Representation (SR) have been shown to increase classification accuracy. The SR can reveal the 'neighborhood' structure of the data, which is why it is very suitable for classification based on nearest neighbors. The Classification Ability (CA) function is applied to rank and select the best training samples in the cleaning stage. The Leave-One-Out (LV1) concept in the CA removes from the original training data the samples that are likely to yield wrong classification results, thereby reducing the influence of training sample quality on kNN classification performance. Experiments with four public UCI datasets related to classification problems show that the CAR-CWKNN method provides better performance in terms of accuracy.
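A minimal sketch of the two ingredients named above: leave-one-out cleaning of the training set, followed by a distance-weighted kNN vote. The inverse-distance weights are a simple stand-in for the sparse correlation weights; the toy data are hypothetical.

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Weighted kNN vote; inverse distances stand in for correlation weights."""
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-9)
    votes = {}
    for i, wi in zip(idx, w):
        votes[y_train[i]] = votes.get(y_train[i], 0.0) + wi
    return max(votes, key=votes.get)

def loo_clean(X, y, k=3):
    """Keep only samples that leave-one-out kNN classifies correctly."""
    keep = [i for i in range(len(X))
            if knn_predict(np.delete(X, i, 0), np.delete(y, i), X[i], k) == y[i]]
    return X[keep], y[keep]

X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.1, 1.0], [0.0, 1.0]])
y = np.array([0, 0, 1, 1, 1])
X_clean, y_clean = loo_clean(X, y, k=3)     # cleaned training set
```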


2021 ◽  
Author(s):  
SHOGO ARAI ◽  
ZHUANG FENG ◽  
Fuyuki Tokuda ◽  
Adam Purnomo ◽  
Kazuhiro Kosuge

This paper proposes a deep learning-based fast grasp detection method with a small dataset for robotic bin-picking. We consider the problem of grasping mechanical parts stacked on a planar workspace using a parallel gripper. In this paper, we use a deep neural network to solve the problem from a single depth image. To reduce the computation time, we propose an edge-based algorithm to generate potential grasps. Then, a convolutional neural network (CNN) is applied to evaluate the robustness of all potential grasps for bin-picking. Finally, the proposed method ranks the candidates, and the object is grasped using the grasp with the highest score. In bin-picking experiments, we evaluate the proposed method with a 7-DOF manipulator using textureless mechanical parts with complex shapes. The success ratio of grasping is 97%, and the average computation time of CNN inference is less than 0.23 s on a laptop PC without a GPU. In addition, we confirm that the proposed method can be applied to unseen objects that are not included in the training dataset.
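A minimal sketch of the ranking step described above: a CNN scores each edge-generated grasp candidate from a depth-image crop, and the highest-scoring grasp is executed. The scoring network, crop size, and random candidates below are placeholders, not the paper's model.

```python
import torch
import torch.nn as nn

scorer = nn.Sequential(                      # stand-in for the paper's CNN
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 1),
)

# Depth-image crops around edge-generated grasp candidates (placeholders).
candidates = [torch.randn(1, 1, 32, 32) for _ in range(20)]
scores = [scorer(c).item() for c in candidates]
best = max(range(len(candidates)), key=lambda i: scores[i])  # grasp to execute
```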

