Subgroup Preference Neural Network

Sensors, 2021, Vol. 21 (18), pp. 6104
Author(s):  
Ayman Elgharabawy ◽  
Mukesh Prasad ◽  
Chin-Teng Lin

Subgroup label ranking, which aims to rank groups of labels using a single ranking model, is a new problem in preference learning. This paper introduces the Subgroup Preference Neural Network (SGPNN), which combines multiple networks with different activation functions, learning rates, and output layers into one artificial neural network (ANN) to discover the hidden relations between the subgroups' multi-labels. The SGPNN is a feedforward (FF), partially connected network that has a single middle layer and uses a stairstep (SS) multi-valued activation function to enhance the prediction probability and accelerate ranking convergence. The novel structure of the proposed SGPNN consists of multi-activation function neurons (MAFNs) in the middle layer that rank each subgroup independently. The SGPNN uses gradient ascent to maximize the Spearman ranking correlation between the groups of labels. Each label is represented by an output neuron with a single SS function. The proposed SGPNN, trained on a conjoint dataset, outperforms other label ranking methods that use each dataset individually. It achieves an average accuracy of 91.4% on the conjoint dataset, compared with supervised clustering, decision trees, multilayer perceptron label ranking, and label ranking forests, which achieve average accuracies of 60%, 84.8%, 69.2%, and 73%, respectively, on the individual datasets.
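The Spearman ranking correlation that the SGPNN maximizes by gradient ascent can be sketched as follows. This minimal pure-Python version assumes distinct ranks (no ties), which keeps the closed-form formula exact:

```python
# Spearman rank correlation, the objective the SGPNN maximizes via
# gradient ascent. Minimal pure-Python sketch without tie handling.

def ranks(values):
    """Return the rank (1 = smallest) of each entry in `values`."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(a, b):
    """Spearman rho between two equal-length score lists:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))."""
    n = len(a)
    ra, rb = ranks(a), ranks(b)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1 - 6 * d2 / (n * (n * n - 1))
```

Identical orderings give rho = 1 and fully reversed orderings give rho = -1, which is why maximizing rho drives the predicted ranking toward the target ranking.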

Author(s):  
Ayman Elgharabawy ◽  
Mukesh Prasad ◽  
Chin-Teng Lin

Equality and incomparability in multi-label ranking have not previously been addressed in preference learning. This paper proposes a new native ranker neural network for the multi-label ranking problem, including incomparable preference orders, using new activation and error functions and a new architecture. The Preference Neural Network (PNN) solves the multi-label ranking problem where labels may have indifferent preference orders or subgroups that are equally ranked. The PNN is a non-deep network with multiple-value neurons, a single middle layer, and one or more output layers. It uses a novel positive smooth staircase (PSS) or smooth staircase (SS) activation function and represents preference orders and the Spearman ranking correlation as objective functions. The PNN is introduced in two types: Type A uses a traditional NN architecture, while Type B uses an expanding architecture that introduces a new type of hidden neuron with multiple activation functions in the middle layer and duplicated output layers, reinforcing the ranking by increasing the number of weights. The PNN accepts a single data instance as input; the output neurons correspond to the labels, and each output value represents a preference value. The PNN is evaluated on a new preference-mining dataset that contains repeated label values, which has not been experimented on before. The SS and PSS functions speed up learning, and the PNN outperforms five previously proposed strict label ranking methods in terms of accuracy and computational efficiency.
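The exact PSS/SS formulas are defined in the paper; the sketch below only illustrates the general idea of a smooth multi-valued staircase activation, here assumed (for illustration only) to be built from a sum of steep sigmoids:

```python
import math

# Illustrative "smooth staircase" activation: a sum of steep sigmoids
# yields a smooth, multi-valued, step-like output that can encode
# discrete preference levels. This construction is an assumption used
# to show the idea; it is not the paper's exact PSS/SS definition.

def smooth_staircase(x, steps=3, steepness=20.0):
    """Approximate a staircase with `steps` unit risers at x = 1..steps."""
    return sum(1.0 / (1.0 + math.exp(-steepness * (x - k)))
               for k in range(1, steps + 1))
```

Between risers the output sits on a near-flat plateau (approximately 0, 1, 2, ...), which is what lets one output neuron express a discrete rank value.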




Author(s):  
Geoffroy Chaussonnet ◽  
Sebastian Gepperth ◽  
Simon Holz ◽  
Rainer Koch ◽  
Hans-Jörg Bauer

Abstract A fully connected artificial neural network (ANN) is used to predict the mean spray characteristics of prefilming airblast atomization. The model is trained on the planar prefilmer experiment from the PhD thesis of Gepperth (2020). The outputs of the ANN model are the Sauter mean diameter, the mean droplet axial velocity, the mean ligament length, and the mean ligament deformation velocity. The training database contains 322 different operating points. Two types of model input quantities are investigated and compared. First, nine dimensional parameters are used as model inputs. Second, nine non-dimensional groups commonly used for liquid atomization are derived from the first set of inputs. The best architecture is determined after testing over 10,000 randomly drawn ANN architectures, with up to 10 layers and up to 128 neurons per layer. The striking result is that, for both types of model, the best architectures consist of only three hidden layers in the shape of a diabolo. This shape recalls that of an autoencoder, where the middle layer would be the feature space of reduced dimensionality. The model with dimensional input quantities always shows lower test and validation errors than the one with non-dimensional input quantities. In general, the two types of models provide comparable accuracy, better than typical correlations of SMD and droplet velocity. Finally, the extrapolation capability of the models was assessed by training them on a confined domain of parameters and testing them outside this domain.
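The random architecture search described above can be sketched as follows; `train_and_validate` is a hypothetical stand-in for the paper's actual training and evaluation procedure:

```python
import random

# Sketch of a random architecture search: draw candidate fully
# connected architectures with up to 10 hidden layers and up to 128
# neurons per layer, then keep the one with the lowest validation
# error. `train_and_validate(arch) -> error` is a hypothetical
# callback standing in for the real training loop.

def sample_architecture(rng, max_layers=10, max_neurons=128):
    """Draw a random hidden-layer shape, e.g. [64, 8, 64]."""
    n_layers = rng.randint(1, max_layers)
    return [rng.randint(1, max_neurons) for _ in range(n_layers)]

def search(train_and_validate, n_trials=10000, seed=0):
    """Return the best (architecture, validation_error) pair found."""
    rng = random.Random(seed)
    best_arch, best_err = None, float("inf")
    for _ in range(n_trials):
        arch = sample_architecture(rng)
        err = train_and_validate(arch)
        if err < best_err:
            best_arch, best_err = arch, err
    return best_arch, best_err
```

With enough trials, such a search can surface non-obvious shapes like the three-layer diabolo reported above.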


Author(s):  
Natasha Munirah Mohd Fahmi ◽  
Nor Aira Zambri ◽  
Norhafiz Salim ◽  
Sim Sy Yi ◽  
...  

This paper presents a step-by-step procedure for the simulation of photovoltaic (PV) modules with numerical values, using MATLAB/Simulink software. The proposed model is developed from the mathematical model of a PV module, which is based on a PV solar cell represented by a one-diode equivalent circuit. The output current and power characteristic curves, which depend strongly on climatic factors such as radiation and temperature, are obtained by simulation of the selected module. The collected data are used to develop an Artificial Neural Network (ANN) model. Multilayer Perceptron (MLP) and Radial Basis Function (RBF) networks are the techniques used to forecast the PV outputs. Various activation functions are applied, such as linear, logistic sigmoid, hyperbolic tangent sigmoid, and Gaussian. The simulation results show that the logistic sigmoid is the best technique, producing the minimal root mean square error for the system.
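A minimal sketch of the one-diode PV model underlying such a Simulink implementation. For clarity this ideal form omits the series and shunt resistances, and all parameter values below are illustrative assumptions, not taken from the paper:

```python
import math

# Ideal single-diode PV module model (series/shunt resistances
# omitted): I = Iph - I0 * (exp(V / (n * Vt * Ns)) - 1).
# Parameter values are illustrative only.

def pv_current(v, i_ph=8.0, i_0=1e-9, n=1.3, t=298.15, n_s=36):
    """Output current [A] of an Ns-cell module at terminal voltage v [V]."""
    k, q = 1.380649e-23, 1.602176634e-19   # Boltzmann const., electron charge
    v_t = k * t / q                         # thermal voltage (~25.7 mV at 25 C)
    return i_ph - i_0 * (math.exp(v / (n * v_t * n_s)) - 1.0)
```

Sweeping `v` from zero upward reproduces the familiar I-V curve: the current stays near the photocurrent at low voltage, then drops sharply past the knee, which is the behavior the radiation- and temperature-dependent curves above exhibit.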


Author(s):  
O. C. Akgun ◽  
J. Mei

This paper presents the design of an ultra-low-energy neural network that uses time-mode signal processing. Handwritten digit classification using a single-layer artificial neural network (ANN) with a Softmin-based activation function is described as an implementation example. To realize time-mode operation, the presented design makes use of monostable multivibrator-based multiplying analogue-to-time converters, fixed-width pulse generators, and basic digital gates. The time-mode digit classification ANN was designed in a standard 0.18 μm CMOS IC process and operates from a supply voltage of 0.6 V. The system operates on the MNIST database of handwritten digits with quantized neuron weights and achieves a classification accuracy of 88%, which is typical for single-layer ANNs, while dissipating 65.74 pJ per classification at a rate of 2.37 k classifications per second. This article is part of the theme issue 'Harmonizing energy-autonomous computing and intelligence'.
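A Softmin activation of the kind named above can be sketched as follows (numerically stabilized form). Softmin favors the *smallest* pre-activation, a natural fit for time-mode circuits where shorter times can encode stronger responses:

```python
import math

# Softmin: softmin(x)_i = exp(-x_i) / sum_j exp(-x_j).
# Subtracting the minimum before exponentiating avoids overflow
# without changing the result.

def softmin(xs):
    """Return a probability vector that peaks at the smallest input."""
    m = min(xs)
    exps = [math.exp(-(x - m)) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]
```

The outputs sum to one, and the class with the smallest input receives the largest probability.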


2021
Author(s):  
Jong Soo Kim ◽  
Yongil Cho ◽  
Tae Ho Lim

Abstract An orthogonal neural network (ONN), a new deep-learning structure for medical image localization, is developed and presented in this paper. This method is simple, efficient, and completely different from a convolutional neural network (CNN). The diagnostic performance of the ONN for detecting the location of pneumothorax in chest X-rays was assessed and compared to that of a CNN. An area under the receiver operating characteristic (ROC) curve (AUC) of 0.870, an accuracy of 85.3%, a sensitivity of 75.0%, and a specificity of 86.5% were achieved; the ONN outperformed the CNN. The ONN with a sigmoid activation function for all nodes clearly outperformed the ONN with the rectified linear unit (ReLU) activation function for all nodes other than the output nodes. In addition, applying the ONN and CNN to predict the location of the glottis in laryngeal images yielded accurate and adjacent prediction rates of 70.5% and 20.5%, respectively, with the ONN. The prediction accuracy of the ONN compared favorably with that of the CNN, and the ONN required only approximately 10% of the computations of a CNN trained on images with an input resolution of 256 × 256 pixels. A fully connected small artificial neural network (ANN), selected by comparing the test results of several dozen small ANN models, achieved the best location prediction performance on medical images. This study demonstrates that an ONN can serve as a quick selection criterion when comparing ANN models for image localization, since the ONN performed comparably to the selected ANN model.
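The reported accuracy, sensitivity, and specificity all derive from a 2 × 2 confusion matrix; a minimal sketch:

```python
# Binary diagnostic metrics from confusion-matrix counts:
# tp = true positives, fp = false positives,
# fn = false negatives, tn = true negatives.

def binary_metrics(tp, fp, fn, tn):
    total = tp + fp + fn + tn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
    }
```

Sensitivity measures how many actual pneumothorax cases are caught, while specificity measures how many healthy cases are correctly cleared; the AUC summarizes this trade-off across thresholds.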


Author(s):  
Putri Marhida Badarudin ◽  
Rozaida Ghazali ◽  
Abdullah Alahdal ◽  
N.A.M. Alduais ◽  
...  

This work develops an Artificial Neural Network (ANN) model for performing Breast Cancer (BC) classification tasks. The design of the model considers different ANN architectures from the literature and chooses the one with the best performance. This ANN model aims to classify BC cases more systematically and more quickly, providing a tool in the field of medicine for detecting breast cancer among women. The ANN classification model achieves an average accuracy of 98.88% with an average run time of 0.182 seconds. Using this model, BC classification can be carried out much faster than manual diagnosis, and with good accuracy.
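The paper selects its architecture from the literature; purely as a generic illustration, a minimal single-hidden-layer binary classifier of the kind compared in such studies can be sketched in plain NumPy (data, size, and hyperparameters below are illustrative assumptions, not the paper's):

```python
import numpy as np

# Minimal single-hidden-layer binary classifier trained by full-batch
# gradient descent on the cross-entropy loss. Bias terms are handled
# by appending a constant-one column to the inputs and hidden layer.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(x, y, hidden=8, lr=1.0, epochs=3000, seed=0):
    x = np.hstack([x, np.ones((len(x), 1))])          # input bias column
    rng = np.random.default_rng(seed)
    w1 = rng.normal(0, 0.5, (x.shape[1], hidden))
    w2 = rng.normal(0, 0.5, (hidden + 1, 1))
    for _ in range(epochs):
        h = np.hstack([sigmoid(x @ w1), np.ones((len(x), 1))])
        p = sigmoid(h @ w2)                           # predicted probability
        g = p - y                                     # dLoss/dlogit
        w2 -= lr * h.T @ g / len(x)
        gh = (g @ w2[:-1].T) * h[:, :-1] * (1 - h[:, :-1])
        w1 -= lr * x.T @ gh / len(x)
    return w1, w2

def predict(x, w1, w2):
    x = np.hstack([x, np.ones((len(x), 1))])
    h = np.hstack([sigmoid(x @ w1), np.ones((len(x), 1))])
    return (sigmoid(h @ w2) > 0.5).astype(int)
```

On a real BC dataset the inputs would be diagnostic features and the output a benign/malignant label; the architecture comparison the paper performs amounts to varying `hidden` (and depth) and keeping the best-scoring configuration.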


2020, Vol. 58 (1), pp. 25-38
Author(s):  
Sandi Baressi Šegota ◽  
Daniel Štifanić ◽  
Kazuhiro Ohkura ◽  
Zlatan Car

An artificial neural network (ANN) approach is proposed for the problem of estimating the propeller torques of a frigate with a combined diesel-electric and gas (CODLAG) propulsion system. The authors use a multilayer perceptron (MLP) feed-forward ANN trained on a dataset that describes the decay state coefficients as outputs and system parameters as inputs, with the goal of determining the propeller torques by removing the decay state coefficients and using the torque values of the starboard and port propellers as outputs. A total of 53,760 ANNs are trained, 26,880 for each propeller, covering a total of 8,960 parameter combinations. The results are evaluated using the mean absolute error (MAE) and the coefficient of determination (R2). The best results are an MAE of 2.68 Nm for the starboard propeller and an MAE of 2.58 Nm for the port propeller, with the following ANN configurations, respectively: two hidden layers with 32 neurons each and identity activation, and three hidden layers with 16, 32, and 16 neurons and identity activation. Both configurations achieve an R2 value higher than 0.99.
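The two evaluation metrics used above, MAE and the coefficient of determination R2, can be sketched as:

```python
import numpy as np

# Regression metrics used to rank the trained ANNs:
# MAE = mean(|y - y_hat|); R2 = 1 - SS_res / SS_tot.

def mae(y_true, y_pred):
    return np.mean(np.abs(np.asarray(y_true, float) - np.asarray(y_pred, float)))

def r2(y_true, y_pred):
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total sum of squares
    return 1.0 - ss_res / ss_tot
```

MAE is reported in the target's own units (here Nm), while R2 close to 1 means the model explains nearly all of the torque variance, which is why both are reported together.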



