Switching Angles Calculation in Multilevel Inverters Using Triangular Number Sequence – A THD Minimization Approach

2020 ◽  
Vol 22 (1) ◽  
pp. 49-55
Author(s):  
Jesus Aguayo-Alquicira ◽  
Susana Estefany De León-Aldaco ◽  
Jorge Hugo Calleja-Gjumlich ◽  
Abraham Claudio-Sánchez
2020 ◽  
Vol 18 ◽  
pp. 419-424
Author(s):  
M. Buzdugan ◽  
C. Ciugudeanu ◽  
A. Campianu

2010 ◽  
Vol 4 (1) ◽  
pp. 51-57
Author(s):  
G. Mahesh Manivanna Kumar ◽  
S. Rama Reddy

2012 ◽  
Vol 2 (3) ◽  
pp. 16-21
Author(s):  
Surya Kalavathi M ◽  
K. Mahendran ◽  
B. Indhumathy

Author(s):  
Øystein Linnebo

How are the natural numbers individuated? That is, what is our most basic way of singling out a natural number for reference in language or in thought? According to Frege and many of his followers, the natural numbers are cardinal numbers, individuated by the cardinalities of the collections that they number. Another answer regards the natural numbers as ordinal numbers, individuated by their positions in the natural number sequence. Some reasons to favor the second answer are presented. This answer is therefore developed in more detail, involving a form of abstraction on numerals. Based on this answer, a justification for the axioms of Dedekind–Peano arithmetic is developed.
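For reference, the Dedekind–Peano axioms whose justification the abstract mentions can be stated as follows (this is the standard first-order formulation with a successor function S, not the author's specific presentation):

```latex
\begin{align*}
&\text{(P1)} \quad 0 \in \mathbb{N} \\
&\text{(P2)} \quad \forall n \in \mathbb{N},\ S(n) \in \mathbb{N} \\
&\text{(P3)} \quad \forall n \in \mathbb{N},\ S(n) \neq 0 \\
&\text{(P4)} \quad \forall m, n \in \mathbb{N},\ S(m) = S(n) \rightarrow m = n \\
&\text{(P5)} \quad \bigl(\varphi(0) \wedge \forall n\,(\varphi(n) \rightarrow \varphi(S(n)))\bigr) \rightarrow \forall n\,\varphi(n)
\end{align*}
```

On the ordinal reading sketched in the abstract, (P1)–(P4) reflect the generation of positions in the natural number sequence, while the induction scheme (P5) reflects that every position is reached by finitely many applications of the successor operation.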


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4772
Author(s):  
Richard N. M. Rudd-Orthner ◽  
Lyudmila Mihaylova

A repeatable and deterministic non-random weight initialization method for the convolutional layers of neural networks is examined with the Fast Gradient Sign Method (FGSM). The FGSM approach is used as a technique to measure the effect of the initialization under controlled distortions in transfer learning, varying the numerical similarity of the datasets. The focus is on convolutional layers in which earlier learning is induced through the use of striped forms for image classification. This provides higher accuracy in the first epoch, with improvements of 3–5% in a well-known benchmark model and of ~10% in a color image dataset (MTARSI2) using a dissimilar model architecture. The proposed method is robust in comparison with limit optimization approaches such as Glorot/Xavier and He initialization. Arguably the approach forms a new category of weight initialization methods: a number-sequence substitution for random numbers, without a tether to the dataset. When examined under the FGSM approach with transfer learning, the proposed method is less compromised against the original cross-validation dataset at higher distortions (numerically dissimilar datasets), achieving ~31% accuracy instead of ~9%. This indicates higher retention of the original fitting in transfer learning.
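The core idea of a number-sequence substitution for random initialization can be sketched as follows. This is an illustrative example, not the paper's exact scheme: here a simple deterministic sequence (fractional parts of multiples of the golden ratio) is mapped into a symmetric interval scaled like He initialization, so every run produces the identical weight tensor.

```python
import numpy as np

def deterministic_conv_init(shape):
    """Fill a conv-layer weight tensor (out_ch, in_ch, kh, kw) from a
    fixed number sequence instead of random draws, so the
    initialization is repeatable and deterministic.

    Illustrative sketch: a low-discrepancy sequence frac(i * phi)
    replaces the random draws, scaled to the He-initialization range.
    """
    fan_in = shape[1] * shape[2] * shape[3]
    limit = np.sqrt(2.0 / fan_in)  # He-style scale for ReLU networks
    n = int(np.prod(shape))
    phi = (np.sqrt(5.0) - 1.0) / 2.0
    # Deterministic sequence in [0, 1): fractional part of i * phi
    seq = np.modf(np.arange(1, n + 1) * phi)[0]
    # Map to [-limit, limit); identical tensor on every call
    weights = (seq * 2.0 - 1.0) * limit
    return weights.reshape(shape)

w1 = deterministic_conv_init((8, 3, 3, 3))
w2 = deterministic_conv_init((8, 3, 3, 3))
assert np.array_equal(w1, w2)  # repeatable: no random state involved
```

Because the sequence has no tether to any dataset or random seed, the same layer shape always yields the same starting weights, which is what makes the first-epoch behavior comparable across experiments.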


2020 ◽  
Vol 6 (4) ◽  
pp. 53-62
Author(s):  
Majid T. Fard ◽  
Waqar A. Khan ◽  
Jiangbiao He ◽  
Nathan Weise ◽  
Mostafa Abarzadeh
