Group-theoretic insights on the vibration of symmetric structures in engineering

Author(s):  
Alphose Zingoni

Group theory has been used to study various problems in physics and chemistry for many years. Relatively recently, applications have emerged in engineering, where problems of the vibration, bifurcation and stability of systems exhibiting symmetry have been studied. From an engineering perspective, the main attraction of group-theoretic methods has been their potential to reduce computational effort in the analysis of large-scale problems. In this paper, we focus on vibration problems in structural mechanics and reveal some of the insights and qualitative benefits that group theory affords. These include an appreciation of all the possible symmetries of modes of vibration, the prediction of the number of modes of a given symmetry type, the identification of modes associated with the same frequencies, the prediction of nodal lines and stationary points of a vibrating system, and the untangling of clustered frequencies.
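The mode-counting insight can be made concrete with the standard character-theoretic reduction formula n_i = (1/|G|) Σ_g χ_i(g) χ(g). Below is a minimal sketch (not taken from the paper) for a structure with C3v symmetry; the reducible displacement character chi is hypothetical, chosen only for illustration.

```python
# A minimal sketch: counting vibration modes of each symmetry type for a
# structure with C3v symmetry, via the standard character-theoretic
# reduction formula  n_i = (1/|G|) * sum_g  chi_i(g) * chi(g).
# The reducible character chi below is hypothetical, for illustration only.

# C3v: conjugacy classes E, 2C3, 3sigma_v (class sizes 1, 2, 3)
class_sizes = [1, 2, 3]
order = sum(class_sizes)  # |G| = 6

# Characters of the irreducible representations of C3v
irreps = {
    "A1": [1, 1, 1],
    "A2": [1, 1, -1],
    "E":  [2, -1, 0],
}

# Hypothetical character of the (reducible) displacement representation
# of some symmetric structure; in practice chi(g) is obtained from the
# nodes left unmoved by each symmetry operation.
chi = [6, 0, 2]

for name, chi_i in irreps.items():
    n = sum(k * a * b for k, a, b in zip(class_sizes, chi_i, chi)) // order
    print(f"{name}: {n} mode(s) of this symmetry type")

# Modes belonging to the 2-D irrep E come in degenerate pairs sharing the
# same natural frequency -- one of the qualitative predictions group
# theory affords.
```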


Author(s):  
Teije de Jong

Abstract In this series of papers I attempt to answer the question of how the Babylonian scholars arrived at their mathematical theory of planetary motion. Papers I and II were devoted to system A theory of the outer planets and of the planet Venus. In this third and last paper I study system A theory of the planet Mercury. Our knowledge of the Babylonian theory of Mercury is at present based on twelve Ephemerides and seven Procedure Texts. Three computational systems of Mercury are known, all of system A. System A1 is represented by nine Ephemerides covering the years 190 BC to 100 BC and system A2 by two Ephemerides covering the years 310 to 290 BC. System A3 is known from a Procedure Text and from Text M, an Ephemeris of the last evening visibility of Mercury for the years 424 to 403 BC. From an analysis of the Babylonian observations of Mercury preserved in the Astronomical Diaries and Planetary Texts we find: (1) that dates on which Mercury reaches its stationary points are not recorded, (2) that Normal Star observations on or near dates of first and last appearance of Mercury are rare (about once every twenty observations), and (3) that about one out of every seven pairs of first and last appearances is recorded as “omitted” when Mercury remains invisible due to a combination of the low inclination of its orbit to the horizon and the attenuation by atmospheric extinction. To be able to study the way in which the Babylonian scholars constructed their system A models of Mercury from the available observational material, I have created a database of synthetic observations by computing the dates and zodiacal longitudes of all first and last appearances and of all stationary points of Mercury in Babylon between 450 and 50 BC. Of the data required for the construction of an ephemeris, synodic time intervals Δt can be derived directly from observed dates, but zodiacal longitudes and synodic arcs Δλ must be determined in some other way. Because positions of Mercury with respect to Normal Stars can only rarely be determined at its first or last appearance, I propose that the Babylonian scholars used the relation Δλ = Δt − 3;39,40, which follows from the period relations, to compute synodic arcs of Mercury from the observed synodic time intervals. An additional difficulty in the construction of system A step functions is that most amplitudes are larger than the associated zone lengths, so that in the computation of the longitudes of the synodic phases of Mercury quite often two zone boundaries are crossed. This complication makes it difficult to understand how the Babylonian scholars managed to construct system A models for Mercury that fitted the observations so well, because finding the best possible step function would require an excessive amount of computational effort in a complicated trial-and-error fitting process with four or five free parameters. To circumvent this difficulty I propose that the Babylonian scholars used an alternative, more direct method to fit system A-type models to the observational data of Mercury. This alternative method is based on the fact that after three synodic intervals Mercury returns to a position in the sky which is on average only 17.4° less in longitude. Using reduced amplitudes of about 14°–25° but keeping the same zone boundaries significantly simplifies the computation of what I will call 3-synarc system A models of Mercury.
A full ephemeris of a synodic phase of Mercury can then be composed by combining three columns of longitudes computed with 3-synarc step functions, each column starting with a longitude of Mercury one synodic event apart. Confirmation that this method was indeed used by the Babylonian astronomers comes from Text M (BM 36551+), a very early ephemeris of the last appearances in the evening of Mercury from 424 to 403 BC, computed in three columns according to System A3. Based on an analysis of Text M, I suggest that around 400 BC the initial approach in system A modelling of Mercury may have been directed towards choosing “nice” sexagesimal numbers for the amplitudes of the system A step functions, while in the later, final models, dating from around 300 BC onwards, more emphasis was put on selecting numerical values for the amplitudes such that they were related by simple ratios. The fact that different ephemeris periods were used for each of the four synodic phases of Mercury in the later models may be related to the selection of a best-fitting set of system A step function amplitudes for each synodic phase.
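The relation Δλ = Δt − 3;39,40 quoted above lends itself to a small illustration. The sketch below converts the sexagesimal constant and applies the relation to made-up synodic time intervals; it illustrates the arithmetic only, not a reconstruction of the Babylonian procedure.

```python
# A small sketch of the proposed reduction for Mercury: converting the
# sexagesimal constant 3;39,40 and applying the relation
# delta_lambda = delta_t - 3;39,40 to turn observed synodic time
# intervals (in tithis) into synodic arcs (in degrees).
# The sample intervals below are made up for illustration.

def sexagesimal(integer_part, *fractional_digits):
    """Convert a Babylonian sexagesimal number like 3;39,40 to a float."""
    value = float(integer_part)
    for k, d in enumerate(fractional_digits, start=1):
        value += d / 60**k
    return value

C = sexagesimal(3, 39, 40)   # 3;39,40  ==  3.6611...

synodic_times = [120.0, 115.5, 118.2]   # hypothetical delta_t values, in tithis
for dt in synodic_times:
    dlam = dt - C
    print(f"delta_t = {dt:7.2f} tithis  ->  delta_lambda = {dlam:7.2f} degrees")
```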


Author(s):  
F. Ma ◽  
J. H. Hwang

Abstract In analyzing a nonclassically damped linear system, one common procedure is to neglect the nonclassical damping terms and retain the classical ones. This approach is termed the method of approximate decoupling. For large-scale systems, the computational effort of approximate decoupling is at least an order of magnitude smaller than that of the method of complex modes. In this paper, the error introduced by approximate decoupling is evaluated. A tight error bound, which can be computed with relative ease, is given for this method of approximate solution. The role that modal coupling plays in the control of error is clarified. It is shown that, if the normalized damping matrix is strongly diagonally dominant, adequate frequency separation is not necessary to ensure small errors.
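A rough numerical illustration of approximate decoupling follows; the matrices are hypothetical, and the crude coupling indicator at the end is not the paper's error bound.

```python
import numpy as np
from scipy.linalg import eigh

# A minimal sketch of approximate decoupling for a nonclassically damped
# system  M x'' + C x' + K x = 0.  The matrices below are hypothetical.
M = np.diag([2.0, 1.0, 1.0])
K = np.array([[ 4.0, -2.0,  0.0],
              [-2.0,  4.0, -2.0],
              [ 0.0, -2.0,  2.0]])
C = np.array([[ 0.6, -0.2,  0.0],
              [-0.2,  0.5, -0.1],
              [ 0.0, -0.1,  0.3]])

# Undamped modes: solve the generalized eigenproblem K phi = w^2 M phi.
w2, Phi = eigh(K, M)          # Phi is mass-normalized: Phi^T M Phi = I

# Transform damping to modal coordinates.
C_modal = Phi.T @ C @ Phi

# Approximate decoupling: keep the classical (diagonal) part, neglect
# the nonclassical off-diagonal coupling terms.
C_classical = np.diag(np.diag(C_modal))
coupling = C_modal - C_classical

# A crude indicator of how strong the neglected coupling is; the paper's
# error bound is tighter and accounts for frequency separation.
print("modal damping matrix:\n", np.round(C_modal, 4))
print("neglected off-diagonal part (Frobenius norm):",
      np.round(np.linalg.norm(coupling), 4))
```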


2004 ◽  
Vol 20 (3) ◽  
pp. 757-778 ◽  
Author(s):  
Anil K. Chopra ◽  
Rakesh K. Goel ◽  
Chatpan Chintanapakdee

The modal pushover analysis (MPA) procedure, which includes the contributions of all significant modes of vibration, estimates seismic demands much more accurately than current pushover procedures used in structural engineering practice. Outlined in this paper is a modified MPA (MMPA) procedure wherein the response contributions of higher vibration modes are computed by assuming the building to be linearly elastic, thus reducing the computational effort. After outlining such a modified procedure, its accuracy is evaluated for a variety of frame buildings and ground motion ensembles. Although it is not necessarily more accurate than the MPA procedure, the MMPA procedure is an attractive alternative for practical application because it leads to a larger estimate of seismic demands, improving the accuracy of the MPA results in some cases (relative to nonlinear response history analysis) and increasing their conservatism in others. However, such conservatism is unacceptably large for lightly damped systems, with damping significantly less than 5%. Thus the MMPA procedure is not recommended for such systems.
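A schematic sketch of the MMPA combination step, under the assumption that modal demands are combined with an SRSS-type rule as in MPA; all numbers are made up.

```python
import math

# Schematic MMPA combination step. The first-mode demand r1 comes from a
# nonlinear static (pushover) analysis; the higher-mode demands are
# computed assuming the building remains linearly elastic, which is what
# reduces the computational effort. Demands are then combined with an
# SRSS-type modal combination rule, as in MPA. Numbers are hypothetical.

r1_nonlinear = 2.8             # e.g. peak story drift (%), from pushover
r_higher_elastic = [0.9, 0.4]  # higher-mode contributions, elastic analysis

r_total = math.sqrt(r1_nonlinear**2 + sum(r**2 for r in r_higher_elastic))
print(f"MMPA demand estimate: {r_total:.2f}")
```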


2003 ◽  
Vol 125 (4) ◽  
pp. 234-241 ◽  
Author(s):  
Vincent Y. Blouin ◽  
Michael M. Bernitsas ◽  
Denby Morrison

In structural redesign (inverse design), selection of the number and type of performance constraints is a major challenge. This issue is directly related to the computational effort and, most importantly, to the success of the optimization solver in finding a solution. These issues are the focus of this paper, which provides and discusses techniques that can help designers formulate a well-posed integrated complex redesign problem. LargE Admissible Perturbations (LEAP) is a general methodology which solves redesign problems of complex structures with, among others, free vibration, static deformation, and forced response amplitude constraints. The existing algorithm, referred to as the Incremental Method, is improved in this paper for problems with static and forced response amplitude constraints. This new algorithm, referred to as the Direct Method, offers a comparable level of accuracy at lower computational cost and provides robustness in solving large-scale redesign problems in the presence of damping, nonstructural mass, and fluid-structure interaction effects. Common redesign problems include several natural frequency constraints and forced response amplitude constraints at various frequencies of excitation. Several locations on the structure and several degrees of freedom can be constrained simultaneously. The designer must exercise judgment and physical intuition to limit the number of constraints and, consequently, the computational time. Strategies and guidelines are discussed. Such techniques are presented and applied to a 2,694-degree-of-freedom offshore tower.
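As a toy stand-in for the kind of constrained redesign problem described above (far simpler than the LEAP formulation, and with hypothetical numbers), the sketch below redesigns a 2-DOF chain subject to a natural-frequency constraint.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.optimize import minimize, NonlinearConstraint

# Toy redesign problem: scale the two stiffnesses of a 2-DOF chain to
# stay as close as possible to the baseline design while satisfying a
# lower bound on the first natural frequency. All numbers hypothetical.
m = np.diag([1.0, 1.0])

def first_frequency(x):
    k1, k2 = x
    K = np.array([[k1 + k2, -k2],
                  [-k2,      k2]])
    w2 = eigh(K, m, eigvals_only=True)
    return np.sqrt(w2[0])        # rad/s

x0 = np.array([100.0, 100.0])    # baseline stiffnesses

# Constraint: first natural frequency at least 8 rad/s.
freq_con = NonlinearConstraint(first_frequency, 8.0, np.inf)

# Objective: minimal change from the baseline design.
res = minimize(lambda x: np.sum((x - x0)**2), x0,
               constraints=[freq_con], bounds=[(10, 500), (10, 500)])
print("redesigned stiffnesses:", np.round(res.x, 2))
print("first frequency:", round(first_frequency(res.x), 3), "rad/s")
```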


2014 ◽  
Vol 18 (11) ◽  
pp. 4579-4600 ◽  
Author(s):  
P. Da Ronco ◽  
C. De Michele

Abstract. Snow cover maps provide information of great practical interest for hydrologic purposes: when combined with point values of snow water equivalent (SWE), they enable estimation of the regional snow resource. In this context, Earth observation satellites are a valuable tool for evaluating large-scale snow distribution and extent. The MODIS (MODerate resolution Imaging Spectroradiometer, on board the Terra and Aqua satellites) daily Snow Covered Area product has been widely tested and proved appropriate for hydrologic applications. However, within a daily map the presence of cloud cover can hide the ground, obstructing snow detection. Here, we consider MODIS binary products for daily snow mapping over the Po River basin. Ten years (2003–2012) of MOD10A1 and MYD10A1 snow maps have been analysed and processed with the support of a 500 m resolution Digital Elevation Model (DEM). We first investigate the issue of cloud obstruction, highlighting its dependence on altitude and season. Snow maps suffer from overcast conditions mainly in mountainous areas and during the melting period; cloud cover thus most affects precisely those areas where snow detection is of greatest interest. In spring, the average percentage of area lying beneath clouds is on the order of 70% for altitudes above 1000 m a.s.l. Then, starting from previous studies, we propose a cloud-removal procedure and apply it to a wide area of high geomorphological heterogeneity, the Po River basin. In conceiving the new procedure, our first target was to preserve the daily temporal resolution of the product. Regional snow and land lines were estimated to capture the dependence of snow cover on elevation. When there was not enough information within the cloud-free areas of the same day, we used temporal filters, with the aim of reproducing the micro-cycles that characterize the transition altitudes, where snow does not persist over the entire winter. In the validation stage, the proposed procedure was compared against others, showing improved performance for our case study. Accuracy is assessed by applying the procedure to clear-sky maps masked with additional cloud cover; the average value is higher than 95% over 40 days chosen across all seasons. The procedure also has advantages in terms of input data and computational effort requirements.
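A simplified sketch of an elevation-based cloud-removal step in the spirit of the snow-line/land-line idea follows; the temporal filters of the full procedure are omitted, and the tiny map, class codes, and elevations are synthetic.

```python
import numpy as np

# Simplified elevation-based cloud removal. Class codes (hypothetical):
# 0 = land, 1 = snow, 2 = cloud. The DEM and "truth" map are synthetic.
rng = np.random.default_rng(0)
dem = rng.uniform(100, 3000, size=(10, 10))   # elevation, m a.s.l.
snow_map = np.where(dem > 1500, 1, 0)         # synthetic ground truth
cloudy = rng.random((10, 10)) < 0.3           # 30% cloud obstruction
observed = np.where(cloudy, 2, snow_map)

clear_snow = (observed == 1)
clear_land = (observed == 0)

# Regional snow line / land line from the day's cloud-free pixels.
snow_line = dem[clear_snow].min() if clear_snow.any() else np.inf
land_line = dem[clear_land].max() if clear_land.any() else -np.inf

# Reclassify cloudy pixels by elevation; ambiguous pixels stay cloudy
# (the full procedure would resolve them with temporal filters).
filled = observed.copy()
filled[(observed == 2) & (dem >= snow_line)] = 1
filled[(observed == 2) & (dem <= land_line)] = 0

print("cloudy pixels before:", int((observed == 2).sum()),
      "after:", int((filled == 2).sum()))
```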


2012 ◽  
Vol 3 (2) ◽  
pp. 34-50
Author(s):  
A. Chandramouli ◽  
L. Vivek Srinivasan ◽  
T. T. Narendran

This paper addresses the Capacitated Vehicle Routing Problem (CVRP) with a homogeneous fleet of vehicles serving a large customer base. The authors propose a multi-phase heuristic that clusters the nodes based on proximity, orients them along a route, and allots vehicles. For the final phase of determining the routes for each vehicle, they have developed a Particle Swarm Optimization (PSO) approach. Benchmark datasets as well as hypothetical datasets have been used for computational trials. The proposed heuristic is found to perform exceedingly well even for large problem instances, both in terms of solution quality and of computational effort.
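A compact sketch of the cluster-first, route-second idea follows. The sweep clustering and nearest-neighbour routing below are generic stand-ins, not the paper's heuristic; in particular, a plain nearest-neighbour tour replaces the PSO routing phase to keep the sketch short. All data are made up.

```python
import math, random

random.seed(1)
depot = (0.0, 0.0)
customers = [(random.uniform(-10, 10), random.uniform(-10, 10))
             for _ in range(15)]
demand = [random.randint(1, 5) for _ in customers]
capacity = 15

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Phase 1: sweep customers by polar angle around the depot into
# capacity-feasible clusters.
order = sorted(range(len(customers)),
               key=lambda i: math.atan2(customers[i][1], customers[i][0]))
clusters, current, load = [], [], 0
for i in order:
    if load + demand[i] > capacity:
        clusters.append(current)
        current, load = [], 0
    current.append(i)
    load += demand[i]
if current:
    clusters.append(current)

# Phase 2: route each cluster (nearest-neighbour as a stand-in for PSO).
for v, cluster in enumerate(clusters):
    route, pos, todo = [], depot, set(cluster)
    while todo:
        nxt = min(todo, key=lambda i: dist(pos, customers[i]))
        route.append(nxt)
        pos = customers[nxt]
        todo.remove(nxt)
    stops = [customers[i] for i in route]
    length = sum(dist(a, b) for a, b in zip([depot] + stops, stops + [depot]))
    print(f"vehicle {v}: customers {route}, route length {length:.1f}")
```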


2021 ◽  
Author(s):  
Hyeyoung Koh ◽  
Hannah Beth Blum

This study presents a machine learning-based approach to sensitivity analysis that examines how parameters affect a given structural response while accounting for uncertainty. Reliability-based sensitivity analysis involves repeated evaluations of the performance function incorporating uncertainties to estimate the influence of a model parameter, which can lead to prohibitive computational costs. This challenge is exacerbated for large-scale engineering problems, which often carry a large number of uncertain parameters. The proposed approach is based on feature selection algorithms that rank feature importance and remove redundant predictors during model development, improving model generality and training performance by focusing only on the significant features. The approach allows sensitivity analysis of structural systems to be performed by providing feature rankings at reduced computational effort. The proposed approach is demonstrated on two designs of a two-bay, two-story planar steel frame with different failure modes: inelastic instability of a single member and progressive yielding. The feature variables in the data are uncertainties, including material yield strength, Young's modulus, frame sway imperfection, and residual stress. Monte Carlo sampling is used to generate random realizations of the frames from published distributions of the feature parameters, and the response variable is the frame ultimate strength obtained from finite element analyses. Decision trees are trained to identify important features. Feature rankings are derived by four feature selection techniques: impurity-based importance, permutation importance, SHAP, and Spearman's correlation. The predictive performance of the model built from the important features is discussed using the Matthews correlation coefficient, an evaluation metric suited to imbalanced datasets. Finally, the results are compared with those from reliability-based sensitivity analysis on the same example frames to show the validity of the feature selection approach. As the proposed machine learning-based approach produces the same results as the reliability-based sensitivity analysis with improved computational efficiency and accuracy, it could be extended to other structural systems.
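A minimal sketch of this feature-selection workflow using scikit-learn, with synthetic data standing in for the paper's finite element results; the limit state below is hypothetical.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.inspection import permutation_importance

# Monte Carlo samples of the uncertain parameters are the features; a
# binary response stands in for the frame-strength outcome. A decision
# tree supplies impurity-based and permutation importances.
rng = np.random.default_rng(42)
n = 2000
fy    = rng.normal(355, 25, n)       # yield strength (MPa)
E     = rng.normal(200e3, 8e3, n)    # Young's modulus (MPa)
sway  = rng.normal(0, 1/500, n)      # sway imperfection (rad)
resid = rng.normal(0.3, 0.05, n)     # residual stress ratio
X = np.column_stack([fy, E, sway, resid])

# Hypothetical limit state: strength dominated by yield strength and sway.
y = (0.8 * (fy - 355) / 25 - 0.5 * np.abs(sway) * 500
     + rng.normal(0, 0.3, n)) > 0

tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X, y)

names = ["fy", "E", "sway", "residual"]
print("impurity-based importance:",
      dict(zip(names, np.round(tree.feature_importances_, 3))))

perm = permutation_importance(tree, X, y, n_repeats=10, random_state=0)
print("permutation importance:   ",
      dict(zip(names, np.round(perm.importances_mean, 3))))
```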


2010 ◽  
Vol 2010 ◽  
pp. 1-16 ◽  
Author(s):  
Paulraj S. ◽  
Sumathi P.

In most real-world optimization problems, the objective function and the constraints can be formulated as linear functions of independent variables. Linear Programming (LP) is the process of optimizing a linear function subject to a finite number of linear equality and inequality constraints. Solving linear programming problems efficiently has always been a fascinating pursuit for computer scientists and mathematicians. The computational complexity of any linear programming problem depends on the number of constraints and variables of the LP problem. Quite often, large-scale LP problems contain many constraints that are redundant, or that cause infeasibility, on account of inefficient formulation or errors in data input. The presence of redundant constraints does not alter the optimal solution(s); nevertheless, they may consume extra computational effort. Many researchers have proposed different approaches for identifying the redundant constraints in linear programming problems. This paper compares five such methods and discusses the efficiency of each by solving LP problems of various sizes, including Netlib problems. The algorithms of each method are coded in the C programming language. The computational results are presented and analyzed in this paper.
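One classical redundancy test (not necessarily among the five methods compared in the paper) can be sketched with SciPy: a constraint a_i·x ≤ b_i is redundant if maximizing a_i·x over the remaining constraints still cannot exceed b_i. The tiny LP below is made up, with the third constraint redundant by construction.

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])   # x + y <= 10 is implied by x <= 4, y <= 5
b = np.array([4.0, 5.0, 10.0])
bounds = [(0, None), (0, None)]

for i in range(len(b)):
    mask = np.arange(len(b)) != i
    # Maximize a_i.x  ==  minimize -a_i.x  over the remaining constraints.
    res = linprog(-A[i], A_ub=A[mask], b_ub=b[mask], bounds=bounds)
    if res.status == 0 and -res.fun <= b[i] + 1e-9:
        print(f"constraint {i} is redundant "
              f"(max a_i.x = {-res.fun:.2f} <= {b[i]})")
    else:
        print(f"constraint {i} is binding or necessary")
```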


Author(s):  
Nicolò Mazzi ◽  
Andreas Grothey ◽  
Ken McKinnon ◽  
Nagisa Sugishita

Abstract This paper proposes an algorithm to efficiently solve large optimization problems which exhibit a column bounded block-diagonal structure, where subproblems differ in right-hand side and cost coefficients. Similar problems are often tackled using cutting-plane algorithms, which allow for an iterative and decomposed solution of the problem. When solving subproblems is computationally expensive and the set of subproblems is large, cutting-plane algorithms may slow down severely. In this context we propose two novel adaptive oracles that yield inexact information, potentially much faster than solving the subproblem. The first adaptive oracle is used to generate inexact but valid cutting planes, and the second gives a valid upper bound on the true optimal objective. These two oracles progressively “adapt” towards the true exact oracle if provided with an increasing number of exact solutions, stored throughout the iterations. The adaptive oracles are embedded within a Benders-type algorithm able to handle inexact information. We compare the Benders algorithm with adaptive oracles against a standard Benders algorithm on a stochastic investment planning problem. The proposed algorithm substantially reduces the computational effort needed to obtain an ε-optimal solution: an illustrative case is 31.9 times faster for a 1.00% convergence tolerance and 15.4 times faster for a 0.01% tolerance.
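A schematic sketch of the underlying idea, restricted to subproblems that differ only in the right-hand side (the paper's oracles also handle varying cost coefficients): for an LP min c·y s.t. Ay ≥ h, any dual-feasible point from an earlier exact solve gives a valid lower bound, and hence a valid cut, for every right-hand side without re-solving. The data below are made up, and the sketch is not the paper's algorithm.

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0],
              [1.0, 3.0]])

def solve_exact(h):
    """Exact oracle: solve  min c.y  s.t.  A y >= h, y >= 0;
    return the optimal value and the dual solution."""
    res = linprog(c, A_ub=-A, b_ub=-h, bounds=[(0, None)] * 2,
                  method="highs")
    return res.fun, -res.ineqlin.marginals   # duals of A y >= h

stored_duals = []

def adaptive_lower_bound(h):
    """Inexact oracle: best valid lower bound from the stored duals.
    Dual feasibility does not depend on h, so lam.h is valid for any h."""
    if not stored_duals:
        return -np.inf
    return max(float(lam @ h) for lam in stored_duals)

# Solve a few subproblems exactly and store their duals ...
for h in [np.array([1.0, 2.0]), np.array([3.0, 1.0])]:
    val, lam = solve_exact(h)
    stored_duals.append(lam)
    print(f"exact:    h={h}, value={val:.3f}")

# ... then bound a new subproblem cheaply instead of solving it.
h_new = np.array([2.0, 2.0])
lb = adaptive_lower_bound(h_new)
val, _ = solve_exact(h_new)
print(f"adaptive lower bound at h={h_new}: {lb:.3f}  (exact value {val:.3f})")
```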

