Fundamental concepts in blast resistance evaluation of structures
This article is one of a selection of papers published in the Special Issue on Blast Engineering.

2009 ◽  
Vol 36 (8) ◽  
pp. 1292-1304 ◽  
Author(s):  
G. Razaqpur ◽  
Waleed Mekky ◽  
S. Foo

This study critically discusses the fundamental concepts used to evaluate the flexural and axial resistance of structures under blast loading, with emphasis on simplified methods based on a single-degree-of-freedom (SDOF) idealization. The paper begins by showing how to estimate the blast parameters for a given charge size and standoff distance, including side-on and reflected pressures, positive phase duration, and side-on and reflected impulses. Blast damage criteria are then defined in accordance with prevailing guidelines, and some of their shortcomings are discussed. To assess the impact of blast on the flexural safety and performance of structures, several simple methods are presented; these are either empirical or based on the principles of energy and momentum conservation, with analytical results given in closed form or as pressure–impulse (P–I) diagrams. The effect of strain rate on both blast-induced flexural deflection and the strength of structures, with particular emphasis on reinforced concrete, is also discussed.
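The SDOF idealization emphasized in the abstract can be sketched numerically. The snippet below (a minimal illustration, not the paper's method; the mass, stiffness, and pulse values in the test are hypothetical) integrates an undamped elastic SDOF system under a decaying triangular blast pulse using the central-difference scheme:

```python
def sdof_blast_response(mass, stiffness, peak_force, t_d, dt=1e-5, t_end=0.1):
    """Peak displacement of an undamped elastic SDOF system under a
    triangular blast pulse F(t) = peak_force * (1 - t/t_d) for t < t_d,
    zero afterwards. Central-difference time integration; all inputs in
    SI units. Initial conditions: x(0) = 0, v(0) = 0."""
    n = int(t_end / dt)
    a0 = peak_force / mass            # initial acceleration
    x_prev = 0.0
    x = 0.5 * a0 * dt * dt            # Taylor start-up step for x(dt)
    x_max = abs(x)
    for i in range(1, n):
        t = i * dt                    # time of the current displacement x
        f = peak_force * (1.0 - t / t_d) if t < t_d else 0.0
        a = (f - stiffness * x) / mass
        x_next = 2.0 * x - x_prev + a * dt * dt
        x_prev, x = x, x_next
        x_max = max(x_max, abs(x))
    return x_max
```

For pulses much shorter than the natural period, the peak response approaches the impulsive asymptote I/(mω) with I the pulse impulse, one of the closed-form limits that methods based on momentum conservation exploit.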

Author(s):  
Abdulah K. Ambusaidi ◽  
Rahma M. Al Sabri

This study investigated the impact of teaching physics via modeling on the acquisition of energy and momentum conservation concepts. The sample consisted of 91 female 11th-grade students from two schools in Al-Dakhiliyah Governorate in Oman. The experimental group (n = 45) was taught via the modeling method, and the control group (n = 46) was taught using a traditional method. The study lasted six weeks during the second semester of the 2013/2014 academic year. A teacher guide for teaching by models was designed and validated by a group of experts. To measure the acquisition of the energy and momentum concepts, an achievement test of 20 multiple-choice questions was used; its reliability was measured by the test-retest method (r = 0.79). The results revealed a statistically significant difference (p < .05) between the means of the experimental and control groups in favor of the experimental group. The study recommends that science teachers use models and modeling in their teaching, and that workshops be conducted to train supervisors, in-service teachers, and pre-service teachers in the construction and development of scientific models.
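The two statistics the abstract reports can be sketched in a few lines: test-retest reliability as a Pearson correlation, and a two-group mean comparison (here Welch's t, one common choice; the paper does not state which test variant was used):

```python
import math

def pearson_r(xs, ys):
    """Test-retest reliability as the Pearson correlation between two
    administrations of the same test to the same examinees."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def welch_t(a, b):
    """Welch's t statistic for comparing two independent group means
    (e.g. experimental vs control) without assuming equal variances."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)
```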


2021 ◽  
Vol 12 (10) ◽  
pp. 5168-5189
Author(s):  
Collins Ngu Nji et al.

The importance of Big Data and Predictive Analytics (BDPA) has been widely emphasized in recent years, but previous studies have focused largely on developed and emerging market economies. The present research investigates the concept in the setting of a developing market economy. In addition, the influence of transformational leadership (TL) on the adoption of BDPA, as well as its moderating role in the BDPA-Operational Performance (OP) nexus, has not been examined in prior studies. To address this gap, this study examines the combined effects of Mimetic Pressures (MP), the Firm's Human Skills (HS), and TL on the adoption of BDPA, together with the impact of TL on OP and its moderating role in the BDPA-OP nexus. Using a pre-tested questionnaire, the research hypotheses were tested on 145 survey responses. The results indicate that MP has a positive but insignificant effect on the building and selection of HS, while the adoption of BDPA is positively and significantly influenced by both MP and HS. Likewise, BDPA has a positive and significant impact on OP. TL has a positive but insignificant effect on the adoption of BDPA and a negative and insignificant effect on OP. The moderating effect of TL on the BDPA-OP nexus was found to be positive and apparently significant.
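A moderation hypothesis of the kind tested here is conventionally operationalized as an interaction term in a regression. The sketch below illustrates only that idea (variable names and data are hypothetical, and the study itself likely used a dedicated SEM/PLS tool rather than plain OLS):

```python
def ols(X, y):
    """Ordinary least squares via the normal equations X'X b = X'y,
    solved by Gaussian elimination with partial pivoting (adequate for
    a handful of predictors)."""
    k, n = len(X[0]), len(X)
    A = [[sum(X[i][p] * X[i][q] for i in range(n)) for q in range(k)]
         for p in range(k)]
    b = [sum(X[i][p] * y[i] for i in range(n)) for p in range(k)]
    for col in range(k):                      # forward elimination
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k                          # back substitution
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c]
                              for c in range(r + 1, k))) / A[r][r]
    return beta

def moderation_design(bdpa, tl):
    """Design matrix for OP ~ BDPA + TL + BDPA:TL; the coefficient on
    the product term captures TL's moderating role in the BDPA-OP nexus."""
    return [[1.0, x, m, x * m] for x, m in zip(bdpa, tl)]
```

A significant interaction coefficient (here beta[3]) is what supports a moderation claim.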


Author(s):  
K. K. Botros ◽  
M. Piazza ◽  
D. Abayarathna

The option of internally coating a new pipeline, or a section of an existing one, has emerged in recent years as competition in the energy marketplace has intensified and coating technologies have advanced from 100% solvent-based to 100% dry-based systems. Internally coated pipes entail additional capital cost but yield lower pressure losses, and hence lower compression power, fuel consumption, and emissions. The resulting trade-off needs to be assessed, and that assessment is the subject of the present paper. The paper first proposes a standardized method of reporting internal wall roughness parameters so that bare pipe and different coating technologies can be compared consistently. The second part evaluates the impact of internal coatings on flow efficiency in energy transmission pipeline systems. A Life Cycle Cost (LCC) economic tool and associated methods were developed to evaluate all of the internal-coating options, including the bare pipe option. The tool's main outputs are the incremental Cumulative Present Value Cost of Service (iCPVCOS) associated with each coating technology and a quantification of its benefits relative to bare pipe. This supports sound design and selection of the most cost-effective internal coating technology for new or existing pipeline systems. Examples of various scenarios involving a 2900 km pipeline coated with different internal coating technologies are discussed.
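The friction saving from a smoother internal wall can be illustrated with the standard Colebrook-White equation for the Darcy friction factor (a generic illustration, not the paper's LCC tool; the roughness and Reynolds-number values in the test are hypothetical):

```python
import math

def colebrook_f(rel_rough, reynolds, tol=1e-10):
    """Darcy friction factor from the Colebrook-White equation
    1/sqrt(f) = -2 log10( rel_rough/3.7 + 2.51/(Re sqrt(f)) ),
    solved by fixed-point iteration on 1/sqrt(f). Valid for fully
    turbulent pipe flow; rel_rough = roughness / diameter."""
    inv_sqrt_f = 2.0                      # initial guess (f ~ 0.25)
    for _ in range(100):
        new = -2.0 * math.log10(rel_rough / 3.7
                                + 2.51 * inv_sqrt_f / reynolds)
        if abs(new - inv_sqrt_f) < tol:
            break
        inv_sqrt_f = new
    return 1.0 / inv_sqrt_f ** 2
```

Since pressure drop scales roughly linearly with f at a given flow, the ratio of coated to bare friction factors gives a first estimate of the pressure-loss (and hence compression-power) saving that the LCC comparison monetizes.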


2014 ◽  
Vol 21 (5) ◽  
pp. 1048-1056 ◽  
Author(s):  
Eli Rotenberg ◽  
Aaron Bostwick

The scientific opportunities for microARPES and nanoARPES techniques are discussed, and the benefits these techniques gain at diffraction-limited light sources are presented, in particular the impact on spectromicroscopic ARPES (angle-resolved photoemission spectroscopy) of upgrading the Advanced Light Source to diffraction-limited performance. The most important consideration is whether space-charge broadening, which degrades energy and momentum resolution, will limit the possible benefits for ARPES. Calculations of energy broadening due to space-charge effects are presented over a wide range of parameters, and optimum conditions for ARPES are discussed. The conclusion is that spectromicroscopic ARPES will greatly benefit from the advent of diffraction-limited light sources; space-charge broadening will not be a limiting factor.
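The effect of energy broadening on a measured spectrum can be illustrated generically as a Gaussian convolution of an intrinsic line shape (an illustration only; the paper's space-charge calculations are more detailed, and all widths below are hypothetical):

```python
import math

def gaussian_broaden(energies, spectrum, sigma):
    """Convolve a spectrum sampled on a uniform energy grid with a
    Gaussian of standard deviation sigma, modelling instrumental plus
    space-charge energy broadening. Kernel truncated at 5 sigma."""
    de = energies[1] - energies[0]
    half = int(5 * sigma / de)
    kernel = [math.exp(-0.5 * (i * de / sigma) ** 2)
              for i in range(-half, half + 1)]
    norm = sum(kernel)
    kernel = [k / norm for k in kernel]
    n, out = len(spectrum), []
    for i in range(n):
        acc = 0.0
        for j, k in enumerate(kernel):
            idx = i + j - half
            if 0 <= idx < n:
                acc += k * spectrum[idx]
        out.append(acc)
    return out

def fwhm(energies, spectrum):
    """Full width at half maximum by linear scan (single-peak data)."""
    peak = max(spectrum)
    above = [e for e, s in zip(energies, spectrum) if s >= peak / 2]
    return above[-1] - above[0]
```

Comparing the FWHM before and after broadening shows how quickly a sharp quasiparticle peak washes out once the Gaussian width exceeds the intrinsic width, which is why keeping space-charge broadening below the intrinsic scales matters.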


Author(s):  
Vasiliy Savelievich Senashenko ◽  
Margarita Konstanitinovna Marushina

The object of this research is business games as an effective method of corporate training for acting executives and the candidate pool. The subject is the impact of business games on the development of specific executive competences. The authors compare Russian and foreign experience of conducting business games in corporate training, highlighting the aspects that corporate training specialists consider when designing and selecting business games for a high-profile and demanding audience. The study is based on an analysis of Russian and foreign publications on corporate training, business games, and innovative teaching methods for adults, as well as on empirical data acquired through the design and delivery of corporate training for the candidate pool of the aerospace industry by one of the co-authors in 2017-2018. It is concluded that the business-game method is effective if it adheres to andragogical training principles, including systematicity, learning through experience, and reflexivity; in that case it achieves such pedagogical goals as involving participants in the educational process, forming new practical skills, and developing competences. The scientific novelty lies in the detailed analysis of the impact of business games on the development of executive competences. The materials can be used in developing modular candidate-pool training programs in corporations.


2021 ◽  
Author(s):  
◽  
Deepak Singh

Software-Defined Networking (SDN) simplifies configuration complexity in computer communication networks by decoupling the control plane from the data plane in a switch. In SDN, the switch retains only the data plane and is configured by a logically centralised controller, which simplifies packet forwarding in the network. However, an SDN switch is sensitive to delay and packet loss, which significantly affect network performance. This thesis uses queueing theory to model and analyse the performance of OpenFlow-based SDN switches; OpenFlow is the de facto protocol for communication between an SDN switch and the controller. Three aspects of packet processing in an SDN switch are explored.

First, existing research has primarily modelled the output buffer of an SDN switch using two buffer-sharing mechanisms: the single shared buffer and the priority buffer. However, the effect of buffer dimensioning, which determines the minimum buffer capacity for a desired loss probability, has not been investigated for these mechanisms. The research in this thesis shows that a priority buffer in an SDN switch updates flow tables faster than a shared buffer, but at the cost of a higher buffer capacity. Second, much of the existing research has not investigated internal buffering of data packets, whereby a fraction of a data packet header is sent to the controller instead of the entire packet. To investigate its impact, a queueing model for an SDN switch with an internal buffer is developed; the investigation shows that under congestion the internal buffer improves network performance through lower delay and lower packet loss. Finally, existing research has focused on software switches in SDN, and very little has studied the performance of hardware switches. To characterise the performance of SDN-based hardware and software switches and identify the trade-offs between them, a unified queueing model is developed as an analytical tool for network engineers to predict delay and packet loss in their SDN deployments. The analysis shows that a hardware switch offers lower delay and lower packet loss than a software switch, although increasing controller involvement erodes the hardware switch's key benefit of forwarding packets at line rate.

This research guides network designers and analysts in selecting the shared or priority buffer model for an SDN switch to meet a desired Quality of Service (QoS), in assessing the impact of internal buffering through the developed internal-buffer queueing model, and in choosing between a software and a hardware switch via the unified queueing model.
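The abstract does not spell out the thesis's queueing models, but buffer dimensioning of the kind described can be illustrated with the textbook M/M/1/K loss formula (an assumed model for illustration only, not necessarily the one used in the thesis):

```python
def mm1k_loss(rho, k):
    """Packet-loss (blocking) probability of an M/M/1/K queue with
    offered load rho = lambda/mu and total capacity K:
    P_K = (1 - rho) * rho**K / (1 - rho**(K+1)) for rho != 1,
    and 1 / (K + 1) when rho == 1."""
    if rho == 1.0:
        return 1.0 / (k + 1)
    return (1.0 - rho) * rho ** k / (1.0 - rho ** (k + 1))

def min_buffer_for_loss(rho, target):
    """Buffer dimensioning: the smallest capacity K whose loss
    probability stays at or below the target."""
    k = 1
    while mm1k_loss(rho, k) > target:
        k += 1
    return k
```

For example, at 80% load a 10^-3 loss target already requires a capacity in the mid-twenties, which is the kind of loss-vs-capacity trade-off the thesis quantifies for shared versus priority buffers.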


Methodology ◽  
2007 ◽  
Vol 3 (1) ◽  
pp. 14-23 ◽  
Author(s):  
Juan Ramon Barrada ◽  
Julio Olea ◽  
Vicente Ponsoda

Abstract. The Sympson-Hetter (1985) method provides a means of controlling the maximum exposure rate of items in Computerized Adaptive Testing. Through a series of simulations, control parameters are set that govern the probability that an item, once selected, is administered. This method presents two main problems: it requires a long computation time to calculate the parameters, and the maximum exposure rate ends up slightly above the fixed limit. Van der Linden (2003) presented two alternatives that appear to solve both problems, but their impact on measurement accuracy has not yet been tested. We show that these methods over-restrict the exposure of some highly discriminating items and thus decrease accuracy. It is also shown that, when the desired maximum exposure rate is near the minimum possible value, these methods yield an empirical maximum exposure rate clearly above the goal. A new method, based on an initial estimation of the probability of administration and the probability of selection of the items under the restricted method (Revuelta & Ponsoda, 1998), is presented in this paper. It can be used with the Sympson-Hetter method and with both of van der Linden's methods. When used with Sympson-Hetter, it speeds the convergence of the control parameters without decreasing accuracy.
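The Sympson-Hetter adjustment itself can be sketched as a fixed-point iteration. The toy model below is a deliberate simplification for illustration (items are tried in a fixed preference order rather than by adaptive, ability-dependent item selection, and selection probabilities are computed analytically instead of by simulation):

```python
def selection_probs(K):
    """P(item i is selected) when items are tried in a fixed order and
    item i is reached only after every earlier item has failed its
    exposure lottery with control parameter K_j."""
    probs, reach = [], 1.0
    for k in K:
        probs.append(reach)
        reach *= (1.0 - k)
    return probs

def sympson_hetter(n_items, r_max, n_iter=50):
    """Iteratively set control parameters K_i = min(1, r_max / P(sel_i))
    so that the administration rate P(admin_i) = K_i * P(sel_i) never
    exceeds the target maximum exposure rate r_max."""
    K = [1.0] * n_items
    for _ in range(n_iter):
        K = [min(1.0, r_max / p) if p > 0 else 1.0
             for p in selection_probs(K)]
    admin = [k * p for k, p in zip(K, selection_probs(K))]
    return K, admin
```

Even in this idealized setting the fixed point pins every administration rate at or below r_max, which is the behaviour the Sympson-Hetter simulations approximate; the paper's speed-up comes from starting that iteration from a good initial estimate of the administration and selection probabilities.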

