Scaling the INGO: What the Development and Expansion of Canadian INGOs Tells Us

2020 ◽  
Vol 9 (8) ◽  
pp. 140
Author(s):  
Logan Cochrane ◽  
John-Michael Davis

The literature on international non-governmental organizations (INGOs) has focused primarily on large INGOs, which capture the majority of total INGO spending but represent a small fraction of all INGOs. Over the past two decades, the number of INGOs has more than tripled throughout the global North, ushering in a decentralization of the sector as an emerging class of small- and medium-sized INGOs increasingly shares the space once occupied solely by large INGOs. This study focuses on these INGOs in transition to explore how they differ from large INGOs that receive significant government funding, and to examine their pathways to scale. Using an original dataset of 1371 Canadian INGOs, we explored comparative differences in funding sources, overhead, organizational age, country coverage, staff, and religion between transitioning and small-scale INGOs. Our results identified several general insights into how INGOs transition: (1) large INGOs are less likely to articulate a religious motivation, which may impede government funding; (2) INGOs are more likely to be headquartered in Ontario, which is closer to federal government offices; (3) low overhead expenditures inhibit small-scale INGOs from transitioning to medium- and large-scale INGOs; (4) organizational age plays a critical role in scaling up as INGOs accumulate experience and expertise; (5) generous compensation to attract talented staff offers an under-valued pathway to scale. Finally, our results demonstrate the diversity among INGOs in Canada and problematize singular scale-up pathways, while underscoring the need for future research exploring scaling strategies through individual case studies.

2021 ◽  
pp. 3-11
Author(s):  
Suddhasvatta Das ◽  
Kevin Gary

Abstract Due to the fast-paced nature of the software industry and the success of small agile projects, researchers and practitioners are interested in scaling agile processes to larger projects. Agile software development (ASD) has been growing in popularity for over two decades. With the success of small-scale agile transformations, organizations have started to focus on scaling agile. There is a scarcity of literature in this field, making it hard to find plausible evidence on the science behind large-scale agile transformation. The objective of this paper is to present a better understanding of the current state of research in the field of scaled agile transformation and to explore research gaps. This tertiary study identifies seven relevant peer-reviewed studies and reports research findings and future research avenues.


1993 ◽  
Vol 157 ◽  
pp. 283-297
Author(s):  
Rainer Beck

Results of linear αΩ-dynamo models are confronted with radio polarization observations of spiral galaxies. The general distribution of polarized emission and the magnetic field pitch angle can be described with sufficient accuracy. The occurrence of systematic large-scale variations in Faraday rotation measure (RM) is the strongest argument in favour of dynamo theory. However, the predominance of axisymmetric S0 modes could not be confirmed by observations; S1 modes are about equally frequent. The azimuthal variations of field pitch angles and, in two cases, the phases of the RM variations are inconsistent with a classical αΩ-dynamo. Locally deviating RM values indicate field lines bending out of the plane. There is increasing evidence that galactic fields cannot be described by simple dynamo modes. This calls for more realistic dynamo models, taking into account non-axisymmetric velocity fields and galactic winds.

Interpretation of radio observations is difficult because Faraday depolarization can seriously affect the data. Observations of small-scale field structures are summarized which show the path for future research. Instrumental needs for such investigations are discussed.


2019 ◽  
Vol 876 ◽  
pp. 1108-1128 ◽  
Author(s):  
Till Zürner ◽  
Felix Schindler ◽  
Tobias Vogt ◽  
Sven Eckert ◽  
Jörg Schumacher

Combined measurements of velocity components and temperature in a turbulent Rayleigh–Bénard convection flow at a low Prandtl number of $Pr=0.029$ and Rayleigh numbers of $10^{6}\leqslant Ra\leqslant 6\times 10^{7}$ are conducted in a series of experiments with durations of more than a thousand free-fall time units. Multiple crossing ultrasound beam lines and an array of thermocouples at mid-height allow for a detailed analysis and characterization of the complex three-dimensional dynamics of the single large-scale circulation roll in a cylindrical convection cell of unit aspect ratio which is filled with the liquid metal alloy GaInSn. We measure the internal temporal correlations of the complex large-scale flow and distinguish between short-term oscillations associated with a sloshing motion in the mid-plane as well as varying orientation angles of the velocity close to the top/bottom plates and the slow azimuthal drift of the mean orientation of the roll as a whole that proceeds on a time scale up to a hundred times slower. The coherent large-scale circulation drives a vigorous turbulence in the whole cell that is quantified by direct Reynolds number measurements at different locations in the cell. The velocity increment statistics in the bulk of the cell display characteristic properties of intermittent small-scale fluid turbulence. We also show that the impact of the symmetry-breaking large-scale flow persists down to small-scale velocity fluctuations, thus preventing the establishment of fully isotropic turbulence in the cell centre. Reynolds number amplitudes depend sensitively on beam-line position in the cell, such that different definitions have to be compared. The global momentum and heat transfer scalings with Rayleigh number are found to agree with those of direct numerical simulations and other laboratory experiments.
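The "free-fall time unit" used above to express the measurement duration is the convective time scale $t_f = \sqrt{H/(g\,\alpha\,\Delta T)}$. A minimal sketch of the conversion follows; the numbers in the usage note are hypothetical illustrations, not the actual parameters of the GaInSn cell.

```python
import math

def free_fall_time(H, alpha, delta_T, g=9.81):
    """Free-fall time t_f = sqrt(H / (g * alpha * delta_T)).

    H: cell height (m), alpha: thermal expansion coefficient (1/K),
    delta_T: applied temperature difference across the cell (K).
    """
    return math.sqrt(H / (g * alpha * delta_T))
```

For a hypothetical cell with H = 0.3 m, alpha = 1e-4 1/K and delta_T = 5 K, this gives t_f ≈ 7.8 s, so "more than a thousand free-fall time units" would correspond to a few hours of continuous measurement.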


Author(s):  
Zhen Qian ◽  
Minghui Zhang ◽  
Hao Yu ◽  
Fei Wei

Radial profiles of particle velocity in a large-scale (418 mm I.D.) downward Circulating Fluidized Bed (CFB downer) were obtained via a Laser Doppler Velocimetry (LDV) system. Results show that particle velocity increases gradually along the radial direction, with a peak value in the near-wall region. This unique radial profile shape can be explained by the tendency of solids to accumulate in the near-wall region of the downer. Experimental results in this large-scale downer are also compared with those obtained by other researchers in small-scale units so as to investigate the scale-up effect on the radial particle velocity distribution in the downer.


Author(s):  
R. Lo Frano ◽  
A. Pesetti ◽  
D. Aquaro ◽  
M. Olcese

Abstract Direct Contact Condensation (DCC) is the main phenomenon characterizing steam condensation. It plays an important role in the operation of Vacuum Vessel Pressure Suppression System (VVPSS) tanks, particularly for managing the Ingress of Coolant Event (which determines fusion reactor overpressurization). The VVPSS is a safety-relevant (key) system of the fusion reactor because, by condensing the steam generated during such an accident, it damps the overpressure. This paper deals with experimental and theoretical analyses of DCC at sub-atmospheric pressure. A similitude analysis was elaborated to scale up the experimental results obtained in the reduced-scale facility: similitude laws are used for the design of the large experimental rig. Correlations are defined starting from the water temperature and pressure variations already obtained in the small-scale rig. Furthermore, the experimental rig and its main components, designed accordingly (and under construction at the University of Pisa), allow the steam condensation to be studied at large scale. The testing conditions are presented and discussed.


2019 ◽  
Vol 157 (2) ◽  
pp. 189-219 ◽  
Author(s):  
Jérôme Hilaire ◽  
Jan C. Minx ◽  
Max W. Callaghan ◽  
Jae Edmonds ◽  
Gunnar Luderer ◽  
...  

Abstract To keep global warming well below 2 °C and pursue efforts to limit it to 1.5 °C, as set out in the Paris Agreement, a full-fledged assessment of negative emission technologies (NETs) that remove carbon dioxide from the atmosphere is crucial to inform science-based policy making. With the Paris Agreement in mind, we re-analyse available scenario evidence to understand the roles of NETs in 1.5 °C and 2 °C scenarios and, for the first time, link this to a systematic review of findings in the underlying literature. In line with previous research, we find that keeping warming below 1.5 °C requires a rapid large-scale deployment of NETs, while for 2 °C, we can still limit NET deployment substantially by ratcheting up near-term mitigation ambition. Most recent evidence stresses the importance of future socio-economic conditions in determining the flexibility of NET deployment and suggests opportunities for hedging technology risks by adopting portfolios of NETs. Importantly, our thematic review highlights that there is a much richer set of findings on NETs than commonly reflected upon both in scientific assessments and available reviews. In particular, beyond the common findings on NETs underpinned by dozens of studies around early scale-up, the changing shape of net emission pathways or greater flexibility in the timing of climate policies, there is a suite of “niche and emerging findings”, e.g. around innovation needs and rapid technological change, termination of NETs at the end of the twenty-first century, or the impacts of climate change on the effectiveness of NETs, that have not been widely appreciated. Future research needs to explore the role of climate damages on NET uptake, better understand the geophysical constraints of NET deployment (e.g. water, geological storage, climate feedbacks), and provide a more systematic assessment of NET portfolios in the context of sustainable development goals.


Author(s):  
Barnali Mandal

ABSTRACT
Objectives: The aim of the study was to determine the growth kinetics of Pediococcus acidilactici using a mathematical model for large-scale pediocin production.
Methods: Growth kinetics of P. acidilactici have been studied for pediocin production in a small-scale batch fermenter (Erlenmeyer flask) using a meat-processing waste medium. The experiments were conducted with varying concentrations of glucose, protein, and lactic acid. A mathematical model has been developed to describe the growth rate, product (pediocin and lactic acid) formation rates, and substrate (glucose and protein) utilization rates. A Monod model for dual substrates (glucose and protein) has been used, taking lactic acid inhibition into account. The Luedeking-Piret model has been introduced to describe the production of pediocin and lactic acid.
Results: The values of the kinetic parameters were determined using experimental data and the model equations. The model prediction compared satisfactorily with the experimental data, validating the model.
Conclusions: The developed model was satisfactorily validated to scale up the production of pediocin.
Keywords: Pediococcus acidilactici, Pediocin, Meat processing waste, Monod model, Luedeking-Piret model, Kinetic parameters.
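The kinetic structure named in the abstract (dual-substrate Monod growth with lactic acid inhibition, plus Luedeking-Piret product formation) can be sketched in code. This is a minimal illustration, not the authors' model: every parameter value and initial condition below is a placeholder, and the fitted values from the study are not reproduced here.

```python
def growth_rate(mu_max, S_glc, K_glc, S_pro, K_pro, P_lac, K_i):
    """Dual-substrate Monod rate with non-competitive lactic acid inhibition."""
    return (mu_max
            * S_glc / (K_glc + S_glc)
            * S_pro / (K_pro + S_pro)
            * K_i / (K_i + P_lac))

def simulate(t_end=24.0, dt=0.01):
    # Illustrative initial conditions (g/L) and parameters -- placeholders,
    # not the fitted values reported in the study.
    X, S_glc, S_pro, P_lac, P_ped = 0.1, 20.0, 5.0, 0.0, 0.0
    mu_max, K_glc, K_pro, K_i = 0.5, 1.0, 0.5, 10.0
    Y_xg, Y_xp = 0.4, 0.8          # biomass yields on glucose / protein
    a_lac, b_lac = 2.0, 0.05       # Luedeking-Piret coefficients (lactic acid)
    a_ped, b_ped = 0.1, 0.01       # Luedeking-Piret coefficients (pediocin)
    for _ in range(int(t_end / dt)):
        mu = growth_rate(mu_max, S_glc, K_glc, S_pro, K_pro, P_lac, K_i)
        dX = mu * X * dt                      # explicit Euler step
        X += dX
        S_glc = max(0.0, S_glc - dX / Y_xg)   # substrate consumption
        S_pro = max(0.0, S_pro - dX / Y_xp)
        P_lac += a_lac * dX + b_lac * X * dt  # Luedeking-Piret: growth- plus
        P_ped += a_ped * dX + b_ped * X * dt  # non-growth-associated terms
    return X, S_glc, S_pro, P_lac, P_ped
```

With these illustrative parameters, biomass rises until glucose or protein is depleted, while lactic acid and pediocin accumulate through both the growth-associated (α·dX) and non-growth-associated (β·X·dt) terms; accumulating lactic acid in turn slows growth via the K_i/(K_i + P_lac) factor.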


SLEEP ◽  
2020 ◽  
Author(s):  
Alexander Neergaard Olesen ◽  
Poul Jørgen Jennum ◽  
Emmanuel Mignot ◽  
Helge Bjarup Dissing Sorensen

Abstract Study Objectives Sleep stage scoring is performed manually by sleep experts and is prone to subjective interpretation of scoring rules, with low intra- and inter-scorer reliability. Many automatic systems rely on a few small-scale databases for developing models, and generalizability to new datasets is thus unknown. We investigated a novel deep neural network to assess generalizability across several large-scale cohorts. Methods A deep neural network model was developed using 15,684 polysomnography studies from five different cohorts. We applied four different scenarios: (1) impact of varying timescales in the model; (2) performance of a single cohort on other cohorts of smaller, greater, or equal size relative to the performance of other cohorts on a single cohort; (3) varying the fraction of mixed-cohort training data compared with using single-origin data; and (4) comparing models trained on combinations of data from 2, 3, and 4 cohorts. Results Overall classification accuracy improved with increasing fractions of training data (0.25%: 0.782 ± 0.097, 95% CI [0.777–0.787]; 100%: 0.869 ± 0.064, 95% CI [0.864–0.872]) and with increasing number of data sources (2: 0.788 ± 0.102, 95% CI [0.787–0.790]; 3: 0.808 ± 0.092, 95% CI [0.807–0.810]; 4: 0.821 ± 0.085, 95% CI [0.819–0.823]). Different cohorts show varying levels of generalization to other cohorts. Conclusions Automatic sleep stage scoring systems based on deep learning algorithms should consider as much data as possible from as many sources as available to ensure proper generalization. Public datasets for benchmarking should be made available for future research.


2016 ◽  
Vol 16 (2) ◽  
pp. 191-217 ◽  
Author(s):  
Patrick Barron ◽  
Sana Jaffrey ◽  
Ashutosh Varshney

Abstract The last decade has witnessed an extraordinary spate of scholarship on the ethno-communal violence that swept through Indonesia following the collapse of the Suharto regime. Yet we know very little about how these large-scale violent conflicts subsided and the patterns of post-conflict violence that have emerged since. We introduce evidence from an original dataset to show that the high-violence period lasted until 2003, after which violence declined in intensity and scale. Despite this aggregate decline, we find that old conflict sites still exhibit relatively high levels of small-scale violence. We conclude that Indonesia has moved to a new, post-conflict phase where large-scale violence is infrequent, yet small-scale violence remains unabated, often taking on new forms. Finally, we propose that effective internal security interventions by the state are a key reason, although not the only reason, why large-scale violence has not emerged again despite the continued prevalence of low-level violence.


Author(s):  
Benjamin Dryer ◽  
Graeme Fukuda ◽  
Jake Webb ◽  
David I. Bigio ◽  
Mark Wetzel ◽  
...  

Twin-screw polymer extrusion has shown increased utility for creating composite materials. However, in order to achieve the desired product properties, sufficient mixing is essential. Dispersive mixing, or the breaking up of particle agglomerates, is critical to create filled compounds with the required material properties. In a twin-screw compounding process, the Residence Stress Distribution (RSD) has been used to quantify the dispersive mixing induced by the stresses in the polymer melt. These stresses are quantified by the percent break-up of stress-sensitive polymeric beads. It was found that the amount of material that experiences the critical stress is a function of the operating conditions of screw speed and specific throughput [1]. The quantification of dispersive mixing allows for better control of a compounding process and can be used to design new processes. During the development of a new compounding process, screw geometries and operating conditions are often refined on a laboratory-scale extruder and then scaled up to a manufacturing level. Scale-up rules are used to translate the operating conditions of a process to different sizes of extruders. In a compounding process, the goal when scaling up is to maintain the same material properties on both scales by achieving equivalent mixing. The RSD methodology can be used to evaluate the effectiveness of scale-up rules by comparison between two or more scales. This paper will demonstrate the utility of the RSD in the evaluation of two unique scale-up rules. Conventional industry practice is based on the volumetric flow comparison between extruders. The proposed approach demonstrates that in order to maintain equivalent dispersive mixing between different sizes of extruders, the degree of fill, or the percent drag flow (%DF), must be kept equivalent in the primary mixing region. The effectiveness of both rules has been evaluated by experimental application of the RSD methodology.
A design of experiment approach was used to generate predictive equations for each scale-up rule that were compared to the behavior of the original small-scale extruder. Statistical comparison of the two scale-up rules showed that the %DF rule predicted operating conditions on the large-scale extruder that produced percent break-up behavior more similar to the small-scale behavior. From these results, it can be concluded that the %DF scale-up rule can be used to accurately scale operating conditions between different-sized extruders to ensure similar dispersive mixing between two processes. This will allow for greater accuracy when recreating the material properties of a small-scale twin-screw compounding process on a larger, mass production machine.
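The degree-of-fill idea behind the %DF rule can be sketched as follows. This is an illustrative reading of the rule, not the authors' implementation: the drag-flow capacity is approximated here as k·N·D³ (screw speed times the cube of screw diameter, scaled by a geometry constant), and the diameters and throughput in the usage example are hypothetical.

```python
def percent_drag_flow(Q, N, D, k=1.0):
    """Degree-of-fill proxy: throughput divided by drag-flow capacity.

    Drag-flow capacity is modeled as k * N * D**3 -- a first-order
    assumption, not the actual screw-geometry factor from the study.
    Q: volumetric throughput, N: screw speed, D: screw diameter.
    """
    return Q / (k * N * D**3)

def scale_up_throughput(Q_small, N_small, D_small, N_large, D_large, k=1.0):
    """Throughput on the large extruder that keeps %DF equal to the small scale."""
    target = percent_drag_flow(Q_small, N_small, D_small, k)
    return target * k * N_large * D_large**3
```

Under this model, holding screw speed fixed while moving from a hypothetical 27 mm screw to a 58 mm screw, equal %DF calls for roughly a (58/27)³ ≈ 9.9× increase in throughput, in contrast to a purely volumetric rule that compares free volumes between machines.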

