Un modèle des flux interrégionaux de marchandises au Canada (A model of interregional commodity flows in Canada)

2009 ◽  
Vol 63 (1) ◽  
pp. 26-42
Author(s):  
Yvon Bigras ◽  
Sang Nguyen

Abstract In this article, we present a model of commodity flows between eight regions of Canada, covering all goods grouped into 64 categories. The model works in two steps. First, the observed flows are regressed on selected socio-economic variables, including transport cost. This yields "a priori" flows, which can be updated when the explanatory variables themselves change. These a priori flows, however, do not necessarily respect the industrial structure of each region. To correct them, a mathematical program is solved whose objective function is based on information theory: we seek the flows that are as close as possible to the a priori flows while also respecting the industrial structure of each region. That structure is represented by regional input-output accounting constraints. The model can be viewed as an interregional input-output model in which the interregional coefficients are sensitive to variations in transport costs. Its formulation is also much more flexible and allows other explanatory factors to be taken into account. The model is tested with 1974 Canadian input-output data and compared with other models.
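The correction step described above, finding the flows closest (in the information-theoretic sense) to the a priori flows subject to regional accounting constraints, can be sketched with iterative proportional fitting (RAS), which solves exactly this kind of minimum-discrimination-information program when the constraints are row and column totals. The 3-region numbers below are invented for illustration, not taken from the paper.

```python
import numpy as np

def ipf_adjust(prior, row_totals, col_totals, tol=1e-9, max_iter=1000):
    """Find the flow matrix closest to `prior` in the Kullback-Leibler
    sense whose row and column sums match the given regional totals --
    the classical RAS / iterative proportional fitting solution."""
    F = prior.astype(float).copy()
    for _ in range(max_iter):
        F *= (row_totals / F.sum(axis=1))[:, None]   # enforce origin (shipment) totals
        F *= (col_totals / F.sum(axis=0))[None, :]   # enforce destination (receipt) totals
        if np.allclose(F.sum(axis=1), row_totals, rtol=tol):
            break
    return F

# Hypothetical a priori flows from the regression step, corrected so that
# each region's total shipments and receipts match its accounting totals.
prior = np.array([[10., 4., 6.],
                  [3., 12., 5.],
                  [7., 2., 11.]])
F = ipf_adjust(prior,
               row_totals=np.array([20., 20., 20.]),
               col_totals=np.array([18., 22., 20.]))
```

The adjusted matrix keeps the relative pattern of the a priori flows while satisfying the regional constraints, which is the role the paper's information-theoretic objective plays.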

2002 ◽  
Vol 8 (3) ◽  
pp. 197-205 ◽  
Author(s):  
Carlos F. Alastruey ◽  
Manuel de la Sen

In this paper, a Lyapunov function candidate is introduced for multivariable systems with inner delays, without assuming a priori stability of the nondelayed subsystem. Using this Lyapunov function, a controller is derived. The controller relies on an input-output description of the original system, a circumstance that facilitates practical applications of the proposed approach.


1994 ◽  
Vol 7 (3) ◽  
pp. 437-456 ◽  
Author(s):  
Muhammad El-Taha ◽  
Shaler Stidham

We extend our studies of sample-path stability to multiserver input-output processes with conditional output rates that may depend on the state of the system and other auxiliary processes. Our results include processes with countable as well as uncountable state spaces. We establish rate stability conditions for busy period durations as well as the input during busy periods. In addition, stability conditions for multiserver queues with possibly heterogeneous servers are given for the workload, attained service, and queue length processes. The stability conditions can be checked from parameters of primary processes, and thus can be verified a priori. Under the rate stability conditions, we provide stable versions of Little's formula for single server as well as multiserver queues. Our approach leads to extensions of previously known results. Since our results are valid pathwise, non-stationary as well as stationary processes are covered.
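The pathwise character of Little's formula mentioned above can be checked directly on a finite sample path: the time-average number in system equals the observed arrival rate times the mean sojourn time, with no stochastic assumptions. The arrival and departure epochs below are invented single-server data, not from the paper.

```python
# Sample-path check of Little's formula L = lambda * W over a finite
# observation window, computed purely from arrival/departure epochs.
arrivals   = [0.0, 1.0, 2.5, 6.0]
departures = [2.0, 3.0, 4.5, 7.5]   # FIFO; each sojourn time = dep - arr

T = departures[-1]                  # observation window [0, T]
waits = [d - a for a, d in zip(arrivals, departures)]
W = sum(waits) / len(waits)         # mean sojourn time
lam = len(arrivals) / T             # observed arrival rate
L = sum(waits) / T                  # time-average number in system:
                                    # the area under N(t) equals the
                                    # total sojourn time of all customers
assert abs(L - lam * W) < 1e-12     # Little's law holds pathwise
```

The identity is exact on any sample path whose customers all depart within the window, which is why rate-stability conditions that can be verified a priori are enough to carry it over to infinite horizons.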


Author(s):  
Bernhard Manhartsgruber

Fluid power systems consist of components like pumps, valves and actuators, and of the lines and hoses interconnecting these components. Simple interconnections without branch points can be modelled as hydraulic two-port networks. This paper demonstrates the identification of linear state-space models describing the input-output behaviour of hydraulic two-port networks in terms of pressure and flow rate. The linear modelling approach restricts the applicability to the case of laminar flow with negligible influence of convective terms. Special attention is paid to a priori knowledge of certain model properties: the numerical optimization procedure used in the proposed identification method guarantees the passivity of the models and allows for instantaneous coupling of collocated pressure and flow variables according to the Joukowsky relation. The method takes experimental frequency response data as input and generates a series of state-space approximations of increasing system order, starting at order one. A hydraulic hose is presented as an example.
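The core idea of fitting a state-space model to frequency response data can be sketched for the order-one case. This is a bare least-squares illustration on synthetic noiseless data; the paper's actual method additionally enforces passivity and the Joukowsky coupling, which are not modelled here.

```python
import numpy as np

# Fit a first-order state-space model  dx/dt = -a x + b u,  y = x
# (transfer function G(s) = b / (s + a)) to frequency-response samples.
# Rearranging G(jw)(jw + a) = b gives  b - a*G(jw) = jw*G(jw),
# which is linear in the unknowns (b, a).
a_true, b_true = 2.0, 3.0
w = np.logspace(-1, 2, 50)               # rad/s, synthetic "measured" grid
G = b_true / (1j * w + a_true)           # synthetic frequency response

A = np.column_stack([np.ones_like(G), -G])
rhs = 1j * w * G
A_ri = np.vstack([A.real, A.imag])       # stack real/imag parts to
rhs_ri = np.concatenate([rhs.real, rhs.imag])  # solve over the reals
b_hat, a_hat = np.linalg.lstsq(A_ri, rhs_ri, rcond=None)[0]
```

On noiseless data the fit recovers (a, b) exactly; with experimental data one would iterate this over increasing model orders, as the paper does, and add the passivity constraint to the optimization.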


2009 ◽  
Vol 4 (2) ◽  
pp. 35-45
Author(s):  
Henri Atlan

Abstract In probabilistic information theory, as in the theory of programming algorithms, one does not have to address the question of how we understand or how meanings are created. In both notions of complexity we encounter the same paradox: a formal identity between maximal complexity and randomness (that is, disorder with maximal statistical homogeneity). And in both cases the paradox is resolved by ignoring it, assuming that sense and meaning exist a priori, which thereby eliminates the hypothesis of randomness. Only very recently have genuine attempts been made to resolve this paradox, through work on algorithmic complexity that incorporates a definition of meaningful complexity. A first approach concerns the principle of complexity from noise. A second, more recent one uses simulations of automata networks to try to catch the emergence of functional meanings in automata networks with self-organizing properties. Among the results obtained is a large underdetermination of theories by facts; the small size of these networks makes it possible to analyze its origin clearly and even to quantify it. This underdetermination of theories appears as probably the most spectacular expression of what natural complexity is.


2013 ◽  
Vol 34 (3) ◽  
pp. 105-122 ◽  
Author(s):  
Andrzej Ziębik ◽  
Paweł Gładysz

Abstract To analyze the cumulative exergy consumption of an integrated oxy-fuel combustion power plant, the method of balance equations was applied, based on the principle that the cumulative exergy consumption charging the products of a process equals the sum of the cumulative exergy consumption charging its substrates. The set of balance equations of cumulative exergy consumption is based on the 'input-output method' for direct energy consumption. In the structure of the balance, main products (e.g. electricity), by-products (e.g. nitrogen) and external supplies (fuels) are distinguished. The balance model assumes that the cumulative exergy consumption charging the external supplies is known a priori, resulting from an analysis of cumulative exergy consumption for the economy of the whole country. The by-products are charged with the cumulative exergy consumption resulting from the principle of a replaced process. The cumulative exergy consumption of the main products is the final result.
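The balance-equation principle stated above leads to a linear system: the cumulative exergy consumption charging each product equals its direct charge from external supplies plus the cumulative charges carried in by the other products it consumes. A minimal two-product sketch, with invented coefficients (not the plant data from the paper):

```python
import numpy as np

# Cumulative exergy consumption r_j per unit of product j satisfies
#   r_j = e_j + sum_i a_ij * r_i,
# where a_ij is the input of product i per unit of product j (the
# 'input-output' coefficients) and e_j the exergy of external supplies
# (fuels) charged directly -- known a priori in the balance model.
A = np.array([[0.0, 0.2],      # a_12: product 1 used per unit of product 2
              [0.1, 0.0]])     # a_21: product 2 used per unit of product 1
e = np.array([1.0, 0.5])       # direct exergy of external supplies

# In matrix form: (I - A^T) r = e
r = np.linalg.solve(np.eye(2) - A.T, e)
```

Each `r[j]` then charges one unit of the corresponding main product; by-products would enter the same balance with a credit fixed by the replaced-process principle.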


Author(s):  
D. E. Luzzi ◽  
L. D. Marks ◽  
M. I. Buckett

As the HREM becomes increasingly used for the study of dynamic localized phenomena, the development of techniques to recover the desired information from a real image is important. Often, the important features scatter only weakly in comparison to the matrix material, in addition to being masked by statistical and amorphous noise. The desired information usually involves accurate knowledge of the position and intensity of the contrast. To decipher the desired information from a complex image, cross-correlation (xcf) techniques can be utilized. Unlike other image processing methods that rely on data massaging (e.g. high/low-pass filtering or Fourier filtering), the cross-correlation method is a rigorous data reduction technique with no a priori assumptions. We have examined basic cross-correlation procedures using images of discrete Gaussian peaks and have developed an iterative procedure to greatly enhance the capabilities of these techniques when the contrast from the peaks overlaps.
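The basic xcf procedure described above can be illustrated on a toy image: a weak Gaussian peak buried in noise is located by cross-correlating the image with a model peak. Image size, peak strength and noise level are invented for illustration.

```python
import numpy as np

# Locate a weak Gaussian peak in a noisy image by cross-correlation
# with a model peak (a toy version of the xcf approach).
rng = np.random.default_rng(0)
n, sigma, pos = 64, 2.0, (40, 23)           # image size, peak width, true position

y, x = np.mgrid[0:n, 0:n]
peak = np.exp(-(((x - pos[1])**2 + (y - pos[0])**2) / (2 * sigma**2)))
image = 0.5 * peak + 0.1 * rng.standard_normal((n, n))  # weak peak + noise

template = np.exp(-(((x - n//2)**2 + (y - n//2)**2) / (2 * sigma**2)))

# Circular cross-correlation via FFT; the correlation maximum gives the
# shift of the image peak relative to the template centre.
xcf = np.fft.ifft2(np.fft.fft2(image) * np.conj(np.fft.fft2(template))).real
shift = np.unravel_index(np.argmax(xcf), xcf.shape)
found = ((shift[0] + n//2) % n, (shift[1] + n//2) % n)   # recovered position
```

Note that no filtering assumptions enter: the correlation maximum is a direct data-reduction estimate of the peak position, which is the property the abstract emphasizes.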


Author(s):  
H.S. von Harrach ◽  
D.E. Jesson ◽  
S.J. Pennycook

Phase contrast TEM has been the leading technique for high-resolution imaging of materials for many years, whilst STEM has been the principal method for high-resolution microanalysis. However, it was demonstrated many years ago that low-angle dark-field STEM imaging is a priori capable of almost 50% higher point resolution than coherent bright-field imaging (i.e. phase contrast TEM or STEM). This advantage was not exploited until Pennycook developed the high-angle annular dark-field (ADF) technique, which can provide an incoherent image showing both high image resolution and atomic number contrast. This paper describes the design and first results of a 300 kV field-emission STEM (VG Microscopes HB603U) which has improved ADF STEM image resolution towards the 1 angstrom target. The instrument uses a cold field-emission gun, generating a 300 kV beam of up to 1 μA from an 11-stage accelerator. The beam is focussed onto the specimen by two condensers and a condenser-objective lens with a spherical aberration coefficient of 1.0 mm.
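A back-of-envelope check of the quoted parameters, using the textbook relativistic electron wavelength and the standard aberration-limited resolution estimate for incoherent imaging, d ≈ 0.43 Cs^(1/4) λ^(3/4) (the coherent bright-field coefficient is ≈ 0.66, hence the "almost 50%" advantage). These formulas are standard results, not taken from the paper itself.

```python
import math

# Relativistic electron wavelength at 300 kV, then the incoherent
# (dark-field) aberration-limited resolution for Cs = 1.0 mm.
h, m0, e, c = 6.626e-34, 9.109e-31, 1.602e-19, 2.998e8
V = 300e3                                   # accelerating voltage, volts
lam = h / math.sqrt(2 * m0 * e * V * (1 + e * V / (2 * m0 * c**2)))
# lam is about 1.97 pm at 300 kV

Cs = 1.0e-3                                 # spherical aberration, 1.0 mm
d = 0.43 * Cs**0.25 * lam**0.75             # incoherent-imaging estimate
# d comes out near 1.3 angstrom, consistent with the instrument's
# stated goal of pushing ADF resolution towards 1 angstrom
```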

