Market Behavior and Evolution of Wealth Distribution: A Simulation Model Based on Artificial Agents

2021 ◽  
Vol 26 (1) ◽  
pp. 12
Author(s):  
Andrea Giunta ◽  
Gaetano Giunta ◽  
Domenico Marino ◽  
Francesco Oliveri

The aim of this work is to simulate market behavior in order to study the evolution of wealth distribution. The numerical simulations are carried out on a simple economic model with a finite number of economic agents, which are able to exchange goods/services and money; the agents interact with each other by means of random exchanges. The model is micro-founded, self-consistent, and predictive. Despite its simplicity, the simulations show complex and non-trivial behavior. First of all, we are able to recognize two solution classes, namely two phases, separated by a threshold region. The analysis of the wealth distribution of the model agents in the threshold region shows functional forms resembling empirical quantitative studies of the probability distributions of wealth and income in the United Kingdom and the United States. Furthermore, the decile distribution of the population wealth of the simulated model, in the threshold region, overlaps suggestively with the real data on Italian population wealth in recent years. Finally, the results of the simulated model allow us to draw important conclusions for designing effective policies for economic and human development.
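A minimal Python sketch of the kind of random pairwise-exchange simulation the abstract describes; the agent count, step count, and exchange rule are illustrative assumptions of this sketch, not the authors' specification:

```python
import numpy as np

rng = np.random.default_rng(42)

N = 1000              # number of agents (illustrative)
STEPS = 200_000       # number of random pairwise exchanges
wealth = np.full(N, 100.0)   # all agents start with the same money

for _ in range(STEPS):
    i, j = rng.choice(N, size=2, replace=False)
    # A random fraction of the poorer partner's wealth changes hands,
    # standing in for a random exchange of goods/services for money.
    amount = rng.random() * min(wealth[i], wealth[j])
    if rng.random() < 0.5:
        wealth[i] += amount
        wealth[j] -= amount
    else:
        wealth[i] -= amount
        wealth[j] += amount

# Decile shares of total wealth, comparable to the paper's decile analysis.
shares = [d.sum() / wealth.sum() for d in np.array_split(np.sort(wealth), 10)]
print("decile shares:", np.round(shares, 3))
print("Gini:", np.abs(np.subtract.outer(wealth, wealth)).mean() / (2 * wealth.mean()))
```

Starting from perfect equality, repeated random exchanges of this kind concentrate wealth, which is what makes the decile and distribution-shape comparisons in the abstract meaningful.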

Author(s):  
Grigorii S. Pushnoi

The classical three-sector model of the economy: 1) “the means of production”, 2) “the goods for employees”, and 3) “the goods consumed by other economic agents” (“luxury goods”), is considered in matrix formulation. Each sector contains many industries producing goods of these three kinds. The “transformation problem” of Marxian economics is considered in a three-sector model of the economy with simple production. The solution of this problem is based on the action of the statistical “law of large numbers” (LLN) in the economy. The stylized facts about the economy of the United States indicate the existence of the following probability distributions: 1) an inverse power distribution for the elements of the matrix of direct requirements and 2) a Gaussian distribution for the direct labor per unit of goods. The action of the statistical “law of large numbers” guarantees that the C-V-M matrix of the economy is almost symmetric. In this case, the “labor value” and the “price of production” of the total product produced within each sector are almost equal.
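For intuition, here is a small Python sketch of the standard matrix bookkeeping behind the transformation problem: labor values versus prices of production with a uniform profit rate. The heavy-tailed input coefficients and Gaussian labor vector echo the stylized facts above, but the construction is the textbook Sraffa/Morishima one, not necessarily the author's exact model:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 3  # three aggregated sectors
# Input-output matrix with heavy-tailed entries (a heuristic stand-in for the
# inverse power distribution mentioned above), scaled to be productive.
A = rng.pareto(3.0, size=(n, n)) + 0.01
A *= 0.5 / np.abs(np.linalg.eigvals(A)).max()
l = np.abs(rng.normal(1.0, 0.1, size=n))   # direct labor per unit (Gaussian)
b = np.full(n, 0.1)                        # workers' consumption per unit labor

# Labor values: v = l (I - A)^{-1}
v = l @ np.linalg.inv(np.eye(n) - A)

# Prices of production: left Perron eigenvector of the augmented matrix
# M = A + b l^T; the uniform profit rate r satisfies 1/(1+r) = Perron root.
M = A + np.outer(b, l)
eigvals, eigvecs = np.linalg.eig(M.T)
k = np.argmax(eigvals.real)
p = np.abs(eigvecs[:, k].real)
r = 1.0 / eigvals[k].real - 1.0

# Normalize so total price equals total value, then compare sector by sector.
p *= v.sum() / p.sum()
print("labor values:", v, "\nprices:", p, "\nprofit rate:", r)
```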


2020 ◽  
Vol 4 (2) ◽  
pp. 137-153
Author(s):  
Sidika Basci ◽  
Tahar Gherbi

Aim: Money velocity data for the United States show a decline in all of the broad money aggregates in recent decades. This points to a sustained demand deficiency element. Can consumer heterogeneity be the cause of this declining trend? The aim of this paper is to answer this question. Design / Research Methods: To achieve our aim, we use Agent Based Modelling (ABM). In our model, the agents are heterogeneous consumers with different spending propensities. Conclusions / findings: We show that heterogeneity of spending propensities alone puts downward pressure on money velocity. This pressure is coupled with a sustained worsening in the wealth distribution. We observe that as money accumulates in the hands of agents with the lowest propensity to spend, money velocity keeps declining. This also puts downward pressure on nominal aggregate demand and hence a deflationary bias on the general price level. Originality / value of the article: This paper shows that the heterogeneity of economic agents should not be ignored and that ABM is a very powerful tool for analysing heterogeneity. Implications of the research: The implication for policy makers is that the demand deficiency associated with the fall in money velocity will persist until the worsening of wealth dispersion comes to a halt.
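A compact Python sketch of the mechanism: agents spend a fixed fraction of their money each period, spending becomes other agents' income, and money drifts toward low-propensity agents, dragging velocity down. The propensity range, the random redistribution of spending as income, and the horizon are illustrative assumptions, not the paper's calibration:

```python
import numpy as np

rng = np.random.default_rng(1)

N = 500
money = np.full(N, 100.0)
# Heterogeneous propensities to spend out of money holdings (illustrative).
propensity = rng.uniform(0.05, 0.95, size=N)
M_total = money.sum()

for t in range(1, 201):
    spending = propensity * money
    money -= spending
    # Each unit spent becomes someone else's income; income shares are drawn
    # at random each period rather than modelled structurally.
    money += spending.sum() * rng.dirichlet(np.ones(N))
    velocity = spending.sum() / M_total
    if t % 50 == 0:
        print(f"t={t:3d}  velocity={velocity:.3f}")
```

Because expected income per agent is uniform while outflow is proportional to propensity, holdings converge toward being inversely proportional to propensity, so velocity falls from the mean propensity toward its (lower) harmonic mean.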


2020 ◽  
Vol 0 (0) ◽  
Author(s):  
Zahra Amini Farsani ◽  
Volker J. Schmid

Co-localization analysis is a popular method for quantitative analysis in fluorescence microscopy imaging. The localization of marked proteins in the cell nucleus allows a deep insight into biological processes in the nucleus. Several metrics have been developed for measuring the co-localization of two markers; however, they depend on subjective thresholding of the background and the assumption of linearity. We propose a robust method to estimate the bivariate distribution function of two color channels, from which we can quantify their co- or anti-colocalization. The proposed method is a combination of the Maximum Entropy Method (MEM) and a Gaussian copula, which we call the Maximum Entropy Copula (MEC). This new method can measure the spatial and nonlinear correlation of signals to determine marker colocalization in fluorescence microscopy images. The proposed method is compared with MEM for bivariate probability distributions. The new colocalization metric is validated on simulated and real data. The results show that MEC can determine co- and anti-colocalization even in high-background settings. MEC can, therefore, be used as a robust tool for colocalization analysis.
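The Gaussian-copula half of the idea can be sketched in a few lines of Python. Here empirical CDFs stand in for the maximum-entropy marginal estimates that are the paper's actual contribution, so this is a baseline illustration rather than MEC itself:

```python
import numpy as np
from scipy import stats

def gaussian_copula_rho(x, y):
    """Rank-based (nonlinear) dependence of two channels via a Gaussian copula.

    Empirical CDFs replace the maximum-entropy marginals of the paper.
    rho > 0 suggests colocalization, rho < 0 anti-colocalization.
    """
    u = stats.rankdata(x) / (len(x) + 1)   # pseudo-observations in (0, 1)
    v = stats.rankdata(y) / (len(y) + 1)
    z1, z2 = stats.norm.ppf(u), stats.norm.ppf(v)
    return np.corrcoef(z1, z2)[0, 1]

# Toy two-channel data: channel 2 depends nonlinearly on channel 1.
rng = np.random.default_rng(2)
ch1 = rng.gamma(2.0, 1.0, size=10_000)           # e.g. green-channel intensities
ch2 = np.sqrt(ch1) + rng.normal(0, 0.2, 10_000)  # e.g. colocalized red channel
print("copula rho:", gaussian_copula_rho(ch1, ch2))
```

Because the dependence is estimated on ranks, no background threshold is needed and nonlinear monotone relationships are still detected, which is the property the abstract emphasizes.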


Complexity ◽  
2018 ◽  
Vol 2018 ◽  
pp. 1-16 ◽  
Author(s):  
Yiwen Zhang ◽  
Yuanyuan Zhou ◽  
Xing Guo ◽  
Jintao Wu ◽  
Qiang He ◽  
...  

The K-means algorithm is one of the ten classic algorithms in the area of data mining and has been studied by researchers in numerous fields for a long time. However, the value of the cluster number k in the K-means algorithm is not always easy to determine, and the selection of the initial centers is vulnerable to outliers. This paper proposes an improved K-means clustering algorithm called the covering K-means algorithm (C-K-means). The C-K-means algorithm can not only acquire efficient and accurate clustering results but also self-adaptively provide a reasonable number of clusters based on the data features. It includes two phases: the initialization of the covering algorithm (CA) and the Lloyd iteration of K-means. The first phase executes the CA, which self-organizes and recognizes the number of clusters k based on the similarities in the data; it requires neither the number of clusters to be prespecified nor the initial centers to be manually selected. Therefore, it has a “blind” feature, that is, k is not preselected. The second phase performs the Lloyd iteration based on the results of the first phase. The C-K-means algorithm combines the advantages of CA and K-means. Experiments are carried out on the Spark platform, and the results verify the good scalability of the C-K-means algorithm. This algorithm can effectively solve the problem of large-scale data clustering. Extensive experiments on real data sets show that the C-K-means algorithm outperforms existing algorithms in both accuracy and efficiency under sequential and parallel conditions.
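A toy Python sketch of the two-phase idea: a covering pass discovers k and candidate centers, then standard Lloyd iteration refines them. The covering rule here (a fixed radius grown around still-uncovered points) is a simplified guess at CA, and the radius is an assumption of this sketch:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

def covering_init(X, radius):
    """Heuristic covering pass: one cover per still-uncovered region.

    The number of covers plays the role of k, so k is discovered from the
    data rather than prespecified ("blind" initialization).
    """
    uncovered = np.ones(len(X), dtype=bool)
    centers = []
    while uncovered.any():
        seed = X[uncovered][0]                        # first uncovered point
        inside = np.linalg.norm(X - seed, axis=1) <= radius
        centers.append(X[inside & uncovered].mean(axis=0))
        uncovered &= ~inside
    return np.array(centers)

X, _ = make_blobs(n_samples=600, centers=4, cluster_std=0.8, random_state=0)
centers = covering_init(X, radius=2.5)                # radius sets granularity
km = KMeans(n_clusters=len(centers), init=centers, n_init=1).fit(X)
print("discovered k =", len(centers))
```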


2018 ◽  
Vol 146 (12) ◽  
pp. 4079-4098 ◽  
Author(s):  
Thomas M. Hamill ◽  
Michael Scheuerer

Hamill et al. described a multimodel ensemble precipitation postprocessing algorithm that is used operationally by the U.S. National Weather Service (NWS). This article describes further changes that produce improved, reliable, and skillful probabilistic quantitative precipitation forecasts (PQPFs) for single or multimodel prediction systems. For multimodel systems, final probabilities are produced through the linear combination of PQPFs from the constituent models. The new methodology is applied to each prediction system. Prior to adjustment of the forecasts, parametric cumulative distribution functions (CDFs) of model and analyzed climatologies are generated using the previous 60 days’ forecasts and analyses and supplemental locations. The CDFs, which can be stored with minimal disk space, are then used for quantile mapping to correct state-dependent bias for each member. In this stage, the ensemble is also enlarged using a stencil of forecast values from the 5 × 5 surrounding grid points. Different weights and dressing distributions are assigned to the sorted, quantile-mapped members, with generally larger weights for outlying members and broader dressing distributions for members with heavier precipitation. Probability distributions are generated from the weighted sum of the dressing distributions. The NWS Global Ensemble Forecast System (GEFS), the Canadian Meteorological Centre (CMC) global ensemble, and the European Centre for Medium-Range Weather Forecasts (ECMWF) ensemble forecast data are postprocessed for April–June 2016. Single prediction system postprocessed forecasts are generally reliable and skillful. Multimodel PQPFs are roughly as skillful as the ECMWF system alone. Postprocessed guidance was generally more skillful than guidance using the Gamma distribution approach of Scheuerer and Hamill, with coefficients generated from data pooled across the United States.
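The quantile-mapping step can be illustrated with a small empirical Python sketch; sample-based climatologies stand in here for the parametric CDFs the article fits from the prior 60 days and supplemental locations:

```python
import numpy as np

def quantile_map(fcst, fcst_climo, anal_climo):
    """Map forecast values through forecast and analyzed climatological CDFs.

    fcst_climo / anal_climo are samples standing in for the parametric CDFs
    of the paper: each forecast is assigned its percentile in the model's own
    climatology, then replaced by that percentile of the analyzed climatology.
    """
    p = np.searchsorted(np.sort(fcst_climo), fcst) / len(fcst_climo)
    return np.quantile(anal_climo, np.clip(p, 0.0, 1.0))

rng = np.random.default_rng(3)
model_climo = rng.gamma(0.4, 8.0, 5000)   # model precip climatology (wet bias)
anal_climo = rng.gamma(0.4, 5.0, 5000)    # analyzed precip climatology
print(quantile_map(np.array([2.0, 10.0, 25.0]), model_climo, anal_climo))
```

Because the mapping is built from each member's own climatology, it corrects state-dependent bias: heavy forecasts with a wet bias are pulled down more than light ones.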


2018 ◽  
Vol 2018 ◽  
pp. 1-13 ◽  
Author(s):  
Laura Millán-Roures ◽  
Irene Epifanio ◽  
Vicente Martínez

A functional data analysis (FDA) based methodology for detecting anomalous flows in urban water networks is introduced. Primary hydraulic variables are recorded in real time by telecontrol systems, so they are functional data (FD). In the first stage, the data are validated (false data are detected) and reconstructed, since there can be not only false data but also missing and noisy data. FDA tools are used, such as tolerance bands for FD and smoothing for dense and sparse FD. In the second stage, functional outlier detection tools are used in two phases. In Phase I, the data are cleared of anomalies to ensure that they are representative of the in-control system. The objective of Phase II is system monitoring. A new functional outlier detection method based on archetypal analysis is also proposed. The methodology is applied and illustrated with real data. A simulation study is also carried out to assess the performance of the outlier detection techniques, including our proposal. The results are very promising.
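As a baseline illustration of functional outlier detection on flow curves, here is a simple depth-style Python sketch (distance of each daily curve from the pointwise median, scaled robustly); the paper's archetypal-analysis detector is more refined than this stand-in:

```python
import numpy as np

def flag_functional_outliers(curves, z_cut=3.0):
    """Flag anomalous daily flow curves (rows) observed on a common time grid.

    Criterion: mean absolute distance of each curve from the pointwise median
    curve, standardized by the MAD. Only a baseline, not the paper's method.
    """
    median_curve = np.median(curves, axis=0)
    dist = np.abs(curves - median_curve).mean(axis=1)   # L1 distance per curve
    mad = np.median(np.abs(dist - np.median(dist))) + 1e-12
    score = (dist - np.median(dist)) / (1.4826 * mad)
    return np.where(score > z_cut)[0]

rng = np.random.default_rng(4)
t = np.linspace(0, 24, 96)                     # 15-min readings over one day
normal = 50 + 20 * np.sin(np.pi * t / 12) + rng.normal(0, 2, (60, 96))
leak = normal[:2] + 30                         # two days with a sustained leak
print(flag_functional_outliers(np.vstack([normal, leak])))  # -> [60 61]
```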


Author(s):  
Chi-Hua Chen ◽  
Fangying Song ◽  
Feng-Jang Hwang ◽  
Ling Wu

To generate a probability density function (PDF) for fitting probability distributions of real data, this study proposes a deep learning method which consists of two stages: (1) a training stage for estimating the cumulative distribution function (CDF) and (2) a performing stage for predicting the corresponding PDF. The CDFs of common probability distributions can be adopted as activation functions in the hidden layers of the proposed deep learning model for learning actual cumulative probabilities, and the derivative of the trained deep learning model can be used to estimate the PDF. To evaluate the proposed method, numerical experiments with single and mixed distributions are performed. The experimental results show that the values of both the CDF and the PDF can be precisely estimated by the proposed method.
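A minimal PyTorch sketch of the two-stage idea: fit a network with sigmoid activations (the logistic CDF) to the empirical CDF of the data, then differentiate the trained network with autograd to read off the PDF. The architecture and training settings are illustrative assumptions, not the paper's:

```python
import torch
from torch import nn

torch.manual_seed(0)
x = torch.randn(4000, 1) * 2 + 1                     # samples from N(1, 4)
x_sorted, _ = torch.sort(x, dim=0)
ecdf = torch.arange(1, len(x) + 1, dtype=torch.float32).view(-1, 1) / len(x)

# Stage 1: train a network whose activations are themselves CDFs (sigmoid =
# logistic CDF) to reproduce the empirical cumulative probabilities.
net = nn.Sequential(nn.Linear(1, 32), nn.Sigmoid(), nn.Linear(32, 1), nn.Sigmoid())
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(x_sorted), ecdf)
    loss.backward()
    opt.step()

# Stage 2: the PDF is the derivative of the learned CDF, via autograd.
q = torch.linspace(-5, 7, 200).view(-1, 1).requires_grad_(True)
cdf = net(q)
pdf = torch.autograd.grad(cdf.sum(), q)[0]
print(pdf.squeeze()[::40])
```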


2021 ◽  
Vol 12 ◽  
Author(s):  
Yang Yang ◽  
Hongli Tian ◽  
Rui Wang ◽  
Lu Wang ◽  
Hongmei Yi ◽  
...  

Molecular marker technology is widely used in plant variety discrimination, molecular breeding, and other fields. To lower the cost of testing and improve the efficiency of data analysis, molecular marker screening is very important. Screening usually involves two phases: the first to control loci quality and the second to reduce loci quantity. To reduce loci quantity, an appraisal index that is very sensitive to a specific scenario is necessary to select loci combinations. In this study, we focused on loci combination screening for plant variety discrimination. A loci combination appraisal index, variety discrimination power (VDP), is proposed, and three statistical methods, probability-based VDP (P-VDP), comparison-based VDP (C-VDP), and ratio-based VDP (R-VDP), are described and compared. The results using the simulated data showed that VDP was sensitive to statistical populations with convergence toward the same variety, and the total probability of discrimination power (TDP) method was effective only for partial populations. R-VDP was more sensitive to statistical populations with convergence toward various varieties than P-VDP and C-VDP, which both had the same sensitivity; TDP was not sensitive at all. With the real data, R-VDP values for the sorghum, wheat, maize, and rice data begin to show a downward tendency when the number of loci is 20, 7, 100, and 100, respectively; for P-VDP and C-VDP (which give the same results), the corresponding numbers are 6, 4, 9, and 19, and for TDP they are 6, 4, 4, and 11. For the variety threshold setting, R-VDP values of loci combinations with different numbers of loci responded evenly to different thresholds. C-VDP values responded unevenly to different thresholds, and the extent of the response increased as the number of loci decreased. All the methods gave underestimations when data were missing, with systematic errors for TDP, C-VDP, and R-VDP going from smallest to largest. We concluded that VDP is a better loci combination appraisal index than TDP for plant variety discrimination and that the three VDP methods have different applications. We developed software called VDPtools, which can calculate the values of TDP, P-VDP, C-VDP, and R-VDP. VDPtools is publicly available at https://github.com/caurwx1/VDPtools.git.
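To make the flavor of such an index concrete, here is a hypothetical Python sketch of a comparison-style discrimination score: the share of variety pairs that a loci combination separates at more than a threshold number of differing loci. The function name, pairwise rule, and threshold are this sketch's assumptions and are not taken from the paper or from VDPtools:

```python
import numpy as np
from itertools import combinations

def pairwise_discrimination_power(genotypes, threshold=2):
    """Share of variety pairs separated by a loci combination.

    genotypes: (n_varieties, n_loci) array of allele codes. A pair counts as
    discriminated when it differs at more than `threshold` loci. This mirrors
    the comparison-based idea only loosely; the paper's exact C-VDP statistic
    may be defined differently.
    """
    pairs = list(combinations(range(len(genotypes)), 2))
    hits = sum((genotypes[i] != genotypes[j]).sum() > threshold for i, j in pairs)
    return hits / len(pairs)

rng = np.random.default_rng(5)
geno = rng.integers(0, 4, size=(50, 20))     # 50 varieties, 20 SSR-like loci
print("discrimination power:", pairwise_discrimination_power(geno))
```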


2021 ◽  
Vol 82 (3) ◽  
pp. 106
Author(s):  
Marie L. Radford ◽  
Laura Costello ◽  
Kaitlin Montague

In March 2020, academic libraries across the United States closed and sent everyone home, some destined not to reopen for months. University offices closed. Classes were moved online. Suddenly, librarians and staff pivoted to working from home and to all-remote services, without time for planning logistics or training. To study the impact of this extraordinary and sweeping transition on virtual reference services (VRS), we conducted a major study of academic library responses to the pandemic that focused on librarian perceptions of how services and relationships with users morphed during this COVID-19 year. Academic librarians rallied to our call, and we collected a total of 300 responses to two longitudinal surveys launched at key points during the pandemic. Data collection focused on two phases in 2020: 1) shutdown and immediate aftermath (mid-March to July), and 2) fall ramp-up and into the semester (August to December). Via Zoom, we also interviewed 28 academic librarian leaders (e.g., heads of reference and/or VRS, associate directors for User Services) from September to November. Surveys and interviews centered on adaptations and innovations to reference services, especially VRS, and perceptions of changes in user interactions.

