Complex Systems, Emergence, and Multiscale Analysis: A Tutorial and Brief Survey

2021 ◽ Vol 11 (12) ◽ pp. 5736
Author(s): Jianbo Gao, Bo Xu

Mankind has long been fascinated by emergence in complex systems. With big data rapidly accumulating in almost every branch of science, engineering, and society, a golden age for the study of complex systems and emergence has arrived. Among the many values of big data are the abilities to detect changes in system dynamics, to help science extend its reach, and, most desirably, to possibly uncover new fundamental laws. Unfortunately, these goals are hard to achieve using black-box, machine-learning-based approaches to big data analysis. In particular, when systems are not functioning properly, their dynamics must be highly nonlinear, and because abnormal behaviors occur rarely, the relevant data cannot be expected to be abundant enough to be adequately tackled by machine-learning-based approaches. To better cope with these situations, we advocate the synergistic use of mainstream machine-learning approaches and multiscale approaches from complexity science. The latter are very useful for finding key parameters characterizing the evolution of a dynamical system, including its malfunctioning. One of the many uses of such parameters is to design simpler but more accurate unsupervised machine-learning schemes. To illustrate the ideas, we first provide a tutorial introduction to complex systems and emergence, and then present two multiscale approaches. One is based on adaptive filtering, which excels at trend analysis, noise reduction, and (multi)fractal analysis. The other originates from chaos theory and can unify the major complexity measures developed in recent decades. To make the ideas and methods accessible to a wider audience, the paper is designed as a tutorial survey, emphasizing the connections among the different concepts from complexity science.
Many original discussions, arguments, and results pertinent to real-world applications are also presented, so that readers are stimulated to apply, and further develop, the ideas and methods covered here to solve their own problems. The article is intended both as a tutorial and as a survey, and can be used as course material, including for intensive summer training courses. When the material is used for teaching, it will be beneficial to encourage students to gain hands-on experience with the many methods discussed in the paper. Instructors, as well as readers interested in the computer analysis programs, are welcome to contact the corresponding author.
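The (multi)fractal analysis mentioned in this abstract can be illustrated with a minimal sketch of first-order detrended fluctuation analysis (DFA), a standard multiscale estimator of the Hurst exponent. This is an illustrative toy under stated assumptions, not the authors' adaptive-filtering code; all names (`dfa`, `scales`, `H`) are ours.

```python
import numpy as np

def dfa(signal, scales):
    """Detrended fluctuation analysis: fluctuation F(s) per window size s."""
    profile = np.cumsum(signal - np.mean(signal))  # integrated profile
    fluctuations = []
    for s in scales:
        n_windows = len(profile) // s
        rms = []
        t = np.arange(s)
        for i in range(n_windows):
            segment = profile[i * s:(i + 1) * s]
            # remove the local linear trend (least-squares fit) in each window
            coeffs = np.polyfit(t, segment, 1)
            detrended = segment - np.polyval(coeffs, t)
            rms.append(np.sqrt(np.mean(detrended ** 2)))
        fluctuations.append(np.mean(rms))
    return np.array(fluctuations)

# For fractal signals F(s) ~ s^H; the log-log slope estimates H.
rng = np.random.default_rng(0)
noise = rng.standard_normal(4096)            # white noise: expected H near 0.5
scales = np.array([16, 32, 64, 128, 256])
F = dfa(noise, scales)
H = np.polyfit(np.log(scales), np.log(F), 1)[0]
```

A long-range-correlated signal would instead yield H above 0.5, which is the kind of key multiscale parameter the abstract proposes feeding into simpler unsupervised learning schemes.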

Entropy ◽ 2021 ◽ Vol 23 (3) ◽ pp. 297
Author(s): Haoyu Niu, YangQuan Chen, Bruce J. West

Fractional-order calculus concerns differentiation and integration to non-integer orders. Fractional calculus (FC) is based on fractional-order thinking (FOT) and has been shown to help us understand complex systems better, improve the processing of complex signals, enhance the control of complex systems, increase the performance of optimization, and even extend the potential for creativity. In this article, the authors discuss fractional dynamics, FOT, and rich fractional stochastic models. First, the use of fractional dynamics in big data analytics is justified for quantifying the variability that big data inherits from the complex systems generating it. Second, we show why fractional dynamics is needed in machine learning and optimal randomness when asking: “is there a more optimal way to optimize?”. Third, an optimal randomness case study for a stochastic configuration network (SCN) machine-learning method with heavy-tailed distributions is discussed. Finally, views on big data and (physics-informed) machine learning with fractional dynamics for future research are presented, with concluding remarks.
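The “optimal randomness” idea behind the heavy-tailed SCN case study can be sketched in a few lines: candidate steps drawn from a heavy-tailed Pareto-type distribution produce far more large excursions than Gaussian steps at the same sample size, which is the property exploited when exploring a weight space. This is a minimal illustration under our own assumptions, not a reproduction of the paper's SCN experiments; `alpha` and the threshold are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Light-tailed baseline: standard Gaussian steps.
gauss = rng.standard_normal(n)

# Heavy-tailed alternative: symmetrized Pareto ("Levy-flight-like") steps
# with tail density ~ |x|^(-1 - alpha) for tail index alpha.
alpha = 1.5
pareto = rng.pareto(alpha, n) * rng.choice([-1.0, 1.0], n)

# Fraction of "large" steps under each distribution: heavy tails yield
# orders of magnitude more excursions beyond the threshold.
big_gauss = np.mean(np.abs(gauss) > 5)
big_pareto = np.mean(np.abs(pareto) > 5)
```

For a Gaussian, steps beyond five standard deviations are essentially absent at this sample size, while the Pareto draws exceed the same threshold a few percent of the time, so a random search driven by them occasionally makes the long jumps that help escape poor regions.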


2021 ◽ Vol 6 (1) ◽ pp. 82
Author(s): Marielle Papin

A review of the studies on institutional complexity reveals that the many definitions of institutional complexity and related concepts share similarities with the understanding of complexity and complex systems of complexity science. Yet few publications on institutional complexity engage explicitly with complexity science. Most observers still confuse complicated and complex systems, for instance. Furthermore, the variety of definitions may create disarray regarding what institutional complexity and its related concepts are and what they imply. Highlighting the similarities between institutional complexity and complexity science in global governance, this think piece offers a conceptual and operational definition of institutional complexity using a complexity science lens. It highlights the attributes and properties of institutional complexity. It also presents the benefits of such an approach. Besides offering advantages in terms of concept clarification, this approach aims to engage theoretically, epistemologically, and methodologically with the complexity of global governance, as well as propose a way to answer remaining questions on this crucial topic.


2017 ◽ Vol 14 (S339) ◽ pp. 274-274
Author(s): M. Lochner, B. Bassett

Abstract: We organized Workshop 13, Machine Learning for Transient Classification, into two distinct question-and-answer parts. The first was a so-called ‘idiot session’, in which basic questions about machine learning and artificial intelligence were elicited from the audience. The second focused discussion on the application of artificial intelligence to transient astronomy.

The workshop proved highly successful. The room was packed, and the many interesting questions and discussions were good preparation for the presentation on machine learning during the plenary session the following day.

The workshop clearly reflected the community's general awareness of, and excitement about, the potential of machine learning for transient detection in astronomy in the era of ZTF, LSST, LIGO and the SKA. Several presentations at this Symposium had already paid specific attention to machine-learning techniques and products. The extent to which the younger generations were involved was clearly noticeable, and augured well for research into workable solutions for astronomy's ‘Big Data’ problems which, as stated frequently at this conference, are only just around the corner.


2020 ◽ pp. 167-181
Author(s): Ramón Alvarado

Alvarado provides a detailed examination of different kinds of opacity in algorithms, raising the problems of “many hands” (where to attribute responsibility in complex systems), error assessment, and path complexity. In the process, he offers the reader a demystified understanding of how big data computational methods function and suggests ways in which opacity threatens fundamental elements of the democratic process.


2018 ◽ Vol 14 (A30) ◽ pp. 569-569
Author(s): Anna M. M. Scaife, Sally E. Cooper

Abstract: The DARA Big Data project is a flagship UK Newton Fund & GCRF program in partnership with the South African Department of Science & Technology (DST). DARA Big Data provides bursaries for students from the partner countries of the African VLBI Network (AVN), namely Botswana, Ghana, Kenya, Madagascar, Mauritius, Mozambique, Namibia and Zambia, to study for MSc(R) and PhD degrees at universities in South Africa and the UK. These degrees are in the three data intensive DARA Big Data focus areas of astrophysics, health data and sustainable agriculture. The project also provides training courses in machine learning, big data techniques and data intensive methodologies as part of the Big Data Africa initiative.


2016 ◽ Vol 2 (2) ◽ pp. 39-54
Author(s): Bernhard Rieder

Abstract This paper develops a critique of Big Data and associated analytical techniques by focusing not on errors - skewed or imperfect datasets, false positives, underrepresentation, and so forth - but on data mining that works. After a quick framing of these practices as interested readings of reality, I address the question of how data analytics and, in particular, machine learning reveal and operate on the structured and unequal character of contemporary societies, installing “economic morality” (Allen 2012) as the central guiding principle. Rather than critiquing the methods behind Big Data, I inquire into the way these methods make the many differences in decentred, non-traditional societies knowable and, as a consequence, ready for profitable distinction and decision-making. The objective, in short, is to add to our understanding of the “profound ideological role at the intersection of sociality, research, and commerce” (van Dijck 2014: 201) the collection and analysis of large quantities of multifarious data have come to play. Such an understanding needs to embed Big Data in a larger, more fundamental critique of the societal context it operates in.

