Gleaning Insight from Antitrust Cases Using Machine Learning

10.51868/2 ◽  
2021 ◽  
pp. 16-37
Author(s):  
Giovanna Massarotto ◽  
Ashwin Ittoo

The application of AI and Machine Learning (ML) techniques is becoming a primary issue of investigation in the legal and regulatory domains. Antitrust agencies are in the spotlight because they represent the first arm of government regulation, in that they reach new markets before Congress has had time to draft a more specific regulatory scheme. A question the antitrust community is asking is whether antitrust agencies are equipped with the appropriate tools and powers to face today’s increasingly dynamic markets. Our study aims to tackle this question by building and testing an antitrust machine learning (AML) application based on an unsupervised approach, devoid of any human intervention. It shows how a relatively simple algorithm can, in an autonomous manner, discover underlying patterns in past antitrust cases by computing the similarity between these cases based on their measurable characteristics. Our results, achieved with simple algorithms, show much promise for the use of AI in antitrust applications. AI, in its current form, cannot replace antitrust agencies such as the FTC. Instead, it is a valuable tool that antitrust agencies can exploit for efficiency, with the potential to aid in preliminary screening, analysis of cases, or ultimate decision-making. Our contribution aims to pave the way for future AI applications in market regulation, starting with antitrust regulation. Government adoption of emerging technologies, such as AI, appears to be key for ensuring consumer welfare and market efficiency in the age of AI and big data.
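The general workflow the abstract describes — representing cases by measurable characteristics, computing pairwise similarity, and letting an unsupervised algorithm group similar cases — can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline; the case features below are hypothetical placeholders.

```python
# A minimal sketch of unsupervised case grouping. Features are hypothetical:
# [market share %, number of complainants, case duration (months), remedy severity 0-3]
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

cases = np.array([
    [72.0, 3, 18, 2],
    [65.0, 2, 24, 3],
    [15.0, 1,  6, 0],
    [12.0, 1,  9, 0],
    [48.0, 5, 30, 1],
])

X = StandardScaler().fit_transform(cases)   # put features on a common scale
sim = cosine_similarity(X)                  # pairwise similarity between cases
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
# Cases sharing a label have similar measurable characteristics
```

Standardizing first matters here: without it, the market-share column (in percent) would dominate the distance computation and drown out the other features.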

2012 ◽  
Vol 15 (3) ◽  
pp. 264-272 ◽  
Author(s):  
Keiko Tanida ◽  
Masashi Shibata ◽  
Margaret M. Heitkemper

Clinical researchers do not typically assess sleep with polysomnography (PSG) but rather with observation. However, methods relying on observation have limited reliability and are not suitable for assessing sleep depth and cycles. The purpose of this methodological study was to compare a sleep analysis method based on power spectral indices of heart rate variability (HRV) data to PSG. PSG and electrocardiography data were collected synchronously from 10 healthy women (ages 20–61 years) over 23 nights in a laboratory setting. HRV was analyzed for each 60-s epoch, and power was calculated in 3 frequency bands (very low frequency [VLF]-hi: 0.016–0.04 Hz; low frequency [LF]: 0.04–0.15 Hz; and high frequency [HF]: 0.15–0.4 Hz). Using the HF/(VLF-hi + LF + HF) value, VLF-hi, and heart rate (HR) as indices, an algorithm was created to categorize sleep into 3 states (shallow sleep corresponding to Stages 1 & 2, deep sleep corresponding to Stages 3 & 4, and rapid eye movement [REM] sleep). Movement epochs and the times of sleep onset and wake-up were determined using VLF-hi and HR. The minute-by-minute agreement rate between the sleep stages identified by PSG and by HRV data ranged from 32 to 72%, with an average of 56%. Longer wake after sleep onset (WASO) resulted in lower agreement rates. The mean differences between the 2 methods were 2 min for the time of sleep onset and 6 min for the time of wake-up. These results indicate that distinguishing WASO from shallow sleep segments is difficult using this HRV method. The algorithm's usefulness is thus limited in its current form, and it requires additional modification.
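The per-epoch classification step described above can be sketched as follows. The frequency bands match the abstract, but the decision thresholds and the heart-rate cutoff below are hypothetical placeholders, not the authors' calibrated values.

```python
# A minimal sketch of rule-based sleep staging from HRV band powers.
# Thresholds (0.2, 0.5) and the HR cutoff (70 bpm) are illustrative only.
def classify_epoch(vlf_hi, lf, hf, hr):
    """Assign one 60-s epoch to REM, deep, or shallow sleep from band powers and HR."""
    hf_ratio = hf / (vlf_hi + lf + hf)  # HF share of total power (parasympathetic index)
    if hr > 70 and hf_ratio < 0.2:      # elevated HR with low HF share -> REM
        return "REM"
    if hf_ratio > 0.5:                  # strong HF dominance -> deep sleep (Stages 3 & 4)
        return "deep"
    return "shallow"                    # otherwise shallow sleep (Stages 1 & 2)

print(classify_epoch(vlf_hi=0.1, lf=0.2, hf=0.7, hr=55))  # -> deep
```

In the study itself the abstract also uses VLF-hi and HR to flag movement epochs and to mark sleep onset and wake-up; that logic is omitted here.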


Author(s):  
Gino Angelini ◽  
Alessandro Corsini ◽  
Giovanni Delibra ◽  
Marco Giovannelli

Abstract One of the challenges of handling large CFD datasets and processing them to derive important design correlations is the difficulty of automating the post-processing of the data. Machine learning techniques, developed to process large unlabelled datasets, can play a key role in this area. In this work an unsupervised approach to isolate different flow features inside a 2D cascade is proposed and validated. The approach relies on machine learning methods, in particular on Exploratory Data Analysis (EDA) and Principal Component Analysis (PCA) for the pre-processing of the data and on K-means clustering for the post-processing. The K-means algorithm was trained on a Design of Experiments (DoE) of over 140 2D linear cascade configurations to identify the boundary layer on the profiles and the wake downstream. In validation, the algorithm identified the regions of interest with perfect accuracy. A possible exploitation of this method is then presented: computing pressure losses downstream of the cascade and training an artificial neural network to perform a regression that extends the data to all possible combinations of geometrical and operating parameters of the cascade. The same algorithm was also applied to 3D cascade flows over profiles with sinusoidal leading edges to stress its extrapolation capability on flow regimes not present in the training DoE.
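The PCA-then-K-means pipeline described above can be sketched on synthetic data standing in for per-cell flow quantities. The feature names and values below are illustrative, not the paper's actual variables or DoE.

```python
# A minimal sketch: decorrelate per-cell flow features with PCA, then cluster
# cells with K-means to separate flow regions (e.g., wake vs freestream).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic per-cell features: [velocity magnitude, turbulent kinetic energy]
freestream = rng.normal([1.0, 0.01], [0.02, 0.002], size=(200, 2))
wake       = rng.normal([0.4, 0.20], [0.05, 0.02],  size=(50, 2))
X = np.vstack([freestream, wake])

X_pca = PCA(n_components=2).fit_transform(X)  # project onto principal components
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_pca)
# Cells with the same label are assigned to the same flow region
```

In the paper's setting the cluster labels would then feed the downstream step, e.g. integrating pressure losses only over the cells labelled as wake.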


Author(s):  
Steven K. Vogel

How do you craft a market? This chapter reviews some of the key institutions necessary to make markets function and flourish. A modern market economy requires much more than the rule of law and the protection of private property: corporate law, accounting systems, banking regulation, capital market regulation, corporate governance, labor regulation, antitrust policy, sector-specific regulation, intellectual property protection, and the deliberate fabrication of certain markets. These mechanisms structure markets by defining market actors, such as corporations; constructing goods, such as intellectual property rights; establishing market arenas, such as stock exchanges; setting the rules of exchange, such as trading practices; and promoting competition via regulation. In all of the substantive issue cases reviewed in this chapter, government regulation and private-sector coordination are not impediments to markets, but rather preconditions to their creation, expansion, and dynamism.


2021 ◽  
Vol 15 (3) ◽  
pp. 97-105
Author(s):  
Alexander I. Kovalenko

This article aims to characterize the new theoretical and methodological turn observed today in American antitrust regulation of digital platforms. To this end, the author retrospectively traces the history of the theory and methodology of antitrust regulation in the United States. The article describes the ideas of economic structuralism associated with the Harvard school. The author then examines the theoretical and methodological revolution associated with price theory and describes the fundamental differences between the Harvard and Chicago schools in assessing the relationship between market structure and the intensity of competition. The article traces how the consumer welfare doctrine became dominant in antitrust regulation and describes the consequences of its application: a narrow conception of entry barriers and public welfare; an expansive understanding of competitive forces; disregard for the structural and sectoral characteristics of competition; and an absolutization of consumer prices and output volumes as indicators. The author gives a negative assessment of the effectiveness of the consumer welfare doctrine in the antitrust regulation of digital platforms and explains how the focus on consumer welfare has allowed digital platforms to amass enormous market power. In this context, a criticism of Chicago school ideas as applied to digital markets is presented. The article argues for reviving the ideas and methods of economic structuralism in antitrust decision-making concerning the monopolistic activities of digital platforms.


2009 ◽
Vol 44 (2) ◽  
pp. 171-180 ◽  
Author(s):  
Abdur Mahmood ◽  
Wei Lei

The One-R algorithm is a simple algorithm that exhibits quite good predictive accuracy for a large class of data. Compared to more complex algorithms with better predictive accuracy, One-R provides a baseline accuracy for testing new machine learning algorithms. The simplicity of One-R, however, means there is a compromise between accuracy and complexity. Often, the accuracy of One-R can be further increased without making it significantly more complex. The resulting algorithm proposed in this paper, One-RM, performs equally to One-R in most cases and sometimes outperforms One-R by a significant margin. Theoretical analysis suggests that One-RM, used in conjunction with One-R, always performs at least as well as One-R. Experimental analysis shows that One-RM is a viable alternative to One-R when used as a separate classification rule. Key words: One-RM, One-R algorithm, Algorithm, Accuracy and Complexity. DOI: 10.3329/bjsir.v44i2.3668 Bangladesh J. Sci. Ind. Res. 44(2), 171-180, 2009
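For reference, the classic One-R learner that the paper takes as its baseline works by building, for each attribute, a rule that maps each attribute value to its majority class, and then keeping the single attribute whose rule makes the fewest training errors. A minimal sketch (of One-R only; the One-RM modification is not detailed in the abstract):

```python
# A minimal One-R implementation: one rule, on one attribute, chosen by error count.
from collections import Counter

def one_r(rows, labels):
    """Pick the attribute whose value -> majority-class rule errs least on training data."""
    best = None
    for a in range(len(rows[0])):
        # Count class frequencies for each value of attribute a
        by_value = {}
        for row, y in zip(rows, labels):
            by_value.setdefault(row[a], Counter())[y] += 1
        # The rule maps each attribute value to its majority class
        rule = {v: counts.most_common(1)[0][0] for v, counts in by_value.items()}
        errors = sum(rule[row[a]] != y for row, y in zip(rows, labels))
        if best is None or errors < best[2]:
            best = (a, rule, errors)
    return best  # (attribute index, value -> class rule, training errors)

# Toy data: attribute 0 (outlook) predicts the label perfectly; attribute 1 (wind) does not
rows   = [("sunny", "windy"), ("sunny", "calm"), ("rain", "windy"), ("rain", "calm")]
labels = ["no", "no", "yes", "yes"]
attr, rule, errors = one_r(rows, labels)  # selects attribute 0 with zero errors
```

This simplicity is exactly why One-R makes a useful baseline: any proposed classifier should at minimum beat the best single-attribute rule.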


2018 ◽  
Vol 5 (4) ◽  
pp. 172434 ◽  
Author(s):  
Max Falkenberg McGillivray ◽  
William Cheng ◽  
Nicholas S. Peters ◽  
Kim Christensen

Mapping resolution has recently been identified as a key limitation in successfully locating the drivers of atrial fibrillation (AF). Using a simple cellular automata model of AF, we demonstrate a method by which re-entrant drivers can be located quickly and accurately using a collection of indirect electrogram measurements. The method proposed employs simple, out-of-the-box machine learning algorithms to correlate characteristic electrogram gradients with the displacement of an electrogram recording from a re-entrant driver. Such a method is less sensitive to local fluctuations in electrical activity. As a result, the method successfully locates 95.4% of drivers in tissues containing a single driver, and 95.1% (92.6%) for the first (second) driver in tissues containing two drivers of AF. Additionally, we demonstrate how the technique can be applied to tissues with an arbitrary number of drivers. In its current form, the techniques presented are not refined enough for a clinical setting. However, the methods proposed offer a promising path for future investigations aimed at improving targeted ablation for AF.
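The core idea above — learning a mapping from characteristic electrogram gradients to a recording site's displacement from a re-entrant driver — can be sketched with an off-the-shelf regressor on synthetic data. The decay model and all numbers below are illustrative assumptions, not the paper's cellular-automata electrograms.

```python
# A minimal sketch: regress driver displacement from a gradient feature using
# an out-of-the-box model. The 1/distance decay of the gradient is an
# illustrative assumption standing in for the simulated electrograms.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
distance = rng.uniform(1, 20, size=500)               # site-to-driver displacement
gradient = 1.0 / distance + rng.normal(0, 0.01, 500)  # noisy characteristic gradient

model = RandomForestRegressor(n_estimators=50, random_state=0)
model.fit(gradient.reshape(-1, 1), distance)          # gradient feature -> displacement

pred = model.predict([[1.0 / 5.0]])                   # a site roughly 5 units from the driver
```

Averaging over many indirect measurements in this way is what makes the approach less sensitive to local fluctuations than tracking the electrical activity itself.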


Author(s):  
Тетяна Миколаївна Майборода

The article emphasizes the commonly recognized fact that education is of exceptional significance for national economic and social development, enhancing a country's competitiveness in the world arena and the knowledge-intensity of its production. It is argued that education is a vital link that unites all sectors of the economy and society, as well as a critical accompanying element in the life of every individual, creating human and social capital. Traditionally, education belongs to the area of government regulation and control, which sets its development priorities and vectors. To date, however, reform of the education sector has become a global mainstream trend that promotes a shift from government regulation towards a conditional mechanism of self-regulation. Viewing education as an object of market regulation is therefore of particular relevance today, and this has underpinned the objectives of this study. From the perspective of national economic development, the article examines the education industry through the prism of a market mechanism, which allows an educational service to be identified as its major object of regulation. Based on domestic and international research surveys, the key characteristic features of the education sector are revealed, which determine its specifics as a system of mixed private-public goods. The study also discusses the basic approaches to the interpretation of an education market, its functions, and its key structural elements, which primarily include consumers, suppliers, intermediaries, sources of finance, etc. On the basis of the research findings, a conclusion is drawn about the critical need to balance government regulation of the educational services sector with the use of market mechanisms and tools.


Author(s):  
John N. Drobak

Chapter 1 explains that this book examines two economic “principles,” or beliefs, that have shaped the perception of the economic system in the United States today: (1) the belief that the U.S. economy is competitive, making government market regulation unnecessary, and (2) the belief that corporations exist for the benefit of their shareholders, but not for other stakeholders. Contrary to what many economists and policymakers believe, the chapter shows that numerous markets in the United States are not competitive and that the belief in shareholder primacy is not an economic principle but a normative notion. In addition, the belief in the existence of competitive markets is used to argue that market regulation is unnecessary because competition provides all the needed constraints. If there are no constraints from competition and no regulation, serious harm can result, as shown by the Great Recession of 2008. The chapter also points out that there never was a purely laissez-faire market economy. The real question is how much market regulation is desirable. It is often difficult to debate this issue because many people label any expansion of government regulation as socialism. In addition, some people just do not like being told what to do by the government. That was a principal reason for the objection to the individual mandate in the Affordable Care Act. The chapter then introduces the relationship between the two economic narratives and the millions of job losses this century, using lessons from the new institutional economics to analyze the issues.


2017 ◽  
Vol 23 (11) ◽  
pp. 11162-11165
Author(s):  
Anbuselvan Sangodiah ◽  
S. P. R. Charles Ramendran
