Measuring Discretion and Delegation in Legislative Texts: Methods and Application to US States

2020 ◽  
Vol 29 (1) ◽  
pp. 43-57 ◽  
Author(s):  
Matia Vannoni ◽  
Elliott Ash ◽  
Massimo Morelli

Bureaucratic discretion and executive delegation are central topics in political economy and political science. The previous empirical literature has measured discretion and delegation by manually coding large bodies of legislation. Drawing from computational linguistics, we provide an automated procedure for measuring discretion and delegation in legal texts to facilitate large-scale empirical analysis. The method uses information in syntactic parse trees to identify legally relevant provisions, as well as agents and delegated actions. We undertake two applications. First, we produce a measure of bureaucratic discretion by looking at the level of legislative detail for US states and find that this measure increases after reforms giving agencies more independence. This effect is consistent with an agency cost model, where a more independent bureaucracy requires more specific instructions (less discretion) to avoid bureaucratic drift. Second, we construct measures of delegation to governors in state legislation. Consistent with previous estimates using non-text metrics, we find that executive delegation increases under unified government.
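The abstract's approach of locating legally relevant provisions in text can be loosely illustrated with a toy heuristic. The sketch below substitutes a simple modal-verb rule for the paper's syntactic parse trees, so every rule and name here is an assumption for illustration, not the authors' procedure:

```python
import re

# Toy illustration of tagging legally relevant provisions. The paper uses
# syntactic parse trees; this sketch substitutes a modal-verb heuristic,
# so the rules below are assumptions for illustration only.
OBLIGATION_MODALS = {"shall", "must"}
PERMISSION_MODALS = {"may", "can"}

def classify_provisions(text):
    """Tag each sentence as 'obligation', 'permission', or 'other'
    based on the first modal verb found."""
    results = []
    for sent in re.split(r"(?<=[.;])\s+", text.strip()):
        label = "other"
        for tok in re.findall(r"[a-z']+", sent.lower()):
            if tok in OBLIGATION_MODALS:
                label = "obligation"
                break
            if tok in PERMISSION_MODALS:
                label = "permission"
                break
        results.append((sent, label))
    return results

def discretion_score(text):
    """Share of modal provisions that are permissive: a crude stand-in
    for a discretion measure (more permissions = more discretion)."""
    labels = [lab for _, lab in classify_provisions(text) if lab != "other"]
    return labels.count("permission") / len(labels) if labels else 0.0

statute = ("The agency shall submit an annual report. "
           "The director may waive this requirement.")
print(discretion_score(statute))  # 0.5: one of two modal provisions is permissive
```

A real implementation would operate on dependency or constituency parses to attach each modal to its agent and delegated action, as the abstract describes.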

e-Finanse ◽  
2018 ◽  
Vol 14 (4) ◽  
pp. 67-76
Author(s):  
Piotr Bartkiewicz

The article presents the results of a review of the empirical literature regarding the impact of quantitative easing (QE) on emerging markets (EMs). The subject is of interest to policymakers and researchers due to the increasingly large role of EMs in the world economy and the large-scale capital flows occurring after 2009. The review is conducted in a systematic manner and takes into consideration different methodological choices, samples and measurement issues. The paper puts the summarized results in the context of the transmission channels identified in the literature. There are a few distinct methodological approaches present in the literature. While there is a consensus regarding the direction of the impact of QE on EMs, its size and durability have not yet been assessed with sufficient precision. In addition, there are clear gaps in the empirical findings, not least the relative underrepresentation of the CEE region (in particular, Poland).


2010 ◽  
Vol 36 (3) ◽  
pp. 535-568 ◽  
Author(s):  
Deyi Xiong ◽  
Min Zhang ◽  
Aiti Aw ◽  
Haizhou Li

Linguistic knowledge plays an important role in phrase movement in statistical machine translation. To efficiently incorporate linguistic knowledge into phrase reordering, we propose a new approach: Linguistically Annotated Reordering (LAR). In LAR, we build hard hierarchical skeletons and inject soft linguistic knowledge from source parse trees into the nodes of the hard skeletons during translation. The experimental results on large-scale training data show that LAR is comparable to boundary word-based reordering (BWR) (Xiong, Liu, and Lin 2006), which is a very competitive lexicalized reordering approach. When combined with BWR, LAR provides complementary information for phrase reordering, which collectively improves the BLEU score significantly. To further understand the contribution of linguistic knowledge in LAR to phrase reordering, we introduce a syntax-based analysis method to automatically detect constituent movement in both reference and system translations, and summarize syntactic reordering patterns that are captured by reordering models. With the proposed analysis method, we conduct a comparative analysis that not only provides insight into how linguistic knowledge affects phrase movement but also reveals new challenges in phrase reordering.


2021 ◽  
Vol 6 (1) ◽  
Author(s):  
Dr. Agnes Ogada

Purpose: The objective of the study was to investigate duplication in regulation and its effect on the performance of the financial sector in Kenya. The specific objectives were: to review and identify regulatory duplication/competition in the existing regulatory framework for the financial sector in Kenya; to describe how regulatory effectiveness has been measured in the empirical literature; to assess whether the current regulatory structure has affected the performance of the financial sector in Kenya; and lastly to suggest potential ways of enhancing regulatory effectiveness in Kenya. Methodology: The paper used a desk study review methodology in which relevant empirical literature was reviewed to identify main themes and to extract knowledge gaps. Findings: The study found that the financial sector in Kenya and other developing economies has reported large-scale losses due to under-regulation and regulatory duplication. Some institutions have become insolvent or have had to be taken over or rescued by their governments. A single market regulator clearly has advantages over multiple regulators, but it is more suitable for well-developed, mature markets that are smaller in size, such as the UK. The study also found that Kenya's economy and political arena are not mature enough to support a single financial market regulator; notably, even mature economies such as the United States still have multiple regulators. Unique contribution to theory, practice and policy: The study recommends adherence to the principles of open government, including transparency and participation in the regulatory process, to ensure that regulation serves the public interest and is informed by the legitimate needs of those interested in and affected by regulation. Governments should also ensure that regulations are comprehensible and clear and that parties can easily understand their rights and obligations.


2019 ◽  
Author(s):  
Kate C. McLean ◽  
Moin Syed ◽  
Monisha Pasupathi ◽  
Jonathan M. Adler ◽  
William Lewis Dunlop ◽  
...  

A robust empirical literature suggests that individual differences in the thematic and structural aspects of life narratives are associated with and predictive of psychological well-being. However, one limitation of the current field is the multitude of ways of capturing these narrative features, with little attention to overarching dimensions or latent factors of narrative that are responsible for these associations with well-being. In the present study we uncovered a reliable structure that accommodates commonly studied features of life narratives in a large-scale, multi-university collaborative effort. Across three large samples of emerging and mid-life adults responding to various narrative prompts (N = 855 participants, N = 2565 narratives), we found support for three factors of life narratives: motivational and affective themes, autobiographical reasoning, and structural aspects. We also identified a "functional" model of these three factors that reveals a reduced set of narrative features that adequately captures each factor. Additionally, the motivational and affective themes factor was the most reliably related to well-being. Finally, associations with personality traits varied by narrative prompt. Overall, the present findings provide a comprehensive and robust model for understanding the empirical structure of narrative identity as it relates to well-being, which offers meaningful theoretical contributions to the literature and facilitates practical decision making for researchers endeavoring to capture and quantify life narratives.


2020 ◽  
Vol 31 (2) ◽  
pp. 64-73
Author(s):  
Dinesh Batra

This research note suggests five research challenges when conducting quantitative studies on large-scale agile methodology (LSAM). First, the LSAM empirical literature, which consists mainly of qualitative studies focusing on coordination issues, provides limited background. Second, the notion of "large" in LSAM needs to be clarified, because existing research seems to have focused on "very large" or outlier projects. Third, the popular LSAM methods suggest broad and general maxims that may make it difficult to operationalize dependent variables, especially in innovation adoption studies. Fourth, the researcher may be overwhelmed when selecting independent variables from the plethora of suggested constructs. Finally, some of the problems attributed to large-scale agile are actually challenges of using conventional agile during a time period when LSAM had not yet formally emerged. Researchers should take a balanced approach, considering both the benefits and challenges of using LSAM, and focus on project-level dependent measures such as success and acceptance.


Author(s):  
Yifan Gao ◽  
Yang Zhong ◽  
Daniel Preoţiuc-Pietro ◽  
Junyi Jessy Li

In computational linguistics, specificity quantifies how much detail is engaged in text. It is an important characteristic of speaker intention and language style, and is useful in NLP applications such as summarization and argumentation mining. Yet to date, expert-annotated data for sentence-level specificity are scarce and confined to the news genre. In addition, systems that predict sentence specificity are classifiers trained to produce binary labels (general or specific). We collect a dataset of over 7,000 tweets annotated with specificity on a fine-grained scale. Using this dataset, we train a supervised regression model that accurately estimates specificity in social media posts, reaching a mean absolute error of 0.3578 (for ratings on a scale of 1-5) and 0.73 Pearson correlation, significantly improving over baselines and previous sentence specificity prediction systems. We also present the first large-scale study revealing the social, temporal and mental health factors underlying language specificity on social media.
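As a rough illustration of sentence-specificity regression on a 1-5 scale, the sketch below fits a closed-form ridge regression on shallow surface cues. The features, the tiny synthetic training set, and all constants are invented for illustration; they are not the paper's model, features, or data:

```python
import numpy as np

def features(sentence):
    """Shallow cues often associated with specificity: tokens containing
    digits, capitalized (entity-like) tokens, and sentence length."""
    tokens = sentence.split()
    n_digits = sum(any(c.isdigit() for c in t) for t in tokens)
    n_caps = sum(t[:1].isupper() for t in tokens[1:])  # skip sentence-initial word
    return np.array([1.0, n_digits, n_caps, len(tokens)])

# Tiny synthetic training set: (sentence, specificity rating on 1-5).
train = [
    ("Things happened recently.", 1.5),
    ("Stuff is going on.", 1.0),
    ("I met Sarah at 5pm on March 3 in Austin.", 4.5),
    ("The bus 42 arrives at Main Street at 8:15.", 4.8),
]
X = np.stack([features(s) for s, _ in train])
y = np.array([r for _, r in train])

# Ridge regression in closed form: w = (X'X + lam*I)^-1 X'y
lam = 0.1
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def predict(sentence):
    """Predicted specificity, clipped to the 1-5 rating scale."""
    return float(np.clip(features(sentence) @ w, 1.0, 5.0))

print(predict("Something occurred."))           # low specificity
print(predict("Anna flew to Tokyo on May 2."))  # higher specificity
```

A real system would train on the annotated tweet corpus with richer lexical and distributional features, but the regression-on-a-continuous-scale setup is the same idea.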


2018 ◽  
Vol 44 (1) ◽  
pp. 49-73 ◽  
Author(s):  
Dan Demetriou ◽  
Bob Fischer

Faced with the choice between supporting industrial plant agriculture and hunting, Tom Regan’s rights view can be plausibly developed in a way that permits a form of hunting we call “dignitarian.” To motivate this claim, we begin by showing how the empirical literature on animal deaths in plant agriculture suggests that a non-trivial amount of hunting would not add to animal harm. We discuss how Tom Regan’s miniride principle appears to morally permit hunting in that case, and we address recent objections by Jason Hanna to environmentally-based culling that may be seen to speak against this conclusion. We then turn to dignity, which is especially salient in scenarios where harm is necessary or justifiable. We situate “dignitarian” hunting within a larger framework of adversarial ethics, and argue that dignitarian hunting gives animals a more dignified death than the alternatives endemic to large-scale plant agriculture, and so is permissible based on the kinds of principles that Regan endorses. Indeed, dignitarian hunting may actually fit better with Regan’s widely endorsed animal rights framework than the practice of many vegans, and should only be rejected if we’re just as willing to condemn supporting conventional plant agriculture.


2008 ◽  
Vol 130 (2) ◽  
Author(s):  
Johannes P. Pretorius ◽  
Detlev G. Kröger

No physically optimal solar chimney power plant exists when considering only the dimensions of such a plant. However, if construction costs are introduced, thermoeconomically optimal plant configurations may be established. This paper investigates the thermoeconomic optimization of a large-scale solar chimney power plant. Initially, the relevant dimensions to be optimized are selected. An approximate cost model is then developed, enabling the identification of optimal plant dimensions for different cost structures. Multiple computer simulations are performed and the results are compared to the approximated cost of each specific plant. Thermoeconomically optimal plant configurations are obtained.
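The optimization loop described, pairing a cost model with plant performance estimates and searching for the lowest cost per unit output, can be sketched as a grid search. The scaling laws and constants below are invented placeholders, not the paper's cost model or simulation results:

```python
# Toy thermoeconomic sketch: grid-search plant dimensions for the lowest
# cost per unit power. Scaling laws and constants are invented for
# illustration and are not the paper's cost model or simulations.

def power_mw(height_m, radius_m):
    """Crude output scaling: grows with chimney height and collector area."""
    return 1e-5 * height_m * radius_m ** 2

def cost_musd(height_m, radius_m):
    """Invented cost structure: chimney cost superlinear in height,
    collector cost proportional to area."""
    return 2e-4 * height_m ** 1.5 + 3e-5 * radius_m ** 2

def best_config(heights, radii):
    """Return the (height, radius) pair with the lowest cost per MW."""
    return min(((h, r) for h in heights for r in radii),
               key=lambda hr: cost_musd(*hr) / power_mw(*hr))

print(best_config([500, 1000, 1500], [1000, 2000]))
```

The actual study replaces these toy functions with detailed plant simulations and a calibrated cost model, but the structure of the search is the same: evaluate candidate dimensions, compare cost against output, and keep the thermoeconomic optimum.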


2021 ◽  
Author(s):  
Vahid Arabnejad

Basic science is becoming ever more computationally intensive, increasing the need for large-scale compute and storage resources, be they within a High-Performance Computing cluster or, more recently, within the cloud. Commercial clouds have increasingly become a viable platform for hosting scientific analyses and computation due to their elasticity, recent introduction of specialist hardware, and pay-as-you-go cost model. This computing paradigm therefore presents a low-capital, low-barrier alternative to operating dedicated eScience infrastructure. Indeed, commercial clouds now enable universal access to capabilities previously available only to large, well-funded research groups. While the potential benefits of cloud computing are clear, there are still significant technical hurdles associated with obtaining the best execution efficiency whilst trading off cost. In most cases, large-scale scientific computation is represented as a workflow for scheduling and runtime provisioning. Such scheduling becomes an even more challenging problem on cloud systems due to the dynamic nature of the cloud, in particular the elasticity, the pricing models (both static and dynamic), the non-homogeneous resource types and the vast array of services. This mapping of workflow tasks onto a set of provisioned instances is an example of the general scheduling problem and is NP-complete. In addition, certain runtime constraints, the most typical being the cost of the computation and the time the computation requires to complete, must be met. This thesis addresses the scientific workflow scheduling problem in the cloud, which is to schedule workflow tasks on cloud resources in a way that users meet their defined constraints, such as budget and deadline, and providers maximize profits and resource utilization. Moreover, it explores different mechanisms and strategies for distributing defined constraints over a workflow and investigates their impact on the overall cost of the resulting schedule.
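The deadline-constrained mapping of tasks onto priced instances can be illustrated with a toy greedy scheduler for a linear task chain. The instance catalogue, prices, and feasibility rule below are hypothetical; real cloud schedulers handle full DAGs, data transfer, dynamic pricing, and provisioning delays:

```python
# Toy sketch of deadline-constrained workflow scheduling on cloud
# instances. Instance types, prices, and the greedy rule are invented
# for illustration.

# (name, speedup factor, price per hour): hypothetical VM catalogue.
INSTANCES = [("small", 1.0, 0.10), ("medium", 2.0, 0.20), ("large", 4.0, 0.45)]

def schedule_chain(task_hours, deadline):
    """Greedily assign each task in a linear workflow the cheapest
    instance such that the remaining tasks could still meet the deadline
    on the fastest type. Returns (assignment, makespan, cost), or None
    if the deadline is infeasible even at maximum speed."""
    fastest = max(speed for _, speed, _ in INSTANCES)
    assignment, elapsed, cost = [], 0.0, 0.0
    for i, hours in enumerate(task_hours):
        # best-case time for everything after this task
        remaining = sum(task_hours[i + 1:]) / fastest
        for name, speed, price in sorted(INSTANCES, key=lambda t: t[2]):
            runtime = hours / speed
            if elapsed + runtime + remaining <= deadline:
                assignment.append(name)
                elapsed += runtime
                cost += runtime * price
                break
        else:
            return None  # even the fastest type cannot meet the deadline
    return assignment, elapsed, cost

print(schedule_chain([4.0, 2.0], deadline=3.0))
```

The greedy rule trades cost against the deadline exactly as the thesis frames the problem: since the mapping is NP-complete in general, practical schedulers rely on heuristics or metaheuristics rather than exact optimization.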

