Drug Innovations and Welfare Measures Computed from Market Demand: The Case of Anti-Cholesterol Drugs

2012 · Vol 4 (3) · pp. 167-189
Author(s): Abe Dunn

The pharmaceutical industry is characterized by substantial investment in R&D and a large number of new product introductions, which poses special problems for price measurement because the quality of drug products changes over time. This paper applies recent demand estimation techniques to individual-level data to construct a constant-quality price index for anti-cholesterol drugs. Although the average price of anti-cholesterol drugs does not change over the sample period, I find that the constant-quality price index drops by 27 percent, a pace more in line with expectations for such a dynamic segment of the industry. (JEL C43, L11, L65, O31)
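To make the constant-quality idea concrete, here is a minimal sketch of a logit-demand exact price index: the index is the proportional change in expenditure that holds expected utility fixed, so quality improvements and new product entry lower it even when posted prices are flat. The price-sensitivity parameter, product qualities, and prices below are all hypothetical, and the construction is a textbook Small-Rosen/Feenstra device, not Dunn's exact estimator.

```python
import numpy as np

# Hypothetical inputs: estimated product qualities (mean utility net of price)
# and prices in a base and a comparison period.
alpha = 0.8                        # assumed price-sensitivity from demand estimation
quality = {                        # delta_j: estimated quality of each drug
    "drug_A": 1.2, "drug_B": 0.9, "drug_C": 1.5,    # drug_C enters in period 1
}
prices_0 = {"drug_A": 2.0, "drug_B": 1.8}                    # base period, no drug_C
prices_1 = {"drug_A": 2.0, "drug_B": 1.9, "drug_C": 2.4}     # comparison period

def inclusive_value(prices):
    """Money-metric expected utility of the choice set under logit demand."""
    v = [quality[j] - alpha * p for j, p in prices.items()]
    return np.log(np.sum(np.exp(v))) / alpha

# Constant-quality price index: the expenditure change that holds expected
# utility fixed. A new, better product lowers the index even if posted prices don't move.
index = np.exp(-(inclusive_value(prices_1) - inclusive_value(prices_0)))
print(f"constant-quality price index (base = 1): {index:.3f}")
```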

2017 · Vol 20 (2) · pp. 1-10
Author(s): Goran Vlašić, Emanuel Tutek

Customer centricity is gaining importance as companies gain access to individual-level data on identifiable customers of ever greater quantity and quality. However, efforts to enhance customer centricity often face challenges because they require organization-wide effort. This paper explores the role of environment-level factors, organization-level factors (in terms of structure, influence, and culture), and department-level factors (in terms of integration, power, and capabilities) in driving the customer centricity of a firm. Results indicate that, while within-category competition stimulates customer centricity, cross-category competitive intensity limits it. Moreover, marketing competences exhibit a highly significant impact, one that even diminishes the role of inter-departmental integration. Lastly, the results show that firms with a high level of marketing capabilities and the right culture (in terms of tolerance for failure and availability of slack resources) are likely to exhibit higher levels of customer centricity.


Author(s): Bernard Enjolras

Volunteer rates vary greatly across Europe despite the voluntary sector's common history and tradition. This contribution advances a theoretical explanation for the variation in volunteering across Europe, the capability approach, and tests it by adopting a two-step strategy for modeling contextual effects. The approach, which draws on the concept of capability introduced by Sen (Choice, Welfare and Measurement, Oxford University Press, 1980/1982), rests on the claim that the demand and supply sides of the voluntary sector can be expected to vary according to collective and individual capabilities to engage in volunteering. To test the approach empirically, the study relied on two data sources: the 2015 European Union (EU) Survey on Income and Living Conditions (EU-SILC), including an ad hoc module on volunteering at the individual level, and the Quality of Government Institute and Pew Research Center macro-level data sets. These were used to operationalize economic, human, political, social, and religious contextual factors and to assess their effects on individuals' capability to volunteer. The results support the capability hypothesis at both levels. At the individual level, indicators of human, economic, and social resources have a positive effect on the likelihood of volunteering. At the contextual level, macro-structural indicators of economic, political, social, and religious contexts affect individuals' ability to transform resources into functioning, that is, into volunteering.
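The two-step strategy for modeling contextual effects can be sketched as follows: fit an individual-level model of volunteering within each country, then regress the estimated country intercepts on macro-level indicators. The simulated data, variable names (education, income, gdp_pc, trust), and specification below are illustrative placeholders, not the article's actual EU-SILC variables or model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Step 0: simulated individual-level data for 10 hypothetical countries.
micro = pd.DataFrame({
    "country": rng.integers(0, 10, 5000),
    "education": rng.normal(size=5000),
    "income": rng.normal(size=5000),
})
micro["volunteer"] = (rng.random(5000) < 0.3).astype(int)

# Step 1: per-country logit of volunteering on individual resources;
# keep each country's estimated intercept (baseline propensity to volunteer).
intercepts = {}
for c, g in micro.groupby("country"):
    fit = smf.logit("volunteer ~ education + income", data=g).fit(disp=0)
    intercepts[c] = fit.params["Intercept"]

# Step 2: regress country intercepts on hypothetical macro indicators.
macro = pd.DataFrame({
    "country": range(10),
    "gdp_pc": rng.normal(size=10),
    "trust": rng.normal(size=10),
})
macro["intercept"] = macro["country"].map(intercepts)
step2 = smf.ols("intercept ~ gdp_pc + trust", data=macro).fit()
print(step2.params)
```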


2019 · Vol 63 (9) · pp. 2128-2154
Author(s): Patricia Justino, Bruno Martorano

This article analyzes the role of individual redistributive preferences in protest participation. The article focuses on Latin America, a region that has experienced substantial protests and demonstrations in the last decade, making use of individual-level data on redistributive preferences and protest participation collected across eighteen countries in 2010, 2012, and 2014. The results show evidence of an association between strong individual preferences for redistribution and participation in protests motivated by the low quality of services and institutions, failures to reduce corruption, and perceived lower standards of living. The results are robust to alternative estimators, samples, and model specifications, and are not affected by endogeneity concerns.


2020 · Vol 46 (2-3) · pp. 311-324
Author(s): Tara Sklar, Christopher T. Robertson

Telehealth continues to experience substantial investment, innovation, and unprecedented growth. However, it has been slow to transform healthcare. Recent developments in telehealth technologies suggest great potential for chronic care management, mental health services, and care delivery in the home, all of which should be particularly impactful for an aging population with physical and cognitive limitations. While this alignment of technological capacity and market demand is promising, legal barriers remain for telehealth operators seeking to scale up across large geographic areas. To better understand how federal and state law can be reformed to enable greater telehealth utilization, we review and extract lessons from three areas: (1) establishment of a healthcare relationship, (2) state licensure laws, and (3) reimbursement. We analyze these areas because of the legal ambiguities or inconsistencies they raise from state to state, which appear to be hampering telehealth growth without necessarily improving quality of care. We propose several solutions for a more unified approach to telehealth regulation that incorporate core bioethics principles, including the doctor-patient relationship, competence, and patient autonomy, as well as population-wide questions of resource allocation and access. Lawmakers should clarify that healthcare relationships may be established outside of in-person meetings, align licensure laws via an interstate compact or federal preemption, and expand Centers for Medicare and Medicaid Services plans to reimburse telehealth delivery in the home.


2011 · Vol 31 (6) · pp. E34-E44
Author(s): David G. T. Whitehurst, Stirling Bryan, Martyn Lewis

Background. Group mean estimates and their underlying distributions are the focus of assessment for cost and outcome variables in economic evaluation. Research on the comparability of alternative preference-based measures of health-related quality of life has typically focused on analysis of individual-level data within specific clinical specialties or community-based samples. Purpose. To explore the relationship between group mean scores for the EQ-5D and SF-6D across the utility scoring range. Methods. Studies were identified via a systematic search of 13 online electronic databases, a review of the reference lists of included papers, and hand searches of key journals. Studies were included if they reported contemporaneous mean EQ-5D and SF-6D health state scores. All (sub)group comparisons of group mean EQ-5D and SF-6D scores identifiable from text, tables, or figures were extracted from the identified studies; in total, 921 group mean comparisons were extracted from 56 studies. The nature of the relationship between the paired scores was examined using ranked scatter graphs and analysis of agreement. Results. Systematic differences in group mean estimates were observed at both ends of the utility scale: at the lower end of the scale the SF-6D provides higher mean utility estimates, while at the upper end the EQ-5D does. Conclusions. These findings show that group mean EQ-5D and SF-6D scores are not directly comparable. This raises serious concerns about the cross-study comparability of economic evaluations that differ in their choice of preference-based measure, although the review covers only 2 of the available instruments. Further work is needed to address the practical implications of noninterchangeable utility estimates for cost-per-QALY estimates and decision making.
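As a concrete illustration of the kind of agreement analysis the review performs, the sketch below computes a Bland-Altman style bias and limits of agreement for paired group-mean utilities, plus the slope of the difference on the pair mean, which captures the sign flip across the scale. The six paired means are invented for illustration; they are not drawn from the 921 extracted comparisons.

```python
import numpy as np

# Illustrative paired group-mean utilities (EQ-5D vs. SF-6D), ordered from
# poor to good health states.
eq5d = np.array([0.21, 0.35, 0.48, 0.62, 0.74, 0.85])
sf6d = np.array([0.42, 0.48, 0.55, 0.63, 0.71, 0.78])

diff = eq5d - sf6d
mean_pair = (eq5d + sf6d) / 2

bias = diff.mean()                      # systematic difference between measures
loa = 1.96 * diff.std(ddof=1)           # 95% limits of agreement
print(f"mean difference: {bias:+.3f}, limits of agreement: ±{loa:.3f}")

# The sign of the difference flips across the scale (SF-6D higher at the
# bottom, EQ-5D higher at the top), so a single bias term understates the problem.
slope = np.polyfit(mean_pair, diff, 1)[0]
print(f"slope of difference on pair mean: {slope:+.3f}")
```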


2018 · Vol 54 (4) · pp. 745-775
Author(s): Karel Kouba, Jakub Lysek

Research on invalid voting has expanded rapidly over the past few years. This review article examines its principal findings for the first time and provides a new theoretical perspective on the origins of invalid votes based on a two-dimensional framework. The main results of 54 studies using both individual-level and aggregate-level data, as well as the results of experimental and qualitative studies, are analysed. A meta-analysis of all existing aggregate-level studies finds that compulsory voting, quality of democracy, fragmentation, and closeness of the electoral race play important roles in explaining invalid voting. On the other hand, the research is marked by many theoretical and empirical contradictions that hamper the accumulation of knowledge in this field. We therefore conclude by outlining challenges for future research.
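A minimal sketch of the pooling step behind such a meta-analysis: inverse-variance weighting with a DerSimonian-Laird random-effects adjustment. The coefficients and standard errors are made up for illustration; they are not estimates from the 54 reviewed studies.

```python
import numpy as np

# Hypothetical study-level coefficients and standard errors for one predictor
# of invalid voting (e.g., compulsory voting).
beta = np.array([0.30, 0.45, 0.22, 0.51, 0.38])
se = np.array([0.10, 0.15, 0.08, 0.20, 0.12])

w = 1 / se**2                                     # fixed-effect weights
fixed = (w * beta).sum() / w.sum()                # fixed-effect pooled estimate
q = (w * (beta - fixed)**2).sum()                 # Cochran's Q heterogeneity statistic

# DerSimonian-Laird estimate of between-study variance, truncated at zero.
tau2 = max(0.0, (q - (len(beta) - 1)) / (w.sum() - (w**2).sum() / w.sum()))

w_re = 1 / (se**2 + tau2)                         # random-effects weights
pooled = (w_re * beta).sum() / w_re.sum()
print(f"pooled effect: {pooled:.3f} (tau^2 = {tau2:.4f})")
```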


2015 · Vol 7 (3) · pp. 24-53
Author(s): Przemyslaw Jeziorski, Ilya Segal

We study users' responses to sponsored-search advertising using consumer-level data from Microsoft Live. We document that users click ads in a nonsequential order and that the clickthrough rates depend on the identity of competing ads. We estimate a dynamic model of utility-maximizing users that rationalizes these two facts and find that 51 percent more clicks would occur if ads faced no competition. We demonstrate that optimal matching of advertisements to positions raises welfare by 27 percent, and that individual-level targeting raises welfare by 69 percent. Revealing the quality of the advertiser prior to clicking on a sponsored link raises welfare by 1.6 percent. (JEL D12, L86, M37)
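The flavor of the competition counterfactual can be illustrated with a toy logit click model in which competing ads share a common denominator, so removing competitors mechanically raises each ad's click probability. The ad qualities and functional form below are illustrative assumptions, not the authors' estimated dynamic model.

```python
import numpy as np

# Hypothetical ad qualities by position on the results page.
quality = np.array([1.0, 0.6, 0.3])

def click_probs(q):
    # Each user clicks ad j with probability exp(q_j) / (1 + sum_k exp(q_k)):
    # competing ads cannibalize attention through the shared denominator.
    e = np.exp(q)
    return e / (1 + e.sum())

with_comp = click_probs(quality)
# Counterfactual: each ad shown with no competing ads on the page.
alone = np.array([click_probs(np.array([q]))[0] for q in quality])

print("clicks with competition:", with_comp.round(3))
print("clicks in isolation:    ", alone.round(3))
print(f"extra clicks without competition: {alone.sum() / with_comp.sum() - 1:.1%}")
```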


Author(s): David R. McClure, Jerome P. Reiter

When releasing individual-level data to the public, statistical agencies typically alter data values to protect the confidentiality of individuals' identities and sensitive attributes. When data undergo substantial perturbation, secondary data analysts' inferences can be distorted in ways that they typically cannot determine from the released data alone. This is problematic in that analysts have no way of knowing whether they should trust results based on the altered data. To ameliorate this problem, agencies can establish verification servers: remote computers that analysts query for measures of the quality of inferences obtained from disclosure-protected data. The reported quality measures reflect the similarity between the analysis done with the altered data and the analysis done with the confidential data. However, quality measures can leak information about the confidential values, so they too must be subject to disclosure protections. In this article, we discuss several approaches to releasing quality measures for verification servers when the public-use data are generated via multiple imputation, also known as synthetic data. The methods can be modified for other stochastic perturbation methods.
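One widely used quality measure of this kind is confidence-interval overlap: compare the interval for a coefficient estimated on the synthetic data with the one from the confidential data, and report the average share of each interval covered by their intersection. The sketch below assumes a simple OLS slope and simulated stand-ins for both data sets; it illustrates the measure itself, not the authors' verification-server protocol.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.normal(size=500)
y_conf = 2.0 * x + rng.normal(size=500)              # stand-in confidential data
y_syn = 2.0 * x + rng.normal(scale=1.3, size=500)    # stand-in synthetic data

def ci(y):
    """95% confidence interval for the slope of y on the shared regressor x."""
    fit = sm.OLS(y, sm.add_constant(x)).fit()
    lo, hi = fit.conf_int()[1]                       # row 1 = slope coefficient
    return lo, hi

(l_c, u_c), (l_s, u_s) = ci(y_conf), ci(y_syn)
overlap = max(0.0, min(u_c, u_s) - max(l_c, l_s))    # length of the intersection

# Average the overlap as a share of each interval's length; 1 means identical CIs.
measure = 0.5 * (overlap / (u_c - l_c) + overlap / (u_s - l_s))
print(f"CI overlap quality measure: {measure:.3f}")
```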

