Does Online Training Work in Retail?

Author(s):  
Marshall Fisher ◽  
Santiago Gallino ◽  
Serguei Netessine

Problem definition: How much, if at all, does training in product features increase a sales associate’s sales productivity? Academic/practical relevance: A knowledgeable retail sales associate (SA) can explain the features of available product variants and give a customer sufficient confidence in the customer’s choice or suggest alternatives so that the customer becomes willing to purchase. Although it is plausible that increasing an SA’s product knowledge will increase sales, training is not without cost and turnover is high in retail, so most retailers provide little product-knowledge training. Methodology: We partner with two firms and collect data on more than 50,000 SAs who had access to training. We assemble a detailed data set of the training history and individual sales productivity over a two-year period. We conduct econometric analysis to quantify the causal effect of training on sales. Results: For SAs who engaged in training, the sales rate increases by 1.8% for every online module taken, which is a much higher benefit than the direct or indirect costs associated with this training. Brand-specific training has a larger effect on the focal brand; however, there is a positive effect on other brands the SA sells. We also assess how the training benefit varies depending on the SA’s tenure, sales rate prior to training, and number of modules taken. Managerial implications: We present evidence of a novel training mechanism that can be extremely attractive to retailers. Online training tools, such as the one we study, have two characteristics that should not be overlooked. First, it is the brands, not the retailers, that create, develop, and pay for the training content. Second, the incentives are such that SAs invest their own time, rather than time on the job, to train, and this makes the retailer’s investment in the training a profitable proposition.

2002 ◽  
Vol 11 (3) ◽  
pp. 206-215 ◽  
Author(s):  
Graham Dunn

SUMMARY Objective – To provide a relatively non-technical review of recent statistical research on the analysis and interpretation of the results of randomised controlled trials in which there are possibly all three of the following types of protocol violation: non-adherence to allocated treatment, contamination (that is, patients receiving treatments other than the one to which they were allocated), and attrition (missing outcome data). Methods – The estimation methods involve the use of potential outcomes (counterfactuals) in the definition of a causal effect of treatment and in drawing valid inferences concerning its size. Results – The methods are explained through the use of simple arithmetical expressions involving the counts from three-way contingency tables (Outcome by Treatment Received by Random Allocation). Illustration is provided through the use of a hypothetical data set. Conclusions – Recent advances in statistical methodology enable one to estimate treatment effects from the results of randomised trials in which the treatment actually received is not necessarily the one to which the patient was allocated. These methods allow one to adjust for both non-compliance and loss to follow-up. Even for such a 'broken' randomised trial, inference concerning causal effects is safer than inference from data arising from an observational study that never involved random allocation in the first place.
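
The counterfactual machinery the summary describes can be illustrated with the simplest such estimator, the complier-average causal effect (CACE), computed by arithmetic on trial counts. All numbers below are invented for illustration (in the spirit of the paper's hypothetical data set), and the one-sided non-compliance setting is an assumption:

```python
# Hypothetical counts for a CACE (complier-average causal effect) estimate
# under one-sided non-compliance; every number here is invented.
n_treat_arm = 200
n_received = 140          # treatment-arm patients who actually took the treatment
success_treat_arm = 90    # good outcomes in the treatment arm
n_ctrl_arm = 200
success_ctrl_arm = 60     # good outcomes in the control arm

# Intention-to-treat effect: difference in outcome proportions by allocation.
itt = success_treat_arm / n_treat_arm - success_ctrl_arm / n_ctrl_arm

# Proportion of compliers, estimated from the treatment arm.
p_comply = n_received / n_treat_arm

# CACE: the ITT effect rescaled to those who would comply with allocation.
cace = itt / p_comply
print(round(itt, 3), round(p_comply, 3), round(cace, 3))
```

Randomisation makes the ITT contrast valid despite non-compliance; dividing by the complier proportion recovers the effect among those whose treatment actually responds to allocation.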


Author(s):  
Lifei Sheng ◽  
Christopher Thomas Ryan ◽  
Mahesh Nagarajan ◽  
Yuan Cheng ◽  
Chunyang Tong

Problem definition: Games are the fastest-growing sector of the entertainment industry. Freemium games are the fastest-growing segment within games. The concept behind freemium is to attract large pools of players, many of whom will never spend money on the game. When game publishers cannot earn directly from the pockets of consumers, they employ other revenue-generating content, such as advertising. Players can become irritated by revenue-generating content. A recent innovation is to offer incentives for players to interact with such content, such as clicking an ad or watching a video. These are termed incentivized (incented) actions. We study the optimal deployment of incented actions. Academic/practical relevance: Removing or adding incented actions can essentially be done in real-time. Accordingly, the deployment of incented actions is a tactical, operational question for game designers. Methodology: We model the deployment problem as a Markov decision process (MDP). We study the performance of simple policies, as well as the structure of optimal policies. We use a proprietary data set to calibrate our MDP and derive insights. Results: Cannibalization—the degree to which incented actions distract players from making in-app purchases—is the key parameter for determining how to deploy incented actions. If cannibalization is sufficiently high, it is never optimal to offer incented actions. If cannibalization is sufficiently low, it is always optimal to offer. We find sufficient conditions for the optimality of threshold strategies that offer incented actions to low-engagement users and later remove them once a player is sufficiently engaged. Managerial implications: This research introduces operations management academics to a new class of operational issues in the games industry. Managers in the games industry can gain insights into when incentivized actions can be more or less effective. 
Game designers can use our MDP model to make data-driven decisions for deploying incented actions.
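
As a concrete illustration of the kind of MDP described above, the following sketch runs value iteration on a toy engagement-state model in which offering an incented action trades a cannibalized purchase probability against ad revenue and an engagement boost. The states, transition rule, and every parameter value are invented for illustration and are not calibrated to the paper's proprietary data:

```python
import numpy as np

# Toy deployment MDP: states are engagement levels 0..4; at each state the
# designer chooses whether to offer an incented action. All parameters
# (revenues, purchase probabilities, cannibalization factor) are invented.
n_states = 5
gamma = 0.95                                        # discount factor
p_buy = np.array([0.01, 0.02, 0.05, 0.10, 0.15])    # purchase prob. by engagement
rev_buy = 5.0                                       # revenue per in-app purchase
rev_ad = 0.05                                       # revenue per incented action
cannibal = 0.6                                      # purchase prob. lost when offering

def step_reward(s, offer):
    buy = p_buy[s] * ((1 - cannibal) if offer else 1.0)
    return buy * rev_buy + (rev_ad if offer else 0.0)

def next_state(s, offer):
    # Offering an incented action nudges engagement up one level.
    return min(s + 1, n_states - 1) if offer else s

V = np.zeros(n_states)
for _ in range(500):                                # value iteration
    V = np.array([max(step_reward(s, a) + gamma * V[next_state(s, a)]
                      for a in (False, True)) for s in range(n_states)])

policy = [max((False, True),
              key=lambda a: step_reward(s, a) + gamma * V[next_state(s, a)])
          for s in range(n_states)]
print(policy)  # True where offering incented actions is optimal
```

With these invented parameters the resulting policy has the threshold form the paper characterizes: offer incented actions to low-engagement players and withdraw them at the highest engagement level, where cannibalization of purchases outweighs the ad revenue.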


2011 ◽  
pp. 46-60 ◽  
Author(s):  
Antonio Miguel Seoane Pardo ◽  
Francisco José García Peñalvo

This chapter outlines the problem of laying the groundwork for building a suitable online training methodology. In the first place, it points out that most e-learning initiatives are developed without a defined method or an appropriate strategy. It then critically analyzes the role of the constructivist model in relation to this problem, affirming that this explanatory framework is not a method and describing the problems to which this confusion gives rise. Finally, it proposes a theoretical and epistemological framework of reference for building this methodology based on Greek paideía. The authors propose that the search for a reference model such as the one developed in ancient Greece will allow us to develop a method based on the importance of a teaching profile “different” from traditional academic roles and which we call “tutor.” It has many similarities to the figures in charge of monitoring learning both in Homeric epic and Classical Greece.


2020 ◽  
Vol 22 (5) ◽  
pp. 1045-1065 ◽  
Author(s):  
Nirup Menon ◽  
Anant Mishra ◽  
Shun Ye

Problem definition: Innovation contest platforms are often organized around specific fields and host contests that span a variety of interdependent problem domains. Whereas contestants may benefit from related experience in contests whose problem domains share an interdependency with the focal problem domain, it is unclear whether the benefits of related experience arise symmetrically from upstream experience (i.e., experience in problem domains that provide input information to the focal problem domain) and downstream experience (i.e., experience in problem domains that use output information from the focal problem domain) or differ among them. Academic/practical relevance: Given that innovation contest platforms serve to effectively match contest problem requirements with contestants’ skills, it is important to understand how a contestant’s prior experience on a platform contributes to her problem-solving performance. Our research provides a more granular examination of the benefits of related experience than what has been examined in prior studies on individual learning or innovation contests. Methodology: We collected detailed archival data from TopCoder, a leading innovation contest platform that hosts contests across multiple interdependent software development problem domains, from its launch in 2001 to September 2013. Our data set comprises detailed participation histories of 821 contestants in 3,274 contests across eight interdependent problem domains involving 8,985 observations. Results: Whereas a contestant’s related experience on the innovation contest platform is more positively associated with her focal contest performance compared with unrelated experience, the benefits of related experience arise only from downstream experience. That is, there are no significant performance benefits of upstream experience. 
Furthermore, the performance benefits of downstream experience are greater when the contest duration is shorter, highlighting its role in enabling more efficient search and problem solving in innovation contest platforms with interdependent problem domains. Managerial implications: Contrary to the notion of “hyperspecialization,” our findings suggest that contestants can reap benefits from diversifying their experience into downstream problem domains on innovation contest platforms. Furthermore, innovation contest platforms could facilitate such targeted diversification of contestant experience by developing more granular metrics of contestant experience across problem domains. Our findings also have implications for resource allocation and job rotation decisions in software development projects within firms.


Author(s):  
Diwas KC ◽  
Sokol Tushe

Problem definition: In the modern workplace, it is increasingly common for workers to concurrently attend to tasks across multiple physical locations. However, frequent site switching can lead to increased setup and overhead costs. Specifically, workers expend significant time and cognitive effort getting reoriented with personnel, operating processes, tools, and resources whenever they switch sites. In this paper, we look at the productivity and quality implications of multisite work. Academic/practical relevance: Although multisite workplace deployment is increasingly common, its impact on people operations has not been examined in the operations management literature. We contribute to the literature by studying the effect of multisiting on individual worker productivity and quality of output. Methodology: To estimate the effect of multisite operations on performance, we turn to a setting where multisite worker assignment is common—that of physicians who have admitting privileges at multiple hospitals. We collected detailed data on individual physicians practicing in 83 hospitals between 1999 and 2010. Our extensive data set includes detailed operational and clinical factors associated with more than 950,000 patient encounters. Our empirical analysis takes the form of a panel, where we follow a given physician over time and link short-term multisiting to patient-level outcomes. Results: We find that multisiting negatively impacts productivity. Specifically, for each additional site at which a physician works, we observe a 2% increase in patient length of stay. For each site served, the likelihood of a patient developing a complication increases by 3%. Greater travel distance between sites and lack of focus at a given site explain the performance declines due to multisiting. In addition, we find that the performance declines resulting from multisite operation are reduced among low-complexity patients and among highly experienced physicians. 
Managerial implications: Multisite performance losses need to be traded off against the potential benefits. The negative effects of multisiting can be mitigated by limiting multisite deployment to simpler tasks and among highly experienced physicians. Managers can decrease switching costs of multisite work by standardizing workflows, processes, and tools across sites. In addition, the practice of multisite work can be limited to sites that are physically proximate to avoid the overhead costs associated with excessive travel.


2020 ◽  
Vol 22 (4) ◽  
pp. 754-774 ◽  
Author(s):  
Itai Gurvich ◽  
Kevin J. O’Leary ◽  
Lu Wang ◽  
Jan A. Van Mieghem

Problem definition: Collaboration is important in services but may lead to interruptions. Professionals exercise discretion on when to preempt individual tasks to switch to collaborative tasks. Academic/practical relevance: Discretionary task switching can introduce changeover times when resuming the preempted task and, thus, can increase total processing time. Methodology: We analyze and quantify how collaboration, through interruptions and discretionary changeovers, affects total processing time. We introduce an episodal workflow model that captures the interruption and discretionary changeover dynamics—each switch and the episode of work it preempts—present in settings in which collaboration and multitasking are paramount. A simulation study provides evidence that changeover times are properly identified and estimated without bias. We then deploy the model in a field study of hospital medicine physicians: “hospitalists.” The hospitalist workflow includes visiting patients, consulting with other caregivers to guide patient diagnosis and treatment, and documenting in the patient’s medical chart. The empirical analysis uses a data set assembled from direct observation of hospitalist activity and pager-log data. Results: We estimate that a hospitalist incurs a total changeover time during documentation of five minutes per patient per day. Managerial implications: This estimate represents a significant 20% of the total processing time per patient: caring for 14 patients per day, our model estimates that a hospitalist spends more than one hour each day on changeovers. This provides evidence that task switching can causally lead to longer documentation time.
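
The headline figures can be checked with back-of-the-envelope arithmetic using only numbers stated in the abstract:

```python
# Sanity-check of the abstract's changeover figures (all inputs are taken
# from the abstract itself, not from the underlying data set).
changeover_per_patient_min = 5     # changeover time per patient per day (minutes)
share_of_processing = 0.20         # changeover share of total processing time
patients_per_day = 14

total_processing_per_patient = changeover_per_patient_min / share_of_processing
daily_changeover_min = changeover_per_patient_min * patients_per_day
print(total_processing_per_patient, daily_changeover_min)
```

Five minutes at a 20% share implies roughly 25 minutes of total processing time per patient, and 14 patients imply 70 minutes of changeover per day, consistent with the abstract's "more than one hour" figure.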


Author(s):  
Robert L. Bray

Problem definition: Do the benefits of operational transparency depend on when the work is done? Academic/practical relevance: This work connects the operations management literature on operational transparency with the psychology literature on the peak-end effect. Methodology: This study examines how customers respond to operational transparency with parcel delivery data from the Cainiao Network, the logistics arm of Alibaba. The sample comprises 4.68 million deliveries. Each delivery has between 4 and 10 track-package activities, which customers can check in real time, and a delivery service score, which customers leave after receiving the package. Instrumental-variable regressions quantify the causal effect of track-package-activity times on delivery scores. Results: The regressions suggest that customers punish early idleness less than late idleness, leaving higher delivery service scores when track-package activities cluster toward the end of the shipping horizon. For example, if a shipment takes 100 hours, then delaying the time of the average action from hour 20 to hour 80 increases the expected delivery score by approximately the same amount as expediting the arrival time from hour 100 to hour 73. Managerial implications: Memory limitations make customers especially sensitive to how service operations end.
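
The instrumental-variable logic can be sketched as a manual two-stage least squares (2SLS) estimate on simulated data. The variable names and every number below are invented and bear no relation to the Cainiao estimates; "timing" stands in for the average track-package-activity time, "score" for the delivery service score, and "z" for an instrument that shifts timing but affects scores only through it:

```python
import numpy as np

# Manual 2SLS on simulated data: a confounder u drives both the endogenous
# regressor (timing) and the outcome (score), so naive OLS is biased, while
# the instrument z recovers the true causal coefficient (2.0 by construction).
rng = np.random.default_rng(0)
n = 10_000
z = rng.normal(size=n)                                 # instrument
u = rng.normal(size=n)                                 # unobserved confounder
timing = 0.8 * z + u + rng.normal(size=n)
score = 2.0 * timing - 3.0 * u + rng.normal(size=n)    # true causal effect = 2.0

# Stage 1: regress the endogenous regressor on the instrument.
Z = np.column_stack([np.ones(n), z])
timing_hat = Z @ np.linalg.lstsq(Z, timing, rcond=None)[0]

# Stage 2: regress the outcome on the fitted values.
X = np.column_stack([np.ones(n), timing_hat])
beta_iv = np.linalg.lstsq(X, score, rcond=None)[0]

# Naive OLS for comparison: biased because u drives both variables.
X_ols = np.column_stack([np.ones(n), timing])
beta_ols = np.linalg.lstsq(X_ols, score, rcond=None)[0]
print(round(beta_iv[1], 2), round(beta_ols[1], 2))
```

On data of this form the second-stage coefficient is close to the true effect of 2.0, while the naive OLS slope is pulled far away by the shared confounder, which is the motivation for instrumenting activity times in the paper's setting.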


2021 ◽  
pp. 1-11
Author(s):  
Velichka Traneva ◽  
Stoyan Tranev

Analysis of variance (ANOVA) is an important method in data analysis, which was developed by Fisher. There are situations when there is impreciseness in the data. In order to analyze such data, the aim of this paper is to introduce for the first time an intuitionistic fuzzy two-factor ANOVA (2-D IFANOVA) without replication, as an extension of the classical ANOVA and the one-way IFANOVA, for cases where the data are intuitionistic fuzzy rather than real numbers. The proposed approach employs the apparatus of intuitionistic fuzzy sets (IFSs) and index matrices (IMs). The paper also analyzes a unique set of data on daily ticket sales for a year in a multiplex of Cinema City Bulgaria, part of Cineworld PLC Group, applying the two-factor ANOVA and the proposed 2-D IFANOVA to study the influence of the “season” and “ticket price” factors. A comparative analysis of the results obtained after applying ANOVA and 2-D IFANOVA to the real data set is also presented.
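
For reference, the classical (crisp) two-factor ANOVA without replication that the 2-D IFANOVA extends can be computed directly from row, column, and grand means. The 4×3 table below is invented (rows as a hypothetical "season" factor, columns as a "ticket price" factor, one observation per cell) and is not the Cinema City data:

```python
import numpy as np
from scipy import stats

# Two-factor ANOVA without replication on an invented 4x3 table:
# rows = "season" levels, columns = "ticket price" levels, one obs. per cell.
y = np.array([[10., 12., 14.],
              [ 9., 11., 15.],
              [20., 22., 26.],
              [19., 24., 27.]])
a, b = y.shape
grand = y.mean()

ss_rows = b * ((y.mean(axis=1) - grand) ** 2).sum()   # factor A (season)
ss_cols = a * ((y.mean(axis=0) - grand) ** 2).sum()   # factor B (ticket price)
ss_tot = ((y - grand) ** 2).sum()
ss_err = ss_tot - ss_rows - ss_cols                   # residual sum of squares

df_rows, df_cols = a - 1, b - 1
df_err = df_rows * df_cols
f_rows = (ss_rows / df_rows) / (ss_err / df_err)
f_cols = (ss_cols / df_cols) / (ss_err / df_err)
p_rows = stats.f.sf(f_rows, df_rows, df_err)          # upper-tail p-values
p_cols = stats.f.sf(f_cols, df_cols, df_err)
print(f_rows, p_rows, f_cols, p_cols)
```

The intuitionistic fuzzy extension replaces each crisp cell value with a membership/non-membership pair and carries the same sum-of-squares decomposition through index-matrix operations.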


2020 ◽  
Vol 72 (1) ◽  
Author(s):  
Ryuho Kataoka

Abstract Statistical distributions are investigated for magnetic storms, sudden commencements (SCs), and substorms to identify the possible amplitude of one-in-100-year and one-in-1000-year events from a limited data set of less than 100 years. The lists of magnetic storms and SCs are provided by Kakioka Magnetic Observatory, while the lists of substorms are obtained from SuperMAG. It is found that the majority of events essentially follow the log-normal distribution, as expected from the random output of a complex system. However, it is uncertain whether large-amplitude events follow the same log-normal distributions; they may instead follow power-law distributions. Based on the statistical distributions, the probable amplitudes of the 100-year (1000-year) events can be estimated for magnetic storms, SCs, and substorms as approximately 750 nT (1100 nT), 230 nT (450 nT), and 5000 nT (6200 nT), respectively. The possible origins of these statistical distributions are also discussed with reference to other space weather phenomena such as solar flares, coronal mass ejections, and solar energetic particles.
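
The return-level logic can be sketched as follows: assume a log-normal amplitude distribution, combine it with an event occurrence rate, and invert for the amplitude exceeded on average once per 100 or 1000 years. The log-normal parameters and the event rate below are invented and do not reproduce the Kakioka or SuperMAG estimates:

```python
import math
from scipy import stats

# Return-level sketch under an assumed log-normal amplitude distribution.
# mu, sigma (on the log scale, amplitudes in nT) and the event rate are
# invented for illustration.
mu, sigma = 4.6, 0.8
events_per_year = 10.0

def return_level(years):
    # An amplitude exceeded once per `years` on average corresponds to an
    # exceedance probability of 1 / (events_per_year * years) per event.
    p_exceed = 1.0 / (events_per_year * years)
    return stats.lognorm.ppf(1.0 - p_exceed, s=sigma, scale=math.exp(mu))

amp_100 = return_level(100.0)
amp_1000 = return_level(1000.0)
print(round(amp_100), round(amp_1000))
```

Because the log-normal tail thins quickly, a power-law tail fitted to the same bulk data would imply substantially larger 1000-year amplitudes, which is why the distinction between the two tail behaviors matters for extreme-event estimates.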


Author(s):  
Can Zhang ◽  
Atalay Atasu ◽  
Karthik Ramachandran

Problem definition: Faced with the challenge of serving beneficiaries with heterogeneous needs and under budget constraints, some nonprofit organizations (NPOs) have adopted an innovative solution: providing partially complete products or services to beneficiaries. We seek to understand what drives an NPO’s choice of partial completion as a design strategy and how it interacts with the level of variety offered in the NPO’s product or service portfolio. Academic/practical relevance: Although partial product or service provision has been observed in nonprofit operations, there is limited understanding of when it is an appropriate strategy—a void that we seek to fill in this paper. Methodology: We synthesize the practices of two NPOs operating in different contexts to develop a stylized analytical model to study an NPO’s product/service completion and variety choices. Results: We identify when and to what extent partial completion is optimal for an NPO. We also characterize a budget allocation structure for an NPO between product/service variety and completion. Our analysis sheds light on how beneficiary characteristics (e.g., heterogeneity of their needs, capability to self-complete) and NPO objectives (e.g., total-benefit maximization versus fairness) affect the optimal levels of variety and completion. Managerial implications: We provide three key observations. (1) Partial completion is not a compromise solution to budget limitations but can be an optimal strategy for NPOs under a wide range of circumstances, even in the presence of ample resources. (2) Partial provision is particularly valuable when beneficiary needs are highly heterogeneous, or beneficiaries have high self-completion capabilities. A higher self-completion capability generally implies a lower optimal completion level; however, it may lead to either a higher or a lower optimal variety level. 
(3) Although providing incomplete products may appear to burden beneficiaries, a lower completion level can be optimal when fairness is factored into an NPO’s objective or when beneficiary capabilities are more heterogeneous.

