Mapping Transmedia Marketing in the Music Industry: A Methodology

2021
Vol 9 (3)
pp. 164-174
Author(s):  
Linda Ryan Bengtsson ◽  
Jessica Edlom

Over the last decade, the music industry has adapted its promotional strategy to take advantage of the fluid, contemporary, platform-based transmedia landscape. For researchers of contemporary culture, the multiplicity of promotional activities creates substantial methodological challenges. In this article, we present and discuss such methodological approaches using two studies of contemporary promotional music campaigns as illustrative cases. Inspired by digital and innovative methods and guided by the Association of Internet Researchers’ (AoIR’s) ethical guidelines, we developed two data collection strategies—reversed engineering and live capturing—and applied two analytical approaches—visual mapping and time-based layering. The first case study traced already staged music marketing campaigns across multiple online media platforms, and the second followed an online promotional campaign in real time for six months. Based on these case studies, we first argue for the importance of grounded manual capturing and coding in data collection, especially when working around data access limitations imposed by platforms. Second, we propose reversed engineering and live capturing as methods of capturing fragmented data in contemporary promotional campaigns. Third, we suggest the visual mapping and time-based layering of data, enabling researchers to oscillate between qualitative and quantitative data. Finally, we argue that researchers must pool their experiences and resources regarding how to transcend platform limitations and question a lack of transparency while respecting ethical norms and guidelines. With these arguments, we assert the researcher’s necessary role in understanding and explaining the complex and hybrid contemporary promotional landscape and provide tools and strategies for further research.

2021
pp. 43-58
Author(s):  
S. S. Yudachev ◽  
P. A. Monakhov ◽  
N. A. Gordienko

This article describes an attempt to create open-source software equivalent to LabVIEW data collection and control software. The proposed solution uses GNU Radio, OpenCV, Scilab, Xcos, and Comedi on Linux. GNU Radio provides a user-friendly graphical interface and serves as a software-defined radio framework, so that experiments are conducted in software rather than with the usual hardware implementation. Blocks for data propagation and for code removal, with and without code tracking, are created using a zero correlation zone (ZCZ) code: a combination of ternary codes taking the values 1, 0, and −1, which is specified in the program. Unlike MATLAB Simulink, GNU Radio is open source and free, and its concepts can be picked up by people without much programming experience by using pre-written blocks. Calculations can be performed using OpenCV, or using Scilab and Xcos. Xcos, an application that is part of the Scilab mathematical modeling system, gives developers the ability to design systems in mechanics, hydraulics and electronics, as well as queuing systems. It is a graphical interactive environment based on block modeling, designed for dynamic and situational modeling of systems, processes and devices, and for testing and analyzing them. The modeled object (a system, device or process) is represented graphically by its functional parametric block diagram, which includes blocks of system elements and the connections between them. The device drivers listed in Comedi are used for real-time data access. We also present an improved PyGTK-based graphical user interface for GNU Radio. An English version of the article is available at: https://panor.ru/articles/industry-40-digital-technology-for-data-collection-and-management/65216.html
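The defining property of a ZCZ code is its periodic correlation: within the zero correlation zone, the correlation between sequences vanishes for every shift. As a minimal generic sketch of that property (not the authors' GNU Radio blocks; the toy ternary sequence below is hypothetical):

```python
# Illustrative sketch: periodic correlation of ternary sequences, the
# property that defines a zero correlation zone (ZCZ) code.

def periodic_correlation(a, b):
    """Periodic cross-correlation R(tau) of two equal-length sequences."""
    n = len(a)
    return [sum(a[i] * b[(i + tau) % n] for i in range(n)) for tau in range(n)]

if __name__ == "__main__":
    # Toy ternary sequence over {1, 0, -1}; real ZCZ sets are constructed
    # so that the correlation vanishes for all shifts inside the zone.
    s = [1, 0, -1, 0]
    print(periodic_correlation(s, s))  # -> [2, 0, -2, 0]
```

Here the toy sequence has zero autocorrelation at shifts 1 and 3; an actual ZCZ set extends this cancellation to every shift within the designated zone, which is what allows code tracking to tolerate imperfect synchronisation.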


Author(s):  
Colleen Loos ◽  
Gita Mishra ◽  
Annette Dobson ◽  
Leigh Tooth

Introduction: Linked health record collections, when combined with large longitudinal surveys, are a rich research resource to inform policy development and clinical practice across multiple sectors. Objectives and Approach: The Australian Longitudinal Study on Women’s Health (ALSWH) is a national study of over 57,000 women in four cohorts. Survey data collection commenced in 1996. Over the past 20 years, ALSWH has also established an extensive data linkage program. The aim of this poster is to provide an overview of ALSWH’s program of regularly updated linked data collections for use in parallel with ongoing surveys, and to demonstrate how data are made widely available to research collaborators. Results: ALSWH surveys collect information on health conditions, ageing, reproductive characteristics, access to health services, lifestyle, and socio-demographic factors. Regularly updated linked national and state administrative data collections add information on health events, health outcomes, diagnoses, treatments, and patterns of service use. ALSWH’s national linked data collections include the Medicare Benefits Schedule, the Pharmaceutical Benefits Scheme, the National Death Index, the Australian Cancer Database, and the National Aged Care Data Collection. State and Territory hospital collections include Admitted Patients, Emergency Department and Perinatal Data. There are also substudies, such as the Mothers and their Children’s Health Study (MatCH), which involves linkage to children’s educational records. ALSWH has an internal Data Access Committee, along with systems and protocols to facilitate collaborative multi-sectoral research using de-identified linked data. Conclusion/Implications: As a large-scale Australian longitudinal multi-jurisdictional data linkage and sharing program, ALSWH is a useful model for anyone planning similar research.


Author(s):  
Ian Thomas ◽  
Peter Mackie

The aim of this paper is to set out the principles of an ideal data system. Good data are crucial to effective policy and practice development in all social policy spheres, and this is a particular challenge in the context of homelessness policy. Policy makers, practitioners and researchers have been highly critical of the current state of homelessness data across the globe, with concerns largely focused on the incompleteness of the data. Most research has narrowly focused on the strengths and weaknesses of different data collection techniques, such as Point-In-Time counts. However, good data do not derive from the data collection method alone; consideration must also be given to the wider data system, including how data are generated, reported, analysed and, crucially, how they are made accessible and to whom. The evidence base for the paper is a desk-based review of 49 data collection systems from 8 countries, including systems in health and social care settings, where data are increasingly being used to drive more effective care. The different systems are synthesised to generate 8 areas of design: data architecture, governance, data quality, ethical and legal, privacy/security, data access and, importantly, purpose. Drawing these elements together, the paper concludes that data collection should adopt a common data standard shared across the sector, enabling inter-organisational information sharing and improving collaboration; that reporting to local and central government must not be one-sided, and data providers should instead receive some tangible benefit for their engagement; that the focus of analysis needs to shift from statistics toward evaluation of the effectiveness of interventions; and that access must be available to a range of sector actors, including service providers and academia. Importantly, the paper also concludes that, in delivering the ideal system, care must be taken not to interrupt the delivery of effective homelessness interventions.


2018
Vol 4
pp. e28045
Author(s):  
Evelyn Underwood ◽  
Katie Taylor ◽  
Graham Tucker

This review identifies successful approaches to collating and using biodiversity data in spatial planning and impact assessment, the barriers to obtaining and using existing data sources, and the key data gaps that hinder effective implementation. The analysis is a contribution to the EU BON project funded by the European Commission FP7 research programme, which aimed to identify and pilot new approaches to overcome gaps in biodiversity data in conservation policy at European and national levels. The consideration of biodiversity in impact assessments and spatial planning requires spatially explicit biodiversity data of various types. Where spatial plans take account of biodiversity, there are opportunities through Strategic Environmental Assessment (SEA) of development plans and Environmental Impact Assessment (EIA) of individual development proposals to ensure that consented activities are consistent with no net loss of biodiversity or even a net gain, and help to maintain or develop coherent ecological networks. However, biodiversity components of SEAs and EIAs have often been found to be of insufficient quality due to the lack of data or the inadequate use of existing data. Key obstacles to providing access to biodiversity data include the need for data standardisation and data quality governance and systems, licensing approaches to increase data access, and lack of resources to target gaps in data coverage and to develop and advertise policy-relevant data products. Existing data platforms differ in the degree to which they successfully provide a service to spatial planners and impact assessment practitioners. Some local governments, for example Somerset County Council in the UK and the Bremen federal state in Germany, have invested in integrated data collection and management systems that now provide intensively used tools for spatial planning and impact assessment informed by local data collection and monitoring. 
The EU BON biodiversity data portal aims to provide a platform that is an access point to datasets relevant to essential biodiversity variables on species, habitats and ecosystems. The EU BON taxonomic backbone provides an integrated search function for species and taxa according to different classifications, and also provides a range of tools for data analysis and decision-support. This will increase the accessibility of the vast range of biodiversity data available in different sources and allow the targeting of future data collection to address current gaps.


2021
Vol 50 (Supplement 1)
Author(s):  
Rajiv Kumar Jain

Abstract Focus of Presentation: The process and results of popularizing occupational health epidemiology amongst occupational health practitioners in India during the COVID-19 pandemic through webinars. Findings: The 25 webinars, of average duration 90 minutes and with content relating to COVID-19 epidemiology in India, generated immense interest amongst occupational health practitioners with reference to innovative methods of data collection, data analysis, dissemination of results, and integration of results into occupational health practice during the pandemic. Conclusions/Implications: Occupational health epidemiology is a neglected discipline in India. The innovative use of webinars amongst occupational health practitioners can popularize its methods, data analysis, and dissemination of results. It is expected that this interest will be sustained in the post-pandemic period and that occupational health epidemiology will gain its rightful place amongst occupational health practitioners in India, leading to research initiatives and the application of results in the practice of occupational health in India.


2021
Author(s):  
Goran Muric ◽  
Yusong Wu ◽  
Emilio Ferrara

BACKGROUND False claims about COVID-19 vaccines can undermine public trust in ongoing vaccination campaigns, thus posing a threat to global public health. Misinformation originating from various sources has been spreading online since the beginning of the COVID-19 pandemic, and anti-vaccine activists have begun to utilize platforms like Twitter to share their views. To properly understand the phenomenon of vaccine hesitancy through the lens of online social media, it is of the utmost importance to gather the relevant data. OBJECTIVE In this paper, we describe a dataset of Twitter posts that exhibit a strong anti-vaccine stance. The dataset is made available to the research community via our AvaxTweets dataset GitHub repository. METHODS We started the ongoing data collection on October 18, 2020, leveraging the Twitter streaming application programming interface (API) to follow a set of specific anti-vaccine-related keywords. Additionally, we collect the historical tweets of the set of accounts that engaged in spreading anti-vaccination narratives at some point during 2020. RESULTS Since the inception of our collection, we have published two collections: a) a streaming keyword-centered collection with more than 1.8 million tweets, and b) a historical account-level collection with more than 135 million tweets. In this paper we present descriptive analyses showing the volume of activity over time, geographical distributions, topics, news sources, and the inferred political leaning of accounts. CONCLUSIONS Vaccine-related misinformation on social media may exacerbate levels of vaccine hesitancy, hampering progress toward vaccine-induced herd immunity and potentially increasing infections related to new COVID-19 variants. For these reasons, understanding vaccine hesitancy through the lens of social media is of paramount importance. Since data access is the first obstacle to attaining that, we publish a dataset that can be used to study anti-vaccine misinformation on social media and to enable a better understanding of vaccine hesitancy.
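The keyword-centered collection described in METHODS amounts to retaining posts that match at least one tracked keyword. A minimal local sketch of that matching step is below; the keyword list is hypothetical and the code stands in for, rather than reproduces, the authors' Twitter streaming API filter:

```python
# Illustrative sketch of keyword-centered filtering, as used conceptually
# in streaming data collection. The keyword list below is hypothetical,
# not the authors' actual tracking list.

KEYWORDS = ["novaccine", "antivaccine", "vaccineinjury"]  # hypothetical examples

def matches_keywords(text, keywords=KEYWORDS):
    """Return True if the post text contains any tracked keyword."""
    lowered = text.lower()
    return any(k in lowered for k in keywords)

def collect(posts):
    """Keep only posts that match at least one tracked keyword."""
    return [p for p in posts if matches_keywords(p)]
```

In the live setting the platform applies the rule server-side and streams only matching posts, which is what makes grounded keyword choice so consequential: anything outside the rule set is never captured.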


Author(s):  
Manuela De Allegri ◽  
Stephan Brenner ◽  
Christabel Kambala ◽  
Jacob Mazalale ◽  
Adamson S Muula ◽  
...  

Abstract The application of mixed methods in Health Policy and Systems Research (HPSR) has expanded remarkably. Nevertheless, a recent review has highlighted how many mixed methods studies do not conceptualize the quantitative and the qualitative component as part of a single research effort, failing to make use of integrated approaches to data collection and analysis. More specifically, current mixed methods studies rarely rely on emergent designs as a specific feature of this methodological approach. In our work, we postulate that explicitly acknowledging the emergent nature of mixed methods research by building on a continuous exchange between quantitative and qualitative strands of data collection and analysis leads to a richer and more informative application in the field of HPSR. We illustrate our point by reflecting on our own experience conducting the mixed methods impact evaluation of a complex health system intervention in Malawi, the Results Based Financing for Maternal and Newborn Health Initiative. We describe how, in the light of a contradiction between the initial set of quantitative and qualitative findings, we modified our design multiple times to include additional sources of quantitative and qualitative data and analytical approaches. To find an answer to the initial riddle, we made use of household survey data, routine health facility data, and multiple rounds of interviews with both healthcare workers and service users. We highlight the contextual factors that made it possible for us to maintain the high level of methodological flexibility that ultimately allowed us to solve the riddle. This process of constant iteration between quantitative and qualitative data allowed us to provide policymakers with a more credible and comprehensive picture of what dynamics the intervention had triggered and with what effects, in a way that we would never have been able to do had we remained faithful to our original mixed methods design.


Author(s):  
T. R. Hird ◽  
E. H. Young ◽  
F. J. Pirie ◽  
J. Riha ◽  
T. M. Esterhuizen ◽  
...  

The Durban Diabetes Study (DDS) is a population-based cross-sectional survey of an urban black population in the eThekwini Municipality (city of Durban) in South Africa. The survey combines health, lifestyle and socioeconomic questionnaire data with standardised biophysical measurements, biomarkers for non-communicable and infectious diseases, and genetic data. Data collection for the study is currently underway and the target sample size is 10 000 participants. The DDS has an established infrastructure for survey fieldwork, data collection and management, sample processing and storage, managed data sharing, and consent for re-approaching participants, which can be utilised for further research studies. As such, the DDS represents a rich platform for investigating the distribution, interrelation and aetiology of chronic diseases and their risk factors, which is critical for developing health care policies for disease management and prevention. For data access enquiries, please contact the African Partnership for Chronic Disease Research (APCDR) at [email protected] or the corresponding author.


2014
Vol 7 (2)
pp. 311-317
Author(s):  
Nigel Williams ◽  
Nicole P. Ferdinand ◽  
Robin Croft

Purpose – While the area of project management maturity (PMM) is attracting increasing research attention, the approaches to measuring maturity fit within existing social science conventions. This paper aims to examine the potential contribution of new data collection and analytical approaches to developing new insights into PMM. Design/methodology/approach – This paper takes the form of a literature review. Findings – The current trends of rapidly growing digital data collection and storage may have the potential to support approaches to PMM assessment that overcome the limitations of existing qualitative and quantitative approaches. Research limitations/implications – Future research in PMM can employ techniques such as social network analysis and text analysis to develop insights based on the flow and content of information in organizations. Practical implications – Adoption of data analytical approaches from big data can enable the creation of new types of holistic and adaptive maturity models. Holistic maturity models provide insights based on both structured and unstructured data within organizations; adaptive maturity models provide rapid insights based on the flow of information within an enterprise. Originality/value – The recent trend towards digitising organizational knowledge and interactions has created the possibility of applying new analytical approaches and techniques to the understanding of PMM in firms. This paper identifies tools and approaches that can be applied to create new types of maturity models based on structured and unstructured data.
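The social network analysis the paper points to can start from nothing more than logged sender–receiver pairs in organizational communications. A minimal sketch of degree centrality over such a message-flow graph follows; the roles and message log are hypothetical, not an example from the paper:

```python
# Illustrative sketch: degree centrality from an organizational message log,
# one route by which information flow can feed a maturity assessment.
from collections import Counter

def degree_centrality(edges):
    """Normalised degree centrality from (sender, receiver) pairs.

    Each message increments the degree of both endpoints, so parallel
    messages contribute weight (a weighted degree, in SNA terms).
    """
    nodes = {n for edge in edges for n in edge}
    deg = Counter()
    for sender, receiver in edges:
        deg[sender] += 1
        deg[receiver] += 1
    denom = max(len(nodes) - 1, 1)  # normalise by the maximum possible degree
    return {n: deg[n] / denom for n in nodes}

# Hypothetical message log between project roles
log = [("pm", "dev"), ("pm", "qa"), ("dev", "qa"), ("pm", "ops")]
```

On this toy log, `degree_centrality(log)` ranks "pm" highest, the kind of structural signal an adaptive maturity model might track continuously rather than through periodic survey instruments.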


2020
Author(s):  
Mark Rubin

Preregistration entails researchers registering their planned research hypotheses, methods, and analyses in a time-stamped document before they undertake their data collection and analyses. This document is then made available with the published research report to allow readers to identify discrepancies between what the researchers originally planned to do and what they actually ended up doing. This historical transparency is supposed to facilitate judgments about the credibility of the research findings. The present article provides a critical review of 17 of the reasons behind this argument. The article covers issues such as HARKing, multiple testing, p-hacking, forking paths, optional stopping, researchers’ biases, selective reporting, test severity, publication bias, and replication rates. It is concluded that preregistration’s historical transparency does not facilitate judgments about the credibility of research findings when researchers provide contemporary transparency in the form of (a) clear rationales for current hypotheses and analytical approaches, (b) public access to research data, materials, and code, and (c) demonstrations of the robustness of research conclusions to alternative interpretations and analytical approaches.

