The Topologies of Data Practices: A Methodological Introduction

2021 ◽  
Vol 9 (2) ◽  
pp. 67 ◽  
Author(s):  
Mathias Decuypere

This paper offers a methodological framework for critically researching data practices in education. Data practices are understood here in the generic sense of the word, i.e. as the actions, performances, and resulting consequences of introducing data-producing technologies into everyday educational situations. The paper first distinguishes between data infrastructures, datafication, and data points as three distinct yet interrelated phenomena. In order to investigate their concrete doings and specificities, the paper proposes a topological methodology that allows disentangling the relational nature and interwovenness of data practices. Based on this methodology, the paper proceeds by outlining a methodical toolbox that can be employed in studying data practices. Starting from nascent work on digital education platforms as a worked example, the toolbox allows researchers to investigate data practices along four distinct topological dimensions: the Interface of a data practice, its actual Usage, its concrete Design, and its Ecological embeddedness (IUDE).

2019 ◽  
Vol 2019 ◽  
Author(s):  
Simon Taylor ◽  
Kevin Witzenberger

AI methods and ubiquitous data sensors have enabled a new algorithmic quantification of affect, with the possibility to detect or verify users’ identities, characteristics, emotional states, and physical traits. By scrutinizing how transient datasets are produced by user-applied pressure on touch-screens (via fingertip commands), this paper showcases how sensory technology creeps into users’ everyday life, with potential implementations connected to a series of emerging data issues engineered by a black-box design: one which obfuscates data production and precludes user consent under the guise of “non-intrusive” features. The paper thereby explores the limits of user-based interrogation of black boxes by researching tactile modes of operation, as a subset of behavioural biometrics, and sensors that register force in touch analysis and haptic technologies. Presenting a citation analysis of biometric techniques around the proposed usage of pressure, the authors offer a case-study examination of zinc-based force-sensing materials that are cost-effective and scalable to ubiquitous computing, and a prototype developed using ‘each pixel as a sensor’. By combining these approaches, this paper argues that such developments constitute a phenomenological shift away from users’ perception to data infrastructures working as assemblages of hidden technical sensations, and that there is a need to expose these complex networks to afford some grasp, if not direct agency, over their micro-temporal operation. This work aims not simply to theorise, but to help reveal ways users may revise, embrace, resist, subvert, or even live with data practices that operate unlike conventional data harvesting techniques.


2021 ◽  
Author(s):  
Simon Jirka ◽  
Benedikt Gräler ◽  
Matthes Rieke ◽  
Christian Autermann

<p>For many scientific domains such as hydrology, ocean sciences, geophysics and the social sciences, geospatial observations are an important source of information. Scientists conduct extensive measurement campaigns or operate comprehensive monitoring networks to collect data that helps to understand and to model current and past states of complex environments. The variety of data underpinning research stretches from in-situ observations to remote sensing data (e.g., from the European Copernicus programme) and contributes to the rapidly increasing volume of geospatial data.</p><p>However, with the growing amount of available data, new challenges arise. Within our contribution, we will focus on two specific aspects. On the one hand, we will discuss the specific challenges that result from the large volumes of remote sensing data that have become available for answering scientific questions. For this purpose, we will share practical experiences with the use of cloud infrastructures such as the German platform CODE-DE and will discuss concepts that enable data processing close to the data stores. On the other hand, we will look into the question of interoperability in order to facilitate the integration and collaborative use of data from different sources. For this aspect, we will give special consideration to the currently emerging new generation of standards of the Open Geospatial Consortium (OGC) and will discuss how specifications such as the OGC API for Processes can help to provide flexible processing capabilities directly within cloud-based research data infrastructures.</p>
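The "processing close to the data" idea described above can be illustrated with an OGC API Processes execution request. The sketch below only builds the request URL and JSON body; the base URL, process identifier, and input names are hypothetical placeholders, not those of any actual CODE-DE deployment.

```python
import json

# Hypothetical endpoint and process ID; real deployments expose their own.
BASE_URL = "https://example.org/ogcapi"
PROCESS_ID = "ndvi-computation"  # illustrative process identifier

def build_execution_request(collection: str, bbox: list, datetime_range: str) -> dict:
    """Build an execute request body in the OGC API Processes JSON encoding:
    inputs are passed as a simple key/value mapping."""
    return {
        "inputs": {
            "collection": collection,
            "bbox": bbox,                # [min_lon, min_lat, max_lon, max_lat]
            "datetime": datetime_range,  # ISO 8601 interval
        },
        # "document" asks for results wrapped in a JSON results document.
        "response": "document",
    }

body = build_execution_request(
    collection="sentinel-2-l2a",
    bbox=[7.0, 51.9, 7.7, 52.1],
    datetime_range="2021-06-01T00:00:00Z/2021-06-30T23:59:59Z",
)

# Per the standard, the request is POSTed to /processes/{processId}/execution.
execution_url = f"{BASE_URL}/processes/{PROCESS_ID}/execution"
print(execution_url)
print(json.dumps(body, indent=2))
```

Because the request is just a small JSON document sent to the server, the (potentially very large) remote sensing data never leaves the cloud infrastructure; only the processing result travels back to the researcher.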


2013 ◽  
Vol 12 (0) ◽  
pp. 71-90 ◽  
Author(s):  
Costantino Thanos

Author(s):  
Martin Thomas Horsch ◽  
Silvia Chiacchiera ◽  
Welchy Leite Cavalcanti ◽  
Björn Schembera

Abstract
This chapter introduces metadata models as a semantic technology for knowledge representation, used to describe selected aspects of a research asset. The process of building a hierarchical metadata model is reenacted in this chapter and illustrated by the example of EngMeta. Moreover, an overview of data infrastructures is given, their general architecture and functions are discussed, and multiple examples of data infrastructures in materials modelling are presented.
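The hierarchical structure of such a metadata model can be sketched as nested records, where descriptive fields sit at the top level and provenance details are nested below them. The field names below are illustrative assumptions for a simulation dataset, not the actual EngMeta schema.

```python
from dataclasses import dataclass, field, asdict
from typing import List

@dataclass
class ProcessingStep:
    """One provenance entry: which software produced the data, and how."""
    software: str
    version: str
    parameters: dict

@dataclass
class MetadataRecord:
    """Top level of a hierarchical metadata record for a research asset."""
    title: str
    creators: List[str]
    subject_area: str
    provenance: List[ProcessingStep] = field(default_factory=list)

record = MetadataRecord(
    title="Shear-flow simulation run 42",
    creators=["J. Doe"],
    subject_area="materials modelling",
    provenance=[ProcessingStep("lammps", "23Jun2022", {"timestep": 0.005})],
)

# asdict() serialises the nested dataclasses, preserving the hierarchy.
print(asdict(record))
```

A data infrastructure would typically serialise such records to XML or JSON against a published schema, so that harvesters and search services can index them uniformly.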


2015 ◽  
Vol 10 (1) ◽  
pp. 210-229 ◽  
Author(s):  
Andrew Cox ◽  
Laurian Williamson

The Data Asset Framework (DAF) methodology has evolved to provide a model for institutional surveys of researchers’ data practices and attitudes. At least 13 such studies have been published in the UK and internationally. The aim of this paper is to analyse the results from the 2014 DAF survey at the University of Sheffield and to reflect on its comparability with previous published studies. 432 researchers responded to the survey, representing 8% of the target population. Researchers at Sheffield collect multiple types of data, and a significant number have accumulated very large amounts of data. Data were backed up in a variety of ways. Only 25% of respondents had a data management plan (DMP). Eighteen months after its creation, most respondents were still not aware of the local research data management (RDM) policy. Fortunately, most respondents were favourable to the idea of training in many aspects of RDM. Researchers generally had no experience of sharing data, but attitudes were positive, both in terms of a significant minority seeing a lack of data sharing as an obstacle to the progress of research, and in terms of a desire to reuse the data of others and share their own with a broad group of researchers. Comparison of the Sheffield results with those of other institutions is difficult, particularly because of the divergence of questions asked in the different studies. Nevertheless, in terms of data practices and identifying training priorities, there are common patterns. This institutional survey showed less positive attitudes to data sharing than the results of cross-institutional studies, such as that conducted by Tenopir et al. (2011).


2021 ◽  
Author(s):  
Núria Queralt-Rosinach ◽  
Rajaram Kaliyaperumal ◽  
César H. Bernabé ◽  
Qinqin Long ◽  
Simone A. Joosten ◽  
...  

Abstract
Background
The COVID-19 pandemic has challenged healthcare systems and research worldwide. Data is collected all over the world and needs to be integrated and made available to other researchers quickly. However, the various heterogeneous information systems used in hospitals can result in fragmentation of health data over multiple data ‘silos’ that are not interoperable for analysis. Consequently, clinical observations in hospitalised patients cannot be reused efficiently and in a timely manner. There is a need to adapt research data management in hospitals to make COVID-19 observational patient data machine actionable, i.e. more Findable, Accessible, Interoperable and Reusable (FAIR) for humans and machines. We therefore applied the FAIR principles in the hospital to make patient data more FAIR.
Results
In this paper, we present our FAIR approach to transform COVID-19 observational patient data collected in the hospital into machine-actionable digital objects to answer medical doctors’ research questions. With this objective, we conducted a coordinated FAIRification among stakeholders based on ontological models for data and metadata, and a FAIR-based architecture that complements the existing data management. We applied FAIR Data Points for metadata exposure, turning investigational parameters into a FAIR dataset. We demonstrated that this dataset is machine actionable by means of three different computational activities: federated querying of patient data alongside open existing knowledge sources across the world through the Semantic Web, implementing Web APIs for data query interoperability, and building applications on top of these FAIR patient data for FAIR data analytics in the hospital.
Conclusions
Our work demonstrates that a FAIR research data management plan based on ontological models for data and metadata, Open Science, Semantic Web technologies, and FAIR Data Points provides a data infrastructure in the hospital for machine-actionable FAIR digital objects. This FAIR data is ready to be reused for federated analysis, linkable to other FAIR data such as Linked Open Data, and reusable for developing software applications on top of it for hypothesis generation and knowledge discovery.
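The federated-query activity described above can be sketched as a SPARQL query in which a local graph pattern over hospital data is joined with a remote public endpoint via a SERVICE clause. The local endpoint, vocabulary IRI, and the use of SNOMED CT codes are illustrative assumptions, not the paper's actual schema; only the query text is constructed here, nothing is executed.

```python
# Hypothetical local SPARQL endpoint exposing FAIRified patient data.
LOCAL_ENDPOINT = "https://hospital.example.org/sparql"

# Federated query: local symptom codes are enriched with labels fetched
# from Wikidata's public endpoint inside the SERVICE block.
FEDERATED_QUERY = """
PREFIX ex:   <https://hospital.example.org/vocab/>
PREFIX wdt:  <http://www.wikidata.org/prop/direct/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?patient ?symptom ?symptomLabel WHERE {
  # Local pattern: patients annotated with SNOMED CT symptom codes.
  ?patient ex:hasSymptom ?symptom .

  # Remote pattern, evaluated by the public Wikidata endpoint.
  SERVICE <https://query.wikidata.org/sparql> {
    ?item wdt:P5806 ?symptom .          # P5806: SNOMED CT identifier
    ?item rdfs:label ?symptomLabel .
    FILTER (lang(?symptomLabel) = "en")
  }
}
"""

# A client such as SPARQLWrapper would POST this query to LOCAL_ENDPOINT;
# the local store then dispatches the SERVICE block to the remote endpoint.
print(FEDERATED_QUERY)
```

The key point for FAIR analytics is that patient data never leaves the hospital endpoint: the join with open knowledge sources happens at query time, over the Semantic Web.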

