A decade of Semantic Web research through the lenses of a mixed methods approach

Semantic Web ◽  
2020 ◽  
Vol 11 (6) ◽  
pp. 979-1005
Author(s):  
Sabrina Kirrane ◽  
Marta Sabou ◽  
Javier D. Fernández ◽  
Francesco Osborne ◽  
Cécile Robin ◽  
...  

The identification of research topics and trends is an important scientometric activity, as it can help guide the direction of future research. In the Semantic Web area, topic and trend detection was initially performed primarily through qualitative, top-down approaches that rely on expert knowledge. More recently, data-driven, bottom-up approaches have been proposed that offer a quantitative analysis of the evolution of a research domain. In this paper, we aim to provide a broader and more complete picture of Semantic Web topics and trends by adopting a mixed methods approach, which allows for the combined use of both qualitative and quantitative analyses. Concretely, we build on a qualitative analysis of the main seminal papers, which adopts a top-down approach, and on quantitative results derived with three bottom-up, data-driven tools (Rexplore, Saffron, PoolParty), applied to a corpus of Semantic Web papers published between 2006 and 2015. In this process, we use the latter both to “fact-check” the former and to derive key findings about the strengths and weaknesses of top-down and bottom-up approaches to research topic identification. Although we provide a detailed study of the past decade of Semantic Web research, the findings and the methodology are relevant not only for our community but also for research fields beyond the Semantic Web.

2018 ◽  
Vol 22 (8) ◽  
pp. 4425-4447 ◽  
Author(s):  
Manuel Antonetti ◽  
Massimiliano Zappa

Abstract. Both modellers and experimentalists agree that using expert knowledge can improve the realism of conceptual hydrological models. However, their use of expert knowledge differs for each step in the modelling procedure, which involves hydrologically mapping the dominant runoff processes (DRPs) occurring on a given catchment, parameterising these processes within a model, and allocating its parameters. Modellers generally use very simplified mapping approaches, applying their knowledge in constraining the model by defining parameter and process relational rules. In contrast, experimentalists usually prefer to invest all their detailed and qualitative knowledge about processes in obtaining as realistic a spatial distribution of DRPs as possible, and in defining narrow value ranges for each model parameter. Runoff simulations are affected by equifinality and numerous other uncertainty sources, which challenge the assumption that the more expert knowledge is used, the better the results obtained will be. To test the extent to which expert knowledge can improve simulation results under uncertainty, we therefore applied a total of 60 modelling chain combinations forced by five rainfall datasets of increasing accuracy to four nested catchments in the Swiss Pre-Alps. These datasets include hourly precipitation data from automatic stations interpolated with Thiessen polygons and with the inverse distance weighting (IDW) method, as well as different spatial aggregations of Combiprecip, a combination of ground measurements and quantitative radar estimates of precipitation. To map the spatial distribution of the DRPs, three mapping approaches with different levels of involvement of expert knowledge were used to derive so-called process maps.
Finally, both a typical modellers' top-down set-up relying on parameter and process constraints and an experimentalists' set-up based on bottom-up thinking and on field expertise were implemented using a newly developed process-based runoff generation module (RGM-PRO). To quantify the uncertainty originating from forcing data, process maps, model parameterisation, and parameter allocation strategy, an analysis of variance (ANOVA) was performed. The simulation results showed that (i) the modelling chains based on the most complex process maps performed slightly better than those based on less expert knowledge; (ii) the bottom-up set-up performed better than the top-down one when simulating short-duration events, but similarly to the top-down set-up when simulating long-duration events; (iii) the differences in performance arising from the different forcing data were due to compensation effects; and (iv) the bottom-up set-up can help identify uncertainty sources, but is prone to overconfidence problems, whereas the top-down set-up seems to accommodate uncertainties in the input data best. Overall, modellers' and experimentalists' concepts of model realism differ. This means that the level of detail a model should have in order to accurately reproduce the expected DRPs must be agreed in advance.


2017 ◽  
Author(s):  
Marielle Saunois ◽  
Philippe Bousquet ◽  
Benjamin Poulter ◽  
Anna Peregon ◽  
Philippe Ciais ◽  
...  

Abstract. Following the recent Global Carbon Project (GCP) synthesis of the decadal methane (CH4) budget over 2000–2012 (Saunois et al., 2016), we analyse here the same dataset with a focus on quasi-decadal and inter-annual variability in CH4 emissions. The GCP dataset integrates results from top-down studies (exploiting atmospheric observations within an atmospheric inverse-modelling framework) and bottom-up models, inventories, and data-driven approaches (including process-based models for estimating land surface emissions and atmospheric chemistry, inventories of anthropogenic emissions, and data-driven extrapolations). The annual global methane emissions from top-down studies, which by construction match the observed methane growth rate within their uncertainties, all show an increase in total methane emissions over the period 2000–2012, but this increase is not linear over the 13 years. Despite differences between individual studies, the mean emission anomaly of the top-down ensemble shows no significant trend in total methane emissions over the period 2000–2006, during the plateau of atmospheric methane mole fractions, and also over the period 2008–2012, during the renewed atmospheric methane increase. However, the top-down ensemble mean produces an emission shift between 2006 and 2008, leading to 22 [16–32] Tg CH4 yr−1 higher methane emissions over the period 2008–2012 compared to 2002–2006. This emission increase mostly originated from the tropics, with a smaller contribution from mid-latitudes and no significant change from boreal regions. The regional contributions remain uncertain in top-down studies. Tropical South America and South and East Asia seem to contribute the most to the emission increase in the tropics. However, these two regions have only limited atmospheric measurements and therefore remain poorly constrained.
The sectorial partitioning of this emission increase between the periods 2002–2006 and 2008–2012 differs from one atmospheric inversion study to another. However, all top-down studies suggest smaller changes in fossil fuel emissions (from oil, gas, and coal industries) compared to the mean of the bottom-up inventories included in this study. This difference is partly driven by a smaller emission change in China from the top-down studies compared to the estimate in the EDGARv4.2 inventory, which should be revised downward in the near future. Though the sectorial partitioning in six of the eight individual top-down studies is not consistent with the observed change in atmospheric 13CH4, the partitioning derived from the ensemble mean is consistent with this isotopic constraint. At the global scale, the top-down ensemble mean suggests that the dominant contribution to the resumed atmospheric CH4 growth after 2006 comes from microbial sources (more from agriculture and waste sectors than from natural wetlands), with an uncertain but smaller contribution from fossil CH4 emissions. In addition, a decrease in biomass burning emissions (in agreement with the biomass burning emission databases) makes the balance of sources consistent with atmospheric 13CH4 observations. The methane loss (in particular through OH oxidation) has not been investigated in detail in this study, although it may play a significant role in the recent atmospheric methane changes.


2020 ◽  
Author(s):  
Bettina Horlach ◽  
Andreas Drechsler

Abstract In this paper, we outline inherent tensions in Agile environments, which lead to paradoxes that Agile teams and organizations have to navigate. By taking a critical perspective on Agile frameworks and on Agile organizational settings the authors are familiar with, we contribute an initial problematization of paradoxes for the Agile context. For instance, Agile teams face the continuous paradox of ‘doing Agile’ (= following an established Agile way of working) versus ‘being Agile’ (= changing an established Agile way of working). One of the paradoxes that organizations face is whether to start their Agile journey with a directed top-down (and therefore quite un-Agile) ‘big bang’ or to allow an emergent bottom-up transformation (which may be more in line with the Agile spirit but perhaps unable to overcome organizational inertia). Future research can draw on our initial problematization as a foundation for subsequent in-depth investigations of these Agile paradoxes. Agile teams and organizations can likewise draw on it to inform their learning and change processes.


Author(s):  
Jeremy Millard

In terms of public services, governments do not yet know how to treat users as different and unique individuals. At worst, users are still treated as an undifferentiated mass; at best, as segments. However, the benefits of universal personalisation in public services are within reach technologically through e-government developments. Universal personalisation will involve balancing top-down, government- and data-driven services on the one hand with bottom-up, self-directed and user-driven services on the other. There are at least three main technological, organisational and societal drivers. First, top-down, data-driven, often automatic services based on the huge data resources available in the cloud and the technologies enabling their systematic exploitation by governments. Second, increasing opportunities for users themselves, or their intermediaries, to select or create their own service environments bottom-up through ‘user-driven’ services, drawing directly on the data cloud. Third, a move to ‘everyday’, location-driven e-government based largely on mobile smartphones using GPS and local data clouds, where public services are offered depending on where people are as well as who they are and what they are doing. This paper draws on perspectives from practitioners and researchers and describes current trends based on secondary research and a literature review.


2015 ◽  
Vol 26 (7) ◽  
pp. 1031-1052 ◽  
Author(s):  
TickFei Chay ◽  
YuChun Xu ◽  
Ashutosh Tiwari ◽  
FooSoon Chay

Purpose – Failure to engage shop floor employees (including supervisory staff) in lean, a lack of supervisory skills in leading workers, and a lack of lean technical know-how among shop floor employees are some of the major obstacles in lean transformation. One reason for inefficient lean transformation is the shortage of frameworks or plans for implementing lean. The purpose of this paper is to investigate the shortfalls in current lean implementation frameworks. Design/methodology/approach – The frameworks were analysed according to the following criteria: first, “What” is the approach of lean implementation, i.e. top-down or bottom-up; second, “How” to implement lean (description of steps or sequences of lean implementation along the lean journey); third, “Why” – the reason for adopting the proposed lean tools, techniques or practices (thereafter TTPs) in each phase of lean implementation; and fourth, “Who” are the targeted internal stakeholders to use or apply the lean TTPs proposed in the frameworks. Findings – Most of the currently available lean frameworks favour a top-down approach over a bottom-up one. Improvement initiatives from shop floor employees were often overlooked by researchers. In proposing their frameworks, most researchers have neglected the importance of the “Why” aspect, adopting TTPs or the framework itself without giving the “reason” for each element of lean implementation. Besides the aspects of “What” and “How”, the “Why” aspect is important in building capability among shop floor employees to carry out improvement, problem-solving or waste elimination activities. The aspect of “Who should carry out which lean TTP” was given little emphasis by most lean researchers. In addition, the current frameworks tend towards a “one-best-way” approach with little sense of contingency, which is one of the common criticisms of the Lean Production System.
Originality/value – This paper provides a critical view on the shortfalls of current lean implementation frameworks, and proposes an insight of new criteria for future research in analysing and proposing new lean implementation framework towards lean transformation.


2021 ◽  
Vol 13 (2) ◽  
pp. 547
Author(s):  
Qin Li ◽  
Hongmin Chen

Governments around the world are actively exploring strategies to reduce carbon emissions and to mitigate and adapt to the impacts of climate change. In addition to technological progress, promoting a transformation of residents’ behaviour towards a low-carbon mode is also part of the solution. Many people are concerned about how to reduce carbon emissions while ensuring human well-being. Starting from a comparative analysis of two main theories of human well-being, this paper reviews existing well-being measurement methods from the “top-down” and “bottom-up” perspectives and then reviews research on the relationship between human well-being and energy-related carbon emissions. While “top-down” research is conducive to the layout of macro policies, “bottom-up” research can better help promote society’s transformation to a low-carbon life by estimating the energy consumption and carbon emissions embodied in human needs. Current research discusses human well-being, human needs, energy use and carbon emissions separately, but these strands are not systematically integrated. This paper therefore proposes a framework combining these aspects to analyse the relationship between human well-being and carbon emissions. In addition, this paper suggests future research directions.


2020 ◽  
Vol 6 (1) ◽  
pp. 209-223
Author(s):  
Elke Huwiler
Keyword(s):  
Top Down ◽  

From a user’s perspective, finding digitized archival holdings online is increasingly taken for granted. Yet more and more archives are digitizing their holdings not only to meet these user needs, but also to increase the archive’s outward visibility, for conservation reasons, and to make the digitized material available for research or museum purposes. To carry out such digitization projects, archives formulate digitization strategies that govern how the archive handles the digitization process with respect to goals, resources, legal foundations, selection criteria, standards, cataloguing, access, and long-term preservation. This study examines the digitization strategies of two memory institutions, the Deutsches Literaturarchiv (Marbach) and the Schweizerisches Literaturarchiv (Bern), which approach digitization differently: the Deutsches Literaturarchiv has so far digitized and published hardly any visible material, but has now established a digitization centre for this purpose and intends to coordinate the corresponding projects “top-down”; the Schweizerisches Literaturarchiv, by contrast, has already carried out several digitization projects, albeit without a digitization strategy of its own; for its framework conditions it follows the digitization guidelines of the Schweizerische Nationalbibliothek and runs its own projects in a “bottom-up” manner, with rules that vary from project to project. The comparison of the two strategies shows, among other things, that although both literary archives position themselves very progressively in the digital cataloguing and provision of metadata, they proceed very cautiously in the actual digitization of archival material.
The study analyses the reasons for this and formulates suggestions for improvement. A clear conclusion of the work is that the digitization of archival material and the provision of the digital copies in the Semantic Web offer enormous opportunities for literary archives, and that this process should therefore be advanced and optimized.


2012 ◽  
Vol 9 (3) ◽  
pp. 983-1017
Author(s):  
Daniel Rodríguez-Cerezo ◽  
Antonio Sarasa-Cabezuelo ◽  
José-Luis Sierra

This article describes structure-preserving coding patterns to code arbitrary non-circular attribute grammars as syntax-directed translation schemes for bottom-up and top-down parser generation tools. In these translation schemes, semantic actions are written in terms of a small repertory of primitive attribution operations. By providing alternative implementations of these attribution operations, it is possible to plug in different semantic evaluation strategies seamlessly (e.g., a demand-driven strategy or a data-driven one). The patterns make possible the direct implementation of attribute grammar-based specifications with widely used translation scheme-driven tools for the development of both bottom-up (e.g., YACC, BISON, CUP) and top-down (e.g., JavaCC, ANTLR) language translators. As a consequence, initial translation schemes can be successively refined to yield final efficient implementations. Since these implementations still preserve the ability to be extended with new features described at the attribute grammar level, the advantages from the point of view of development and maintenance become apparent.
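The core idea of pluggable evaluation strategies can be illustrated with a minimal sketch (not the article's actual patterns; all names here are hypothetical): semantic actions call only two primitive attribution operations, define and use, so swapping the object that implements those primitives switches between demand-driven and data-driven evaluation without touching the translation scheme itself.

```python
# Sketch: semantic actions use only the primitives define/use, so the
# evaluation strategy is pluggable via the object implementing them.

class DemandDriven:
    """Store attribute definitions as thunks; evaluate only when used."""
    def __init__(self):
        self.slots = {}

    def define(self, name, thunk):
        self.slots[name] = thunk            # defer evaluation

    def use(self, name):
        return self.slots[name]()           # evaluate on demand


class DataDriven:
    """Evaluate each attribute eagerly as soon as it is defined."""
    def __init__(self):
        self.slots = {}

    def define(self, name, thunk):
        self.slots[name] = thunk()          # evaluate immediately

    def use(self, name):
        return self.slots[name]


def translate_sum(tokens, attrs):
    """Translation scheme for the toy grammar E -> n ('+' n)*.
    Each semantic action only calls attrs.define / attrs.use, so the
    scheme is independent of the evaluation strategy plugged in."""
    attrs.define("v0", lambda t=tokens[0]: int(t))   # value of first operand
    k = 0
    for i in range(1, len(tokens), 2):
        assert tokens[i] == "+"
        k += 1
        # Synthesized attribute: v_k = v_{k-1} + value of the next operand.
        # Default arguments bind the loop variables at definition time.
        attrs.define(f"v{k}",
                     lambda p=k - 1, t=tokens[i + 1]:
                         attrs.use(f"v{p}") + int(t))
    return attrs.use(f"v{k}")
```

Both `translate_sum(["1", "+", "2", "+", "3"], DemandDriven())` and the same call with `DataDriven()` yield 6; only the moment at which each attribute equation is evaluated differs, which is precisely the separation the article exploits.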

