publishing workflow
Recently Published Documents


TOTAL DOCUMENTS: 22 (five years: 2)
H-INDEX: 3 (five years: 0)

Author(s):  
Alaric Carl Hamacher

Online teaching in 2020 forced many educators to adopt new teaching methods. Instead of working in the classroom with handouts and oral presentations, online teaching requires new teaching materials and documents. These are usually created in different formats with different software and are often redundant. The present paper proposes research on workflows and practical applications to streamline the publishing process, proposing authoring in a metadata format and publication of convergent teaching material from a single source document. The purpose of this research is to improve the quality of education by reducing redundant workflows in the creation of teaching materials.
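The abstract does not specify the authoring format or toolchain; as a purely illustrative sketch of the single-source idea (all names and structure here are hypothetical, not the paper's actual proposal), one structured lesson description can be rendered into two convergent outputs, such as an HTML slide and a plain-text handout:

```python
# Illustrative single-source publishing sketch: one structured source,
# multiple output formats. Structure and names are hypothetical.

LESSON = {
    "title": "Intro to Stereoscopy",
    "points": ["Binocular vision", "Parallax", "Screen disparity"],
}

def to_html_slides(doc):
    """Render the same source document as a minimal HTML slide."""
    items = "".join(f"<li>{p}</li>" for p in doc["points"])
    return f"<section><h1>{doc['title']}</h1><ul>{items}</ul></section>"

def to_text_handout(doc):
    """Render the same source document as a plain-text handout."""
    lines = [doc["title"], "=" * len(doc["title"])]
    lines += [f"- {p}" for p in doc["points"]]
    return "\n".join(lines)
```

Because both renderers consume the same source, a correction made once propagates to every teaching artefact, which is the redundancy reduction the paper argues for.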


Author(s):  
Darrell W. Gunter

This chapter will explore how blockchain and AI technology can address current problems in the publishing workflow, including author manuscript submission systems, the peer review process, editing, production, and dissemination. Further, after an article has been published, blockchain and AI technologies will allow all of the stakeholders in the value chain to benefit from a more efficient and effective upstream and downstream publishing process. The chapter will also explore rights and royalties, anti-piracy and ebooks, and how blockchain and AI will create new research and business opportunities.


Ravnetrykk ◽  
2020 ◽  
Author(s):  
Obiajulu Odu ◽  
Aysa Ekanger

This is a story about how an Open Journal Systems-based library publishing service tried (and failed) to implement XML in one of its publications. We ran a small project to look at how the journals we support could develop a JATS XML-based publishing workflow using existing open-source software tools.
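For readers unfamiliar with JATS, the target of such a workflow is a structured XML article. As an illustrative sketch (not part of the project described above), a skeletal JATS front section can be assembled with Python's standard library:

```python
# Build a skeletal JATS-style <article> front section with the standard
# library. Illustrative only; a real JATS document carries many more
# required elements and a DTD/schema declaration.
import xml.etree.ElementTree as ET

def minimal_jats(title, surname, given):
    """Return a minimal JATS-like article as an XML string."""
    article = ET.Element("article", {"article-type": "research-article"})
    front = ET.SubElement(article, "front")
    meta = ET.SubElement(front, "article-meta")
    title_group = ET.SubElement(meta, "title-group")
    ET.SubElement(title_group, "article-title").text = title
    contribs = ET.SubElement(meta, "contrib-group")
    author = ET.SubElement(contribs, "contrib", {"contrib-type": "author"})
    name = ET.SubElement(author, "name")
    ET.SubElement(name, "surname").text = surname
    ET.SubElement(name, "given-names").text = given
    return ET.tostring(article, encoding="unicode")
```

The appeal of this structure for a library publisher is that the same XML can feed PDF rendering, HTML display, and indexing services without re-keying metadata.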


2020 ◽  
Vol 14 (1) ◽  
pp. 292-302
Author(s):  
Christian Thomas Jacobs

The introduction of open-access data policies by research councils, the enforcement of best practices, and the deployment of persistent online repositories have enabled datasets that support results in scientific papers to become more widely accessible. Unfortunately, despite this advancement in the curation/publishing workflow, the data-driven figures within a paper often remain difficult to reproduce. Plotting or analysis scripts rarely accompany the manuscript or any associated software release, and even if they do, it may be unclear exactly which version was used. Furthermore, the precise commands and parameters used to execute the scripts are often not included in a README file or in the paper itself. This paper introduces a new open-source digital curation tool, Pynea, for improving the reproducibility of LaTeX documents. Each figure within a document is enriched by automatically embedding the plotting script and data files required to generate it, such that it can be regenerated by readers of the paper in the future. The command used to execute the plotting script is also added to the figure's metadata, along with details of the specific version of the script used (if the script is tracked with the Git version control system). If the document is recompiled with a figure that has since changed, or has had its plotting script or data files modified, the figure is regenerated so that the author can be confident that the latest version of the figure and its dependencies are included.
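Pynea's own implementation is not shown in the abstract; a minimal sketch of the underlying provenance idea (recording the plotting command, content hashes of the script and data files, and the Git commit when available) might look like the following. The function names are hypothetical, not Pynea's API:

```python
# Sketch of figure-provenance capture: enough metadata to detect a stale
# figure and to tell a reader exactly how it was produced.
import hashlib
import subprocess
from pathlib import Path

def file_sha256(path):
    """Content hash, so a changed script or dataset is detectable."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def figure_provenance(script, data_files, command):
    """Collect the metadata a reproducible figure would need embedded."""
    record = {
        "command": command,
        "script": {"path": str(script), "sha256": file_sha256(script)},
        "data": {str(p): file_sha256(p) for p in data_files},
    }
    try:
        # If the script lives in a Git repository, record the exact commit.
        record["git_commit"] = subprocess.check_output(
            ["git", "rev-parse", "HEAD"],
            cwd=Path(script).parent, text=True,
            stderr=subprocess.DEVNULL).strip()
    except (OSError, subprocess.CalledProcessError):
        record["git_commit"] = None
    return record
```

Comparing stored hashes against the current files is what lets a tool like this decide, at recompile time, whether a figure must be regenerated.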


Author(s):  
Laurence Bénichou ◽  
Isabelle Gerard ◽  
Chloé Chester ◽  
Donat Agosti

The European Journal of Taxonomy (EJT) was initiated by a consortium of European natural history publishers to take advantage of the shift from paper to electronic-only publishing (Benichou et al. 2011). While publishing in PDF format was originally considered the state of the art, it has recently become obvious that complementary dissemination channels help to disseminate taxonomic data - one of the pillars of natural history institutions' research - more widely and efficiently (Côtez et al. 2018). The adoption of semantic markup and the assignment of persistent identifiers to content allow more comprehensive citation of an article, including elements within it, such as images, taxonomic treatments, and material citations. They also allow more in-depth analyses and visualization of the contribution of collections, authors, or specimens to taxonomic output, and enable third parties, such as the Global Biodiversity Information Facility, to reuse the data or build the Catalogue of Life. In this presentation, EJT will be used to outline the nature of natural history publishers and their technical setup. This is followed by a description of the post-publishing workflow using the Plazi workflow and dissemination via the Biodiversity Literature Repository (BLR) and TreatmentBank. The presentation outlines switching the publishing workflow to increased use of Extensible Markup Language (XML) and visualization of the output, and concludes with publishing guidelines that enable more efficient text and data mining of the content of taxonomic publications.
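As an illustrative sketch of what a post-publication workflow can do once articles carry semantic markup, individual taxonomic treatments can be pulled out of the article XML for separate deposition. The tag name below is hypothetical, not the actual Plazi/TaxPub schema:

```python
# Extract each marked-up treatment element from an article for separate
# deposition. The <treatment> tag name is illustrative only.
import xml.etree.ElementTree as ET

def extract_treatments(xml_text, tag="treatment"):
    """Return every serialized treatment element found in the article."""
    root = ET.fromstring(xml_text)
    return [ET.tostring(el, encoding="unicode") for el in root.iter(tag)]
```

Once treatments are standalone records, they can each receive a persistent identifier and be cited, aggregated, or mined independently of the article that contained them.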


Author(s):  
Antonia Schrader ◽  
Alexander Grossmann ◽  
Michael Reiche

Across the world, there is a growing interest in Open Access (OA) publishing, which has become a trend of key importance to the scientific community. However, the publication landscape in Germany shows strikingly different approaches. In particular, OA book publishing is still at a relatively early stage, so OA books are published much less frequently than OA journal articles. Although well-established publishers offer the publication of OA books, only certain researchers can actually afford to publish with them because of high Book Processing Charges (BPCs). In contrast, university presses publish OA books at no charge or at significantly lower charges; however, university presses are often inadequately staffed and lack the technical know-how of state-of-the-art OA book publishing that well-established publishers possess. For these reasons, our research project aims to develop an ideal and transferable publication workflow for OA books that is cost-effective, personnel-efficient, and media-neutral, enabling universities to publish their publications as OA. To this end, a one-day meeting with stakeholders of the publication landscape was held in June 2018 at the University of Applied Sciences in Leipzig, Germany. During the meeting, the stakeholders were asked to present their views on the current situation, the lessons learned, and the shortcomings of existing approaches. As a result, the observation was confirmed that the publication landscape is very heterogeneous and that there are no standardised interfaces and no harmonised practices for publishing OA books. Furthermore, a discussion with the stakeholders during the second part of the meeting revealed various further issues of OA book publishing that have to be considered. The various challenges and wishes of the stakeholders could be classified into five topic areas.
These findings illustrate that the primary task of the research project has to be the analysis of existing publishing workflows and the abstraction of generally valid processes needed to publish OA books. Additionally, the further issues of OA book publishing mentioned by the stakeholders have to be addressed during development. The five topic areas will help reduce the complexity of the project.


2018 ◽  
Author(s):  
Iain Hrynaszkiewicz ◽  
Rebecca Grant

In 2016 the publisher Springer Nature introduced four standard research data policies for its journals, enabling more journals to adopt a data policy appropriate for their discipline and community. These standard policies have been adopted by more than 1,500 journals, and similar initiatives to standardise journal research data policies have since been introduced by other large publishers. To support researchers and editors, Springer Nature launched a Research Data Helpdesk, which by October 2017 had received more than 300 enquiries. A large survey of researchers, with more than 7,000 respondents in 2017, revealed that many researchers need support with data management and curation tasks. In 2017 Springer Nature introduced a pilot service to provide additional support to researchers who wish to make their data available alongside their published articles. This Research Data Support service provides hands-on assistance to researchers in uploading their data to a repository, selecting an appropriate licence, enhancing metadata, and cross-referencing the data and its associated publication. The data curation standards were subject to blinded testing by professional editors, and curated datasets scored much higher on average for metadata quality and completeness. We describe the implementation of – and lessons learned from providing – a third-party data deposition and curation service at a large scholarly publisher, which has been used by authors publishing in journals including Nature and BMC Ecology. We conclude with current and future developments, which extend the Research Data Support service to any published researcher, and to research institutions and conferences, providing opportunities to embed research data management support earlier in the scholarly publishing workflow. This paper was presented at the PV 2018 Conference: Adding Value and Preserving Data.


2018 ◽  
Vol 1007 ◽  
pp. 012032
Author(s):  
R Rahim ◽  
D E Irawan ◽  
A Zulfikar ◽  
R Hardi ◽  
L Arliman S ◽  
...  

2017 ◽  
Author(s):  
Robbi Rahim ◽  
Dasapta Erwin Irawan ◽  
Nuning Kurniasih ◽  
Ansari Saleh Ahmar ◽  
Ratnadewi Ratnadewi ◽  
...  

This is the postprint version of an article presented at the 2017 International Conference on Mechanical, Electronics, Computer, and Industrial Technology (MECnIT 2017) on 7 December 2017 at the Grand Kanaya Hotel, hosted by Universitas Prima Indonesia, and awaiting publication with IOP Publishing (Scopus-indexed).

