Historical Databases, Big and Small

2021 ◽  
Vol 10 ◽  
pp. 24-29
Author(s):  
Peter Doorn

Big Data is a relative term, and Small Data can be equally important. Volume alone does not determine whether data is 'Big'; three more Vs characterise the term: velocity (the speed of data generation and processing), veracity (data quality), and variety. Perhaps the most defining criterion is methodological: data becomes really big when new methods are needed to process and analyse it. In contrast, this paper demonstrates how even a tiny dataset can contribute to our understanding of the past, in this case of the historical geography of two provinces in Ottoman Greece in the 17th century. Graph analysis is applied to a dataset of just 16 data pairs, illustrating the point that a close-up view of data complements the view from farther away at bigger data volumes.
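The kind of graph analysis the abstract describes can be sketched in a few lines. This is a minimal illustration only: the node names below are hypothetical placeholders, not the actual place names from the Ottoman Greece dataset, and the paper's own pairing of settlements is not reproduced here.

```python
# Minimal sketch of graph analysis on a tiny paired dataset.
# Node names are hypothetical placeholders, not the actual
# 17th-century place names from the paper's dataset.
from collections import defaultdict

pairs = [("A", "X"), ("A", "Y"), ("B", "Y"), ("B", "Z"), ("C", "Z")]

# Build an undirected adjacency list from the data pairs.
adj = defaultdict(set)
for u, v in pairs:
    adj[u].add(v)
    adj[v].add(u)

# Degree centrality: how many links each node carries.
degree = {node: len(neigh) for node, neigh in adj.items()}

# Connected components via depth-first traversal.
def components(adj):
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(adj[node] - comp)
        seen |= comp
        comps.append(comp)
    return comps

print(degree)
print(components(adj))
```

Even at this scale, degree counts and component structure can reveal which places anchored a regional network, which is the close-up reading the paper advocates.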

Author(s):  
Vijander Singh ◽  
Amit Kumar Bairwa ◽  
Deepak Sinwar

In today's digital world, data is generated every second in numerous domains such as astronomy, social media, medicine, transportation, e-commerce, scientific research, agriculture, and video and audio downloads. According to one survey, in 60 seconds more than 600 new users join YouTube and 7 billion queries are executed on Google. Thus, an immense amount of structured, unstructured, and semi-structured data is produced every second across the cyber world, and it must be managed efficiently. Big data is characterised by properties such as complexity, the 'V' factors, and multivariable information, and it must be stored, retrieved, and distributed. Logically organised data can serve as information in the digital world. In the past century, data sources were very limited in size and could be managed with pen and paper. The next generation of data-generation tools included Microsoft Excel and Access, and SQL-based database systems such as MySQL and DB2.


2020 ◽  
Vol 110 ◽  
pp. 42-48
Author(s):  
Janet Currie ◽  
Henrik Kleven ◽  
Esmée Zwiers

The last 40 years have seen huge innovations in computing and in the availability of data. Data derived from millions of administrative records or by using (as we do) new methods of data generation such as text mining are now common. New data often requires new methods, which in turn can inspire new data collection. If history is any guide, some methods will stick and others will prove to be a flash in the pan. However, the larger trends toward demanding greater credibility and transparency from researchers in applied economics and a 'collage' approach to assembling evidence will likely continue.


2016 ◽  
Vol 8 (4) ◽  
pp. 165-175 ◽  
Author(s):  
Oleg Kapliński ◽  
Natalija Košeleva ◽  
Guoda Ropaitė

Data generation has increased drastically over the past few years. Data management has also grown in importance, because extracting significant value from huge piles of raw data is of prime importance when making decisions. This article reviews the concept of Big Data. The Thomson Reuters Web of Science Core Collection academic database was used to survey publications that contained the "BIG DATA" keyword and fell under the Web of Science category "Engineering". Publications were analysed by year, country, journal, author, language, and funding agency.
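The tallying step of such a bibliometric review reduces to counting records along each metadata field. A minimal sketch, assuming records have already been exported from the database; the entries below are invented placeholders, not actual Web of Science Core Collection records.

```python
# Illustrative bibliometric tally over exported publication records.
# The records are invented placeholders, not actual Web of Science
# Core Collection entries.
from collections import Counter

records = [
    {"year": 2014, "country": "USA", "journal": "J. Big Data"},
    {"year": 2015, "country": "China", "journal": "IEEE Access"},
    {"year": 2015, "country": "USA", "journal": "J. Big Data"},
]

# One Counter per analysis dimension named in the abstract.
by_year = Counter(r["year"] for r in records)
by_country = Counter(r["country"] for r in records)
by_journal = Counter(r["journal"] for r in records)

print(by_year.most_common())
print(by_country.most_common())
print(by_journal.most_common())
```

The same pattern extends to author, language, and funding agency by adding a Counter over the corresponding field.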


2022 ◽  
pp. 1126-1148
Author(s):  
Vijander Singh ◽  
Amit Kumar Bairwa ◽  
Deepak Sinwar

In today's digital world, data is generated every second in numerous domains such as astronomy, social media, medicine, transportation, e-commerce, scientific research, agriculture, and video and audio downloads. According to one survey, in 60 seconds more than 600 new users join YouTube and 7 billion queries are executed on Google. Thus, an immense amount of structured, unstructured, and semi-structured data is produced every second across the cyber world, and it must be managed efficiently. Big data is characterised by properties such as complexity, the 'V' factors, and multivariable information, and it must be stored, retrieved, and distributed. Logically organised data can serve as information in the digital world. In the past century, data sources were very limited in size and could be managed with pen and paper. The next generation of data-generation tools included Microsoft Excel and Access, and SQL-based database systems such as MySQL and DB2.


2017 ◽  
Vol 13 (4) ◽  
pp. 13-21
Author(s):  
Sh M Khapizov ◽  
M G Shekhmagomedov

The article is devoted to the study of inscriptions on the gravestones of Haji Ibrahim al-Uradi, his father, brothers, and other relatives. The information revealed in translating these inscriptions makes it possible to date important events in the history of Highland Dagestan, and to reconsider some important events in the past of Hidatl. The epitaphs are also interesting in their own right, as historical and cultural monuments that needed to be studied and attributed. Research on these epigraphic monuments refines the periodization of medieval epitaphs in mountainous Dagestan, drawing on the templates of the inscriptions and the features of the Arabic script. We regard the study of medieval epigraphy as one of the important tasks of contemporary Caucasian studies facing Dagestani researchers. Given how weakly the events of that period are illuminated in historical sources, comprehensive work in this direction can fill gaps in our knowledge of the medieval history of Dagestan. In addition, these epigraphs are of great importance for researchers of onomastics, linguistics, and the history of culture and religion in Dagestan. The authors were able to clarify the date of death of Ibrahim-Haji al-Uradi, as well as of his two sons. These data, together with written sources and legends, allowed a reconstruction of the events of the second half of the 18th century. For example, because of a plague epidemic and the death of most of the population of Hidatl, this society was noticeably weakened and could no longer maintain its influence over Akhvakh. Drawing on commemorative records also allowed us to specify the dates of Ibrahim-Haji's pilgrimage to Mecca and Medina, as well as the route by which he travelled to these cities.


2009 ◽  
Vol 5 (1) ◽  
pp. 32
Author(s):  
Melanie Maytin ◽  
Laurence M Epstein

Prior to the introduction of successful intravascular countertraction techniques, options for lead extraction were limited and dedicated tools were non-existent. The significant morbidity and mortality associated with these early extraction techniques limited their application to life-threatening situations such as infection and sepsis. The past 30 years have witnessed significant advances in lead extraction technology, resulting in safer and more efficacious techniques and tools. This evolution occurred out of necessity: much like the pressure of natural selection, it weeded out ineffective and highly morbid techniques while fostering the development of safe, successful, and simpler methods. Future developments in lead extraction are likely to focus on new tools that allow comprehensive device management, and on the design of new leads conceived to facilitate future extraction. Even with these new methods and novel tools, lead extraction will continue to require operators who are well versed in several methods of extraction. Garnering new skills while remembering the lessons of the past will enable extraction technologies to advance without repeating previous mistakes.


Micromachines ◽  
2021 ◽  
Vol 12 (2) ◽  
pp. 118
Author(s):  
Jean-Laurent Pouchairet ◽  
Carole Rossi

For the past two decades, many research groups have investigated new methods for reducing the size and cost of safe and arm-fire systems, while also improving their safety and reliability, through batch processing. Simultaneously, micro- and nanotechnology advancements regarding nanothermite materials have enabled the production of a key technological building block: pyrotechnical microsystems (pyroMEMS). This building block simply consists of microscale electric initiators with a thin thermite layer as the ignition charge. This microscale to millimeter-scale addressable pyroMEMS enables the integration of intelligence into centimeter-scale pyrotechnical systems. To illustrate this technological evolution, we hereby present the development of a smart infrared (IR) electronically controllable flare consisting of three distinct components: (1) a controllable pyrotechnical ejection block comprising three independently addressable small-scale propellers, all integrated into a one-piece molded and interconnected device, (2) a terminal function block comprising a structured IR pyrotechnical loaf coupled with a microinitiation stage integrating low-energy addressable pyroMEMS, and (3) a connected, autonomous, STANAG 4187 compliant, electronic sensor arming and firing block.


2021 ◽  
pp. 105971232098304
Author(s):  
R Alexander Bentley ◽  
Joshua Borycz ◽  
Simon Carrignon ◽  
Damian J Ruck ◽  
Michael J O’Brien

The explosion of online knowledge has, paradoxically, made knowledge difficult to find. A web or journal search might retrieve thousands of articles, ranked in a manner that is biased by, for example, popularity or eigenvector centrality rather than by informed relevance to the complex query. With hundreds of thousands of articles published each year, the dense, tangled thicket of knowledge grows ever more entwined. Although natural language processing and new methods of generating knowledge graphs can extract increasingly high-level interpretations from research articles, the results are inevitably biased toward recent, popular, and/or prestigious sources. This is a result of the inherent nature of human social-learning processes. To preserve and even rediscover lost scientific ideas, we draw on the theory that scientific progress is punctuated by inspired, revolutionary ideas at the origin of new paradigms. Using a brief case example, we suggest how phylogenetic inference might be used to rediscover potentially useful lost discoveries, as a way in which machines could help drive revolutionary science.


Author(s):  
Marco Angrisani ◽  
Anya Samek ◽  
Arie Kapteyn

The number of data sources available for academic research on retirement economics and policy has increased rapidly in the past two decades. Data quality and comparability across studies have also improved considerably, with survey questionnaires progressively converging towards common ways of eliciting the same measurable concepts. Probability-based Internet panels have become a more accepted and recognized tool to obtain research data, allowing for fast, flexible, and cost-effective data collection compared to more traditional modes such as in-person and phone interviews. In an era of big data, academic research has also increasingly been able to access administrative records (e.g., Kostøl and Mogstad, 2014; Cesarini et al., 2016), private-sector financial records (e.g., Gelman et al., 2014), and administrative data married with surveys (Ameriks et al., 2020), to answer questions that could not be successfully tackled otherwise.

