Robust Query Processing in Database Systems

2018 ◽  
Author(s):  
Jayant Haritsa

Database management systems constitute the backbone of today’s information-rich society, providing a congenial environment for handling enterprise data during its entire life cycle of generation, storage, maintenance and processing. The de facto standard user interface to query the information present in the database is SQL (Structured Query Language). An organic USP of SQL is its declarative persona, which enables users to focus solely on query formulation, leaving it to the database system to identify an efficient execution strategy. Paradoxically, however, the declarative constitution of SQL is also its Achilles heel. This is because the execution strategies chosen by the system often turn out, in hindsight, to be highly sub-optimal as compared to the ideal choice. Unfortunately, due to the intrinsic technical complexities and challenges, solutions to this long-standing problem have remained chronically elusive despite intensive study by the database research community over the past five decades.
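As a rough illustration of the declarative division of labour described above (not from the book itself; the schema, table, and index names are hypothetical), SQLite's EXPLAIN QUERY PLAN shows how the system, not the user, chooses the execution strategy for the same query:

```python
import sqlite3

# Hypothetical schema: the user states WHAT to retrieve; the engine decides HOW.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                 [(i % 100, i * 1.5) for i in range(1000)])

query = "SELECT total FROM orders WHERE customer_id = 42"

# Without an index, the optimizer falls back to a full table scan.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()

# After adding an index, the very same declarative query gets a cheaper strategy.
conn.execute("CREATE INDEX idx_customer ON orders (customer_id)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()

print(plan_before[0][-1])  # e.g. a SCAN over the whole table
print(plan_after[0][-1])   # e.g. a SEARCH using idx_customer
```

The query text never changes; only the system's chosen strategy does, which is exactly the gap between the plan picked and the ideal plan that the book addresses.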

2021 ◽  
Vol 19 ◽  
pp. 151-158
Author(s):  
Piotr Rymarski ◽  
Grzegorz Kozieł

Most of today's web applications run on relational database systems. Communication with them is possible through statements written in Structured Query Language (SQL). This paper presents the most popular relational database management systems and describes common ways to optimize SQL queries. Using a research environment based on a fragment of the imdb.com database, implemented on the Oracle Database, MySQL, Microsoft SQL Server and PostgreSQL engines, a number of test scenarios were performed. The aim was to measure the performance changes of SQL queries resulting from syntax modifications that preserve the result, as well as the impact of database organization, indexing, and the advanced mechanisms delivered in the systems used that aim to increase the efficiency of the operations performed. The tests were carried out using a proprietary application written in Java using the Hibernate framework.
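A minimal sketch of the kind of test scenario described above, assuming a hypothetical movie schema (the actual imdb.com fragment, the Java/Hibernate harness, and the four engines used in the paper are not reproduced here): two syntactically different SQL formulations that must return the same result set.

```python
import sqlite3

# Hypothetical schema loosely inspired by a movie database;
# table and column names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE movies (id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE ratings (movie_id INTEGER, score REAL);
INSERT INTO movies VALUES (1, 'A'), (2, 'B'), (3, 'C');
INSERT INTO ratings VALUES (1, 8.0), (1, 9.0), (3, 6.5);
""")

# Two syntactically different queries with the same result:
# an IN subquery versus an explicit JOIN with DISTINCT.
q_subquery = """SELECT title FROM movies
                WHERE id IN (SELECT movie_id FROM ratings WHERE score > 7)"""
q_join = """SELECT DISTINCT m.title FROM movies m
            JOIN ratings r ON r.movie_id = m.id WHERE r.score > 7"""

assert conn.execute(q_subquery).fetchall() == conn.execute(q_join).fetchall()
```

Benchmarking in the paper's spirit would time such equivalent formulations against each other on each engine, since optimizers may plan them differently even though the results are identical.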


2014 ◽  
Author(s):  
Cameron Neylon

See video of presentation (62 min.)

The question of radical change in the scholarly literature has shifted in the past 12 months from ‘if’ to ‘how’. There is growing consensus on the need for change, alongside increasing action from funders and institutions aimed at driving those changes. However, there is less consensus on what the ultimate end state will look like and how to get there. In particular, two challenging collective action problems exist: how to manage the diversion of money from subscription budgets into the development, maintenance and running of a web-native communications infrastructure, and how to simultaneously encourage the cultural changes required in the research community to take advantage of the opportunities that infrastructure will bring. These two challenges are tightly coupled both with each other and with our vision of the ideal state of scholarly communications infrastructure. I will seek to chart out the various visions of the future alongside a model of how to drive the cultural and economic changes that can realise those visions in practice.


Author(s):  
Hong Va Leong

With the widespread deployment of wireless communication infrastructure in the past decade, accessing information online while a client is on the move has become a concrete possibility. Such a computing environment is often referred to as a mobile environment (Imielinski & Badrinath, 1994). A typical group of applications that deserve strong support under the mobile environment would be database access. Database systems that support operations initiated from mobile clients are referred to as mobile databases (Leong & Si, 1997). We have witnessed a tremendous growth in mobile database research in the past ten years. Yet only the most primitive results have been incorporated in real applications. This is due to the additional dimensions of complexity that the mobile environment has introduced, beyond the standard client/server computing environment.


Author(s):  
Xiongpai Qin ◽  
Yueguo Chen

In the last decade, computer hardware progressed by leaps and bounds. The advancements of hardware include the adoption of multi-core CPUs, the use of GPUs in data-intensive tasks, ever-larger main memory capacities, and the maturity and production use of non-volatile memory. Database systems immediately benefit from faster CPUs/GPUs and bigger memory and run faster. However, there are some pitfalls. For example, database systems running on multi-core processors may suffer from cache conflicts when the number of concurrently executing DB processes increases. To fully exploit the advantages of new hardware and improve the performance of database systems, database software must be revised to varying degrees. This chapter introduces some efforts of the database research community in this area.


Author(s):  
Gerald Gaus

This book lays out a vision for how we should theorize about justice in a diverse society. It shows how free and equal people, faced with intractable struggles and irreconcilable conflicts, might share a common moral life shaped by a just framework. The book argues that if we are to take diversity seriously and if moral inquiry is sincere about shaping the world, then the pursuit of idealized and perfect theories of justice—essentially, the entire production of theories of justice that has dominated political philosophy for the past forty years—needs to change. Drawing on recent work in social science and philosophy, the book points to an important paradox: only those in a heterogeneous society—with its various religious, moral, and political perspectives—have a reasonable hope of understanding what an ideally just society would be like. However, due to its very nature, this world could never be collectively devoted to any single ideal. The book defends the moral constitution of this pluralistic, open society, where the very clash and disagreement of ideals spurs all to better understand what their personal ideals of justice happen to be. Presenting an original framework for how we should think about morality, this book rigorously analyzes a theory of ideal justice more suitable for contemporary times.


Author(s):  
Jeasik Cho

This book provides the qualitative research community with some insight on how to evaluate the quality of qualitative research. This topic has gained little attention during the past few decades. We, qualitative researchers, read journal articles, serve on masters’ and doctoral committees, and also make decisions on whether conference proposals, manuscripts, or large-scale grant proposals should be accepted or rejected. It is assumed that various perspectives or criteria, depending on various paradigms, theories, or fields of discipline, have been used in assessing the quality of qualitative research. Nonetheless, until now, no textbook has been specifically devoted to exploring theories, practices, and reflections associated with the evaluation of qualitative research. This book constructs a typology of evaluating qualitative research, examines actual information from websites and qualitative journal editors, and reflects on some challenges that are currently encountered by the qualitative research community. Many different kinds of journals’ review guidelines and available assessment tools are collected and analyzed. Consequently, core criteria that stand out among these evaluation tools are presented. Readers are invited to join the author to confidently proclaim: “Fortunately, there are commonly agreed, bold standards for evaluating the goodness of qualitative research in the academic research community. These standards are a part of what is generally called ‘scientific research.’ ”


Author(s):  
Edward Bellamy

‘No person can be blamed for refusing to read another word of what promises to be a mere imposition upon his credulity.’ Julian West, a feckless aristocrat living in fin-de-siècle Boston, plunges into a deep hypnotic sleep in 1887 and wakes up in the year 2000. America has been turned into a rigorously centralized democratic society in which everything is controlled by a humane and efficient state. In little more than a hundred years the horrors of nineteenth-century capitalism have been all but forgotten. The squalid slums of Boston have been replaced by broad streets, and technological inventions have transformed people’s everyday lives. Exiled from the past, West excitedly settles into the ideal society of the future, while still fearing that he has dreamt up his experiences as a time traveller. Edward Bellamy’s Looking Backward (1888) is a thunderous indictment of industrial capitalism and a resplendent vision of life in a socialist utopia. Matthew Beaumont’s lively edition explores the political and psychological peculiarities of this celebrated utopian fiction.


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Peter Baumann ◽  
Dimitar Misev ◽  
Vlad Merticariu ◽  
Bang Pham Huu

Multi-dimensional arrays (also known as raster data or gridded data) play a key role in many, if not all, science and engineering domains, where they typically represent spatio-temporal sensor, image, simulation output, or statistics “datacubes”. As classic database technology does not support arrays adequately, such data today are maintained mostly in silo solutions, with architectures that tend to erode and not keep up with the increasing requirements on performance and service quality. Array Database systems attempt to close this gap by providing declarative query support for flexible ad-hoc analytics on large n-D arrays, similar to what SQL offers on set-oriented data, XQuery on hierarchical data, and SPARQL and Cypher on graph data. Today, Petascale Array Database installations exist, employing massive parallelism and distributed processing. Hence, questions arise about the technology and standards available, usability, and overall maturity. Several papers have compared models and formalisms, and benchmarks have been undertaken as well, typically comparing two systems against each other. While each of these represents valuable research, to the best of our knowledge there is no comprehensive survey combining model, query language, architecture, practical usability, and performance aspects. The scope of this comparison also differentiates our study: 19 systems are compared, and four are benchmarked, to an extent and depth clearly exceeding previous papers in the field; for example, the subsetting tests were designed so that systems cannot be tuned specifically to these queries. It is hoped that this gives a representative overview to all who want to immerse themselves in the field, as well as clear guidance to those who need to choose the best-suited datacube tool for their application. This article presents results of the Research Data Alliance (RDA) Array Database Assessment Working Group (ADA:WG), a subgroup of the Big Data Interest Group. It has elicited the state of the art in Array Databases, technically supported by IEEE GRSS and CODATA Germany, to answer the question: how can data scientists and engineers benefit from Array Database technology? As it turns out, Array Databases can offer significant advantages in terms of flexibility, functionality and extensibility, as well as performance and scalability; in total, the database approach of offering analysis-ready “datacubes” heralds a new level of service quality. Investigation shows that there is a lively ecosystem of technology with increasing uptake, and proven array analytics standards are in place. Consequently, such approaches have to be considered a serious option for datacube services in science, engineering and beyond. Tools, though, vary greatly in functionality and performance.
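As a loose analogy only (NumPy is not an Array DBMS, and none of the surveyed systems is shown here), the following sketch illustrates the style of declarative n-D subsetting and axis aggregation that datacube query languages offer; the cube shape and axis names are illustrative:

```python
import numpy as np

# Illustrative "datacube": time x latitude x longitude.
cube = np.arange(4 * 3 * 5, dtype=float).reshape(4, 3, 5)

# Declarative-style subsetting: one expression instead of nested loops,
# comparable in spirit to a trim/slice operation in an array query language.
subset = cube[1:3, :, 2:4]          # two time steps, all latitudes, two longitudes
mean_over_time = cube.mean(axis=0)  # aggregate along the time axis

print(subset.shape)          # (2, 3, 2)
print(mean_over_time.shape)  # (3, 5)
```

An Array DBMS evaluates comparable expressions server-side over arrays far larger than memory, with tiling, parallelism and distribution handled by the engine rather than by client code.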


2020 ◽  
Vol 10 (18) ◽  
pp. 6553
Author(s):  
Sabrina Azzi ◽  
Stéphane Gagnon ◽  
Alex Ramirez ◽  
Gregory Richards

Healthcare has been considered one of the most promising application areas for artificial intelligence and analytics (AIA) ever since the latter's emergence. AI combined with analytics technologies is increasingly changing medical practice and healthcare in impressive ways, using efficient algorithms from various branches of information technology (IT). Indeed, numerous works are published every year by universities and innovation centers worldwide, but there are concerns about their effective success in practice. There are growing examples of AIA being implemented in healthcare with promising results. This review paper summarizes the past 5 years of healthcare applications of AIA, across different techniques and medical specialties, and discusses the current issues and challenges related to this revolutionary technology. A total of 24,782 articles were identified. The aim of this paper is to provide the research community with the necessary background to push this field even further, and to propose a framework that will help integrate diverse AIA technologies around patient needs in various healthcare contexts, especially for chronic care patients, who present the most complex comorbidities and care needs.

