Moral limits of genomic research and biotechnology as the basis to form a legal space of innovative medicine

2019 ◽  
Vol XIV (2) ◽  
Author(s):  
A.M. Gerasimov ◽  
Aleksandr Mitin

The article discusses the possibilities of automating legal activities. Special attention is paid to LegalTech, a new branch of business that provides legal services using information technology. Some projects in this area are briefly described: the FreshDoc document designer, the VideoContract app, and electronic trading platforms such as Legal Space and Pravoved.Ru. Although the legal community is not quite ready to work under such conditions, higher education institutions are already reforming their curricula, developing disciplines that build professional competence in introducing technologies that automate legal work. The author, in turn, proposes using chatbots in legal clinics, gives examples of new disciplines for master’s degree programs, and considers the idea of holding final examinations outside universities in certification centers. It is emphasized that jurisprudence contains many typical situations calling for typical decisions; here artificial intelligence will be a good helper, leaving scholars more time for a comprehensive analysis of law. Thus, even with the advent of new technologies, the creative work of lawyers will always be in demand.
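The chatbot idea in the abstract above rests on matching typical questions to typical answers and escalating everything else. A minimal, hypothetical sketch of such a rule-based legal-clinic bot (all keywords and answers here are illustrative, not from the article):

```python
# Hypothetical rule-based chatbot for a legal clinic: typical questions
# are matched by keyword to typical answers; anything unmatched is
# routed to a human lawyer for individual analysis.

RULES = {
    "contract": "Standard contract templates can be generated automatically.",
    "deadline": "Limitation periods depend on the claim type; please specify it.",
}

def legal_clinic_bot(question):
    """Return a canned answer for a typical question, or refer onward."""
    q = question.lower()
    for keyword, answer in RULES.items():
        if keyword in q:
            return answer
    return "Referred to a human lawyer for individual analysis."
```

The fallback branch reflects the article's thesis: routine cases are automated, while non-typical cases remain the lawyer's creative work.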


2019 ◽  
Vol 25 (31) ◽  
pp. 3350-3357 ◽  
Author(s):  
Pooja Tripathi ◽  
Jyotsna Singh ◽  
Jonathan A. Lal ◽  
Vijay Tripathi

Background: With the advent of high-throughput next-generation sequencing (NGS), biological drug-discovery research has been directed towards the oncology and infectious-disease therapeutic areas, with extensive use in biopharmaceutical development and vaccine production. Method: This review addresses the basic background of NGS technologies and the potential applications of NGS in drug design. Our purpose is also to provide a brief introduction to the various next-generation sequencing techniques. Discussions: The high-throughput methods execute Large-scale Unbiased Sequencing (LUS), which comprises Massively Parallel Sequencing (MPS) or NGS technologies. These are related terms describing a DNA sequencing technology that has revolutionized genomic research. Using NGS, an entire human genome can be sequenced within a single day. Conclusion: Analysis of NGS data unravels important clues in the quest for the treatment of various life-threatening diseases and other scientific problems related to human welfare.
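The claim that an entire human genome can be sequenced within a day can be made concrete with the standard Lander-Waterman coverage formula, C = N·L / G (N reads of length L over a genome of size G). A sketch with hypothetical run parameters, not figures from the review:

```python
# Back-of-the-envelope sequencing coverage via Lander-Waterman: C = N * L / G.
# The read count and read length below are illustrative assumptions.

def coverage(n_reads, read_length, genome_size):
    """Expected fold-coverage of a genome by a sequencing run."""
    return n_reads * read_length / genome_size

HUMAN_GENOME = 3.2e9  # ~3.2 Gb haploid human genome

# 1.2 billion 150-bp reads (a plausible one-day high-throughput run)
c = coverage(n_reads=1.2e9, read_length=150, genome_size=HUMAN_GENOME)
# c is ~56x, comfortably above the ~30x typically used for human resequencing
```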


2019 ◽  
Vol 14 (2) ◽  
pp. 157-163
Author(s):  
Majid Hajibaba ◽  
Mohsen Sharifi ◽  
Saeid Gorgin

Background: One of the pivotal challenges in today's genomic research is the fast processing of voluminous data such as that generated by high-throughput Next-Generation Sequencing technologies. On the other hand, BLAST (Basic Local Alignment Search Tool), a long-established and renowned tool in bioinformatics, has proven to be remarkably slow in this regard. Objective: To improve the performance of BLAST on voluminous data, we have applied a novel memory-aware technique to BLAST for faster parallel processing. Method: We use a master-worker model alongside a memory-aware technique in which the master partitions the whole data into equal chunks, one chunk per worker, and each worker then further splits and formats its allocated chunk according to the size of its memory. Each worker searches its splits one by one through a list of queries. Results: We chose a list of queries with different lengths to run intensive searches against a huge database, UniProtKB/TrEMBL. Our experiments show a 20 percent performance improvement when workers used our proposed memory-aware technique compared to when they were not memory-aware. Experiments show an even higher improvement, approximately 50 percent, when we applied our memory-aware technique to mpiBLAST. Conclusion: We have shown that memory-awareness in formatting a bulky database, when running BLAST, can improve performance significantly while preventing unexpected crashes in low-memory environments. Even though distributed computing mitigates search time by partitioning and distributing database portions, our memory-aware technique additionally alleviates the negative effects of page faults on performance.
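The two-level partitioning described in the Method section can be sketched as follows. This is a simplified, hypothetical model in sizes only (megabytes stand in for sequence data, and a tuple stands in for a BLAST search); it is not the authors' implementation:

```python
# Two-level memory-aware partitioning: the master divides the database
# into equal chunks (one per worker); each worker further splits its
# chunk into pieces no larger than its own memory, then searches each
# split one by one through every query.

def master_partition(db_size_mb, n_workers):
    """Master: equal-sized chunks, one per worker; remainder to the last."""
    base = db_size_mb // n_workers
    chunks = [base] * n_workers
    chunks[-1] += db_size_mb - base * n_workers
    return chunks

def worker_split(chunk_mb, worker_mem_mb):
    """Worker: split its chunk so every piece fits in its memory."""
    splits = []
    remaining = chunk_mb
    while remaining > 0:
        piece = min(worker_mem_mb, remaining)
        splits.append(piece)
        remaining -= piece
    return splits

def search(db_size_mb, worker_mems_mb, queries):
    """Run every query over every memory-sized split (stand-in for BLAST)."""
    chunks = master_partition(db_size_mb, len(worker_mems_mb))
    results = []
    for chunk, mem in zip(chunks, worker_mems_mb):
        for split in worker_split(chunk, mem):
            for q in queries:
                results.append((split, q))  # placeholder for one BLAST search
    return results
```

Because every split fits in a worker's memory, a search never touches more data than RAM can hold, which is the mechanism the abstract credits for avoiding page faults and low-memory crashes.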


2020 ◽  
Author(s):  
Fang Li ◽  
Muhammad "Tuan" Amith ◽  
Grace Xiong ◽  
Jingcheng Du ◽  
Yang Xiang ◽  
...  

BACKGROUND Alzheimer’s Disease (AD) is a devastating neurodegenerative disease whose pathophysiology is insufficiently understood and for which curative drugs have yet to be developed. Computational drug repurposing introduces a promising complementary drug-discovery strategy, which benefits from an accelerated development process and a decreased failure rate. However, generating new hypotheses in AD drug repurposing requires multi-dimensional and multi-disciplinary data integration and connection, posing a great challenge in the era of big data. By integrating data with computable semantics, ontologies can infer unknown relationships through automated reasoning and thus fulfill an essential role in supporting computational drug repurposing. OBJECTIVE The study aimed to systematically design a robust Drug Repurposing-Oriented Alzheimer’s Disease Ontology (DROADO) that models the fundamental elements involved in AD drug repurposing and their relationships, and comprehensively integrates up-to-date research advances. METHODS We devised a core knowledge model of computational AD drug repurposing based on both pre-genomic and post-genomic research paradigms. The model centers on possible AD pathophysiology and abstracts the essential elements and their relationships. We adopted a hybrid strategy to populate the ontology (classes and properties): importing from well-curated databases, extracting from high-quality papers, and reusing existing ontologies. We also leveraged n-ary relations and nanopublication graphs to enrich the object relations, making the knowledge stored in the ontology more powerful in supporting computational processing. The initially built ontology was evaluated with OntoKeeper, a semiotic-driven, web-based tool. RESULTS The current version of DROADO comprises 1,021 classes, 23 object properties, and 3,207 axioms, depicting a fundamental network of computational neuroscience concepts and relationships. Assessment using OntoKeeper's semiotic evaluation metrics indicated sufficient preliminary quality (semantics, usefulness, and community consensus) of the ontology. CONCLUSIONS As an in-depth knowledge base, DROADO promises to enable computational algorithms to perform supervised mining from multi-source data and, ultimately, to facilitate the discovery of novel AD drug targets and the realization of AD drug repurposing.
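The abstract's claim that an ontology can "infer unknown relationships through automated reasoning" can be illustrated with a minimal triple store and one inference rule. Every entity and relation name below is a made-up example, not taken from DROADO:

```python
# Minimal sketch of ontology-style inference for drug repurposing:
# facts are (subject, predicate, object) triples, and a rule chains
# inhibits -> involvedIn -> implicatedIn into a repurposing candidate.
# All names here are illustrative placeholders.

FACTS = {
    ("DrugX", "inhibits", "ProteinY"),
    ("ProteinY", "involvedIn", "AmyloidPathway"),
    ("AmyloidPathway", "implicatedIn", "AlzheimersDisease"),
}

def infer_candidates(facts):
    """If a drug inhibits a protein involved in a pathway implicated in a
    disease, infer the drug as a repurposing candidate for that disease."""
    inhibits = {(s, o) for s, p, o in facts if p == "inhibits"}
    involved = {(s, o) for s, p, o in facts if p == "involvedIn"}
    implicated = {(s, o) for s, p, o in facts if p == "implicatedIn"}
    candidates = set()
    for drug, protein in inhibits:
        for prot, pathway in involved:
            if prot != protein:
                continue
            for path, disease in implicated:
                if path == pathway:
                    candidates.add((drug, "candidateFor", disease))
    return candidates
```

No single input triple links DrugX to the disease; the link emerges only from reasoning over the chain, which is the kind of hypothesis generation the ontology is built to support (at far larger scale, over its 1,021 classes and 23 object properties).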


Author(s):  
Joshua M. White

This book offers a comprehensive examination of the shape and impact of piracy in the eastern half of the Mediterranean and the Ottoman Empire’s administrative, legal, and diplomatic response. In the late sixteenth and seventeenth centuries, piracy had a tremendous effect on the formation of international law, the conduct of diplomacy, the articulation of Ottoman imperial and Islamic law, and their application in Ottoman courts. Piracy and Law draws on research in archives and libraries in Istanbul, Venice, Crete, London, and Paris to bring the Ottoman state and Ottoman victims into the story for the first time. It explains why piracy exploded after the 1570s and why the Ottoman state was largely unable to marshal an effective military solution even as it responded dynamically in the spheres of law and diplomacy. By focusing on the Ottoman victims, jurists, and officials who had to contend most with the consequences of piracy, Piracy and Law reveals a broader range of piratical practitioners than the Muslim and Catholic corsairs who have typically been the focus of study and considers their consequences for the Ottoman state and those who traveled through Ottoman waters. This book argues that what made the eastern half of the Mediterranean basin the Ottoman Mediterranean, more than sovereignty or naval supremacy—which was ephemeral—was that it was a legal space. The challenge of piracy helped to define its contours.


This book continues the thick comparative approach that lies at the heart of the Max Planck Handbook series. It addresses one of the most significant phenomena of modern-day public law: constitutional adjudication. This book introduces, through individual country reports, the institutions and practices that make constitutional adjudication come to life across the Continent. Thus, each country report will explain the history, design, composition, and practice of the body that engages (or not) in constitutional scrutiny. To draw as complete a picture as possible, the book includes countries with powerful constitutional courts, jurisdictions with traditional supreme courts, and states with small institutions and limited ex ante review. In keeping with the focus on a diverse but unified legal space, each report also details how its institution fits into the broader association of constitutional courts that, through dialogue and conflict, brings to fruition the European legal space.

