Thiopurine Derivative-Induced Fpg/Nei DNA Glycosylase Inhibition: Structural, Dynamic and Functional Insights

2020 ◽  
Vol 21 (6) ◽  
pp. 2058 ◽  
Author(s):  
Charlotte Rieux ◽  
Stéphane Goffinont ◽  
Franck Coste ◽  
Zahira Tber ◽  
Julien Cros ◽  
...  

DNA glycosylases are emerging as relevant pharmacological targets in inflammation, cancer and neurodegenerative diseases. Consequently, the search for inhibitors of these enzymes has become a very active research field. As a continuation of previous work that showed that 2-thioxanthine (2TX) is an irreversible inhibitor of zinc finger (ZnF)-containing Fpg/Nei DNA glycosylases, we designed and synthesized a mini-library of 2TX-derivatives (TXn) and evaluated their ability to inhibit Fpg/Nei enzymes. Among forty compounds, four TXn were better inhibitors than 2TX for Fpg. Unexpectedly, but very interestingly, two dithiolated derivatives more selectively and efficiently inhibit the zincless finger (ZnLF)-containing enzymes (human and mimivirus Neil1 DNA glycosylases hNeil1 and MvNei1, respectively). By combining chemistry, biochemistry, mass spectrometry, blind and flexible docking and X-ray structure analysis, we localized new TXn binding sites on Fpg/Nei enzymes. This endeavor allowed us to decipher at the atomic level the mode of action for the best TXn inhibitors on the ZnF-containing enzymes. We discovered an original inhibition mechanism for the ZnLF-containing Fpg/Nei DNA glycosylases by disulfide cyclic trimeric forms of dithiopurines. This work paves the way for the design and synthesis of a new structural class of inhibitors for selective pharmacological targeting of hNeil1 in cancer and neurodegenerative diseases.

2018 ◽  
Vol 90 (2) ◽  
pp. 363-376 ◽  
Author(s):  
Gianna Reginato ◽  
Massimo Calamante ◽  
Lorenzo Zani ◽  
Alessandro Mordini ◽  
Daniele Franchi

Abstract D-π-A dyes have received special attention in the field of dye-sensitized solar cells (DSSCs). In this kind of molecule, the acceptor group (A) generally acts as an anchor, enabling the adsorption of the dye onto the metal oxide substrate (TiO2) and providing good electron injection. The search for new anchors represents a critical factor for the development of improved DSSCs and has been a very active research field in recent years. This mini-review focuses especially on our work on pyridine-derived anchoring groups for D-π-A dyes, with particular regard to the preparation and characterization of three different families of dyes and a critical evaluation of their stability and efficiency.


Molecules ◽  
2021 ◽  
Vol 26 (11) ◽  
pp. 3192
Author(s):  
Nicolas Giacoletto ◽  
Frédéric Dumur

Over the past several decades, photopolymerization has become an active research field, and the ongoing efforts to develop new photoinitiating systems are supported by the different applications in which this polymerization technique is involved, including dentistry, 3D and 4D printing, adhesives, and laser writing. In the search for new structures, bis-chalcones, which combine two chalcone moieties within a single structure, were identified as promising photosensitizers to initiate both the free-radical polymerization of acrylates and the cationic polymerization of epoxides. In this review, an overview of the different bis-chalcones reported to date is provided. In parallel with the mechanistic investigations aimed at elucidating the polymerization mechanisms, bis-chalcone-based photoinitiating systems have been used for different applications, which are detailed in this review.


2018 ◽  
Vol 34 (10) ◽  
pp. 885-890 ◽  
Author(s):  
Bertrand Jordan

Senescent cells are involved in many age-related diseases, and the effects of their elimination by “senolytic” drugs are an active research field. A recent paper describes a convenient murine model of induced senescence and uses it to convincingly demonstrate the positive effects of senolytics on performance and lifespan. Clinical studies have already been initiated; this approach holds promise to eventually improve the human “healthspan”.


F1000Research ◽  
2021 ◽  
Vol 10 ◽  
pp. 1129
Author(s):  
Marvin Martens ◽  
Rob Stierum ◽  
Emma L. Schymanski ◽  
Chris T. Evelo ◽  
Reza Aalizadeh ◽  
...  

Toxicology has been an active research field for many decades, with academic, industrial and government involvement. Modern omics and computational approaches are changing the field, from merely disease-specific observational models into target-specific predictive models. Traditionally, toxicology has strong links with other fields such as biology, chemistry, pharmacology and medicine. With the rise of synthetic and newly engineered materials, alongside ongoing prioritisation needs in chemical risk assessment for existing chemicals, early predictive evaluations are becoming of utmost importance for both scientific and regulatory purposes. ELIXIR is an intergovernmental organisation that brings together life science resources from across Europe. To coordinate the linkage of various life science efforts around modern predictive toxicology, the establishment of a new ELIXIR Community is seen as instrumental. In the past few years, joint efforts, building on incidental overlap, have been piloted in the context of ELIXIR. For example, the EU-ToxRisk, diXa, HeCaToS, transQST, and nanotoxicology communities have worked with the ELIXIR TeSS, Bioschemas, and Compute Platforms and activities. In 2018, a core group of interested parties wrote a proposal outlining a sketch of what this new ELIXIR Toxicology Community would look like. A recent workshop (held September 30th to October 1st, 2020) extended this into an ELIXIR Toxicology roadmap and a shortlist of limited-investment, high-gain collaborations to give body to this new community. This Whitepaper outlines the results of these efforts and defines our vision of the ELIXIR Toxicology Community and how it complements other ELIXIR activities.


Author(s):  
Elena Morotti ◽  
Davide Evangelista ◽  
Elena Loli Piccolomini

Deep Learning is providing interesting tools for inverse imaging applications. In this work, we consider a medical image reconstruction task from subsampled measurements, an active research field where Convolutional Neural Networks have already revealed their great potential. However, the commonly used architectures are very deep and hence prone to overfitting, and infeasible for clinical use. Inspired by the ideas of the green-AI literature, we here propose a shallow neural network to perform an efficient learned post-processing on images roughly reconstructed by the filtered backprojection algorithm. The results obtained on images from the training set and on unseen images, using both the non-expensive network and the widely used, very deep ResUNet, show that the proposed network computes images of comparable or higher quality in about one quarter of the time.
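The residual learned post-processing idea can be sketched in a few lines. The snippet below is a toy NumPy illustration, not the authors' trained network: `shallow_postprocess`, the random 3x3 kernels, and the layer count are hypothetical stand-ins showing only the shape of the design, namely a few conv+ReLU layers whose output is added back to the coarse filtered-backprojection image.

```python
import numpy as np

def conv2d(img, kernel):
    """'Same' 2D convolution via zero padding (single channel)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def shallow_postprocess(fbp_image, kernels):
    """Residual post-processing: a few conv+ReLU layers whose output
    is added back to the coarse FBP reconstruction."""
    x = fbp_image
    for k in kernels[:-1]:
        x = np.maximum(conv2d(x, k), 0.0)   # conv + ReLU
    x = conv2d(x, kernels[-1])              # linear output layer
    return fbp_image + x                    # residual connection

rng = np.random.default_rng(0)
fbp = rng.standard_normal((16, 16))         # stand-in for an FBP reconstruction
kernels = [rng.standard_normal((3, 3)) * 0.1 for _ in range(3)]
refined = shallow_postprocess(fbp, kernels)
print(refined.shape)  # (16, 16)
```

The residual connection is what makes a shallow network viable here: it only has to learn the artifact pattern to subtract, not the whole image.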


Author(s):  
Ravishankar Palaniappan

Data visualization has the potential to aid humanity not only in exploring and analyzing large-volume datasets but also in identifying and predicting trends and anomalies/outliers in a “simple and consumable” way. These are vital to good and timely decisions for business advantage. Data visualization is an active research field, focusing on the different techniques and tools for qualitative exploration in conjunction with quantitative analysis of data. However, increases in the volume, dimensionality, frequency, and interrelationships of data make the data visualization process notoriously difficult. This necessitates “innovative and iterative” display techniques. Either overlooking any dimensions/relationships of the data structure or choosing an unfitting visualization method will quickly lead to an uninterpretable “junk chart,” which in turn leads to incorrect inferences or conclusions. The purpose of this chapter is to introduce the different phases of data visualization and various techniques which help connect and empower data to mine insights. It exemplifies how data visualization helps to unravel important, meaningful, and useful insights, including trends and outliers, from real-world datasets that might otherwise go unnoticed. The use case in this chapter uses both simulated and real-world datasets to illustrate the effectiveness of data visualization.


Author(s):  
Stylianos Asteriadis ◽  
Nikos Nikolaidis ◽  
Ioannis Pitas ◽  
... 

Facial feature localization is an important task in numerous applications of face image analysis, including face recognition and verification, facial expression recognition, driver's alertness estimation, and head pose estimation. Thus, the area has been a very active research field for many years, and a multitude of methods appear in the literature. Depending on the targeted application, the proposed methods have different characteristics and are designed to perform in different setups. Thus, a method of general applicability seems to be beyond the current state of the art. This chapter intends to offer an up-to-date literature review of facial feature detection algorithms. A review of the image databases and performance metrics that are used to benchmark these algorithms is also provided.


Author(s):  
Ruth Aguilar-Ponce ◽  
J. Luis Tecpanecatl-Xihuitl ◽  
Alfonso Alba-Cadena

Wireless Sensor Networks are moving towards more complex sensors, such as camera sensors; as a result, Visual Sensor Networks have become a very active research field. This type of network brings new challenges, such as processing and transmitting the massive amount of data generated by the camera sensors. Efforts to decrease the amount of data to be transmitted follow two directions: data encoding and data filtering. This chapter introduces an algorithm for each direction. Visual data encoding is performed by means of predictive video encoding, using the Phase-Only Correlation function to achieve motion estimation. Visual data filtering is done at the lowest level of abstraction and is performed in three phases: pixel classification, background update, and detection. The algorithms involved in each phase are light in terms of complexity and memory resources.
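The three-phase filtering pipeline (pixel classification, background update, detection) can be sketched as follows. This is a minimal NumPy illustration under assumed conventions; the fixed threshold, the running-average background model, and the pixel-count detection rule are generic stand-ins, not the chapter's exact algorithms.

```python
import numpy as np

def classify_pixels(frame, background, thresh=25):
    """Phase 1: label pixels as foreground (1) where they deviate
    from the background model by more than `thresh` grey levels."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    return (diff > thresh).astype(np.uint8)

def update_background(background, frame, mask, alpha=0.05):
    """Phase 2: running-average update, applied only where the pixel
    was classified as background (foreground pixels stay frozen)."""
    bg = background.astype(float)
    blended = (1 - alpha) * bg + alpha * frame.astype(float)
    bg = np.where(mask == 0, blended, bg)
    return bg.astype(np.uint8)

def detect(mask, min_pixels=4):
    """Phase 3: report a detection if enough foreground pixels survive."""
    return int(mask.sum()) >= min_pixels

background = np.full((8, 8), 100, dtype=np.uint8)
frame = background.copy()
frame[2:5, 2:5] = 200                      # a bright moving object
mask = classify_pixels(frame, background)
background = update_background(background, frame, mask)
print(detect(mask))  # True
```

Keeping each phase to a threshold, a blend, and a sum is what makes this kind of filtering cheap enough for sensor-node hardware.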


Author(s):  
Abdelhamid Bouchachia

Recently the fields of machine learning, pattern recognition, and data mining have witnessed a new research stream: <i>learning with partial supervision</i> (LPS), also known as <i>semi-supervised learning</i>. This learning scheme is motivated by the fact that the process of acquiring the labeling information of data can be quite costly and sometimes prone to mislabeling. The general spectrum of learning from data is envisioned in Figure 1. As shown, in many situations the data is neither perfectly nor completely labeled.<div><br></div><div>LPS aims at using the available labeled samples to guide the process of building classification and clustering machinery and to help boost their accuracy. Basically, LPS is a combination of two learning paradigms, supervised and unsupervised, where the former deals exclusively with labeled data and the latter is concerned with unlabeled data. Hence the following questions:</div><div><br></div><div><ul><li>Can we improve supervised learning with unlabeled data?&nbsp;<br></li><li>Can we guide unsupervised learning by incorporating a few labeled samples?<br></li></ul></div><div><br></div><div>Typical LPS applications are medical diagnosis (Bouchachia &amp; Pedrycz, 2006a), facial expression recognition (Cohen et al., 2004), text classification (Nigam et al., 2000), protein classification (Weston et al., 2003), and several natural language processing applications such as word sense disambiguation (Niu et al., 2005) and text chunking (Ando &amp; Zhang, 2005).</div><div><br></div><div>Because LPS is still a young but active research field, it lacks a survey outlining the existing approaches and research trends. In this chapter, we take a step towards such an overview. We will discuss (i) the background of LPS, (ii) the main focus of our LPS research and the underlying assumptions behind LPS, and (iii) future directions and challenges of LPS research. </div>
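One common way LPS puts unlabeled data to work is self-training: seed a model with the labeled samples, then iteratively promote the most confident pseudo-labels into the labeled pool. The sketch below is a deliberately simple nearest-neighbour variant written for illustration; `self_train` and its one-promotion-per-round rule are assumptions of this sketch, not a method taken from the chapter.

```python
import numpy as np

def self_train(X_lab, y_lab, X_unlab, rounds=3):
    """Each round: label every unlabeled point with its nearest labeled
    neighbour, then promote the single most confident (closest) point."""
    X_lab, y_lab = X_lab.copy(), y_lab.copy()
    X_unlab = X_unlab.copy()
    for _ in range(rounds):
        if len(X_unlab) == 0:
            break
        # distance from every unlabeled point to every labeled point
        d = np.linalg.norm(X_unlab[:, None, :] - X_lab[None, :, :], axis=2)
        nearest = d.argmin(axis=1)       # index of nearest labeled sample
        conf = d.min(axis=1)             # smaller distance = higher confidence
        best = conf.argmin()             # most confident pseudo-label
        X_lab = np.vstack([X_lab, X_unlab[best]])
        y_lab = np.append(y_lab, y_lab[nearest[best]])
        X_unlab = np.delete(X_unlab, best, axis=0)
    return X_lab, y_lab

X_lab = np.array([[0.0, 0.0], [10.0, 10.0]])   # one labeled seed per class
y_lab = np.array([0, 1])
X_unlab = np.array([[1.0, 1.0], [9.0, 9.0], [0.5, 0.0]])
X_all, y_all = self_train(X_lab, y_lab, X_unlab)
print(y_all)  # labels of the two seeds plus three pseudo-labels
```

Promoting only the most confident point per round is the usual guard against the failure mode of self-training, where an early mislabel propagates through the rest of the unlabeled pool.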


Author(s):  
Marco-Antonio Balderas Cepeda

Association rule mining has been a highly active research field over the past decade. Extraction of frequency-related patterns has been applied to several domains. However, the way association rules are defined has limited the ability to obtain all the patterns of interest. In this chapter, the authors present an alternative approach that yields a new kind of association rule representing deviations from common behaviors. These new rules are called anomalous rules. Obtaining such rules requires extracting all the most frequent patterns together with certain extension patterns that may occur very infrequently. An approach that relies on anomalous rules has possible applications in the areas of counterterrorism, fraud detection, pharmaceutical data analysis, and network intrusion detection. The authors adapt measures of interest to anomalous rule sets and propose an algorithm that can extract anomalous rules. Their experiments with benchmark and real-life datasets suggest that the set of anomalous rules is smaller than the set of association rules. Their work also provides evidence that the proposed approach can discover hidden patterns with good reliability.
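As a rough illustration of the idea, the toy function below first finds a frequent antecedent with a dominant, high-confidence consequent, and then reports its rare exceptions as candidate anomalous rules. The thresholds and the single-item rule form are simplifying assumptions of this sketch, not the authors' formal definition or algorithm.

```python
def support(itemset, transactions):
    """Fraction of transactions containing every item in `itemset`."""
    itemset = frozenset(itemset)
    return sum(itemset <= t for t in transactions) / len(transactions)

def anomalous_rules(transactions, min_sup=0.4, max_dev_sup=0.2, min_conf=0.8):
    """Toy illustration: for each frequent antecedent X with a dominant
    consequent Y (X -> Y holds with high confidence), report the rare
    alternatives X -> B as candidate anomalous rules (X, Y, B)."""
    transactions = [frozenset(t) for t in transactions]
    items = set().union(*transactions)
    rules = []
    for x in items:
        sup_x = support([x], transactions)
        if sup_x < min_sup:
            continue                                  # antecedent too rare
        for a in items - {x}:
            conf = support([x, a], transactions) / sup_x
            if conf >= min_conf:                      # dominant behavior X -> Y
                for b in items - {x, a}:
                    dev = support([x, b], transactions) / sup_x
                    if 0 < dev <= max_dev_sup:        # rare deviation X -> B
                        rules.append((x, a, b))
    return rules

T = [{"x", "y"}, {"x", "y"}, {"x", "y"}, {"x", "y"}, {"x", "b"}]
print(anomalous_rules(T))  # [('x', 'y', 'b')]
```

The output reads: transactions containing `x` usually also contain `y`, so the rare co-occurrence of `x` with `b` is the deviation worth flagging.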

