Assessing wetland degradation and loss of ecosystem services in the Niger Delta, Nigeria

2016 ◽  
Vol 67 (6) ◽  
pp. 828 ◽  
Author(s):  
Ayansina Ayanlade ◽  
Ulrike Proske

The Niger Delta, the most extensive freshwater wetland and aquatic ecosystem in West Africa, provides numerous services both to local people and to the West African economy. Ongoing environmental pressure exerted by large-scale oil extraction and illegal timber logging, however, is suspected to have had a substantial impact on the Delta’s ecosystems over recent decades. Knowledge of the impact of these activities on the region’s wetlands, either now or in the past, is scarce and patchy. To address this gap, this study assesses spatiotemporal changes in two wetlands in the region using satellite data from 1984 to 2011 and GIS methods. The results show that both wetlands have experienced substantial degradation, particularly with respect to the area of forest lost. Although comprehensive environmental protection laws were introduced in 1988, ecosystem services of up to US$65 million in value were lost over the study period. The introduction of new legislation in 2007, however, is potentially a first step towards a more ‘wise use’ of wetlands in Nigeria.
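The change-detection and valuation steps described above can be illustrated with a minimal sketch (not the authors' actual workflow): count forest pixels in two classified scenes and apply a benefit-transfer value per hectare to the area lost. The land-cover class code, pixel size, and per-hectare service value below are hypothetical placeholders.

```python
# Minimal sketch of forest-loss change detection and benefit-transfer valuation.
# Class code, pixel area, and USD/ha value are assumptions for illustration only.
import numpy as np

FOREST = 1                    # hypothetical land-cover class code
PIXEL_AREA_HA = 0.09          # one 30 m Landsat pixel = 900 m^2 = 0.09 ha
VALUE_PER_HA_USD = 2000       # assumed benefit-transfer value, USD per hectare

def forest_area_ha(classified_scene: np.ndarray) -> float:
    """Area (ha) classified as forest in one classified satellite scene."""
    return float(np.count_nonzero(classified_scene == FOREST) * PIXEL_AREA_HA)

def service_value_lost(scene_early: np.ndarray, scene_late: np.ndarray) -> float:
    """Ecosystem service value lost with the forest area between two dates."""
    lost_ha = max(forest_area_ha(scene_early) - forest_area_ha(scene_late), 0.0)
    return lost_ha * VALUE_PER_HA_USD

# Toy example with two randomly generated 'classified scenes'.
rng = np.random.default_rng(0)
scene_1984 = np.where(rng.random((100, 100)) < 0.6, FOREST, 0)  # denser forest cover
scene_2011 = np.where(rng.random((100, 100)) < 0.3, FOREST, 0)  # degraded cover
print(f"Estimated value lost: US${service_value_lost(scene_1984, scene_2011):,.0f}")
```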

2018 ◽  
Vol 93 (5) ◽  
Author(s):  
Hualei Wang ◽  
Gary Wong ◽  
Wenjun Zhu ◽  
Shihua He ◽  
Yongkun Zhao ◽  
...  

ABSTRACT Ebola virus (EBOV) infections result in aggressive hemorrhagic fever in humans, with fatality rates reaching 90% and with no licensed specific therapeutics to treat ill patients. Advances over the past 5 years have firmly established monoclonal antibody (MAb)-based products as the most promising therapeutics for treating EBOV infections, but production is costly and quantities are limited; therefore, MAbs are not the best candidates for mass use in the case of an epidemic. To address this need, we generated EBOV-specific polyclonal F(ab′)2 fragments from horses hyperimmunized with an EBOV vaccine. The F(ab′)2 was found to potently neutralize West African and Central African EBOV in vitro. Treatment of nonhuman primates (NHPs) with seven doses of 100 mg/kg F(ab′)2 beginning 3 or 5 days postinfection (dpi) resulted in a 100% survival rate. Notably, NHPs for which treatment was initiated at 5 dpi were already highly viremic, with observable signs of EBOV disease, which demonstrated that F(ab′)2 was still effective as a therapeutic agent even in symptomatic subjects. These results show that F(ab′)2 should be advanced for clinical testing in preparation for future EBOV outbreaks and epidemics. IMPORTANCE EBOV is one of the deadliest viruses to humans. It has been over 40 years since EBOV was first reported, but no cure is available. Research breakthroughs over the past 5 years have shown that MAbs constitute an effective therapy for EBOV infections. However, MAbs are expensive and difficult to produce in large amounts and therefore may only play a limited role during an epidemic. A cheaper alternative is required, especially since EBOV is endemic in several countries with limited medical resources. Here, we used a standard protocol to produce large amounts of antiserum F(ab′)2 fragments from horses vaccinated with an EBOV vaccine, and we tested its protective efficacy in monkeys. We showed that F(ab′)2 was effective in 100% of monkeys even after the animals were visibly ill with EBOV disease. Thus, F(ab′)2 could be a very good option for large-scale treatment of patients and should be advanced to clinical testing.


2020 ◽  
Author(s):  
Lungwani Muungo

The purpose of this review is to evaluate progress in molecular epidemiology over the past 24 years in cancer etiology and prevention, and to draw lessons for future research incorporating the new generation of biomarkers. Molecular epidemiology was introduced in the study of cancer in the early 1980s, with the expectation that it would help overcome some major limitations of epidemiology and facilitate cancer prevention. The expectation was that biomarkers would improve exposure assessment, document early changes preceding disease, and identify subgroups in the population with greater susceptibility to cancer, thereby increasing the ability of epidemiologic studies to identify causes and elucidate mechanisms in carcinogenesis. The first generation of biomarkers has indeed contributed to our understanding of risk and susceptibility related largely to genotoxic carcinogens. Consequently, interventions and policy changes have been mounted to reduce risk from several important environmental carcinogens. Several new and promising biomarkers are now becoming available for epidemiologic studies, thanks to the development of high-throughput technologies and theoretical advances in biology. These include toxicogenomics, alterations in gene methylation and gene expression, proteomics, and metabonomics, which allow large-scale studies, including discovery-oriented as well as hypothesis-testing investigations. However, most of these newer biomarkers have not been adequately validated, and their role in the causal paradigm is not clear. There is a need for their systematic validation using principles and criteria established over the past several decades in molecular cancer epidemiology.


1987 ◽  
Vol 19 (5-6) ◽  
pp. 701-710 ◽  
Author(s):  
B. L. Reidy ◽  
G. W. Samson

A low-cost wastewater disposal system was commissioned in 1959 to treat domestic and industrial wastewaters generated in the Latrobe River valley in the province of Gippsland, within the State of Victoria, Australia (Figure 1). The Latrobe Valley is the centre for large-scale generation of electricity and for the production of pulp and paper. In addition, other industries have utilized the brown coal resource of the region, e.g. gasification and char production. Consequently, industrial wastewaters have been dominant in the disposal system for the past twenty-five years. The mixed industrial-domestic wastewaters were to be transported some eighty kilometres to be treated and disposed of by irrigation to land. Several important lessons have been learnt during twenty-five years of operating this system. Firstly, the composition of the mixed waste stream has varied significantly with the passage of time and the development of the industrial base in the Valley, so that what was appropriate treatment in 1959 is not necessarily acceptable in 1985. Secondly, the magnitude of the adverse environmental impacts engendered by this low-cost disposal procedure was not imagined when the proposal was implemented. As a consequence, clean-up procedures which could remedy the adverse effects of twenty-five years of impact are likely to be costly. The question may then be asked: when total costs, including rehabilitation, are considered, is there really a low-cost solution for the environmentally safe disposal of complex wastewater streams?


2019 ◽  
Vol 19 (1) ◽  
pp. 4-16 ◽  
Author(s):  
Qihui Wu ◽  
Hanzhong Ke ◽  
Dongli Li ◽  
Qi Wang ◽  
Jiansong Fang ◽  
...  

Over the past decades, peptides as therapeutic candidates have received increasing attention in drug discovery, especially antimicrobial peptides (AMPs), anticancer peptides (ACPs) and anti-inflammatory peptides (AIPs). Peptides are thought to be able to modulate complex diseases that were previously considered intractable. In recent years, the critical problem of antimicrobial resistance has driven the pharmaceutical industry to look for new therapeutic agents. Compared to small organic drugs, peptide-based therapy exhibits high specificity and minimal toxicity. Thus, peptides are widely used in the design and discovery of new potent drugs. Currently, large-scale screening of peptide activity with traditional approaches is costly, time-consuming and labor-intensive. Hence, in silico methods, mainly machine learning approaches, have been introduced to predict peptide activity because of their accuracy and effectiveness. In this review, we document the recent progress in machine learning-based prediction of peptide activity, which will be of great benefit to the discovery of potentially active AMPs, ACPs and AIPs.
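As an illustration of the in silico screening surveyed above, here is a minimal sketch of a peptide-activity classifier built from amino-acid-composition features and a linear model. The sequences, labels, and feature choice are invented for illustration and are not taken from the review.

```python
# Minimal sketch: predict peptide activity from amino-acid composition.
# Training peptides and labels below are toy examples, not real data.
import numpy as np
from sklearn.linear_model import LogisticRegression

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition(seq: str) -> np.ndarray:
    """Fraction of each of the 20 standard amino acids in a peptide."""
    seq = seq.upper()
    return np.array([seq.count(aa) / len(seq) for aa in AMINO_ACIDS])

# Toy training set: hypothetical 'active' vs. 'inactive' peptides.
peptides = ["KWKLFKKIEK", "GIGAVLKVLT", "DDDEEESSSS", "QQQNNNPPPG"]
labels   = [1, 1, 0, 0]   # 1 = active (e.g., antimicrobial), 0 = inactive

X = np.vstack([composition(p) for p in peptides])
model = LogisticRegression(max_iter=1000).fit(X, labels)

# Score a new candidate peptide.
candidate = "KKLLKWLKKL"
print("P(active) =", model.predict_proba(composition(candidate)[None, :])[0, 1])
```

In practice, the methods reviewed use richer sequence representations and larger curated datasets, but the workflow of featurizing sequences and training a classifier is the same in outline.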


Author(s):  
Jeasik Cho

This book provides the qualitative research community with insight into how to evaluate the quality of qualitative research. This topic has gained little attention during the past few decades. We, qualitative researchers, read journal articles, serve on master’s and doctoral committees, and also make decisions on whether conference proposals, manuscripts, or large-scale grant proposals should be accepted or rejected. It is assumed that various perspectives or criteria, depending on various paradigms, theories, or fields of discipline, have been used in assessing the quality of qualitative research. Nonetheless, until now, no textbook has been specifically devoted to exploring theories, practices, and reflections associated with the evaluation of qualitative research. This book constructs a typology of evaluating qualitative research, examines actual information from websites and qualitative journal editors, and reflects on some challenges that are currently encountered by the qualitative research community. Many different kinds of journals’ review guidelines and available assessment tools are collected and analyzed. Consequently, core criteria that stand out among these evaluation tools are presented. Readers are invited to join the author to confidently proclaim: “Fortunately, there are commonly agreed, bold standards for evaluating the goodness of qualitative research in the academic research community. These standards are a part of what is generally called ‘scientific research.’ ”


2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Mateusz Taszarek ◽  
John T. Allen ◽  
Mattia Marchio ◽  
Harold E. Brooks

Abstract Globally, thunderstorms are responsible for a significant fraction of rainfall, and in the mid-latitudes they often produce extreme weather, including large hail, tornadoes and damaging winds. Despite this importance, how the global frequency of thunderstorms and their accompanying hazards has changed over the past four decades remains unclear. Large-scale diagnostics applied to global climate models have suggested that the frequency and intensity of thunderstorms are likely to increase in the future. Here, we show that, according to ERA5, convective available potential energy (CAPE) and convective precipitation (CP) have decreased over the tropics and subtropics, with simultaneous increases in 0–6 km wind shear (BS06). Conversely, rawinsonde observations paint a different picture across the mid-latitudes, with increasing CAPE and significant decreases in BS06. Differing trends and disagreement between ERA5 and rawinsondes over some regions suggest that the results should be interpreted with caution, especially for CAPE and CP across the tropics, where uncertainty is highest and reliable long-term rawinsonde observations are missing.
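Two of the diagnostics named above can be illustrated with a minimal sketch: 0–6 km bulk wind shear (BS06), taken here as the magnitude of the vector wind difference between 6 km AGL and the surface, and a least-squares linear trend over an annual series. The sample values are invented, and the parcel-theory CAPE calculation is omitted.

```python
# Minimal sketch of BS06 and a linear trend of an annual diagnostic series.
# Input winds and the synthetic CAPE series are illustrative, not real data.
import numpy as np

def bulk_shear_06km(u_sfc, v_sfc, u_6km, v_6km):
    """BS06 (m/s): magnitude of the vector wind difference, 6 km AGL minus surface."""
    return float(np.hypot(u_6km - u_sfc, v_6km - v_sfc))

def linear_trend(years, values):
    """Least-squares trend (units per year) of an annual series."""
    slope, _intercept = np.polyfit(years, values, deg=1)
    return float(slope)

print(bulk_shear_06km(u_sfc=2.0, v_sfc=1.0, u_6km=18.0, v_6km=9.0))   # ~17.9 m/s

years = np.arange(1980, 2020)
cape_series = 800 + 2.5 * (years - 1980) + np.random.default_rng(1).normal(0, 50, years.size)
print(linear_trend(years, cape_series))   # close to the imposed +2.5 J/kg per year
```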


2021 ◽  
Vol 7 ◽  
pp. 237802312110201
Author(s):  
Thomas A. DiPrete ◽  
Brittany N. Fox-Williams

Social inequality is a central topic of research in the social sciences. Decades of research have deepened our understanding of the characteristics and causes of social inequality. At the same time, social inequality has markedly increased during the past 40 years, and progress on reducing poverty and improving the life chances of Americans in the bottom half of the distribution has been frustratingly slow. How useful has sociological research been to the task of reducing inequality? The authors analyze the stance taken by sociological research on the subject of reducing inequality. They identify an imbalance in the literature between the discipline’s continual efforts to motivate the plausibility of large-scale change and its lesser efforts to identify feasible strategies of change either through social policy or by enhancing individual and local agency with the potential to cumulate into meaningful progress on inequality reduction.


2021 ◽  
Vol 54 (3) ◽  
pp. 1-33
Author(s):  
Blesson Varghese ◽  
Nan Wang ◽  
David Bermbach ◽  
Cheol-Ho Hong ◽  
Eyal De Lara ◽  
...  

Edge computing is the next Internet frontier that will leverage computing resources located near users, sensors, and data stores to provide more responsive services. Therefore, it is envisioned that a large-scale, geographically dispersed, and resource-rich distributed system will emerge and play a key role in the future Internet. However, given the loosely coupled nature of such complex systems, their operational conditions are expected to change significantly over time. In this context, the performance characteristics of such systems will need to be captured rapidly, which is referred to as performance benchmarking, for application deployment, resource orchestration, and adaptive decision-making. Edge performance benchmarking is a nascent research avenue that has started gaining momentum over the past five years. This article first reviews articles published over the past three decades to trace the history of performance benchmarking from tightly coupled to loosely coupled systems. It then systematically classifies previous research to identify the system under test, techniques analyzed, and benchmark runtime in edge performance benchmarking.
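As a minimal sketch of the kind of latency measurement that performance benchmarking involves (not a framework from the surveyed literature), the following times repeated calls to a stand-in for an edge service and reports percentile latencies; the target function and request count are placeholders.

```python
# Minimal sketch of a latency micro-benchmark for an edge service.
# The simulated service below stands in for a real deployed endpoint.
import statistics
import time

def benchmark(target, n_requests: int = 100):
    """Collect per-request latencies (ms) for a callable standing in for an edge service."""
    latencies = []
    for _ in range(n_requests):
        start = time.perf_counter()
        target()
        latencies.append((time.perf_counter() - start) * 1000.0)
    latencies.sort()
    return {
        "p50_ms": statistics.median(latencies),
        "p99_ms": latencies[int(0.99 * (len(latencies) - 1))],
        "mean_ms": statistics.fmean(latencies),
    }

def simulated_edge_service():
    time.sleep(0.002)   # stand-in for a ~2 ms inference or lookup at the edge

print(benchmark(simulated_edge_service))
```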


GigaScience ◽  
2020 ◽  
Vol 9 (11) ◽  
Author(s):  
Alexandra J Lee ◽  
YoSon Park ◽  
Georgia Doing ◽  
Deborah A Hogan ◽  
Casey S Greene

Abstract Motivation In the past two decades, scientists in different laboratories have assayed gene expression from millions of samples. These experiments can be combined into compendia and analyzed collectively to extract novel biological patterns. Technical variability, or "batch effects," may result from combining samples collected and processed at different times and in different settings. Such variability may distort our ability to extract true underlying biological patterns. As more integrative analysis methods arise and data collections get bigger, we must determine how technical variability affects our ability to detect desired patterns when many experiments are combined. Objective We sought to determine the extent to which an underlying signal was masked by technical variability by simulating compendia comprising data aggregated across multiple experiments. Method We developed a generative multi-layer neural network to simulate compendia of gene expression experiments from large-scale microbial and human datasets. We compared simulated compendia before and after introducing varying numbers of sources of undesired variability. Results The signal from a baseline compendium was obscured when the number of added sources of variability was small. Applying statistical correction methods rescued the underlying signal in these cases. However, as the number of sources of variability increased, it became easier to detect the original signal even without correction. In fact, statistical correction reduced our power to detect the underlying signal. Conclusion When combining a modest number of experiments, it is best to correct for experiment-specific noise. However, when many experiments are combined, statistical correction reduces our ability to extract underlying patterns.
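The simulation idea described above can be sketched minimally, with the paper's generative neural network and real datasets replaced by toy Gaussians: a two-group signal is obscured by per-experiment ("batch") offsets and then partially recovered by a simple per-batch mean-centering correction, standing in for the statistical correction methods mentioned. Sizes and effect magnitudes are arbitrary.

```python
# Minimal sketch: simulate a compendium with a true signal plus batch effects,
# then apply a naive per-batch mean-centering correction and compare.
import numpy as np

rng = np.random.default_rng(42)
n_per_group, n_genes, n_batches = 50, 200, 5

# True biological signal: group 1 has a shift in the first 20 genes.
base = rng.normal(0, 1, size=(2 * n_per_group, n_genes))
base[n_per_group:, :20] += 2.0
groups = np.repeat([0, 1], n_per_group)

# Technical variability: each sample gets its batch's gene-wise offset.
batches = rng.integers(0, n_batches, size=base.shape[0])
offsets = rng.normal(0, 3, size=(n_batches, n_genes))
observed = base + offsets[batches]

# Naive correction: center each batch at zero, per gene.
corrected = observed.copy()
for b in range(n_batches):
    corrected[batches == b] -= corrected[batches == b].mean(axis=0)

def signal_strength(x):
    """Mean absolute between-group difference across the 20 signal genes."""
    return float(np.abs(x[groups == 1, :20].mean(0) - x[groups == 0, :20].mean(0)).mean())

print("raw:", signal_strength(observed), "corrected:", signal_strength(corrected))
```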

