Potato Pathogens in Russia’s Regions: An Instrumental Survey with the Use of Real-Time PCR/RT-PCR in Matrix Format

Pathogens
2019
Vol 8 (1)
pp. 18
Author(s):  
Alexander Malko ◽  
Pavel Frantsuzov ◽  
Maksim Nikitin ◽  
Natalia Statsyuk ◽  
Vitaly Dzhavakhiya ◽  
...  

Viral and bacterial diseases of potato cause significant yield losses worldwide. The current data on the occurrence of these diseases in Russia do not provide a comprehensive understanding of the phytosanitary situation. Diagnostic systems based on disposable stationary open qPCR micromatrices, intended for the detection of eight viral and seven bacterial/oomycetal potato diseases, were used for wide-scale screening of target pathogens to estimate their occurrence in 11 regions of Russia and to assess the suitability of the technology for high-throughput diagnostics under field-laboratory conditions. Analysis of 1025 leaf and 725 tuber samples confirmed earlier reports of the dominance of potato viruses Y, S, and M in most regions of European Russia, as well as relatively high incidences of Clavibacter michiganensis subsp. sepedonicus, Pectobacterium atrosepticum, and P. carotovorum subsp. carotovorum, and provided detailed information on the phytosanitary status of the surveyed regions and the geographical spread of individual pathogens. The information on the occurrence and composition of mixed infections represents the first data set of its kind for Russia. The study is the first large-scale screening of a wide range of potato pathogens conducted in network mode using a unified methodology and standardized qPCR micromatrices. The data provide valuable information for plant pathologists and potato producers and indicate the high potential of combining matrix PCR technology with network approaches to data collection and analysis for rapidly and accurately assessing the prevalence of particular pathogens and the phytosanitary state of large territories.

Author(s):  
Eun-Young Mun ◽  
Anne E. Ray

Integrative data analysis (IDA) is a promising new approach in psychological research and has been well received in the field of alcohol research. This chapter provides a larger unifying research-synthesis framework for IDA. Major advantages of IDA of individual participant-level data include better and more flexible ways to examine subgroups, model complex relationships, deal with methodological and clinical heterogeneity, and examine infrequently occurring behaviors. However, between-study heterogeneity in measures, designs, and samples, together with systematic study-level missing data, poses significant barriers to IDA and, more broadly, to large-scale research synthesis. Drawing on their experience with the Project INTEGRATE data set, which combined individual participant-level data from 24 independent college brief alcohol intervention studies, the authors also recognize that IDA investigations require a wide range of expertise and considerable resources, and that minimum standards for reporting IDA studies may be needed to improve the transparency and quality of evidence.


2017
Vol 44 (2)
pp. 203-229
Author(s):  
Javier D Fernández ◽  
Miguel A Martínez-Prieto ◽  
Pablo de la Fuente Redondo ◽  
Claudio Gutiérrez

The publication of semantic web data, commonly represented in Resource Description Framework (RDF), has experienced outstanding growth over the last few years. Data from all fields of knowledge are shared publicly and interconnected in active initiatives such as Linked Open Data. However, despite the increasing availability of applications managing large-scale RDF information such as RDF stores and reasoning tools, little attention has been given to the structural features emerging in real-world RDF data. Our work addresses this issue by proposing specific metrics to characterise RDF data. We specifically focus on revealing the redundancy of each data set, as well as common structural patterns. We evaluate the proposed metrics on several data sets, which cover a wide range of designs and models. Our findings provide a basis for more efficient RDF data structures, indexes and compressors.


2009
Vol 83 (3)
pp. 563-589
Author(s):  
David T. Merrett ◽  
Simon Ville

An expanding economy, new technologies, and changing consumer preferences provided growth opportunities for firms in interwar Australia. This period saw an increase in the number of large-scale firms in mining, manufacturing, and a wide range of service industries. Firms unable to rely solely on retained earnings to fund expansion turned to the domestic stock exchanges. A new data set of capital raisings constructed from reports of prospectuses published in the financial press forms the basis for the conclusion that many firms used substantial injections of equity finance to augment internally generated sources of funds. That they were able to do so indicates a strong increase in the capacity of local stock exchanges and a greater willingness of individuals to hold part of their wealth in transferable securities.


2021
Author(s):  
Abigail Z. Jacobs ◽  
Duncan J. Watts

Theories of organizations are sympathetic to long-standing ideas from network science that organizational networks should be regarded as multiscale and capable of displaying emergent properties. However, the historical difficulty of collecting individual-level network data for many (N ≫ 1) organizations, each of which comprises many (n ≫ 1) individuals, has hobbled efforts to develop specific, theoretically motivated hypotheses connecting micro- (i.e., individual-level) network structure with macro-organizational properties. In this paper we seek to stimulate such efforts with an exploratory analysis of a unique data set of aggregated, anonymized email data from an enterprise email system that includes 1.8 billion messages sent by 1.4 million users from 65 publicly traded U.S. firms spanning a wide range of sizes and 7 industrial sectors. We uncover wide heterogeneity among firms with respect to all measured network characteristics, and we find robust network and organizational variation as a result of size. Interestingly, we find no clear associations between organizational network structure and firm age, industry, or performance; however, we do find that centralization increases with geographical dispersion—a result that is not explained by network size. Although preliminary, these results raise new questions for organizational theory as well as new issues for collecting, processing, and interpreting digital network data. This paper was accepted by David Simchi-Levi, Special Issue of Management Science: 65th Anniversary.
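The abstract's finding that centralization increases with geographical dispersion presumes some centralization measure; the paper does not specify one here, so as an illustrative assumption, a minimal sketch of Freeman's degree centralization on a toy network:

```python
# Freeman degree centralization for an undirected graph, given as an
# adjacency list: 1.0 for a perfect star, 0.0 for a complete graph.
# This is an illustrative measure, not necessarily the one used in the paper.
def degree_centralization(adj):
    n = len(adj)
    degrees = [len(neighbors) for neighbors in adj.values()]
    d_max = max(degrees)
    # Sum of differences from the most central node, normalized by the
    # maximum possible sum, attained by a star graph: (n - 1)(n - 2).
    return sum(d_max - d for d in degrees) / ((n - 1) * (n - 2))

# A 5-node star: node 0 connected to everyone else.
star = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
print(degree_centralization(star))  # -> 1.0
```

Applied per-firm to the email graph, a measure like this yields one scalar per organization that can then be regressed on size, dispersion, or performance.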


Author(s):  
Erwin Sutanto ◽  
Hammam Abror Ali ◽  
Yhosep Gita Yhun Yhuwana ◽  
Muhammad Aziz

The article describes a new way to define the threshold voltage for a machine-learning-based digital Residual Current Circuit Breaker (RCCB), enabling the right cut-off point. Using the described method, the authors obtained a gap to the common mid voltage of around 0.5 V. The proposed technique is illustrated with three different loads of 3 W, 5 W, and 9 W, which define the scope of this work. Such a breaker could be useful in a hospital with a limited number of technicians who must maintain various machines quickly. This work aims to realize a device that cuts off the electricity whenever there is a leakage current but keeps the supply on if the leakage is still within tolerance. The approach reduces the midpoint error by about 16.97% over its wide range. The Artificial Neural Network (ANN), one of the machine-learning algorithms, was implemented using Python libraries. The learning process is applied to the measured leakage-current data set and comprises input preprocessing, training, testing, and data analysis. From these steps, the induction voltage threshold can be determined at 1.080 V out of a 3.3 V maximum, with a negligible loss value of 0.0006. Comparison of this value with a reference indicates that the method could be used in a real situation.
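The threshold-finding idea can be sketched as follows. This is a hedged illustration, not the authors' ANN: the training voltages and labels below are made up, and a minimal pure-Python logistic unit stands in for the network. The learned decision boundary, where the predicted trip probability crosses 0.5, plays the role of the induction-voltage threshold.

```python
import math

# Hypothetical 1-D training data: induction voltages (V) labeled 1 (leakage
# that must trip the breaker) or 0 (tolerable). Illustrative values only,
# chosen so the separating boundary lies between 1.0 V and 1.2 V.
data = [(v, 1 if v > 1.1 else 0)
        for v in [0.2, 0.5, 0.8, 1.0, 1.2, 1.5, 2.0, 2.8, 3.3]]

# Tiny logistic model p(trip) = sigmoid(w * v + b), trained by
# stochastic gradient ascent on the log-likelihood.
w, b, lr = 0.0, 0.0, 0.5
for _ in range(5000):
    for v, y in data:
        p = 1 / (1 + math.exp(-(w * v + b)))
        w += lr * (y - p) * v
        b += lr * (y - p)

# The decision threshold is the voltage where p = 0.5, i.e. w * v + b = 0.
threshold = -b / w
```

On this synthetic data the learned threshold lands between the highest tolerable and lowest tripping voltage; the paper's 1.080 V figure comes from its measured data set, not from this sketch.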


2017
Author(s):  
Frank Oppermann ◽  
Thomas Günther

Abstract. We present a new versatile datalogger that can be used for a wide range of possible applications in geosciences. It is adjustable in signal strength and sampling frequency, battery-saving, and can be controlled remotely over a Global System for Mobile Communications (GSM) connection, which saves running costs, particularly in monitoring experiments. The Internet connection allows for checking functionality, controlling schedules, and optimizing preamplification. We mainly use it for large-scale Electrical Resistivity Tomography (ERT), where it independently registers voltage time series on three channels while a square-wave current is injected. For the analysis of these time series we present a new approach based on the Lock-In (LI) method, mainly known from electronic circuits. The method searches for the working point (phase) using three different functions based on a mask signal, and determines the amplitude using a direct current (DC) correlation function. We use synthetic data with different types of noise to compare the new method with existing approaches, i.e., selective stacking and a modified Fast Fourier Transform (FFT) based approach that assumes a 1/f noise characteristic. All methods give comparable results, with LI performing better than the well-established stacking method. The FFT approach can be even better, but only if the noise strictly follows the assumed characteristic. If overshoots are present in the data, which is typical in the field, FFT performs worse even on otherwise good data, which is why we conclude that the new LI approach is the most robust solution. This is also demonstrated by a field data set from a long 2D ERT profile.
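The DC-correlation step of the LI method can be sketched on synthetic data (this is an illustration of the principle, not the authors' implementation): a square-wave voltage of unknown amplitude is buried in noise, and correlating it with an in-phase ±1 mask signal recovers the amplitude, because mask × mask = 1 while zero-mean noise averages out.

```python
import math
import random

# Synthetic record: a square wave of amplitude 2.5 plus Gaussian noise.
random.seed(1)
n, period, true_amp = 4000, 100, 2.5
mask = [1.0 if (i % period) < period // 2 else -1.0 for i in range(n)]
signal = [true_amp * m + random.gauss(0, 1.0) for m in mask]

# DC correlation: the mean of signal * mask estimates the amplitude.
# Its standard error is sigma / sqrt(n), here about 0.016.
amplitude = sum(s * m for s, m in zip(signal, mask)) / n
```

In the real method the mask phase must first be found (the "working point" search mentioned above); the sketch assumes the mask is already in phase with the injection.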


2008
Vol 53 (2)
Author(s):  
D. Littlewood ◽  
Andrea Waeschenbach ◽  
Pavel Nikolov

Abstract. The most species-rich order of tapeworms is the Cyclophyllidea, and prior to wide-scale sampling of these worms for phylogenetics, we wished to develop reliable PCR primers that would capture fragments of mitochondrial (mt) DNA with phylogenetic utility across the order. Nuclear ribosomal RNA gene sequences are well-established and valuable markers for resolving flatworm interrelationships spanning a wide range of taxonomic divergences, but fail to provide resolution amongst recently diverged lineages. Entire mt genomes of selected cyclophyllidean tapeworms are available on GenBank, and we used these to design PCR primers to amplify mtDNA from cox1, rrnL and nad1 for a range of cyclophyllideans (7 davaineids, 1 hymenolepidid and 1 dilepidid) and selected outgroups (Tetrabothrius sp. and Mesocestoides sp.). A combined nuclear and mt gene data set was used to estimate a reference phylogeny, against which the performance of the individual genes was compared. Although nuclear and mt genes each contributed to the structure and stability of the phylogenetic estimate, the strongest nodal support was provided by nuclear data amongst the basal lineages and by mt data amongst the most recently diverged lineages. The apparent complementarity afforded by combining nuclear and mt data was compromised by these data partitions providing conflicting signal at poorly supported nodes. Nevertheless, we argue for a combined-evidence approach. PCR primers that amplify rrnL were designed and tested successfully against a diversity of cyclophyllideans; rrnL and nad1 appeared to be more informative than the fragment of cox1. The genus Raillietina was not supported by molecular evidence. The new primers will likely provide considerable resolution for estimates of cyclophyllidean interrelationships in future studies.


2011
Vol 48 (5)
pp. 793-800
Author(s):  
Christopher N. Jass ◽  
James A. Burns ◽  
Peter J. Milot

Significant work has gone into describing Ice Age faunas from Alberta, but relatively little work has been dedicated to understanding the actual structure of Quaternary faunal assemblages in the province. Development of such a data set is necessary to fully understand differences in faunal assemblages that existed before and after the last glacial maximum, and may eventually provide an important historical perspective for understanding the impact of large-scale ecosystem disturbance. Muskoxen fossils from central Alberta were examined to differentiate specimens of Bootherium and Ovibos. Those remains, along with other fossils of Pleistocene megafauna collected from gravel deposits near Edmonton, were used to examine patterns of relative abundance from both pre- and postglacial maximum time periods. Relative abundance for genera of Pleistocene megafauna was calculated using the number of individual specimens (NISP) from 11 individual localities (i.e., gravel pits) in central Alberta. Preglacial localities with statistically significant numbers of specimens (n ≥ 30) are dominated by horse (Equus). Mammoth (Mammuthus) and bison (Bison) are common, but other megafauna, such as Jefferson’s Ground Sloth (Megalonyx jeffersoni) and Yesterday’s Camel (Camelops hesternus), are comparatively rare. Current data for the postglacial fauna indicate a shift in which Bison becomes the most abundant large herbivore on the landscape, a pattern observed in other parts of North America.
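NISP-based relative abundance is a simple proportion: each genus's specimen count divided by all identified specimens at a locality. A short sketch with made-up counts (not the paper's data):

```python
# Relative abundance from NISP counts: a genus's share of all identified
# specimens at a locality. The counts below are illustrative only.
def relative_abundance(nisp_counts):
    total = sum(nisp_counts.values())
    return {genus: n / total for genus, n in nisp_counts.items()}

# Hypothetical preglacial locality; n = 30 meets the study's cutoff
# for a statistically significant sample.
preglacial = {"Equus": 18, "Mammuthus": 7, "Bison": 4, "Camelops": 1}
shares = relative_abundance(preglacial)
print(shares["Equus"])  # -> 0.6
```

Comparing such per-locality proportions between pre- and postglacial deposits is what reveals the shift from Equus-dominated to Bison-dominated assemblages.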


2018
Author(s):  
Enoch Ng’oma ◽  
Elizabeth G. King ◽  
Kevin M. Middleton

Abstract. The ability to quantify fecundity is critically important to a wide range of experimental applications, particularly in widely-used model organisms such as Drosophila melanogaster. However, the standard method of manually counting eggs is time consuming and limits the feasibility of large-scale experiments. We develop a predictive model to automate the counting of eggs from images of eggs removed from the media surface and washed onto dark filter paper. A cross-validation approach demonstrates our method performs well, with a correlation between predicted and manually counted values of 0.88. We show how this method can be applied to a large data set where egg densities vary widely.
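The validation logic can be sketched as follows, under stated assumptions: the data are synthetic, and a single linear image feature (think: thresholded dark-pixel area) stands in for whatever predictors the authors' model uses. The point is the workflow, fit on held-in folds, predict held-out counts, score by Pearson correlation.

```python
import random

# Synthetic ground truth: manual egg counts and one image feature that
# scales with them plus noise. Illustrative, not the authors' data.
random.seed(2)
counts = [random.randint(5, 400) for _ in range(60)]
areas = [c * 12.0 + random.gauss(0, 120) for c in counts]

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def fit_line(x, y):
    # Least-squares slope and intercept via slope = r * sy / sx.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = (sum((a - mx) ** 2 for a in x) / n) ** 0.5
    sy = (sum((b - my) ** 2 for b in y) / n) ** 0.5
    slope = pearson_r(x, y) * sy / sx
    return slope, my - slope * mx

# 5-fold cross-validation: fit on four folds, predict the held-out fold.
predicted, actual = [], []
for k in range(5):
    train = [i for i in range(60) if i % 5 != k]
    test = [i for i in range(60) if i % 5 == k]
    slope, icept = fit_line([areas[i] for i in train], [counts[i] for i in train])
    predicted += [slope * areas[i] + icept for i in test]
    actual += [counts[i] for i in test]

cv_r = pearson_r(predicted, actual)
```

The 0.88 correlation reported above is the analogue of `cv_r` computed on the authors' real images and manual counts.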


2019
Vol 34 (4)
pp. 335-348
Author(s):  
Do Quoc Truong ◽  
Pham Ngoc Phuong ◽  
Tran Hoang Tung ◽  
Luong Chi Mai

Automatic Speech Recognition (ASR) systems convert human speech into the corresponding transcription automatically. They have a wide range of applications, such as controlling robots, call-center analytics, and voice chatbots. Recent studies on ASR for English have achieved performance that surpasses human ability. Those systems were trained on large amounts of training data and performed well in many environments. With regard to Vietnamese, there have been many studies on improving the performance of existing ASR systems; however, many of them were conducted on small-scale data that do not reflect realistic scenarios. Although the corpora used to train the systems were carefully designed to maintain phonetic-balance properties, efforts to collect them at a large scale are still limited. Specifically, only a certain accent of Vietnamese was evaluated in existing works. In this paper, we first describe our efforts in collecting a large data set that covers all three major accents of Vietnam, located in the Northern, Central, and Southern regions. Then, we detail our ASR system development procedure utilizing the collected data set and evaluating different model architectures to find the best structure for Vietnamese. In the VLSP 2018 challenge, our system achieved the best performance with 6.5% WER, and on our internal test set of more than 10 hours of speech collected in real environments, the system also performs well with 11% WER.
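The WER figures above follow the standard definition: word-level edit distance (substitutions, insertions, deletions) divided by the number of reference words. A minimal sketch of the metric, with made-up sentences:

```python
# Word error rate: edit distance between reference and hypothesis word
# sequences, normalized by reference length.
def wer(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[len(ref)][len(hyp)] / len(ref)

# One deleted word out of six reference words: WER of 1/6, about 0.167.
example = wer("the cat sat on the mat", "the cat sat on mat")
```

A 6.5% WER thus means roughly one word error per fifteen reference words.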

