Brief Review of Minimum or No-Till Seeders in China

2021, Vol 3 (3), pp. 605-621
Author(s): Shan Jiang, Qingjie Wang, Guangyuan Zhong, Zhenwei Tong, Xiuhong Wang, ...

Minimum or no-till seeding technology is the core of conservation tillage, and it can effectively reduce soil degradation caused by water and wind erosion. It is an essential part of agricultural modernization. Anti-blocking technology is the key to realizing minimum or no-till seeding. According to its working principle, it can be divided into three types: straw-flowing, gravity-cutting stubble, and power-driven. Emphasis is placed on the anti-blocking principle, technical characteristics, and development trends of minimum or no-till seeders based on these three anti-blocking principles. By analyzing and summarizing the advantages and disadvantages of the three technologies and of typical machines, the following future development trends of minimum or no-till seeders are proposed: (1) strengthening research on basic theories and integration mechanisms; (2) building a big-data-sharing platform for seeding operations; (3) establishing and improving systems of minimum and no-till seeders with Chinese characteristics.

Author(s): León Darío Parra, Milenka Linneth Argote Cusi

Modern society generates about 7 zettabytes of data each year, of which 75% comes from the connectivity of individuals to social networks. In this regard, the chapter presents a case study of the application of big data technologies to entrepreneurship analysis, using Global Entrepreneurship Monitor (GEM) data as a new analytical tool. The core of the chapter is therefore the methodology used to develop and implement the big data app for GEM, as well as the main results of the project. The chapter also discusses the advantages and disadvantages of this kind of technology in the case of GEM data. Finally, it presents the dashboards that interrelate the GEM data with World Bank indicators as a case study of the application of big data to entrepreneurship research.
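As a rough illustration of the data integration such dashboards rely on (the GEM app's actual stack is not described in the abstract), the sketch below joins country-level GEM indicators with World Bank indicators and produces one aggregated view a dashboard panel might plot. File and column names are hypothetical.

```python
# Illustrative only: joining GEM indicators with World Bank indicators
# on country and year. Names of files and columns are assumptions.
import pandas as pd

gem = pd.read_csv("gem_indicators.csv")        # e.g. country, year, tea_rate
wb = pd.read_csv("worldbank_indicators.csv")   # e.g. country, year, gdp_per_capita

merged = gem.merge(wb, on=["country", "year"], how="inner")

# Average early-stage entrepreneurial activity per GDP-per-capita quartile,
# the kind of cross-source aggregate a dashboard could display.
merged["gdp_quartile"] = pd.qcut(merged["gdp_per_capita"], 4, labels=False)
panel = merged.groupby("gdp_quartile")["tea_rate"].mean()
print(panel)
```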


2017, Vol 2 (1), pp. 301-308
Author(s): Daniel Paschek, Anca Mocan, Corina-Monica Dufour, Anca Draghici

Abstract In the following paper, the relevance of Knowledge Management (KM) as a foundation of Artificial Intelligence (AI) systems will be analyzed. The purpose of the work is to present the mandatory framework conditions for using AI, with a special view on knowledge management for Big Data. Therefore, the necessary definitions of the core components will be described theoretically and supported by practical examples. Based on the literature, existing applications will be researched and presented, together with the relation between knowledge management in the organization and Big Data as a core component. To identify the relevant topics of using Big Data for knowledge management, an analysis will be conducted with digital companies. In addition, the main advantages and disadvantages will be depicted. The finding of the paper will be a recommendation of the developed Artificial Intelligence Knowledge Model for using Knowledge Management and Big Data in Artificial Intelligence decisions within the company.


2018
Author(s): Tuba Kiyan, Heiko Lohrke, Christian Boit

Abstract This paper compares the three major semi-invasive optical approaches, Photon Emission (PE), Thermal Laser Stimulation (TLS) and Electro-Optical Frequency Mapping (EOFM), for contactless static random access memory (SRAM) content read-out on a commercial microcontroller. The advantages and disadvantages of these techniques are evaluated by applying them to the 1 KB SRAM of an MSP430 microcontroller. It is demonstrated that successful read-out depends strongly on the core voltage parameters for each technique. For PE, a better SNR and shorter integration time are achieved by using the highest nominal core voltage. In TLS measurements, the core voltage needs to be applied externally via a current amplifier with a bias voltage slightly above nominal. EOFM can again use nominal core voltages; however, a modulation needs to be applied, and the amplitude of the modulated supply voltage signal has a strong effect on signal quality. Semi-invasive read-out of memory content is necessary to understand the organization of memory remotely, which finds applications in hardware and software security evaluation, reverse engineering, defect localization, failure analysis, chip testing and debugging.


Weed Science, 1987, Vol 35 (5), pp. 695-699
Author(s): Steven M. Brown, James M. Chandler, John E. Morrison

A field experiment was conducted to evaluate weed control systems in a conservation tillage rotation of grain sorghum [Sorghum bicolor (L.) Moench.] – cotton (Gossypium hirsutum L.) – wheat (Triticum aestivum L.). Herbicide systems included fall and spring/summer inputs of high and low intensity. Tillage regimes were no-till (NT) and reduced-till (RT) systems; the latter included fall primary tillage followed by spring stale seedbed planting. Both tillage systems utilized controlled traffic lanes and wide, raised beds. Effective johnsongrass [Sorghum halepense (L.) Pers. # SORHA] control required intense herbicide inputs at one or both application periods, i.e., in the fall and/or spring/summer. Grain sorghum and cotton yields for the most intense weed control system, which included high inputs in both the fall and spring/summer, were not superior to systems that included high inputs in only one of the two application periods. Seedling johnsongrass emergence occurred before spring planting in RT (but not in NT) in 2 of 3 yr, and control measures were ineffective. After 3 yr, the predominant weeds were johnsongrass and browntop panicum (Panicum fasciculatum Sw. # PANFA).


Agronomy, 2020, Vol 10 (10), pp. 1552
Author(s): Igor Dekemati, Barbara Simon, Igor Bogunovic, Ivica Kisic, Katalin Kassai, ...

In addition to the dry (D) and rainy (R) seasons, combinations of the two, i.e., rainy-dry (RD) and dry-rainy (DR), can also be observed within one year. The effects of dry (D) and rainy (R) seasons on soil are known; hence, we hypothesized that the effects of rainy-dry (RD) and dry-rainy (DR) periods on soil may differ from the earlier assessments. The aim of the study was to investigate the effect of six tillage treatments (ploughing—P, disk tillage—DT, loosening—L, deeper tine tillage—T, shallower tine tillage—ST, and no-till—NT) on earthworm abundance and crumb ratio in a long-term experiment (16 years) on Chernozems. The results relate to four year-groups (D, R, RD, and DR) with different residue cover. Seven degrees of cover ratio (between 12.5% and 62.5%) were selected on stubbles. Higher cover ratios (≥52.5%) improved water conservation and increased earthworm abundance (31 and 41 ind m−2) and crumb ratio (78 and 82%) (p < 0.01). The R year-group ranked first for water content and earthworm abundance, while DR proved more favorable for crumb formation. Among the tillage treatments, ST ranked first for soil water content (SWC) and crumb ratio, and NT for earthworm abundance.


2018, Vol 11 (1), pp. 90
Author(s): Sara Alomari, Mona Alghamdi, Fahd S. Alotaibi

Auditing services for outsourced data, especially big data, have recently been an active research area, and many remote data auditing (RDA) schemes have been proposed. The two categories of RDA, Provable Data Possession (PDP) and Proof of Retrievability (PoR), represent the core schemes from which most researchers derive new schemes supporting additional capabilities such as batch and dynamic auditing. In this paper, we investigate the most popular PDP schemes, since many PDP techniques have been further improved to achieve efficient integrity verification. We first review the literature to establish the required background on auditing services and related schemes. Second, we specify a methodology for attaining the research goals. Then, we define each selected PDP scheme and the auditing properties used to compare the chosen schemes. Finally, we determine, where possible, which scheme is optimal for handling big data auditing.
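To make the challenge-response idea behind PDP-style auditing concrete, the sketch below shows a deliberately simplified precomputed-challenge audit. It is not one of the surveyed schemes: real PDP constructions rely on homomorphic verification tags so that the number of audits is unbounded and data blocks never leave the server.

```python
# Illustrative (not a surveyed scheme): a precomputed challenge-response audit.
# The owner derives a limited stock of (nonce, expected digest) pairs before
# outsourcing; each audit spends one pair.
import hashlib
import os

def precompute_audits(data: bytes, n_audits: int = 10):
    """Owner side: keep only (nonce, expected) pairs, then delete local data."""
    audits = []
    for _ in range(n_audits):
        nonce = os.urandom(16)
        expected = hashlib.sha256(nonce + data).digest()
        audits.append((nonce, expected))
    return audits

def server_prove(stored_data: bytes, nonce: bytes) -> bytes:
    """Cloud side: respond to a challenge over the stored copy."""
    return hashlib.sha256(nonce + stored_data).digest()

def audit(audits, server_response_fn) -> bool:
    """Owner side: spend one precomputed challenge and check the response."""
    nonce, expected = audits.pop()
    return server_response_fn(nonce) == expected

# Usage sketch
data = b"outsourced big data object"
audits = precompute_audits(data)
print(audit(audits, lambda n: server_prove(data, n)))        # True: data intact
print(audit(audits, lambda n: server_prove(data[:-1], n)))   # False: data corrupted
```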


2021, Vol 2021, pp. 1-11
Author(s): Lin Yang

In recent years, cloud data have received increasing attention. However, because users do not have absolute control over data stored on a cloud server, the cloud storage server must provide evidence that the data are stored intact so that users retain control over their data. Given full management rights, users can independently install operating systems and applications and can choose self-service platforms and various remote management tools to manage and control the host according to their own preferences. This paper introduces a cloud data integrity verification algorithm for sustainable computing in accounting informatization and studies the advantages and disadvantages of existing data integrity proof mechanisms as well as the new requirements of the cloud storage environment. An LBT-based big data integrity proof mechanism is proposed, which introduces a multibranch path tree as the underlying data structure, extends it with rank information, and provides a data integrity detection algorithm. The proposed data integrity verification algorithm and two other integrity verification algorithms are compared in simulation experiments. The results show that the proposed scheme is about 10% faster than scheme 1 and about 5% faster than scheme 2 in computation time for 500 data blocks; as the number of operated data blocks changes, the execution times of schemes 1 and 2 increase with the number of blocks, whereas the execution time of the proposed scheme remains unchanged, and its computational cost is also lower than that of schemes 1 and 2. The proposed scheme can not only verify the integrity of cloud storage data but also offers clear verification advantages, making it relevant to big data integrity verification applications.
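Since the abstract does not detail the LBT construction, the sketch below only illustrates the general idea of a multibranch (k-ary) hash tree whose nodes carry rank information (the number of data blocks beneath them). The fan-out, node layout, and rebuild-based check are assumptions; a real audit would verify challenged blocks via authentication paths rather than rebuilding the whole tree.

```python
# Illustrative k-ary hash tree with per-node rank; not the paper's exact LBT.
import hashlib

BRANCH = 4  # assumed fan-out of the multibranch tree

class Node:
    def __init__(self, digest, rank, children=None):
        self.digest = digest        # hash covering the subtree
        self.rank = rank            # number of data blocks under this node
        self.children = children or []

def leaf(block: bytes) -> Node:
    return Node(hashlib.sha256(block).digest(), rank=1)

def build(blocks):
    """Build the tree bottom-up; each internal node hashes its children's
    digests together with their ranks."""
    level = [leaf(b) for b in blocks]
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level), BRANCH):
            kids = level[i:i + BRANCH]
            h = hashlib.sha256()
            for k in kids:
                h.update(k.digest + k.rank.to_bytes(8, "big"))
            nxt.append(Node(h.digest(), sum(k.rank for k in kids), kids))
        level = nxt
    return level[0]

def verify(root: Node, blocks) -> bool:
    """Recompute the root from the claimed blocks and compare digest and rank."""
    rebuilt = build(blocks)
    return rebuilt.digest == root.digest and rebuilt.rank == root.rank

# Usage sketch
blocks = [f"block-{i}".encode() for i in range(10)]
root = build(blocks)
print(verify(root, blocks))                        # True
print(verify(root, blocks[:-1] + [b"tampered"]))   # False
```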


1989, Vol 42 (2), pp. 199-214
Author(s): Richard Bauckham

Jürgen Moltmann's first major work, Theology of Hope, first published in 1964, is arguably one of the truly great theological works of the last few decades, and indisputably one of the most influential. Though Moltmann's own theology has developed considerably in many subsequent works since Theology of Hope, it remains one of his greatest achievements, rivalled only by his second major work, The Crucified God. These two books, which constitute the core of Moltmann's early theology, have, it seems to me, a concentrated power of argument, focused on their central integrating ideas, which is lacking in the more diffuse structure and argument of the later works, significant though these are in their own way. The two early books also have a certain polemical extremeness, which, by contrast with the more balanced and rounded quality of the later works, gives them the sort of impact which one also finds in the passionate extremism of the early Luther or the dialectical rhetoric of the early Barth. The comparison is appropriate, not only because the influence of these two predecessors on Moltmann's work is very evident, but also because, in adopting something of their dialectical and prophetic style of theology, Moltmann had a parallel purpose: that of redirecting theological work. If, having accomplished this, Moltmann has subsequently become more and more like the older Barth of the Church Dogmatics, this is understandable and brings both advantages and disadvantages with it.


Author(s): Javier Conejero, Sandra Corella, Rosa M Badia, Jesus Labarta

Task-based programming has proven to be a suitable model for high-performance computing (HPC) applications. Different implementations have demonstrated this and have promoted the acceptance of task-based programming in the OpenMP standard. Furthermore, in recent years, Apache Spark has gained wide popularity in business and research environments as a programming model for addressing emerging big data problems. COMP Superscalar (COMPSs) is a task-based environment that tackles distributed computing (including clouds) and is a good alternative task-based programming model for big data applications. This article describes why we consider task-based programming models a good approach for big data applications. It includes a comparison of Spark and COMPSs in terms of architecture, programming model, and performance, focusing on the structural differences between the two frameworks, on their programmability interfaces, and on their efficiency by means of three widely known benchmarking kernels: Wordcount, Kmeans, and Terasort. These kernels enable the evaluation of the most important functionalities of both programming models and cover different workflows and conditions. The main results of this comparison are that (1) COMPSs is able to extract the inherent parallelism from the user code with minimal coding effort, as opposed to Spark, which requires existing algorithms to be adapted and rewritten using its predefined functions, (2) COMPSs improves on Spark in terms of performance, and (3) COMPSs has been shown to scale better than Spark in most cases. Finally, we discuss the advantages and disadvantages of both frameworks, highlighting the differences that make them unique, thereby helping to choose the right framework for each particular objective.
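The programmability difference in result (1) can be seen in a minimal Wordcount sketch: Spark requires the algorithm to be expressed through its predefined RDD operations, whereas PyCOMPSs annotates ordinary Python functions as tasks and extracts the parallelism automatically. Input handling and block splitting below are illustrative, and executing the PyCOMPSs part requires a COMPSs runtime.

```python
# --- Spark version: the algorithm is rewritten with Spark's predefined RDD operations ---
# from pyspark import SparkContext
# sc = SparkContext(appName="wordcount")
# counts = (sc.textFile("input.txt")
#             .flatMap(lambda line: line.split())
#             .map(lambda w: (w, 1))
#             .reduceByKey(lambda a, b: a + b)
#             .collectAsMap())

# --- PyCOMPSs version: ordinary-looking code; tasks are declared with decorators ---
from pycompss.api.task import task
from pycompss.api.api import compss_wait_on

@task(returns=dict)
def count_words(block):
    """Count words in one text block (spawned as an asynchronous task)."""
    counts = {}
    for word in block.split():
        counts[word] = counts.get(word, 0) + 1
    return counts

@task(returns=dict)
def merge_counts(a, b):
    """Merge two partial dictionaries (also a task; dependencies are inferred)."""
    for word, n in b.items():
        a[word] = a.get(word, 0) + n
    return a

def wordcount(blocks):
    partial = [count_words(b) for b in blocks]   # tasks spawned, futures returned
    result = partial[0]
    for p in partial[1:]:
        result = merge_counts(result, p)         # reduction written as a plain loop
    return compss_wait_on(result)                # synchronize on the final future
```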

