Modeling and Fault Tolerance Analysis of ZigBee Protocol in IoT Networks

Energies ◽  
2021 ◽  
Vol 14 (24) ◽  
pp. 8264
Author(s):  
Paweł Dymora ◽  
Mirosław Mazurek ◽  
Krzysztof Smalara

This paper presents the essence of how IoT (Internet of Things) systems work and their design challenges, discusses their principles of operation, and presents IoT development concepts. The WSN (Wireless Sensor Network) is characterized in detail as an essential component of IoT infrastructure. The various faults that can occur at all levels of the IoT architecture, such as in sensor nodes, actuators, network links, and processing and storage components, clearly demonstrate that fault tolerance (FT) has become a key issue for IoT systems. A properly chosen routing algorithm has a direct impact on the power consumption of sensors, which in extreme cases causes nodes to shut down due to battery degradation. To study the fault tolerance of IoT infrastructure, a ZigBee network topology was created and various node-failure scenarios were simulated. The results show the impact and importance of choosing the right routing scheme, based on the correlation between throughput and the number of rejected packets, as well as on the proportion of management traffic to other traffic, including the ratio of rejected packets.
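The throughput-to-rejected-packets correlation the study relies on can be illustrated with a small sketch (hypothetical per-interval samples, not the paper's data; the `pearson` helper is our own):

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-interval samples around a simulated node failure:
# delivered packets/s and rejected-packet counts.
throughput = [120, 115, 90, 60, 55, 80, 110]
rejected   = [2, 4, 18, 40, 45, 20, 5]

r = pearson(throughput, rejected)
# A strongly negative r means rejections rise as throughput collapses,
# the kind of relationship used to compare routing schemes.
```

A routing scheme that keeps this correlation weak under node failures degrades more gracefully.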

2018 ◽  
Vol 2 (95) ◽  
pp. 69-72
Author(s):  
Yu.A. Tarariko ◽  
L.V. Datsko ◽  
M.O. Datsko

The aim of this work is to assess existing and prospective models for the development of agricultural production in Central Polesie on the basis of economic feasibility and ecological balance. Promising agricultural production systems were evaluated with the help of simulation modeling of various infrastructure options at the levels of crop and multisectoral specialization of agroecosystems. The agro-resource potential of Central Polesie is better realized in a rotation with lupine, corn, and fiber flax under a well-developed infrastructure, including crop and livestock units, grain processing and storage systems, and the processing of feed, finished products, and waste in a bioenergy station. The expected income from forming such an infrastructure is almost 8 thousand dollars, with a payback period of capital investments of 2-3 years.


Toxins ◽  
2021 ◽  
Vol 13 (2) ◽  
pp. 158
Author(s):  
Colin Eady

For 30 years, forage ryegrass breeders have known that their germplasm may contain a maternally inherited symbiotic Epichloë endophyte. These endophytes produce a suite of secondary alkaloid compounds, depending on strain. Many produce ergot and other alkaloids, which are associated with both insect deterrence and livestock health issues. Alkaloid levels and other endophyte characteristics are influenced by strain, host germplasm, and environmental conditions. Some strains in the right host germplasm can confer an advantage against biotic and abiotic stressors, thus acting as a maternally inherited desirable ‘trait’. During seed production, these mutualistic endophytes are not transmitted into 100% of the crop seed, and they are less vigorous than the grass seed itself. This causes stability and longevity issues for seed production and storage should the ‘trait’ be desired in the germplasm, making a precise understanding of the relationship vitally important to the plant breeder. These Epichloë endophytes cannot be ‘bred’ in the conventional sense, as they are asexual. Instead, the breeder may modulate endophyte characteristics through selection of host germplasm, a sort of breeding by proxy. This article explores, from a forage seed company perspective, the issues that endophyte characteristics and breeding them by proxy pose for ryegrass breeding, and outlines the methods used to assess the ‘trait’ and their application through the breeding, production, and deployment processes. Finally, this article investigates opportunities for enhancing the utilisation of alkaloid-producing endophytes within pastures, with a focus on balancing alkaloid levels to further enhance pest deterrence and improve livestock outcomes.


Author(s):  
Sejal Atit Bhavsar ◽  
Kirit J Modi

Fog computing is a paradigm that extends cloud computing services to the edge of the network, providing data, storage, compute, and application services to end users. Its distinguishing characteristic is its proximity to end users: application services are hosted on network edge devices such as routers and switches. The goal of fog computing is to improve efficiency and reduce the amount of data that must be transported to the cloud for analysis, processing, and storage. Due to the heterogeneous characteristics of fog computing, several issues arise, such as security, fault tolerance, and resource scheduling and allocation. To better understand fault tolerance, we highlight its basic concepts by reviewing the different fault tolerance techniques: reactive, proactive, and hybrid. In addition to fault tolerance, we also discuss how to balance resource utilization and security in fog computing. Furthermore, to overcome platform-level issues of fog computing, we present a hybrid fault tolerance model that uses resource management and security.
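The reactive/proactive/hybrid distinction can be sketched in a few lines of Python (an illustrative sketch with assumed names, not the authors' model): reactive FT reacts after a failure, proactive FT acts before a predicted one, and a hybrid combines both.

```python
def reactive_retry(task, attempts=3):
    """Reactive FT: re-execute a failed task up to `attempts` times."""
    for _ in range(attempts):
        try:
            return task()
        except RuntimeError:
            continue
    raise RuntimeError("task failed after retries")

def proactive_filter(nodes, predict_failure):
    """Proactive FT: exclude nodes predicted to fail before scheduling on them."""
    return [n for n in nodes if not predict_failure(n)]

# Hybrid: schedule only on nodes expected to stay up, and still retry on failure.
healthy = proactive_filter(["fog-1", "fog-2", "fog-3"],
                           predict_failure=lambda n: n == "fog-2")
```

The hybrid approach trades the monitoring cost of failure prediction against the wasted work of pure retry.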


Author(s):  
Naureen Naqvi ◽  
Sabih Ur Rehman ◽  
Zahidul Islam

Recent technological advancements have given rise to the concept of hyper-connected smart cities being adopted around the world. These cities aspire to achieve better outcomes for citizens by improving the quality of service delivery and information sharing and by creating a sustainable environment. A smart city comprises a network of interconnected devices, also known as the IoT (Internet of Things), which captures data and transmits it to a platform for analysis. This data covers a variety of information produced in large volumes, also known as Big Data. From data capture to processing and storage, there are several stages where a breach in security or privacy could have catastrophic impacts. Presently there is a gap in the centralization of knowledge needed to implement smart city services with a secure architecture. To bridge this gap, we present a framework that highlights challenges within smart city applications and synthesizes the techniques feasible to solve them. Additionally, we analyze the impact of a potential breach on smart city applications and survey the state-of-the-art architectures available. Furthermore, we identify the stakeholders who may have an interest in learning about the relationships between the significant aspects of a smart city, and we demonstrate these relationships through force-directed network diagrams, which help raise awareness among stakeholders when planning the development of a smart city. To complement our framework, we designed web-based interactive resources that are available from http://ausdigitech.com/smartcity/.


Author(s):  
Goran Djukanovic ◽  
Goran Popovic ◽  
Dimitris Kanellopoulos

This paper proposes a routing method based on Ant Colony Optimization (ACO) for minimizing energy consumption in Wireless Sensor Networks (WSNs). The routing method is used as the backbone of the Internet of Things (IoT) platform. It also considers the critical design issues of a WSN, such as the energy constraints of sensor nodes, network load balancing, and sensor density in the field. Special attention is paid to the impact of network scaling on the performance of the ACO-based routing algorithm.
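The core ACO mechanics behind such a method can be sketched as follows (a minimal illustration, not the paper's algorithm; the energy-weighted heuristic and parameter names are our assumptions): next-hop choice mixes pheromone strength with residual node energy, and trails evaporate before the used path is reinforced.

```python
import random

def choose_next_hop(neighbors, tau, energy, alpha=1.0, beta=2.0, rng=random):
    """Pick a next hop with probability proportional to tau^alpha * energy^beta,
    biasing routes toward neighbors with more residual battery energy."""
    weights = [tau[n] ** alpha * energy[n] ** beta for n in neighbors]
    r = rng.random() * sum(weights)
    for n, w in zip(neighbors, weights):
        r -= w
        if r <= 0:
            return n
    return neighbors[-1]

def evaporate_and_deposit(tau, path, rho=0.1, q=1.0):
    """Standard ACO update: evaporate all trails, then reinforce the used path."""
    for n in tau:
        tau[n] *= (1 - rho)
    for n in path:
        tau[n] += q / len(path)
```

Raising `beta` shifts traffic away from low-energy nodes, which is one way load balancing and node lifetime enter the routing decision.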


2012 ◽  
Vol 546-547 ◽  
pp. 892-897
Author(s):  
Xiang Sun ◽  
Hua Rui Wu ◽  
Hua Ji Zhu

To meet the demands of organic viticulture for small-environment regulation and management automation, a grape production monitoring system based on wireless sensor networks (WSNs) was designed to solve the low management automation level, great workload, and other problems that exist in traditional viticulture. The system includes a WSN and a planting-oriented viticulture management system. The WSN consists of twelve environment monitoring nodes equipped with 168 sensors and 12 video capture cards, a sink node, and a monitoring center. Its functions for acquiring, processing, transmitting, and storing data, such as soil moisture, soil temperature, air temperature and humidity, rainfall, solar radiation, wind direction, and wind speed, can be freely customized. Communication between two nodes follows the ZigBee protocol, while GPRS is used for communication between the sink node and the monitoring center. Production data collection and analysis, farming management, production decisions, and insect pest and disease warning are also achieved through GPRS. An experiment on the data package transfer rate in the grape veraison and mature stages was carried out. Six out of ten sensor nodes had transfer accuracy above 90%, and two were below 35%: the solar power supply circuit of one node had excessive energy consumption, and the deployment location of the other was influenced by the environment, which led to network instability. The analysis shows that the power supply and network environment are important factors for the performance of a WSN in the field. The design and development of this viticulture monitoring system provide an effective tool for production information monitoring and analysis-based decision making in organic vineyards.


2021 ◽  
pp. 308-318
Author(s):  
Hadeel T. Rajab ◽  
Manal F. Younis

The Internet of Things (IoT) contributes to improving the quality of life, as it supports many applications, especially healthcare systems. Data generated from IoT devices is sent to Cloud Computing (CC) for processing and storage, despite the latency caused by the distance. Because of the revolution in IoT devices, the amount of data sent to the CC has been increasing, so increasing congestion on the cloud network was added to the latency problem. Fog Computing (FC), a middle layer located between IoT devices and the CC layer, was used to solve these problems because of its proximity to IoT devices, while filtering the data sent to the CC. Due to the massive data generated by IoT devices on the FC, the Dynamic Weighted Round Robin (DWRR) algorithm was used: a load balancing (LB) algorithm that schedules and distributes data among fog servers by reading the CPU and memory values of these servers in order to improve system performance. The results proved that the DWRR algorithm provides high throughput, reaching 3290 req/sec at 919 users. Much research is concerned with the distribution of workload using LB techniques without paying much attention to Fault Tolerance (FT), which implies that the system continues to operate even when a fault occurs. Therefore, we proposed a replication FT technique, primary-backup replication based on a dynamic checkpoint interval, on the FC. The checkpoint replicates new data from a primary server to a backup server dynamically by monitoring the CPU value of the primary fog server, so that a checkpoint occurs only when the CPU value is larger than 0.2, to reduce overhead. The results showed that the execution time of the data filtering process on the FC with a dynamic checkpoint is less than the time spent with a static checkpoint that is independent of the CPU status.
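The two ideas in this abstract can be sketched briefly (an illustrative sketch with assumed names and an assumed weight formula, not the paper's implementation): DWRR derives each fog server's weight from its current CPU/memory load, and the dynamic checkpoint replicates to the backup only when the primary's CPU exceeds the 0.2 threshold.

```python
def dwrr_weights(servers):
    """Map server name -> weight; lower CPU/memory load yields a higher weight.
    `servers` maps name -> (cpu_load, mem_load), both in [0, 1]."""
    return {name: max(1, round(10 * (1 - (cpu + mem) / 2)))
            for name, (cpu, mem) in servers.items()}

def dispatch(servers, n_requests):
    """Distribute n_requests across servers proportionally to their weights."""
    weights = dwrr_weights(servers)
    order = [name for name, w in weights.items() for _ in range(w)]
    return [order[i % len(order)] for i in range(n_requests)]

def maybe_checkpoint(primary_cpu, new_data, backup, threshold=0.2):
    """Dynamic checkpoint: replicate new data to the backup server only when
    the primary's CPU load is above the threshold, to reduce overhead."""
    if primary_cpu > threshold:
        backup.extend(new_data)
    return backup
```

A lightly loaded server thus receives proportionally more requests, while checkpoint traffic is only paid when the primary is busy enough to be worth protecting.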


Author(s):  
Christopher R. Hannemann ◽  
Van P. Carey ◽  
Amip J. Shah ◽  
Chandrakant Patel

As the use of information technology becomes more ubiquitous, the need for data processing and storage capabilities increases. This results in the construction and operation of large data centers: facilities that house thousands of servers and serve as the backbone for all types of computational processes. Unfortunately, as processing power and storage capacity increase, so do the corresponding power and cooling requirements of the data centers. Several studies have examined the efficiency of data centers by focusing on server and cooling power inputs, but this fails to capture a data center's entire impact. To capture that full impact, the use of a lifetime exergy (available energy) analysis is proposed. This study first details the development of a lifetime exergy consumption model designed specifically for data center analysis. To create a database of computer components, a disassembly analysis was performed, and the results are detailed. By combining the disassembly analysis of a server with the aggregation of energy and material data, a more rigorous and useful assessment of the server's overall impact is demonstrated. The operation of the lifetime exergy consumption model is demonstrated by case studies examining the effects of variance in transportation and cooling strategies. The importance of transportation modes and material mass, which are greatly affected by supply chain parameters, is shown, and the impact of static and dynamic cooling within data centers is also demonstrated.

