Resilience and Situational Awareness in Critical Infrastructure Protection: An Indicator-Based Approach

2021 ◽  
Author(s):  
Aleksandar S. Jovanović ◽  
M. Jelic ◽  
S. Chakravarty

The paper proposes a concept, developed in the European projects SmartResilience and InfraStress, enabling quantitative assessment of the resilience of critical entities. The concept aims to combine the communication-related advantages of simplified assessment results (such as “resilience very high” or “resilience very low”) with the advantages of in-depth assessments (e.g. analysis of multiple sensor data). The paper describes the main elements of the innovative, indicator-based concept, starting with the “resilience cube” at the top and continuing with the multi-level, hierarchical, indicator-based assessment methodology. The concept allows different aspects of practical resilience management to be analyzed and assessed. One can assess the resilience level of an entity at a given point in time, monitor its resilience level over time, and benchmark it. One can also model and analyze the functionality of a system during a particular (threat) scenario, as well as stress-test it. The same methodology allows investment in improving resilience (e.g. in further training or equipment) to be optimized in a transparent and intuitive way. A resilience indicator database (over 4,000 indicators available), a suite of tools (primarily developed within the SmartResilience and InfraStress projects), and a repository of over 20 application cases and 300 scenarios support application of the methodology. The concept has been discussed and agreed with over 50 different organizational stakeholders and is being embedded into the new ISO 31050 standard currently under development. Its “life-after-the-project” will be ensured by the dedicated “resilience rating initiative (ERRA)”.
Although the concept and the tool, the “ResilienceTool”, were developed primarily for the resilience assessment of critical infrastructures (the “smart” ones in particular), they can also be used for the resilience assessment of other systems; the already initiated implementation of AI techniques (machine learning) will make the ResilienceTool even more versatile and easier to use in the future.
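The multi-level, indicator-based aggregation described above can be sketched as a small weighted hierarchy: normalized indicators roll up into issue scores, issue scores into one entity-level score, and the score maps to a simple communication-friendly label. All names, weights, and the five-level label scale below are illustrative assumptions, not the actual SmartResilience/InfraStress methodology.

```python
# Minimal sketch of multi-level, indicator-based resilience scoring.
# Indicator names, weights, and the label scale are invented for illustration.

def aggregate(children):
    """Weighted mean of (value, weight) pairs; values are normalized to 0..1."""
    total_w = sum(w for _, w in children)
    return sum(v * w for v, w in children) / total_w

def resilience_label(score):
    """Map a 0..1 score to a simplified, communication-friendly label."""
    levels = ["very low", "low", "medium", "high", "very high"]
    return levels[min(int(score * 5), 4)]

# Level 1: raw indicators (normalized value, weight), grouped by issue
issues = {
    "preparedness": [(0.8, 2.0), (0.6, 1.0)],  # e.g. training coverage, drills held
    "detection":    [(0.9, 1.0), (0.7, 1.0)],  # e.g. sensor uptime, alert latency
}

# Level 2: issue scores, equally weighted into one entity-level score
issue_scores = [(aggregate(indicators), 1.0) for indicators in issues.values()]
entity_score = aggregate(issue_scores)

print(round(entity_score, 3), resilience_label(entity_score))
```

The same aggregation step can be repeated for more levels (e.g. dimensions above issues), which is what makes both the in-depth drill-down and the single top-level label available from one assessment.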

2021 ◽  
Author(s):  
Vidya Samadi ◽  
Rakshit Pally

Floods are among the most destructive natural hazards, affecting millions of people across the world and leading to severe loss of life and damage to property, critical infrastructure, and agriculture. The Internet of Things (IoT), machine learning (ML), and Big Data are exceptionally valuable tools for collecting countless actionable data on catastrophe readiness. The aim of this presentation is to introduce the Flood Analytics Information System (FAIS) as a data gathering and analytics system. The FAIS application is designed to integrate crowd intelligence, ML, and natural language processing of tweets to provide warnings, with the aim of improving flood situational awareness and risk assessment. FAIS has been beta-tested during major hurricane events in the US, where successive storms caused extensive damage and disruption. The prototype successfully identifies a dynamic set of at-risk locations/communities using USGS river gauge height readings and geotagged tweets intersected with watershed boundaries. The list of prioritized locations can be updated as the river monitoring system and conditions change over time (typically every 15 minutes). The prototype also performs flood frequency analysis (FFA) using various probability distributions, with associated uncertainty estimation, to assist engineers in designing safe structures. This presentation will discuss the FAIS functionalities and the real-time implementation of the prototype across the southern and southeastern USA. This research is funded by the US National Science Foundation (NSF).
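The flood frequency analysis step can be illustrated with one of the common FFA choices, a Gumbel (extreme-value) distribution fitted to annual peak gauge heights by the method of moments. The gauge data below are synthetic, and this sketch omits the distribution comparison and uncertainty estimation that FAIS performs.

```python
# Hedged sketch of flood frequency analysis (FFA): fit a Gumbel distribution
# to annual peak stages (method of moments) and estimate a T-year return level.
import math
import statistics

annual_peaks = [4.1, 5.3, 3.8, 6.0, 4.9, 5.5, 4.4, 7.2, 5.0, 4.7]  # ft, synthetic

# Method-of-moments Gumbel parameters
scale = statistics.stdev(annual_peaks) * math.sqrt(6) / math.pi
loc = statistics.mean(annual_peaks) - 0.5772 * scale  # Euler-Mascheroni constant

def return_level(T):
    """Stage expected to be exceeded once every T years on average."""
    return loc - scale * math.log(-math.log(1 - 1 / T))

print(f"100-year flood stage ~ {return_level(100):.1f} ft")
```

Engineers would compare several candidate distributions (e.g. log-Pearson III) and attach confidence bounds before using such a return level for design.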


Author(s):  
Bethany K. Bracken ◽  
Noa Palmon ◽  
David Koelle ◽  
Mike Farry

For teams to perform effectively, individuals must focus on their own tasks, while simultaneously maintaining awareness of other team members. Researchers studying and attempting to optimize performance of teams as well as individual team members use assessments of behavioral, neurophysiological, and physiological signals that correlate with individual and team performance. However, synchronizing data from multiple sensor devices can be difficult, and building and using models to assess human states of interest can be time-consuming and non-intuitive. To assist researchers, we built an Adaptable Toolkit for the Assessment and Augmentation of Performance by Teams in Real Time (ADAPTER), which provides a framework that flexibly integrates sensors and fuses sensor data to assess performance. ADAPTER flexibly integrates current and emerging sensors; assists researchers in creating and implementing models that support research on performance and the development of augmentation strategies; and enables comprehensive and holistic characterization of team member performance during real-time experimental protocols.
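The synchronization difficulty mentioned above can be made concrete with a minimal alignment step: samples from sensors running at different rates are fused by matching each timestamp of a reference stream to the nearest reading of every other stream. The stream names and rates are invented for illustration; ADAPTER's actual fusion framework is far more general.

```python
# Minimal sketch of multi-sensor time alignment via nearest-timestamp matching.
# Stream names ("heart rate", "EEG feature") and rates are assumptions.
import bisect

def nearest(timestamps, values, t):
    """Return the value whose timestamp is closest to t (timestamps sorted)."""
    i = bisect.bisect_left(timestamps, t)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(timestamps)]
    j = min(candidates, key=lambda j: abs(timestamps[j] - t))
    return values[j]

# Two hypothetical streams: 1 Hz heart rate, irregular EEG-derived feature
hr_t,  hr_v  = [0.0, 1.0, 2.0, 3.0], [62, 64, 63, 66]
eeg_t, eeg_v = [0.1, 2.2],           [0.31, 0.42]

# Fuse both streams onto the heart-rate clock
fused = [(t, v, nearest(eeg_t, eeg_v, t)) for t, v in zip(hr_t, hr_v)]
print(fused)
```

Real-time use adds complications this sketch ignores, such as clock drift between devices and readings that arrive late or out of order.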


2015 ◽  
Vol 13 ◽  
pp. 9-18 ◽  
Author(s):  
S. Sandmann ◽  
S. Divanbeigi ◽  
H. Garbe

Abstract. The study presented here deals with disturbances of the electric field of a Doppler Very High Frequency Omnidirectional Radio Range (DVOR) navigation system in the presence of wind turbines. For this purpose, the field strength is numerically simulated on 25 concentric circular paths, so-called orbit flights, at various heights and radii around the DVOR facility. In particular, the influence of various wind-turbine parameters, such as their number, position, rotor angle, tower height, and rotor diameter, on the field distribution is highlighted, and the applicability of the Physical Optics (PO) simulation method is examined by comparing its results with those of the Multilevel Fast Multipole Method (MLFMM).


Author(s):  
S. Danilov ◽  
M. Kozyrev ◽  
M. Grechanichenko ◽  
L. Grodzitskiy ◽  
V. Mizginov ◽  
...  

Abstract. Situational awareness of the crew is critical for the safety of a flight. A head-up display presents all required flight information in front of the pilot, overlaid on the cockpit view visible through the cockpit’s front window. This device was created to solve the problem of information overload during piloting of an aircraft. While computer graphics such as scales and a digital terrain model can easily be presented on such a display, errors in head-up display alignment pose challenges for the correct presentation of sensor data. The main problem arises from the parallax between the pilot’s eyes and the position of the camera. This paper focuses on the development of an online calibration algorithm for conformal projection of 3D terrain and runway models on the pilot’s head-up display. The aim of our algorithm is to align the objects visible through the cockpit glass with their projections on the head-up display. To improve projection accuracy, we use an additional optical sensor installed on the aircraft. We combine classical photogrammetric techniques with modern deep learning approaches. Specifically, we first use an object detection neural network to find the runway area and align the runway projection with its actual location. Second, we re-project the sensor’s image onto the 3D model of the terrain to eliminate errors caused by the parallax. We developed an environment simulator to evaluate our algorithm and used it to prepare a large training dataset of 2000 images from video sequences representing aircraft motion during takeoff, landing, and taxiing. The results of the evaluation are encouraging and demonstrate, both qualitatively and quantitatively, that the proposed algorithm is capable of precise alignment of the 3D models projected on a head-up display.
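The parallax problem at the core of the abstract can be illustrated with a toy pinhole-projection model: the same 3D point projects to different image positions for the camera and for the pilot's eye, and the offset shrinks with distance. The geometry, focal length, and eye offset below are invented; the paper's algorithm additionally uses a terrain model and a runway-detection network to correct this offset online.

```python
# Toy pinhole model of camera/eye parallax on a head-up display.
# Focal length, eye offset, and point positions are illustrative assumptions.
def project(point, eye, f=1000.0):
    """Pinhole projection (in pixels) of a 3D point (x, y, z) seen from `eye`."""
    x, y, z = (p - e for p, e in zip(point, eye))
    return (f * x / z, f * y / z)

camera = (0.0, 0.0, 0.0)
pilot  = (0.5, 0.3, 0.0)   # assumed eye offset from the camera, metres

shifts = []
for depth in (50.0, 500.0):                 # distance to a runway point, metres
    point = (10.0, -2.0, depth)
    cam_px = project(point, camera)
    eye_px = project(point, pilot)
    shift = abs(cam_px[0] - eye_px[0])      # horizontal parallax on the display
    shifts.append(shift)
    print(f"depth {depth:5.0f} m: horizontal parallax {shift:.2f} px")
```

The rapid decay of the offset with depth is why distant symbology can be drawn naively while nearby objects, such as the runway during landing, need explicit parallax correction.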


Author(s):  
O. Sekkas ◽  
S. Hadjiefthymiades ◽  
E. Zervas

During the past few years, several location systems have been proposed that use multiple technologies simultaneously to locate a user. One such system is described in this article. It relies on multiple sensor readings from Wi-Fi access points, IR beacons, RFID tags, and so forth to estimate the location of a user. This technique is better known as sensor information fusion, which aims to improve accuracy and precision by integrating heterogeneous sensor observations. The proposed location system uses a fusion engine based on dynamic Bayesian networks (DBNs), thus substantially improving accuracy and precision.
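The fusion idea can be sketched as a discrete Bayesian update: a posterior over candidate locations is multiplied by an independent likelihood from each heterogeneous sensor reading and renormalized. A full DBN, as used in the article, also models temporal transitions between locations; the cells and likelihood tables below are invented for illustration.

```python
# Minimal sketch of Bayesian sensor fusion for localization.
# Cell names and likelihood values are illustrative assumptions.
cells = ["room_a", "room_b", "hall"]
belief = {c: 1 / 3 for c in cells}          # uniform prior over locations

# P(observation | cell) for two heterogeneous sensor readings (assumed values)
wifi_lik = {"room_a": 0.6, "room_b": 0.3, "hall": 0.1}  # strong RSSI from one AP
rfid_lik = {"room_a": 0.7, "room_b": 0.2, "hall": 0.1}  # tag read near door A

def update(belief, likelihood):
    """Bayes rule: posterior is proportional to prior x likelihood, normalized."""
    post = {c: belief[c] * likelihood[c] for c in belief}
    z = sum(post.values())
    return {c: p / z for c, p in post.items()}

for lik in (wifi_lik, rfid_lik):
    belief = update(belief, lik)

best = max(belief, key=belief.get)
print(best, round(belief[best], 3))
```

Because each sensor only sharpens the shared posterior, a weak or ambiguous reading from one technology is naturally compensated by the others, which is the practical appeal of fusing Wi-Fi, IR, and RFID observations.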

