Special Issue on Uncertainties in Tsunami Effects

2016 ◽  
Vol 11 (4) ◽  
pp. 613-614
Author(s):  
Harry Yeh ◽  
Shinji Sato

The 2011 Heisei tsunami far exceeded the level previously anticipated, resulting in devastating impacts in Japan. This event made it clear that preparation for tsunami hazards based on past historical data alone is inadequate. This is because tsunami hazards are characterized by a lack of historical data: tsunamis are rare, high-impact phenomena. Hence, it is important to populate the dataset with more data by including events that may have occurred outside the recorded historical timeframe, such as those inferred from geologic evidence. The dataset can also be expanded with “imaginary” experiments performed numerically using proper models. Unlike historical data, which directly represent actual tsunami events as fact, geologic evidence (for example, sediment deposits) remains conjectural for tsunami occurrences, and tsunami runup conditions evaluated from geologic data are uncertain. Theoretical approaches require hypotheses, assumptions, and approximations. Numerical simulations require not only accurate initial and boundary conditions but also adequate modeling techniques and computational capacity. Therefore, it is crucial to quantify the uncertainties involved in geologic, theoretical, and modeling approaches. Approximately 30 years ago, research on paleo-tsunamis based on geologic evidence was initiated, and it has advanced significantly in the intervening years. During the same period, substantial advances were made in the computational modeling used to predict tsunami propagation and runup processes. Understanding of tsunami behavior, characteristics, and physics has resulted primarily from the well-organized international effort of field surveys initiated by the 1992 Nicaragua Tsunami event. Unfortunately, such rapidly advancing knowledge and technologies were not sufficiently implemented in practice in a timely manner.
Had this been the case, the disaster of the 2011 event would have been reduced, possibly avoiding the infamous nuclear meltdown at the Fukushima Dai-ichi Nuclear Power Plant. Having learned lessons from the 2011 Heisei Tsunami, Japan is now attempting to develop a robust tsunami-mitigation strategy that consists of two-tier criteria: Level 1 Tsunami for structure-based tsunami protection and Level 2 Tsunami for evacuation-based disaster reduction. The tsunami intensities of Levels 1 and 2 are determined by experts’ analysis and judgment. In the United States, probabilistic tsunami hazard analysis is now widely adopted: for example, the latest ASCE-7 inundation maps are based on the hazard level of a 2,500-year return period. But again, owing to the lack of data, the probabilistic analysis must rely mainly on imaginary experiments and experts’ judgment. This special issue focuses on the uncertainty involved in tsunami hazard prediction. We review and examine uncertainties associated with tsunami simulations, near-shore effects, flow velocities, tsunami effects on buildings, coastal infrastructure, and sediment transport and deposits. Substantial uncertainty regarding tsunami hazards likely results from tsunami generation processes; this component, however, is not discussed here because it is closely related to the topic of probabilistic ‘seismic’ hazard analysis. This special issue is a compilation of seven papers addressing the current status of predictability, and will hopefully stimulate continual research leading to further improvements. Presenting numerically simulated examples, the paper by Lynett shows that accurate prediction of tsunami-induced currents is much more difficult to achieve than prediction of inundation depths. A small difference in an input parameter of the numerical model results in a very large difference in the currents, especially those associated with eddy formations.
Keon, Yeh, Pancake, and Steinberg demonstrate the significant temporal and spatial variations in tsunami effects exhibited in the GIS-based IT tool called the Data Explorer. The Data Explorer provides the means to explore and extract pre-computed numerical time-series data at any grid point specified by the user. The concept is simple, but the tool has the unique ability to retrieve data extremely quickly from massive datasets. This capability allows us to directly analyze the time-series data and to perform comprehensive sensitivity analyses. To generate realistic tsunami waveforms in the laboratory, Hiraishi describes a novel laboratory apparatus equipped with a hybrid wavemaker system capable of producing a combination of currents, a large heave of water, and waves. With this apparatus, the tsunami waveform observed off Japan’s Kamaishi coast is modeled in the laboratory tank. To simulate the local effects numerically, Arikawa and Tomita present their hybrid numerical model, which combines a depth-averaged 2D model and a fully 3D hydrodynamic model using a multi-grid numerical scheme. This approach is crucial because tsunamis are multi-scale phenomena: a typical tsunami wavelength in deep water is on the order of several tens to hundreds of kilometers, yet when a tsunami approaches the shore it may break, so refinement of the grid size is necessary, and three-dimensional flows become important when evaluating the local effects. Jaffe, Goto, Sugawara, Gelfenbaum, and LaSelle provide a comprehensive review of the models used to estimate tsunami sediment/boulder transport and deposits, thereby inferring the tsunami runup conditions (inundation depths and flow speeds) from the tsunami deposits. They suggest that techniques for uncertainty quantification are crucial to advancing the science of tsunami sediment transport modeling.
Yeh and Sato analyze the failure mechanisms of buildings and coastal protective structures observed following the 2011 tsunami. Based on the mechanisms revealed, they propose engineering considerations for achieving resiliency against so-called “beyond-the-design-basis” tsunami hazards, whose uncertainty is itself difficult to quantify. Manawasekara, Mizutani, and Aoki investigate the effects of buildings’ openings and orientations on tsunami loading by performing laboratory experiments. This paper is complementary to the one by Yeh and Sato in demonstrating that detailed changes in structural design can make a significant difference in tsunami loading on buildings. We express our sincere appreciation to the authors for their contributions, and to the reviewers for their time-consuming efforts. We hope you find the papers in this special issue interesting and useful.

Computing ◽  
2020 ◽  
Vol 102 (3) ◽  
pp. 741-743
Author(s):  
Lizhe Wang ◽  
Dan Chen ◽  
Albert Zomaya ◽  
Laurence T. Yang ◽  
Dimitrios Georgakopoulos

Author(s):  
Ya Ju Fan ◽  
Chandrika Kamath

Wind energy is scheduled on the power grid using 0–6 h ahead forecasts generated from computer simulations or historical data. When the forecasts are inaccurate, control room operators use their expertise, as well as the actual generation from previous days, to estimate the amount of energy to schedule. However, this is a challenge, and it would be useful for the operators to have additional information they can exploit to make better informed decisions. In this paper, we use techniques from time series analysis to determine if there are motifs, or frequently occurring diurnal patterns in wind generation data. We compare two different representations of the data and four different ways of identifying the number of motifs. Using data from wind farms in Tehachapi Pass and mid-Columbia Basin, we describe our findings and discuss how these motifs can be used to guide scheduling decisions.
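As a rough illustration of the underlying idea (not the paper's actual method or its Tehachapi Pass / mid-Columbia Basin datasets), the sketch below finds the pair of most similar diurnal wind-generation profiles under z-normalized Euclidean distance, which is the simplest notion of a recurring daily pattern; the hourly figures are hypothetical.

```python
import math

def znorm(x):
    """Z-normalize a sequence so shapes, not magnitudes, are compared."""
    m = sum(x) / len(x)
    s = math.sqrt(sum((v - m) ** 2 for v in x) / len(x)) or 1.0
    return [(v - m) / s for v in x]

def euclidean(a, b):
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

def closest_pair(days):
    """Return (i, j, distance) for the two most similar diurnal profiles."""
    best = (None, None, float("inf"))
    for i in range(len(days)):
        for j in range(i + 1, len(days)):
            d = euclidean(znorm(days[i]), znorm(days[j]))
            if d < best[2]:
                best = (i, j, d)
    return best

# Hypothetical hourly generation for four days (24 values each, in MW):
days = [
    [10 + h for h in range(24)],        # ramp-up day
    [20 + h for h in range(24)],        # same shape at a higher level
    [34 - h for h in range(24)],        # ramp-down day
    [15 + (h % 6) for h in range(24)],  # oscillating day
]
i, j, d = closest_pair(days)
# Days 0 and 1 share the same normalized shape, so their distance is ~0.
```

In practice motif discovery runs over months of data with efficient algorithms rather than this O(n^2) pairwise comparison, but the comparison of normalized shapes is the same.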


Symmetry ◽  
2019 ◽  
Vol 11 (4) ◽  
pp. 457
Author(s):  
Mohamed Abdel-Basset ◽  
Victor Chang ◽  
Mai Mohamed ◽  
Florentin Smarandache

This research introduces a neutrosophic forecasting approach based on neutrosophic time series (NTS). Historical data can be transformed into neutrosophic time-series data to determine their truth, indeterminacy, and falsity functions. The basis for the neutrosophication process is the score and accuracy functions of the historical data. In addition, neutrosophic logical relationship groups (NLRGs) are determined, and a deneutrosophication method for NTS is presented. The objective of this research is to suggest the idea of first- and high-order NTS. By comparing our approach with other approaches, we conclude that the suggested forecasting approach yields better results than the existing approaches based on fuzzy, intuitionistic fuzzy, and neutrosophic time series.


2020 ◽  
Vol 36 (2) ◽  
pp. 119-137
Author(s):  
Nguyen Duy Hieu ◽  
Nguyen Cat Ho ◽  
Vu Nhu Lan

The time series forecasting problem attracts much attention from the fuzzy community. Many models and methods have been proposed in the literature since the publication of the study by Song and Chissom in 1993, in which they proposed fuzzy time series together with a fuzzy forecasting model for time-series data and the fuzzy formalism to handle their uncertainty. Unfortunately, the proposed method to calculate this fuzzy model was very complex. In 1996, Chen proposed an efficient method to reduce the computational complexity of that formalism. Hwang et al. in 1998 proposed a new fuzzy time series forecasting model, which deals with the variations of historical data instead of the historical data themselves. Though fuzzy sets are concepts inspired by fuzzy linguistic information, there is no formal bridge connecting fuzzy sets and the inherent quantitative semantics of linguistic words. This study proposes the so-called linguistic time series, in which words with their own semantics are used instead of fuzzy sets. In this way, linguistic logical relationships for forecasting can be established from the time-series variations, which is clearly useful for human users. The effectiveness of the proposed model is demonstrated by applying it to forecast historical student enrollment data.
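To make the lineage concrete, here is a minimal sketch in the spirit of Chen's (1996) first-order fuzzy time-series procedure, not the linguistic model proposed in this study: each value is "fuzzified" to the interval containing it, first-order logical relationships are grouped, and the forecast is the mean midpoint of the successor intervals. The interval count k and the enrollment-style figures are illustrative assumptions.

```python
def chen_forecast(series, k=7):
    """Simplified first-order fuzzy time-series forecast (after Chen, 1996).

    Partitions the data range into k intervals, maps each observation to
    its interval (its fuzzy state), groups relationships state_i -> state_j,
    and forecasts from the last state's successor-interval midpoints."""
    lo, hi = min(series), max(series)
    width = (hi - lo) / k or 1.0
    mids = [lo + (i + 0.5) * width for i in range(k)]

    def fuzzify(v):
        return min(int((v - lo) / width), k - 1)

    states = [fuzzify(v) for v in series]
    groups = {}  # state_i -> set of observed successor states
    for a, b in zip(states, states[1:]):
        groups.setdefault(a, set()).add(b)

    last = states[-1]
    succ = sorted(groups.get(last, {last}))  # fall back to self if unseen
    return sum(mids[s] for s in succ) / len(succ)

# Hypothetical yearly enrollment-style figures:
enrollments = [13055, 13563, 13867, 14696, 15460, 15311, 15603, 15861, 16807]
forecast = chen_forecast(enrollments)
```

The linguistic time series of this study replaces the interval-based fuzzy sets above with words carrying their own quantitative semantics; the relationship-grouping step is analogous.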


2018 ◽  
Vol 58 ◽  
pp. 01009
Author(s):  
Ludmiła Filina-Dawidowicz ◽  
Izabela Kotowska ◽  
Marta Mańkowska ◽  
Michał Pluciński

The aim of the research described in this article is to develop a method for estimating the demand for freight transport in a situation where no historical data are available, rendering it impossible to apply methods based on time-series data. The method presented in this article was developed and verified through an analysis of potential inland shipping operations on the Oder Waterway to/from the seaports in Szczecin and Świnoujście, assuming that the waterway has been upgraded to navigability class III. The analysis was based on a demand survey performed among cargo shippers. The results made it possible to specify the advantages and drawbacks of forecasting based on qualitative methods, and to identify the factors that significantly reduce the reliability of freight transport forecasts.


2020 ◽  
Vol 12 (17) ◽  
pp. 2815
Author(s):  
Gwanggil Jeon ◽  
Valerio Bellandi ◽  
Abdellah Chehri

This Special Issue was intended to probe the impact of the adoption of advanced machine learning methods in remote sensing applications, including those considering recent big data analysis, compression, multichannel, sensor, and prediction techniques. In principle, this edition of the Special Issue focuses on time-series data processing for remote sensing applications, with special emphasis on advanced machine learning platforms. The issue is intended to provide a highly recognized international forum for presenting recent advances in time-series remote sensing. After review, a total of eight papers were accepted for publication in this issue.


Author(s):  
Milad Afzalan ◽  
Farrokh Jazizadeh ◽  
Mehdi Ahmadian

Condition monitoring of rail infrastructure is an important task for ensuring safety and ride quality. Increasing travel demand on the rail network requires regular monitoring of the infrastructure and efficient processing of the data for timely decision-making. Despite the regular collection of data on parameters such as acceleration and track geometry, data processing is commonly performed only to document track performance and maintenance, without further knowledge discovery to realize the full potential of historical data. Motivated by the wealth of historical track data available in practice, this paper investigates the feasibility of using onboard data, repeatedly collected over a period of time on a segment of track, to identify changes to the track. The proposed approach is envisioned to learn from repeated historical time-series data to identify both the location and the timing of unexpected changes to the track system. To account for the stochastic nature of the collected data and the temporal mismatch between time series across different inspection runs, we propose a framework that adopts the concept of the Matrix Profile without relying on time-series synchronization. The approach divides the entire dataset into smaller track segments, performs an extensive similarity search of time-series signatures, and associates locations with higher dissimilarity with changes to the track, due either to maintenance or to a potential defect. To demonstrate the efficacy and potential of the method, evaluations on both synthetic data and field geometry data from a revenue-service Class I railroad were conducted. The findings show promising results in predicting the location of track changes with a reasonably high degree of certainty, using an automated offline analysis.
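The Matrix Profile idea at the core of the framework can be sketched in a few lines: for each subsequence of a signal, record the distance to its nearest non-trivial neighbor; subsequences with unusually high values have no close match anywhere and flag a change. This is a naive O(n^2) illustration on synthetic data, not the paper's pipeline or its fast computation.

```python
import math

def znorm(x):
    """Z-normalize a subsequence so only its shape is compared."""
    m = sum(x) / len(x)
    s = math.sqrt(sum((v - m) ** 2 for v in x) / len(x)) or 1.0
    return [(v - m) / s for v in x]

def dist(a, b):
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

def matrix_profile(ts, m):
    """Naive matrix profile: for each length-m subsequence, the
    z-normalized distance to its nearest non-trivial neighbor."""
    subs = [znorm(ts[i:i + m]) for i in range(len(ts) - m + 1)]
    prof = []
    for i, a in enumerate(subs):
        best = float("inf")
        for j, b in enumerate(subs):
            if abs(i - j) >= m:  # exclusion zone: skip trivial self-matches
                best = min(best, dist(a, b))
        prof.append(best)
    return prof

# Synthetic track-geometry run: a repeating signature with one local change.
ts = [0.0, 1.0, 0.0, -1.0] * 8
ts[17] += 3.0  # injected change, e.g. a new defect between inspections
prof = matrix_profile(ts, 4)
anomaly = prof.index(max(prof))  # subsequence farthest from all others
```

Repeated subsequences find an exact twin elsewhere (profile near zero), while the windows covering the injected change have no close match, so the profile peaks there.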


2013 ◽  
Author(s):  
Stephen J. Tueller ◽  
Richard A. Van Dorn ◽  
Georgiy Bobashev ◽  
Barry Eggleston

Author(s):  
Rizki Rahma Kusumadewi ◽  
Wahyu Widayat

The exchange rate is one tool for measuring a country’s economic conditions. Stable growth of a currency’s value indicates that the country’s economic conditions are relatively good and stable. This study analyzes the factors that affected the exchange rate of the Indonesian Rupiah against the United States Dollar in the period 2000-2013. The data used in this study are secondary time-series data consisting of exports, imports, inflation, the BI rate, Gross Domestic Product (GDP), and the money supply (M1), at a quarterly frequency from the first quarter of 2000 to the fourth quarter of 2013. The time-series data were modeled with an ARCH-GARCH regression; the selected ARCH model indicates that the variables that significantly influence the exchange rate are exports, inflation, the central bank (BI) rate, and the money supply (M1), whereas imports and GDP had no significant influence.
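The motivation for an ARCH-type model is that exchange-rate returns show volatility clustering: a large shock raises the variance of the next period. The following sketch simulates an ARCH(1) process to illustrate that mechanism; the parameter values and seed are illustrative, not estimates from the Rupiah data.

```python
import math
import random

def simulate_arch1(n, omega=0.1, alpha=0.5, seed=42):
    """Simulate an ARCH(1) process:
        r_t = sigma_t * e_t,   sigma_t^2 = omega + alpha * r_{t-1}^2,
    with e_t ~ N(0, 1). A large |r_{t-1}| inflates sigma_t, producing
    the volatility clustering that ARCH/GARCH models capture."""
    rng = random.Random(seed)
    returns, r_prev = [], 0.0
    for _ in range(n):
        sigma2 = omega + alpha * r_prev ** 2
        r = math.sqrt(sigma2) * rng.gauss(0, 1)
        returns.append(r)
        r_prev = r
    return returns

r = simulate_arch1(500)
# With alpha < 1 the process is stationary; the unconditional variance
# should be near omega / (1 - alpha) = 0.2 for these parameters.
sample_var = sum(v * v for v in r) / len(r)
```

In an applied setting the reverse step, estimating omega and alpha from observed returns alongside the regression mean equation, is what the paper's ARCH-GARCH modeling performs.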

