Novel Features for Binary Time Series Based on Branch Length Similarity Entropy

Entropy, 2021, Vol 23 (4), pp. 480
Author(s): Sang-Hee Lee, Cheol-Min Park

Branch length similarity (BLS) entropy is defined on a network consisting of a single node and its branches. In this study, we mapped the binary time-series signal onto the circumference of a time circle so that the BLS entropy can be calculated for the binary time series. We obtained the BLS entropy values for the “1” signals on the time circle; this set of values is the BLS entropy profile. We selected the local maximum (minimum) points, slope, and inflection points of the entropy profile as characteristic features of the binary time series and explored their significance. The local maximum (minimum) point indicates the time at which the rate of change in the signal density becomes zero. The slope and inflection points correspond to the degree of change in the signal density and the time at which changes in the signal density occur, respectively. Moreover, we show that these characteristic features can be widely used in binary time-series analysis by characterizing the movement trajectory of Caenorhabditis elegans. We also discuss the problems that need to be explored mathematically in relation to the features and propose candidates for additional features based on the BLS entropy profile.
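As a reading aid, the following is a minimal Python sketch of how the BLS entropy profile and the candidate features might be computed, assuming each “1” signal becomes a node on a unit time circle whose branches are the chords to every other “1” node; the branch construction, the normalization by ln(n), and all function names are illustrative assumptions, not code from the paper.

import numpy as np

def bls_entropy(branch_lengths):
    """Branch length similarity (BLS) entropy of a single node.

    Normalizes the branch lengths into probabilities and returns the
    Shannon entropy scaled by ln(n), so values lie in [0, 1].
    """
    L = np.asarray(branch_lengths, dtype=float)
    n = L.size
    if n < 2:
        return 0.0
    p = L / L.sum()
    return float(-(p * np.log(p)).sum() / np.log(n))

def bls_entropy_profile(binary_series):
    """BLS entropy profile of a binary time series mapped onto a time circle.

    Each time step is placed on the circumference of a unit circle; every
    '1' signal becomes a node, and the branches of a node are taken here
    as the chord lengths to all other '1' nodes (an assumed construction).
    Returns the times of the '1' signals and their entropy values.
    """
    x = np.asarray(binary_series)
    T = x.size
    angles = 2.0 * np.pi * np.arange(T) / T          # positions on the time circle
    ones = np.where(x == 1)[0]
    profile = []
    for i in ones:
        others = ones[ones != i]
        # chord length between two points on a unit circle
        chords = 2.0 * np.abs(np.sin((angles[others] - angles[i]) / 2.0))
        profile.append(bls_entropy(chords))
    return ones, np.array(profile)

def profile_features(profile):
    """Candidate features of the profile: slope, local extrema, inflection points."""
    profile = np.asarray(profile, dtype=float)
    slope = np.gradient(profile)                          # first derivative
    curvature = np.gradient(slope)                        # second derivative
    extrema = np.where(np.diff(np.sign(slope)) != 0)[0]   # local maxima/minima
    inflections = np.where(np.diff(np.sign(curvature)) != 0)[0]
    return slope, extrema, inflections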

2012, Vol 468-471, pp. 2019-2023
Author(s): Yan Ling Li, Gang Li

Mean shift, like other gradient ascent optimization methods, is susceptible to local maxima/minima and hence often fails to find the desired global maximum/minimum. For this reason, a mean shift segmentation algorithm based on hybridized bacterial chemotaxis (HBC) is proposed in this paper. In HBC, particle swarm optimization (PSO) is introduced before bacterial chemotaxis (BC) is applied: PSO first performs the global search, and BC then carries out a stochastic local search. Elitism preservation is also used to improve the efficiency of the new algorithm. After the mean shift vector is optimized with the HBC algorithm, the optimal mean shift vector is updated using the mean shift procedure. Experimental results show that the new algorithm not only converges faster but also achieves more robust segmentation results.
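The following is a minimal Python sketch of the hybrid search idea described above, applied to a generic objective function: a PSO global search followed by a stochastic chemotaxis-style local search with elitism. It omits the mean shift update and the image segmentation steps, and all parameter values, acceptance rules, and function names are illustrative assumptions rather than the authors' implementation.

import numpy as np

def hbc_optimize(f, bounds, n_particles=20, pso_iters=30, bc_iters=50,
                 w=0.7, c1=1.5, c2=1.5, step=0.1, rng=None):
    """HBC sketch: PSO global search, then a bacterial-chemotaxis-style
    local search, with the best solution preserved (elitism).
    `f` is minimized over `bounds`, an array of (low, high) pairs."""
    rng = np.random.default_rng(rng)
    lo, hi = np.array(bounds, dtype=float).T
    dim = lo.size

    # --- stage 1: PSO global search ---------------------------------
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()
    for _ in range(pso_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()

    # --- stage 2: stochastic local search with elitism ---------------
    elite, elite_val = g.copy(), f(g)
    current, current_val = elite.copy(), elite_val
    for _ in range(bc_iters):
        trial = np.clip(current + rng.normal(0.0, step, dim), lo, hi)
        trial_val = f(trial)
        # mostly greedy "runs" toward improvement, with occasional exploratory moves
        if trial_val < current_val or rng.random() < 0.1:
            current, current_val = trial, trial_val
            if current_val < elite_val:        # elitism: never lose the best point
                elite, elite_val = current.copy(), current_val
    return elite, elite_val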


2010, Vol 18 (3), pp. 293-294
Author(s): Nathaniel Beck

Carter and Signorino (2010) (hereinafter “CS”) add another arrow, a simple cubic polynomial in time, to the quiver of the binary time-series cross-section data analyst; it is always good to have more arrows in one's quiver. Since comments are meant to be brief, I will discuss here only two important issues where I disagree: whether cubic duration polynomials are the best way to model duration dependence and whether we can substantively interpret duration dependence.
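For context, a hedged illustration of the CS proposal itself, adding t, t^2, and t^3 to a binary time-series cross-section logit to model duration dependence; the data frame, column names, and simulated values below are hypothetical.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Illustrative BTSCS data: one row per unit-period, a binary outcome `event`,
# a covariate `x`, and `t` = periods since the unit was last at risk.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "event": rng.binomial(1, 0.2, 500),
    "x": rng.normal(size=500),
    "t": rng.integers(1, 20, 500),
})

# Carter and Signorino's suggestion: include t, t^2, t^3 in the logit
# to capture duration dependence smoothly.
df["t2"] = df["t"] ** 2
df["t3"] = df["t"] ** 3
X = sm.add_constant(df[["x", "t", "t2", "t3"]])
model = sm.Logit(df["event"], X).fit(disp=0)
print(model.summary())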


2021, Vol 11 (5), pp. e612-e619
Author(s): Ali G. Hamedani, Leah Blank, Dylan P. Thibault, Allison W. Willis

Objective: To determine the effect of the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) to International Classification of Diseases, 10th Revision, Clinical Modification (ICD-10-CM) coding transition on the point prevalence and longitudinal trends of 16 neurologic diagnoses.

Methods: We used 2014–2017 data from the National Inpatient Sample to identify hospitalizations with one of 16 common neurologic diagnoses. We used published ICD-9-CM codes to identify hospitalizations from January 1, 2014, to September 30, 2015, and used the Agency for Healthcare Research and Quality's MapIt tool to convert them to equivalent ICD-10-CM codes for October 1, 2015–December 31, 2017. We compared the prevalence of each diagnosis before vs after the ICD coding transition using logistic regression and used interrupted time series regression to model the longitudinal change in disease prevalence across time.

Results: The average monthly prevalence of subarachnoid hemorrhage was stable before the coding transition (average monthly increase of 4.32 admissions, 99.7% confidence interval [CI]: −8.38 to 17.01) but increased after the coding transition (average monthly increase of 24.32 admissions, 99.7% CI: 15.71–32.93). Otherwise, there were no significant differences in the longitudinal rate of change in disease prevalence over time between ICD-9-CM and ICD-10-CM. Six of 16 neurologic diagnoses (37.5%) experienced significant changes in cross-sectional prevalence during the coding transition, most notably status epilepticus (odds ratio 0.30, 99.7% CI: 0.26–0.34).

Conclusions: The transition from ICD-9-CM to ICD-10-CM coding affects prevalence estimates for status epilepticus and other neurologic disorders, a potential source of bias for future longitudinal neurologic studies. Studies should be limited to one coding system or use interrupted time series models to adjust for changes in coding patterns until new neurology-specific ICD-9 to ICD-10 conversion maps can be developed.
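A hedged sketch of the kind of interrupted (segmented) time series regression the conclusion recommends, fitted to simulated monthly admission counts around the October 2015 transition; the variable names and data are hypothetical, not drawn from the National Inpatient Sample.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative monthly admission counts spanning the ICD-9-CM to ICD-10-CM
# coding transition (data and column names are hypothetical).
months = pd.period_range("2014-01", "2017-12", freq="M")
df = pd.DataFrame({
    "month": np.arange(len(months)),                 # months elapsed since study start
    "post_icd10": (months >= pd.Period("2015-10", freq="M")).astype(int),
    "admissions": np.random.poisson(200, len(months)),
})
df["months_since_transition"] = np.maximum(
    0, df["month"] - df.loc[df["post_icd10"] == 1, "month"].min())

# Segmented (interrupted time series) regression: the level shift at the
# transition and the change in slope afterward are estimated jointly with
# the pre-transition trend.
its = smf.ols("admissions ~ month + post_icd10 + months_since_transition",
              data=df).fit()
print(its.summary())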


2021
Author(s): Alberto Jose Ramirez, Jessica Graciela Iriarte

Abstract: Breakdown pressure is the peak pressure attained when fluid is injected into a borehole until fracturing occurs. Hydraulic fracturing operations are conducted above the breakdown pressure, at which the rock formation fractures and allows fluids to flow in. This value is essential for obtaining formation stress measurements. The objective of this study is to automate the selection of breakdown pressure flags on time-series fracture data using a novel algorithm in lieu of an artificial neural network. The study is based on high-frequency treatment data collected from a cloud-based software platform. The comma-separated (.csv) files include treating pressure (TP), slurry rate (SR), and bottomhole proppant concentration (BHPC) with defined start and end time flags. Using feature engineering, the model calculates the rate of change of treating pressure (dtp_1st), slurry rate (dsr_1st), and bottomhole proppant concentration (dbhpc_1st). An algorithm isolates the initial area of the treatment plot, before proppant reaches the perforations, where the slurry rate is constant and the pressure increases. The first approach uses a neural network trained with 872 stages to isolate the breakdown pressure area. The expert rule-based approach finds the highest pressure spikes where SR is constant; a refining function then finds the maximum treating pressure value and returns its job time as the predicted breakdown pressure flag.

Due to the complexity of unconventional reservoirs, the treatment plots may show pressure changes while the slurry rate is constant multiple times during the same stage. This diverse behavior of the breakdown pressure inhibits an artificial neural network's ability to find one "consistent pattern" across the stage, and the multiple patterns found throughout the stage make it difficult to select an area in which to find the breakdown pressure value. The complex model worked moderately well in testing, but its computational time was too high for deployment. The automation algorithm, on the other hand, uses rules to find the breakdown pressure value and its location within the stage. The breakdown flag model was validated with 102 stages and tested with 775 stages, returning the location and values corresponding to the highest pressure point. Results show that 86% of the predicted breakdown pressures are within 65 psi of manually picked values.

Breakdown pressure recognition automation is important because it saves time and allows engineers to focus on analytical tasks instead of repetitive data-structuring tasks. Automating this process brings consistency to the data across service providers and basins. In some cases, due to its ability to zoom in, the algorithm recognized breakdown pressures with higher accuracy than subject matter experts. Comparing the results from the two approaches allowed us to conclude that similar or better results, with lower running times, can be achieved without using complex algorithms.
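A hedged Python sketch of the rule-based flag selection described above: compute the rate-of-change features, isolate the early proppant-free window where the slurry rate is constant and pressure is rising, and return the job time of the peak treating pressure. The column names and tolerances are assumptions for illustration, not the paper's actual thresholds.

import numpy as np
import pandas as pd

def pick_breakdown_flag(df, sr_tol=0.5, prop_tol=0.05):
    """Rule-based breakdown-pressure flag for one stage of treatment data.

    `df` is expected to contain the columns 'job_time', 'treating_pressure',
    'slurry_rate', and 'bottomhole_proppant_conc' (assumed names).
    """
    d = df.copy()
    # feature engineering: rate of change of each channel
    d["dtp_1st"] = d["treating_pressure"].diff()
    d["dsr_1st"] = d["slurry_rate"].diff()
    d["dbhpc_1st"] = d["bottomhole_proppant_conc"].diff()

    # isolate the early part of the stage: no proppant at the perforations,
    # slurry rate roughly constant, treating pressure rising
    early = d[(d["bottomhole_proppant_conc"] <= prop_tol)
              & (d["dsr_1st"].abs() <= sr_tol)
              & (d["dtp_1st"] > 0)]
    if early.empty:
        return None

    # refine: the breakdown flag is the job time of the peak treating
    # pressure inside the isolated window
    idx = early["treating_pressure"].idxmax()
    return d.loc[idx, "job_time"], d.loc[idx, "treating_pressure"]

Calling pick_breakdown_flag(stage_df) on one stage's data frame would return the flagged job time and the corresponding treating pressure, or None if no qualifying window is found.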

