Partition Learning for Multiagent Planning

2012 ◽  
Vol 2012 ◽  
pp. 1-14 ◽  
Author(s):  
Jared Wood ◽  
J. Karl Hedrick

Automated surveillance of large geographic areas and target tracking by a team of autonomous agents is a topic that has received significant research and development effort. The standard approach is to decompose this problem into two steps: the first is target track estimation, and the second is path planning by optimizing directly over the target track estimation. This standard approach works well in many scenarios. However, an improved approach is needed when general, nonparametric estimation is required and the number of targets is unknown. The focus of this paper is to present a new approach that inherently handles the task of searching for and tracking an unknown number of targets within a large geographic area. This approach is designed for the case when the search is performed by a team of autonomous agents and target estimation requires general, nonparametric methods. Consequently, very few assumptions are made. The only assumption is that a time-changing target track estimation is available and shared between the agents. This estimation is allowed to be general and nonparametric. Results are provided that compare the performance of this new approach with the standard approach. From these results it is concluded that the new approach improves search and tracking when the number of targets is unknown and target track estimation is general and nonparametric.

Author(s):  
Maxime Schmitt ◽  
Cédric Bastoul ◽  
Philippe Helluy

A large part of the development effort of compute-intensive applications is devoted to optimization, i.e., achieving the computation within a finite budget of time, space or energy. Given the complexity of modern architectures, writing simulation applications is often a two-step workflow. Firstly, developers design a sequential program for algorithmic tuning and debugging purposes. Secondly, experts optimize and exploit possible approximations of the original program to scale to the actual problem size. This second step is a tedious, time-consuming and error-prone task. In this paper we investigate language extensions and compiler tools to achieve that task semi-automatically in the context of approximate computing. We identified the semantic and syntactic information necessary for a compiler to automatically handle approximation and adaptive techniques for a particular class of programs. We propose a set of language extensions generic enough to provide the compiler with the useful semantic information when approximation is beneficial. We implemented the compiler infrastructure to exploit these extensions and to automatically generate the adaptively approximated version of a program. We provide an experimental study of the impact and expressiveness of our language extension set on various applications.
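The paper's concrete syntax is not reproduced in the abstract. As a loose, hypothetical illustration of the idea, here is a Python sketch in which an annotation lets a tool switch between an exact kernel and a cheaper approximation; the dispatch rule, function names and threshold are invented for this sketch and are not the authors' C-oriented extensions.

```python
# Illustrative only: the paper proposes compiler-level language extensions;
# this sketch mimics the concept with a plain dispatcher that exploits an
# "approximable" annotation when approximation pays off.

def approximable(exact_fn, approx_fn, threshold):
    """Use the exact kernel for small inputs, where approximation does not
    pay off, and the cheap approximation for large ones."""
    def dispatch(data):
        if len(data) < threshold:
            return exact_fn(data)
        return approx_fn(data)
    return dispatch

def exact_mean(xs):
    return sum(xs) / len(xs)

def sampled_mean(xs, stride=10):
    sample = xs[::stride]   # approximation: subsample the input
    return sum(sample) / len(sample)

mean = approximable(exact_mean, sampled_mean, threshold=1000)
```

A call on a small list takes the exact path; a large list is subsampled, trading a bounded error for a tenfold reduction in work.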


2020 ◽  
Vol 34 (1) ◽  
pp. 73-86
Author(s):  
F. Neisskenwirth

Abstract Different procedures are proposed in the literature for the rehydration of dried-out specimens. These procedures vary greatly in their efficiency and application. This work describes a new procedure that is inspired by the literature but that avoids heating the specimens. This method was applied to reconditioning dried-out specimens from a historical collection (Swiss freshwater fishes, bird brains, and bird eyes), stored at the Naturhistorisches Museum Bern in Switzerland. The procedure consists of five steps. The first step is the softening of hardened soft tissue with benzaldehyde and demineralized water. The second step is an indirect rehydration with water vapor. The third step is a chemically induced direct hydration using a trisodium phosphate solution that allows the specimen to swell in size before being washed with water to remove all additives. Finally, the rehydrated specimen is transferred into new preserving fluid. Because the dehydrating properties of ethanol as a preservative are problematic, this paper presents the results of an experimental case study using a glycerol solution as a preservation fluid.


Author(s):  
I. Yu. Drachev ◽  
V. Yu. Shilo ◽  
G. S. Dzhulay

The aim of the study was to evaluate the efficacy of various approaches to correcting and preventing hypotension episodes in patients on maintenance hemodialysis (HD). Material and methods. The study included 35 patients on maintenance hemodialysis at the Dialysis Center "MCHTP No. 1", part of the "B. Braun Avitum" network of centers in the Russian Federation. Blood pressure (BP) was measured automatically in all patients using the dialysis machine's built-in noninvasive BP measurement option. Prior to the study, all patients underwent a clinical assessment of "dry weight" and a bio-impedance analysis. The study had a cross-over design: first, all patients were treated using the standard methods for correcting hypotension episodes (during the 4 initial procedures). Then, in the following 4 procedures, in addition to the standard methods, a computer algorithm was used to automatically regulate the ultrafiltration (UF) rate: the automatic pressure monitoring system (biologic RR Comfort) with continuous BP monitoring throughout the procedure. BP was recorded before and after each HD procedure, as well as at least once every 5 minutes during the 3 initial procedures; starting from the 4th procedure, the measurement intervals were determined automatically by the algorithm. The average BP values during the dialysis procedures were analyzed for the entire observation period. The duration of the study was 3 weeks for each patient. Results. The average predialysis blood pressures in the group with the standard approach to hemodynamic correction were 124.6 ± 27.7 and 74.5 ± 21.1 mm Hg, and the postdialysis blood pressures were 114.4 ± 24.4 and 71.3 ± 16.3 mm Hg. With the automatic pressure monitoring system, the predialysis and postdialysis blood pressures were significantly higher than with the standard approach: 133.2 ± 21.3 and 79.3 ± 15.8 mm Hg predialysis (p < 0.001 and p = 0.009), and 125.7 ± 23.9 and 75.9 ± 18.3 mm Hg postdialysis (p < 0.001 and p < 0.001), respectively. Upon closer examination of the intradialysis pressure variations, the intradialysis blood pressures were 110.2 ± 17.3 and 68.3 ± 13.9 mm Hg with the standard approach, and significantly higher, 124 ± 20.5 and 75.9 ± 14.2 mm Hg, with the automatic pressure monitoring system (p = 0.03; p = 0.02). Higher mean arterial pressures were also noted: 82.5 ± 13.9 mm Hg with the standard approach vs. 91.5 ± 15.6 mm Hg with the automatic pressure monitoring system (p = 0.01). Studying UF rates, we found that the UF rate was slightly higher without the automatic pressure monitoring system (8.0 ml/kg/h vs. 7.9 ml/kg/h). Thus, the new approach, used in addition to the standard methods of correcting hypotension, was effective and safe. No significant differences were seen in Kt/V values. However, when the automatic pressure monitoring system was used, the target phosphate levels were achieved: the inorganic phosphorus value was 1.5 mmol/L with the UF control algorithm vs. 1.8 mmol/L with the standard dialysis program, although this difference did not reach statistical significance (p = 0.07). Conclusion. Intradialysis hypotension and high UF rates remain frequent and potentially dangerous complications of the HD procedure, which worsen the long-term prognosis of patients on HD, mainly due to increased cardiovascular morbidity and mortality. The new approach to the prevention and correction of hypotension using the automatic pressure monitoring system allows the UF rate to be reduced in a timely manner, preventing the development of hypotension episodes, reducing their rate, and improving the achievement of target blood pressure values, both pre- and postdialysis, as well as reducing intradialysis blood pressure variations.
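The reported mean arterial pressures can be cross-checked against the standard clinical estimate MAP ≈ DBP + (SBP − DBP)/3. A minimal sketch, assuming this common formula (the study does not state how its means were computed):

```python
def mean_arterial_pressure(systolic, diastolic):
    """Standard clinical estimate: MAP ~ DBP + (SBP - DBP) / 3."""
    return diastolic + (systolic - diastolic) / 3

# Intradialysis means reported in the study (mm Hg):
map_standard = mean_arterial_pressure(110.2, 68.3)   # ~82, vs. 82.5 reported
map_automatic = mean_arterial_pressure(124.0, 75.9)  # ~92, vs. 91.5 reported
```

Both values agree with the reported mean arterial pressures to within a fraction of a mm Hg, consistent with this formula having been used.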


2019 ◽  
Vol 19 (05) ◽  
pp. 1941010
Author(s):  
Bálint Bodor ◽  
László Bencsik ◽  
Tamás Insperger

Understanding the mechanism of human balancing is a scientifically challenging task. In order to describe the nature of the underlying control mechanism, the control force has to be determined experimentally. A main feature of balancing tasks is that the open-loop system is unstable. Therefore, reconstruction of the trajectories from the measured control force is difficult, since measurement inaccuracies, noise and numerical errors increase exponentially with time. In order to overcome this problem, a new approach is proposed in this paper. In the presented technique, the solution of the linearized system is used first. As a second step, an optimization problem based on a variational principle is solved. A main advantage of the method is that there is no need for numerical differentiation of the measured data to calculate the control forces, which is the main source of numerical error. The method is demonstrated for the case of human stick balancing.
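A minimal sketch of why forward reconstruction is ill-conditioned, using the textbook linearized inverted pendulum x'' = (g/L)x rather than the paper's model: a small measurement error in the initial state is amplified by a factor e^(λt) with λ = √(g/L).

```python
import math

# Illustration only (not the paper's method): for the linearized inverted
# pendulum x'' = (g/L) x, an error e0 in the measured initial state grows
# like e0 * exp(lam * t) with lam = sqrt(g/L) under forward integration,
# which is why trajectory reconstruction from measured forces is hard.
g, L = 9.81, 1.0
lam = math.sqrt(g / L)

def error_growth(e0, t):
    return e0 * math.exp(lam * t)

# A 1 mm measurement error after 3 seconds:
print(error_growth(1e-3, 3.0))  # already on the order of 10 metres
```

This exponential amplification is exactly what the variational reformulation avoids by not integrating the measured data forward in time.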


2020 ◽  
Vol 9 (3) ◽  
pp. 147 ◽  
Author(s):  
Xi Kuai ◽  
Renzhong Guo ◽  
Zhijun Zhang ◽  
Biao He ◽  
Zhigang Zhao ◽  
...  

Georeferencing by place names (known as toponyms) is the most common way of associating textual information with geographic locations. While computers use numeric coordinates (such as longitude-latitude pairs) to represent places, people generally refer to places via their toponyms. Query by toponym is an effective way to find information about a geographic area. However, segmenting and parsing textual addresses to extract local toponyms is a difficult task in the geocoding field, especially in China. In this paper, a local spatial context-based framework is proposed to extract local toponyms and segment Chinese textual addresses. We collect urban points of interest (POIs) as an input data source; in this dataset, the textual addresses and geospatial position coordinates correspond on a one-to-one basis and can easily be used to explore the spatial distribution of local toponyms. The proposed framework involves two steps: address element identification and local toponym extraction. The first step identifies as many address element candidates as possible from the continuous string of each urban POI's textual address. The second step focuses on merging neighboring candidate pairs into local toponyms. A series of experiments are conducted to determine the thresholds for local toponym extraction based on precision-recall curves. Finally, we evaluate our framework by comparing its performance with three well-known Chinese word segmentation models. The comparative experimental results demonstrate that our framework achieves better performance than the other models.
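As a toy illustration of the second step (the authors' actual merging criterion and thresholds come from their precision-recall analysis and are not reproduced here), adjacent candidate pairs could be merged whenever they co-occur frequently enough across the POI addresses:

```python
from collections import Counter

# Toy sketch, not the authors' exact criterion: merge an adjacent pair of
# address-element candidates into one local toponym when the pair co-occurs
# at least min_pair_count times across the POI address corpus.

def merge_candidates(segmented_addresses, min_pair_count=2):
    pair_counts = Counter()
    for addr in segmented_addresses:
        for a, b in zip(addr, addr[1:]):
            pair_counts[(a, b)] += 1
    merged = []
    for addr in segmented_addresses:
        out, i = [], 0
        while i < len(addr):
            if i + 1 < len(addr) and pair_counts[(addr[i], addr[i + 1])] >= min_pair_count:
                out.append(addr[i] + addr[i + 1])  # merge the frequent pair
                i += 2
            else:
                out.append(addr[i])
                i += 1
        merged.append(out)
    return merged

addresses = [["Nanshan", "Keji", "Road"], ["Nanshan", "Keji", "Park"]]
print(merge_candidates(addresses))
# ("Nanshan", "Keji") occurs twice, so it is merged into one toponym
```

In the paper the evidence for merging is spatial (the coordinates attached to each POI), whereas this sketch uses plain co-occurrence counts as a stand-in.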


1993 ◽  
Vol 25 (03) ◽  
pp. 702-713 ◽  
Author(s):  
P. Leguesdron ◽  
J. Pellaumail ◽  
G. Rubino ◽  
B. Sericola

A new approach is used to obtain the transient probabilities of the M/M/1 queueing system. The first step of this approach deals with the generating function of the transient probabilities of the uniformized Markov chain associated with this queue. The second step consists of the inversion of this generating function. A new analytical expression of the transient probabilities of the M/M/1 queue is then obtained.
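For context, the uniformized chain mentioned in the first step can also be used numerically: the transient distribution is p(t) = Σ_k e^(−Λt)(Λt)^k/k! · π₀Pᵏ with Λ = λ + μ, where P is the transition matrix of the uniformized discrete-time chain. The sketch below truncates the state space at N, which is an approximation and not the authors' analytical inversion of the generating function:

```python
import math

# Uniformization of the M/M/1 queue: sum Poisson-weighted powers of the
# uniformized chain's transition matrix, applied to the initial distribution.
# State space truncated at N; series truncated at K terms.

def mm1_transient(lam, mu, t, n0=0, N=200, K=400):
    L = lam + mu
    p = [0.0] * (N + 1)
    p[n0] = 1.0                      # distribution of the uniformized chain
    out = [0.0] * (N + 1)
    w = math.exp(-L * t)             # Poisson weight for k = 0
    for k in range(K + 1):
        for n in range(N + 1):
            out[n] += w * p[n]
        q = [0.0] * (N + 1)          # one step of the uniformized chain
        for n in range(N + 1):
            up = lam / L if n < N else 0.0
            down = mu / L if n > 0 else 0.0
            q[n] += (1.0 - up - down) * p[n]
            if n < N:
                q[n + 1] += up * p[n]
            if n > 0:
                q[n - 1] += down * p[n]
        p = q
        w *= L * t / (k + 1)         # Poisson weight for k + 1
    return out

# For lam < mu and large t, the transient solution approaches the
# stationary distribution p_n = (1 - rho) * rho**n:
pt = mm1_transient(0.5, 1.0, t=50.0)
print(pt[0])  # close to 1 - rho = 0.5
```

The analytical expression derived in the paper replaces this numerical summation with a closed form obtained by inverting the generating function.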


2018 ◽  
Author(s):  
Matthias Heck ◽  
Alec van Herwijnen ◽  
Conny Hammer ◽  
Manuel Hobiger ◽  
Jürg Schweizer ◽  
...  

Abstract. We use a seismic monitoring system to automatically determine the avalanche activity at a remote field site near Davos, Switzerland. By using a recently developed approach based on hidden Markov models (HMMs), a machine learning algorithm, we were able to automatically identify avalanches in continuous seismic data by providing as little as one single training event. Furthermore, we implemented an operational method to provide near real-time classification results. For the 2016–2017 winter period, 117 events were automatically identified. Falsely classified events, such as airplanes and local earthquakes, were filtered using a new approach consisting of two additional classification steps. In the first step, we implemented a second HMM-based classifier at a second array 14 km away to automatically identify airplanes and earthquakes. By cross-checking the results of both arrays, we reduced the number of false classifications by about 50 %. In the second step, we used multiple signal classification (MUSIC), an array processing technique, to determine the direction of the source. Although avalanche events have a moving-source character, only small changes in source direction are typical for snow avalanches, whereas false classifications showed large changes in source direction and were therefore dismissed. From the 117 events detected during the 4-month period, we were able to identify 90 false classifications based on these two additional steps. The obtained avalanche activity based on the remaining 27 avalanche events was in line with visual observations performed in the area of Davos.
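The MUSIC-based filtering step can be caricatured as a stationarity test on the estimated back-azimuth; the 20° threshold below is illustrative, not taken from the paper:

```python
# Sketch of the direction-based filter: keep an event as an avalanche only
# if the estimated source back-azimuth stays nearly constant over the event;
# large swings indicate airplanes or other moving/false sources.

def max_azimuth_change(azimuths_deg):
    """Largest change (degrees) between consecutive azimuth estimates,
    accounting for the 360-degree wrap-around."""
    changes = []
    for a, b in zip(azimuths_deg, azimuths_deg[1:]):
        d = abs(a - b) % 360.0
        changes.append(min(d, 360.0 - d))
    return max(changes)

def is_avalanche(azimuths_deg, max_change_deg=20.0):
    return max_azimuth_change(azimuths_deg) <= max_change_deg

print(is_avalanche([80.0, 82.0, 85.0, 83.0]))    # True: quasi-stationary source
print(is_avalanche([10.0, 60.0, 120.0, 200.0]))  # False: fast-moving source
```

The wrap-around handling matters: a source near due north may jump between, say, 350° and 5°, which is a 15° change, not 345°.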


2021 ◽  
Author(s):  
Yishu Wang ◽  
Arnaud Mary ◽  
Marie-France Sagot ◽  
Blerina Sinaimeri

Abstract Background: Cophylogeny reconciliation is a powerful method for analyzing host-parasite (or host-symbiont) co-evolution. It models co-evolution as an optimization problem where the set of all optimal solutions may represent different biological scenarios which thus need to be analyzed separately. Despite the significant research done in the area, few approaches have addressed the problem of helping the biologist deal with the often huge space of optimal solutions. Results: In this paper, we propose a new approach to tackle this problem. We introduce three different criteria under which two solutions may be considered biologically equivalent, and then we propose polynomial-delay algorithms that enumerate only one representative per equivalence class (without listing all the solutions). Conclusions: Our results are of both theoretical and practical importance. Indeed, as shown by the experiments, we are able to significantly reduce the space of optimal solutions while still maintaining important biological information about the whole space.
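Conceptually, enumerating one representative per equivalence class amounts to keeping the first solution seen for each equivalence key. The sketch below filters an explicit solution list, which the paper's polynomial-delay algorithms deliberately avoid materializing; the event-count-vector key is one plausible equivalence criterion, not necessarily one of the paper's three:

```python
# Generic illustration of "one representative per equivalence class":
# yield the first solution encountered for each value of the key function.

def representatives(solutions, key):
    seen = set()
    for s in solutions:
        k = key(s)
        if k not in seen:
            seen.add(k)
            yield s

# Example: reconciliations summarized by an event-count vector
# (cospeciations, duplications, host switches, losses):
sols = [
    {"name": "s1", "events": (3, 1, 0, 2)},
    {"name": "s2", "events": (3, 1, 0, 2)},   # equivalent to s1
    {"name": "s3", "events": (2, 2, 1, 1)},
]
print([s["name"] for s in representatives(sols, key=lambda s: s["events"])])
# ['s1', 's3']
```

The point of polynomial delay is precisely that the next representative is produced in polynomial time without ever generating the (potentially exponential) full list being filtered here.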


2021 ◽  
Vol 247 ◽  
pp. 03014
Author(s):  
Shuai Qin ◽  
Qian Zhang ◽  
Liang Liang ◽  
Qingming He ◽  
Hongchun Wu

A two-step approach is proposed to accomplish high-fidelity whole-core resonance self-shielding calculation. As the first step, the slowing-down equation is solved directly at the pin-cell scale to simulate different operating conditions of the reactor. A resonance database is fitted using the results from the pin-cell calculation. Several techniques are used in the generation of the resonance database to estimate multiple types of resonance effects. The second step is the calculation of the practical whole-core problem using the resonance database obtained from the first step. The transport solver is embedded in both the first and the second step to establish the equivalence relationship between the fuel rods in the practical problem and the pin-cell of the first step. The numerical results show that the new approach has the capability to perform high-fidelity resonance calculations for practical problems.


2020 ◽  
Vol 10 (5) ◽  
pp. 1625
Author(s):  
Zhonggui Zhang ◽  
Yi Ming ◽  
Gangbing Song

In this paper we develop a new approach to directly detect crash hotspot intersections (CHIs) using two customized spatial weights matrices: the inverse network distance-band spatial weights matrix of intersections (INDSWMI) and the k-nearest distance-band spatial weights matrix between crashes and intersections (KDSWMCI). This new approach has three major steps. The first step is to build the INDSWMI by forming the road network, extracting the intersections from road junctions, and constructing the INDSWMI with road network constraints. The second step is to build the KDSWMCI by obtaining the adjacent crashes for each intersection. The third step is to perform intersection hotspot analysis (IHA) by using the Getis–Ord Gi* statistic with the INDSWMI and KDSWMCI to identify CHIs and test the Intersection Prediction Accuracy Index (IPAI). This approach is validated by comparing the IPAI obtained using OpenStreetMap (OSM) roads and intersection-related crashes (2008–2017) from Spencer, Iowa, USA. The findings show that higher prediction accuracy is achieved by the proposed approach in identifying CHIs.
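The Getis–Ord Gi* statistic used in the third step has a standard closed form; a minimal sketch on a generic weights matrix (the INDSWMI/KDSWMCI construction itself is not reproduced here):

```python
import numpy as np

# Getis-Ord Gi* for each site i given attribute values x and a spatial
# weights matrix W. W[i, j] includes j = i, hence the "star" variant.

def getis_ord_gi_star(x, W):
    x = np.asarray(x, dtype=float)
    n = x.size
    xbar = x.mean()
    s = np.sqrt((x ** 2).mean() - xbar ** 2)      # population std deviation
    wx = W @ x                                    # weighted local sums
    wsum = W.sum(axis=1)
    wsq = (W ** 2).sum(axis=1)
    denom = s * np.sqrt((n * wsq - wsum ** 2) / (n - 1))
    return (wx - xbar * wsum) / denom             # z-scores

# Five intersections with crash counts, binary contiguity weights plus self:
x = [1, 2, 10, 9, 1]
W = np.eye(5)
for i in range(4):
    W[i, i + 1] = W[i + 1, i] = 1.0
z = getis_ord_gi_star(x, W)
print(z.round(2))  # positive z at the 10/9 cluster (indices 2-3)
```

A large positive z marks a hotspot (high values surrounded by high values); in the paper, the weights come from network distances rather than the simple contiguity used here.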

