A Method to Locate Tree Positions Using Ultrawideband Technology

2021 ◽  
Vol 2021 ◽  
pp. 1-14
Author(s):  
Fangxing Yuan ◽  
Sheng Chen ◽  
Luming Fang ◽  
Siqing Zheng ◽  
Yuzhen Liu ◽  
...  

Tree position plays an important role in research on forest resources and ecological functions, and quickly and accurately obtaining tree position data has long been the focus of investigators. However, the classical method is time-consuming and laborious; thus, a convenient method of measuring tree position is needed. The primary achievements of this study include the following: (1) a device was designed for precise location of trees; (2) a new location algorithm was proposed for pentagonal localization based on the received signal strength indication and ultrawideband technology; and (3) a PC software application was developed for automatically storing and uploading tree position data. The device was applied to 10 circular plots with a diameter of 24 m to test the positioning speed and accuracy. The results showed that the tree positions could be accurately estimated. On the x- and y-axes, the biases were -3.94 and 3.36 cm, respectively, and the root mean square errors (RMSEs) were 28.39 and 28.53 cm, respectively. The mean error (Ed) between the estimated and reference distances was 36.13 cm, and the standard deviation was 16.67 cm. The device is inexpensive and easy to use and carry in the field; thus, it is suitable for locating trees in environments with complex terrain.
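The per-axis bias and RMSE figures reported above follow directly from paired estimated and reference coordinates; a minimal sketch (the three trees' coordinates below are hypothetical, not the study's data):

```python
import math

def bias_and_rmse(estimated, reference):
    """Per-axis bias (mean signed error) and RMSE between
    estimated and reference tree coordinates, in the same units."""
    errors = [e - r for e, r in zip(estimated, reference)]
    bias = sum(errors) / len(errors)
    rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
    return bias, rmse

# hypothetical x-coordinates (cm) for three trees
est_x = [102.0, 251.5, 399.0]
ref_x = [100.0, 255.0, 400.0]
print(bias_and_rmse(est_x, ref_x))
```

The same function applied separately to the x and y coordinate lists yields the two bias/RMSE pairs quoted in the abstract.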

2019 ◽  
Vol 3 (Supplement_1) ◽  
Author(s):  
Velarie Ansu ◽  
Stephanie Dickinson ◽  
Alyce Fly

Abstract Objectives To determine which digit and hand have the highest and lowest skin carotenoid scores, to compare inter- and intra-hand variability of digits, and to determine whether results are consistent with another subject. Methods Two subjects' first (F1), second (F2), third (F3), and fifth (F5) digits on both hands were measured for skin carotenoids with a Veggie Meter, three times on each of 18 days over a 37-day period. Data were subjected to ANOVA in a factorial treatment design to determine main effects for hand (2 levels), digit (4), and day (18), along with interactions. Differences between digits were determined by Tukey's post hoc test. Results There were significant hand × digit, hand × day, digit × day, and hand × digit × day interactions and significant simple main effects for hand, digit, and day (all P < 0.001). Mean square errors were 143.67 and 195.62 for subjects A and B, respectively, which were smaller than the mean squares for all main effects and interactions. The mean scores ± SD for the F1, F2, F3, and F5 digits of the right vs. left hands for subject A were F1: 357.13 ± 45.97 vs. 363.74 ± 46.94; F2: 403.17 ± 44.77 vs. 353.20 ± 44.13; F3: 406.76 ± 43.10 vs. 357.11 ± 45.13; and F5: 374.95 ± 53.00 vs. 377.90 ± 47.38. For subject B, the mean scores ± SD for the right vs. left hands were F1: 294.72 ± 61.63 vs. 280.71 ± 52.48; F2: 285.85 ± 66.92 vs. 252.67 ± 67.56; F3: 268.56 ± 57.03 vs. 283.22 ± 45.87; and F5: 288.18 ± 34.46 vs. 307.54 ± 40.04. The digits on the right hand of both subjects had higher carotenoid scores than those on the left hand, even though the subjects had different dominant hands. Subject A had higher skin carotenoid scores on the F3 and F2 digits of the right hand and F5 on the left hand. Subject B had higher scores on the F5 (right) and F1 (left) digits. Conclusions The variability due to hand, digit, and day was in each case greater than that of the 3 replicates within the digit-day for both volunteers. This indicates that the data were not completely random across readings when remeasuring the same finger. Different fingers displayed the highest carotenoid scores for each volunteer. A larger study with more subjects and a range of skin tones is needed to determine whether the reliability of measurements among digits of both hands is similar across the population. Funding Sources Indiana University.
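The within-digit-day replicate variability that the conclusion compares against can be summarized per (hand, digit, day) cell; a minimal sketch with made-up Veggie Meter readings (the values below are illustrative only, not the study's data):

```python
from statistics import mean, stdev

# hypothetical readings: 3 replicates per (hand, digit, day) cell
readings = {
    ("right", "F2", 1): [398, 405, 401],
    ("right", "F2", 2): [410, 407, 399],
    ("left",  "F2", 1): [350, 356, 348],
}

def cell_summary(readings):
    """Mean and SD per cell; the within-cell SD is the replicate
    variability against which hand/digit/day effects are compared."""
    return {k: (mean(v), stdev(v)) for k, v in readings.items()}

for cell, (m, s) in cell_summary(readings).items():
    print(cell, f"{m:.1f} ± {s:.1f}")
```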


Author(s):  
Dhaval Gohil ◽  
Nasser Mohammed ◽  
Anita Mahadevan ◽  
Nupur Pruthi

Abstract Objective To compare the histopathology of patent and nonpatent microvascular anastomoses using a rat femoral artery end-to-end anastomosis model. Materials and Methods In 15 Sprague–Dawley rats, end-to-end anastomosis was performed on the right femoral artery. The classical method was used in four cases and the one-way-up method in 11 cases. The animals were sacrificed after 2 weeks and the anastomoses were subjected to histopathology. The pathological changes in patent and nonpatent cases were compared. Results The immediate and delayed (2-week) patency rates were 86.7% and 66.7%, respectively. The mean follow-up was 3 months. At sacrifice, 5/15 anastomoses were not patent. Marked subintimal thickening was noted in 4/5 (80%) of the nonpatent group and was absent in the patent group. Severe loss or fibrosis of the tunica media and marked adventitial inflammation were noted in all nonpatent cases (5/5, 100%). Four of the five nonpatent cases had poor or indeterminate apposition; in contrast, good apposition was seen in 6/10 (60%) of the patent group. The mean clamp time and mean suturing time were significantly longer in the nonpatent group (69.2 and 53.8 minutes, respectively) than in the patent group (48.8 and 31.8 minutes, respectively). A single case that was initially nonpatent was found to have recanalized at 6 months. Conclusion Minimal intimal injury and reaction, minimal thinning of the tunica media, mild-to-moderate adventitial changes, good apposition, and equidistant sutures were associated with a successful microvascular anastomosis. Short vessel clamping and suturing times come with experience and dedicated practice in a skills laboratory.


Author(s):  
Barbara J. Kelso

A legibility study was performed to investigate the effects of scale factors, graduation marks, orientation of scales, and reading conditions on the speed and accuracy of reading moving-tape instruments. Each of 150 Air Force officers made 150 self-paced readings from slides of hand-drawn tape instruments. Error was expressed as the magnitude of deviation of a subject's verbal response from the set scale value. An analysis of variance was performed on the mean error scores, standard deviations of error, mean reaction times, and standard deviations of reaction times. The results clearly favored the 1 7/8 inch scale factor over the 1 3/8 inch and 2 3/8 inch scale factors. The use of 9 graduation marks was superior to 0, 1, 3, or 4 graduation marks. Reading conditions had little effect on performance. Horizontal scales were read more rapidly but no more accurately than vertical scales.
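The error score described above (magnitude of deviation of the verbal response from the set scale value) can be summarized as a mean and standard deviation per condition; a brief sketch with invented readings from one hypothetical subject:

```python
from statistics import mean, stdev

def reading_error_stats(responses, set_values):
    """Per-trial error = |verbal response - set scale value|;
    summarized as mean error and its standard deviation."""
    errors = [abs(r - s) for r, s in zip(responses, set_values)]
    return mean(errors), stdev(errors)

# hypothetical readings from one subject on one scale format
responses = [152, 148, 151, 155, 149, 150]
set_values = [150, 150, 150, 155, 150, 151]
print(reading_error_stats(responses, set_values))
```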


2018 ◽  
Vol 14 (06) ◽  
pp. 191
Author(s):  
Chao Huang ◽  
Yuang Mao

To further study the basic principle and localization process of the DV-Hop location algorithm, the location error of the traditional algorithm caused by the minimum hop count was analyzed and demonstrated in detail. RSSI ranging technology was introduced to modify the minimum-hop stage, improving the minimum hop count used by the DV-Hop algorithm. For the location error caused by the average hop distance, the hop distance of the original algorithm was optimized: the improved DV-Hop average-hop-distance algorithm modified the average range calculation by introducing the proportion of beacon nodes and an optimal threshold value. The optimizations of the two stages were then combined into an improved location algorithm based on hop-distance optimization, taking the advantages of both algorithms into account. Finally, the traditional DV-Hop location algorithm and the three improved location algorithms were simulated and analyzed from multiple angles by varying the beacon node ratio and the node communication radius. The experimental results showed that the improved algorithm was better than the original algorithm in positioning stability and positioning accuracy.
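The average-hop-distance stage that these improvements target can be sketched as follows. The beacon layout and hop counts below are hypothetical, and only the classical DV-Hop step is shown, not the RSSI refinement proposed in the paper:

```python
import math

def avg_hop_size(beacons, hops):
    """Classical DV-Hop step 2: a beacon's average hop size is the sum
    of its Euclidean distances to the other beacons divided by the sum
    of the minimum hop counts to them."""
    n = len(beacons)
    sizes = []
    for i in range(n):
        d = sum(math.dist(beacons[i], beacons[j]) for j in range(n) if j != i)
        h = sum(hops[i][j] for j in range(n) if j != i)
        sizes.append(d / h)
    return sizes

# hypothetical layout: three beacons and the minimum hop counts between them
beacons = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
hops = [[0, 4, 4], [4, 0, 6], [4, 6, 0]]
print(avg_hop_size(beacons, hops))
```

An unknown node multiplies the hop size it receives by its own minimum hop counts to estimate distances to the beacons, then trilaterates; it is this distance estimate that the paper's threshold and beacon-ratio corrections refine.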


2019 ◽  
Vol 4 (2) ◽  
pp. 363-370
Author(s):  
Romy Budhi Widodo ◽  
Mochamad Subianto ◽  
Grace Imelda

The domain of the activity is technology for society, and the focus is practical computer science for society. The background of our activity is the needs of YPK junior high school in Malang city, Indonesia: the school needed to develop a computer-based school report card and a daily grade card for teachers. The software development method was the spiral model, which consists of cycles of system identification, risk analysis, and enhancement of the prototype into an operational prototype. Evaluation of the product was based on IBM's Computer System Usability Questionnaire (CSUQ). The CSUQ, administered here on a 5-point Likert scale, contains three categories: 1) system usefulness (SYSUSE), 2) information quality (INFOQUAL), and 3) interface quality (INTERQUAL). The mean ranks, from greatest to lowest, were SYSUSE, INTERQUAL, and INFOQUAL, respectively. The SYSUSE category was superior to INFOQUAL (U = 3369.5, p = 0.0005). Overall, the user satisfaction score was 78.4%, which is in the "worthy" category.
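CSUQ-style scores can be aggregated from Likert responses per category; a minimal sketch, assuming the 5-point coding described above (1 = least satisfied, 5 = most satisfied) and with invented item groupings and responses:

```python
from statistics import mean

# hypothetical Likert responses (1-5), grouped by CSUQ category
responses = {
    "SYSUSE":    [5, 4, 4, 5, 4, 4, 5, 4],
    "INFOQUAL":  [4, 3, 4, 4, 3, 4, 4],
    "INTERQUAL": [4, 4, 5],
}

def category_means(responses):
    """Mean score per CSUQ category, the quantities compared by rank."""
    return {cat: mean(items) for cat, items in responses.items()}

def overall_percent(responses, scale_max=5):
    """Overall satisfaction as a percentage of the scale maximum."""
    all_items = [x for items in responses.values() for x in items]
    return 100 * mean(all_items) / scale_max

print(category_means(responses))
print(f"{overall_percent(responses):.1f}%")
```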


2020 ◽  
Vol 23 (04) ◽  
pp. 2050024
Author(s):  
ALEXANDER LIPTON

We use a powerful extension of the classical method of heat potentials, recently developed by the present author and his collaborators, to solve several significant problems of financial mathematics. We consider the following problems in detail: (a) calibrating the default boundary in the structural default framework to a constant default intensity; (b) calculating the default probability for a representative bank in the mean-field framework; and (c) finding the hitting time probability density of an Ornstein–Uhlenbeck process. Several other problems, including pricing American put options and finding optimal mean-reverting trading strategies, are mentioned in passing. In addition, two nonfinancial applications, the supercooled Stefan problem and the integrate-and-fire neuroscience problem, are briefly discussed.
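For problem (c), the hitting-time density of an Ornstein–Uhlenbeck process can at least be estimated numerically as a cross-check; a Monte Carlo sketch with an Euler–Maruyama discretization and arbitrary illustrative parameters (this is not the paper's semi-analytical heat-potential method):

```python
import math
import random

def ou_hitting_times(x0, barrier, kappa, theta, sigma,
                     dt=0.01, t_max=5.0, n_paths=500, seed=7):
    """Simulate dX = kappa*(theta - X) dt + sigma dW from x0 and record
    the first time each path crosses the barrier from above; a histogram
    of the returned times approximates the hitting-time density."""
    rng = random.Random(seed)
    times = []
    for _ in range(n_paths):
        x, t = x0, 0.0
        while t < t_max:
            x += kappa * (theta - x) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
            t += dt
            if x <= barrier:
                times.append(t)
                break
    return times  # paths that never hit within t_max are simply omitted

times = ou_hitting_times(x0=1.0, barrier=0.2, kappa=1.0, theta=0.5, sigma=0.4)
print(len(times), min(times))
```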


2014 ◽  
Vol 8 (3) ◽  
pp. 1069-1086 ◽  
Author(s):  
S. Lhermitte ◽  
J. Abermann ◽  
C. Kinnard

Abstract. Both satellite and ground-based broadband albedo measurements over rough and complex terrain show several limitations concerning feasibility and representativeness. To assess these limitations and understand the effect of surface roughness on albedo, firstly, an intrasurface radiative transfer (ISRT) model is combined with albedo measurements over different penitente surfaces on Glaciar Tapado in the semi-arid Andes of northern Chile. Results of the ISRT model show effective albedo reductions over the penitentes up to 0.4 when comparing the rough surface albedo relative to the albedo of the flat surface. The magnitude of these reductions primarily depends on the opening angles of the penitentes, but the shape of the penitentes and spatial variability of the material albedo also play a major role. Secondly, the ISRT model is used to reveal the effect of using albedo measurements at a specific location (i.e., apparent albedo) to infer the true albedo of a penitente field (i.e., effective albedo). This effect is especially strong for narrow penitentes, resulting in sampling biases of up to ±0.05. The sampling biases are more pronounced when the sensor is low above the surface, but remain relatively constant throughout the day. Consequently, it is important to use a large number of samples at various places and/or to locate the sensor sufficiently high in order to avoid this sampling bias of surface albedo over rough surfaces. Thirdly, the temporal evolution of broadband albedo over a penitente-covered surface is analyzed to place the experiments and their uncertainty into a longer temporal context. Time series of albedo measurements at an automated weather station over two ablation seasons reveal that albedo decreases early in the ablation season. These decreases stabilize from February onwards with variations being caused by fresh snowfall events. 
The 2009/2010 and 2011/2012 seasons differ notably, where the latter shows lower albedo values caused by larger penitentes. Finally, a comparison of the ground-based albedo observations with Landsat and MODIS (Moderate Resolution Imaging Spectroradiometer)-derived albedo showed that both satellite albedo products capture the albedo evolution with root mean square errors of 0.08 and 0.15, respectively, but also illustrate their shortcomings related to temporal resolution and spatial heterogeneity over small mountain glaciers.
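The apparent-versus-effective albedo sampling bias discussed above can be illustrated with a toy one-dimensional surface; everything below (the sinusoidal roughness profile standing in for penitentes, the sensor positions) is hypothetical:

```python
import math
from statistics import mean

def apparent_albedos(profile, n_samples):
    """Sample a 1-D albedo profile at n evenly spaced sensor positions;
    each sample plays the role of an 'apparent' albedo seen at one spot."""
    n = len(profile)
    step = n / n_samples
    return [profile[int(i * step)] for i in range(n_samples)]

# hypothetical sinusoidal albedo variation across penitente troughs and crests
profile = [0.45 + 0.15 * math.sin(2 * math.pi * i / 50) for i in range(500)]
effective = mean(profile)                 # area-average ("effective") albedo
apparent = apparent_albedos(profile, 3)   # sparse single-point samples
biases = [a - effective for a in apparent]
print(effective, biases)
```

With only three sensor positions the point samples deviate noticeably from the area average; averaging many positions drives the bias toward zero, which is the abstract's argument for many samples or a high sensor mount.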


1991 ◽  
Vol 127 ◽  
pp. 108-115
Author(s):  
W. Kosek ◽  
B. Kołaczek

Abstract The PTRF is based on 43 sites with 64 SSC collocation points with an optimum geographic distribution, which were selected from all stations of the ITRF89 according to the criterion of the minimum value of the errors of the 7 transformation parameters. The ITRF89 was computed by the IERS Terrestrial Frame Section at the Institut Geographique National (IGN) and contains 192 VLBI and SLR stations (points), of which 119 are collocation points. The PTRF has been compared with the ITRF89. The errors of the 7 transformation parameters between the PTRF and 18 individual SSCs, as well as the mean square errors of the station coordinates, are of the same order as those for the ITRF89. The transformation parameters between the ITRF89 and the PTRF are negligible, and their errors are of the order of 3 mm.
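The 7-parameter transformation referred to here is the Helmert similarity transformation, usually applied in its small-angle linearization; a sketch in which the station coordinates and parameter values are illustrative, not taken from the PTRF/ITRF89 comparison:

```python
def helmert7(point, tx, ty, tz, rx, ry, rz, scale_ppm):
    """Small-angle 7-parameter (Helmert) similarity transformation:
    translations in metres, rotations in radians, scale in parts per
    million. X' = (1+s) * R * X + T with R linearized for small angles."""
    x, y, z = point
    s = 1.0 + scale_ppm * 1e-6
    xr = s * (x - rz * y + ry * z) + tx
    yr = s * (rz * x + y - rx * z) + ty
    zr = s * (-ry * x + rx * y + z) + tz
    return (xr, yr, zr)

# hypothetical geocentric station coordinates (m) and millimetre-level
# translations, mirroring the near-zero parameters reported above
p = (4075580.0, 931855.0, 4801570.0)
print(helmert7(p, 0.003, -0.002, 0.001, 0.0, 0.0, 0.0, 0.0))
```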


1975 ◽  
Vol 29 (2) ◽  
pp. 175-188
Author(s):  
M. Mosaad Allam

In practice, photogrammetrists use a single-statistic reliability-interval criterion, based on the mean square errors, to judge the accuracy of adjustment of photogrammetric blocks. Even in cases where the practical and theoretical frequency distributions agree, such a test makes it possible to establish neither the closeness of their convergence nor the degree of their difference. In other words, to get a complete picture of the character of the distribution of errors in adjusted photogrammetric blocks, it is insufficient to investigate any single statistic. In the Research and Development Section of the Topographical Survey Directorate, a computer program (SABA) has been designed to analyze the errors of photogrammetric block adjustments, compute various statistical parameters, and check the sample distribution using the Kolmogorov criterion. Based on the decision taken, the correspondence between the empirical and theoretical distribution series is checked using the χ² criterion. The program divides the adjusted block into sub-blocks to make a comparative evaluation of their accuracies. In this case, in addition to the Kolmogorov and χ² tests, the program checks the reliability intervals of the means and mean square errors of the samples and uses Fisher's F criterion to check the hypothesis of the equality of dispersions. SABA is coded in Fortran IV and Compass for the CDC CYBER 74 and requires a central memory of 28K decimal words. SABA is the acronym for Statistical Analysis of Block Adjustment.
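The Kolmogorov check that SABA applies can be sketched as a one-sample D statistic, the largest gap between the empirical CDF of the residuals and a hypothesized continuous CDF; the residuals and the normal parameters below are invented:

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    """CDF of the normal distribution via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def kolmogorov_d(sample, cdf):
    """One-sample Kolmogorov statistic: max gap between the empirical
    CDF (stepping at each sorted observation) and a hypothesized CDF."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        d = max(d, abs((i + 1) / n - f), abs(f - i / n))
    return d

# hypothetical block-adjustment residuals (metres)
residuals = [-0.12, 0.05, -0.03, 0.08, 0.01, -0.07, 0.02, 0.10, -0.01, 0.04]
print(kolmogorov_d(residuals, lambda x: normal_cdf(x, 0.0, 0.07)))
```

The statistic is then compared against a critical value for the sample size; the χ² check described above would play the same role on binned residuals.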

