Random Polygons and Estimations of π

2019 · Vol 17 (1) · pp. 575-581
Author(s): Wen-Qing Xu, Linlin Meng, Yong Li

Abstract: In this paper, we study the approximation of π through the semiperimeter or area of a random n-sided polygon inscribed in a unit circle in ℝ². We show that, with probability 1, the approximation error goes to 0 as n → ∞, and is roughly sextupled when compared with the classical Archimedean approach of using a regular n-sided polygon. By combining both the semiperimeter and area of these random inscribed polygons, we also construct extrapolation improvements that can significantly speed up the convergence of these approximations.
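
Both estimators are straightforward to simulate. Below is a minimal NumPy sketch (ours, not the authors' code; function names are illustrative): n points are drawn uniformly on the unit circle, and the semiperimeter and area of the inscribed polygon follow from the arc gaps between consecutive points, since a chord subtending an arc Δ has length 2 sin(Δ/2) and the corresponding central triangle has area (1/2) sin Δ.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_polygon_estimates(n: int) -> tuple[float, float]:
    """Semiperimeter and area of a random n-gon inscribed in the unit circle."""
    theta = np.sort(rng.uniform(0.0, 2.0 * np.pi, n))
    # Arc gaps between consecutive vertices; they sum to 2*pi.
    gaps = np.diff(theta, append=theta[0] + 2.0 * np.pi)
    semiperimeter = np.sum(np.sin(gaps / 2.0))  # each chord has length 2*sin(gap/2)
    area = 0.5 * np.sum(np.sin(gaps))           # sum of central-triangle areas
    return float(semiperimeter), float(area)

for n in (10, 100, 1_000, 10_000):
    s, a = random_polygon_estimates(n)
    print(f"n={n:6d}  semiperimeter={s:.6f}  area={a:.6f}  (pi={np.pi:.6f})")
```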

2021 · Vol 9 (1) · pp. 241-249
Author(s): Shasha Wang, Wen-Qing Xu, Jitao Liu

We construct optimal extrapolation estimates of π based on random polygons generated by n independent points uniformly distributed on a unit circle in ℝ². While the semiperimeters and areas of these random n-gons converge to π almost surely and are asymptotically normal as n → ∞, in this paper we develop various extrapolation processes to further accelerate such convergence. By simultaneously considering the random n-gons and suitably constructed random 2n-gons and then optimizing over functionals of the semiperimeters and areas of these random polygons, we derive several new estimates of π with faster convergence rates. These extrapolation improvements are also shown to be asymptotically normal as n → ∞.
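
The paper's optimal functionals are derived from the joint asymptotics of the semiperimeters and areas; as a simpler stand-in, the sketch below applies a classical Richardson-type combination (4·S_2n − S_n)/3, which assumes a leading O(1/n²) error term, to a random n-gon and the 2n-gon obtained by inserting the midpoint of each arc. The midpoint construction and the weights are our illustrative assumptions, not the authors' optimized estimates.

```python
import numpy as np

rng = np.random.default_rng(1)

def semiperimeter(theta: np.ndarray) -> float:
    """Semiperimeter of the inscribed polygon with sorted vertex angles theta."""
    gaps = np.diff(theta, append=theta[0] + 2.0 * np.pi)
    return float(np.sum(np.sin(gaps / 2.0)))

def extrapolated_pi(n: int) -> tuple[float, float, float]:
    theta = np.sort(rng.uniform(0.0, 2.0 * np.pi, n))
    gaps = np.diff(theta, append=theta[0] + 2.0 * np.pi)
    # 2n-gon: insert the midpoint of every arc of the n-gon (our construction).
    theta2 = np.sort(np.concatenate([theta, theta + gaps / 2.0]) % (2.0 * np.pi))
    s_n, s_2n = semiperimeter(theta), semiperimeter(theta2)
    return s_n, s_2n, (4.0 * s_2n - s_n) / 3.0  # cancels the O(1/n^2) error term

for n in (10, 100, 1_000):
    s_n, s_2n, s_x = extrapolated_pi(n)
    print(f"n={n:5d}  S_n={s_n:.6f}  S_2n={s_2n:.6f}  extrapolated={s_x:.6f}")
```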


Author(s): P. Sabelnikov

Introduction. One of the directions associated with the identification and analysis of the shape of objects, their size, orientation, marking, and other geometric characteristics is contour analysis. Various methods of contour approximation are described in the literature. The proposed method is based on a well-known one: its essence lies in the sequential search for possible directions and end points of approximating straight-line segments belonging to the contour. The number of approximation nodes should be as small as possible, and the calculation is carried out only for the next point of the contour, without returning to check the approximation criterion against all previous points, so the computational complexity of the algorithm is proportional to the number of points in the contour. The purpose of the paper is to propose a method of piecewise linear approximation of the contours of objects in images that allows parallel computation with vector operations at all stages of computer processing. Results. The paper proposes an improved method for the piecewise linear approximation of a closed contour of an object in an image by a polygon whose vertices are points of the contour itself. The approximation criterion is that the distance from each point of the approximated section of the contour to the approximating segment must not exceed the approximation error. The method is oriented toward parallel computing with vector operations. A method for the parallel computation of integral vectors of extreme values of a number sequence is also proposed, enabling vector operations at all stages of the approximation. Conclusions. The proposed methods are implemented using vector operations and make it possible to speed up the solution of contour-analysis problems, as well as other similar problems, in real time. The gain in computing speed is proportional to the amount of data a vector processor can process simultaneously. The well-developed vector-instruction subsystems of Intel and ARM processors make it possible to apply the proposed computation methods in practice. Keywords: image, object contour, piecewise linear approximation, parallel computations, vector operations.
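
For concreteness, here is a minimal sketch of the approximation criterion itself (ours, not the paper's implementation): walk along the contour and fix a new polygon vertex as soon as some intermediate point lies farther than eps from the current approximating segment. Unlike the paper's method, this naive version re-checks intermediate points against the chord and uses an open contour, so it achieves neither the linear complexity nor the vectorized parallelism described above; it only illustrates the criterion.

```python
import numpy as np

def approximate_contour(points: np.ndarray, eps: float) -> list[int]:
    """Indices of the chosen polygon vertices along an open contour."""
    nodes = [0]
    anchor = 0
    for j in range(2, len(points)):
        a, b = points[anchor], points[j]
        chord = b - a
        length = np.linalg.norm(chord)
        rel = points[anchor + 1:j] - a
        # Perpendicular distance of the intermediate points to the line a-b.
        dist = np.abs(chord[0] * rel[:, 1] - chord[1] * rel[:, 0]) / length
        if dist.max() > eps:          # criterion violated: fix a node at j-1
            nodes.append(j - 1)
            anchor = j - 1
    nodes.append(len(points) - 1)
    return nodes

t = np.linspace(0.0, np.pi, 200)
contour = np.column_stack([np.cos(t), np.sin(t)])  # a half-circle of radius 1
print(approximate_contour(contour, eps=0.01))      # a handful of vertex indices
```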


2021 · Vol 1 (4 (109)) · pp. 21-30
Author(s): Anton Chubarov

Several models of programmed flight have been constructed to perform flight-path-optimization calculations in the design of tactical and anti-aircraft guided missiles. The developed models are based on determining interrelated programmed values of altitude and flight-path angle as functions of range, which are linked by a differential relationship. The combination of the altitude and flight-path-angle programs makes it possible to simulate the steady flight of a guided missile to the calculated endpoint using proportional-control methods. Good correspondence of the developed models to the physics of flight was shown by assessing how well they approximate the flight paths of anti-aircraft guided missiles obtained with other known models; the approximation error was less than 5 %. Compliance of the developed models of programmed flight with their intended purpose, and their advantage over the most common known models, was proved by optimizing the flight paths of an anti-aircraft guided missile: in most of the considered calculation cases, the value of the objective function was improved by up to 2.9 %. The flight paths were optimized using a genetic algorithm. The developed models have a simple algebraic form and a small number of control parameters, are presented in a ready-to-use form, and do not require refinement for a concrete task. This allows them to be implemented in design practice without spending much time, speeding up the calculation of optimal design variables and optimal flight paths of tactical and anti-aircraft guided missiles.
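
The differential relationship mentioned above is, in the usual point-mass kinematics, dh/dx = tan θ(x), where h is altitude, x is downrange distance, and θ is the flight-path angle. The sketch below integrates an invented smooth θ(x) program to recover the altitude profile; the paper's actual algebraic model forms are not reproduced in the abstract, so the profile, function names, and numbers are illustrative assumptions only.

```python
import numpy as np

def altitude_from_angle(theta_deg: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Integrate dh/dx = tan(theta) along the range axis (trapezoidal rule)."""
    slope = np.tan(np.radians(theta_deg))
    steps = 0.5 * (slope[1:] + slope[:-1]) * np.diff(x)
    return np.concatenate([[0.0], np.cumsum(steps)])

x = np.linspace(0.0, 20_000.0, 401)        # downrange distance, m (illustrative)
theta = 30.0 * np.exp(-x / 5_000.0) - 2.0  # climb fading into a shallow descent, deg
h = altitude_from_angle(theta, x)
print(f"peak altitude ~ {h.max():.0f} m at x ~ {x[h.argmax()]:.0f} m")
```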


Author(s): Brian Cross

A relatively new entry in the field of microscopy is the Scanning X-Ray Fluorescence Microscope (SXRFM). Using this type of instrument (e.g., the Kevex Omicron X-ray Microprobe), one can obtain multiple elemental x-ray images from the analysis of heterogeneous materials. The SXRFM obtains images by collimating an x-ray beam (e.g., to a 100 μm diameter) and then scanning the sample with a high-speed x-y stage. To speed up image acquisition, data are acquired "on the fly" by slew-scanning the stage along the x-axis, as in a TV or SEM scan. To reduce the overhead from fly-back, the images can be acquired by bi-directional scanning of the x-axis, which leaves very little overhead for re-positioning the sample stage. The image acquisition rate is dominated by the x-ray acquisition rate; the total x-ray image acquisition rate of the SXRFM is therefore very comparable to that of an SEM. Although the x-ray spatial resolution of the SXRFM is worse than that of an SEM (say, 100 vs. 2 μm), there are several other advantages.
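
The bi-directional ("boustrophedon") scan order is simple to state precisely: even rows are scanned left to right and odd rows right to left, so the stage never flies back to the start of a line. A small illustrative sketch (grid size and names are ours, not from the instrument's software):

```python
def serpentine_scan(nx: int, ny: int):
    """Yield (row, col) pixel positions in bi-directional (fly-back-free) order."""
    for row in range(ny):
        cols = range(nx) if row % 2 == 0 else range(nx - 1, -1, -1)
        for col in cols:
            yield row, col

print(list(serpentine_scan(4, 3)))
# row 0 scans left-to-right, row 1 right-to-left, row 2 left-to-right, ...
```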


Author(s): A. G. Jackson, M. Rowe

Diffraction intensities from intermetallic compounds are, in the kinematic approximation, proportional to the scattering amplitude of the element doing the scattering. More detailed calculations have shown that site symmetry and occupation by various atom species also affect the intensity in a diffracted beam [1]. Hence, by measuring the intensities of beams, or their ratios, the occupancy can be estimated. Measurement of the intensity values also allows structure calculations to be made to determine the spatial distribution of the potentials doing the scattering. Thermal effects are also present as a background contribution. Inelastic effects such as loss or absorption/excitation complicate the intensity behavior, and dynamical theory is required to estimate the intensity value. The dynamic range of currents in diffracted beams can be 10⁴ or 10⁵:1. Hence, detecting such information requires a means of collecting the intensity over a signal-to-noise range beyond that obtainable with a single film plate, which has a S/N of about 10³:1. Although such a collection system is not currently available, a simple system consisting of instrumentation on an existing STEM can be used as a proof of concept; it has a S/N of about 255:1, limited by the 8-bit pixel attributes used in the electronics. Use of 24-bit pixel attributes would easily allow the desired noise range to be attained in the processing instrumentation. The S/N of the scintillator used by the photoelectron sensor is about 10⁶:1, well beyond the S/N goal. The trade-off that must be made is the time for acquiring the signal: a pattern can be obtained in seconds using film plates, compared to 10 to 20 minutes using the digital scan. Parallel acquisition would, of course, speed up this process immensely.
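
The quoted S/N figures track pixel bit depth: an m-bit pixel attribute resolves 2^m − 1 levels above zero, so 8 bits give the 255:1 ceiling mentioned above, while 24 bits give about 1.7 × 10⁷:1, comfortably spanning the 10⁴-10⁵:1 beam-current range. A quick check:

```python
# m-bit pixel attributes resolve 2**m - 1 gray levels above zero.
for bits in (8, 16, 24):
    print(f"{bits:2d}-bit pixel attribute -> dynamic range {2**bits - 1:>10,} : 1")
# 8-bit  ->        255 : 1   (the STEM proof-of-concept limit quoted above)
# 24-bit -> 16,777,215 : 1   (well above the 10^4-10^5 : 1 beam-current range)
```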


2004 · Vol 63 (1) · pp. 17-29
Author(s): Friedrich Wilkening, Claudia Martin

Children 6 and 10 years of age and adults were asked how fast a toy car had to be to catch up with another car, the latter moving at a constant speed throughout. The speed change was required either after half of the time (linear condition) or half of the distance (nonlinear condition), and responses were given either on a rating scale (judgment condition) or by actually producing the motion (action condition). In the linear condition, the data patterns for both judgments and actions were in accordance with the normative rule at all ages. This was not true for the nonlinear condition, where children’s and adults’ judgment patterns and also children’s action patterns were linear, and only adults’ action patterns were in line with the nonlinearity principle. In discussing the reasons for the misconceptions and for the action-judgment dissociations, a claim is made for a new view on the development of children’s concepts of time and speed.
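
A brief worked version of the two conditions may help (notation ours): suppose the lead car moves at constant speed v for total time T over distance D = vT, the chasing car travels at v₁ before the switch and v₂ after it, and the two cars meet exactly at the end.

```latex
\begin{align*}
\textbf{Linear (switch at } t=\tfrac{T}{2}\textbf{):}\quad
  & v_1\tfrac{T}{2} + v_2\tfrac{T}{2} = vT
  \;\Longrightarrow\; v_2 = 2v - v_1,\\[4pt]
\textbf{Nonlinear (switch at } d=\tfrac{D}{2}\textbf{):}\quad
  & \frac{D}{2v_1} + \frac{D/2}{v_2} = T
  \;\Longrightarrow\; v_2 = \frac{v\,v_1}{2v_1 - v}.
\end{align*}
```

In the linear condition the required v₂ is a linear function of v₁, whereas in the nonlinear condition it is hyperbolic in v₁, which is the normative nonlinearity that only adults' actions respected.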


Nature · 2020 · Vol 584 (7820) · pp. 192-192
Author(s): Lucila Ohno-Machado, Hua Xu

Nature · 2005
Author(s): David Cyranoski
