SHAPE FROM TEXTURE USING LOCALLY SCALED POINT PROCESSES

2015, Vol 34 (3), pp. 161
Author(s): Eva-Maria Didden, Thordis Thorarinsdottir, Alex Lenkoski, Christoph Schnörr

Shape from texture refers to the extraction of 3D information from 2D images with irregular texture. This paper introduces a statistical framework for learning shape from texture in which convex texture elements in a 2D image are represented through a point process. First, the 2D image is preprocessed to generate a probability map corresponding to an estimate of the unnormalized intensity of the latent point process underlying the texture elements. The latent point process is then inferred from the probability map in a non-parametric, model-free manner. Finally, the 3D information is extracted from the point pattern by applying a locally scaled point process model in which the local scaling function represents the deformation caused by the projection of a 3D surface onto the 2D image.
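As a rough illustration of the geometric idea (not the authors' implementation), the sketch below simulates texture-element centres as an inhomogeneous Poisson process whose intensity is driven by a toy local scaling function: elements pack more densely where the surface recedes from the camera. The scaling function and all parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def scaling(y):
    """Toy local scaling: texture elements shrink toward the top of the
    image, mimicking a ground plane receding from the camera."""
    return 0.3 + 0.7 * y  # local scale in (0.3, 1.0]

# Intensity inversely proportional to the squared local scale: smaller
# (more distant) texture elements pack more densely per unit area.
lam0 = 500.0                              # baseline intensity
lam = lambda x, y: lam0 / scaling(y) ** 2

# Simulate by thinning a homogeneous Poisson process on [0, 1]^2.
lam_max = lam0 / scaling(0.0) ** 2        # upper bound of the intensity
n = rng.poisson(lam_max)
pts = rng.uniform(size=(n, 2))
keep = rng.uniform(size=n) < lam(pts[:, 0], pts[:, 1]) / lam_max
pts = pts[keep]
print(f"{pts.shape[0]} texture-element centres simulated")
```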

2014, Vol 26 (2), pp. 237-263
Author(s): Luca Citi, Demba Ba, Emery N. Brown, Riccardo Barbieri

Likelihood-based encoding models founded on point processes have received significant attention in the literature because of their ability to reveal the information encoded by spiking neural populations. We propose an approximation to the likelihood of a point-process model of neurons that holds under assumptions about the continuous-time process that are physiologically reasonable for neural spike trains: the presence of a refractory period, the predictability of the conditional intensity function, and its integrability. These properties apply to a large class of point processes arising in applications other than neuroscience. The proposed approach has several advantages over conventional ones. In particular, one can use standard fitting procedures for generalized linear models based on iteratively reweighted least squares while improving the accuracy of the approximation to the likelihood and reducing bias in the estimation of the parameters of the underlying continuous-time model. As a result, the proposed approach can use a larger bin size to achieve the same accuracy as conventional approaches would with a smaller bin size. This is particularly important when analyzing neural data with high mean and instantaneous firing rates. We demonstrate these claims on simulated and real neural spiking activity. By allowing a substantial increase in the usable bin size, our algorithm has the potential to lower the barrier to the use of point-process methods in an increasing number of applications.
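For orientation, the sketch below shows only the conventional baseline the paper improves on: the discretized point-process likelihood reduces to a Poisson GLM fit by iteratively reweighted least squares. The refined approximation proposed in the paper is not reproduced here; data and parameters are synthetic.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Simulate a spike train from a known conditional intensity
# lambda(t) = exp(b0 + b1 * stimulus(t)), then recover (b0, b1) with
# the conventional discretized likelihood: spike counts per bin are
# treated as Poisson with mean lambda_i * dt and fit by IRLS.
dt = 0.001                                  # 1 ms bins
t = np.arange(0, 50, dt)
stim = np.sin(2 * np.pi * 0.5 * t)
lam = np.exp(2.0 + 1.5 * stim)              # true intensity (spikes/s)
y = rng.poisson(lam * dt)                   # binned spike counts

X = sm.add_constant(stim)
fit = sm.GLM(y, X, family=sm.families.Poisson(),
             offset=np.log(dt) * np.ones_like(t)).fit()  # IRLS under the hood
print(fit.params)   # approximately [2.0, 1.5]
```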


2010, Vol 42 (2), pp. 347-358
Author(s): Jesper Møller, Frederic Paik Schoenberg

In this paper we describe methods for randomly thinning certain classes of spatial point processes. In the case of a Markov point process, the proposed method involves a dependent thinning of a spatial birth-and-death process, where clans of ancestors associated with the original points are identified, and where we simulate backwards and forwards in order to obtain the thinned process. In the case of a Cox process, a simple independent thinning technique is proposed. In both cases, the thinning results in a Poisson process if and only if the true Papangelou conditional intensity is used, and, thus, can be used as a graphical exploratory tool for inspecting the goodness-of-fit of a spatial point process model. Several examples, including clustered and inhibitive point processes, are considered.
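A minimal sketch of the Poisson special case, where the Papangelou conditional intensity coincides with the intensity function: retaining each point with probability inversely proportional to the (correct) model intensity yields a homogeneous Poisson process, which is easy to inspect graphically as a goodness-of-fit diagnostic. The intensity and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Fitted (here: true) intensity of an inhomogeneous Poisson model.
lam = lambda x, y: 400.0 * np.exp(-2.0 * x)
lam_min, lam_max = lam(1.0, 0.0), lam(0.0, 0.0)

# Simulate the pattern on [0, 1]^2 by thinning a homogeneous process.
n = rng.poisson(lam_max)
pts = rng.uniform(size=(n, 2))
pts = pts[rng.uniform(size=n) < lam(pts[:, 0], pts[:, 1]) / lam_max]

# Diagnostic thinning: keep each point with probability lam_min / lam(x).
# If the model intensity is correct, the retained points form a
# homogeneous Poisson process of rate lam_min, which is easy to test.
keep = rng.uniform(size=len(pts)) < lam_min / lam(pts[:, 0], pts[:, 1])
residual = pts[keep]
print(len(residual), "points retained; expected about", round(lam_min))
```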


2020, Vol 0 (0)
Author(s): Devan G. Becker, Douglas G. Woolford, Charmaine B. Dean

Spatial point processes have been successfully used to model the relative efficiency of shot locations for each player in professional basketball games. Those analyses were possible because each player makes enough baskets to reliably fit a point process model. Goals in hockey are rare enough that a point process cannot be fit to each player's goal locations, so novel techniques are needed to obtain measures of shot efficiency for each player. A log-Gaussian Cox process (LGCP) is used to model all shot locations, including goals, of each NHL player who took at least 500 shots during the 2011–2018 seasons. Each player's LGCP surface is treated as an image, and these images are then used in an unsupervised statistical learning algorithm that decomposes them into a linear combination of spatial basis functions. The coefficients of these basis functions are shown to be a very useful tool for comparing players. To incorporate goals, the locations of all shots that resulted in a goal are treated as a "perfect player" and used in the same algorithm (goals are further split into perfect forwards, perfect centres and perfect defence). These perfect players are compared to other players as a measure of shot efficiency. This analysis provides a map of common shooting locations, identifies regions with the most goals relative to the number of shots, and demonstrates how each player's shot locations differ from scoring locations.
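A hedged sketch of the decomposition step, assuming non-negative matrix factorization (scikit-learn's NMF) stands in for the unsupervised learning algorithm, and synthetic Gaussian bumps stand in for fitted LGCP intensity surfaces:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(3)

# Placeholder for fitted LGCP intensity surfaces: one flattened
# 40x40 image per player (real surfaces would come from LGCP fits).
n_players, h, w = 60, 40, 40
yy, xx = np.mgrid[0:h, 0:w]
bases_true = [np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / 60.0)
              for cx, cy in [(10, 30), (30, 30), (20, 8)]]
coef_true = rng.gamma(2.0, 1.0, size=(n_players, 3))
surfaces = coef_true @ np.stack([b.ravel() for b in bases_true])
surfaces += rng.gamma(1.0, 0.01, size=surfaces.shape)   # noise

# Decompose every surface into a linear combination of shared
# non-negative spatial basis functions.
model = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
coefs = model.fit_transform(surfaces)        # players x components
basis = model.components_.reshape(3, h, w)   # spatial basis functions

# Players can now be compared in the low-dimensional coefficient space,
# e.g. by distance to a "perfect player" built from goal locations only.
print(coefs.shape, basis.shape)
```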


2010, Vol 29 (3), pp. 133
Author(s): Michaela Prokešová

In the point process literature, by far the most popular option for introducing inhomogeneity into a point process model is location-dependent thinning (resulting in a second-order intensity-reweighted stationary point process). This produces a very tractable model, and several fast estimation procedures are available. Nevertheless, this model dilutes the interaction (or the geometrical structure) of the original homogeneous model in a particular way. For Markov point processes, several alternative inhomogeneous models have been suggested and investigated in the literature. This is not the case for Cox point processes, the canonical models for clustered point patterns. In this contribution, we discuss several other options for defining inhomogeneous Cox point process models that result in point patterns with different types of geometric structure. We further investigate possible parameter estimation procedures for such models.
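For concreteness, the sketch below implements the construction the contribution takes as its point of departure: a homogeneous Thomas (cluster Cox) process subjected to location-dependent thinning, producing a second-order intensity-reweighted stationary pattern. All parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)

# Homogeneous Thomas process on [0, 1]^2: Poisson parents, Gaussian
# offspring displacements around each parent.
kappa, mu, sigma = 25.0, 10.0, 0.03   # parent rate, mean offspring, spread
parents = rng.uniform(size=(rng.poisson(kappa), 2))
offspring = np.concatenate([
    p + sigma * rng.standard_normal((rng.poisson(mu), 2)) for p in parents
])
offspring = offspring[np.all((offspring >= 0) & (offspring <= 1), axis=1)]

# Location-dependent thinning: the retention probability p(x, y)
# dilutes the homogeneous cluster structure into an inhomogeneous,
# second-order intensity-reweighted stationary pattern.
p = lambda x, y: 0.1 + 0.9 * x        # inhomogeneity along x
keep = rng.uniform(size=len(offspring)) < p(offspring[:, 0], offspring[:, 1])
thinned = offspring[keep]
print(len(offspring), "->", len(thinned), "points after thinning")
```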


2020
Author(s): Wei Zhang, Joseph D. Chipperfield, Janine B. Illian, Pierre Dupont, Cyril Milleret, ...

Spatial capture-recapture (SCR) is a popular method for estimating the abundance and density of wildlife populations. A standard SCR model consists of two sub-models: one for the activity centers of individuals and the other for the detections of each individual conditional on its activity center. So far, the detection sub-model of most SCR models is designed for sampling situations where fixed trap arrays are used to detect individuals.

Non-invasive genetic sampling (NGS) is widely applied in SCR studies. Using NGS methods, one often searches the study area for potential sources of DNA such as hairs and faeces, and records the locations of these samples. To analyse such data with SCR models, investigators usually impose an artificial detector grid and project detections to the nearest detector. However, there is a trade-off between computational efficiency (fewer detectors) and spatial accuracy (more detectors) when using this method.

Here, we propose a point process model for the detection process of SCR studies using NGS. The model better reflects the spatially continuous detection process and allows all spatial information in the data to be used without approximation error. As in many SCR models, we also use a point process model for the activity centers of individuals. The resulting hierarchical point process model enables estimation of total population size without imputing unobserved individuals via data augmentation, which can be computationally cumbersome. We write custom distributions for these spatial point processes and fit the SCR model in a Bayesian framework using Markov chain Monte Carlo in the R package nimble.

Simulations indicate good performance of the proposed model for parameter estimation. We demonstrate the application of the model in a real-life scenario by fitting it to NGS data on female wolverines (Gulo gulo) collected in three counties of Norway during the winter of 2018/19. Our model estimates the density of female wolverines at 9.53 (95% CI: 8–11) per 10,000 km² in the study area.
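A sketch of the kind of spatially continuous detection model described here, assuming a half-normal detection intensity (a common SCR choice; the paper's exact specification may differ) and a simple quadrature for the integral over the searched region:

```python
import numpy as np

def detection_intensity(x, centre, lam0, sigma):
    """Half-normal detection intensity lambda(x | s): the rate at which
    DNA samples of an individual with activity centre s are found at
    location x. (An assumed, commonly used SCR form.)"""
    d2 = np.sum((x - centre) ** 2, axis=-1)
    return lam0 * np.exp(-d2 / (2.0 * sigma ** 2))

def log_lik_individual(detections, centre, lam0, sigma, area_integral):
    """Poisson point-process log-likelihood of one individual's
    detections, conditional on its activity centre. `area_integral` is
    the integral of lambda(. | centre) over the searched region."""
    ll = np.sum(np.log(detection_intensity(detections, centre, lam0, sigma)))
    return ll - area_integral

# Toy usage: three DNA samples found around a centre at (0.5, 0.5).
dets = np.array([[0.48, 0.52], [0.55, 0.50], [0.40, 0.47]])
centre = np.array([0.5, 0.5])

# Quadrature approximation of the integral over the unit square.
g = np.linspace(0, 1, 200)
gx, gy = np.meshgrid(g, g)
grid = np.column_stack([gx.ravel(), gy.ravel()])
integral = detection_intensity(grid, centre, 2.0, 0.1).mean()  # area = 1
print(log_lik_individual(dets, centre, 2.0, 0.1, integral))
```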


1996, Vol 28 (2), pp. 339-339
Author(s): Francisco Montes, Jorge Mateu

Parameter estimation for a two-dimensional point pattern is difficult because most of the available stochastic models have intractable likelihoods ([2]). An exception is the class of Gibbs or Markov point processes ([1], [5]), where the likelihood typically forms an exponential family and is given explicitly up to a normalising constant. However, the latter is not known analytically, so parameter estimates must be based on approximations ([3], [6], [7]). In this paper we compare the techniques available in the literature for approximating the maximum likelihood estimate (MLE). Two stochastic methods are illustrated in detail: a Newton-Raphson algorithm ([7]) and the Robbins-Monro procedure ([8]). We use a very simple point process model, the Strauss process ([4]), to test and compare these approximations.
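As a sketch of the Robbins-Monro idea in this setting: with a fixed number of points, the Strauss model is an exponential family in theta = log(gamma) with sufficient statistic s(x), the number of R-close pairs, so the MLE solves E_theta[s] = s_obs and can be approached by stochastic approximation. The fixed-n Metropolis simulator below is a crude stand-in, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
R, N = 0.08, 50   # interaction radius, fixed number of points

def n_close_pairs(pts):
    """Sufficient statistic s(x): number of pairs closer than R."""
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    return int(np.sum(np.triu(d < R, k=1)))

def simulate_strauss(theta, n_sweeps=50):
    """Crude fixed-n Metropolis sampler for a conditional Strauss
    process with density proportional to exp(theta * s(x))."""
    pts = rng.uniform(size=(N, 2))
    s = n_close_pairs(pts)
    for _ in range(n_sweeps * N):
        i = rng.integers(N)
        old = pts[i].copy()
        pts[i] = rng.uniform(size=2)
        s_new = n_close_pairs(pts)
        if np.log(rng.uniform()) < theta * (s_new - s):
            s = s_new
        else:
            pts[i] = old
    return s

# Robbins-Monro: drive the simulated statistic toward the observed one.
s_obs = 20                          # pretend observed pair count
theta = 0.0
for k in range(1, 31):
    theta += (s_obs - simulate_strauss(theta)) / (10.0 * k)
print("theta ~", round(theta, 3), "gamma ~", round(np.exp(theta), 3))
```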


2016, Vol 2016, pp. 1-12
Author(s): Iswar Das, Alfred Stein

Landslides are common but complex natural hazards. They occur on the Earth's surface following a mass movement process. This study applies the multitype Strauss point process model to analyze the spatial distributions of small and large landslides along with geoenvironmental covariates. It addresses landslides as a set of irregularly distributed point-type locations within a spatial region. Their intensity and spatial interactions are analyzed by means of distance correlation functions, model fitting, and simulation. As a dataset we use 28 years of landslide occurrences from a landslide-prone road corridor in the Indian Himalayas. The landslides are investigated for their spatial character, that is, whether they show inhibition or occur as a regular or a clustered point pattern, and for their interaction with landslides in the neighbourhood. Results show that the covariates lithology, land cover, road buffer, drainage density, and terrain units significantly improved model fitting. A comparison with logistic regression output showed superior prediction performance for the multitype Strauss model. We also compared results with the multitype/hard-core Strauss point process model, which further improved the modeling. Results from the study can be used to generate landslide susceptibility scenarios. The paper concludes that the multitype Strauss point process model enriches the set of statistical tools for comprehensively analyzing landslide data.
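For reference, a sketch of the unnormalized multitype Strauss density with two marks (say, small and large landslides); all interaction parameters are invented:

```python
import numpy as np

def log_unnorm_density(pts, marks, log_beta, log_gamma, r):
    """Log unnormalized multitype Strauss density:
        f(x) ~ prod_i beta[m_i] * prod_{i<j} gamma[m_i, m_j]^{1(d_ij < r[m_i, m_j])}
    pts: (n, 2) locations; marks: (n,) type indices (e.g. 0 = small,
    1 = large landslide); log_beta: (k,) first-order terms;
    log_gamma, r: (k, k) pairwise interaction strengths and radii."""
    val = np.sum(log_beta[marks])
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    i, j = np.triu_indices(len(pts), k=1)
    close = d[i, j] < r[marks[i], marks[j]]
    return val + np.sum(log_gamma[marks[i], marks[j]] * close)

# Toy check: two types with cross-type inhibition (gamma_01 < 1).
pts = np.array([[0.10, 0.10], [0.12, 0.11], [0.80, 0.80]])
marks = np.array([0, 1, 0])
log_beta = np.log(np.array([100.0, 20.0]))
log_gamma = np.log(np.array([[1.0, 0.3], [0.3, 0.8]]))
r = np.full((2, 2), 0.05)
print(log_unnorm_density(pts, marks, log_beta, log_gamma, r))
```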


2001, Vol 13 (4), pp. 717-749
Author(s): M. R. Jarvis, P. P. Mitra

The spectrum and coherency are useful quantities for characterizing the temporal correlations and functional relations within and between point processes. This article begins with a review of these quantities, their interpretation, and how they may be estimated. A discussion of how to assess the statistical significance of features in these measures is included. In addition, new work is presented that builds on the framework established in the review section. This work investigates how the estimates and their error bars are modified by finite sample sizes. Finite-sample corrections are derived based on a doubly stochastic inhomogeneous Poisson process model in which the rate functions are drawn from a low-variance Gaussian process. It is found that, in contrast to continuous processes, the variance of the estimators cannot be reduced by smoothing beyond a scale set by the number of point events in the interval. Alternatively, the degrees of freedom of the estimators can be thought of as bounded from above by the expected number of point events in the interval. Further new work describing and illustrating a method for detecting the presence of a line in a point process spectrum is also presented, corresponding to the detection of a periodic modulation of the underlying rate. This work demonstrates that a known statistical test, applicable to continuous processes, applies with little modification to point process spectra and is of utility in studying a point process driven by a continuous stimulus. Although the material discussed is of general applicability to point processes, attention is confined to sequences of neuronal action potentials (spike trains), the motivation for this work.
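A minimal multitaper sketch of a point-process spectrum, using DPSS tapers from SciPy on a binned synthetic spike train with a 10 Hz rate modulation; the article's finite-sample corrections and line test are not reproduced here.

```python
import numpy as np
from scipy.signal.windows import dpss

rng = np.random.default_rng(6)

# Binned spike train: inhomogeneous Poisson with a 10 Hz rate modulation.
fs, T = 1000.0, 20.0                  # 1 ms bins, 20 s
t = np.arange(0, T, 1 / fs)
rate = 30.0 * (1.0 + 0.5 * np.sin(2 * np.pi * 10.0 * t))
x = rng.poisson(rate / fs).astype(float)

# Multitaper estimate: average periodograms over K orthogonal DPSS
# tapers with time-bandwidth product NW. Subtract the mean rate so the
# DC component does not leak into nearby frequencies.
NW, K = 4, 7
tapers = dpss(len(x), NW, K)          # (K, n) array of tapers
xc = x - x.mean()
spectra = np.abs(np.fft.rfft(tapers * xc, axis=1)) ** 2 / fs
S = spectra.mean(axis=0)
freqs = np.fft.rfftfreq(len(x), 1 / fs)

# The 10 Hz rate modulation shows up as a spectral line on a flat
# Poisson floor set by the mean rate (about 30 events/s here).
mask = freqs > 1
print("peak near", freqs[mask][np.argmax(S[mask])], "Hz")
```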


2021
Author(s): Edith Gabriel, Francisco Rodriguez-Cortes, Jérôme Coville, Jorge Mateu, Joël Chadoeuf

Seismic networks provide data that are used as a basis both for public safety decisions and for scientific research. Their configuration affects data completeness, which in turn critically affects several seismological research targets (e.g., earthquake prediction, seismic hazard...). In this context, a key question is how to map earthquake density in seismogenic areas from censored data, or even in areas not covered by the network. We propose to predict the spatial distribution of earthquakes from the knowledge of presence locations and geological relationships, taking into account any interactions between records. Namely, in a more general setting, we aim to estimate the intensity function of a point process, conditional on its censored realization, as in geostatistics for continuous processes. We define a predictor as the best linear unbiased combination of the observed point pattern. We show that the weight function associated with the predictor is the solution of a Fredholm equation of the second kind. Both the kernel and the source term of the Fredholm equation are related to the first- and second-order characteristics of the point process through the intensity and the pair correlation function. Results are presented and illustrated on simulated non-stationary point processes and on real data for mapping Hellenic seismicity in a region with unreliable and incomplete records.
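A sketch of the numerical core, assuming a generic Nyström discretization of a Fredholm equation of the second kind; the kernel and source below only mimic the structure described (built from an assumed intensity and pair correlation), not the paper's actual derivation.

```python
import numpy as np

# Nystrom discretization of a Fredholm equation of the second kind,
#   w(x) + int k(x, y) w(y) dy = s(x),
# on a 1-D domain. Kernel and source are placeholders built from an
# assumed intensity lam(.) and pair correlation g(.).
n = 400
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]

lam = 50.0 + 40.0 * x                        # assumed intensity
g = lambda d: 1.0 + 0.5 * np.exp(-d / 0.1)   # assumed pair correlation

D = np.abs(x[:, None] - x[None, :])
K = lam[None, :] * (g(D) - 1.0)              # placeholder kernel form
s = lam.copy()                               # placeholder source term

# Solve (I + K * h) w = s for the predictor weights.
w = np.linalg.solve(np.eye(n) + K * h, s)
print(w[:5])
```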

