A general-purpose framework for parallel processing of large-scale LiDAR data

2017 ◽  
Vol 11 (1) ◽  
pp. 26-47 ◽  
Author(s):  
Zhenlong Li ◽  
Michael E. Hodgson ◽  
Wenwen Li
Impact ◽  
2019 ◽  
Vol 2019 (10) ◽  
pp. 44-46
Author(s):  
Masato Edahiro ◽  
Masaki Gondo

The pace of technological advancement is ever-increasing, and intelligent systems, such as those found in robots and vehicles, have become larger and more complex. These intelligent systems have a heterogeneous structure, comprising a mixture of modules, such as artificial intelligence (AI) and powertrain control modules, that facilitate large-scale numerical calculation and real-time periodic processing functions. Information technology expert Professor Masato Edahiro, from the Graduate School of Informatics at Nagoya University in Japan, explains that concurrent advances in semiconductor research have led to the miniaturisation of semiconductors, allowing a greater number of processors to be mounted on a single chip and increasing potential processing power. 'In addition to general-purpose processors such as CPUs, a mixture of multiple types of accelerators such as GPGPU and FPGA has evolved, producing a more complex and heterogeneous computer architecture,' he says. Edahiro and his partners have been working on eMBP, a model-based parallelizer (MBP) that offers a mapping system as an efficient way of automatically generating parallel code for multi- and many-core systems. Once the hardware description is written, eMBP can bridge the gap between software and hardware, giving hardware vendors an efficient ecosystem while sparing software vendors the need to adapt their code to each particular platform.


1983 ◽  
Vol 38 ◽  
pp. 1-9
Author(s):  
Herbert F. Weisberg

We are now entering a new era of computing in political science. The first era was marked by punched-card technology. Initially, the most sophisticated analyses possible were frequency counts and tables produced on a counter-sorter, a machine that specialized in chewing up data cards. By the early 1960s, batch processing on large mainframe computers became the predominant mode of data analysis, with turnaround times of up to a week. By the late 1960s, turnaround time had been cut to a matter of minutes, and OSIRIS and then SPSS (and more recently SAS) were developed as general-purpose data analysis packages for the social sciences. Even today, use of these packages in batch mode remains one of the most efficient means of carrying out large-scale data analysis.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Seyed Hossein Jafari ◽  
Amir Mahdi Abdolhosseini-Qomi ◽  
Masoud Asadpour ◽  
Maseud Rahgozar ◽  
Naser Yazdani

Abstract. The entities of real-world networks are connected via different types of connections (i.e., layers). The task of link prediction in multiplex networks is to find missing connections based on both intra-layer and inter-layer correlations. Our observations confirm that in a wide range of real-world multiplex networks, from social to biological and technological, a positive correlation exists between connection probability in one layer and similarity in other layers. Accordingly, a similarity-based, automatic, general-purpose multiplex link prediction method, SimBins, is devised that quantifies the amount of connection uncertainty based on observed inter-layer correlations in a multiplex network. Moreover, SimBins enhances the prediction quality in the target layer by incorporating the effect of link overlap across layers. Applying SimBins to various datasets from diverse domains, we find that it outperforms the compared methods (both baseline and state-of-the-art) in most instances when predicting links. Furthermore, SimBins imposes only minor computational overhead on the base similarity measures, making it a potentially fast method suitable for large-scale multiplex networks.
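The inter-layer correlation idea can be illustrated with a toy sketch. This is not the authors' SimBins implementation: the node names, the two layers, and the use of common-neighbours similarity are all invented for illustration. Node pairs that are unconnected in a target layer are scored by their similarity in another layer.

```python
# Toy illustration of inter-layer link prediction in a multiplex network.
# Not the SimBins algorithm; layers and similarity measure are invented.

def common_neighbors(adj, u, v):
    """Similarity of u and v within one layer: count of shared neighbors."""
    return len(adj.get(u, set()) & adj.get(v, set()))

def rank_candidate_links(target_layer, aux_layer, nodes):
    """Rank node pairs absent from the target layer by their similarity
    in the auxiliary layer (higher score = more likely missing link)."""
    candidates = []
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            if v not in target_layer.get(u, set()):
                candidates.append((common_neighbors(aux_layer, u, v), u, v))
    return sorted(candidates, reverse=True)

# Two layers over the same node set (adjacency as dict of sets).
social = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}, "d": set()}
work   = {"a": {"b", "d"}, "b": {"a", "d"}, "c": set(), "d": {"a", "b"}}

ranked = rank_candidate_links(work, social, ["a", "b", "c", "d"])
# Pairs similar in the social layer rank above dissimilar ones.
```

A real method would additionally calibrate how strongly each auxiliary layer correlates with the target layer before trusting its similarity scores.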


i-com ◽  
2020 ◽  
Vol 19 (2) ◽  
pp. 139-151
Author(s):  
Thomas Schmidt ◽  
Miriam Schlindwein ◽  
Katharina Lichtner ◽  
Christian Wolff

Abstract. Due to progress in affective computing, various forms of general-purpose sentiment/emotion recognition software have become available. However, such tools are rarely employed in usability engineering (UE) to measure the emotional state of participants. We investigate whether sentiment/emotion recognition software is beneficial for gathering objective and intuitive data that can predict usability similarly to traditional usability metrics. We present the results of a UE project examining this question for three modalities: text, speech and face. We performed a large-scale usability test (N = 125) with a counterbalanced within-subject design using two websites of varying usability. We identified a weak but significant correlation between text-based sentiment analysis of the text acquired via thinking aloud and SUS scores, as well as a weak positive correlation between the proportion of neutrality in users' voices and SUS scores. However, for the majority of the output of the emotion recognition software, we could not find any significant results. Emotion metrics could not be used to successfully differentiate between the two websites of varying usability. Regression models, whether unimodal or multimodal, could not predict usability metrics. We discuss reasons for these results and how to continue this research with more sophisticated methods.
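The reported text-sentiment/SUS relationship is a bivariate correlation. A minimal sketch with invented per-participant scores (not the study's data) shows how such a Pearson correlation could be computed:

```python
# Sketch: Pearson correlation between a per-participant sentiment score
# and the SUS usability score. All values below are synthetic.
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical scores for six participants.
sentiment = [0.1, 0.3, 0.2, 0.6, 0.5, 0.8]   # e.g. mean text sentiment
sus       = [40, 55, 35, 70, 72, 80]          # SUS score, 0-100 scale

r = pearson_r(sentiment, sus)
```

In a study like this one, the significance of r would also be tested against the sample size (N = 125) before claiming even a weak effect.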


2021 ◽  
Vol 231 ◽  
pp. 110626
Author(s):  
Marko Bizjak ◽  
Borut Žalik ◽  
Gorazd Štumberger ◽  
Niko Lukač

2021 ◽  
Vol 13 (13) ◽  
pp. 2473
Author(s):  
Qinglie Yuan ◽  
Helmi Zulhaidi Mohd Shafri ◽  
Aidi Hizami Alias ◽  
Shaiful Jahari Hashim

Automatic building extraction has been applied in many domains. It remains a challenging problem, however, because of complex scenes and multiscale objects. Deep learning algorithms, especially fully convolutional neural networks (FCNs), have shown more robust feature extraction ability than traditional remote sensing data processing methods. However, hierarchical features from encoders with a fixed receptive field are weak at capturing global semantic information. Local features in multiscale subregions cannot establish contextual interdependence and correlation, especially for large-scale building areas, which can cause fragmentary extraction results due to intra-class feature variability. In addition, low-level features carry accurate, fine-grained spatial information for tiny building structures but lack refinement and selection, and the semantic gap between across-level features hinders feature fusion. To address these problems, this paper proposes an FCN framework based on the residual network and provides a training pattern for multi-modal data that combines the advantages of high-resolution aerial images and LiDAR data for building extraction. Two novel modules are proposed for the optimization and integration of multiscale and across-level features. In particular, a multiscale context optimization module is designed to adaptively generate feature representations for different subregions and effectively aggregate global context. A semantic-guided spatial attention mechanism is introduced to refine shallow features and alleviate the semantic gap. Finally, hierarchical features are fused via the feature pyramid network. Compared with other state-of-the-art methods, experimental results demonstrate superior performance, with 93.19 IoU and 97.56 OA on the WHU dataset and 94.72 IoU and 97.84 OA on the Boston dataset, showing that the proposed network improves accuracy and achieves better performance for building extraction.
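The two reported evaluation metrics, intersection over union (IoU) and overall accuracy (OA), are standard for segmentation and can be sketched for a binary building mask (1 = building, 0 = background). The pixel values below are toy data, not the WHU or Boston benchmarks:

```python
# Sketch of segmentation metrics on a flattened binary building mask.

def iou(pred, truth):
    """Intersection over union of the positive (building) class."""
    inter = sum(p == t == 1 for p, t in zip(pred, truth))
    union = sum(p == 1 or t == 1 for p, t in zip(pred, truth))
    return inter / union if union else 1.0

def overall_accuracy(pred, truth):
    """Fraction of pixels classified correctly, either class."""
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

pred  = [1, 1, 0, 0, 1, 0, 1, 1]
truth = [1, 0, 0, 0, 1, 0, 1, 1]

# intersection = 4, union = 5  ->  IoU = 0.8
# 7 of 8 pixels agree          ->  OA  = 0.875
```

OA rewards correct background pixels as well, which is why it runs higher than IoU on scenes where buildings cover a minority of the image.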


2006 ◽  
Vol 36 (5) ◽  
pp. 1129-1138 ◽  
Author(s):  
Jennifer L. Rooker Jensen ◽  
Karen S Humes ◽  
Tamara Conner ◽  
Christopher J Williams ◽  
John DeGroot

Although lidar data are widely available from commercial contractors, operational use in North America is still limited by both cost and the uncertainty of large-scale application and associated model accuracy issues. We analyzed whether small-footprint lidar data obtained from five noncontiguous geographic areas with varying species and structural composition, silvicultural practices, and topography could be used in a single regression model to produce accurate estimates of commonly obtained forest inventory attributes on the Nez Perce Reservation in northern Idaho, USA. Lidar-derived height metrics were used as predictor variables in a best-subset multiple linear regression procedure to determine whether a suite of stand inventory variables could be accurately estimated. Empirical relationships between lidar-derived height metrics and field-measured dependent variables were developed with training data and acceptable models validated with an independent subset. Models were then fit with all data, resulting in coefficients of determination and root mean square errors (respectively) for seven biophysical characteristics, including maximum canopy height (0.91, 3.03 m), mean canopy height (0.79, 2.64 m), quadratic mean DBH (0.61, 6.31 cm), total basal area (0.91, 2.99 m2/ha), ellipsoidal crown closure (0.80, 0.08%), total wood volume (0.93, 24.65 m3/ha), and large saw-wood volume (0.75, 28.76 m3/ha). Although these regression models cannot be generalized to other sites without additional testing, the results obtained in this study suggest that for these types of mixed-conifer forests, some biophysical characteristics can be adequately estimated using a single regression model over stands with highly variable structural characteristics and topography.
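The modelling pattern here, regressing a field-measured attribute on lidar-derived height metrics and reporting a coefficient of determination and RMSE, can be sketched with a one-predictor least-squares fit. The paper uses a best-subset multiple regression over many metrics; this single-predictor version only illustrates the fit-and-evaluate step, and every number below is invented:

```python
# Sketch: least-squares fit of a field attribute (mean canopy height)
# on one hypothetical lidar height metric, with R^2 and RMSE reported.
import math

def fit_ols(x, y):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

def r2_rmse(x, y, a, b):
    """Coefficient of determination and root mean square error."""
    pred = [a + b * xi for xi in x]
    my = sum(y) / len(y)
    ss_res = sum((yi - pi) ** 2 for yi, pi in zip(y, pred))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot, math.sqrt(ss_res / len(y))

lidar_p90 = [12.0, 15.5, 18.2, 21.0, 24.3]  # invented 90th-percentile heights (m)
mean_ht   = [10.1, 13.0, 15.8, 17.9, 21.2]  # invented field mean canopy heights (m)

a, b = fit_ols(lidar_p90, mean_ht)
r2, rmse = r2_rmse(lidar_p90, mean_ht, a, b)
```

As in the study, the honest version of this workflow fits on a training subset and reports R² and RMSE on held-out validation plots rather than on the fitting data.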


2016 ◽  
Vol 13 (4) ◽  
pp. 961-973 ◽  
Author(s):  
W. Simonson ◽  
P. Ruiz-Benito ◽  
F. Valladares ◽  
D. Coomes

Abstract. Woodlands represent highly significant carbon sinks globally, though they could lose this function under future climatic change. Effective large-scale monitoring of these woodlands has a critical role to play in mitigating, and adapting to, climate change. Mediterranean woodlands have low carbon densities, but they represent important global carbon stocks due to their extensiveness, and they are particularly vulnerable because the region is predicted to become much hotter and drier over the coming century. Airborne lidar is already recognized as an excellent approach for high-fidelity carbon mapping, but few studies have used multi-temporal lidar surveys to measure carbon fluxes in forests, and none have worked with Mediterranean woodlands. We use a multi-temporal (5-year interval) airborne lidar data set for a region of central Spain to estimate above-ground biomass (AGB) and carbon dynamics in typical mixed broadleaved and/or coniferous Mediterranean woodlands. Field calibration of the lidar data enabled the generation of grid-based maps of AGB for 2006 and 2011, from which the AGB change was estimated. There was close agreement between the lidar-based AGB growth estimate (1.22 Mg ha−1 yr−1) and those derived from two independent sources: the Spanish National Forest Inventory and a tree-ring-based analysis (1.19 and 1.13 Mg ha−1 yr−1, respectively). We parameterised a simple simulator of forest dynamics using the lidar carbon flux measurements and used it to explore four scenarios of fire occurrence. Under undisturbed conditions (no fire), an accelerating accumulation of biomass and carbon is evident over the next 100 years, with an average carbon sequestration rate of 1.95 Mg C ha−1 yr−1. This rate falls by almost a third when the fire probability is increased to 0.01 (a fire return interval of 100 years), as has been predicted under climate change.
Our work shows the power of multi-temporal lidar surveying to map woodland carbon fluxes and provide parameters for carbon dynamics models. Space deployment of lidar instruments in the near future could open the way for rolling out wide-scale forest carbon stock monitoring to inform management and governance responses to future environmental change.
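The fire-scenario result can be illustrated with a deliberately crude expected-value simulator. Constant annual growth and treating a fire as an expected fractional loss of the whole stock are simplifications for illustration, not the paper's parameterisation:

```python
# Crude expected-value sketch of carbon accumulation with stochastic fire,
# using the abstract's no-fire sequestration rate of 1.95 Mg C/ha/yr.

def simulate_carbon(years, growth, fire_prob):
    """Expected carbon stock (Mg C/ha) after `years`, with annual growth
    `growth` (Mg C/ha/yr) and a per-year fire probability; each year the
    stock is scaled by the expected surviving fraction (1 - fire_prob)."""
    stock = 0.0
    for _ in range(years):
        stock += growth
        stock *= (1 - fire_prob)
    return stock

no_fire   = simulate_carbon(100, 1.95, 0.0)
with_fire = simulate_carbon(100, 1.95, 0.01)
# A 0.01 annual fire probability leaves roughly two thirds of the
# no-fire 100-year stock, echoing the "reduces by almost a third" result.
```

Even this toy model reproduces the qualitative finding: a 100-year fire return interval cuts the century-scale carbon stock by roughly a third.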

