Study on efficient computation and performance of AV-based reduced-rank filtering

2005 ◽  
Vol 22 (2) ◽  
pp. 153-160
Author(s):  
Bin Xu ◽  
Chenyang Yang ◽  
Shiyi Mao
2019 ◽  
Vol 47 (2) ◽  
pp. 96-105 ◽  
Author(s):  
Riccardo Pecori ◽  
Vincenzo Suraci ◽  
Pietro Ducange

Purpose – Efficiently managing the educational Big Data produced by Virtual Learning Environments is becoming a compelling necessity, especially for universities that provide distance learning. This paper proposes a framework for efficiently computing key performance indicators, summarizing the trends of students' academic careers, from educational Big Data.

Design/methodology/approach – The framework is designed and implemented in a distributed fashion. The parallel computation of the indicators through Map and Reduce nodes is carefully described, together with the flow of data from the educational sources to a NoSQL database and on to the learning analytics engine.

Findings – The framework was tested at eCampus University, an Italian distance learning institution, where it significantly reduced the time needed to compute key performance indicators. Moreover, paired with a suitable data representation dashboard, it proved a useful support for educational decisions and performance analyses and for revealing possible criticalities.

Originality/value – To the best of the authors' knowledge, the proposed framework is the first to integrate a set of modules, designed and implemented in a distributed fashion, for computing key performance indicators for distance learning institutions. It can be used to analyze the dropouts and outcomes of students and, therefore, to evaluate the performance of universities, which can in turn propose effective improvements toward enhancing the overall e-learning scenario.
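As an illustration of the Map and Reduce split the abstract describes, the sketch below computes a hypothetical per-course dropout-rate indicator; the record schema and the KPI itself are assumptions for illustration, not the framework's actual design.

```python
from collections import defaultdict

def map_phase(records):
    """Map step: emit one (course, (dropped, total)) pair per student record."""
    for rec in records:
        yield rec["course"], (1 if rec["dropped"] else 0, 1)

def reduce_phase(pairs):
    """Reduce step: aggregate pairs into a dropout-rate KPI per course."""
    acc = defaultdict(lambda: [0, 0])
    for course, (dropped, total) in pairs:
        acc[course][0] += dropped
        acc[course][1] += total
    return {c: d / t for c, (d, t) in acc.items()}

records = [
    {"course": "CS101", "dropped": True},
    {"course": "CS101", "dropped": False},
    {"course": "MA201", "dropped": False},
]
kpi = reduce_phase(map_phase(records))
print(kpi)  # {'CS101': 0.5, 'MA201': 0.0}
```

In a real deployment the map and reduce steps would run on separate nodes over partitions of the data, with the results written back to the NoSQL store for the dashboard.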


2021 ◽  
Author(s):  
Phillip Boyle

Gaussian processes have proved to be useful and powerful constructs for the purposes of regression. The classical method proceeds by parameterising a covariance function and then inferring the parameters given the training data. In this thesis, the classical approach is augmented by interpreting Gaussian processes as the outputs of linear filters excited by white noise. This enables a straightforward definition of dependent Gaussian processes as the outputs of a multiple-output linear filter excited by multiple noise sources. We show how dependent Gaussian processes defined in this way can also be used for the purposes of system identification.

One well-known problem with Gaussian process regression is that the computational complexity scales poorly with the amount of training data. We review one approximate solution that alleviates this problem, namely reduced-rank Gaussian processes, and then show how the reduced-rank approximation can be applied to allow for the efficient computation of dependent Gaussian processes.

We then examine the application of Gaussian processes to the solution of other machine learning problems. To do so, we review methods for the parameterisation of full covariance matrices. Furthermore, we discuss how improvements can be made by marginalising over alternative models, and introduce methods to perform these computations efficiently. In particular, we introduce sequential annealed importance sampling as a method for calculating model evidence in an on-line fashion as new data arrives.

Gaussian process regression can also be applied to optimisation. An algorithm is described that uses model comparison between multiple models to find the optimum of a function while taking as few samples as possible. This algorithm shows impressive performance on the standard control problem of double pole balancing. Finally, we describe how Gaussian processes can be used to efficiently estimate gradients of noisy functions and numerically estimate integrals.
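A minimal sketch of the reduced-rank idea the abstract reviews, here via the subset-of-regressors approximation with a handful of inducing points; the kernel, data, and inducing-point choice are illustrative assumptions, not the thesis's construction.

```python
import numpy as np

def rbf(a, b, ell=1.0):
    """Squared-exponential kernel between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

rng = np.random.default_rng(0)
x = np.linspace(0, 5, 200)
y = np.sin(x) + 0.1 * rng.standard_normal(x.size)

xm = x[::20]                      # m = 10 inducing points -> rank-10 model
Kmn, Kmm = rbf(xm, x), rbf(xm, xm)
sigma2 = 0.1 ** 2

# Subset-of-regressors posterior mean: an O(n m^2) solve
# instead of the O(n^3) solve of the full GP.
A = Kmn @ Kmn.T + sigma2 * Kmm
mean = rbf(x, xm) @ np.linalg.solve(A, Kmn @ y)

print(np.max(np.abs(mean - np.sin(x))))  # small fit error despite rank 10
```

The m x m system replaces the n x n one of the full Gaussian process, which is the source of the computational saving the abstract refers to.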


Author(s):  
Ikjin Lee ◽  
Kyung K. Choi ◽  
David Gorsich

This study presents a methodology to convert a reliability-based design optimization (RBDO) problem requiring very high reliability into an RBDO problem requiring relatively low reliability by increasing the input standard deviations, for efficient computation in sampling-based RBDO. First, for linear performance functions with independent normal random inputs, an exact probability of failure is derived in terms of the ratio of the input standard deviations, denoted by δ. The probability of failure estimation is then generalized to arbitrary random inputs and performance functions. For this generalization, two coefficients need to be determined by equating the probability of failure and its sensitivity with respect to the standard deviation at the current design point. The sensitivity of the probability of failure with respect to the standard deviation is obtained using the first-order score function for the standard deviation. To apply the proposed method to an RBDO problem, the concept of an equivalent standard deviation, an increased standard deviation corresponding to the low-reliability model, is also introduced. Numerical results indicate that, compared with Monte Carlo simulation, the proposed method accurately estimates the probability of failure as a function of the input standard deviation. As anticipated, sampling-based RBDO using the surrogate models and the equivalent standard deviation finds the optimum design very efficiently while yielding a relatively accurate optimum design, close to the one obtained using the original standard deviation.
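For the linear-Gaussian case the abstract starts from, the probability of failure has a closed form in the input-standard-deviation ratio δ, which a quick Monte Carlo check can confirm; the particular performance function and value of δ below are illustrative assumptions.

```python
import numpy as np
from math import erf, sqrt

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

a, b = np.array([1.0, -2.0]), 1.0           # g(x) = a.x + b, failure if g < 0
mu, sigma = np.array([0.0, 0.0]), np.array([1.0, 0.5])
delta = 2.0                                  # ratio scaling the input std devs

# Exact: g ~ Normal(a.mu + b, delta^2 * sum_i a_i^2 sigma_i^2)
std_g = delta * np.sqrt(np.sum((a * sigma) ** 2))
pf_exact = Phi(-(a @ mu + b) / std_g)

# Monte Carlo check with the inflated standard deviations
rng = np.random.default_rng(1)
x = rng.normal(mu, delta * sigma, size=(200_000, 2))
pf_mc = np.mean(x @ a + b < 0.0)

print(pf_exact, pf_mc)  # the two estimates agree closely
```

Inflating σ by δ moves the failure probability into a range where sampling estimates it cheaply, which is the computational motivation behind the equivalent standard deviation.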


2021 ◽  
Vol 48 (3) ◽  
pp. 6-11
Author(s):  
Mohammad A. Hoque ◽  
Ashwin Rao ◽  
Sasu Tarkoma

Modern mobile systems are optimized for energy-efficient computation and communications, and these optimizations affect the way they use the network, and thus the performance of applications. Understanding network and application performance is therefore essential for debugging, improving user experience, and performance comparison. In recent years, several tools have emerged that analyze the network performance of mobile applications in situ with the help of the VPN service. However, there is limited understanding of how these measurement tools and system optimizations affect network and application performance. This paper first demonstrates that mobile systems employ energy-aware system hardware tuning, which affects network latency and throughput. We then show that VPN-based tools, such as Lumen, PrivacyGuard, and Video Optimizer, yield ambiguous network performance measurements and degrade application performance. Our findings suggest that sound Internet traffic measurement on Android devices requires a good understanding of the device, networks, measurement tools, and applications.




Electronics ◽  
2019 ◽  
Vol 8 (12) ◽  
pp. 1451
Author(s):  
Zois-Gerasimos Tasoulas ◽  
Iraklis Anagnostopoulos

Graphics processing units (GPUs) are extensively used as accelerators across multiple application domains, ranging from general-purpose applications to neural networks and cryptocurrency mining. The initial utilization paradigm for GPUs was one application accessing all the resources of the GPU. In recent years, time sharing has been broadly used among the applications of a GPU; nevertheless, spatial sharing is not fully explored. When concurrent applications share the computational resources of a GPU, performance can be improved by eliminating idle resources. Additionally, the incorporation of GPUs in embedded and mobile devices increases the demand for power-efficient computation due to battery limitations. In this article, we present an allocation methodology for streaming multiprocessors (SMs). The presented methodology works for two concurrent applications on a GPU and determines an allocation scheme that provides power-efficient application execution combined with improved GPU performance. Experimental results show that the developed methodology yields higher throughput while achieving improved power efficiency, compared with other power-aware and performance-aware SM policies. If adopted, the presented methodology will lead to higher performance for applications executing concurrently on a GPU, and thus to faster and more efficient acceleration, even on devices with constrained energy sources.
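The allocation idea can be caricatured as a search over SM splits that maximizes combined throughput under a power budget; the throughput and power models below are made-up placeholders for illustration, not the paper's measured curves or its actual methodology.

```python
S, POWER_BUDGET = 16, 45.0         # total SMs and a hypothetical power cap (W)

def throughput(app, sms):
    """Placeholder model: diminishing returns per extra SM."""
    scale = {"appA": 10.0, "appB": 6.0}[app]
    return scale * (sms ** 0.7)

def power(sms):
    """Placeholder model: fixed cost plus superlinear per-SM dynamic cost."""
    return 10.0 + 0.5 * (sms ** 1.5)

# Brute-force search: give n SMs to appA and S - n to appB,
# keep the feasible split with the highest combined throughput.
best = None
for n in range(1, S):
    total_tp = throughput("appA", n) + throughput("appB", S - n)
    total_pw = power(n) + power(S - n)
    if total_pw <= POWER_BUDGET and (best is None or total_tp > best[1]):
        best = (n, total_tp)

print(best)  # (SMs for appA, combined throughput) of the best feasible split
```

With these toy curves the power cap excludes the most lopsided splits, so the search settles on the most unbalanced split that still fits the budget, mirroring the throughput-vs-power trade-off the article studies.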


2013 ◽  
Vol 13 (04) ◽  
pp. 1350020
Author(s):  
BASHAR HADDAD ◽  
AMIN JARRAH

The restoration of cracked images is a challenging and important field of image and video processing. Old photographs that have experienced poor treatment and environmental conditions, or images of old buildings and statues, suffer from cracks, which restrict the ability to extract information from and process the image. Many algorithms have been proposed to restore cracked images, but most fail to remove cracks efficiently under realistic assumptions. We therefore developed and implemented a new algorithm to repair cracked images efficiently, proposing new techniques such as seam processing, stochastic analysis, and a learning process. The basic motivation of this work is to design a simple algorithm with efficient computational complexity and memory usage that can be used in an interactive fashion, where a small set of parameters controls the behavior and performance of the algorithm. The algorithm uses seam processing to discover cracks and local spatial information to compensate for and handle information shortage, based on statistical analysis and data generation. In general, the algorithm can be divided into three phases: crack detection, crack filling, and a post-processing phase to enhance quality. The results show how the algorithm deals with two cracked images of extreme deterioration, and are based on subjective tests in which 85 participants graded and evaluated the output under different conditions over three separate sessions.
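A toy sketch of the detect-then-fill structure only (the paper's seam processing and learning components are omitted): dark thin cracks are found by thresholding, then each crack pixel is filled with the median of its intact neighbours; the threshold and window size are illustrative parameters, not the paper's.

```python
import numpy as np

def restore(img, thresh=50):
    """Fill dark crack pixels from the median of non-crack 3x3 neighbours."""
    crack = img < thresh                      # phase 1: crack detection
    out = img.astype(float).copy()
    pad = np.pad(out, 1, mode="edge")         # read from the unmodified image
    pmask = np.pad(crack, 1, mode="edge")
    for i, j in zip(*np.nonzero(crack)):      # phase 2: crack filling
        win = pad[i:i + 3, j:j + 3]
        ok = ~pmask[i:i + 3, j:j + 3]
        if ok.any():
            out[i, j] = np.median(win[ok])
    return out.astype(img.dtype)

img = np.full((5, 5), 200, dtype=np.uint8)
img[:, 2] = 0                                 # a one-pixel-wide vertical crack
print(restore(img))                           # crack filled from neighbours
```

A production version would add the post-processing phase and iterate for cracks wider than the window, but the two-phase skeleton matches the division described above.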


Author(s):  
H. M. Thieringer

It has repeatedly been shown that conventional electron microscopes can produce very fine electron probes, thereby allowing various micro-techniques such as micro recording, X-ray microanalysis, and convergent beam diffraction. In this paper, the function and performance of a SIEMENS ELMISKOP 101 used as a scanning transmission electron microscope (STEM) are described. This mode of operation has some advantages over conventional transmission electron microscopy (CTEM), especially for the observation of thick specimens, in spite of somewhat longer image recording times.

Fig. 1 shows schematically the ray path and the additional electronics of an ELMISKOP 101 working as a STEM. With a point cathode, and using condenser I and the objective lens as a demagnifying system, an electron probe with a half-width of about 25 Å and a typical current of 5·10⁻¹¹ A at 100 kV can be obtained in the back focal plane of the objective lens.

