Soccer Sonification: Enhancing Viewer Experience

Author(s):  
Richard Savery ◽  
Madhukesh Ayyagari ◽  
Keenan R. May ◽  
Bruce N. Walker

We present multiple approaches to soccer sonification, focusing on enhancing the experience for a general audience. For this work, we developed our own soccer data set through computer vision analysis of footage from a tactical overhead camera. This data set included X, Y coordinates for the ball and players throughout, as well as passes, steals and goals. After a divergent creation process, we developed four main methods of sports sonification for entertainment. For the Tempo Variation and Pitch Variation methods, tempo or pitch is operationalized to convey ball and player movement data. The Key Moments method features only pass, steal and goal data, while the Musical Moments method takes existing music and attempts to align the track with important data points. Evaluation was done using a combination of qualitative focus groups and quantitative surveys, with 36 participants completing hour-long sessions. Results indicated an overall preference for the Pitch Variation and Musical Moments methods, and revealed a robust trade-off between usability and enjoyability.
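As a minimal sketch of how a Pitch Variation mapping might work, the snippet below scales ball speed, derived from consecutive X, Y coordinates, into a MIDI pitch range. The frame interval, speed ceiling, and pitch bounds are illustrative assumptions, not values from the paper.

```python
# Hypothetical "Pitch Variation" mapping: ball speed -> MIDI pitch.
def ball_speed(p0, p1, dt):
    """Speed (m/s) from two (x, y) positions sampled dt seconds apart."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    return (dx * dx + dy * dy) ** 0.5 / dt

def speed_to_midi_pitch(speed, max_speed=12.0, low=48, high=84):
    """Linearly scale speed into the MIDI pitch range [low, high]."""
    frac = max(0.0, min(1.0, speed / max_speed))
    return round(low + frac * (high - low))

# Two ball positions half a second apart -> one sonified pitch.
pitch = speed_to_midi_pitch(ball_speed((0.0, 0.0), (3.0, 4.0), 0.5))
```

A faster ball yields a higher pitch, so listeners can follow the pace of play without watching the screen.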

Author(s):  
Simona Babiceanu ◽  
Sanhita Lahiri ◽  
Mena Lockwood

This study uses a suite of performance measures, developed by considering various aspects of congestion and reliability, to assess the impacts of safety projects on congestion. Safety projects are necessary to help move Virginia's roadways toward safer operation, but they can contribute to congestion and unreliability during execution and can affect operations after execution. However, safety projects are assessed primarily for safety improvements, not for congestion. This study identifies an appropriate suite of measures, and quantifies and compares the congestion and reliability impacts of safety projects on roadways for the periods before, during, and after project execution. The paper presents the performance measures, examines their sensitivity based on operating conditions, defines thresholds for congestion and reliability, and demonstrates the measures using a set of Virginia safety projects. The data set consists of 10 projects totalling 92 mi and more than 1 million data points. The study found that, overall, safety projects tended to have a positive impact on congestion and reliability after completion, and that the congestion variability measures were sensitive to the reliability threshold. The study concludes with practical recommendations for primary measures that may be used to gauge the overall impacts of safety projects: percent vehicle miles traveled (VMT) reliable, with a customized threshold for Virginia; percent VMT delayed; and time to travel 10 mi. However, caution should be used when applying the results directly to other situations because of the limited number of projects studied.
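As an illustration of how a percent-VMT-reliable measure can be computed, the sketch below takes segment records and a reliability threshold on a travel-time index. The record layout and threshold value are hypothetical; the paper's customized Virginia threshold is not reproduced here.

```python
# Illustrative percent-VMT-reliable computation; the segment records
# and threshold below are hypothetical, not the paper's definitions.
def percent_vmt_reliable(segments, threshold=1.5):
    """segments: (vmt, travel_time_index) pairs; a segment counts as
    reliable when its index is at or below the threshold."""
    total = sum(vmt for vmt, _ in segments)
    reliable = sum(vmt for vmt, tti in segments if tti <= threshold)
    return 100.0 * reliable / total

segments = [(1000, 1.2), (500, 1.8), (1500, 1.4)]
share = percent_vmt_reliable(segments)  # 2500 of 3000 VMT reliable
```

Weighting by VMT rather than by segment count keeps a short but heavily travelled bottleneck from being drowned out by many lightly used miles.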


Algorithms ◽  
2021 ◽  
Vol 14 (2) ◽  
pp. 37
Author(s):  
Shixun Wang ◽  
Qiang Chen

Boosting in ensemble learning has made great progress, but most methods boost only a single modality. We therefore extend a simple multiclass boosting framework that uses local similarity as its weak learner to multimodal multiclass boosting. First, with local similarity as the weak learner, a loss function is used to establish a baseline loss, and the logarithmic data points are binarized. Then, the optimal local similarity and its corresponding loss are found; the smaller of this loss and the baseline becomes the best so far. Second, the local similarity between pairs of points is calculated, and the loss is then computed from that local similarity. Finally, text and images are retrieved against each other, and the retrieval accuracy is obtained for each direction. Experimental results show that the multimodal multiclass boosting framework with local similarity as the weak learner, evaluated on a standard data set and compared with other state-of-the-art methods, performs competitively.


Author(s):  
Sebastian Hoppe Nesgaard Jensen ◽  
Mads Emil Brix Doest ◽  
Henrik Aanæs ◽  
Alessio Del Bue

Abstract Non-rigid structure from motion (NRSfM) is a long-standing and central problem in computer vision; its solution is necessary for obtaining 3D information from multiple images when the scene is dynamic. A main obstacle to the further development of this important computer vision topic is the lack of high-quality data sets. We address this issue by presenting a data set created for this purpose, which is made publicly available and is considerably larger than the previous state of the art. To validate the applicability of this data set, and to provide an investigation into the state of the art of NRSfM, including potential directions forward, we present a benchmark and a scrupulous evaluation using this data set. The benchmark evaluates 18 different methods with available code that reasonably span the state of the art in sparse NRSfM. This new public data set and evaluation protocol will provide benchmark tools for further development in this challenging field.


2013 ◽  
Vol 846-847 ◽  
pp. 1304-1307
Author(s):  
Ye Wang ◽  
Yan Jia ◽  
Lu Min Zhang

Mining partial orders from sequence data is an important data mining task with broad applications. As partial order mining is an NP-hard problem, many efficient pruning algorithms have been proposed. In this paper, we improve a classical algorithm for discovering frequent closed partial orders (FCPOs) from strings. For general sequences, we treat items appearing together as having an equal chance when calculating the detecting matrix used for pruning. Experimental evaluation on a real data set shows that our algorithm can effectively mine FCPOs from sequences.
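A hedged sketch of the detecting-matrix idea: for every ordered pair of items, count how many sequences contain the first item strictly before the second; pairs with low counts can be pruned from the partial-order search. The exact matrix definition in the paper may differ.

```python
from collections import defaultdict

def precedence_counts(sequences):
    """For every ordered pair (a, b), count the sequences in which some
    occurrence of a appears strictly before some occurrence of b."""
    counts = defaultdict(int)
    for seq in sequences:
        seen = set()  # count each pair at most once per sequence
        for i in range(len(seq)):
            for j in range(i + 1, len(seq)):
                pair = (seq[i], seq[j])
                if pair not in seen:
                    seen.add(pair)
                    counts[pair] += 1
    return counts

m = precedence_counts(["abc", "acb", "bac"])
```

Here ('a', 'c') holds in all three strings, so an edge a→c would survive a support threshold of 3, while ('b', 'a') holds in only one.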


Author(s):  
André L. C. Fujarra ◽  
Rodolfo T. Gonçalves ◽  
Fernando Faria ◽  
Marcos Cueva ◽  
Kazuo Nishimoto ◽  
...  

A great deal of work has been done on the Spar VIM issue. There are, however, very few published works concerning the VIM of monocolumn platforms, partly because the concept is fairly recent and the first unit was only installed last year. In this context, the present paper presents a meticulous study of VIM for this type of platform concept. Model test experiments were performed to check the influence of many factors on VIM, such as different headings, wave/current coexistence, different drafts, suppression elements, and the presence of risers. The results of the experiments presented here are in-line and cross-flow motion amplitudes, ratios of actual oscillation periods to natural periods, and motions in the XY plane. This is, therefore, a very extensive and important data set for comparisons with, and validations of, theoretical and numerical models for VIM prediction.


2021 ◽  
Author(s):  
Ahmed Al-Sabaa ◽  
Hany Gamal ◽  
Salaheldin Elkatatny

Abstract The formation porosity of drilled rock is an important parameter that determines the formation storage capacity. The common industrial technique for rock porosity acquisition is the downhole logging tool. Usually, logging while drilling or wireline porosity logging provides a complete porosity log for the section of interest; however, operational constraints might preclude the logging job, in addition to its cost. The objective of this study is to provide an intelligent model that predicts porosity from drilling parameters. An artificial neural network (ANN), a tool of artificial intelligence (AI), was employed to build the porosity prediction model based on drilling parameters such as the weight on bit (WOB), drill string rotating speed (RS), drilling torque (T), standpipe pressure (SPP), and mud pumping rate (Q). The novel contribution of this study is a rock porosity model for complex lithology formations that uses drilling parameters in real time. The model was built using 2,700 data points from well (A) with a 74:26 training-to-testing ratio. Several sensitivity analyses were performed to optimize the ANN model. The model was validated using an unseen data set (1,000 data points) from well (B), which is located in the same field and drilled across the same complex lithology. The results showed high performance for the model in training, testing, and validation. The overall accuracy of the model was determined in terms of the correlation coefficient (R) and the average absolute percentage error (AAPE): R was higher than 0.91 and AAPE was less than 6.1% for both model building and validation. Predicting rock porosity while drilling in real time will save logging costs and, in addition, will provide a guide for formation storage capacity and interpretation analysis.
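The two evaluation metrics named above, the correlation coefficient (R) and the average absolute percentage error (AAPE), can be sketched as follows; the sample porosity values are invented for illustration, not taken from wells (A) or (B).

```python
import math

# Evaluation metrics from the abstract; sample values are invented.
def correlation_r(actual, pred):
    """Pearson correlation coefficient R."""
    n = len(actual)
    ma, mp = sum(actual) / n, sum(pred) / n
    cov = sum((a - ma) * (p - mp) for a, p in zip(actual, pred))
    sa = math.sqrt(sum((a - ma) ** 2 for a in actual))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    return cov / (sa * sp)

def aape(actual, pred):
    """Average absolute percentage error, in percent."""
    return 100.0 * sum(abs((a - p) / a)
                       for a, p in zip(actual, pred)) / len(actual)

actual = [0.12, 0.18, 0.25, 0.30]  # invented porosity fractions
pred = [0.13, 0.17, 0.24, 0.31]
r, err = correlation_r(actual, pred), aape(actual, pred)
```

Reporting both metrics is complementary: R captures how well the predictions track the trend, while AAPE captures the typical relative error per point.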


2018 ◽  
Vol 11 (2) ◽  
pp. 53-67
Author(s):  
Ajay Kumar ◽  
Shishir Kumar

Several initial center selection algorithms have been proposed in the literature for numerical data, but the values of categorical data are unordered, so these methods are not applicable to a categorical data set. This article investigates the initial center selection process for categorical data and then presents a new support-based initial center selection algorithm. The proposed algorithm measures the weight of the unique data points of an attribute with the help of support, and then integrates these weights along the rows to get the support of every row. Further, the data object having the largest support is chosen as an initial center, followed by finding other centers that are at the greatest distance from the initially selected center. The quality of the proposed algorithm is compared with the random initial center selection method, Cao's method, Wu's method, and the method introduced by Khan and Ahmad. Experimental analysis on real data sets shows the effectiveness of the proposed algorithm.
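The selection procedure described above can be sketched as follows; the Hamming distance and the tie-breaking (first maximal row wins) are assumptions for illustration, since the abstract does not fix them.

```python
from collections import Counter

def support_based_centers(rows, k):
    """Score each row by summing the support (frequency) of its attribute
    values; the highest-scoring row is the first center, and each further
    center is the row farthest (Hamming distance) from those chosen."""
    n_attrs = len(rows[0])
    freq = [Counter(row[j] for row in rows) for j in range(n_attrs)]

    def row_support(row):
        return sum(freq[j][row[j]] for j in range(n_attrs))

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    centers = [max(rows, key=row_support)]
    while len(centers) < k:
        centers.append(max(rows, key=lambda r: min(hamming(r, c)
                                                   for c in centers)))
    return centers

rows = [("a", "x"), ("a", "x"), ("a", "y"), ("b", "z")]
centers = support_based_centers(rows, 2)
```

The first center lands in the densest region of the categorical space, and the farthest-point rule then spreads the remaining centers apart, mirroring the intent of the abstract.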


Geophysics ◽  
2019 ◽  
Vol 84 (1) ◽  
pp. C57-C74 ◽  
Author(s):  
Abdulrahman A. Alshuhail ◽  
Dirk J. Verschuur

Because the earth is predominantly anisotropic, the anisotropy of the medium needs to be included in seismic imaging to avoid mispositioning of reflectors and unfocused images. Deriving accurate anisotropic velocities from the seismic reflection measurements is a highly nonlinear and ambiguous process. To mitigate the nonlinearity and trade-offs between parameters, we have included anisotropy in the so-called joint migration inversion (JMI) method, in which we limit ourselves to the case of transverse isotropy with a vertical symmetry axis. The JMI method is based on strictly separating the scattering effects in the data from the propagation effects. The scattering information is encoded in the reflectivity operators, whereas the phase information is encoded in the propagation operators. This strict separation enables the method to be more robust, in that it can appropriately handle a wide range of starting models, even when the differences in traveltimes are more than a half cycle away. The method also uses internal multiples in estimating reflectivities and anisotropic velocities. Including internal multiples in inversion not only reduces the crosstalk in the final image, but it can also reduce the trade-off between the anisotropic parameters, because internal multiples usually carry more of an imprint of the subsurface parameters than primaries do. The inverse problem is parameterized in terms of a reflectivity, vertical velocity, horizontal velocity, and a fixed [Formula: see text] value. The method is demonstrated on several synthetic models and a marine data set from the North Sea. Our results indicate that using JMI for anisotropic inversion makes the inversion robust in terms of using highly erroneous initial models. Moreover, internal multiples can contain valuable information on the subsurface parameters, which can help to reduce the trade-off between anisotropic parameters in inversion.


2021 ◽  
Vol 50 (1) ◽  
pp. 138-152
Author(s):  
Mujeeb Ur Rehman ◽  
Dost Muhammad Khan

Recently, anomaly detection has attracted considerable attention from data mining researchers, as its importance has grown steadily across practical domains such as product marketing, fraud detection, medical diagnosis, and fault detection. High-dimensional data poses exceptional challenges for outlier detection because of the curse of dimensionality and the increasing resemblance between distant and adjacent points. Traditional algorithms and techniques perform outlier detection on the full feature space. Customary methodologies concentrate largely on low-dimensional data and are therefore ineffective at discovering anomalies in data sets with many dimensions. Digging out the anomalies present in a high-dimensional data set becomes very difficult and tiresome when all subsets of projections must be explored. All data points in high-dimensional data behave like similar observations because of an intrinsic property of such spaces: the distance between observations approaches zero as the number of dimensions tends to infinity. This research work proposes a novel technique that explores the deviation among all data points and embeds its findings inside well-established density-based techniques. It opens a new breadth of research towards resolving the inherent problems of high-dimensional data, where outliers reside within clusters of different densities. A high-dimensional data set from the UCI Machine Learning Repository is chosen to test the proposed technique, and its results are compared with those of density-based techniques to evaluate its efficiency.
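The distance-concentration effect described above is easy to demonstrate: the relative spread of distances among random points collapses as dimensionality grows. The point count and seed below are arbitrary choices for the demonstration.

```python
import math
import random

def distance_spread(dim, n=200, seed=0):
    """(max - min) / min of origin distances for n random points in
    [0, 1]^dim; the ratio collapses as dim grows, so nearest and
    farthest neighbours become nearly indistinguishable."""
    rng = random.Random(seed)
    dists = []
    for _ in range(n):
        point = [rng.random() for _ in range(dim)]
        dists.append(math.sqrt(sum(x * x for x in point)))
    return (max(dists) - min(dists)) / min(dists)

low_dim = distance_spread(2)
high_dim = distance_spread(1000)
# high_dim is far smaller than low_dim: relative contrast vanishes
```

This shrinking contrast is exactly why distance-only outlier scores break down in high dimensions and why the abstract's density-based refinement is needed.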


2020 ◽  
Vol 12 (2) ◽  
pp. 869-873
Author(s):  
Jari Pohjola ◽  
Jari Turunen ◽  
Tarmo Lipping

Abstract. Postglacial land uplift is a complex process related to the continental ice retreat that took place about 10 000 years ago and triggered the viscoelastic response of the Earth's crust, which rebounds back towards its equilibrium state. To empirically model the land uplift process based on the past behaviour of shoreline displacement, data points of known spatial location, elevation and dating are needed. Such data can be obtained by studying the isolation of lakes and mires from the sea. Archaeological data on human settlements (i.e. human remains, fireplaces, etc.) are also very useful, as the settlements were indeed situated on dry land and were often located close to the coast. This information can be used to validate and update the postglacial land uplift model. In this paper, a collection of data underlying empirical land uplift modelling in Fennoscandia is presented. The data set is available at https://doi.org/10.1594/PANGAEA.905352 (Pohjola et al., 2019).

