Pitch Spelling: A Computational Model

2003 ◽  
Vol 20 (4) ◽  
pp. 411-429 ◽  
Author(s):  
Emilios Cambouropoulos

In this article, cognitive and musicological aspects of pitch and pitch interval representations are explored via computational modeling. The specific task under investigation is pitch spelling, that is, how traditional score notation can be derived from a simple unstructured 12-tone representation (e.g., a pitch-class set or MIDI pitch representation). This study provides useful insights both into the domain of pitch perception and into musicological aspects of score notation strategies. A computational model is described that transcribes polyphonic MIDI pitch files into traditional Western music notation. Input to the proposed algorithm is merely a sequence of MIDI pitch numbers in the order they appear in a MIDI file. No a priori knowledge such as key signature, tonal centers, time signature, chords, or voice separation is required. Output of the algorithm is a sequence of "correctly" spelled pitches. The algorithm is based on an interval optimization approach that takes into account the frequency of occurrence of pitch intervals within the major-minor tonal scale framework. The algorithm was evaluated on 10 complete piano sonatas by Mozart and achieved a success rate of 98.8% (634 pitches spelled incorrectly out of a total of 54,418 notes); it was additionally tested on three Chopin waltzes, with a slightly lower success rate. The proposed pitch interval optimization approach is also compared with and tested against other pitch-spelling strategies.
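As a rough illustration of the spelling task, the sketch below (Python) chooses among enharmonic candidates for a stream of MIDI pitch numbers. The spelling table is standard, but the penalty function is an invented placeholder; the published algorithm instead optimizes over the frequency of occurrence of pitch intervals in the major-minor scale framework.

```python
# Toy illustration of choosing among enharmonic spellings; not the published algorithm.

# Candidate spellings per pitch class: (letter, accidental in semitones; -1 flat, +1 sharp).
SPELLINGS = {
    0: [("C", 0), ("B", 1)],
    1: [("C", 1), ("D", -1)],
    2: [("D", 0)],
    3: [("D", 1), ("E", -1)],
    4: [("E", 0), ("F", -1)],
    5: [("F", 0), ("E", 1)],
    6: [("F", 1), ("G", -1)],
    7: [("G", 0)],
    8: [("G", 1), ("A", -1)],
    9: [("A", 0)],
    10: [("A", 1), ("B", -1)],
    11: [("B", 0), ("C", -1)],
}

def penalty(prev, cand):
    # Placeholder cost: prefer few accidentals and avoid abrupt accidental changes.
    # A faithful implementation would instead score the spelled interval from prev to
    # cand by its frequency of occurrence within the major-minor scale framework.
    base = abs(cand[1])
    return base if prev is None else base + abs(cand[1] - prev[1])

def spell(midi_pitches):
    spelled, prev = [], None
    for p in midi_pitches:
        cand = min(SPELLINGS[p % 12], key=lambda c: penalty(prev, c))
        spelled.append(cand)
        prev = cand
    return spelled

if __name__ == "__main__":
    # 60=C4, 61, 62, 63, 64: expect C, C-sharp (or D-flat), D, D-sharp (or E-flat), E
    print(spell([60, 61, 62, 63, 64]))
```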

2018 ◽  
Vol 72 (2) ◽  
pp. 483-502
Author(s):  
Hongtao Wu ◽  
Xiubin Zhao ◽  
Chunlei Pang ◽  
Liang Zhang ◽  
Bo Feng

A priori attitude information can improve the success rate and reliability of Global Navigation Satellite System (GNSS) multi-antenna attitude determination. However, a priori attitude information is nonlinear, and rigorously integrating it into the objective function increases the complexity of the ambiguity domain search, as in the Multivariate Constrained-Least-squares Ambiguity Decorrelation Adjustment (MC-LAMBDA) method. In this paper, a new method based on an attitude domain search is presented to make efficient use of a priori attitude angle information. First, the a priori information on pitch and roll is integrated into the search process to derive an analytic search step for the attitude angles, and the integer candidates are determined by a traversal search in the three-dimensional attitude domain. Then, the objective function is parameterised with Euler angles, and a non-iterative approximation is used to simplify the otherwise iterative computation of objective function values. Experimental results reveal that, compared with the MC-LAMBDA method, the new method achieves the same success rate and reliability while making more efficient use of a priori attitude information.
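A schematic of the attitude-domain-search idea is sketched below (Python). The a priori pitch and roll are held fixed while the remaining angle is traversed; the objective here is a toy baseline-matching criterion, not the carrier-phase objective used in the paper, and the fixed step size stands in for the analytically derived one.

```python
# Illustrative sketch only: traversal search over the attitude domain with a priori
# pitch/roll constraining the search. The objective below is a placeholder.
import numpy as np

def rotation(yaw, pitch, roll):
    """Body-to-reference rotation matrix from Euler angles (radians), ZYX order."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def search_attitude(objective, pitch0, roll0, step_deg=0.5):
    """Traverse yaw with pitch/roll fixed at their a priori values; return the best candidate."""
    best = None
    for yaw_deg in np.arange(0.0, 360.0, step_deg):
        R = rotation(np.radians(yaw_deg), pitch0, roll0)
        val = objective(R)
        if best is None or val < best[0]:
            best = (val, yaw_deg)
    return best

# Example with a toy objective: distance of a rotated baseline from an "observed" one.
b_body = np.array([1.0, 0.0, 0.0])
b_obs = rotation(np.radians(42.0), 0.02, -0.01) @ b_body
best_val, best_yaw = search_attitude(lambda R: np.linalg.norm(R @ b_body - b_obs),
                                     pitch0=0.02, roll0=-0.01)
print(f"best yaw ~ {best_yaw:.1f} deg, objective = {best_val:.3e}")
```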


2020 ◽  
Vol 54 ◽  
pp. 101998 ◽  
Author(s):  
Farah Jamalzadeh ◽  
Alireza Hajiseyed Mirzahosseini ◽  
Faramarz Faghihi ◽  
Mostafa Panahi

2020 ◽  
Vol 93 (1111) ◽  
pp. 20200010
Author(s):  
Mark Worrall ◽  
Sarah Vinnicombe ◽  
David Sutton

Objective: A computational model has been created to estimate the abdominal thickness of a patient following an X-ray examination; its intended application is assisting with patient dose audit of paediatric X-ray examinations. This work evaluates the accuracy of the computational model in a clinical setting for adult patients undergoing anteroposterior (AP) abdominal X-ray examinations. Methods: The model estimates patient thickness using the radiographic image, the exposure factors with which the image was acquired, a priori knowledge of the characteristics of the X-ray unit and detector, and the results of extensive Monte Carlo simulations of patient examinations. For 20 patients undergoing AP abdominal X-ray examinations, the model was used to estimate patient thickness; these estimates were compared against a direct measurement made at the time of the examination. Results: Estimates of patient thickness made using the model were on average within ±5.8% of the measured thickness. Conclusion: The model can be used to accurately estimate the thickness of a patient undergoing an AP abdominal X-ray examination, provided the patient's size falls within the range of patient sizes used to create the computational model. Advances in knowledge: This work demonstrates that it is possible to accurately estimate the AP abdominal thickness of an adult patient using the digital X-ray image and a computational model.
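One plausible way such a model could operate, in outline: Monte Carlo simulation provides the expected detector signal as a function of patient thickness for given exposure factors, and the measured signal is inverted to a thickness by interpolation. The sketch below (Python) illustrates only this inversion step; the lookup values, exposure normalisation, and function names are invented, not taken from the paper.

```python
# Hypothetical inversion of a Monte Carlo signal-vs-thickness lookup; numbers are placeholders.
import numpy as np

# Assumed lookup: thickness (cm) -> relative detector signal at a reference exposure,
# monotonically decreasing with thickness.
mc_thickness_cm = np.array([14, 18, 22, 26, 30, 34])
mc_signal = np.array([1.00, 0.55, 0.30, 0.17, 0.09, 0.05])

def estimate_thickness(measured_signal, mAs, mAs_ref=10.0):
    """Normalise the measured signal to the reference exposure, then interpolate."""
    norm = measured_signal * (mAs_ref / mAs)
    # np.interp needs increasing x, so interpolate on the reversed arrays.
    return float(np.interp(norm, mc_signal[::-1], mc_thickness_cm[::-1]))

print(estimate_thickness(measured_signal=0.40, mAs=16.0))
```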


2020 ◽  
Vol 48 (12) ◽  
pp. 030006052097424
Author(s):  
Eunyoung Cho ◽  
Hyun-Chang Kim ◽  
Jung-Man Lee ◽  
Ji-Hoon Park ◽  
Najeong Ha ◽  
...  

Objective When performing lightwand intubation, an improper transmitted glow position before tube advancement can cause intubation failure or laryngeal injury. This study was performed to explore the transmitted glow point corresponding to an a priori chosen depth for lightwand intubation. Methods Before lightwand intubation, we marked the transmitted glow point from a bronchoscope on the neck when the bronchoscope reached 1 cm below the vocal cords. Lightwand intubation was then performed using this marking point. The distances from the mark to the upper border of the thyroid cartilage, the upper border of the cricoid cartilage, and the suprasternal notch were measured. Results In total, 107 patients were enrolled. The success rate of lightwand intubation using the mark was 93.5% (95% confidence interval, 88.7%–99.2%) at the first attempt. The marking point was located 12.0 mm (95% confidence interval, 10.6–13.4 mm) below the upper border of the cricoid cartilage. Conclusion Anaesthesiologists should be aware of the appropriate point of the transmitted glow on the patient’s neck when performing lightwand intubation. We suggest that this point is approximately 1 cm below the upper border of the cricoid cartilage. Trial registration: ClinicalTrials.gov NCT03480035


2018 ◽  
Vol 72 (04) ◽  
pp. 965-986 ◽  
Author(s):  
Mingkui Wu ◽  
Xiaohong Zhang ◽  
Wanke Liu ◽  
Renpan Wu ◽  
Renlan Zhang ◽  
...  

This paper first investigates the factors influencing the between-receiver Differential Inter-System Bias (DISB) between the overlapping frequencies of the Global Positioning System (GPS), Galileo and the Quasi-Zenith Satellite System (QZSS). It was found that receiver reboots and the type of observations may affect DISBs. The impact of receiver firmware upgrades and of activating anti-multipath filters is also investigated, and some new results are presented. A performance evaluation of tightly combined relative positioning over a short baseline is then presented for GPS/Galileo/QZSS L1-E1-L1/L5-E5a-L5 observations with the current constellations, including the recently launched Galileo and QZSS satellites. It is demonstrated that when DISBs are calibrated and corrected a priori, the tightly combined model delivers a much higher empirical ambiguity resolution success rate and positioning accuracy than the classical loosely combined model, especially in environments where the satellites observed for each system are limited and only single-frequency observations are available. The ambiguity dilution of precision, bootstrapping success rate, and ratio values are also analysed to illustrate the benefits of the tightly combined model.
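The core of the tightly combined idea can be sketched as follows (Python): when a Galileo (or QZSS) observation is differenced against a GPS reference satellite, the a priori calibrated between-receiver DISB is removed so that the resulting ambiguity stays integer-valued. The values and the cycle-level bookkeeping below are placeholders, not the paper's processing chain.

```python
# Schematic of applying an a priori calibrated DISB in a tightly combined double difference.

def tightly_combined_dd(sd_phase_other_sys, sd_phase_ref_gps, disb_phase_cycles):
    """Between-satellite double difference across systems, with the between-receiver
    phase DISB (in cycles) removed so a common integer ambiguity pool can be used."""
    return sd_phase_other_sys - sd_phase_ref_gps - disb_phase_cycles

# Example: single-differenced carrier phases (cycles) for a Galileo satellite and the
# GPS reference satellite, plus an a priori calibrated phase DISB of 0.25 cycles.
dd = tightly_combined_dd(123456.78, 98765.43, 0.25)
print(dd)
```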


2020 ◽  
Vol 66 (12) ◽  
pp. 5576-5598
Author(s):  
Vishal Gupta ◽  
Brian Rongqing Han ◽  
Song-Hee Kim ◽  
Hyung Paek

Frequently, policy makers seek to roll out an intervention previously proven effective in a research study, perhaps subject to resource constraints. However, because different subpopulations may respond differently to the same treatment, there is no a priori guarantee that the intervention will be as effective in the targeted population as it was in the study. How then should policy makers target individuals to maximize intervention effectiveness? We propose a novel robust optimization approach that leverages evidence typically available in a published study. Our model can be easily optimized in minutes for realistic instances with off-the-shelf software and is flexible enough to accommodate a variety of resource and fairness constraints. We compare our approach with current practice by proving performance guarantees for both approaches, which emphasize their structural differences. We also prove an intuitive interpretation of our model in terms of regularization, penalizing differences in the demographic distribution between targeted individuals and the study population. Although the precise penalty depends on the choice of uncertainty set, we show that for special cases we can recover classical penalties from the covariate matching literature on causal inference. Finally, using real data from a large teaching hospital, we compare our approach to common practice in the particular context of reducing emergency department utilization by Medicaid patients through case management. We find that our approach can offer significant benefits over common practice, particularly when the heterogeneity in patient response to the treatment is large. This paper was accepted by Chung-Piaw Teo, optimization.
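A toy contrast between common practice and a robust targeting rule under a capacity constraint is sketched below (Python): instead of ranking subgroups by the study's point estimates, the robust rule ranks them by a worst-case (lower-bound) effect. The subgroups, effects, and capacity are invented for illustration, and the paper's actual model is an optimization over uncertainty sets, not this greedy rule.

```python
# Toy robust-targeting sketch: target individuals whose worst-case effect is largest.

subgroups = [
    # (label, point estimate of effect, lower bound of effect, eligible individuals)
    ("A", 0.30, 0.10, 400),
    ("B", 0.25, 0.20, 300),
    ("C", 0.40, 0.05, 500),
]
capacity = 600

# Rank by the worst-case effect rather than the point estimate (the robust choice).
remaining, targeted = capacity, {}
for label, _, lower_bound, n in sorted(subgroups, key=lambda s: s[2], reverse=True):
    take = min(n, remaining)
    targeted[label] = take
    remaining -= take

print(targeted)  # {'B': 300, 'A': 300, 'C': 0}
```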


2009 ◽  
Vol 39 (2) ◽  
pp. 342-355 ◽  
Author(s):  
Cristian D. Palma ◽  
John D. Nelson

Harvest scheduling decisions are made in an uncertain environment, and current modeling techniques that consider uncertainty impose severe difficulties when solving real problems. In this paper we describe a robust optimization methodology that explicitly considers randomness in most of the model coefficients while keeping the model computationally tractable. We apply the method to schedule harvest decisions when both timber yield and the demand for two products are uncertain. Since uncertain coefficients must be independent, uniform, and symmetrically distributed, we address only the uncertainty attributable to estimation errors of forecast models. The methodology was applied to a 245 090 ha forest in British Columbia, Canada. We compared the change in harvest decisions and objective function when robust solutions are implemented relative to deterministic solutions. Although probability bounds can be used to define a priori the probability of constraint violations, they produce conservative solutions. We therefore tested the rates of constraint violation by simulation. While traditional deterministic decisions were always infeasible when uncertain data were simulated, robust decisions were much less sensitive to uncertainty and were, to a large extent, protected against infeasibilities. In exchange, reasonable reductions in the objective function were observed.
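The violation-rate check by simulation can be illustrated with a small sketch (Python): a demand constraint is evaluated under independently drawn, symmetric yield errors for a deterministic plan and for a more conservative (robust-style) plan. Stand yields, harvest plans, and the demand target are invented for illustration.

```python
# Illustrative constraint-violation check by simulation; not the paper's harvest model.
import numpy as np

rng = np.random.default_rng(0)
yields = np.array([100.0, 80.0, 120.0])     # nominal yield per ha for three stands
x_det = np.array([4.0, 5.0, 3.0])           # deterministic harvest areas (ha)
x_rob = np.array([4.6, 5.7, 3.4])           # robust-style plan harvests a bit more
demand = 1160.0

def violation_rate(x, n=10000, rel_err=0.10):
    # Independent, uniform, symmetric yield errors of up to +/-10%, per the assumptions above.
    err = rng.uniform(-rel_err, rel_err, size=(n, yields.size))
    produced = ((1.0 + err) * yields) @ x
    return float(np.mean(produced < demand))

print("deterministic:", violation_rate(x_det))
print("robust:       ", violation_rate(x_rob))
```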


Author(s):  
H. Fahmy ◽  
D. Blostein

In image analysis, recognition of the primitives plays an important role. Subsequent analysis is used to interpret the arrangement of primitives, and it must make allowance for errors or ambiguities in the recognition of primitives. In this paper, we assume that the primitive recognizer produces a set of possible interpretations for each primitive. To reduce this primitive-recognition ambiguity, we use contextual information in the image and apply constraints from the image domain. This process is variously termed constraint satisfaction, labeling, or discrete relaxation. Existing methods for discrete relaxation are limited in that they assume a priori knowledge of the neighborhood model: before relaxation begins, the system is told (or can determine) which sets of primitives are related by constraints. These methods do not apply to image domains in which complex analysis is necessary to determine which primitives are related by constraints. For example, in music notation, we must recognize which notes belong to one measure before it is possible to apply the constraint that the number of beats in the measure should match the time signature. Such constraints can be handled by our graph-rewriting paradigm for discrete relaxation, in which neighborhood construction is interleaved with constraint application. In applying this approach to the recognition of simple music notation, we use approximately 180 graph-rewriting rules to express notational constraints and semantic-interpretation rules for music notation. The graph-rewriting rules express both binary and higher-order notational constraints. As image interpretation proceeds, increasingly abstract levels of interpretation are assigned to (groups of) primitives. This allows application of higher-level constraints, which can be formulated only after partial interpretation of the image.
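A generic discrete-relaxation step, independent of the graph-rewriting machinery described above, can be sketched as follows (Python): each primitive keeps a set of candidate labels, and binary constraints between related primitives prune candidates that have no compatible partner. The music-flavoured toy constraint is invented for illustration.

```python
# Generic discrete relaxation over candidate labels; not the paper's graph-rewriting system.

def relax(domains, constraints):
    """domains: {primitive: set(labels)}; constraints: {(p, q): set of allowed (lp, lq) pairs}."""
    changed = True
    while changed:
        changed = False
        for (p, q), allowed in constraints.items():
            keep_p = {lp for lp in domains[p] if any((lp, lq) in allowed for lq in domains[q])}
            keep_q = {lq for lq in domains[q] if any((lp, lq) in allowed for lp in domains[p])}
            if keep_p != domains[p] or keep_q != domains[q]:
                domains[p], domains[q] = keep_p, keep_q
                changed = True
    return domains

# Toy example: primitive "a" may be a barline or a stem; primitive "b" is known to be a
# note head; the constraint says a stem can attach to a note head, a barline cannot.
domains = {"a": {"barline", "stem"}, "b": {"notehead"}}
constraints = {("a", "b"): {("stem", "notehead")}}
print(relax(domains, constraints))   # {'a': {'stem'}, 'b': {'notehead'}}
```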


1999 ◽  
Vol 55 (5) ◽  
pp. 891-900 ◽  
Author(s):  
Herbert A. Hauptman ◽  
Hongliang Xu ◽  
Charles M. Weeks ◽  
Russ Miller

The simple cosine function used in the formulation of the traditional minimal principle and the related Shake-and-Bake algorithm is here replaced by a function of exponential type, and its expected value and variance are derived. These lead to the corresponding exponential minimal principle and its associated Exponential Shake-and-Bake algorithm. Recent applications of the exponential function to several protein structures within the Shake-and-Bake framework suggest that this function leads, in general, to significant improvements in the success rate (the percentage of trial structures yielding a solution) of the Shake-and-Bake procedure. However, only in space group P1 is it presently possible to assign optimal values a priori for the exponential-function parameters.


2021 ◽  
Author(s):  
Justin Y Lee ◽  
Mark P Styczynski

Motivation: As the large-scale study of metabolites and a direct readout of a system's metabolic state, metabolomics has significant appeal as a source of information for many metabolic modeling platforms and other metabolic analysis tools. However, metabolomics data are typically reported in terms of relative abundances, which precludes their use with tools that require absolute concentrations. While chemical standards can be used to determine the absolute concentrations of metabolites, they are often time-consuming to run, expensive, or unavailable for many metabolites. A computational framework that can infer absolute concentrations without the use of chemical standards would be highly beneficial to the metabolomics community. Results: We have developed and characterized MetaboPAC, a computational strategy that leverages the mass balances of a system to infer absolute concentrations in metabolomics datasets. MetaboPAC uses a kinetic equations approach and an optimization approach to predict the most likely response factors that describe the relationship between absolute concentrations and their relative abundances. We determined that MetaboPAC performed significantly better than the other approaches assessed on noiseless data when at least 60% of the kinetic equations are known a priori. Under the most realistic conditions (low sampling frequency, high-noise data), MetaboPAC significantly outperformed other methods in the majority of cases when 100% of the kinetic equations were known. For metabolomics datasets extracted from systems that are well studied and have partially known kinetic structures, MetaboPAC can provide valuable insight into their absolute concentration profiles.
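A highly simplified sketch of the response-factor idea (Python), not MetaboPAC itself: if measured relative abundances satisfy y_i(t) = RF_i * c_i(t) and part of the kinetics is known, for example a closed two-metabolite system in which c1 + c2 stays constant, then candidate response factors can be scored by how well the implied concentrations respect that mass balance. Only the ratio of response factors is identifiable in this toy, and all numbers are synthetic.

```python
# Toy response-factor inference from a known mass balance; synthetic data throughout.
import numpy as np

t = np.linspace(0, 5, 20)
c1 = 2.0 * np.exp(-t)                    # true concentrations (unknown in practice)
c2 = 3.0 - c1                            # closed system: c1 + c2 = 3.0
true_rf = np.array([4.0, 1.5])
y = np.vstack([true_rf[0] * c1, true_rf[1] * c2])   # observed relative abundances

def mass_balance_score(ratio):
    """Scale-free spread of the implied total concentration when RF1/RF2 = ratio."""
    total = y[0] / ratio + y[1]          # equals RF2*c1 + RF2*c2, constant if the ratio is right
    return np.var(total) / np.mean(total) ** 2

ratios = np.linspace(0.5, 5.0, 451)
best = ratios[np.argmin([mass_balance_score(r) for r in ratios])]
print(f"estimated RF1/RF2 = {best:.2f}, true = {true_rf[0] / true_rf[1]:.2f}")
```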

